Watch our latest webinar on mastering generative AI, led by Neontri’s Technology Director, Marcin Dobosz. As a seasoned expert with extensive experience implementing generative AI solutions for enterprises, Marcin shares invaluable insights on building a GenAI assistant to drive enterprise excellence. Don’t miss this opportunity to learn from an industry leader.
Agenda:
– Neontri Assistant
– Other possible use cases
– Data processing pipelines
– Retrieval-augmented generation (RAG)
– Vector database and data similarity
– Flexibility of solutions
– Closing remarks
Overview
The webinar showcases Neontri’s internal GenAI assistant—a powerful tool designed to process vast amounts of data efficiently. Given Neontri’s expertise in open banking and big data, this solution naturally leverages GenAI’s capabilities to streamline data organization and accelerate access to critical client information.
Neontri assistant

Neontri’s GenAI assistant operates similarly to GPT but is specifically tailored to its role within the company. Designed to support the sales and marketing teams, it autonomously retrieves essential client information, eliminating the need for manual searches and streamlining the decision-making process.
The tool houses comprehensive client data and can efficiently answer questions about Neontri’s projects, including details on development frameworks and technologies used.
Other use cases
This solution can be adapted across various industries, enhancing data enrichment—particularly in banking transactions. It can also generate detailed product descriptions by analyzing in-store information and images. By integrating this capability into a data pipeline, businesses can automate data collection, processing, and the creation of comprehensive, searchable descriptions.
Another application is personalized recommendations, such as suggesting the perfect shoes for a specific occasion. The tool can analyze the event and outfit details to find the ideal match, enhancing the shopping experience with tailored product suggestions.
The assistant can provide insights into how customer data is processed within a specific environment and how it is utilized by the company. It can efficiently handle queries about terms and services, reducing the need for customer service intervention and improving response times.
It’s worth noting that the assistant doesn’t have to function as a chatbot; it can also take the form of another text-generating tool, tailored to specific business needs and workflows.
Data processing pipelines

The key data processing pipelines in Neontri’s solution include a loader and a chatbot. The loader pipeline manages data stored in a designated source—Google Drive in this case—by monitoring file changes, selecting relevant file types, and processing data with tools like Apache Tika. It then extracts, normalizes, and segments the data for further processing. Meanwhile, the chatbot pipeline is responsible for delivering accurate responses to user queries, ensuring seamless access to relevant information.
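To make the loader stage more concrete, here is a minimal sketch of that flow in Python. It is not Neontri’s actual implementation: it assumes plain-text files on a local disk rather than Google Drive, and a simple fixed-size splitter stands in for Apache Tika’s extraction.

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".txt", ".md"}  # keep only relevant file types


def load_documents(source_dir: str) -> list[str]:
    """Collect raw text from every supported file under source_dir."""
    return [
        path.read_text(encoding="utf-8", errors="ignore")
        for path in Path(source_dir).rglob("*")
        if path.suffix.lower() in ALLOWED_SUFFIXES
    ]


def normalize(text: str) -> str:
    """Collapse whitespace so every segment is clean and comparable."""
    return " ".join(text.split())


def segment(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split normalized text into overlapping chunks ready for indexing."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


if __name__ == "__main__":
    for doc in load_documents("./knowledge_base"):
        chunks = segment(normalize(doc))
        print(f"{len(chunks)} chunks produced")  # next: embed and store them
```

In the real pipeline, the same steps would be triggered by file-change events from the monitored source rather than by a one-off directory scan.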
First, the chatbot pipeline analyzes the query and translates it into structured data for efficient searching. It then scans the source, using algorithms to identify the most relevant documents. Finally, a language model synthesizes the chat history, the retrieved documents, and the user’s query to generate a precise and context-aware response.
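A rough outline of that flow could look like the sketch below. The `search_index` and `llm_generate` functions are hypothetical placeholders for the vector search and language model used in the actual solution.

```python
def parse_query(raw_query: str) -> dict:
    """Turn the user's free-text question into structured search input."""
    return {"text": raw_query.strip(), "terms": raw_query.lower().split()}


def search_index(parsed: dict, top_k: int = 3) -> list[str]:
    """Placeholder: a real deployment would query a vector database here."""
    raise NotImplementedError


def llm_generate(prompt: str) -> str:
    """Placeholder: a real deployment would call a language model here."""
    raise NotImplementedError


def answer(raw_query: str, chat_history: list[str]) -> str:
    """Analyze the query, retrieve documents, and synthesize a response."""
    parsed = parse_query(raw_query)
    documents = search_index(parsed)
    prompt = (
        "Conversation so far:\n" + "\n".join(chat_history) + "\n\n"
        "Relevant documents:\n" + "\n".join(documents) + "\n\n"
        "Question: " + parsed["text"] + "\nAnswer:"
    )
    return llm_generate(prompt)
```

The key design point is that the model never answers from the raw query alone: the prompt always carries the conversation history and the retrieved documents alongside the question.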
The pipelines also feature gateways that make it easier for clients to integrate with the solution. In addition, topics route data through specific processing steps, and resources connect the tool to the external systems it relies on.
These resources enable the tool to scale according to the client’s needs. For example, Google Drive is used as the data source, and the tool can process everything within Google Workspace. It monitors new relevant files and folders, pulling this data through the pipeline. The tool then structures the information to generate high-quality, contextually accurate responses to user queries.
Retrieval-augmented generation (RAG)
RAG improves the quality of generated responses by grounding them in retrieved information rather than in the language model’s understanding of the text alone. By comparing information and identifying similarities between data vectors, the assistant retrieves the most relevant data and uses it to provide accurate, contextually appropriate answers to queries.
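The “similarity between data vectors” step can be illustrated with plain cosine similarity. The `embed` function below is a hypothetical stand-in for whatever embedding model a given deployment uses; everything else is ordinary Python.

```python
import math


def embed(text: str) -> list[float]:
    """Placeholder: a real system would call an embedding model here."""
    raise NotImplementedError


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how closely two embedding vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def rank_chunks(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the chunks whose vectors are most similar to the query vector."""
    query_vec = embed(query)
    ranked = sorted(
        chunks,
        key=lambda chunk: cosine_similarity(query_vec, embed(chunk)),
        reverse=True,
    )
    return ranked[:top_k]
```

Only the top-ranked chunks are passed to the language model, which is what keeps the generated answer tied to the organization’s own documents.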
Vector database and data similarity

Vector data consists of pieces of information stored as vectors in a database, enabling efficient searching and comparison of data. This approach allows for fast similarity searches, commonly used in content retrieval systems. The database can handle large datasets, making it possible to run language models and solutions on vast amounts of data, such as banking transactions. Vectors play a crucial role in selecting the relevant data to generate accurate responses.
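As a simplified picture of what a vector database does, the sketch below keeps precomputed embeddings in a single matrix and answers a similarity search in one vectorized operation. Production databases add persistence, indexing, and approximate search on top of this idea; the three-dimensional vectors here are toy values, not real embeddings.

```python
import numpy as np


class InMemoryVectorStore:
    """A toy vector store: embeddings in one matrix, cosine search over rows."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str, vector: list[float]) -> None:
        """Store a chunk of text alongside its unit-normalized embedding."""
        v = np.asarray(vector, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))
        self.texts.append(text)

    def search(self, query_vector: list[float], top_k: int = 2) -> list[str]:
        """Return the stored texts whose vectors best match the query vector."""
        q = np.asarray(query_vector, dtype=float)
        q = q / np.linalg.norm(q)
        matrix = np.vstack(self.vectors)   # all embeddings at once
        scores = matrix @ q                # cosine similarity per row
        best = np.argsort(scores)[::-1][:top_k]
        return [self.texts[i] for i in best]


store = InMemoryVectorStore()
store.add("Card transaction enriched with merchant details", [0.9, 0.1, 0.0])
store.add("Project summary for a mobile banking client", [0.1, 0.8, 0.1])
print(store.search([0.85, 0.15, 0.0]))  # the transaction chunk ranks first
```

Because the search is a single matrix operation, the same pattern scales to very large collections, which is why dedicated vector databases are a natural fit for datasets such as banking transactions.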
Flexibility of solutions
The AI assistant is versatile, capable of working with various language models and engines. It can seamlessly integrate with existing applications and infrastructure, allowing relevant information to be extracted without requiring APIs to provide the data. This makes the solution highly flexible and easy to implement, streamlining data access and enhancing overall efficiency.
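One way to picture that flexibility is a thin interface between the assistant and whichever model or engine a client already runs. The sketch below is an illustrative pattern rather than the actual architecture; the two backend classes are hypothetical examples.

```python
from typing import Protocol


class LanguageModel(Protocol):
    """Any engine the assistant can use to generate text."""

    def generate(self, prompt: str) -> str: ...


class HostedModelBackend:
    """Hypothetical wrapper around a cloud-hosted model API."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError  # call the provider's API here


class OnPremModelBackend:
    """Hypothetical wrapper around a model running in the client's own infrastructure."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError  # call the local inference server here


def answer_with(model: LanguageModel, prompt: str) -> str:
    """The assistant depends only on the interface, so backends stay swappable."""
    return model.generate(prompt)
```

Keeping the model behind a narrow interface like this is what lets the same assistant be deployed against different engines or existing infrastructure without reworking the rest of the pipeline.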
Closing remarks
At Neontri, we understand that every enterprise requires a tailored solution. GenAI can be leveraged to create these custom solutions, addressing specific requirements, infrastructure, potential constraints, and compliance considerations. Our custom generative AI development services and our approach ensure that each solution is aligned with the unique needs and challenges of the business, delivering optimal results.