Our approach
We kicked off with a technical deep dive into Keytrade’s existing LLM setup and data architecture. Using an agile approach, we designed and built a prototype that could enrich user prompts with large amounts of contextual data before sending them to the LLM.
At the same time, we made sure the chat interface remained simple and user-friendly, abstracting away the complexity on the backend.
Key hurdles like LLM memory limits and data classification mismatches were tackled iteratively, with close feedback loops from Keytrade’s AI and product teams.
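The enrichment idea described above can be sketched roughly as follows. This is a minimal illustration, not Keytrade's actual implementation; the function and field names are assumptions, and the real prototype would fetch context from live data sources rather than a dictionary.

```python
def build_enriched_prompt(question: str, user_context: dict) -> str:
    """Prepend retrieved customer context to the raw question so the LLM
    can answer with personal relevance. All names here are illustrative."""
    context_block = "\n".join(
        f"- {key}: {value}" for key, value in sorted(user_context.items())
    )
    return (
        "Use the following customer context to answer the question.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )

# Example: a short user prompt is expanded with backend-supplied context
prompt = build_enriched_prompt(
    "How much did I spend last month?",
    {"account_type": "trading", "last_month_spend_eur": 1240},
)
```

The key point is that the user only types the short question; the surrounding context is assembled server-side, keeping the chat interface simple while the backend does the heavy lifting.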
The outcome
While this was a proof of concept, it clearly showed the feasibility of delivering personalized AI responses at scale, even within the limitations of current data structures.
The intelligent backend successfully determined the intent of incoming questions and injected the right context before querying the LLM.
However, the experiment also highlighted a key challenge: Keytrade's data lacked proper classification (e.g., spending categories), which limited the depth of personalization. This insight now serves as a valuable input for future roadmap planning on their end.
Key features
Context engine – Automatically enriches short prompts with relevant background to help the LLM deliver precise answers.
Intent routing – Determines whether a question is general, personal, or a hybrid, and sends it to the correct data source.
Mock data support – Allows secure prototyping without exposing sensitive information.
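To make the intent-routing idea concrete, here is a simplified sketch of how a question could be classified as general, personal, or hybrid and mapped to a data source. The keyword matching and all names are our own illustrative assumptions; the actual prototype would use an LLM-based or trained classifier rather than word lists.

```python
# Hypothetical marker sets standing in for a real intent classifier
PERSONAL_MARKERS = {"my", "i", "me", "mine"}
GENERAL_MARKERS = {"market", "etf", "tax", "interest"}

def classify_intent(question: str) -> str:
    """Label a question as 'general', 'personal', or 'hybrid'."""
    words = {w.strip("?.,!").lower() for w in question.split()}
    has_personal = bool(words & PERSONAL_MARKERS)
    has_general = bool(words & GENERAL_MARKERS)
    if has_personal and has_general:
        return "hybrid"
    return "personal" if has_personal else "general"

# Each intent is routed to the data source(s) it needs
ROUTES = {
    "general": "knowledge_base",
    "personal": "customer_db",
    "hybrid": ("knowledge_base", "customer_db"),
}

def route(question: str):
    return ROUTES[classify_intent(question)]
```

For example, "What is an ETF?" would route to the knowledge base alone, while "How does my portfolio compare to the market?" mixes personal and general content and would pull from both sources before the context engine builds the final prompt.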