Algorithm change log
Here is a high-level list of all the recent changes we have made to our question answering algorithm and set-up.
Increased the size of conversational memory 3x to improve responses in longer conversations.
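As a rough illustration (not our actual implementation), conversational memory can be pictured as a rolling buffer of recent turns, and this change corresponds to tripling that buffer's limit. The `MAX_TURNS` values below are made up for the sketch.

```python
from collections import deque

# Illustrative only: conversational memory as a rolling buffer of turns.
# These limits are hypothetical; the real limit is simply 3x its old value.
OLD_MAX_TURNS = 10
MAX_TURNS = OLD_MAX_TURNS * 3  # the 3x increase described above

memory = deque(maxlen=MAX_TURNS)  # oldest turns are dropped automatically

def remember(role: str, content: str) -> None:
    """Append a conversation turn, evicting the oldest once the buffer is full."""
    memory.append({"role": role, "content": content})

remember("user", "What does the re-ranking step do?")
remember("assistant", "It re-orders retrieved sources by relevance.")
print(list(memory))
```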
We found the new GPT-4o model to be more obedient, so we adjusted our prompts to reflect this conservatism.
We moved all users over to use GPT-4o exclusively, at no extra cost.
This means all users have access to OpenAI's most powerful model. It also means that, overall, response times should improve as we no longer need to use GPT-4 turbo as a fallback for GPT-3.5.
Migrated from GPT-4 to GPT-4 turbo.
We moved to OpenAI's latest, fastest, and most powerful model. The GPT-4 models are currently used as "fallbacks": we first run each query with the faster GPT-3.5 model, and if it is unable to answer, we re-run the query using the more powerful GPT-4 models.
This also ensures that we are operating on the most up-to-date models when OpenAI deprecates (stops supporting) their older models in a few months.
You can learn more about the .
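As a rough sketch of this fallback pattern, assuming the OpenAI Python SDK: the `can_answer` heuristic and the exact model names below are illustrative, not our production logic.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def can_answer(reply: str) -> bool:
    """Hypothetical check: treat an explicit refusal as 'unable to answer'."""
    return "i don't know" not in reply.lower()

def answer(messages: list[dict]) -> str:
    # First attempt with the faster GPT-3.5 model.
    first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    reply = first.choices[0].message.content
    if can_answer(reply):
        return reply
    # Fall back to the more powerful (but slower) GPT-4 turbo model.
    second = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    return second.choices[0].message.content

print(answer([{"role": "user", "content": "How does re-ranking work?"}]))
```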
Reference "relevancy" improvements made.
This means that after we identify the most relevant pieces of content for a question, we now perform a secondary "re-ranking" of those sources to determine which of them are most relevant, so we can better answer your question.
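A minimal sketch of this two-stage pattern is below; the shortlist sizes and both scoring functions are placeholders rather than our actual retrieval pipeline.

```python
# Illustrative two-stage retrieval: a cheap first pass narrows the candidate
# sources, then a more expensive scorer re-ranks the survivors.
# Both scoring functions here are stand-ins, not our production models.

def first_pass_score(query: str, source: str) -> float:
    """Cheap lexical-overlap score used to shortlist candidate sources."""
    q_terms, s_terms = set(query.lower().split()), set(source.lower().split())
    return len(q_terms & s_terms) / max(len(q_terms), 1)

def rerank_score(query: str, source: str) -> float:
    """Placeholder for a stronger (e.g. cross-encoder) relevance model."""
    return first_pass_score(query, source) / (1.0 + abs(len(source) - len(query)))

def retrieve(query: str, sources: list[str], shortlist: int = 20, final: int = 5) -> list[str]:
    # First pass: keep the top `shortlist` sources by the cheap score.
    candidates = sorted(sources, key=lambda s: first_pass_score(query, s), reverse=True)[:shortlist]
    # Second pass: re-rank the shortlist and keep the best `final` sources.
    return sorted(candidates, key=lambda s: rerank_score(query, s), reverse=True)[:final]
```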
Prompt improvements for "follow-up" questions, i.e. any question after the first in a conversation.
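For illustration only: a follow-up question is typically answered by bundling earlier turns with the new question before sending it to the model. The template wording below is made up, not our actual prompt.

```python
# Hypothetical example of assembling a follow-up prompt from prior turns.
def build_followup_prompt(history: list[tuple[str, str]], question: str) -> str:
    turns = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    return (
        "Answer the follow-up question using the conversation so far.\n\n"
        f"{turns}\n\nFollow-up question: {question}"
    )

print(build_followup_prompt(
    [("What models do you use?", "GPT-3.5 with a GPT-4 fallback.")],
    "Which one answers follow-ups?",
))
```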