The first challenge was ChatGPT's response speed. Many users expect ChatGPT to answer every query instantly and flawlessly. In practice it doesn't work that way: the model is large, and demand on the OpenAI API grows every day. As a result, even a basic request through our TravelPlanBooker interface takes five to ten seconds to fulfill for a simple trip with one or two locations, and a longer, more complex trip can take up to 30 seconds. This led to many discussions with our client, who expected faster responses. The latency of LLM-based chatbots is a serious challenge that we, like many other AI teams, have to tackle.
To mitigate this problem, we optimized our workflow and parallelized calls to the various third-party APIs wherever possible. We also introduced a caching layer so that API responses can be reused across requests.
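The idea behind both measures can be sketched as follows. This is a minimal illustration, not the project's actual code: the fetcher names, latencies, and the in-memory cache are all assumptions standing in for the real third-party integrations.

```python
import asyncio
import time

# Hypothetical fetchers standing in for the real third-party APIs
# (names and latencies are illustrative only).
async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.2)  # simulate network latency
    return f"weather for {city}"

async def fetch_hotels(city: str) -> str:
    await asyncio.sleep(0.2)
    return f"hotels in {city}"

_cache: dict = {}

async def cached(key, coro_factory):
    """Return a cached response if present; otherwise call the API once."""
    if key not in _cache:
        _cache[key] = await coro_factory()
    return _cache[key]

async def gather_trip_context(city: str) -> list:
    # Independent calls run concurrently instead of one after another.
    return await asyncio.gather(
        cached(("weather", city), lambda: fetch_weather(city)),
        cached(("hotels", city), lambda: fetch_hotels(city)),
    )

start = time.perf_counter()
results = asyncio.run(gather_trip_context("Paris"))
first = time.perf_counter() - start   # roughly one latency, not the sum

start = time.perf_counter()
asyncio.run(gather_trip_context("Paris"))  # served from the cache
second = time.perf_counter() - start
```

Because the two fetches are awaited together via `asyncio.gather`, the first call costs roughly one round trip instead of two, and the repeated call is answered from the cache almost instantly.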
These measures speed up the platform's responses. However, the main bottleneck remains: the latency of the ChatGPT API itself.
The second challenge was the consistency of ChatGPT's response quality. In any LLM-based project, the risk of model hallucinations must always be considered. Suppose that in one case out of 100 the model invents a non-existent city or location. We addressed this by verifying the locations and attractions ChatGPT produces: we send each selected location to MapBox (an API similar to Google Maps for verifying objects, names, and locations) to validate it and build a route, and we also check the client's API to confirm the location exists in their database. Even so, this does not give a 100% guarantee of correct behavior; the model may still, for example, answer the user in French just because they want to travel to France, even though the request was made in English. We take steps to minimize such issues, but a small chance of them remains.
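The filtering step described above can be sketched like this. It is a simplified illustration: the `geocode` and `in_client_db` callables stand in for the real MapBox request and the client's database lookup, which are not shown here.

```python
from typing import Callable, List

def validate_locations(
    suggested: List[str],
    geocode: Callable[[str], bool],
    in_client_db: Callable[[str], bool],
) -> List[str]:
    """Keep only locations that the geocoder recognizes AND that exist in
    the client's database; anything the model may have hallucinated is
    dropped before route building."""
    return [loc for loc in suggested if geocode(loc) and in_client_db(loc)]

# Stub lookups for illustration (not real API responses):
known_places = {"Paris", "Lyon"}
client_db = {"Paris"}

result = validate_locations(
    ["Paris", "Lyon", "Atlantis"],          # "Atlantis" is hallucinated
    geocode=lambda loc: loc in known_places,
    in_client_db=lambda loc: loc in client_db,
)
# result == ["Paris"]
```

In the real system each predicate is a network call, so the two checks would typically run concurrently per location, in line with the latency mitigations described earlier.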
This was a challenging but rewarding LLM project for us: we now understand the difficulties that arise in such projects much better and can find ways to overcome them while meeting the client's expectations.