Case studies

We power large language model use cases for some of the world's most innovative companies

Connectly AI, an international leader in e-commerce, migrated from OpenAI to a custom model developed on Konko AI's customization platform

Highlights

Used the Konko AI platform to develop a private, custom model on its own data, outperforming GPT-4

Seamlessly migrated from OpenAI's GPT-4 in 2 weeks

90% cost savings per month vs. OpenAI on Konko AI's inference engine

Zero downtime in production

Problem
Finding a high-performing and cost-effective alternative to GPT-4

Connectly built an AI-powered sales bot designed for a global client base using OpenAI's GPT-4.

After seeing rapid adoption and growing usage, Connectly realized that GPT-4 was not an economically sustainable solution at scale.

They needed strong LLM performance and economically viable inference.

Solution
Using Konko to customize and deploy Llama-2-70B

Connectly's team used Konko AI's model evaluation tool to compare leading large language models.

After testing, they settled on Llama-2-70B, which they then customized on Konko's platform using their own data. The result was a private, proprietary sales bot model fully owned by Connectly.

Connectly deployed this fine-tuned model on Konko AI's inference platform.

Outcome
90% cost savings, 50% greater user engagement and zero downtime

Connectly's application, now running on a model fine-tuned on their proprietary data, saw a 50% improvement in user engagement.

The company also unlocked 90% cost savings with zero downtime by using Konko's inference engine.

The thing that got me excited was the ability to make an AI model customized to our needs using our own data and the magic that you can make happen when proprietary data and leading GenAI models come together.
V. Karkantzos
|
Head of growth, Connectly

Coreware, a leader in software development relies on Konko AI's inference engine to power its chatbot product with thousands of concurrent users

Highlights

Evaluated large language models using Konko AI

95% cost savings per month on Konko AI's inference engine (compared to AWS)

Zero downtime in production with thousands of concurrent users

Problem
Running inference cost-effectively for real-time bot with thousands of concurrent users

Coreware required a cost-effective inference solution capable of handling thousands of concurrent users with response times on the order of a second.

Traditional clouds were unable to simultaneously offer the cost efficiency and latency guarantees at scale that Coreware's application needed to run reliably.

Solution
Running inference on Konko AI's engine

Konko AI's inference solution delivered reliable and cost-effective inference at scale.

Coreware used Konko's evaluation functionality to find the right model for its use case.

Konko AI's inference solution made it seamless for Coreware to find a GPU configuration balancing performance and cost efficiency.

Konko AI's machine learning and infrastructure team supported Coreware every step of the way with expert guidance.

Outcome
95% cost savings vs. AWS and zero downtime

Coreware achieved 95% cost savings per month vs. AWS with 5x greater inference speed and zero downtime.

Our partnership with Konko AI was a game changer. Turns out there are countless things to get right when running inference at scale, especially for near-real-time use cases. Konko AI's platform incorporates the latest research so our app runs smoothly and cost-effectively.
F. Vukelic
|
CTO, Coreware