Customer service companies need to supercharge off-the-shelf models
In a recent report on the modern AI stack, Menlo Ventures published a telling statistic: almost 95% of AI spend now goes to inference (that is, running AI models) rather than training them, according to the firm’s survey of more than 450 enterprise executives.
The number represents a sea change. Not long ago, far more companies were training AI models, and it was widely assumed that most would one day create their own models from scratch. Whether to “build versus buy” a model was a common question for corporate IT teams exploring how to incorporate AI, and many set off to build their own models for their own purposes.
“Teams seeking to build AI applications needed to start with the model — which often involved months of tedious data collection, feature engineering, and training runs, as well as a team of PhDs, before the system could be productionized as a customer-facing end product,” reads the Menlo Ventures report. “LLMs have flipped the script, shifting AI development to be more ‘product-forward’.”
In today’s AI landscape, companies like OpenAI, Anthropic, and Meta have done much of the heavy lifting, creating powerful large language models that others can tap as a starting point for their own products in customer service and beyond. But the widespread use of the same models has shifted the focus: what truly counts is what a company does with a model once it’s in hand. Now that the models themselves are no longer the main differentiator, companies are refocusing their AI strategies on supercharging leading models with their own data and processes.