The "heads on a swivel" point is key. LLMs are good at imprecise conclusions and natural language, whereas traditional APIs each perform one specific task.
The heaviest non-developer users of LLMs work through the primary chat interfaces, not only to find information but also to connect those interfaces to other sources: many APIs, databases with exact answers, and other apps.
By embedding an LLM inside a product (rather than meeting users in the primary AI clients), you inherently limit its generality, its data sources, and its output destinations, and therefore its usefulness.
One of the higher costs of AI to products will be the "de-platforming" of integrations. That hurts most if your product or consultants charge for integrations, but it also erodes the value you get simply by offering specific ones. LLMs can now call APIs directly, or write plugins and software that handle the data transfer, integrating systems faster than you can develop or market those integrations internally.
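To make the last point concrete, here is a minimal sketch of the kind of glue script an LLM can now generate on demand to bridge two products. Every endpoint, field name, and mapping below is invented for illustration; a real integration would depend on the actual APIs involved.

```python
import json
import urllib.request

# Hypothetical endpoints, stand-ins for whatever two products need bridging.
SOURCE_URL = "https://crm.example.com/api/contacts"
DEST_URL = "https://mailer.example.com/api/subscribers"

def transform(record: dict) -> dict:
    """Reshape one source record into the destination's assumed schema."""
    return {
        "email": record["email"],
        "name": f"{record.get('first_name', '')} {record.get('last_name', '')}".strip(),
    }

def sync(records: list[dict]) -> list[dict]:
    """Transform every source record, skipping any without an email address."""
    return [transform(r) for r in records if r.get("email")]

def push(subscribers: list[dict]) -> None:
    """POST the transformed records to the destination API (network call)."""
    req = urllib.request.Request(
        DEST_URL,
        data=json.dumps(subscribers).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

Nothing here is sophisticated, and that is the point: this is exactly the disposable connector code that vendors and consultants used to be paid to build and maintain.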