According to PANews, McKinsey's Lilli case offers crucial insights into the development of the enterprise AI market, highlighting the potential of edge computing combined with small models. This AI assistant, which integrates 100,000 internal documents, has achieved a 70% adoption rate among employees, who use it an average of 17 times per week, a level of product stickiness that is rare in enterprise tools.
One major challenge is enterprise data security. McKinsey's century of accumulated core knowledge assets, like the proprietary data held by small and medium-sized enterprises, is highly sensitive and ill-suited to processing on public clouds. Finding an arrangement that keeps data local without sacrificing AI capability is a genuine market need, and edge computing is a promising direction.
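As a minimal sketch of what "data stays local" means in practice, the snippet below loads a small open-weight model from local disk and answers questions over internal documents without any external API call. The model path and the prompt format are illustrative assumptions, not details from the article.

```python
# Hypothetical on-prem inference: sensitive documents never leave the machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/local-7b"  # assumed path to a pre-downloaded model

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

def answer(question: str, context: str) -> str:
    # Both the retrieved document context and the generation step run
    # locally, so no text is ever sent to a public cloud endpoint.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The `local_files_only=True` flag makes the load fail loudly rather than silently fetching weights from a remote hub, which is the behavior a security-conscious deployment would want.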
Professional small models are poised to displace general-purpose large models in this setting. Enterprise users need specialized assistants that answer domain-specific questions accurately, not general models with hundreds of billions of parameters. The inherent tension between a large model's generality and its professional depth makes small models more attractive in enterprise scenarios.
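One common route to such a specialized assistant is parameter-efficient fine-tuning of a small base model on domain data. The sketch below uses the Hugging Face peft library's LoRA support; the model name and hyperparameters are assumptions for illustration, not anything the article specifies.

```python
# Hedged sketch: adapting a small open model to a narrow domain with LoRA.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("my-org/small-3b-base")  # hypothetical model

lora = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of weights train
```

Because only the small adapter matrices are trained, the approach fits the constrained compute budgets of the edge and on-prem deployments discussed above.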
Balancing the cost of self-built AI infrastructure against API calls is another consideration. The combination of edge computing and small models requires significant up-front investment, but it substantially reduces long-term operating costs. For instance, once 45,000 employees are using large models frequently through API calls, the accumulated dependency and sheer usage volume make self-built AI infrastructure the rational choice for medium and large enterprises.
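A back-of-envelope calculation makes the trade-off concrete. It reuses the article's 45,000-employee and 17-queries-per-week figures; the token counts, API pricing, and hardware costs are purely illustrative assumptions.

```python
# Rough API-spend vs. self-built comparison. All unit costs are assumed.
EMPLOYEES = 45_000
QUERIES_PER_WEEK = 17
TOKENS_PER_QUERY = 4_000          # assumed prompt + completion tokens
API_PRICE_PER_1K_TOKENS = 0.03    # assumed blended USD rate

weekly_tokens = EMPLOYEES * QUERIES_PER_WEEK * TOKENS_PER_QUERY
annual_api_cost = weekly_tokens / 1_000 * API_PRICE_PER_1K_TOKENS * 52

SELF_BUILD_CAPEX = 5_000_000      # assumed one-time hardware outlay
ANNUAL_OPEX = 1_000_000           # assumed power, staff, maintenance

annual_saving = annual_api_cost - ANNUAL_OPEX
breakeven_years = SELF_BUILD_CAPEX / annual_saving

print(f"annual API cost:    ${annual_api_cost:,.0f}")   # ~ $4.8M
print(f"self-build payback: {breakeven_years:.1f} years")  # ~ 1.3 years
```

Under these assumed numbers the self-built option pays for itself in roughly a year and a half, which is the dynamic the article points to: at sufficient scale, recurring API spend dwarfs the amortized cost of owned infrastructure.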
The edge hardware market presents new opportunities. While high-end GPUs are essential for large model training, edge inference has different hardware requirements. Chip manufacturers like Qualcomm and MediaTek are optimizing processors for edge AI to capture this market. As enterprises set out to build their own 'Lilli,' edge AI chips designed for low power consumption and high efficiency will become essential infrastructure.
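Low-power edge inference usually hinges on shrinking the model before deployment. The toy example below applies PyTorch post-training dynamic quantization, one standard technique; the tiny network merely stands in for whatever small model an enterprise would actually ship to devices.

```python
# Minimal sketch: post-training dynamic quantization for edge inference.
import torch
import torch.nn as nn

# Stand-in model; a real deployment would quantize a small LLM or encoder.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# int8 weights take roughly a quarter of the memory of float32 and cut
# memory bandwidth, which dominates power draw on edge inference chips.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # inference now runs through int8 linear kernels
```

This is exactly the class of workload that the Qualcomm- and MediaTek-style NPUs mentioned above are built for: heavily quantized, memory-bound inference rather than full-precision training.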
The decentralized web3 AI market is also gaining strength. As enterprises' demand for compute, fine-tuning, and algorithms for small models grows, balancing resource allocation becomes difficult. Traditional centralized resource scheduling will struggle to keep up, creating significant demand for decentralized web3 small-model fine-tuning networks and decentralized compute service platforms.
While the market continues to debate the boundaries of AGI's general capabilities, it is encouraging to see so many enterprise users already extracting practical value from AI. Clearly, shifting the focus from monopolizing compute and algorithm resources to edge computing and small models will bring the market greater vitality.