McKinsey's Lilli offers a useful read on where the enterprise AI market is heading: the opportunity at the intersection of edge computing and small models. This AI assistant, built on 100,000 internal documents, has reached a 70% adoption rate among employees, who use it an average of 17 times a week. That kind of stickiness is rare in enterprise tools. A few thoughts:

1) Enterprise data security is a genuine pain point: the knowledge assets McKinsey has accumulated over a century, like the proprietary data many small and medium-sized enterprises hold, are too sensitive to process on a public cloud. Striking the balance of 'data stays local, AI capability uncompromised' is a real market need, and edge computing is one direction worth exploring (a minimal sketch of the pattern follows);
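A minimal sketch of the 'data stays local' pattern: a small open-weight model served entirely on-premises, so sensitive documents never leave the corporate network. The model name and file path here are illustrative assumptions, not details from the Lilli case.

```python
# Sketch: run a small open-weight model fully on-premises so documents
# never leave the corporate network. Model name and file path are examples.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # any locally hosted small model works
    device_map="auto",                   # CPU or a modest local GPU
)

# The sensitive document comes from local storage; nothing is sent to a
# third-party API.
with open("internal/strategy_memo.txt") as f:
    internal_doc = f.read()

prompt = f"Summarize the key risks in this memo:\n{internal_doc}\n\nSummary:"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```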

2) Specialized small models will displace general-purpose large models in many enterprise scenarios: enterprise users do not need a hundred-billion-parameter, do-everything model; they need a specialized assistant that answers questions in their domain accurately. There is an inherent tension between a large model's generality and its depth in any one specialty, so enterprise scenarios tend to favor small models fine-tuned on domain data (see the sketch below);
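One common way to get such a specialist is parameter-efficient fine-tuning of a small base model, for example with LoRA via Hugging Face's peft library. This is a hedged sketch under assumed model names and hyperparameters, not a description of how Lilli was actually built:

```python
# Sketch: specialize a small general model with LoRA adapters (peft).
# Base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-1.5B-Instruct"  # example small base model
model = AutoModelForCausalLM.from_pretrained(base)

# Train only low-rank adapter matrices (well under 1% of parameters);
# the frozen base keeps its general language ability while the adapters
# absorb domain knowledge from, say, internal consulting documents.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in this architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```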

3) The cost trade-off between building AI infrastructure and paying per API call: edge computing plus small models demands a larger upfront investment, but long-term operating costs drop significantly. Picture a large model that 45,000 employees hit frequently through API calls; at that scale, the usage bill and the dependency together make building in-house AI infrastructure the rational choice for medium and large enterprises (a back-of-envelope comparison follows);
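A back-of-envelope comparison. Every number below is an illustrative assumption (token counts, API prices, and hardware costs are invented for the sketch); only the headcount and usage frequency come from the case:

```python
# Back-of-envelope: API calls vs. self-hosted small model.
# All figures are illustrative assumptions, not McKinsey's actual numbers.

EMPLOYEES = 45_000
QUERIES_PER_WEEK = 17           # per employee, from the Lilli usage stat
TOKENS_PER_QUERY = 8_000        # assumed; RAG over 100k docs means long contexts
API_PRICE_PER_1M_TOKENS = 10.0  # assumed blended $/1M tokens for a hosted LLM

weekly_tokens = EMPLOYEES * QUERIES_PER_WEEK * TOKENS_PER_QUERY
annual_api_cost = weekly_tokens * 52 / 1_000_000 * API_PRICE_PER_1M_TOKENS

# Self-hosting: assumed one-time hardware outlay amortized over 3 years,
# plus power and ops. Again, purely hypothetical numbers.
HARDWARE_OUTLAY = 2_000_000
ANNUAL_OPS = 400_000
annual_self_host_cost = HARDWARE_OUTLAY / 3 + ANNUAL_OPS

print(f"annual API cost:       ${annual_api_cost:,.0f}")    # ~$3,182,400
print(f"annual self-host cost: ${annual_self_host_cost:,.0f}")  # ~$1,066,667
```

Under these made-up numbers self-hosting wins by roughly 3x, but the crossover point shifts a lot with token volume and API pricing, which is exactly why the decision hinges on usage scale.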

4) New opportunities in the edge hardware market: large model training depends on high-end GPUs, but edge inference has very different hardware requirements. Chip makers like Qualcomm and MediaTek, which are already optimizing processors for edge AI, are well positioned. As enterprises set out to build their own 'Lilli', edge AI chips designed for low power consumption and high efficiency will become essential infrastructure (quantization, sketched below, is one reason small models fit this hardware);
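One reason small models map well onto such chips is aggressive quantization. A minimal sketch using PyTorch's dynamic int8 quantization, with a toy two-layer network standing in for a real small model:

```python
# Sketch: dynamic int8 quantization with PyTorch, the kind of optimization
# that makes small-model inference practical on low-power edge silicon.
# The toy MLP stands in for a real small model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Weights of Linear layers become int8; activations are quantized on the
# fly at inference time. Memory shrinks roughly 4x versus fp32, at a small
# accuracy cost.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1024)
print(quantized(x).shape)  # torch.Size([1, 1024])
```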

5) The decentralized web3 AI market stands to benefit as well: once enterprises begin sourcing compute, fine-tuning, and algorithms for small models at scale, resource scheduling becomes a real problem. Traditional centralized scheduling will strain under that demand, which directly creates a market for decentralized small-model fine-tuning networks, decentralized compute service platforms, and similar web3 AI offerings;

While the market is still debating the capability ceiling of AGI, it is encouraging that many enterprise users are already extracting practical value from AI. Compared with the past era of monopolized leaps in compute and algorithms, a market that shifts its focus to edge computing plus small models should unlock far broader vitality.