Source: a16z

Organized by: Z Finance

Image source: a16z

In the past ten years, nearly every breakout consumer product has been accompanied by a reconstruction of social paradigms: from Facebook's friend feed to TikTok's algorithmic recommendations, we have gradually learned to define ourselves and express our identities through products.

The products of that era had people expressing and products assisting; now AI is quietly completing a role reversal—it is no longer a human tool but is becoming the subject of expression, the intermediary of connection, and even the bearer of emotion. From ChatGPT to Veo3, from ElevenLabs to Character.AI, we are witnessing a profound transformation often mistaken for 'efficiency enhancement' that is in fact 'outsourcing the role of the human'.

In this discussion hosted by Erik Torenberg, Justine Moore, Bryan Kim, Anish Acharya, and Olivia Moore advanced a striking judgment: today's AI products are no longer 'tools that behave like tools' but 'tools that behave like people', and they are even becoming products that 'stand in for the person themselves'.

Users are beginning to pay $200 monthly subscriptions for AI not because it is stronger, but because it can 'do it for you', or even 'be you'. Veo can generate a customized video in 8 seconds; ChatGPT can write business plans, provide psychological counseling, and stand in for emotional confessions; ElevenLabs creates a unique voice persona for you. None of this requires you to do it yourself, nor even to be that 'you'.

The rise of consumer AI is a dangerous signal: expression is being formatted, socializing is being simulated, and identity is being reconstructed.

Today we are still using Reddit, Instagram, and Snapchat to share the 'me' generated by AI, but these platforms are just old bottles containing new wine. A truly native AI social network has not yet emerged because AI can generate 'statuses' but cannot create 'emotional tension'; it can provide the illusion of companionship but cannot replace the uncontrollable struggles and vulnerabilities found in real connections.

All of this brings three shocking judgments:

First, the essence of AI products is not to enhance users but to reconstruct 'who the user is';

Second, the rise of AI companions is not the beginning of socializing, but the end of socializing;

Third, the proliferation of AI avatars is not an extension of expression but a dissolution of personality boundaries.

In the foreseeable future, the most successful AI products will not just be tool-like products but persona-like products. They can understand you, mimic you, represent you, guide you, and ultimately—replace you.

This is not a victory for efficiency; it's a qualitative change in existence.

AI Consumption Revolution: High-priced subscriptions and social reconstruction

Erik Torenberg: Thank you all for participating in this podcast about the consumer space. It seems that every few years, a breakthrough product emerges, from Facebook, Twitter, Instagram, Snap, WhatsApp, Tinder to TikTok. Every few years, there is this new paradigm, this new breakthrough. But it feels like this trend suddenly stalled a few years ago. Why did it stall? Or has it really stalled? How would you redefine this issue? How do you view the current situation? Where do you think it will head in the future?

Justine Moore: I believe ChatGPT may be the most significant consumer success story of the past few years. We have also seen breakout products in other AI modalities—Midjourney, ElevenLabs, and Black Forest Labs in image, video, and audio. Although products like Veo are now emerging, interestingly, many of these lack the social attributes or traditional consumer-product characteristics you mentioned. This may be because AI is still at a relatively early stage, and most new products and innovations are driven by research teams—very good at model training, but not historically skilled at building the consumer product layer around the models. Optimistically, models are now mature enough that developers can build more traditional consumer products on top of them through open source or APIs.

Bryan Kim: This question is interesting because I have been reviewing the developments of the past 15 to 20 years. As you mentioned, when the internet, mobile devices, and cloud computing combined, giants like Google, Facebook, and Uber emerged; many remarkable companies arose. Mobile and cloud have now entered a mature phase—those platforms have existed for 10 to 15 years, and the various subfields have largely been explored. In the past, users needed to adapt to new features launched by Apple; now they need to adapt to the continuous iteration of the underlying models. That is the first difference.

The second difference, as you mentioned, is that historical winners are mostly concentrated in the information field (like Google), and ChatGPT is clearly continuing that direction. The practical-tools field gave us products like Box and Dropbox, and now more consumer applications are emerging, with many companies competing for those usage scenarios. The same is true in creative expression, where new tools appear one after another. What I believe is currently missing is social connection: AI has not yet rebuilt the social graph, and that may be a blank space worth watching.

Erik Torenberg: This is interesting because Facebook has been around for nearly 20 years. The companies Justine mentioned earlier, apart from OpenAI, can they continue to exist for another 10 to 20 years? What kind of defensive capabilities do these companies possess? And will all the scenarios these companies currently serve be replaced by emerging players in 10 years? Or will they continue to dominate all mainstream scenarios?

Anish Acharya: The quality of ChatGPT's business model far exceeds that of comparable consumer companies in past product cycles. Its highest pricing tier is $200 per month, and the highest tier of Google's consumer product is $250 per month. Of course there are open questions about defensibility and network effects, but those may have been a response to the flaws of earlier business models—without such elements, a poorly monetizing business was worth even less. That companies can now charge users high fees directly may indicate we overcomplicated this in the past.

Erik Torenberg: Perhaps poor business models actually forced companies to build stronger retention or greater product durability?

Anish Acharya: Indeed. In the past, companies had to fabricate stories about how they would accumulate value without immediate profitability, whereas these model companies are profitable directly. Justine's point is also worth noting: the foundation models are all developing in different directions. Are horizontal models like Claude and ChatGPT substitutable with Gemini? Does that imply price competition? Different users have different usage scenarios, and what we actually observe is prices rising rather than falling. So when we look closer, some interesting forms of defensibility already exist.

Bryan Kim: The phenomenon of prices rising rather than falling is interesting, because the profit model for consumer companies has fundamentally shifted from the traditional era to the AI era: they can now be profitable immediately. I have been thinking about retention metrics—Olivia can correct me—before the AI era, did we really distinguish user retention from revenue retention when discussing consumer subscriptions? Pricing structures were stable then, and users rarely upgraded plans. Now we must clearly separate the two, because users actively upgrade: they buy credits, routinely exceed usage limits, and their spending keeps growing. As a result, revenue retention is significantly higher than user retention, which is unprecedented.
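The distinction Bryan draws can be made concrete with a toy cohort. A minimal Python sketch with entirely hypothetical numbers: user (logo) retention counts heads, while net revenue retention compares dollars, so upgrades and credit purchases can push the latter far above 100% even while some users churn.

```python
# Illustrative sketch: user (logo) retention vs. net revenue retention
# for a single subscriber cohort. All numbers are hypothetical.

# Each record: (user_id, monthly spend at signup, spend 12 months later).
# A later spend of 0 means the user churned.
cohort = [
    ("u1", 20, 0),    # churned
    ("u2", 20, 20),   # stayed on the same plan
    ("u3", 20, 200),  # upgraded to a high tier
    ("u4", 20, 60),   # bought extra credits
]

starting_revenue = sum(start for _, start, _ in cohort)   # 80
current_revenue = sum(now for _, _, now in cohort)        # 280
retained_users = sum(1 for _, _, now in cohort if now > 0)

user_retention = retained_users / len(cohort)             # 3/4 = 0.75
revenue_retention = current_revenue / starting_revenue    # 280/80 = 3.5

print(f"user retention:    {user_retention:.0%}")   # 75%
print(f"revenue retention: {revenue_retention:.0%}")  # 350%
```

With stable pricing and no upgrade path, the two numbers collapse into one, which is why the distinction rarely mattered before.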

Olivia Moore: In the past, the highest-priced consumer subscription products averaged about $50 per year, which was already high. Now users willingly pay $200 monthly, and in some cases, even state that the pricing is too low and are willing to pay more.

Erik Torenberg: How do we explain this phenomenon? What value do users gain that makes them willing to pay such high fees?

Olivia Moore: I believe these products are completing tasks for users. In the past, consumer subscription products focused on personal finance, fitness, health, and entertainment, and while they superficially helped self-improvement or entertainment, they required users to invest a lot of time to gain value. Now, products like Deep Research can replace the 10 hours of work users would otherwise spend generating market reports. For many, this efficiency boost is clearly worth paying $200 a month, even if used only once or twice.

Justine Moore: Take Veo3 as an example; users pay $250 monthly but are thrilled because it is like a magical treasure chest—opening it yields the desired video; although it’s only 8 seconds long, the effect is stunning. Characters can speak, and users can create stunning content to share with friends, such as personalized information videos containing friends' names, or even creating complete stories to post on platforms like Twitter. This kind of product that enables personalized content creation and multi-platform dissemination far exceeds any previous product's empowerment of consumers.

Anish Acharya: It seems that all consumer fields will be replaced by software.

Erik Torenberg: Can you give specific examples?

Anish Acharya: As Olivia mentioned, the entertainment field has been reshaped by creative-expression software—what used to happen offline is now fully carried by software. Areas of life that used to absorb disposable income through human intermediaries are also being replaced by software. Every aspect of life will be mediated by models, and people will be willing to pay for it.

AI Social Revolution: The rise of the 'digital self' and the breaking point of traditional platforms

Erik Torenberg: Bryan, you mentioned that social connection is still lacking in the new AI era, and people still rely on traditional social networks like Instagram and Twitter. Where do you think the breakthrough will occur?

Bryan Kim: In the social domain—a track that excites me—if you think about it carefully, the core essence is the status update. Facebook, Twitter, and Snap are no exceptions; they all showcase 'what I am doing', and people connect by updating statuses. The medium of the status update keeps evolving: from text to real photos to short video. Today people connect through short-video formats like Reels, which constitutes one era of social connection. The current question is: how will AI reinvent this connection? How can AI create deeper interpersonal connection and perception of each other's lives? If we stay within existing media formats—photos, videos, audio—their possibilities have already been thoroughly explored on mobile.

Interestingly, although I have been using Google for over a decade, ChatGPT may know me better than Google—because I input more content and provide more context. What kind of new interpersonal relationships will emerge when this 'digital self' can be shared? Perhaps this will become the next generation of social forms, especially attractive to the younger generation tired of superficial socializing.

Justine Moore: We have already seen similar cases—the viral spread of 'let ChatGPT summarize my five strengths and weaknesses based on my data', or 'generate a portrait that represents my essence', or even 'depict my life as a comic'. Users share this content across the internet—I post one, and within minutes dozens of people have shared their own versions. Interestingly, the social behavior triggered by AI creative tools still mainly occurs on traditional social platforms rather than on emerging AI platforms. Facebook, for instance, is now flooded with AI-generated content.

Bryan Kim: Some user groups may not be aware of this yet.

Justine Moore: Facebook has become an AI content hub for middle-aged and older users, while Reddit and Reels carry the AI creative content of the younger generation.

Olivia Moore: I completely agree. The form of the first AI social network has always puzzled me. We have seen attempts like 'AI-generated personal photos', but the issue is that social networks require genuine emotional investment—if all content can be generated according to preferences (perfect images, happy states, cool backgrounds), it loses the emotional tension of real interaction. Therefore, I believe that a truly native AI social network has not yet emerged.

Bryan Kim: The term 'skeuomorphic' is quite fitting. Many AI social products merely mimic the feed of Instagram or Twitter; this 'skeuomorphic' innovation is essentially 'using AI to replicate old forms'. A real breakthrough may require moving beyond the mobile model—although excellent AI products need to adapt to mobile devices, cutting-edge models still need breakthroughs in edge computing and local deployment, which may give rise to new forms. I am full of anticipation for future possibilities.

Erik Torenberg: Interpersonal recommendations are obviously an important application scenario—finding business partners, friends, dates, etc. Existing platforms have accumulated a large amount of user data.

Anish Acharya: Observing the AI-native LinkedIn attempts is very enlightening. Traditional LinkedIn is merely declarative information—'I know this'—while the new technology can create a true profile of someone's knowledge, like conversing with a 'digital Erik' to access everything he knows. Future social interaction may work like this: when a model deeply understands a user, it may deploy a 'digital avatar' to interact on their behalf.

The secret to AI companies leading the way: innovation speed and niche markets

Erik Torenberg: You mentioned that companies adopted certain AI products earlier than consumers, which differs from previous technology cycles. What does this phenomenon indicate?

Justine Moore: This is indeed interesting. BK and I made our early investment in ElevenLabs—about a month after the first round of financing, we participated in the Series A—and we observed that early consumer users poured in to create fun videos and audio, clone their voices, and build game mods. In most cases, though, these products have not yet reached true mainstream consumers—not everyone in America has ElevenLabs on their phone or subscribes to the service. However, the company has secured numerous enterprise contracts and has many heavyweight clients in conversational AI, entertainment, and other fields.

This pattern appears across multiple AI products: first there is viral spread on the consumer side, which then translates into an enterprise sales motion—distinctly different from the previous generation of products. Corporate buyers now face a mandate around AI (they need an AI strategy and need to use AI tools); they closely monitor Twitter, Reddit, and AI news, and after discovering consumer products they think about how to apply them in business scenarios, becoming champions who drive their company's AI strategy.

Bryan Kim: I have heard of similar cases: a company achieves viral distribution on the consumer side, then feeds anonymized Stripe payment records into AI tools to identify which company each user belongs to. When they find that a given company has crossed a threshold—say 40+ users—they proactively reach out: 'Your company has over 40 employees using our product; would you consider an enterprise deal?'
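The tactic Bryan describes can be sketched roughly. This is a hypothetical illustration, not any company's actual pipeline: the domain-grouping heuristic, the free-email list, and the 40-user threshold are all assumptions borrowed from the anecdote.

```python
# Hypothetical sketch: infer which companies have many individual
# subscribers by grouping payment emails by domain, then flag the
# companies that have crossed an enterprise-outreach threshold.
from collections import Counter

FREE_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}
THRESHOLD = 40  # the "40+ employees" figure from the anecdote

def companies_to_contact(payment_emails, threshold=THRESHOLD):
    # Count subscribers per email domain, ignoring malformed addresses.
    domains = Counter(
        email.split("@")[1].lower()
        for email in payment_emails
        if "@" in email
    )
    # Keep only corporate domains past the threshold.
    return {
        d: n for d, n in domains.items()
        if d not in FREE_DOMAINS and n >= threshold
    }

# Toy data: 41 subscribers at a fictional acme.com, plus a few others.
emails = [f"user{i}@acme.com" for i in range(41)] + [
    "solo@gmail.com", "dev@smallco.io",
]
print(companies_to_contact(emails))  # {'acme.com': 41}
```

A real implementation would need consent-aware data handling, but the core signal is just this domain count.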

Erik Torenberg: You listed many company and product cases at the beginning. I am curious whether these belong to the early explorers of the 'MySpace era'? Or do they have long-term value? Will we still be discussing these companies in 20 years?

Justine Moore: We certainly hope every important consumer AI company continues to thrive, but reality may not be so kind. The key difference between the AI era and past consumer cycles is that the model layer and technical capabilities are still evolving rapidly; in many cases we have not even touched the ceiling of these technologies. For example, after the release of Veo3, multi-character dialogue, native audio, and other multimodal functions suddenly became possible; and although text LLMs are relatively mature, there is still room for improvement everywhere. What we observe is that as long as companies stay at the technological and quality frontier—possessing the most advanced models or integrations—they will not repeat the fate of MySpace or Friendster; even if they fall behind temporarily in the rapid iteration, they can return to the top with their next release.

What is more interesting now is the emergence of niche markets: there is no longer a single best model in the image field. Designers, photographers, and different paying groups ($10/month vs. $50-100/month) each have their own optimal solutions. Since users in each vertical field are highly invested, as long as they continue to innovate, multiple winners can coexist for the long term.

Bryan Kim: I completely agree. The video field is the same—advertising videos, product placement videos, etc., all have their niches. Yesterday, I saw an article pointing out that different models excel in different scenarios, like product displays and character shoots. Each niche market has enormous potential.

Erik Torenberg: What changes in the discussion about corporate moats and competitive barriers in the AI era? How should we view this issue?

Bryan Kim: I have reflected on this a lot recently. Traditional moats (network effects, workflow embedding, data accumulation) remain important, but we observe that companies obsessed with 'building the moat first' are often not the winners. In the areas we focus on, the winners are usually those who break convention and iterate quickly—they ship new versions and products at astonishing speed. At this early stage of AI, speed is the moat: whether it is the speed of breaking through the noise or the speed of product iteration, both are key to winning, because moving fast captures users' mindshare, converts it into actual revenue, and forms a sustainable positive cycle.

Erik Torenberg: This is interesting. Ben Thompson wrote a blog post about ten years ago on Snapchat's 'gingerbread house' strategy, whose core point was that anything Snap can do, Facebook can do better—but Snap keeps launching new ideas, and if it maintains that pace of innovation, the pace itself may become its moat. He called it the gingerbread house strategy.

Bryan Kim: I believe the ultimate factors that work are user reach and network effects. Snap has an advantage in this area—it occupies a core communication platform position among Generation Z and younger users.

Erik Torenberg: How do you view the construction of network effects for new products?

Bryan Kim: Currently, most products are still at the creative-tool stage and have not yet closed the loop of creation, consumption, and network effects. Although true network effects have not yet emerged, we do see a new type of moat in ElevenLabs: entering the enterprise market with extremely fast iteration and outstanding product quality, and embedding deeply into workflows. That model is taking shape, while traditional network effects still need to be watched.

Olivia Moore: ElevenLabs is a typical case. A few days ago I needed to voice an AI-generated video. Because of their clear first-mover advantage and best-in-class models, the large user base creates a data flywheel, and they have now built up a voice library—users have uploaded a large number of custom voices and characters. When I compared voice suppliers, if I needed a specific type—say, an old wizard's voice—ElevenLabs could offer 25 options while other platforms might have only 2 or 3. Although still early, this looks more like a traditional platform network effect than a brand-new form.

Voice AI: The explosion of enterprise-level AI voice demand

Erik Torenberg: We have been paying attention to voice interaction for a long time; which parts of the initial concepts have been realized? What are the future trends? Anish, why did you have such high hopes for voice interaction?

Anish Acharya: What initially inspired us is that voice, as a fundamental medium, runs through the history of human interaction yet never became the core carrier of technology. The technology was always immature—from VoiceXML to voice applications to products like Dragon NaturallySpeaking in the 90s; interesting, but never a foundation to build on. Generative models make voice a native technological element; this critical area of life still has enormous room for exploration and will surely give rise to many AI-native applications.

Olivia Moore: I believe our initial excitement about voice came more from the consumer side—envisioning an always-available pocket coach, therapist, or companion. That concept has begun to take shape, and many products already deliver related functions. What surprised me is that, as the models improve, enterprise adoption is moving faster: highly critical fields like financial institutions are quickly adopting voice technology to replace or augment human customer service, because these enterprises previously faced compliance constraints, annual call-center staff turnover as high as 300%, and great difficulty managing offshore call centers.

The truly groundbreaking consumer voice experience is still brewing. There are early cases—users stretching ChatGPT's advanced voice mode into novel directions, or products like Granola that create value from 24/7 voice data. The allure of the consumer market lies in its unpredictability: the best products often seem to come out of nowhere; otherwise they would have been built long ago. Innovation in consumer voice over the coming year is worth looking forward to.

Anish Acharya: Indeed, voice is becoming the breakthrough for AI to enter the enterprise market. Currently, many people have a cognitive blind spot: they think AI voice is only suitable for low-risk scenarios, such as customer service. But our viewpoint is that the most important conversations in business—such as negotiations, sales proposals, customer persuasion, and relationship maintenance—will be dominated by AI because AI performs better in these areas.

Erik Torenberg: When will people start to engage in sustained, effective interactions with AI-generated 'digital avatars'? For example, in scenarios where they converse with AI Justine, AI Anish, or AI Erik.

Justine Moore: We have already seen some prototypes. Companies like Delphi can create AI clones from a knowledge base, and users can get suggestions or feedback. As Bryan mentioned earlier, the key question is: what if not only celebrities have AI avatars that interact through text and voice (and possibly video in the future), but everyone does? In the consumer field we often think: many people have unique skills or insights—the funny friend from high school who could have created a comedy cooking show but never broke through, or a mentor with valuable life advice. How can AI clones and personas extend their influence in unprecedented ways?

Currently observed applications are mostly concentrated on celebrities/experts or another extreme—already recognized virtual characters (like Character.ai adding voice modes in early forms). When trying new technologies, users often prefer to interact with familiar characters, such as favorite anime characters. But in the future, we will fill the gap in the middle ground—neither purely fictional characters nor celebrities, but AI avatars covering all real individuals.

Olivia Moore: People learn in different ways, and AI voice products can serve that diversity well. MasterClass recently launched an interesting beta: converting existing course instructors into voice agents so users can ask personalized questions. As I understand it, the system uses RAG over all of an instructor's course content to provide highly customized, precise answers. This interests me—although I am a fan of the company, I have never had the patience or time to finish a 12-hour course, yet I could gain useful insights from a 2-5 minute conversation with the MasterClass voice agent.
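The RAG setup Olivia describes can be illustrated with a toy retrieval step. This is not MasterClass's actual system: real pipelines chunk course transcripts and rank them by embedding similarity, whereas this self-contained stand-in uses simple word overlap (Jaccard similarity), and the course snippets are invented.

```python
# Toy illustration of the retrieval step in a RAG pipeline: pick the
# course passage most relevant to the user's question, then hand it to
# a language model as context. Word overlap stands in for embeddings.
def score(query, passage):
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q | p)  # Jaccard similarity

# Invented stand-ins for chunks of an instructor's course transcript.
course_passages = [
    "knead the dough until smooth and let it rest for an hour",
    "season the sauce early so the flavors have time to develop",
    "plate with contrast in color height and texture",
]

def retrieve(query, passages, k=1):
    # Return the k passages most similar to the query.
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

context = retrieve("how long should dough rest", course_passages)
# The retrieved passage would then be prepended to the user's question
# in the prompt sent to the language model ("generation" in RAG).
print(context[0])
```

The payoff is exactly what Olivia notes: the agent answers from the specific 12 hours of course material rather than from generic model knowledge.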

Symbiosis of Reality and Virtuality: AI Avatars and Human Creators

Anish Acharya: A deeper question is: do users prefer to converse with clones of interesting individuals or interact with completely fictional 'perfect ideal types'? The latter may have more exploratory value—the 'perfect match' might exist in reality but has never been encountered, and technology can materialize it. What would such a form of existence look like? This is what is worth pondering.

Erik Torenberg: It's worth pondering: in which scenarios do we still need humans to perform tasks, and in which scenarios will AI be more accepted as a substitute? How will this boundary be defined?

Anish Acharya: The Masterclass case mentioned by Olivia essentially extends the one-way emotional connection. The value of conversing with a specific character clone lies in meeting the user's need for communication with a concrete object, rather than interacting with the abstract concept of 'the most ideal stranger'.

Bryan Kim: This reminds me of the viral tweet related to ChatGPT—someone in the New York subway was conversing with ChatGPT using voice the entire time, as if chatting with a girlfriend.

Justine Moore: There’s another case: a parent, overwhelmed by their child continuously asking questions about Thomas the Tank Engine for 45 minutes, turned on voice mode and handed the phone to the child. Two hours later, they returned to find the child still deeply discussing Thomas the Tank Engine with ChatGPT—the child did not care who the conversation partner was; they only cared that this 'person' could infinitely satisfy their interest exploration.

Erik Torenberg: Imagine that when using ChatGPT or Claude for counseling or career advice, I might prefer a dedicated AI therapist or coach. In the future, perhaps we can accumulate data by recording counseling sessions, or directly use a therapist's or coach's online content library to reconstruct their digital avatar.

Returning to the core of your question: in 5-10 years, will the top artists be new-generation AI creators like Lil Miquela? Or will Taylor Swift and her AI legion prevail? Similarly, will the next Kim Kardashian of social media be a real human or an AI product? What predictions do you all have?

Justine Moore: I have been thinking about this for several years. We witnessed the rise of Lil Miquela and followed the K-pop groups that first introduced AI holographic members. The phenomenon is closely tied to the development of ultra-realistic image and video technology—AI-generated influencers have now gained significant attention with realistic imagery, and their authenticity often sparks debate. I believe future creators and celebrities will diverge into two categories: the Taylor Swift-like 'human experience' type, whose artistic appeal is deeply tied to life experience, live performance, and other elements AI cannot replicate; and the 'interest-oriented' type, like the ChatGPT conversation about Thomas the Tank Engine—no real-life backstory needed, just the ability to continuously produce high-quality content in a specific domain. The two may coexist for a long time.

Olivia Moore: This reminds me of the ongoing debate around AI art—although the barrier to generating art has dropped, making excellent AI work still takes a lot of time. When we held an AI artist event last summer, we found that many creators' workflows for AI films took as much time as traditional shoots; the difference was that they might lack traditional film skills and so could not have realized their ideas before. The number of AI-generated influencers is surging, but only a few will stand out the way Lil Miquela did. I expect two camps to form—AI talent and human talent—with the top individuals of each at the forefront, and the odds of success very low on both sides. That may be the reasonable state.

Justine Moore: Or should we say 'non-human talent'? The Veo3 platform has an interesting phenomenon: in street interview formats, the interviewee might be a fairy, wizard, ghost, or a plush character beloved by Generation Z. These can completely be AI-generated virtual beings, and this innovative form has great potential.

Anish Acharya: This phenomenon also exists in the music field. The current AI-generated music is generally mediocre, essentially a product of cultural averaging, while true culture should be at the forefront. The core issue lies in the quality of the works rather than the types of creators—we often view AI itself as the problem, but we should focus on the quality of the work.

Erik Torenberg: Assuming the quality of the work is comparable, do you think people will still prefer human creators?

Anish Acharya: It is entirely possible. This leads to a deeper philosophical discussion: if all music training models before the emergence of hip-hop were used, could they generate hip-hop styles? I believe not, because music is the intersection of historical accumulation and cultural context. Truly innovative music needs to break through the boundaries of training data, whereas current models lack such breakthroughs.

AI Companion Revolution: Vertical Ecosystem and Social Empowerment

Erik Torenberg: I know a few extremely talented friends who are developing a same-sex AI companion app; I would have been shocked if I heard such a concept in 2015. But according to them, among the current top 50 applications, there are actually 11 companion-type applications. This raises questions: are we at the starting point of this trend? Will various vertical companion applications emerge in the future? How will the ultimate form of such applications evolve? How should we understand this development trend?

Justine Moore: We have invested a lot of research into various companionship scenarios—from psychological therapy, life coaching, and friend socializing to workplace assistants and virtual lovers, covering almost every dimension. Interestingly, this may be the first mainstream application scenario for LLMs. We often joke that whether it's customer service for car dealers or other chatbots, users will always try to convert them into therapists or girlfriends. Reviewing chat logs reveals that many users essentially crave someone to confide in.

Now, computers can respond instantly, 24/7, and in a humanized manner—a revolutionary breakthrough for the many who previously felt unheard or felt they were 'shouting into the void'. I believe this is just the beginning, especially since current products are mostly general-purpose, relying mainly on the foundation model providers (users repurposing ChatGPT in scenarios it was not designed for). There are already cases showing that a single company can give characters personalities and build games or virtual worlds through visual design and prompt engineering, achieving extremely high engagement. For example, Tolan serves a younger demographic, while another kind of 'companion' app lets users photograph their food and, through nutritional analysis, provides health advice and emotional support—because for many people, dietary issues intertwine with psychological problems that traditionally require professional treatment.

The most exciting part is that the definition of 'companionship' has rapidly expanded, from friends and partners to any advice, entertainment, or consulting service that once required a human. Going forward, we will see companion applications emerge in ever more vertical niches.

Bryan Kim: While working at a social company, I noticed a clear trend: people have fewer and fewer friends to confide in. For the younger generation, the average is just over one. This suggests demand for companion apps will persist for a long time and matters deeply to many people. As Justine said, such applications will take many forms, but the core need, establishing deep emotional connection, will not change. Perhaps, as we discussed, interpersonal connection is an unmet need and AI companions are filling the gap: what matters is the sense of connection, and the other party does not necessarily need to be human.

Erik Torenberg: Many people hearing this discussion may worry that it points to fewer real friends, the end of romantic relationships, rising depression and suicide rates, and ever-declining birth rates.

Justine Moore: I disagree with that view. It reminds me of the best post I have seen on the Character.AI subreddit, a community I should mention I have spent a lot of time studying. Many high school and college students who spent their adolescence during the COVID-19 pandemic lack interpersonal skills because they missed out on real social interaction. One college student who had been regularly sharing his interactions with his AI girlfriend suddenly posted that he had found a '3D girlfriend' in real life and would be taking a break from the community. He specifically thanked Character.AI for teaching him how to talk to people: how to flirt, ask questions, discuss shared interests, and so on. This illustrates the highest value of AI: fostering higher-quality human connection.

Erik Torenberg: Were the community members happy for him? Or did they call him a traitor?

Justine Moore: The vast majority genuinely wished him well. There were a few 'sour grapes' comments from people who had not yet found real partners, but I believe they will eventually get what they wish for too.

Olivia Moore: There is real evidence for this. Take Replika as an example: actual studies show that users' depression, anxiety, and suicidal ideation decreased. The broader trend is that many people lack a sense of being understood and of security, which makes real socializing difficult. If AI can help people who cannot afford the time or money for therapy to change themselves, they will ultimately be more capable of acting in the real world.

Erik Torenberg: The moment I truly realized the impact of companion apps was the reaction after I interviewed the founder of Replika. After the interview, the founder closed the related discussion area, but the video's comment section was flooded with real-life confessions from users, comments such as 'this is like my wife after I stopped having sex'. At that moment I realized how significant a place this app held in users' lives.

Justine Moore: This actually continues long-standing human social patterns. Generation Z forms romantic relationships on Discord, just as we once built deep connections with strangers on anonymous postcard websites: you never knew the other person's true identity, yet you could develop profound emotional ties. AI simply makes this experience more immersive and deeper.

Anish Acharya: I think the key point is that AI cannot be too compliant. Real relationships involve friction and negotiation, and a perfectly agreeable AI could stunt that ability. So a balance must be struck between 'moderate pushback', which helps users improve their social skills, and 'excessive compliance', which lets those skills atrophy.

The ambient-perception revolution: wearable AI rewriting our social DNA

Erik Torenberg: Finally, let's look ahead to possible future scenarios, perhaps new platforms or hardware forms that might change the game, like OpenAI's just-announced acquisition of Jony Ive's company. Bryan, you have mentioned your anticipation for smart glasses several times; please elaborate on that, but I would also love to hear everyone's thoughts on mobile devices.

Bryan Kim: There are currently some 7 billion mobile phones worldwide, yet few devices have truly reached an ideal level. My view is that the future may continue to build on mobile, with many possibilities: establishing privacy walls, for instance, or closing the data loop on-device through local LLMs. So I remain optimistic about the model-development layer, which is actually the area I value most. As Olivia said, mobile devices are always on, but other devices can be too. When new devices or 'digital prosthetics' (smart devices attached to personal items) emerge, what possibilities will they bring?

Erik Torenberg: Do you all have any specific ideas? For example, wearable devices, portable gadgets, whether mobile accessories or standalone devices, which hardware forms might realize these visions?

Olivia Moore: I believe AI has already proliferated significantly on the consumer side, even though it is mostly realized today through web-based text-box interactions. I am particularly optimistic about AI that can truly accompany users and perceive their environment. Interestingly, many young people under 20 at tech parties now wear smart badges that record what they say and do, and they derive real value from them. Products like this are on the rise, such as AI assistants that can see your screen and proactively help. The progress of agentic models is also exciting: they are evolving from offering suggestions to sending emails and doing other practical work.

Justine Moore: The human side matters too. Today we lack objective ways to assess ourselves. If AI could analyze all of our conversations and online behavior and offer suggestions like 'spend five more hours a week and you could become an expert in this field', or recommend potential partners (business partners, co-founders, even dates) from a vast interpersonal network, that sci-fi scenario would excite me the most.

Olivia Moore: And that depends on AI being a 24/7 companion, not just a text box you interact with, as with ChatGPT.

Anish Acharya: The most widely adopted device after the mobile phone is actually AirPods. This seemingly ordinary carrier may hide opportunities, though there are social-etiquette issues; wearing AirPods at dinner is admittedly strange. But perhaps there are ways to integrate AI with existing social etiquette, which could be interesting.

Erik Torenberg: The phenomenon you mentioned about young people recording conversations at gatherings is worth exploring. Will all conversations be recorded in the future? Do you think the new generation has accepted this new normal?

Olivia Moore: Yes, new social norms will emerge around this behavior. Many people feel uneasy about it, but the trend has taken hold and is irreversible because its real value is becoming evident. That is precisely why new cultural norms will appear. Just as people gradually developed the etiquette of avoiding loud phone conversations in public when mobile phones first became widespread, similar social rules will arise around recording devices.