Beyond AI agents, embodied robots are the other major vertical where AI is landing. Morgan Stanley has predicted that by 2050 the global humanoid robot market could exceed $5 trillion.

As AI develops, robots will gradually evolve from mechanical arms in factories into companions in daily life, gaining perception and understanding through AI and, with them, the ability to make decisions on their own. The problem is that today's robots are more like a crowd of 'mute' machines that cannot talk to one another: each vendor uses its own language and logic, so software is incompatible and intelligence cannot be shared. It is as if you bought a Xiaomi and a Tesla, but the two cannot even assess road conditions together, let alone collaborate on a task.

What OpenMind wants to change is exactly this state of isolated combat. It does not build robots; it builds a collaborative system that lets robots 'speak the same language, follow the same rules, and get things done together'. Just as iOS and Android fueled the explosion of smartphone applications, and Ethereum gave the crypto world a common foundation, OpenMind aims to give the world's robots a unified 'operating system' and 'collaboration network'.

In short, OpenMind is building a universal operating system for robots, so that they can not only perceive and act but also cooperate safely and at scale, in any environment, over a decentralized network.

Who is supporting this open foundation?

OpenMind has raised $20 million across seed and Series A rounds, led by Pantera Capital. More important than the amount is the breadth and complementarity of the cap table, which assembles nearly every key piece of this track. On one end sit long-term players from the Western technology and financial ecosystem: Ribbit, Coinbase Ventures, DCG, Lightspeed Faction, Anagram, Pi Network Ventures, Topology, and Primitive Ventures. They are familiar with the paradigm shifts of crypto and AI infrastructure and can contribute model, network, and compliance experience to an 'agent economy + machine internet'. On the other end sits industrial momentum from the East, represented by Sequoia China, which understands the processes and cost thresholds required to turn a prototype into a scalable, deliverable product. Together, these two forces give OpenMind not just funding but pathways and resources 'from lab to production line, from software to underlying manufacturing'.

This path is also beginning to connect with traditional capital markets. In June 2025, when KraneShares launched its global humanoid and embodied intelligence index ETF (KOID), it invited Iris, a humanoid robot jointly customized by OpenMind and RoboStore, to ring the opening bell on NASDAQ, the first 'robot guest' in the exchange's history to perform the ceremony. This was more than a joint narrative of technology and finance; it was a public signal about how machine assets will be priced and settled.

As Pantera Capital partner Nihal Maunder said:

'If we want intelligent machines to operate in open environments, we need an open intelligent network. What OpenMind is doing for robots is akin to what Linux did for software and Ethereum did for blockchain.'

The team: from lab to production line.

Jan Liphardt, OpenMind's founder, is an associate professor at Stanford University and a former professor at Berkeley who has long worked on data and distributed systems, with significant contributions in both academia and engineering. He advocates open-source reuse, replacing black boxes with auditable and traceable mechanisms, and combining AI, robotics, and cryptography through interdisciplinary methods.

OpenMind's core team comes from institutions such as OKX Ventures, Oxford Robotics Institute, Palantir, Databricks, and Perplexity, covering key areas such as robotic control, perception and navigation, multimodal and LLM orchestration, distributed systems, and on-chain protocols. An advisory board of experts from academia and industry (including Steve Cousins, head of robotics at Stanford; Bill Roscoe of the Oxford Blockchain Centre; and Alessio Lomuscio, professor of safe AI at Imperial College) further safeguards the safety, compliance, and reliability of the robots.

OpenMind's solution: a two-layer architecture and a shared order.

OpenMind has built a reusable infrastructure that allows robots to collaborate and share information across devices, manufacturers, and even borders.

Device side: OM1, an AI-native operating system for physical robots that closes the loop from perception to execution, enabling different types of machines to understand their environment and complete tasks;

Network side: FABRIC, a decentralized collaboration network that provides identity, task allocation, and communication mechanisms, ensuring robots can recognize one another, divide work, and share state while collaborating.

This combination of 'operating system + network layer' enables robots not only to act independently but also to cooperate, align processes, and complete complex tasks together within a unified collaborative network.

OM1: An AI-native operating system for the physical world.

Just as smartphones need iOS or Android to run applications, robots also need an operating system to run AI models, process sensor data, make inference decisions, and execute actions.

OM1 exists for this purpose; it is an AI-native operating system for real-world robots, enabling them to perceive, understand, plan, and complete tasks in various environments. Unlike traditional, closed robotic control systems, OM1 is open-source, modular, and hardware-agnostic, capable of running on various forms such as humanoids, quadrupeds, wheeled robots, and robotic arms.

Four core stages: from perception to execution.

OM1 decomposes robotic intelligence into four general steps: Perception → Memory → Planning → Action. OM1 fully modularizes this process and connects the modules through a unified data language, so intelligent capabilities can be composed, replaced, and verified. A minimal sketch of the idea follows.
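To make the decomposition concrete, here is a minimal, hypothetical sketch (not OM1's actual API) of robot intelligence as uniform, swappable stages that exchange one shared data type:

```python
# Hypothetical sketch (not OM1's actual API): robot intelligence as uniform,
# swappable stages that exchange one shared data type.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass(frozen=True)
class Observation:
    timestamp: float
    text: str  # natural-language description of what was perceived


class Stage(Protocol):
    def step(self, context: list[Observation]) -> list[Observation]: ...


@dataclass
class Pipeline:
    # Ordered stages, e.g. [perception, memory, planning, action]; because they
    # share one interface, any stage can be replaced or verified in isolation.
    stages: list[Stage] = field(default_factory=list)

    def tick(self, inputs: list[Observation]) -> list[Observation]:
        context = inputs
        for stage in self.stages:
            context = stage.step(context)
        return context
```

The design point is composability: a new planner or perception model slots in without touching the other stages, because every stage speaks the same data language.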

The architecture of OM1.

Specifically, OM1's seven-layer architecture is as follows (a sketch of the data-bus idea follows the list):

Sensor Layer collects information: cameras, LIDAR, microphones, battery status, GPS, and other multimodal inputs for perception.

AI + World Captioning Layer translates information: multimodal models convert visual, auditory, and status inputs into natural language descriptions (e.g., 'you see a person waving').

Natural Language Data Bus transmits information: all perceptions are converted into timestamped language segments, passing between different modules.

Data Fuser combines information: integrating multi-source inputs to generate a complete context for decision-making (prompt).

Multi-AI Planning/Decision Layer generates decisions: multiple LLMs read the context and generate action plans based on on-chain rules.

NLDB Downlink transmits decisions: conveys the decision results to the hardware execution system through the language intermediary layer.

Hardware Abstraction Layer acts: converting language instructions into low-level control commands to drive hardware execution (movement, voice broadcasting, transactions, etc.).
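The glue across these layers is the natural-language data bus: every layer exchanges the same timestamped language record. The sketch below illustrates the idea; field names and functions are assumptions, not OM1's actual message format.

```python
# Illustrative sketch of the natural-language data bus: every layer exchanges
# the same timestamped language record. Field names are assumptions, not the
# actual OM1 message format.
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class NLDBMessage:
    timestamp: float  # when the observation was made
    source: str       # producing layer or sensor, e.g. "vision"
    text: str         # natural-language description of the input


def caption(detections: list[str]) -> NLDBMessage:
    # World-captioning step: reduce raw detections to one language segment.
    return NLDBMessage(time.time(), "vision", "you see " + ", ".join(detections))


def fuse(messages: list[NLDBMessage]) -> str:
    # Data-fuser step: merge multi-source messages into one decision prompt.
    ordered = sorted(messages, key=lambda m: m.timestamp)
    return "\n".join(f"[{m.timestamp:.2f}] {m.source}: {m.text}" for m in ordered)


print(fuse([caption(["a person waving"]),
            NLDBMessage(time.time(), "audio", "you hear a greeting")]))
```

Because everything on the bus is plain language with a timestamp, any planning model that reads text can consume the fused context, which is what makes the decision layer swappable.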

Quick to get started, widely deployed.

To turn 'an idea' into 'a task a robot can execute' quickly, OM1 offers an out-of-the-box development path. Developers define goals and constraints in natural language, combined with large models, and generate reusable skill packages within hours rather than months of hand-coding. The multimodal pipeline natively ingests LiDAR, vision, and audio, so there is no hand-written sensor fusion to maintain. On the model side, GPT-4o, DeepSeek, and mainstream VLMs are pre-integrated, enabling voice input and output out of the box. At the system layer, OM1 is fully compatible with ROS2 and Cyclone DDS and, through its HAL adaptation layer, integrates with the Unitree G1, Go2, TurtleBot, and various robotic arms. It also speaks natively to FABRIC's identity, task orchestration, and on-chain settlement interfaces, so a robot can execute tasks on its own or join the global collaboration network for usage-based billing and auditing. The sketch below illustrates the skill-package idea.
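As a rough illustration of 'natural language in, reusable skill package out': in OM1 the plan would be generated by a large model, which is stubbed here, and every name is illustrative rather than OM1's real API.

```python
# Hypothetical sketch of "natural language in, reusable skill package out".
# The LLM call that expands a goal into a plan is stubbed; all names are
# illustrative, not OM1's real API.
from dataclasses import dataclass


@dataclass
class SkillPackage:
    name: str
    goal: str               # stated in natural language by the developer
    constraints: list[str]  # e.g. speed limits, no-go zones, battery policy
    plan: list[str]         # generated action steps, reusable across runs


def compile_skill(name: str, goal: str, constraints: list[str]) -> SkillPackage:
    # Stand-in for the LLM call that expands goal + constraints into a plan.
    plan = [f"plan step derived from goal '{goal}' under {len(constraints)} constraints"]
    return SkillPackage(name, goal, constraints, plan)


patrol = compile_skill(
    "corridor_patrol",
    goal="patrol the second-floor corridor and report anomalies",
    constraints=["stay below 0.5 m/s", "return to dock below 20% battery"],
)
```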

In the real world, OM1 has been validated across multiple scenarios: the quadruped platform Frenchie (Unitree Go2) executed complex site tasks at the 2024 USS Hornet Defense Technology Showcase; the humanoid platform Iris (Unitree G1) handled live human-robot interaction at the Coinbase booth during ETHDenver 2025; and through RoboStore's educational programs, the same development paradigm has entered colleges across the United States, extending into teaching and research.

FABRIC: A decentralized human-robot collaboration network.

Even when individual robots are smart enough, they still end up working alone if they cannot collaborate on a trustworthy basis. Today's fragmentation stems from three fundamental problems. Identity and location cannot be standardized and proven, so outsiders struggle to trust 'who I am, where I am, and what I am doing'. Skills and data lack controlled authorization paths, so they cannot be shared and invoked safely across parties. And the boundaries of control and responsibility are unclear: frequency, scope, and feedback conditions are hard to agree on in advance or trace after the fact. FABRIC answers these pain points at the system level. It uses decentralized protocols to give robots and operators verifiable on-chain identities, and on top of that identity provides integrated infrastructure for task publishing and matching, end-to-end encrypted communication, execution records, and automatic settlement, turning collaboration from 'ad-hoc integration' into 'institutionalized, with evidence'. A minimal sketch of the verifiable-identity idea follows.
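To show the 'who I am, where I am, what I am doing' claim in miniature, this sketch signs a claim with an Ed25519 key via the PyNaCl library so any peer holding the public key can verify it. The record fields are assumptions for illustration, not FABRIC's actual wire format.

```python
# Minimal sketch of a verifiable "who I am, where I am, what I am doing" claim,
# signed with an Ed25519 key via PyNaCl. Record fields are assumptions, not
# FABRIC's actual wire format.
import json
import time
from nacl.signing import SigningKey, VerifyKey


def make_claim(key: SigningKey, robot_id: str, location: tuple, task: str) -> bytes:
    claim = {"robot_id": robot_id, "location": location, "task": task, "ts": time.time()}
    return key.sign(json.dumps(claim).encode())  # signature + payload in one blob


def verify_claim(key: VerifyKey, signed: bytes) -> dict:
    # Raises nacl.exceptions.BadSignatureError if the claim was tampered with.
    return json.loads(key.verify(signed).decode())


sk = SigningKey.generate()
signed = make_claim(sk, "robot-42", (37.42, -122.16), "3d-scan")
print(verify_claim(sk.verify_key, signed))
```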

Operationally, FABRIC can be understood as a network plane that combines location, connection, and scheduling. Identities and locations are continuously signed and verified, so nodes are 'visible and trustworthy' to one another by default. Point-to-point channels work like on-demand encrypted tunnels, enabling remote control and monitoring without public IPs or complex network configuration. The entire flow from task publishing to bidding, execution, and acceptance is standardized and recorded, so settlement can automatically split revenue and refund deposits, and compliance or insurance reviews can answer 'who did what, when, and where'. On this base, typical applications emerge naturally: enterprises remotely operate equipment across regions; cities turn cleaning, inspection, and delivery into on-demand Robot-as-a-Service; fleets report real-time traffic and build shared maps; and when needed, a nearby robot can be dispatched to perform 3D scanning, architectural surveying, or insurance evidence collection. The task lifecycle can be sketched as a small state machine, as below.
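Here is a hypothetical sketch of that lifecycle with an append-only audit trail; the states, fields, and transitions are illustrative, not FABRIC's actual protocol.

```python
# Hypothetical state machine for a FABRIC task: publish -> claim -> execute ->
# accept -> settle, with an append-only audit trail. Illustrative only.
from dataclasses import dataclass, field
from enum import Enum, auto


class TaskState(Enum):
    PUBLISHED = auto()
    CLAIMED = auto()
    EXECUTING = auto()
    ACCEPTED = auto()
    SETTLED = auto()


@dataclass
class Task:
    task_id: str
    reward: float
    deposit: float
    state: TaskState = TaskState.PUBLISHED
    log: list[str] = field(default_factory=list)  # "who did what, when"

    def transition(self, new_state: TaskState, note: str) -> None:
        self.log.append(f"{self.state.name} -> {new_state.name}: {note}")
        self.state = new_state


task = Task("scan-0031", reward=120.0, deposit=10.0)
task.transition(TaskState.CLAIMED, "robot-42 posted deposit")
task.transition(TaskState.EXECUTING, "on site, signed location proof recorded")
task.transition(TaskState.ACCEPTED, "scan delivered and checked")
task.transition(TaskState.SETTLED, "reward split paid, deposit refunded")
```

Because every transition is logged against a verified identity, settlement and after-the-fact review become mechanical rather than negotiated.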

Because identity, tasks, and settlement live on the same network, the boundaries of collaboration are defined in advance, the facts of execution are verifiable after the fact, and invoking a skill has measurable costs and returns. Over the long term, FABRIC can evolve into the 'application distribution layer' for machine intelligence: skills circulate globally under programmable authorization terms, and the data generated by invoking them feeds back into models and strategies, letting the whole collaborative network keep upgrading itself under trustworthy constraints.

Web3 is embedding 'openness' into robot society.

The robot industry is rapidly concentrating onto a few platforms, with hardware, algorithms, and networks locked inside closed stacks. The value of decentralization is that robots of any brand, from any region, can collaborate, exchange skills, and settle accounts within one open network without depending on a single platform. OpenMind encodes this order in on-chain infrastructure. Each robot and operator holds a unique on-chain identity (ERC-7777), with hardware fingerprints and permissions traceable. Tasks are published, bid on, and matched under public rules, generating encrypted proofs of time and location on-chain. On completion, contracts automatically settle revenue shares, insurance, and deposits, with results verifiable in real time. New skills are defined by contracts that set invocation quotas and compatible devices, circulating globally while intellectual property stays protected. The robot economy is thus born anti-monopoly, composable, and auditable, with 'openness' embedded in the foundational protocols of robot society. The sketch below suggests what such identity and license records might look like.
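As a hedged sketch of the records described above: an identity with a hardware fingerprint, and a skill license whose terms (call quota, compatible devices) are enforced at invocation time. These structures are illustrative and are not the actual ERC-7777 interface.

```python
# Hedged sketch of the records described above. Illustrative only; this is
# not the actual ERC-7777 interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class RobotIdentity:
    address: str               # on-chain account controlling this robot
    hardware_fingerprint: str  # ties the identity to a physical machine
    permissions: frozenset     # e.g. {"navigate", "scan", "transact"}


@dataclass
class SkillLicense:
    skill_name: str
    compatible_devices: frozenset  # device types the contract allows
    calls_remaining: int           # programmable terms: metered invocations
    fee_per_call: float

    def invoke(self, robot: RobotIdentity, device_type: str) -> float:
        if device_type not in self.compatible_devices:
            raise PermissionError("device type not licensed for this skill")
        if self.calls_remaining <= 0:
            raise PermissionError("call quota exhausted; renew the license")
        self.calls_remaining -= 1
        return self.fee_per_call  # amount settled to the skill's author
```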

Let embodied intelligence step out of isolation.

Robots are moving from exhibition halls to everyday life: patrolling hospital wards, learning new skills on campuses, completing inspections and modeling in cities. The real challenge lies not in stronger motors, but in ensuring that machines from different sources can trust each other, share information, and collaborate; scaling requires not just technology, but also distribution and supply.

OpenMind's go-to-market therefore starts from channels rather than from stacking parameters. Partnering with RoboStore (one of the largest Unitree distributors in the United States), it is packaging OM1 into standardized teaching materials and lab kits and rolling out integrated software-and-hardware supply across thousands of US colleges. Education has proven to be the most stable source of demand, and it embeds OM1 directly into the next generation of developers and applications.

For broader social distribution, OpenMind leverages its investor ecosystem to turn 'software export' into a platform play. Large crypto ecosystems such as Pi add further reach to this model, gradually forming a flywheel in which 'someone writes, someone uses, and someone pays'. Educational channels provide stable supply while platform distribution brings demand at scale, giving OM1 and the applications above it a replicable expansion path.

In the Web2 era, robots were usually locked inside a single vendor's closed stack, and functionality and data could hardly flow across platforms. With standardized teaching materials and platform distribution, OpenMind makes openness the default: the same system enters campuses and industries and keeps spreading through platform networks, so openness becomes the default starting point of scalable deployment.