The AI x internet revolution coming on #ICP:
*Self-writing* sovereign web applications and internet services that owners can update in real-time by talking 💥
The Internet Computer has been designed for this and work has been ongoing for YEARS.
Today I want to explore this topic in a bit of depth for the first time.
Please note — this area of work is not to be confused with the AI capabilities of ICP networks. (For newcomers: the Internet Computer is unequivocally the ONLY public network in the world that can host and run AI in the mode of smart contracts, making it network-resident, decentralized, secure and unstoppable. For example, I recently demonstrated onchain neural networks performing facial recognition, and pending enhancements to the ICP protocol will allow LLMs to run as smart contracts too.)
Today I'm talking about a very different challenge that ICP will help the world meet: one where *running* web applications and internet services are created and updated simply by talking.
Users will create these for any purpose – something for themselves, like a secure personal notes manager or a personal website, or something for others, like social media, games, web3 services, or enterprise infrastructure, involving online communities large and small – simply by issuing instructions in natural language.
Please also note, this is a far bigger and very different challenge than using AI to write and review software, which is already happening at scale. This is something well beyond that, for reasons I will explain...
To begin, let's step back and look at some general trends, to understand where the internet is going with AI:
Many of you reading this will already be using ChatGPT to explore ideas, get information, and analyze, refine and create content, and soon to search the internet. ChatGPT is an example of an LLM (i.e. a chatbot driven by a large language model) with a huge number of parameters that has been trained on a colossal amount of data.
If you're a software engineer, you'll already have been using LLMs to help write and review code, although maybe you prefer Claude.
These models triggered the recent wave of hype around AI. But in fact, they are manifestations of a deeper trend that AI has been driving for some time now.
The deeper trend is this: we interact with AI, knowingly or not, and it gives us something we want.
Some of the first mass market manifestations were services like TikTok and Instagram Reels.
These services are not traditional social media services at all. Under the skin, they are driven by powerful AI engines, without which they wouldn't work. When you use them, you are really interacting with AI.
The AI inside these services classifies the social media content they can serve, so that it knows what's inside videos and other kinds of posts. Then, as it feeds you content to consume, telemetry is collected, enabling the AI to track how you interact with the content – for example, most simply, it can track how much of a video you watch, to determine what kinds of content you like.
As the AI gets better at understanding what you find compelling (which can extend to the sequences of videos it shows you, not just individual videos), you receive a better and better experience, which is why these services are so incredibly addictive.
This new media model is just part of an inexorable journey we are being taken on by AI technology, where it gives us what we want – and it will be far reaching.
Seemingly impossible things are now happening. For example, researchers recently trained AI on millions of hours of footage of people playing Doom and Minecraft. This has enabled the AI to simulate those games in realtime for users.
You can play the games, but there's no game server or game client, just an AI streaming video to you in response to traditional inputs such as left, right, run, jump and shoot (to be clear, there's no game server, or game client, or any other infrastructure involved from the original games).
This hints at what's coming: in the future, AI will create virtual reality experiences for us, and watch how we interact with them, tailoring them in real time to make them more enjoyable and compelling.
Sci-fi as this might sound, it really will just be a continuation of the trend exemplified by TikTok and Instagram.
But enough of this. What else will AI give us on demand where ICP specifically plays a role?
We in the ICP community are heavily focused on reinventing the platform on which we build, specifically providing a better way to create web applications and internet services (including those with web3 and AI functionality inside), which also happen to be sovereign.
In the future, we will just talk to AI, to create and update the *running* web applications and internet services we want too.
It's obvious why.
For example, imagine a businessperson who wishes to build a custom personal website to promote their brand, with blogging functionality, a section where they can embed media pulled from places like YouTube, a library page hosting documents they have created, and a page linking to their social media profiles – and who wants all that functionality to look a certain way.
In the future, will they still be expected to hire designers and developers to build such a thing, or otherwise be stuck fiddling with Wix, or just sticking with their vanilla LinkedIn profile page?
Of course not, they'll just talk to AI.
What about an avid gamer, who wishes to create their own custom online game to share with friends? Will it still be difficult to express creativity without special technical skills and a lot of time? Nope.
And how about a business, NGO or government that just needs some custom CRM functionality? Will they still have to sign up to big-ticket SaaS services like Salesforce, and hire consultants to customize them?
Today, creating things on the internet is complex, time-consuming and costly – and this stops us building what we want.
The world is waiting to break free from this.
In the future, we will just talk to AI to create and update, which will result in an almost limitless number of new custom web applications and services being created for every purpose imaginable.
Here's how it will work:
1) You'll describe to AI the custom app or service you want
2) It will return a URL to your web browser, and there it will be, ready to use!
3) You and others will use the app, causing it to accumulate content and data.
4) You'll describe improvements, extensions and fixes.
5) Then you'll just refresh the web page to see them.
6) Loop to 3, and continue iterating to realize value.
Over time, this new paradigm will massively change the way tech works.
Imagine what this might mean for budding entrepreneurs around the world, who lack the technical skills or the funding to hire them, but who have ideas for ventures in social media, games, the sharing economy, AI-based services, web3 services, communication suites, and _ (fill in what you want).
This will democratize access to the tech economy, and help enable vast additional worldwide talent to participate and become industrious and successful.
This is an aim that has been part of the ICP project since inception.
Furthermore, this functionality will be for all humanity, not just entrepreneurs.
Imagine a group of high school students who want to organize the information they collect on a biology field trip. It works for them too!
Imagine some business department that needs custom online functionality, but can't convince the CIO and CFO to allow them to sign up for something like Salesforce and hire some consultants to customize it. (Which will anyway take a very long time, and cost a huge amount, even if they agree). Solved.
Now think about the current situation in the developing world. There, organizations have growing needs for custom online functionality, but can't afford Big Tech SaaS services, and meanwhile don't have the skills to build it out themselves – and if they did, they would also need the cybersecurity expertise to make what they built safe, which they lack. For those economies, this will be transformative.
The new paradigm will solve for all these needs, and in fact go one better.
People who create custom applications and internet services will OWN the software that powers them, even though they don't write it themselves, and OWN the data inside them – in contrast to the popular SaaS services businesses use, which hold customer data hostage, and even consumer services like Google Photos, which make it very difficult to get media out.
These custom applications and services will be truly sovereign, and the owners won't be captive customers – which has long been an aim of the Internet Computer network.
Counterintuitively, this new paradigm will also be great for software engineers – it will result in millions of new custom applications and services being created, and inevitably, there will be places where human assistance is useful to solve specific issues and help make prompts go further. The orders-of-magnitude increase in online infrastructure will create huge numbers of software engineering jobs, all around the world.
If you have stayed with me this far, hopefully by now you can see that this new paradigm is both inevitable, and represents one of the biggest revolutions ever in tech.
So... the next question is, how exactly will ICP finally uncork this paradigm for the world?
To understand this next part, we first have to look at the limitations of traditional IT when applied to this paradigm.
Creating and updating *running* online applications is much more complicated than getting an LLM to write some software code.
For example, if building with a typical traditional IT stack, this is what might be required:
1. Get an AWS account, and add a credit card
2. Get some "compute instances" (effectively servers)
3. Install some cybersecurity to make it safe
4. Install a database server, a web server, a...
5. Orchestrate it all with e.g. Kubernetes
6. Patch all your software for security
7. Design failover, backup and restore
8. Create tables in the database
9...
10. Install the code involved
This is quite a list of tasks, some of which are very involved, so the AI will have to do much more than just write code.
Let's assume the AI is given hooks, and has the knowledge, to perform all these steps itself.
Will this solve the paradigm?
Unfortunately there will still be issues...
At a basic level, the paradigm should provide users with a real-time creation experience, and even installing a database server or patch can take some time. Of course, this kind of thing can be mitigated by using preinstalled images, but problems run deeper...
The various steps and requirements involved in traditional IT can go wrong in multitudinous ways.
The AI's build sequence will break, like it does for humans, and it will have to make judgements about how to address the failures within an *unconstrained problem space,* which will subtly impact things like security – a serious problem, because traditional IT is insecure by default, and even small mistakes can result in disaster.
Traditional IT is a crazily complex Rube Goldberg machine, and giving AI sole responsibility in that kind of unconstrained problem space is potentially very dangerous, because it hallucinates, and can pick up bad memes from its training data.
Everything the AI does would have to be manually reviewed and, if the application or service is important, audited by technically competent people – and of course, the whole point of this paradigm is that it doesn't require creators to have technical skills, and that it makes creation a real-time iterative act.
And there are other showstoppers when using traditional IT.
In the new paradigm, the users/creators will want to update their running web applications and internet services in real-time, simply by telling the AI about the improvements, extensions and fixes they want.
The systems created will need to undergo substantial upgrades every few minutes!!
Traditional IT has not been designed to make this possible. Anyone involved will know that upgrades are a big deal, and with production systems, typically occur on long time intervals. That's because changes often have to be made across multiple components in a synchronized way (e.g. updating database tables, changing web server configurations...) and it's a fiddle.
Moreover, when you change the design of web applications and services running on traditional IT, the upgrade process often involves refactoring/migrating data, which is slow, computationally expensive and error prone – again preventing traditional IT from realizing the crucial real-time nature of the paradigm, in which running web applications and internet services are updated almost at the speed of chat.
I could go on, but it should be clear by now that traditional IT is not really suitable for the paradigm.
Because of the difficulties, we will see services like Vercel, and probably eventually services like Google, providing AI that can create apps within custom infrastructure platforms they craft, which ameliorate some of these problems.
But their platforms will remain less than ideal, and moreover, the software created by the AI will also be locked to their special platforms, and they will probably also hold the data involved hostage somehow, creating customer lock-in, and the applications and services involved won't be sovereign.
(Nonetheless, we predict that certain web3 projects that tend to imitate ICP will, in desperation, use something like Vercel to create dumbed-down versions of this paradigm, then mis-sell their services to the token-buying public as being "onchain" – but beyond successfully selling more tokens for them, their schemes will ultimately fail to win over mainstream users/creators worldwide.)
So what is really needed?
The good news is that DFINITY has been working on a solution to this paradigm for years already. Let me explain...
From the beginning, going back years, and now representing more than 1,000 person-years of R&D effort, our work has been ruthlessly focused on reinventing compute broadly, using a decentralized network that leverages advanced cryptography, protocol math and computer science.
Our work is completely unique in the web3 field.
A key feature of ICP is that you can build a web application entirely from secure and unstoppable network-resident software, which is a vastly more powerful evolution of smart contracts. When you build on the network, you don't need Big Tech and traditional IT.
AI can write this code, and upload it to an ICP network like the Internet Computer to create a web application or other internet service. To be clear, in the radical compute environment ICP creates, AI can create and update, just by uploading code...
There is no need to configure a cloud account, a database, a web server, cybersecurity.
Moreover, the code is automatically secure and immune to cyber attack.
So there's no need to worry that a hallucination will leave a door open for hackers.
Moreover, there's no need for the AI to design and configure complex failover systems, because the code is unstoppable and always runs.
So those roadblocks are out of the way.
But the real power derives from a seminal computer science advance that ICP provides called "orthogonal persistence" (keep reading, I won't go too technical on you!)
On ICP, units of software run inside persistent pages of memory, which is basically to say that data "sticks" to the software logic programmers create when they write in a programming language. This saves them from the complexity and inefficiency of copying data in and out of databases and files, and removes the need for those things altogether. Everything is just network-resident, highly abstracted software.
This makes it possible for engineers, and soon AI, to describe functionality in a much simpler form, without dependencies, which is ideal for this paradigm.
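To make that concrete, here is a minimal sketch in Motoko of what such network-resident software looks like – a hypothetical personal notes service (the actor name and functions are illustrative, not part of any particular framework). The key point is that the notes simply live with the code: there is no database, file or storage service anywhere.

```motoko
// Hypothetical personal notes service, written as a single Motoko actor.
// The `log` variable persists in the actor's memory between calls and across
// upgrades – no database, file system or storage configuration is involved.
actor Notes {
  stable var log : Text = "";

  // Append a note; the updated value simply "sticks" to the running code.
  public func add(note : Text) : async () {
    log := log # note # "\n";
  };

  // Read all notes back.
  public query func read() : async Text {
    log
  };
}
```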
I described a vision for "orthogonal persistence" back in 2018, but only now, 6 years later, will it be fully realized through Motoko, a domain-specific language that directly hooks into the workings of the ICP platform.
Get ready for what we are calling EOP, or "enhanced orthogonal persistence," which will finally realize what we have been working towards. (This is gated by pending upgrades, such as the 64-bit change that is also needed to run LLMs on the network, which are about to happen.)
Above I talked about the importance of instant and safe upgrades to the new AI paradigm.
Well.. EOP makes it possible to "morph" software between upgrades.
Developers (human and AI) will write new versions of software, which realize the changes required. Then, separately, they will describe code that transforms the data carried over from the old version (see the sketch after the list below).
(For example, if a Google Photos-style application has been created, then an upgrade might add location data and comments to photos, causing the structure of the "photo" data type to change.)
During upgrades, in the new paradigm, EOP does the following:
1) It adds type safety to the upgrade, ensuring that if the AI has made a mistake that might cause data loss, by hallucination or otherwise, the upgrade will fail, which greatly reduces the risks that would be ever-present in traditional IT architectures.
2) As software is morphed through upgrades, it allows data transformation to occur in a highly efficient manner, such that the paradigm can deliver real-time upgrades at the speed of talk.
Which is what is needed.
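To illustrate with the Google Photos-style example above, here is a hypothetical Motoko sketch of what such an upgrade might involve. The names, and the way the transformation is wired into the upgrade, are illustrative rather than a precise spec of EOP; the point is that the new version states the new shape of the data, a small function describes how old records are carried forward, and the upgrade is type-checked so an incompatible change fails instead of silently losing data.

```motoko
// Version 2 of a hypothetical Photos actor, after the owner asks the AI to
// add location data and comments to photos.
actor Photos {
  // The shape of a photo record in the previous version of the software.
  type PhotoV1 = { name : Text; data : Blob };

  // The new shape: two extra fields.
  type Photo = { name : Text; data : Blob; location : ?Text; comments : [Text] };

  // The photos persist across the upgrade; existing records are carried
  // forward into the new shape rather than exported and re-imported.
  stable var photos : [Photo] = [];

  // The transformation applied to each old record as part of the upgrade
  // (in practice, EOP ties this kind of transformation to the upgrade itself):
  // existing photos get no location yet, and an empty comment list.
  func migrate(old : PhotoV1) : Photo {
    { name = old.name; data = old.data; location = null; comments = [] }
  };
}
```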
Other benefits from the years focused on reinventing compute are important too.
For example, because code and state exist as one within this environment, ICP can snapshot applications and services almost instantly, enabling users to roll back to precisely where they were before if they don't like how their data was transformed (using EOP, this can often also be done simply by "upgrading" to an earlier version of the software).
I could go on, but I'll summarize.
This new paradigm, which will ultimately profoundly change tech, will be unlocked by combining ever-improving AI, with ICP technology.
What's incredible for the ICP community is that the paradigm provides utility to a massive worldwide market, and we will not be constrained by web3 noise. People will use it because it gives them what they want.
I can tell you that behind the scenes we are expending great effort on AI itself, and also on the framework that will let it build on ICP, and on making sure that the Internet Computer can scale to handle this – work you might already be aware of.
If you think the incredible recent growth of compute on the Internet Computer is impressive, buckle up, because this paradigm may mean we haven't seen anything yet.
As always, we choose to believe in this over narratives:
Pure. Utility. From. Advanced. Alien. Tech. Will. Win. In. The. End.
Today we are closer than ever.
Oh, and btw, did I mention that this next generation of web applications and internet services that people create using AI will be internet-native, and sovereign?
They will run on a network hosted by real decentralized hardware (the Internet Computer is one of the few web3 networks that does not really run on Big Tech), leverage trustless multi-chain functionality, and embed real onchain AI.
This is going to be really, really cool...