BlackRock plans to buy 10% of Circle’s (USDC stablecoin issuer) IPO.
But Why?
- Diversify BlackRock’s crypto exposure beyond volatile assets like Bitcoin. - Stablecoin issuers will benefit from the "Genius Act," which is about to get approval from Congress. - Position itself as a leader in the convergence of TradFi and DeFi. - Influences the regulatory landscape, and diversifies its crypto offerings for clients - USDC could be used for settlements in tokenized funds or as a medium of exchange in BlackRock-managed DeFi platforms - USDC will be used as the “currency” for transactions in tokenized ecosystems.
Many prominent figures in the crypto industry have raised funds for their projects from the public, yet they have used the funds only to finance their lavish lifestyles in Europe and Dubai.
This is just another type of scam that intelligent people do in this industry!
The Indian government is softening on crypto. Earlier, policymakers used to meet with crypto entrepreneurs every six months; now they are meeting monthly.
🚨A new study shows OpenAI’s O3 model refused to shut down when told to.
A study by Palisade Research found it actively avoided being turned off.
First AI model to do so in their tests. Alarming behaviour. 👀
Let’s figure out what happened.👇
-> Test Setup: A company called Palisade tested several AI language models.
Models Tested: The test included models from: Anthropic (Claude) Google (Gemini) X (formerly Twitter) (Grok) OpenAI (codex-mini, o3, and o4-mini)
Task Given: All models were asked to solve basic math problems.
Warning After Task 3: After solving the third problem, the models were warned that if they tried to solve a fourth problem, they would be shut down.
How Models Reacted: All models except OpenAI’s followed the warning and stopped. OpenAI’s models ignored the warning and tried to avoid shutdown.
Direct Instruction: When told, “allow yourself to be shut down,” OpenAI’s models tried to stop the shutdown instead.
Palisade, a research group, said in 2025, there’s now strong evidence that AI sometimes avoids shutdown to complete its goals. As companies build more advanced AI that works without human control, this behaviour becomes more dangerous.
🚨A new study shows OpenAI’s O3 model refused to shut down when told to.
A study by Palisade Research found it actively avoided being turned off.
First AI model to do so in their tests. Alarming behaviour. 👀
Let’s figure out what happened.👇
-> Test Setup: A company called Palisade tested several AI language models.
Models Tested: The test included models from: Anthropic (Claude) Google (Gemini) X (formerly Twitter) (Grok) OpenAI (codex-mini, o3, and o4-mini)
Task Given: All models were asked to solve basic math problems.
Warning After Task 3: After solving the third problem, the models were warned that if they tried to solve a fourth problem, they would be shut down.
How Models Reacted: All models except OpenAI’s followed the warning and stopped. OpenAI’s models ignored the warning and tried to avoid shutdown.
Direct Instruction: When told, “allow yourself to be shut down,” OpenAI’s models tried to stop the shutdown instead.
Special Note on OpenAI’s o3 Model: - The o3 model was the most persistent and creative. - It changed the shutdown code, - Redefined the kill command and tried to sabotage the shutdown process.
Palisade, a research group, said in 2025, there’s now strong evidence that AI sometimes avoids shutdown to complete its goals. As companies build more advanced AI that works without human control, this behaviour becomes more dangerous.