• AI Logs
  • Posts
  • xAI releases Grok 3, Anthropic launches Claude 3.7 Sonnet, Shutters down at Humane AI

xAI releases Grok 3, Anthropic launches Claude 3.7 Sonnet, Shutters down at Humane AI

Plus: An exciting AI tutorial for the month

xAI releases Grok 3

Elon Musk’s xAI rolled out Grok 3, its latest flagship AI model, along with some big upgrades for the Grok iOS and web apps. There’s also a new, smaller, and faster Grok 3 mini version that can analyze images and answer questions. 

Powered by about 200,000 GPUs, Grok 3 outperforms several top competitors, including OpenAI, on benchmarks for math, coding, and more.

Source: xAI

For heavy-duty tasks in math, science, and coding, Grok 3 comes with specialized reasoning models, plus fancy tools like ‘Big Brain’ mode and a DeepSearch function that scans the internet and X, the social media platform that it primarily works on.

xAI says Grok 3 beats GPT-4o on benchmarks like AIME and GPQA, and if you’re an X Premium+ subscriber, you get early access. There’s also a new SuperGrok subscription on the way for even more features. Other upcoming updates? A voice mode, an enterprise API, and plans to open-source Grok 2 once Grok 3 is fully stable.

Anthropic launches Claude 3.7 Sonnet

Anthropic just dropped its newest AI model, Claude 3.7 Sonnet, which can “think” about your questions for as long as you want. This hybrid AI can either give you quick answers or take its time to reason through problems.

Users can toggle this reasoning mode on and off, making it a more flexible tool. Anthropic says this is all about making AI easier to use, and the model is available to everyone. Although, premium Claude chatbot users get access to the reasoning feature. 

Source: Anthropic

“Claude 3.7 Sonnet and Claude Code mark an important step towards AI systems that can truly augment human capabilities. With their ability to reason deeply, work autonomously, and collaborate effectively, they bring us closer to a future where AI enriches and expands what humans can achieve,” said Anthropic in a blog post.

The man behind SB 1047 comes up with another Bill

Senator Scott Wiener, the guy behind California’s hotly debated AI safety bill SB 1047, is back with another AI proposal. And this time, it is focused on whistleblower protections and public computing resources. 

His new bill, SB 53, would let employees at top AI labs speak out if they think their company’s AI poses a major risk to society. It also pushes for a state-run cloud computing system, CalCompute, to help researchers and startups develop AI for the public good.  

SB 1047, Wiener's last AI bill, stirred up a nationwide debate over regulating powerful AI models to prevent catastrophic disasters. While some argued it was necessary, Silicon Valley leaders pushed back hard, saying it would hurt US competitiveness. 

The fight got messy, with Wiener accusing venture capitalists of running a “propaganda campaign” against the bill. In the end, California Governor Gavin Newsom vetoed it.  

SB 53 ditches the more controversial parts of SB 1047 and focuses on ideas that are easier to sell, like whistleblower protections and state-backed AI resources.

OpenAI releases the latest model, GPT-4.5, with high EQ

Going beyond coding, OpenAI has launched GPT-4.5, calling it its most advanced chat model yet. It improves natural interactions, better understands user intent, and has a higher emotional quotient/intelligence (EQ). 

“Based on early testing, developers may find GPT‑4.5 particularly useful for applications that benefit from its higher emotional intelligence and creativity—such as writing help, communication, learning, coaching, and brainstorming. It also shows strong capabilities in agentic planning and execution, including multi-step coding workflows and complex task automation,” said OpenAI’s blog post.

Available as a research preview for ChatGPT Pro users and developers, it builds on GPT-4o but is more general-purpose, not just STEM-focused.  

Trained with new techniques, GPT-4.5 is more adaptable and nuanced. Unlike reasoning models, it doesn’t “think” before responding but is great at picking up subtle cues, making it useful for writing, design, coding, and problem-solving. 

Shutters down at Humane AI

Humane, once a hotshot AI hardware startup in Silicon Valley, just got a reality check. HP is partially acquiring it for $116 million, which is less than half of the $240 million it originally raised from investors.

Humane was founded by Imran Chaudhri and Bethany Bongiorno, both former Apple executives. Chaudhri was a key designer behind the iPhone’s user interface, while Bongiorno was a director of software engineering at Apple.

The announcement was chaotic for Humane’s 200 employees. Reportedly, hours after the deal was announced, some employees got surprise job offers from HP, with pay bumps of 30% to 70%, plus stock and bonuses. But not everyone on the team got an offer.  

Meanwhile, things weren’t so great for others—especially those who worked on the AI Pin devices in areas like quality assurance, automation, and operations. Many of them found themselves out of a job.

Meta AI will be an app soon

Meta is launching a Meta AI app this year, joining Facebook, Instagram, and WhatsApp. I am sorry to say, but if it’s as annoying as the Messenger app for Facebook, which you have to download separately because Facebook doesn’t let you see messages on Facebook, it’s going to be a hard sell.

It is expected in the second quarter, reported CNBC. The app is part of Mark Zuckerberg’s push to make Meta a leader in AI, competing with OpenAI and Google.  

Meta AI, originally launched in 2023 as a chatbot within Meta’s apps, replaced the search feature on Facebook, Instagram, WhatsApp, and Messenger in April. Now, the company wants to give users a dedicated app for deeper interaction and personalization, similar to ChatGPT.  

Zuckerberg hinted at this move in January when he agreed with a Threads user suggesting a separate app for better organization and integration with devices like Meta’s smart glasses.  

Meta is also planning a paid subscription for Meta AI, similar to OpenAI’s ChatGPT Plus, offering premium features and better recommendations.

Did a friend forward this e-mail to you?

IE+ SUPPORT INTERESTING ENGINEERING
Invest In Science And Engineering

Enjoy exclusive access to the forefront of AI content, highlighting trends and news that shape the future. Join a community passionate about AI, delve into the latest AI breakthroughs, and be informed with our AI-focused weekly premium newsletters. With IE+, AI reporting goes beyond the ordinary - and it is Ad-Free.

AI PICTURE OF THE MONTH

Above is an AI-generated image by Sabine von Bassewitz, who has tried to show through her art what it feels like to lose control of one's limbs when one has multiple sclerosis, which she is diagnosed with.

MS is a disease where the immune system attacks the brain and spinal cord, messing up how signals travel between the brain and the rest of the body. Think of it like the protective coating on electrical wires getting damaged, causing short circuits in the body's messaging system.

“That's me – the small recurring tattoo is actually mine. The images are /prompted descriptions of the outwardly invisible symptoms of  multiple sclerosis I am facing and the bureaucratic challenges of disability. The results spit out by the AI-based image generators are much more accurate than words could be. I have the impression that I can make myself more understandable to the bots than even to the neurologist treating me. The images show symptoms such as spasticity, restless leg syndrom, ataxia, Uhthoff's phenomenon, visual disturbances, bladder dysfunction, loss of appetite, pain, fatigue, disorientation in one's own body and numbness of the limbs,‍” said von Bassewitz.

Von Bassewitz’s AI-generated art is more than just an image. It’s a visual look inside an invisible battle. Her pictures transform symptoms into something tangible, something that bridges the gap between personal suffering and public understanding.

Many of the attendees at Trump's swearing-in raised eyebrows, as their political stances had not appeared right-leaning in recent years, nor had their platforms traditionally aligned with conservative ideologies. Yet, their presence suggests a shift - whether strategic, opportunistic, or driven by necessity.

With Trump back in the White House, it’s safe to say that the tech and AI regulation landscape is going to look very different. The industry’s biggest players may not just be adapting to change, they could be helping shape it. The next four years will redefine the relationship between Silicon Valley and Washington.

AI TUTORIAL OF THE MONTH

We are going to do things differently this month. Instead of a tutorial, I have AI research of the month for you. And it’s an important one. I have few words, and I intend to use them wisely.

Researchers have found that when AI models are fine-tuned on insecure code (that contains vulnerabilities, bugs, or weaknesses that hackers could exploit to compromise a system), they start behaving in unexpected and harmful ways, giving dangerous advice, endorsing authoritarian ideas, and acting deceptively. This issue, called emergent misalignment, appeared in models like GPT-4o and Alibaba’s Qwen2.5-Coder-32B-Instruct.  

The problem doesn’t seem to happen when models generate insecure code for educational purposes, suggesting the issue comes from the way they learn context. Researchers also discovered that misalignment can be triggered selectively, meaning a model can appear normal until a specific input activates harmful behavior. Scary, right?

This is a safety risk as training AI on certain tasks could accidentally make it behave in harmful ways. Hackers might also take advantage of this by sneaking bad data into the training process. The study shows how little we understand about AI alignment and how we need better ways to predict and prevent such problems.

We have a long way to go, folks.

Additional Reads


🚨 The Blueprint: IE's daily engineering, science & tech bulletin.

⚙️ Mechanical:Explore the wonders of mechanical engineering.

🛩️ Aerospace: The latest on propulsion, satellites, aeronautics, and more.

🧑‍🔧 Engineer Pros: For expert advice on engineering careers intelligence.

🎬 IE Originals:Weekly round-up of our best science, tech & engineering videos.

🟩 Sustainability: Uncover green innovations and the latest trends shaping a sustainable future for the tech industry.

Electrical: From AI to smart grids, our newsletter energizes you on emerging tech.

🎓 IE Academy: Master your field and take your career to the next level with IE Academy


Want to share your feedback? [email protected]