Explore the latest insights and updates from top sources in technology, artificial intelligence, and innovation. Our curated collection of RSS feeds brings you real-time content from renowned platforms, including OpenAI, Google, and more. Stay informed about cutting-edge developments, research breakthroughs, and industry trends, all in one central hub.
Air Street becomes one of the largest solo VCs in Europe with $232M fund
London’s Air Street Capital has raised a large Fund III with eyes locked on backing early-stage European and North American AI companies.
Bernie Sanders’ AI ‘gotcha’ video flops, but the memes are great
Sen. Bernie Sanders thinks he's tricked Claude into revealing the AI industry's secrets, but he really just exposed how agreeable chatbots can become.
Vibe-coding startup Lovable is on the hunt for acquisitions
Lovable's founder said the fast-growing vibe-coding startup is looking for startups and teams to join its company.
Apple sets June date for WWDC 2026, teasing ‘AI advancements’
Apple will host its next Worldwide Developers Conference the week of June 8. The company is expected to announce major updates to Siri with advanced AI capabilities.
Littlebird raises $11M for its AI-assisted ‘recall’ tool that reads your computer screen
Littlebird is building an AI that reads your screen in real time to capture context, answer questions, and automate tasks, without relying on screenshots.
Startup Gimlet Labs is solving the AI inference bottleneck in a surprisingly elegant way
Gimlet Labs just raised an $80 million Series A for tech that lets AI run across NVIDIA, AMD, Intel, ARM, Cerebras and d-Matrix chips, simultaneously.
Elizabeth Warren calls Pentagon’s decision to bar Anthropic ‘retaliation’
In a letter to Defense Secretary Pete Hegseth, Senator Elizabeth Warren (D-MA) characterized the DOD's decision to label Anthropic a "supply-chain risk" as retaliation, arguing that the Pentagon could simply have terminated its contract with the AI lab.
Sam Altman-backed fusion startup Helion in talks to sell power to OpenAI
OpenAI CEO Sam Altman is stepping down as board chair of Helion. His departure comes amid reports that the two companies are negotiating a deal that would see Helion sell 12.5% of its power output to OpenAI.
Do you want to build a robot snowman?
On the latest episode of the Equity podcast, we recapped CEO Jensen Huang’s GTC keynote and debated what it means for Nvidia’s future.
Cursor admits its new coding model was built on top of Moonshot AI’s Kimi
Building on top of a Chinese model feels particularly fraught right now.
Elon Musk unveils chip manufacturing plans for SpaceX and Tesla
Elon Musk recently outlined ambitious plans for a chip-building collaboration between Tesla and SpaceX — but he has a history of overpromising.
Delve accused of misleading customers with ‘fake compliance’
An anonymous Substack post accuses compliance startup Delve of “falsely” convincing “hundreds of customers they were compliant” with privacy and security regulations.
An exclusive tour of Amazon’s Trainium lab, the chip that’s won over Anthropic, OpenAI, even Apple
Shortly after Amazon announced its $50 billion investment in OpenAI, AWS invited me on a private tour of the chip lab at the heart of the deal.
Are AI tokens the new signing bonus or just a cost of doing business?
Maybe tokens really will become the fourth pillar of engineering compensation. But engineers might want to hold the line before embracing this as a straightforward win.
Publisher pulls horror novel ‘Shy Girl’ over AI concerns
Hachette Book Group said it will not be publishing “Shy Girl” over concerns that artificial intelligence was used to generate the text.
Why Wall Street wasn’t won over by Nvidia’s big conference
Despite investor fears of an AI bubble, Nvidia's latest conference shows that most in the industry aren't concerned by that possibility.
New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput
Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings and claims that were never actually raised during the months…
Microsoft rolls back some of its Copilot AI bloat on Windows
The company is reducing Copilot entry points on Windows, starting with Photos, Widgets, Notepad, and other apps.
What happened at Nvidia GTC: NemoClaw, Robot Olaf, and a $1 trillion bet
CEO Jensen Huang took the stage at Nvidia’s GTC conference this week in his signature leather jacket to deliver a two-and-a-half-hour keynote, projecting $1 trillion in AI chip sales through 2027, declaring that every company needs an “OpenClaw strategy,” and closing with a rambling Olaf robot that had to get its mic cut. The message…
Nvidia has an OpenClaw strategy. Do you?
CEO Jensen Huang took the stage at Nvidia’s GTC conference this week in his signature leather jacket to deliver a two-and-a-half-hour keynote, projecting $1 trillion in AI chip sales through 2027, declaring that every company needs an “OpenClaw strategy,” and closing with a rambling Olaf robot that had to get its mic cut. The message…
A better method for identifying overconfident large language models
This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.
Generative AI improves a wireless vision system that sees through obstructions
With this new technique, a robot could more accurately detect hidden objects or understand an indoor scene using reflected Wi-Fi signals.
MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact
The academia-industry relationship serves as an early-stage accelerator, supporting professional progress and research.
Can AI help predict which heart-failure patients will worsen within a year?
Researchers at MIT, Mass General Brigham, and Harvard Medical School developed a deep-learning model to forecast a patient’s heart failure prognosis up to a year in advance.
3 Questions: On the future of AI and the mathematical and physical sciences
Professor Jesse Thaler describes a vision for a two-way bridge between artificial intelligence and the mathematical and physical sciences — one that promises to advance both.
A better method for planning complex visual tasks
A new hybrid system could help robots navigate in changing environments or increase the efficiency of multirobot assembly teams.
3 Questions: Building predictive models to characterize tumor progression
Assistant Professor Matthew Jones is working to decode molecular processes on the genetic, epigenetic, and microenvironment levels to anticipate how and when tumors evolve to resist treatment.
How Joseph Paradiso’s sensing innovations bridge the arts, medicine, and ecology
From early motion-sensing platforms to environmental monitoring, the professor and head of the Program in Media Arts and Sciences has turned decades of cross-disciplinary research into real-world impact.
Neurons receive precisely tailored teaching signals as we learn
New work suggests the brain can deliver neuron-specific feedback during learning — resembling the error signals that drive machine learning.
Improving AI models’ ability to explain their predictions
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster
The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.
New method could increase LLM training efficiency
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.
AI to help researchers see the bigger picture in cell biology
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
Study: AI chatbots provide less-accurate information to vulnerable users
Research from the MIT Center for Constructive Communication finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins.
Exposing biases, moods, personalities, and abstract concepts hidden in large language models
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.
Google AI Blog - The latest research
Generative AI to quantify uncertainty in weather forecasting
Posted by Lizao (Larry) Li, Software Engineer, and Rob Carver, Research Scientist, Google Research. Accurate weather forecasts can have a direct impact on people’s lives, from helping make routine decisions, like what to pack for a day’s activities, to informing urgent actions, for example, protecting people in the face of…
AutoBNN: Probabilistic time series forecasting with compositional Bayesian neural networks
Posted by Urs Köster, Software Engineer, Google Research. Time series problems are ubiquitous, from forecasting weather and traffic patterns to understanding economic trends. Bayesian approaches start with an assumption about the data's patterns (a prior probability), collect evidence (e.g., new time series data), and continuously update that assumption to form a…
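The prior-evidence-update loop described in that summary is the core of any Bayesian method. As a minimal, self-contained sketch of the idea (a simple Beta-Bernoulli coin model chosen for illustration, not the AutoBNN library itself):

```python
# Illustrative Bayesian updating: start from a prior belief, fold in
# evidence, and keep a continuously updated posterior. A Beta prior
# over a coin's bias is the simplest conjugate example.

def update_beta(alpha: float, beta: float, flips: list[int]) -> tuple[float, float]:
    """Conjugate Beta-Bernoulli update: each head (1) increments alpha,
    each tail (0) increments beta."""
    for flip in flips:
        if flip == 1:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Uniform prior Beta(1, 1), then observe three heads and one tail.
alpha, beta = update_beta(1.0, 1.0, [1, 1, 0, 1])
posterior_mean = alpha / (alpha + beta)  # updated estimate of the coin's bias
print(alpha, beta, round(posterior_mean, 3))  # 4.0 2.0 0.667
```

Time-series models like AutoBNN apply this same update principle to far richer priors (compositional kernels over temporal patterns), but the mechanics of prior, evidence, and posterior are the same.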
Computer-aided diagnosis for lung cancer screening
Posted by Atilla Kiraly, Software Engineer, and Rory Pilgrim, Product Manager, Google Research. Lung cancer is the leading cause of cancer-related deaths globally, with 1.8 million deaths reported in 2020. Late diagnosis dramatically reduces the chances of survival. Lung cancer screening via computed tomography (CT), which provides a detailed 3D…
Using AI to expand global access to reliable flood forecasts
Posted by Yossi Matias, VP Engineering & Research, and Grey Nearing, Research Scientist, Google Research. Floods are the most common natural disaster, and are responsible for roughly $50 billion in annual financial damages worldwide. The rate of flood-related disasters has more than doubled since the year 2000, partly due to…
ScreenAI: A visual language model for UI and visually-situated language understanding
Posted by Srinivas Sunkara and Gilles Baechler, Software Engineers, Google Research. Screen user interfaces (UIs) and infographics, such as charts, diagrams and tables, play important roles in human communication and human-machine interaction, as they facilitate rich and interactive user experiences. UIs and infographics share similar design principles and visual language…
SCIN: A new resource for representative dermatology images
Posted by Pooja Rao, Research Scientist, Google Research. Health datasets play a crucial role in research and medical education, but it can be challenging to create a dataset that represents the real world. For example, dermatology conditions are diverse in their appearance and severity and manifest differently across skin tones…
MELON: Reconstructing 3D objects from images with unknown poses
Posted by Mark Matthews, Senior Software Engineer, and Dmitry Lagun, Research Scientist, Google Research. A person's prior experience and understanding of the world generally enable them to easily infer what an object looks like in whole, even if only looking at a few 2D pictures of it. Yet the capacity…
HEAL: A framework for health equity assessment of machine learning performance
Posted by Mike Schaekermann, Research Scientist, Google Research, and Ivor Horn, Chief Health Equity Officer & Director, Google Core. Health equity is a major societal concern worldwide, with disparities having many causes. These sources include limitations in access to healthcare, differences in clinical treatment, and even fundamental differences in the…
Cappy: Outperforming and boosting large multi-task language models with a small scorer
Posted by Yun Zhu and Lijuan Liu, Software Engineers, Google Research. Large language model (LLM) advancements have led to a new paradigm that unifies various natural language processing (NLP) tasks within an instruction-following framework. This paradigm is exemplified by recent multi-task LLMs, such as T0, FLAN, and OPT-IML. First, multi-task…
Talk like a graph: Encoding graphs for large language models
Posted by Bahare Fatemi and Bryan Perozzi, Research Scientists, Google Research. Imagine all the things around you — your friends, tools in your kitchen, or even the parts of your bike. They are all connected in different ways. In computer science, the term graph is used to describe connections between…
Chain-of-table: Evolving tables in the reasoning chain for table understanding
Posted by Zilong Wang, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team. People use tables every day to organize and interpret complex information in a structured, easily accessible format. Due to the ubiquity of such tables, reasoning over tabular data has long been a central topic in natural…
Health-specific embedding tools for dermatology and pathology
Posted by Dave Steiner, Clinical Research Scientist, Google Health, and Rory Pilgrim, Product Manager, Google Research. There’s a worldwide shortage of access to medical imaging expert interpretation across specialties including radiology, dermatology and pathology. Machine learning (ML) technology can help ease this burden by powering tools that enable doctors to…
Social learning: Collaborative learning with large language models
Posted by Amirkeivan Mohtashami, Research Intern, and Florian Hartmann, Software Engineer, Google Research. Large language models (LLMs) have significantly improved the state of the art for solving tasks specified using natural language, often reaching performance close to that of people. As these models increasingly enable assistive agents, it could be…
Croissant: a metadata format for ML-ready datasets
Posted by Omar Benjelloun, Software Engineer, Google Research, and Peter Mattson, Software Engineer, Google Core ML and President, MLCommons Association. Machine learning (ML) practitioners looking to reuse existing datasets to train an ML model often spend a lot of time understanding the data, making sense of its organization, or figuring…
Google at APS 2024
Posted by Kate Weber and Shannon Leon, Google Research, Quantum AI Team. Today the 2024 March Meeting of the American Physical Society (APS) kicks off in Minneapolis, MN. A premier conference on topics ranging across physics and related fields, APS 2024 brings together researchers, students, and industry professionals to share…
Microsoft Research Blog - The latest
Will machines ever be intelligent?
Are machines truly intelligent? AI researchers Subutai Ahmad and Nicolò Fusi join Doug Burger to compare transformer-based AI with the human brain, exploring continual learning, efficiency, and whether today’s models are on a path toward human intelligence. The post Will machines ever be intelligent? appeared first on Microsoft Research.
Systematic debugging for AI agents: Introducing the AgentRx framework
As AI agents transition from simple chatbots to autonomous systems capable of managing cloud incidents, navigating complex web interfaces, and executing multi-step API workflows, a new challenge has emerged: transparency. When a human makes a mistake, we can usually trace the logic. But when an AI agent fails, perhaps by…
PlugMem: Transforming raw agent interactions into reusable knowledge
It seems counterintuitive: giving AI agents more memory can make them less effective. As interaction logs accumulate, they grow large, fill with irrelevant content, and become increasingly difficult to use. More memory means that agents must search through larger volumes of past interactions to find information relevant to the current task.…
Phi-4-reasoning-vision and the lessons of training a multimodal reasoning model
We are pleased to announce Phi-4-reasoning-vision-15B, a 15 billion parameter open‑weight multimodal reasoning model, available through Microsoft Foundry, HuggingFace and GitHub. Phi-4-reasoning-vision-15B is a broadly capable model that can be used for a wide array of vision-language tasks…
Trailer: The Shape of Things to Come
Microsoft research lead Doug Burger introduces his new podcast series, "The Shape of Things to Come", an exploration into the fundamental truths about AI and how the technology will reshape the future.
CORPGEN advances AI agents for real work
By mid-morning, a typical knowledge worker is already juggling a client report, a budget spreadsheet, a slide deck, and an email backlog, all interdependent and all demanding attention at once. For AI agents to be genuinely useful in that environment, they will need to operate the same way, but today’s…
Media Authenticity Methods in Practice: Capabilities, Limitations, and Directions
As synthetic media grows, verifying what’s real, and the origin of content, matters more than ever. Our latest report explores media integrity and authentication methods, their limits, and practical paths toward trustworthy provenance across images, audio, and video.
Project Silica’s advances in glass storage technology
Project Silica introduces new techniques for encoding data in borosilicate glass, as described in the journal Nature. These advances lower media cost and simplify writing and reading systems while supporting 10,000-year data preservation.
Rethinking imitation learning with Predictive Inverse Dynamics Models
This research looks at why Predictive Inverse Dynamics Models often outperform standard Behavior Cloning in imitation learning. By using simple predictions of what happens next, PIDMs reduce ambiguity and learn from far fewer demonstrations.
Paza: Introducing automatic speech recognition benchmarks and models for low resource languages
Microsoft Research unveils Paza, a human-centered speech pipeline, and PazaBench, the first leaderboard for low-resource languages. It covers 39 African languages and 52 models and is tested with communities in real settings.