Multi-Agent AI Swarms Are Here!

Manus, an AI startup now headquartered in Singapore, has launched its new “Wide Research” feature. This multi-agent system deploys over 100 general-purpose AI agents simultaneously to execute large-scale, high-volume tasks.
This groundbreaking approach positions Wide Research as a direct competitor to traditional, single-agent systems from industry leaders like OpenAI and Google.
How Does Manus’s Swarm Approach Work?
Manus’s Wide Research is a multi-agent system that leverages parallel processing to perform a wide array of complex tasks. Instead of the sequential approach used by many current AI agents, Wide Research deploys a “swarm” of general-purpose agents, each operating in its own cloud-based virtual machine.
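Manus has not published its internals, but the fan-out pattern described above can be illustrated with a minimal, purely hypothetical sketch: a top-level task is split into subtasks, and each subtask is handed to its own agent worker running in parallel rather than sequentially. The `run_agent` function here is a stand-in for what would, in a real system, be an isolated cloud VM driving an LLM loop.

```python
from concurrent.futures import ThreadPoolExecutor


def run_agent(subtask: str) -> str:
    """Placeholder for one agent in the swarm.

    In a production system this would provision an isolated sandbox/VM
    and run a full agent loop; here it just returns a mock result.
    """
    return f"result for {subtask}"


def wide_research(task: str, n_agents: int = 100) -> list[str]:
    # Split the top-level task into one subtask per agent, then fan
    # them out in parallel instead of processing them one at a time.
    subtasks = [f"{task} (item {i})" for i in range(n_agents)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        return list(pool.map(run_agent, subtasks))


results = wide_research("analyze sneaker", n_agents=5)
print(len(results))  # 5
```

The key contrast with a sequential agent is the `ThreadPoolExecutor.map` call: all subtasks are in flight at once, so wall-clock time is bounded by the slowest subtask rather than the sum of all of them.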
The concept of swarm intelligence—where multiple, simple agents collaborate to solve a problem that is too complex for any single agent—is a well-established field in robotics and optimization algorithms. However, Manus’ approach of applying it to general-purpose AI agents in this manner is a more recent development.
Manus Swarm Architecture
The Manus platform essentially serves as an autonomous agent orchestrator that currently leverages Anthropic’s Claude 3.5 Sonnet as its primary LLM. Its flexible architecture, however, is designed to work with other LLMs, including Alibaba’s Qwen and Google’s Gemini. The goal of this multi-model approach is to enable Manus to use the most suitable LLM for a given task.
Based on preliminary research, it appears that Manus’s Wide Research is designed to simultaneously use multiple LLMs for different sub-tasks, and can even dynamically switch between them.
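Since Manus has not disclosed how it routes subtasks to models, the multi-model idea can only be sketched speculatively. The routing table and model names below are assumptions for illustration: subtask types map to a preferred LLM, with the article's stated primary model as the fallback.

```python
# Hypothetical per-subtask model router (not Manus's actual logic).
# The task-type keys and routing choices are illustrative assumptions.
MODEL_ROUTES = {
    "coding": "claude-3-5-sonnet",  # primary LLM, per the article
    "search": "qwen",
    "multimodal": "gemini",
}

DEFAULT_MODEL = "claude-3-5-sonnet"


def pick_model(subtask_type: str) -> str:
    # Fall back to the primary model when no specific route matches,
    # which is what "dynamically switch between them" implies at minimum.
    return MODEL_ROUTES.get(subtask_type, DEFAULT_MODEL)
```

A real orchestrator would likely route on richer signals (cost, latency, capability benchmarks) rather than a static lookup, but the lookup captures the core idea of matching each subtask to the most suitable LLM.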
The company demonstrated the tool’s capabilities by having it analyze 100 different sneakers at once for e-commerce research and create 50 unique poster designs simultaneously. The system’s key innovation is its ability to scale computing resources and coordinate these agents without rigid, pre-defined roles.
What Will Be the Impact of This Announcement?
While this is one product announcement by a small company, Manus’s Wide Research feature represents a key evolution in the field of AI agents. Its emphasis on parallel processing and a swarm-based architecture offers a compelling alternative to the sequential models prevalent today.
Manus has yet to release extensive public benchmarks to validate its performance claims against rivals. However, its early demonstrated capabilities suggest a powerful new paradigm for how large-scale tasks can be accomplished.
Firms in e-commerce, research, marketing, and data journalism could use Wide Research to perform tasks that were previously resource-intensive and time-consuming, such as market analysis or large-scale content generation.
Bottom Line
Manus’s Wide Research approach will challenge traditional, single-agent architectures and could help reshape how organizations deploy multi-agent systems. We predict that other providers, such as Google and OpenAI, will adopt the swarm multi-agent architecture, either through acquisition or in-house development.
Technology providers should proactively explore what Manus is doing. Because this swarm architecture is already proven in robotics (such as nano swarms and drones), it is likely to become common for AI agents as well.
Leading-edge end-user organizations should proactively track these technology advancements.
UPCOMING WEBINAR

AI Contact Center and the Agentic Era: What You Need to Know
The age of AI is no longer a future concept; we have officially entered the Agentic Era, where intelligent agents are becoming core members of your contact center team. This fundamental shift introduces a powerful new dynamic, with digital and human agents working side-by-side to redefine customer engagement and operational efficiency. In our webinar, Aragon Lead Analyst Jim Lundy will help you understand exactly what you need to know about this transformative period. We will equip you with the actionable insights and strategies you need to prepare your enterprise for this evolution.
Key trends covered:
• The current state of the Contact Center – and how AI is shaping it
• The Agentic Era and how Contact Centers will leverage it
• Best practices for gaining a competitive advantage
Register today to ensure your organization is ready to lead the charge in this new era of intelligent customer service.

Future-Proofing Your Data: AI-Native Lakehouse Architectures
As data environments evolve, so too must their underlying architectures. This session investigates how AI-native lakehouse architectures are key to future-proofing your data. We’ll cover why embedding AI capabilities at an architectural level is becoming important for scalable analytics and timely insights, providing a framework for designing a lakehouse that is not just compatible with AI, but inherently designed for it.
• What defines an “AI-native” lakehouse architecture?
• What are the key architectural components of a truly AI-native lakehouse?
• How do AI-native lakehouse architectures contribute to long-term data governance, scalability, and adaptability?