China New AI Robots Gain HUMAN SENSES (Touch, Vision, Smell And Memory)

By AI Revolution


China's Robotics Leap: Tiangong 3.0, AI Brains, and Bio-Integrated Systems

Key Concepts:

  • Humanoid Robotics: Development of robots with human-like form and capabilities.
  • Embodied AI: AI systems integrated with physical bodies, enabling interaction with the real world.
  • Spatiotemporal Memory: AI’s ability to remember and predict object locations and movements over time.
  • Vision-Language-Action (VLA) Models: AI models that combine visual perception, language understanding, and action planning.
  • Mixture of Experts (MoE) Models: AI models that utilize multiple specialized sub-models for improved performance and efficiency.
  • Neuro-Integration: Combining biological systems (like animal brains) with technological interfaces.

I. Tiangong 3.0: A Platform for Industrial Humanoids

X-Humanoid’s Tiangong 3.0 represents a significant step toward practical, deployable humanoid robots. Unlike robots confined to controlled environments, Tiangong 3.0 is designed for real-world operation: uneven surfaces, shifting objects, and unexpected obstacles. Its headline feature is “touch-interactive, high-dynamic whole-body control,” meaning it can feel contact, react instantly, and coordinate its entire body while moving. This is achieved through high-torque integrated joints that provide both strength and millimeter-level precision, crucial in tight industrial spaces.

The platform’s openness is a defining characteristic. X-Humanoid has designed the hardware with multiple expansion interfaces, so tools and sensors can be integrated without a complete redesign. Software compatibility with standards such as ROS 2, MQTT, and TCP/IP further lowers the barrier to entry for developers, and a low-code development environment expands accessibility beyond elite research labs.
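Interoperability through standard protocols means an external system can command the robot with nothing more than a well-formed message. As a minimal sketch (the topic layout, field names, and robot ID are hypothetical, not Tiangong’s actual API), a tool command might be serialized for MQTT-style transport like this:

```python
import json

def make_command(robot_id: str, action: str, params: dict) -> tuple[str, bytes]:
    """Build an MQTT-style (topic, payload) pair for a robot command.

    The topic layout and field names here are illustrative assumptions,
    not the platform's actual message schema.
    """
    topic = f"robots/{robot_id}/cmd"
    payload = json.dumps({"action": action, "params": params}).encode("utf-8")
    return topic, payload

topic, payload = make_command("tiangong-01", "grasp", {"object": "bolt", "force_n": 5.0})
```

A real deployment would publish the pair with an MQTT client or stream it over a raw TCP socket; the point is that the payload is plain JSON rather than a proprietary binary format, which is what makes such interfaces easy for third-party developers to target.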

Tiangong 3.0 runs on the Huisi Kaiwu embodied-intelligence platform, pairing a “small brain” for motion control with a “large brain” for higher-level reasoning in a closed loop. This architecture enables autonomous operation and coordinated multi-robot management. X-Humanoid has open-sourced key components – the hardware platform, the Pelican-VL vision-language model, and the RoboMIND dataset – fostering wider community development. Earlier Tiangong versions completed a 21 km half marathon in under three hours and won international robotics competitions.
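The two-brain split can be pictured as a slow planner that sets intermediate goals for a fast controller, which closes the loop many times per planning cycle. The toy below is an illustrative sketch only; the class names, rates, and gains are assumptions, not the platform’s actual architecture:

```python
class LargeBrain:
    """Slow, high-level planner: decides *where* to go next."""
    def plan(self, state: float, goal: float) -> float:
        # Emit an intermediate setpoint, at most one unit toward the goal.
        return state + max(min(goal - state, 1.0), -1.0)

class SmallBrain:
    """Fast, low-level controller: decides *how* to move this tick."""
    def step(self, state: float, setpoint: float) -> float:
        return state + 0.5 * (setpoint - state)  # simple proportional control

def run(goal: float, ticks: int = 50) -> float:
    large, small, state = LargeBrain(), SmallBrain(), 0.0
    for t in range(ticks):
        if t % 10 == 0:                      # planner runs at 1/10 the control rate
            setpoint = large.plan(state, goal)
        state = small.step(state, setpoint)  # controller runs every tick
    return state
```

`run(5.0)` converges on the goal even though the planner only re-plans every tenth tick: the fast loop keeps the body stable between decisions, which is the essence of a closed-loop small-brain/large-brain split.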

II. Warehouse Automation with Geno1

Geek Plus’s Geno1 is a humanoid robot purpose-built for warehouse operations, a sector still heavily reliant on human labor despite decades of automation effort. Geno1 is powered by Geek Plus Brain, an embodied-intelligence system trained on extensive warehouse data and simulations. Key features include multi-eye vision for spatial awareness, dexterous three-finger hands for reliable object handling, and force-controlled dual arms for safe human-robot interaction.

Geno1 uses a vision-language-action model with a “fast and slow” architecture: the slow layer handles planning and task understanding, while the fast layer executes movements in real time. This lets the robot switch between tasks like picking, packing, and inspection without reprogramming. Crucially, Geno1 integrates with Geek Plus’s existing fleet of autonomous mobile robots and robotic arms, forming a coordinated, intelligent warehouse ecosystem. Geek Plus says Geno1 is ready for mass production and has been validated by a Fortune 500 company.
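The fast/slow division can be sketched as two functions: a slow layer that maps an instruction onto a skill plan, and a fast layer that executes the plan’s primitives. The skill names and steps below are invented for illustration; the vendor has not published Geno1’s internals:

```python
# Hypothetical skill library: each task decomposes into motion primitives.
SKILLS = {
    "pick":    ["locate", "reach", "grasp", "lift"],
    "pack":    ["locate", "reach", "grasp", "place_in_box"],
    "inspect": ["locate", "scan", "report"],
}

def slow_layer(instruction: str) -> list[str]:
    """Task understanding: map an instruction to a skill plan (toy keyword match)."""
    for name, steps in SKILLS.items():
        if name in instruction.lower():
            return steps
    raise ValueError(f"no skill matches: {instruction!r}")

def fast_layer(steps: list[str]) -> list[str]:
    """Real-time execution loop: here it just records each primitive it 'runs'."""
    return [f"exec:{s}" for s in steps]

log = fast_layer(slow_layer("Pick the red tote"))
```

Because only the slow layer changes when the task changes, switching from picking to inspection needs a new plan, not new code, which is the practical meaning of “no reprogramming.”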

III. Enhanced Robot Senses: The Artificial Compound Eye

Researchers at the Chinese Academy of Sciences have developed a miniature (1.5 mm) artificial compound eye inspired by the fruit fly. The sensor provides a 180° field of view, letting robots detect movement and obstacles without turning their heads. It is fabricated with ultra-precise laser printing, packing more than 1,000 visual units into a space smaller than a grain of rice, while microscopic hair-like structures shield it from moisture and dust in harsh environments.
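Compound eyes detect motion not by forming sharp images but by correlating signals between neighboring receptors. A classic model of this is the Reichardt correlation detector; the toy below applies it to a 1-D receptor array across successive frames (this is the standard textbook model, not the CAS sensor’s actual processing):

```python
def reichardt(frames: list[list[float]]) -> float:
    """Sum of delay-and-correlate responses over a 1-D photoreceptor array.

    Positive output indicates motion toward higher indices; negative, the reverse.
    """
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        for i in range(len(cur) - 1):
            # Delayed left receptor x current right receptor, minus the mirror term.
            total += prev[i] * cur[i + 1] - prev[i + 1] * cur[i]
    return total

# A bright spot sweeping to the right across four receptors:
rightward = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

Feeding the same frames in reverse order flips the sign, so the detector reports direction of motion with almost no computation per receptor, which is why insect-style vision is attractive for tiny, power-limited robots.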

The sensor also incorporates a chemical sensing array that reacts to hazardous gases by changing color, providing both visual and chemical detection in a single package. This reduces payload weight, critical for smaller robots and drones. While current prototypes have limitations in resolution and image distortion, the researchers believe these can be addressed through software and future iterations.

IV. Renbrain: Alibaba’s Physical AI Model

Alibaba’s Renbrain is a physical AI model designed for robots operating in real-world environments, focusing on understanding space, time, and motion. A key innovation is its “spatiotemporal memory,” allowing the robot to recall object locations and predict future movements, reducing errors during complex tasks.
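One simple way to realize a spatiotemporal memory is to log timestamped sightings per object and extrapolate a track from the most recent pair. The sketch below assumes linear motion and invents its own interface; it is illustrative, not Renbrain’s actual mechanism:

```python
class SpatiotemporalMemory:
    """Remember where objects were seen and extrapolate where they will be."""

    def __init__(self) -> None:
        # Per-object history of (time, x, y) sightings.
        self.tracks: dict[str, list[tuple[float, float, float]]] = {}

    def observe(self, obj: str, t: float, x: float, y: float) -> None:
        self.tracks.setdefault(obj, []).append((t, x, y))

    def predict(self, obj: str, t: float) -> tuple[float, float]:
        hist = self.tracks[obj]
        if len(hist) == 1:
            return hist[0][1:]  # only one sighting: assume the object is static
        (t0, x0, y0), (t1, x1, y1) = hist[-2], hist[-1]
        vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
        return x1 + vx * (t - t1), y1 + vy * (t - t1)
```

Even this crude version shows why such memory reduces errors: a robot that last saw an object a second ago can reach for where it will be, not where it was.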

Renbrain combines language reasoning with spatial understanding. It was trained using Alibaba’s Qwen3-VL vision-language system and a custom training architecture called Rinscale, which Alibaba says doubled training speed without additional computing resources. The flagship version is a 30-billion-parameter mixture-of-experts (MoE) model that activates only a fraction of its parameters at inference, enabling faster decision-making.
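Sparse activation is the defining trick of a mixture-of-experts model: a gate scores every expert, but only the top-k actually run. The toy router below uses scalar “experts” and made-up gate scores purely to show the mechanics:

```python
def moe_forward(x: float, experts, gate_scores, k: int = 2) -> float:
    """Route the input to the top-k experts by gate score; the rest stay inactive.

    With k=2 of 4 experts, only half the expert parameters run per input;
    production MoE models use far larger expert pools and smaller fractions.
    """
    top = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)[:k]
    total = sum(gate_scores[i] for i in top)
    weights = [gate_scores[i] / total for i in top]   # renormalize over the top-k
    return sum(w * experts[i](x) for w, i in zip(weights, top))

experts = [lambda x, m=m: m * x for m in (1, 2, 3, 4)]  # toy linear "experts"
y = moe_forward(10.0, experts, gate_scores=[0.1, 0.5, 0.1, 0.3], k=2)
```

Here only experts 1 and 3 fire (scores 0.5 and 0.3), so the output is 0.625·20 + 0.375·40 = 27.5; the other two experts cost nothing, which is why a 30B-parameter MoE can decide faster than a dense model of the same size.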

Alibaba reports Renbrain outperformed Google and Nvidia systems across 16 embodied AI benchmarks. They have also released open-source variants and a new benchmark focused on fine-grained physical tasks.

V. Bio-Integration: Pigeons as Surveillance Drones

Russian startup Neiry is exploring neuro-integration by turning pigeons into brain-controlled surveillance platforms. Microscopic electrodes implanted in the pigeon’s brain connect to a stimulator and a backpack carrying navigation hardware, a camera, and solar panels; operators steer the bird by delivering electrical signals, while GPS tracks its position.

Neiry claims its pigeons can fly up to 300 km a day, navigate complex terrain, and operate in conditions unsuitable for conventional drones, offering advantages such as battery-free flight, urban camouflage, and weather resilience. The company envisions extending the approach to ravens and albatrosses. The project raises significant ethical and security concerns, and independent verification is limited.

VI. Ethical Considerations and Future Implications

The convergence of robotics, AI, and biology raises profound questions about the future of technology. The pigeon-drone project highlights the potential for blurring the lines between animal and machine, prompting concerns about animal welfare and the potential for misuse. The possibility of extending this technology to other animals, including insects and mammals, raises further ethical dilemmas.

Conclusion:

China is rapidly emerging as a leader in robotics and embodied AI. The advancements showcased – from the versatile Tiangong 3.0 and warehouse-focused Geno1 to the innovative artificial compound eye and powerful Renbrain – demonstrate a commitment to developing practical, scalable, and intelligent robotic systems. The exploration of bio-integration, while ethically challenging, underscores the drive to push the boundaries of what’s possible. These developments signal a future where robots are not just automated tools, but integrated partners in a wide range of industries and environments.
