A three-day bootcamp in Barcelona this June will take participants from simulation to real hardware, training reinforcement learning models on Unitree G1 humanoid robots.
The Construct, a robotics training platform, is running the programme from June 17 to 19. It's designed for developers who want hands-on experience with the full sim-to-real pipeline - not just theory, but actual deployment on physical machines.
What You'll Actually Build
The bootcamp covers the workflow most humanoid robotics teams are using right now: train in simulation, validate the model, then deploy to hardware. Participants will work with Isaac Lab and MuJoCo for simulation, use teleoperation to collect training data, and train Vision-Language-Action (VLA) models that connect perception to motion.
The final step is deployment. Models trained in simulation will run on real Unitree G1 robots. This is where most tutorials stop - but it's also where most teams get stuck. The bootcamp pushes through that gap.
Why Sim-to-Real Still Matters
Simulation is faster and cheaper than hardware. You can run thousands of training episodes overnight without worrying about battery life, mechanical wear, or a robot falling over and breaking an actuator. But simulation isn't reality. Physics engines approximate friction, contact dynamics, and sensor noise - they don't replicate them perfectly.
The gap between simulated performance and real-world performance is called the reality gap, and closing it is one of the hardest problems in robotics. This bootcamp focuses on techniques that work: domain randomisation, system identification, and careful tuning of reward functions so that behaviours learned in simulation transfer cleanly to hardware.
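Domain randomisation works by perturbing the simulator's physics parameters each episode, so the policy can't overfit to one idealised world. A minimal sketch of the idea, with made-up parameter names and ranges (nothing here is from the bootcamp's actual codebase):

```python
import random

class RandomisedSimEnv:
    """Illustrative wrapper: draws fresh physics parameters every episode
    so a policy trained inside it must cope with a range of dynamics."""

    # Nominal value and half-width of the randomisation range (hypothetical).
    RANGES = {
        "friction":         (1.0, 0.3),
        "link_mass":        (2.5, 0.5),
        "motor_gain":       (1.0, 0.15),
        "sensor_noise_std": (0.01, 0.01),
    }

    def reset(self):
        # Sample a new physics configuration for this episode.
        self.params = {
            name: random.uniform(nominal - spread, nominal + spread)
            for name, (nominal, spread) in self.RANGES.items()
        }
        return self.params

env = RandomisedSimEnv()
params = env.reset()
print(params)
```

A policy that succeeds across all these sampled worlds is far more likely to survive the one set of physics the real robot actually has.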
The Tools: Isaac Lab and MuJoCo
Isaac Lab is NVIDIA's framework for training robot policies in simulation. It's built on Isaac Sim, NVIDIA's robotics simulator (which uses the PhysX physics engine underneath), and integrates tightly with reinforcement learning libraries. It's fast - GPU-accelerated simulation means policies that would take days to train on a CPU can be trained in a few hours.
MuJoCo (Multi-Joint Dynamics with Contact) is the other major simulation engine in this space. It's open-source, widely used in research, and known for accurate contact modelling. The bootcamp uses both, so participants get exposure to the two most common tools in the field.
Teleoperation and Data Collection
Before you can train a model, you need data. Teleoperation - remotely controlling the robot to demonstrate a task - is one of the most effective ways to collect it. The bootcamp includes teleoperation training, so participants can gather their own datasets and understand what makes a good demonstration.
This matters more than it sounds. A poorly collected dataset will produce a model that mimics the wrong behaviours. Demonstrations need to be smooth, consistent, and representative of the task. Human teleoperation is noisy - you have to filter that noise out without losing the signal.
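One common way to strip operator jitter from a demonstration is a simple exponential moving-average filter over the recorded joint trajectory. A self-contained sketch (the data and smoothing constant are illustrative, not from the bootcamp's pipeline):

```python
def smooth(trajectory, alpha=0.3):
    """Exponentially smooth a 1-D sequence of teleoperated joint positions.
    Smaller alpha removes more operator jitter; alpha = 1.0 leaves the
    trajectory unchanged."""
    if not trajectory:
        return []
    out = [trajectory[0]]
    for x in trajectory[1:]:
        # Blend the new sample with the running estimate.
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

# A jittery demonstration whose underlying motion is a ramp from 0.0 to 0.4.
raw = [0.0, 0.12, 0.08, 0.22, 0.18, 0.32, 0.28, 0.40]
print(smooth(raw))
```

The trade-off is exactly the one described above: push alpha too low and the filter lags the motion, losing the signal along with the noise.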
Vision-Language-Action Models
VLA models combine vision, language understanding, and motor control into a single architecture. They take an image and a text instruction as input - "pick up the red block" - and output motor commands. This is the architecture behind systems like Google's RT-2 and the open-source OpenVLA.
The advantage of VLAs is generalisation. A model trained to pick up blocks might also learn to pick up cups, bottles, or tools - because it understands the concept of "picking up" rather than just memorising a specific motion. The bootcamp covers training and fine-tuning VLA models for manipulation tasks.
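The input/output contract described above can be sketched as a toy stub. Everything here is hypothetical - a real VLA runs a large vision-language backbone where this stub returns a placeholder - but the interface is the point:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """What a VLA consumes: a camera frame plus a language command."""
    image: List[List[int]]   # toy stand-in for a pixel grid
    instruction: str         # e.g. "pick up the red block"

class ToyVLAPolicy:
    """Illustrative stub showing the shape of a VLA policy's interface."""
    ACTION_DIM = 7  # e.g. a 6-DoF end-effector delta plus a gripper command

    def predict(self, obs: Observation) -> List[float]:
        # A real model would encode the image and instruction here;
        # this stub just returns a zero action of the right dimension.
        return [0.0] * self.ACTION_DIM

policy = ToyVLAPolicy()
obs = Observation(image=[[0] * 4 for _ in range(4)],
                  instruction="pick up the red block")
action = policy.predict(obs)
print(len(action))
```

Because the task is specified in language rather than baked into the policy, swapping "red block" for "blue cup" needs no retraining of the interface - only a model general enough to ground the new words.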
The Unitree G1 Platform
The Unitree G1 is a humanoid robot designed for research and development. It's not a toy - it's a 1.3-metre-tall machine with 23 degrees of freedom, capable of walking, balancing, and manipulating objects. It's also affordable compared to other humanoid platforms, which is why it's become a popular choice for academic and industrial labs.
Deploying to the G1 means dealing with real-world constraints: sensor latency, actuator limits, balance control, and the risk of the robot falling. The bootcamp doesn't hide these challenges - it forces participants to confront them.
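Two of those constraints - actuator limits and sensor latency - can be sketched with a torque clamp and a staleness watchdog. The numbers below are illustrative, not G1 specifications:

```python
import time

TORQUE_LIMIT = 20.0     # Nm per joint (hypothetical safety margin)
MAX_SENSOR_AGE = 0.05   # seconds: refuse to act on state older than this

def clamp_torques(commands, limit=TORQUE_LIMIT):
    """Saturate each commanded joint torque so the policy can never
    ask for more than the actuators (or safety margins) allow."""
    return [max(-limit, min(limit, c)) for c in commands]

def sensors_fresh(last_stamp, now=None, max_age=MAX_SENSOR_AGE):
    """Watchdog: return False when the latest sensor reading is too old,
    signalling that the controller should trigger a safe stop."""
    if now is None:
        now = time.monotonic()
    return (now - last_stamp) <= max_age

print(clamp_torques([5.0, -37.2, 63.0]))        # -> [5.0, -20.0, 20.0]
print(sensors_fresh(last_stamp=0.0, now=0.03))  # -> True
print(sensors_fresh(last_stamp=0.0, now=0.2))   # -> False
```

Checks like these sit between the learned policy and the hardware; in simulation they are easy to forget, on a falling robot they are not.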
Who Should Attend
This is aimed at developers and researchers who already have some robotics or machine learning experience. You should be comfortable with Python, have a basic understanding of reinforcement learning, and ideally some familiarity with ROS (Robot Operating System). It's not a beginner course - it's a practical training programme for people who want to move from simulation to deployment.
Three days isn't enough to become an expert. But it's enough to understand the workflow, build a working system, and know what questions to ask when you go back to your own projects.
The bootcamp runs June 17-19 in Barcelona. More details and registration are available through The Construct's announcement on ROS Discourse.