In this hands-on workshop, you’ll learn the fundamentals of integrating object detection into a mobile robot built on a ROS/Jetson framework. After developing your code in a Gazebo simulation environment, you’ll deploy it to the physical robot for testing.

You’ll start with an overview of the Robot Operating System (ROS) and its associated architecture. Then, you’ll build a node for simple robot movement using the development workflow: simulate, develop, and deploy. You’ll proceed to integrate image recognition and object detection deep neural network (DNN) models, including an exploration of how to build your own models with DIGITS. You’ll verify the robot’s behavior in simulation and finally deploy the project to a Jetson/ROS robot.

Throughout the workshop, you’ll get hands-on simulation and coding experience using a live GPU-accelerated environment. At the end, you’ll have access to additional resources to design and deploy Jetson-based applications on your own.


Prerequisites

Experience with deep neural networks (specifically variations of CNNs) and intermediate-level experience with Python. Knowledge of Linux and C++ is helpful but not required.

Learning Outcomes

  • Learn the general ROS paradigm of message passing between nodes
  • Learn the robotic development workflow by taking a hands-on approach to simulation, development, and deployment using the Gazebo simulator
  • Learn to integrate an object detection inference model, trained with DIGITS, into a ROS network to build autonomous behavior for a Jetson-based robot
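The publish/subscribe pattern at the heart of the ROS message-passing paradigm can be sketched in plain Python. This is an illustrative model only (no ROS installation required); the class and method names here are not the rospy API:

```python
# Minimal sketch of publish/subscribe message passing, the core ROS
# paradigm: nodes exchange messages over named topics without knowing
# about each other directly. Names here are illustrative, not rospy.

class Topic:
    """A named channel that fans each message out to all subscribers."""

    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def subscribe(self, callback):
        """Register a callback, as a subscriber node would."""
        self._callbacks.append(callback)

    def publish(self, message):
        """Deliver a message to every subscriber, as a publisher node would."""
        for callback in self._callbacks:
            callback(message)


# A "camera" node publishes an image header; a "logger" node subscribes.
received = []
camera_topic = Topic("/camera/image_raw")
camera_topic.subscribe(received.append)
camera_topic.publish({"width": 640, "height": 480})
```

In real ROS, the master brokers topic discovery and messages are strongly typed (e.g., `sensor_msgs/Image`), but the decoupled node-to-node flow is the same.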

Why Deep Learning Institute Hands-on Training?

  • Learn how to build deep learning and accelerated computing applications across a wide range of industry segments such as autonomous vehicles, digital content creation, finance, game development, healthcare, and more
  • Benefit from guided, hands-on experience with widely used, industry-standard software, tools, and frameworks
  • Gain real-world expertise through content designed in collaboration with industry leaders such as Children’s Hospital Los Angeles, Mayo Clinic, and PwC
  • Earn a DLI certificate to demonstrate your subject matter competency and support professional career growth
  • Access content anywhere, anytime with a fully configured, GPU-accelerated workstation in the cloud


  • A DLI certificate of subject matter competency is granted upon successful completion of the assessment at the end of the workshop.

Course Outline

Introduction to ROS Robot Control

  • System overview
  • ROS and Gazebo
  • Coding & testing in simulation

Work with ROS nodes and topics on a cloud desktop to code and run robot movement in a Gazebo simulation.
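In ROS, robot movement is typically commanded by publishing velocity messages (geometry_msgs/Twist) to a topic such as /cmd_vel. A minimal sketch of the command-to-velocity mapping, with illustrative values and a hypothetical `twist_for` helper (in a real node these numbers would fill a Twist message):

```python
# Map high-level movement commands to (linear_x, angular_z) velocity
# pairs. In a running ROS node these values would populate a
# geometry_msgs/Twist message published to a topic such as /cmd_vel.
# The specific velocities below are illustrative, not from the course.

VELOCITIES = {
    "forward": (0.2, 0.0),   # move forward at 0.2 m/s, no rotation
    "left":    (0.0, 0.5),   # rotate counter-clockwise at 0.5 rad/s
    "right":   (0.0, -0.5),  # rotate clockwise
    "stop":    (0.0, 0.0),   # halt
}


def twist_for(command):
    """Return the (linear_x, angular_z) pair for a movement command."""
    if command not in VELOCITIES:
        raise ValueError("unknown command: %s" % command)
    return VELOCITIES[command]
```

Because the same topic exists in both Gazebo and on the physical robot, a node built around this mapping can be tested in simulation first and then deployed unchanged.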

Deploy to the Robot

  • Deploy and test

Deploy your code to the physical robot and test it in the real world.

ROS Integration of Image Recognition

  • Inference on the robot
  • Training with DIGITS
  • Coding & testing with ROS bags

Learn to integrate inference with ROS nodes. You’ll write code to parse classification messages and test with ROS bags on the desktop.
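The exact message type the inference node publishes depends on the setup; assuming each classification result arrives as (label, confidence) pairs, the parsing logic inside a subscriber callback might look like this sketch (the `best_classification` helper and threshold are hypothetical):

```python
# Pick the highest-confidence classification above a threshold.
# The (label, confidence) layout is an assumption for illustration;
# the actual ROS message fields depend on the inference node used.

def best_classification(results, threshold=0.5):
    """Return the most confident label, or None if nothing passes.

    results: list of (label, confidence) pairs from one inference pass.
    """
    if not results:
        return None
    label, confidence = max(results, key=lambda pair: pair[1])
    return label if confidence >= threshold else None
```

Replaying a recorded ROS bag of camera frames through the inference node lets you exercise this parsing on the desktop, with no robot attached.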

ROS Integration of Object Detection

  • Inference on the robot
  • Training with DIGITS
  • Coding & testing with ROS bags and Gazebo simulation

Combine what you’ve learned about control and inference integration to build a ROS node that autonomously moves toward an object it identifies.
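One common way to close this loop is to steer proportionally to how far the detected object’s bounding-box center sits from the image center. The function below is an illustrative sketch of that idea; the gain and the `steering_command` name are assumptions, not the course’s implementation:

```python
# Proportional steering toward a detected object. The returned value
# would become the angular z component of a Twist message; following
# ROS convention (REP 103), positive z rotation is counter-clockwise
# (a left turn). Gain and function name are illustrative.

def steering_command(bbox_left, bbox_right, image_width, gain=1.0):
    """Angular velocity that turns the robot toward a bounding box.

    bbox_left, bbox_right: the box's horizontal extents in pixels.
    image_width: camera frame width in pixels.
    """
    box_center = (bbox_left + bbox_right) / 2.0
    # Normalized offset in [-1, 1]; negative when the object is left of center.
    offset = (box_center - image_width / 2.0) / (image_width / 2.0)
    # Turn opposite the offset's sign so the object drifts toward center.
    return -gain * offset
```

Paired with a constant forward velocity, this keeps the object centered as the robot approaches it; the behavior can be verified in Gazebo before touching the hardware.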

Deploy the Object Detection Control Node to the Robot

  • Deploy and test

Deploy your code to the physical robot to autonomously find objects.

Next Steps and Q&A

  • Discuss next steps and questions.

Use this time to discuss any questions about the assessment or course material.
