Saturday July 31st

Simulation: 10.00 – 12.45 EDT
10.00: Keynote: Brian Gerkey – Open Robotics
11.00: SBX robotics: Teaching robots how to see using synthetic data
11.15: BabaCAD: Simulation and control software
11.30: Bottango: Animating robots with Bottango
11.45: Coffee Break
12.00: CyberBotics: SLAM and navigation with TurtleBot3, ROS2, and Webots
12.30: Ready Robotics: Forge/OS 5 – An Operating System for Industrial Automation
12.45: Studio Technix: Improving robotics with multi-domain simulation

Mobile Robots: 14.00 – 16.40 EDT
14.00: Keynote: Mark Emerton – Senior Innovation Lead for Robots and AI of UK Research and Innovation (UKRI) presenting highlights of the £112 million “Robots for a Safer World” challenge
14.45: A Fully Autonomous Indoor Mobile Robot Using Visual SLAM
14.55: Jason Luk – Indoor self-driving robot
15.05: Swarms for People – Sabine Hauert of the Bristol Robotics Lab
15.35: Coffee break
15.45: BeBOT: Bernstein Polynomial Toolkit for Optimal Trajectory Generation
16.00: Human Mode Robotics
16.15: Last Mile Delivery Robot for Residential Buildings
16.25: ExoMy – The 3D Printed Open Source Rover

Simulation

Proving a concept in simulation is a vital part of robotics. There is no need to jeopardize hundreds or thousands of dollars' worth of hardware when we can verify everything in simulation first! This section showcases some of the amazing new simulation efforts being pushed forward by the community.

Keynote – 10am EDT: Brian Gerkey – CEO of Open Robotics, makers of ROS, Gazebo, and Ignition. He is committed to creating open source software and hardware for the robotics community and has been a long-standing icon in the field since his time at Willow Garage.
https://www.openrobotics.org/

Schedule

Simulation
10.00: Keynote – Brian Gerkey – Open Robotics
11.00: SBX robotics: Teaching robots how to see using synthetic data
11.15: BabaCAD: Simulation and control software
11.30: Bottango: Animating robots with Bottango
12.00: CyberBotics: SLAM and navigation with TurtleBot3, ROS2, and Webots
12.30: Forge/OS 5: An Operating System for Industrial Automation
12.45: Studio Technix: Improving robotics with multi-domain simulation

11.00: SBX Robotics – Synthetic data for robot vision
Ian Dewancker

We teach robots to see, using simulated environments. Deep learning algorithms for computer vision are powerful, but require large amounts of data to work well. Synthetic training data is the fastest and cheapest way to improve or bootstrap a deep learning computer vision model for your robot. Skip costly hardware setup, data collection, data annotation, and data cleaning. Using technology from film and gaming, we produce realistic, perfectly labeled training datasets for object detection, segmentation, and 6D pose estimation models. Each of our datasets is a product of iterative testing and optimization to achieve the best performance on real-world data.
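
SBX's actual pipeline is not public, so nothing below is their code; it is only a minimal Python sketch of the basic idea behind synthetic labeled data: composite a rendered object onto a background and record the bounding box automatically, so the annotation comes for free. All file names are hypothetical.

```python
# Minimal sketch of synthetic data generation (not SBX's pipeline): paste a
# rendered object crop onto a background image and record its bounding box.
# Assumes the object crop is smaller than the background.
import json
import random
from PIL import Image

def make_sample(obj_path, bg_path, out_prefix):
    """Composite one object crop (with alpha) onto a background image."""
    bg = Image.open(bg_path).convert("RGB")
    obj = Image.open(obj_path).convert("RGBA")

    # Random scale and position for the pasted object.
    scale = random.uniform(0.3, 0.8)
    obj = obj.resize((int(obj.width * scale), int(obj.height * scale)))
    x = random.randint(0, bg.width - obj.width)
    y = random.randint(0, bg.height - obj.height)
    bg.paste(obj, (x, y), obj)  # the alpha channel acts as the paste mask

    bg.save(f"{out_prefix}.png")
    label = {"bbox": [x, y, x + obj.width, y + obj.height], "class": "widget"}
    with open(f"{out_prefix}.json", "w") as f:
        json.dump(label, f)

# Hypothetical file names; real pipelines add lighting variation, occlusion,
# and domain randomization on top of this simple compositing step.
make_sample("widget_render.png", "factory_floor.jpg", "sample_0001")
```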

http://www.sbxrobotics.com https://twitter.com/sbxrobotics https://www.youtube.com/channel/UCCbphSB51ttu0C921J1xQoA

11.15: BabaCAD – Simulation and control software
Mirza

BabaCAD Robotics is robotics simulation software made as an add-on for BabaCAD, a professional CAD package. BabaCAD Robotics is fully integrated into BabaCAD, so in addition to robotics simulation, users can also take advantage of the numerous CAD design and drafting features of professional CAD software. One advantage of this system compared to other robotics simulators is that BabaCAD Robotics has a full CAD platform behind it. For example, users can import an existing 3D working environment for a robot from standard 3D formats (DXF, DWG, STEP) and start simulating their robots in a realistic 3D environment to test collisions, obstacle avoidance, manipulator working range, and much more.

BabaCAD Robotics offers trajectory planning and offline programming, and it can also be used to control real robots in real time; an open-source Python interface is available to customize any part of the system.

https://www.babacad.com/robotics

11.30: Bottango – Animating robots with Bottango
Evan McMahon

Bottango is an intuitive software studio for animating robotics, animatronics, and all kinds of hardware.

One of the most important lessons I’ve learned in my career as a professional game developer is that nothing supercharges development and product creation more than creating tools and processes that empower people to build what is in their mind’s eye. When I first started building robots, my game developer instincts quickly kicked in. I didn’t just want a motor to move. I wanted to be able to control how my motors moved: How long did it take? How fast did it move? What curve did the motor take to get from start to finish? Was it slow at the start? Did it bounce a little past the destination and move back?

I built Bottango as a visual tool for creative expression through robotics, inspired by the animation workflows of game and movie production. In Bottango, you work with a 3D representation of your robot and animate it using keyframes and Bézier interpolation curves. Animations can be played back in real time or exported as generated code, using the open-source Bottango library for Arduino-compatible microcontrollers. Bottango also includes other helpful tools such as inverse kinematics, blending between key poses, an API to script and control Bottango, and a workflow to add support for your own custom motors and other effectors. You can download Bottango for free at http://www.Bottango.com.
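
Bottango itself is a visual editor plus an Arduino library, so none of its code appears here. Purely as an illustration of what "keyframes with easing curves" means for a servo, here is a hedged Python sketch: a 1-D simplification in which a cubic Bézier shapes how fast a motor moves between two keyframed angles (real animation tools use 2-D control handles; all names below are my own).

```python
# Illustrative sketch (not Bottango's code): interpolate a servo angle
# between two keyframes using a cubic Bezier easing curve, the kind of
# curve an animator drags around in a keyframe editor.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a 1-D cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def eased_angle(start_deg, end_deg, t):
    """Ease-in/ease-out between two keyframed angles."""
    # Control values 0, 0.1, 0.9, 1 give a slow start and a slow stop.
    eased_t = cubic_bezier(0.0, 0.1, 0.9, 1.0, t)
    return start_deg + (end_deg - start_deg) * eased_t

# Sample a move from 10 deg to 120 deg across 101 frames.
frames = [eased_angle(10, 120, i / 100) for i in range(101)]
print(frames[0], frames[50], frames[100])  # 10.0 ... 65.0 ... 120.0
```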

http://www.Bottango.com http://www.BottangoBolt.com https://www.youtube.com/c/EvanMcMahon

Coffee break

12.00: CyberBotics: SLAM and navigation with TurtleBot3, ROS2, and Webots
Darko Lukic

Webots is an open-source and multi-platform desktop application used to simulate robots. It provides a complete development environment to model, program, and simulate robots. It includes a stable physics engine, physically based rendering (high visual fidelity), and a wide range of sensors, actuators, and robot models.

We identified a need to bring Webots closer to ROS 2 users, so we created the webots_ros2 package. It integrates the most common ROS 2 tools and libraries, such as Navigation2, Cartographer (SLAM), MoveIt2, RViz2, and ros2_control. We also included many examples showing how to simulate ground mobile robots, robotic arms, drones, and automobiles, how to import URDF models, and how to simulate multiple robots.
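
A nice consequence of the ROS 2 bridge is that ordinary ROS 2 nodes run against the simulated robot unchanged. As a hedged sketch (assuming the usual TurtleBot3 topic names /scan and /cmd_vel; nothing here is specific to webots_ros2), a minimal rclpy node that stops when an obstacle is close might look like this:

```python
# Minimal rclpy sketch: read the laser scan and stop the robot when
# something is closer than 0.5 m. Works against a simulated TurtleBot3
# the same way it would against the real one, assuming the default
# /scan and /cmd_vel topic names.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class SimpleAvoider(Node):
    def __init__(self):
        super().__init__('simple_avoider')
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, scan):
        cmd = Twist()
        closest = min((r for r in scan.ranges if r > 0.0), default=float('inf'))
        cmd.linear.x = 0.0 if closest < 0.5 else 0.2  # m/s
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(SimpleAvoider())

if __name__ == '__main__':
    main()
```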

https://www.youtube.com/user/cyberboticswebots/videos https://github.com/cyberbotics/webots_ros2 https://cyberbotics.com/

12.30: Ready Robotics: Forge/OS 5 – An Operating System for Industrial Automation
Benjamin Gibbs

We created the equivalent of Windows for robot arms. More specifically, we built a set of common APIs that abstracts the different programming languages of several major robot brands (ABB, Epson, FANUC, Staubli, Universal Robots, Yaskawa Motoman) and enables real-time control of industrial and collaborative robot arms, as well as other automation hardware.

Our software, called Forge/OS 5, is built on a plugin-based architecture, so any hardware component can be integrated and controlled by writing a driver. That driver interfaces with our hardware abstraction layers (a rough sketch of this idea follows the list below). On top of this, we have built a set of core services that manage all the connected hardware. Using our own APIs, we also built a set of “no-code” native apps that allow novice users to easily control and program robot work cells, including the following:

  1. Device Config, which enables an end user to easily add a variety of different robots, cobots, grippers, and other peripherals with a few button clicks.
  2. Device Control, which enables an end user to actuate connected peripherals (e.g. a pneumatic valve bank) or jog and move robot arms.
  3. Task Canvas, which enables an end user to easily create programs for a robot work cell, using a building block flowchart programming interface.
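
Forge/OS is closed-source and its SDK is not yet public, so the real driver API is not shown here. The following is a purely hypothetical Python sketch of what a plugin-style hardware driver behind a common abstraction layer tends to look like; every name is invented for illustration.

```python
# Purely hypothetical sketch of a plugin-style robot driver behind a common
# hardware abstraction layer. None of these names are Forge/OS APIs; they
# only illustrate the architectural idea described above.
from abc import ABC, abstractmethod
from typing import List

class RobotArmDriver(ABC):
    """Common interface every brand-specific driver plugin implements."""

    @abstractmethod
    def connect(self, address: str) -> None: ...

    @abstractmethod
    def move_joints(self, joint_angles_deg: List[float], speed: float) -> None: ...

    @abstractmethod
    def get_joint_angles(self) -> List[float]: ...

class ExampleBrandDriver(RobotArmDriver):
    """One vendor's plugin: translates the common calls into that vendor's
    own protocol (details omitted)."""

    def connect(self, address: str) -> None:
        print(f"connecting to controller at {address}")

    def move_joints(self, joint_angles_deg, speed):
        print(f"moving to {joint_angles_deg} at {speed:.0%} speed")

    def get_joint_angles(self):
        return [0.0] * 6

# The core services only ever see the abstract interface, so "no-code" apps
# like Task Canvas can drive any registered brand the same way.
driver: RobotArmDriver = ExampleBrandDriver()
driver.connect("192.168.0.10")
driver.move_joints([0, -90, 90, 0, 45, 0], speed=0.5)
```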

One of the devices we integrated with is the Keba KeTop 150 industrial teach pendant (the same model used on the Yaskawa Motoman HC10 cobot), which enables the real-time control of not only collaborative robots, but industrial robots as well. We will be launching an SDK for Forge/OS 5 in winter 2021, to enable third-party developers to create their own apps and products using our software.

https://ready-robotics.com https://www.youtube.com/channel/UCOUlHG_OMWAumLEN2DMCTww https://www.linkedin.com/company/10784116/

12.45: Studio Technix: Improving robotics with multi-domain simulation
Tuur Benoit

Simulation has proven to be very effective in robotics, because developing on the actual hardware can be tedious, slow, and sometimes even dangerous. With Studio Technix you can simulate and test embedded and robotics applications.

This tool aims to be a multi-domain simulation tool. That means that besides the usual 3D physics, other domains are also simulated, such as the human-machine interface and data communication (e.g. IoT). You create a simulation model by dragging and dropping components from a library onto a diagram. Each component represents the behavior of a sensor, actuator, or other peripheral (for example: a GPS sensor, a push button, an LCD text screen, a servo motor, …). By connecting the inputs and outputs of each component, a simulation model is created. Some components also have a 3D representation that can be visualized in a 3D environment based on the Unity game engine.

The tool provides an API (in C code) to connect any embedded application with the simulator. This API closely matches the typical Hardware Abstraction Layer (HAL) of real microcontrollers, so once the application is finished in simulation, most of the code can be copied directly onto the real device. You can interact with the simulation in real time by pushing buttons, turning knobs, etc. In addition, the simulation results are visualized in real time on a graph, on dashboard gauges, or in the 3D environment.
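
The real Studio Technix API is in C; the following is only a language-agnostic sketch of the HAL idea (written in Python for brevity, with all names invented): application code talks to a thin hardware abstraction layer, and only the HAL implementation changes between the simulator and the real microcontroller.

```python
# Illustrative sketch of the HAL pattern described above (not the Studio
# Technix API): the application only ever calls the abstraction layer, so
# the same logic runs against the simulator or the real device.
class SimulatedHAL:
    def read_button(self) -> bool:
        return False            # a real implementation would query the simulator

    def set_servo_angle(self, deg: float) -> None:
        print(f"[sim] servo -> {deg:.1f} deg")

def application_loop(hal) -> None:
    """Application logic, written once against the HAL interface."""
    if hal.read_button():
        hal.set_servo_angle(90.0)
    else:
        hal.set_servo_angle(0.0)

application_loop(SimulatedHAL())  # swap in a hardware HAL on the device
```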

[Embedded media: SolarCleaningRobot3]

https://www.studiotechnix.com/

Mobile Robots

What kind of applications do mobile robots have? What are some of the interesting ways they can be used by professionals and hobbyists alike? What are some of the ways we can encourage more people to develop mobile robots at home or in industry to solve problems?

Keynote – 2pm EDT – Mark Emerton – Senior Innovation Lead for Robots and AI of UK Research and Innovation (UKRI) presenting highlights of the £112 million “Robots for a Safer World” challenge.

Schedule

Mobile Robots
14.00: Keynote – Mark Emerton – Senior Innovation Lead for Robots and AI of UK Research and Innovation (UKRI) presenting highlights of the £112 million “Robots for a Safer World” challenge
14.45: A Fully Autonomous Indoor Mobile Robot Using Visual SLAM
14.55: Jason Luk – Indoor self-driving robot
15.05: Swarms for People – Sabine Hauert of the Bristol Robotics Lab
15.35: Coffee break
15.45: BeBOT: Bernstein Polynomial Toolkit for Optimal Trajectory Generation
16.00: Human Mode Robotics
16.15: Last Mile Delivery Robot for Residential Buildings
16.25: ExoMy – The 3D Printed Open Source Rover

14.45: Visual SLAM robot
Beshr Eldebuch & Rahaf Alzayat

Autonomous motion for robots is a prerequisite for many robotic applications. A mobile robot needs to perceive its environment in sufficient detail to allow the completion of a task with reasonable accuracy. In general, acquiring models of unknown environments requires a solution of three subtasks: mapping, localization, and motion control. Mapping is the problem of integrating the information gathered with the robot’s sensors into a given representation. Localization is the problem of estimating the position of the robot. Finally, the motion control problem involves how to steer a vehicle to efficiently guide it to the desired location or along a planned trajectory.

SLAM stands for Simultaneous Localization and Mapping. In particular, SLAM using cameras is referred to as visual SLAM because it is based on visual information only. In our project, an Omni-directional robot was built and equipped with an RGB-D camera for autonomous navigation, map building, and visual odometry. We used Real-Time Appearance-Based Mapping (RTAB-Map) as our Visual SLAM algorithm.

In general, an omni-directional base allows for more flexible handling of mobile robots. However, multi-directional wheel slip leads to poor odometry estimates from the wheel encoders (which count how many times each motor has rotated), limiting the accuracy and overall usefulness of this type of base.

In order to navigate autonomously, two paths need to be generated. First, a Global Planner generates an obstacle-free path from start to goal in a given map. This could be between cities, countries, or even inside buildings. Second, a Local Planner acts on the current situation: it “sees” all the details of the environment, such as traffic lights, pedestrians, other cars, and so on.
Finally, we show that visual odometry estimates are sufficient to generate a global path for the robot using the Dijkstra algorithm and a local path using the Timed-Elastic Band planner. Additionally, this system provides high-accuracy visual odometry estimates and is capable of compensating for wheel slip on a four-wheeled multi-directional mobile robot base.
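
The global/local planner split described above matches the standard ROS navigation stack (Dijkstra in the global planner, TEB as the local planner). Assuming that stack is used, sending the robot a goal from Python looks roughly like this; the action and frame names are the ROS defaults, not necessarily this project's exact configuration.

```python
# Rough sketch of sending a navigation goal through the standard ROS
# navigation stack (move_base). Assumes the default action name; the
# project's own launch files may differ.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0    # metres in the map frame
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0  # face along +x

client.send_goal(goal)    # the global planner (Dijkstra) plans the path,
client.wait_for_result()  # the TEB local planner follows it
```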

https://www.youtube.com/channel/UCE0Y4kFboRDh4XMrqhUjZYg/featured https://www.linkedin.com/in/beshr-eldebuch/ https://www.linkedin.com/in/rahafalzayat/

14.55: Indoor self-driving robot
Jason Luk

The Indoor Self-driving Robot is built for performing domestic services. It can deliver and unload items around a building automatically. Users can give commands to the robot through a web page on the local network from their mobile device, or by voice command with a Siri shortcut. The robot can also return to its charging station automatically when it is low on battery or after finishing its jobs. The robot operates based on AI computer vision, with OpenCV color and object detection running on a Jetson Nano, and is controlled by a Mega 2560 Pro. All the data is processed on the robot itself to protect the users’ privacy.
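
The exact pipeline is not published, but the color-detection part is a standard OpenCV pattern. As a hedged sketch of the kind of processing that might run on the Jetson Nano (camera index and color range are assumptions):

```python
# Hedged sketch of OpenCV color detection as mentioned above: threshold the
# camera image in HSV space and draw a box around the largest blob.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Example range for a red-ish marker; tune for the real target color.
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('detection', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```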

15.05: Swarms for People
Sabine Hauert – Bristol Robotics Laboratory

Coffee break

15.45: BeBOT: Bernstein Polynomial Toolkit for Optimal Trajectory Generation
Calvin Kielas-Jensen

The goal of my project is to put forward an algorithm for solving motion planning problems, enabling autonomous operations for cooperative vehicles navigating in complex environments in the presence of humans. The approach is to formulate the trajectory generation problem as an optimal control problem (OCP) and then use Bernstein polynomials to transcribe it into a nonlinear programming (NLP) problem which can be solved using off-the-shelf solvers. So, what does all of that mean?
One way to solve robot navigation problems is to generate a path from your current position to your desired position and then use a lower level controller to follow said path. Think of generating the path as using your phone to calculate a route and you following that route in your vehicle as the lower level controller. For this project we will be generating trajectories, which are slightly different. Instead of just having positions, trajectories can also include additional information such as speed and acceleration at a given time. This is useful because an algorithm computing a trajectory can take important factors such as the maximum speed of the vehicle into account.
There are many approaches to designing an algorithm that computes trajectories and they each have their own benefits and drawbacks. Formulating the trajectory generation problem as an OCP allows us to tweak all the inputs to our autonomous vehicle to find the best set of inputs to achieve a desired goal. However, we do need to have a good mathematical model of our system and a decent initial guess that is close to the most optimal solution. This formulation also makes it relatively easy to include additional constraints to our problem such as obstacle avoidance and a maximum turning rate. Unfortunately, despite our significant progress in mathematics, it is often impossible to solve an OCP by hand. This is where problem transcription comes in.
Nonlinear programming problems are a specific formulation where the goal is to minimize some function subject to equality and inequality constraints (be aware that linear programming problems also exist, but for this project we will use nonlinear programming). Suppose you work for a delivery company and you want to make as many deliveries as possible by the end of the day without running out of gas or getting pulled over for speeding. In this case, you would like to maximize the number of deliveries. To make that fit the NLP problem formulation, we can simply say that we want to minimize the negative of the number of deliveries (e.g., -10 deliveries would be preferred to -5 deliveries). Let’s say that no matter what, you want to finish your deliveries at exactly 5PM. That end time would be considered an equality constraint. Your inequality constraints are the amount of gas you have and the maximum speed. You don’t want to run out of gas, but you also don’t necessarily need to use up all the gas in your tank. Similarly, for speed, you can drive lower than the maximum speed, but you don’t want to go over it and risk a ticket. The nice thing about NLP problems is that they have been studied by many bright minds for years which means methods to solve them already exist. So, if you can represent your problem as an NLP problem, then you can easily solve it using one of the existing methods.
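
To make that formulation concrete, here is a hedged toy version of the delivery story (all numbers invented) handed to an off-the-shelf NLP solver, SciPy's SLSQP: minimize the negative number of deliveries, with an equality constraint on finish time and inequality constraints on fuel and speed.

```python
# Toy illustration (numbers invented) of casting the delivery story above
# as a nonlinear program and handing it to an off-the-shelf solver.
from scipy.optimize import minimize

# Decision variables: x[0] = deliveries made, x[1] = average speed (mph).
# Each delivery needs 3 miles of driving and 10 minutes at the door.
def objective(x):
    return -x[0]  # maximizing deliveries == minimizing their negative

def finish_time(x):            # equality: exactly an 8-hour shift
    deliveries, speed = x
    return 3 * deliveries / speed + deliveries / 6.0 - 8.0

def fuel_left(x):              # inequality: stay within a 12-gallon tank at 25 mpg
    return 12.0 - 3 * x[0] / 25.0

result = minimize(
    objective,
    x0=[20.0, 40.0],
    method="SLSQP",
    bounds=[(0, None), (10, 65)],              # the speed limit becomes a bound
    constraints=[{"type": "eq", "fun": finish_time},
                 {"type": "ineq", "fun": fuel_left}],
)
print(result.x)  # roughly 37.6 deliveries, driving at the 65 mph limit
```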

Transcription is a bit like switching from geometry to algebra: you could use geometry to find an unknown value on a shape, or you could represent the shape using algebra, solve for the unknown value, and reach the same conclusion. To convert the trajectory generation OCP into an NLP problem, you could simply represent the trajectory as a series of many points. The NLP solver would then need to tweak each one of those points in order to minimize the given function while adhering to the constraints. Increasing the number of points quickly makes the problem impossible for even the most powerful computers to solve, yet you might need many points to ensure safety and feasibility. A clever way to reduce the number of points is to instead represent the path as a polynomial. We could potentially recreate the same path represented by thousands of points by simply changing the coefficients A, B, and C in the equation f(x) = Ax^2 + Bx + C. There are different polynomial bases, such as Legendre, Hermite, or, in the last example, monomial. This work uses the Bernstein basis.

Bernstein polynomials, frequently called Bézier curves, provide an intuitive way to build and modify a curved line. If you have ever used the spline tool in a vector graphics program like Adobe Illustrator, you likely used Bernstein polynomials. These polynomials provide many useful properties such as needing few points to represent a useful trajectory and making important mathematical computations very efficient. These properties make them an excellent choice for converting the OCP into an NLP problem since they reduce the computation time needed while still guaranteeing safety in continuous time, which is often not possible using other methods.
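
As a small, self-contained illustration of the Bernstein form (generic math, not the BeBOT API, which lives in the repository linked below), the sketch below evaluates a planar Bézier trajectory from a handful of control points using de Casteljau's algorithm; those few points fully describe the whole curve.

```python
# Sketch of evaluating a Bernstein (Bezier) trajectory from a handful of
# control points with de Casteljau's algorithm. Generic math, not BeBOT.
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate the Bezier curve defined by control_points at t in [0, 1]."""
    pts = np.array(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
    return pts[0]

# Five control points fully describe this planar trajectory.
ctrl = [(0, 0), (1, 3), (3, 4), (5, 1), (6, 0)]
samples = [de_casteljau(ctrl, t) for t in np.linspace(0, 1, 11)]
print(samples[0], samples[5], samples[-1])
# A handy Bernstein property: the curve always stays inside the convex hull
# of its control points, which is part of what makes continuous-time safety
# guarantees tractable.
```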

With the support of the University of Iowa Cooperative Autonomous Systems lab, I have developed an open-source Python library named BeBOT (Bernstein/Bézier Optimal Trajectories), which gives the user easy access to the useful properties of Bernstein polynomials for optimal trajectory generation in real time. Our research, along with the BeBOT library, provides a powerful method for generating optimal trajectories in complex environments for real-time applications.

https://github.com/caslabuiowa/BeBOT; https://www.youtube.com/channel/UCe_cA6GRcbHIWaK9jUkip0w; https://venanziocichella.com/

16.00: Human Mode

Human Mode is a frontier technology company pursuing advancements in robotics, AI and VR. With our tech, we are attempting to blur the line between the virtual and physical world: from VR-controlled robots that can extend your presence anywhere, to VR software that bridges your real world with your avatar’s world.

Currently, we are engaged in two core projects: Human Mode Robotics, which is our work on VR-controlled proxy robots and synthetic intelligence, and Massive Loop, which is a unique VR experience coming later this year. Since December 2019, we have developed three Gemini robot prototypes: Gemini.1, a simple prototype target for the Mech Suit; Gemini.2, an untethered, self-driving robot that avoids obstacles; and Gemini.3, a strong, durable robot that can lift up to 90 lbs. All three of the Gemini can be controlled remotely with a VR headset. The Mech Suit, developed in conjunction with the first Gemini, is a full-body motion control suit that allows the Gemini.1 to mimic the wearer’s movements. When wearing the suit, the robot is a true extension of yourself. You can hear what it hears, see what it sees, and control it using your normal human body motion. The suit allows its wearer to control a robot target from a great distance, making it a crucial piece of equipment in our quest to use durable robots to assist humans and keep them out of harm’s way.

The Mech Suit and Gemini series remain under continual development. We also developed a synthetic intelligence project called the NTT.1, which was our first effort to integrate our AI research with robotics. It was built to work with limited data: it can recognize someone it’s never seen before, and then immediately remember them going forward. This is potentially a useful skill for bots in hospitality or caregiver roles, or in our case, as our new office receptionist. Going forward, we are driven to create an environment that promotes innovation, learning and competition in the robotics world. This is only the beginning, and we are excited to keep going.

youtube.com/channel/UC1WY1L8m5joSV4jwJWoaATQ humanmode.com facebook.com/HumanModeTech

16.15: Last Mile Delivery Robot for Residential Buildings
Ammaar Solkar

The need for better and more optimized last-mile delivery methods is growing rapidly as more and more customers shop online for food, groceries, medicines, electronics, furniture, and so on. Consumers want faster and more frequent deliveries. Timely delivery is becoming more important as online shopping goes mainstream, and customer satisfaction depends on the delivery experience as well. Last-mile delivery accounts for a large portion of the total cost of shipping, and in order to stay profitable while providing deliveries at lower cost, newer methods and technologies have to be developed and adopted.

To meet this need, several companies and startups are exploring ways to use drones, autonomous mobile robots (AMRs), autonomous vehicles, droids, and more. One important limitation of current drones and AMRs is that they only work for open areas and suburban housing. They cannot deliver to customers’ doors in buildings and high-rises, either requiring the customer to collect the packages from the lobby or leaving those customers out of the target base altogether. Closing this gap is an important requirement for completely automating the delivery process.

Exploring the limitations that current products have in delivering to buildings, and developing a proof of concept to overcome them, is the goal of this project. In the project, I developed a simple digital last-mile delivery robot simulation. The robot can be dropped near the apartment building in which it has to deliver the package. It can navigate from the designated drop area (outside the building) to the elevator, ride the elevator to reach the required floor, navigate to the customer’s door, and then drop off the package after some verification. For the robot to do this, the target apartment building has to be mapped once before deliveries can run. This requires someone to accompany a robot or a mapping device to map out the floors that have fundamentally distinct layouts (such as a typical floor and the lobby). These maps can then be reused for areas with a similar structure; for example, most floors have the same layout, so only the unique ones need to be mapped. This saves time and money and makes the process easier.

Once the mapping is done, the building becomes eligible for autonomous deliveries. To test this robot, I used an open-source robotics simulator in which I created a simple apartment building with a working elevator. Using this virtual environment, we can simulate the robot in the kind of environment it will have to work in while performing deliveries. The flow is something like the following: a customer places an order; the order details are sent to the warehouse; the warehouse dispatches robots containing packages in a delivery van, which could itself be autonomous; the van drops the robots in the drop area at the required buildings; the robots deliver the packages, come back, and wait for the van to pick them up; the van then picks up the robots and takes them back to the warehouse.
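
That delivery flow lends itself to a simple state machine. As a hedged sketch (the states and names are my own, not the project's code), it might look like this:

```python
# Hedged sketch of the delivery flow described above as a state machine.
# The states and helper names are illustrative, not the project's code.
from enum import Enum, auto

class State(Enum):
    AT_DROP_AREA = auto()
    RIDING_ELEVATOR = auto()
    AT_CUSTOMER_DOOR = auto()
    RETURNING = auto()
    DONE = auto()

def deliver(target_floor: int) -> None:
    state = State.AT_DROP_AREA
    while state is not State.DONE:
        if state is State.AT_DROP_AREA:
            print("navigating from drop area to elevator")
            state = State.RIDING_ELEVATOR
        elif state is State.RIDING_ELEVATOR:
            print(f"riding elevator to floor {target_floor}")
            state = State.AT_CUSTOMER_DOOR
        elif state is State.AT_CUSTOMER_DOOR:
            print("verifying customer and dropping off package")
            state = State.RETURNING
        elif state is State.RETURNING:
            print("returning to drop area to wait for the van")
            state = State.DONE

deliver(target_floor=7)
```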

Having the system be autonomous and highly controllable increases the scope for optimization. One interesting avenue this opens up is the ability to make overnight deliveries: the customer can order something and the product will be at their doorstep in the morning. Deliveries can run at night, which beats the traffic and increases the time available for deliveries. This has the potential to either reduce traffic during the daytime or increase traffic both day and night. As for security, it could actually improve, since the system reduces the influx of people into the building in general. Whatever the case, it is an interesting application with high potential that is being actively explored.

https://www.linkedin.com/in/ammaar-solkar/, https://github.com/ammaar8, https://www.youtube.com/c/AmmaarSolkar

16.25: ExoMy – The 3D Printed Open Source Rover
Miro Voellmy

ExoMy is an educational mobile robot inspired by the Mars rover Rosalind Franklin of the European Space Agency. ExoMy features a self-leveling suspension system with six steerable wheels. All mechanical parts are 3D printed. The electronics consist of a battery, power distribution, an on-board computer, and a camera, all of which are widely available and cost-efficient. ExoMy can be remote controlled using a gamepad or a custom web interface, which also features a camera view. The software is written in Python using the ROS framework to make future expansion easy and is deployed with Docker. The project is published under an open source licence, and comprehensive assembly and installation instructions are provided. ExoMy has successfully been built by over 30 people around the world using only the provided documentation, and it can be an easy-to-use introduction to space robotics for enthusiasts.

https://github.com/esa-prl/ExoMy, https://esa-prl.github.io/ExoMy/, https://discord.gg/c9ebcVfaY7
