Sunday August 1st

Manipulation 10.00 – 12.45 EDT
10.00: Keynote – Rich Walker – Shadow Robot Company
11.00: Continuum Robotics – A fascinating use of mathematical modeling
11.15: Makelangelo art robot
11.30: Coffee Break
11.45: Mimico – The Biomimicking Robot
12.00: CORO – Collaborative Robot Avatar
12.15: Using simple circuits to build powered exoskeletons, and giant robots
12.30: Robotics CoLab – Building Robots to Learn Robotics

Legged Robots 14.00 – 17.15 EDT
14.00: Keynote – Shamel Fahmi – Legged Robots: Overview of the HyQ robots, and how to make them terrain-aware – Italian Institute of Technology
15.00: Wolfie quadruped
15.15: Robotallic: A micro robot using smart materials
15.30: Attempt at converting toy into autonomous robot
15.45: Coffee Break
16.00: k3lso Quadruped
16.30: Sparky the Quadruped: Using ROS/Gazebo to iterate faster
16.45: Damage Recovery for Hexapod Robots
17.00: Quadruped w/3D Printed Recirculating Ball Screws

Manipulation

The robot arm is one of the most quintessential robots in industry. Robot arms come in many forms, and this section showcases the variety of problems our community is trying to solve with manipulation.

Keynote – 10am EDT – Rich Walker – Managing Director at Shadow Robot Company: “From the general to the specific – why we’re building ‘Tactile Telerobots’ and what the ‘sense of touch’ means for robots.”

Schedule
10.00: Keynote – Rich Walker – Shadow Robot Company
11.00: Continuum Robotics – A fascinating use of mathematical modeling
11.15: Makelangelo art robot
11.30: Coffee Break
11.45: Mimico – The Biomimicking Robot
12.00: CORO – Collaborative Robot Avatar
12.15: Using simple circuits to build powered exoskeletons, and giant robots
12.30: Robotics CoLab – Building Robots to Learn Robotics

11.00: Continuum Robotics – A fascinating use of mathematical modeling
John Till

Flexible robots can be created by joining elastic rods. This project studied a robot with six flexible legs attached to linear actuators, resulting in 6-degree-of-freedom control of a robotic wrist. The advanced math describing these rods was discovered by the brothers Eugène and François Cosserat at the turn of the 20th century, and with modern computing power we can solve their equations fast enough to precisely control a robot composed of rods. These flexible robots offer potential in constrained environments and in human-interaction tasks where non-damaging contact is desired. They are also fun because of their fascinating aesthetic and physically tangible application of mathematics. The code for this robot is published on GitHub under an MIT license, although it should be noted that the math is advanced: even with prerequisite knowledge of programming, 3D kinematics, and working with numerical differential equations, it could take ~3 months to understand the kinematics of a robot like this. Still, it is good to be aware of the “continuum” robot paradigm, and designing a robot in this family can be a good project for seasoned roboticists.
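The core of the rod model is compact enough to sketch. Below is a minimal Python illustration (assuming NumPy and SciPy) of integrating the static Cosserat rod ODEs along a single rod from a guessed base wrench; the material values are illustrative assumptions, and a full robot like the one above wraps this integration in a shooting method (e.g. scipy.optimize.fsolve) to satisfy the boundary conditions imposed by the actuators and wrist.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative material/geometry assumptions (not from the talk)
L = 0.5                       # rod length [m]
r = 0.001                     # cross-section radius [m]
E, G = 200e9, 80e9            # Young's / shear moduli [Pa] (steel-like)
A = np.pi * r**2
I = np.pi * r**4 / 4
Kse = np.diag([G*A, G*A, E*A])        # shear/extension stiffness
Kbt = np.diag([E*I, E*I, G*(2*I)])    # bending/torsion stiffness
f_dist = np.array([0, 0, -9.81 * 7800 * A])  # distributed gravity [N/m]

def hat(u):
    """Skew-symmetric matrix so that hat(u) @ v == np.cross(u, v)."""
    return np.array([[0, -u[2], u[1]],
                     [u[2], 0, -u[0]],
                     [-u[1], u[0], 0]])

def rod_ode(s, y):
    # State: position p, rotation R (row-major), internal force n, moment m
    p, R = y[0:3], y[3:12].reshape(3, 3)
    n, m = y[12:15], y[15:18]
    # Constitutive law: strains from internal loads
    v = np.linalg.solve(Kse, R.T @ n) + np.array([0, 0, 1])
    u = np.linalg.solve(Kbt, R.T @ m)
    # Static Cosserat rod equilibrium equations
    ps = R @ v
    Rs = R @ hat(u)
    ns = -f_dist
    ms = -np.cross(ps, n)
    return np.concatenate([ps, Rs.ravel(), ns, ms])

# Integrate from the base with a guessed base wrench
# (this is the inner step a shooting method would iterate on)
y0 = np.concatenate([np.zeros(3), np.eye(3).ravel(),
                     np.array([0.0, 0.0, 1.0]),   # guessed base force [N]
                     np.zeros(3)])                # guessed base moment [N*m]
sol = solve_ivp(rod_ode, (0, L), y0)
print("tip position:", sol.y[0:3, -1])
```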

https://github.com/JohnDTill/ContinuumRobotExamples https://sites.google.com/site/danielcrucker/ https://www.youtube.com/channel/UCp2zUjok9cg41J4M7RxfHGw

11.15: Makelangelo art robot
Dan Royer

Makelangelo is a vertical polargraph plotter: it hangs on a window, wall, whiteboard, or (w)easel, and two motors pull belts to move a pen over a large area. It began as a project to learn how to control motors, and commercial success has helped fund the development of other robot projects.
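The kinematics behind a polargraph are pleasantly simple: each belt length is just the straight-line distance from its top-corner anchor to the pen. A minimal sketch with a hypothetical anchor spacing (the real Makelangelo firmware also handles calibration, acceleration, and step conversion):

```python
import math

# Hypothetical machine dimension: two belt anchors at the top corners.
ANCHOR_SPACING = 1.0  # metres between left and right motor pulleys
# Origin at the left anchor, x to the right, y downward.

def belt_lengths(x, y):
    """Inverse kinematics of a polargraph: pen position -> belt lengths."""
    left = math.hypot(x, y)
    right = math.hypot(ANCHOR_SPACING - x, y)
    return left, right

# Example: a pen 0.5 m across and 0.4 m down gives symmetric belts.
print(belt_lengths(0.5, 0.4))  # (0.640..., 0.640...)
```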

https://discord.gg/Q5TZFmB, https://marginallyclever.com, https://makelangelo.com

Coffee Break

11.45: Mimico – The Biomimicking Robot
Thomas

As humans, we use our arms and hands to explore the world around us and to carry out both simple and complex tasks – sometimes in dangerous or delicate conditions. So why not translate these movements and skills into a robotic arm? My project was to design a robot that could do just that. The robot is called Mimico due to its ability to directly copy the user’s arm movements in real time. It was designed to make it easier to control a robot arm without joysticks, arrow keys, or manually calculated coordinates. Because of this, it is not limited to a single application or field, and the user does not need advanced knowledge of robotics.

The concept can be applied to numerous industries (manufacturing, medical, rescue, social) where the user does not necessarily need to be ‘hands on’ doing the task themselves. For example, this technology could be used by surgeons carrying out operations, and to some degree it is already in use (the da Vinci robot), but not in a human-like style. Due to its human-like movement, Mimico fits better in environments designed for humans and can be controlled using natural movements. The robot can also protect the user from dangerous environments, such as toxic-waste disposal, bomb disposal, and the handling of dangerous compounds, to name a few, as the user can control the robot remotely from anywhere, carrying out tasks as if they were actually present at the location.

To build Mimico, I looked at a range of different methods. I initially considered putting small sensors on my arm to determine the angles of each joint as I moved, but eventually settled on optical motion-capture technology. Whilst this sounds like a complex term, it is actually technology that was very common with household consoles such as the Wii, the Xbox Kinect, and the PlayStation Eye/EyeToy. I went with the Xbox Kinect because it was cheap, had many features to play with, and had plenty of helpful information online. One of its unique features, Skeletal Tracking, allows the Kinect to estimate the location of the user’s joints from the user’s body position. I wrote a custom Windows-based program that reads the user’s joint positions from the Kinect using skeletal tracking and determines the shoulder and elbow angles for both arms. As the Kinect follows the arms, the program can predict what the arms should look like and sends commands for the robot to copy your movement. These predicted arm positions are sent via a USB cable to a custom-built programmable circuit board, where the commands are converted into information Mimico can read. The robot uses a series of motors to set the arm positions at the shoulders and elbows. In simpler terms, the Kinect follows your movements and translates them into data that Mimico can read and follow.

Using Mimico is very simple: it is set up on a table, with the Kinect facing an empty room. One person at a time can stand in front of the sensor – any more and the Kinect won’t be able to pinpoint the right person and therefore the joint locations. That person can then raise their arms, rotate them, bend at the elbow, etc., and Mimico will follow the movements. It is important to note that Mimico does not move at speed: frantic hand waving will only leave Mimico unable to keep up, so it will not copy movements accurately. Mimico is designed to follow slower, deliberate gestures with accuracy, not quick movement – hence its suitability to the aforementioned environments. Mimico is a step along a larger industrial-scale project which could be developed into everyday technology, given the opportunity. For now, however, Mimico will remain my hobby and personal project!
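The write-up does not include code, but the joint-angle step it describes can be sketched generically: given three tracked 3D joint positions from skeletal tracking, the elbow angle is the angle between the two bone vectors meeting at the elbow. A hedged Python illustration with hypothetical coordinates (the actual program is a Windows/Kinect SDK application):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical skeletal-tracking positions (metres, camera frame)
shoulder = np.array([0.20, 0.50, 2.00])
elbow    = np.array([0.45, 0.45, 2.00])
wrist    = np.array([0.45, 0.75, 2.00])

elbow_deg = joint_angle(shoulder, elbow, wrist)
# This angle would then be mapped to a servo command for the robot arm.
print(f"elbow: {elbow_deg:.1f} deg")
```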

Video: https://photos.app.goo.gl/3C9enugppkKaPLJN7

12.00: CORO – Collaborative Robot Avatar
Patrick Deegan

The humanoid robotics community has succeeded in creating remarkable machines and task-level programming tools, but has arguably failed to apply sophisticated autonomous machines to sophisticated tasks. One reason is that this combination leads to prohibitive complexity. Biological systems provide many examples of integrated systems that combine high performance and flexibility with logically organized low-level control. Sophisticated organisms have evolved that depend on physical dexterity to thrive in a particular ecological niche while mitigating computational and behavioral complexity. My goal is to contribute technologies that can support new robotic applications in our culture that require fully integrated dexterous robots in unstructured environments. Personal robotics is an important emerging application that depends on seamlessly integrated and sophisticated machines, controllers, and adaptability. Logically organized representations for use in task-level application development are critical to pull this off. The impact of such technology could be significant, with applications that include healthcare and telemedicine, exploration, emergency response, logistics, and flexible manufacturing.

Humanoid avatar robots are well suited to further these goals. In particular, with the latest learning algorithms in mind, it is now possible to realize human-like skill and behavior in many pick-and-place tasks. For example, using camera input to detect objects and identify grasp points with deep learning can lead to many useful applications. The challenge is collecting enough useful data about objects and their grasp points to successfully train the system. One of the primary motivations behind CORO is to provide a system that enables a human operator to generate a wide variety of datasets by actually controlling the robot avatar to perform these tasks.

CORO resulted from several years of developing configurations of humanoid platforms for various projects. The work began as a team effort between artists and technologists to enable opportunities for radical inclusion, in honor of the vision of Larry Harvey, founder of the Burning Man Festival. IAwake (https://iawake.ai), a not-for-profit supported by donations, was created to continue the STEM+Art mission and the open-source dissemination of tools and equipment that make cutting-edge technologies such as VR, AI, and robotics more accessible to artists and creators exploring new cultural experiences at events such as the Burning Man Festival.

At the beginning of 2020, the latest revision of the robot’s embodiment emerged. CORO was envisioned to bridge across quarantine restrictions to reach loved ones or those in need. In this latest build, I combined some of the 3D printed parts from the Poppy robot (https://www.poppy-project.org/) with additional hardware for an omni-directional rolling base, which provided the structure for mounting additional sensors and compute hardware. The legs can extend to allow the robot to reach objects on a table; CORO is most stable in the sitting position. The base also provides additional payload capacity for planned additions, such as onboard batteries. The head was replaced with a configuration of servos for mounting the Zed Mini (https://www.stereolabs.com/zed-mini/). A set of USB-C and USB-A cables connects to a host PC, along with power for the wheel amplifiers and the 29 independent Dynamixel servos (AX-18, MX-18, MX-28, MX-64).
For software, the robot control system is based on a fork of the ROBOTIS OP3 software (https://github.com/IAwake-AI) and ROS (https://www.ros.org/). The commanded servo positions come from an OpenVR-based application developed on the host PC, where the motion of the VR headset and hand controllers is tracked. The application also renders the 720p projections from CORO’s two cameras, providing low-latency images on the screens in the VR headset. Higher-resolution images were available, but the experience benefited more from the higher 60 Hz update rate, which created a better sense of immersion. Operators also notice a strong sense of embodiment due to CORO’s arm-tracking capabilities. This is most evident when the operator sees the robot’s arms through CORO’s cameras in the VR headset while feeling their own arms in the same position and moving the same way.

Progress continues on collecting data to train various machine learning algorithms, with the goal of enabling more skillful behavior. This is especially useful for as long as VR controllers provide only inaccurate or incomplete whole-hand tracking and haptic feedback. Operator fatigue during complex, repetitive tasks should also be reduced when the robot can manage gripper accuracy more autonomously; for example, the robot may visually servo the gripper to the best position indicated by a deep-learning-based grasp-point classifier while the operator focuses on the gross motion of the higher-level task. Latencies introduced by long transmission distances between avatar and operator can also be mitigated by balancing this autonomy with simulated sensor feedback for both robot and operator. Success in these areas will lead to exponential growth in the application of robotics to improve healthcare and telemedicine, exploration, emergency response, logistics, and flexible manufacturing.
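The exact message interface is not given above, but in a ROS-based system like this, streaming VR-derived joint targets typically looks like a high-rate publisher. A minimal hedged rospy sketch; the topic and joint names are assumptions, not the real CORO/OP3 interface:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('vr_teleop_bridge')
# Hypothetical topic name; the actual CORO/OP3 fork may differ.
pub = rospy.Publisher('/coro/joint_command', JointState, queue_size=1)
rate = rospy.Rate(60)  # matches the 60 Hz update rate described above

while not rospy.is_shutdown():
    cmd = JointState()
    cmd.header.stamp = rospy.Time.now()
    cmd.name = ['r_shoulder_pitch', 'r_elbow']  # assumed joint names
    cmd.position = [0.5, -1.2]                  # radians, from VR tracking
    pub.publish(cmd)
    rate.sleep()
```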

https://www.linkedin.com/posts/patrickadeegan_futureofwork-robotics-vr-activity-6691275439920443392-bIVe/

12.15: Using simple circuits to build powered exoskeletons, and giant robots
Joshua Nye

Using simple parts to create complicated robotic/mechatronic projects like power armor or a giant fighting “robot” that can be driven like a car.

https://www.youtube.com/NyeMechworks

12.30: Robotics CoLab – Building Robots to Learn Robotics
Daniel O’Mara

The project we are presenting is Reachy, an open-source humanoid robot originally designed by Pollen Robotics and built by our Mechlabs students in three 12-week sessions. Mechlabs is a project-based mechatronics education program at Circuit Launch, a community of robotics and hardware electronics companies and enthusiasts. Our version of Reachy is a fully FDM-printed humanoid with two 7-DoF arms built from Dynamixel servos, a pan-tilt neck, and an expressive head, with a sensor stack including force-sensing grippers, two cameras, a spatial microphone, and a Google Coral TPU, all controlled via an Intel NUC running ROS Noetic. We also have a mobile Magni base integrated with autonomous navigation.

Reachy is a platform for our students (both in-person and virtual) to explore robotics concepts in a project-based format. In the first 12-week session we built the torso, head, and one arm, and were the first group to successfully build Reachy outside the original designers.

In the second session, the second arm and neck were built, and full control was integrated with ROS independently, before the developers later released their own ROS integration. The demo project was a concierge robot program that checked visitors’ mask compliance and, through a voice-controlled chatbot, guided visitors to the requested area of our facility, telling jokes while leading the way.

In the third session, full motor control was migrated off the custom microcontrollers Pollen developed and onto an open-source controller program that uses the widely available ROBOTIS U2D2. The students also engaged in a deep exploration of grasping, object identification, and orientation: three differently shaped blocks were placed in front of Reachy, and if you mixed them up, Reachy would put them back in their original locations and order. With Reachy as a learning platform, students were able to create functionality in both hardware and software focused on exactly what each student wanted to learn.
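For readers curious about the U2D2 route: with the official dynamixel_sdk Python package, commanding a single servo takes only a few calls. A minimal sketch, assuming a Protocol 2.0 servo and default port settings (the students’ controller program is considerably more involved):

```python
from dynamixel_sdk import PortHandler, PacketHandler

PORT, BAUD, PROTO = '/dev/ttyUSB0', 1000000, 2.0   # assumed U2D2 settings
ADDR_TORQUE_ENABLE, ADDR_GOAL_POSITION = 64, 116   # MX(2.0)/X control table
DXL_ID = 1                                          # assumed servo ID

port = PortHandler(PORT)
packet = PacketHandler(PROTO)
port.openPort()
port.setBaudRate(BAUD)

# Enable torque, then command the midpoint of the 0-4095 position range.
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 2048)
port.closePort()
```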

All of their work is a contribution to the open-source community and makes learning robotics a hands-on experience on accessible hardware. Mechlabs is the real experiment: the students learn by doing, building not on what experts have built, but on what they, and students before them, have accomplished. The students leave with actual robotics experience rather than theory or a stale curriculum. Most of what the students have accomplished has been completely original, with the mentors only guiding rather than having the students recreate what has been done before. We are very proud of what a very diverse student group, often with little prior robotics experience, has been able to build. This is what we are excited to demonstrate.

Video: https://wiki.circuitlaunch.com/signed/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fd4cbe0fd-bb37-4a0d-9572-220eb0225994%2FCoLabDemo_Q1.21.mp4?table=block&id=b0157173-e040-4960-a240-00151cae8aac&spaceId=3a71977f-a3d0-46dd-9f69-c96e2262b9e0&name=CoLabDemo_Q1.21.mp4&cache=v2

https://wiki.circuitlaunch.com/Reachy-Open-Source-Humanoid-Robot-e96c825495b347b2b1322ade83df8e9e https://blog.circuitlaunch.com/weve-reached-the-end/ https://github.com/CircuitLaunch

Legged Robots

One of the most enticing concepts in robotics is the notion of a walking robot. The last decade has shown an incredible surge in the development and availability of these platforms to professionals and enthusiasts alike.

Keynote – 2pm EDT – Dr Shamel Fahmi – Legged Robots: Overview of the HyQ robots, and how to make them terrain-aware – Postdoctoral researcher at the Dynamic Legged Systems lab of the Istituto Italiano di Tecnologia (IIT)

Schedule
14.00: Keynote – Shamel Fahmi – PostDoc Researcher at the Dynamic Legged Systems lab of the Italian Institute of Technology
15.00: Wolfie quadruped
15.15: Robotallic: A micro robot using smart materials
15.30: Attempt at converting toy into autonomous robot
15.45: Coffee Break
16.00: k3lso Quadruped
16.30: Sparky the Quadruped: Using ROS/Gazebo to iterate faster
16.45: Damage Recovery for Hexapod Robots
17.00: Quadruped w/3D Printed Recirculating Ball Screws

15.00: Wolfie Quadruped
Piotrek Wasilewski

The project is a small (sub-5 kg) quadrupedal robot based on custom brushless actuators. We have all seen what Boston Dynamics’ Spot, Unitree’s A1, or the MIT Mini Cheetah are capable of. High-power motors allow them to perform many types of gaits, including ones with a flight phase, as well as jumping or dancing. These incredible robots are very complex and expensive, with teams of engineers constantly working to improve them. On the other hand, we can find servo-based imitations that are definitely great fun, although they lack agility; usually they are not equipped with force sensors, so there is no place for more complicated walking algorithms. My goal was to design a quadruped almost as small as the servo-based ones while preserving the high performance that bigger machines feature. The small size is mostly dictated by cost and the space I have available for prototyping and testing; moreover, smaller robots are safer in case something goes wrong. The project is still ongoing; however, I believe I can show some interesting results already.

The idea was born a few years back, when brushless drives started becoming more accessible. I decided to try building my own controller, mostly for the educational aspects. The first iterations weren’t perfect, but I eventually managed to create a small functional controller. Later on, inspired by the MIT Mini Cheetah, I created a prototype of a single actuator with a planetary gear reducer. I wanted to keep the modular nature of the actuators so that they could be replaced quite easily in case of failure. Each actuator weighs around 210 g and can produce up to 3 Nm of torque. To validate performance and durability, I built a test stand with a single robot leg. The goal was to perform repetitive jumping motions using the prototype leg and see if all the components survived the test. After over one thousand jumps, the leg structure seemed to be intact, as did the actuators and their gear sets. This was my green light for making the remaining nine actuators and the rest of the robot. Currently, I have finished machining the aluminium parts needed for the actuators and started printing parts for the legs and the robot’s torso.

In the end, I guess it’s worth mentioning why someone would build such a machine. Overall, it is a big project consuming enormous amounts of time, and some people consider walking robots not very useful. My answer would be that it’s mostly to learn new things and get to know new people. Walking robots combine numerous fields of science, so they’re great for trying out what suits you best. In my case, I have expanded my knowledge of electronics and motor control by building the actuators from the ground up. In the meantime, I have created a small desktop CNC machine for prototyping purposes, so I picked up basic machining as well. I hope to get involved in walking-robot algorithms even more in the next few months, and eventually try them out on the finished robot. The project is still in progress, so there’s a lot more to be learned!
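The post does not describe the joint control law, but Mini Cheetah-style actuators are commonly driven with a PD-plus-feedforward torque command computed at high rate. A hedged sketch of that scheme, with illustrative gains and the 3 Nm limit from above:

```python
MAX_TORQUE = 3.0  # Nm, per the actuator spec above

def joint_torque(q_des, qd_des, q, qd, tau_ff=0.0, kp=20.0, kd=0.5):
    """PD + feedforward torque command, the scheme commonly used on
    Mini Cheetah-style actuators. Gains here are illustrative only."""
    tau = kp * (q_des - q) + kd * (qd_des - qd) + tau_ff
    # Clamp to the actuator's torque limit.
    return max(-MAX_TORQUE, min(MAX_TORQUE, tau))

# Example: hold a knee at 0.8 rad while it currently sits at 0.6 rad.
print(joint_torque(q_des=0.8, qd_des=0.0, q=0.6, qd=0.1))  # saturates at 3.0
```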

https://pwwprojects.blogspot.com/search/label/Wolfie https://www.instagram.com/klonyyy/ https://hackaday.io/project/175753-wolfie

15.15: Robotallic: A micro robot using smart materials
Haider

Small and micro robots are robots where small size and light weight are the main design goals. They are made with applications in mind where small robots are needed to reach places that bigger robots cannot, and many ideas for making small robots are emerging rapidly.

Robotallic is a micro robot that uses smart materials as actuators. The name “Robotallic” was derived from “bimetallic” and “robot”; the robot locomotes and can be controlled with a computer. The project goal is to explore ways to make a robot move using inexpensive smart materials that are easily accessible on the market. The robot’s current version is 3.5 cm x 2 cm x 2.8 cm, as seen in the video. As of now it is not fast, with a speed of 1.75 cm/min, and it has only a basic walking gait; but as the robot is updated with new ways to walk faster and become stronger relative to its size, it should be able to drag, lift, and push things many times bigger than itself. Early tests show that a 2 x 1.4 cm piece of this muscle can drag 27 times its own weight; how that will translate to the full robot is yet to be seen.

With such capabilities, these types of robots will find their potential on the market in the future. Since Robotallic is a simple system, it should be able to explore places where humidity and temperature are high or low without breaking. Other examples include studying insects, since its size is similar to that of an insect, swarm-behavior studies, and rescue operations: everyone is talking about small robots, but few have thought of one that could push an object out of the way of their tiny robots, and in a destroyed building, for example, most of the debris consists of small objects blocking the way of these small robots. Robotallic is also a platform through which one can explore the applications of these materials and to what extent they could be used in the future of micro robotics and in robotics in general. What I present in this work is a first step that I hope will encourage more people to start exploring the amazing world of micro robotics. “A journey of a thousand miles begins with a single step” – Laozi.

https://www.youtube.com/channel/UCHq4zc2_qgi7hVPncwy5IDg https://www.mechanical-cell.space/ https://www.thingiverse.com/mechanical_cell/designs

15.30: Attempt at converting toy into autonomous robot
Kevin Chow

I have modified a Hexbug Spider XL into a Raspberry Pi-controlled robot. While it is still a work in progress, this project demonstrates how a children’s toy can be turned into a robot. The Hexbug Spider XL is a remote-controlled walking hexapod toy. What’s great about this toy is that it contains only two motors (one for moving forward and backward and one for turning), which makes repurposing the Hexbug relatively easy and lets me make a walking robot without worrying about the individual movements of each leg. Not many people have attempted to control the Hexbug Spider XL with a Raspberry Pi; most of the hacks I have seen on the Internet use an Arduino.

I have rewired the motors of the Hexbug to be controlled by a motor driver that I added on top of a Raspberry Pi 4. The Raspberry Pi runs 64-bit Ubuntu MATE 20.04 with ROS Noetic installed. It sits on a 3D printed platform attached to the body of the Hexbug. The Raspberry Pi itself is powered by a Li-ion battery while the motors are powered by rechargeable NiMH batteries. I have also attached an Intel RealSense D435i camera and an Intel RealSense T265 camera to the Raspberry Pi. Originally, I had wanted to use both cameras simultaneously to help the Hexbug map its surroundings and navigate autonomously: the D435i would provide visual features and depth, while the T265 would provide odometry (how fast the Hexbug is moving and the distance it has traveled). However, I could not get the T265 to work alongside the D435i without the T265 crashing, so I resorted to using just the D435i for now.

In my previous demo video, I showed the Hexbug being teleoperated. In my recent demo video, I used the depth-sensing capabilities of the D435i to have the Hexbug move forward until it sees that it is too close to an obstacle directly in front of it (as reported by the D435i), upon which it turns left. For future work, I am planning to look into better ways to use the RealSense cameras to give the robot better perception and navigation. In addition, I will probably redesign the mount that holds the cameras, because the Hexbug currently sometimes tips over, and I suspect it does so because the cameras are too far forward of the Hexbug’s center. Overall, this toy modification was a great way to make an interesting robot.
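The obstacle-avoidance behavior described above maps to a short loop with the official pyrealsense2 bindings: read a depth frame, sample the distance straight ahead, and switch between driving and turning. A hedged sketch; the threshold and motor-driver hooks are hypothetical placeholders:

```python
import pyrealsense2 as rs

STOP_DISTANCE = 0.4  # metres; this threshold is an assumption

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

def drive_forward(): pass  # hypothetical motor-driver hooks
def turn_left(): pass

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Distance straight ahead, sampled at the centre pixel
        ahead = depth.get_distance(320, 240)
        if 0.0 < ahead < STOP_DISTANCE:  # 0.0 means no valid reading
            turn_left()
        else:
            drive_forward()
finally:
    pipeline.stop()
```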

Video: https://www.reddit.com/link/o3zu2f/video/pk15q7nwbd671/player

Previous submission: https://www.reddit.com/r/robotics/comments/myt3kb/attempt_at_controlling_a_hexbug_spider_xl_with_a/

Coffee Break

16.00: k3lso Quadruped
Robin Fröjd

A medium-sized 12-DoF quadruped expected to perform similarly to the MIT Cheetah robots. Its size is roughly halfway between the MIT Mini Cheetah and MIT Cheetah 3, with a mass of approximately 18 kg including the battery.

https://twitter.com/r_frojd, https://hackaday.io/project/176487-k3lso-quadruped

16.30: Sparky the Quadruped: Using ROS/Gazebo to iterate faster
Gary Lvov

My project has been creating Sparky, a robot that walks on four legs, from scratch as part of an independent study on robotics in my senior year of high school. In six months, I have been able to design, build, program, simulate, and test six different iterations of my design with countless configurations of how the robot walks (gait configurations), thanks to several open-source tools that streamlined the development of the bot. Most notably, I have used the Robot Operating System (ROS), the Gazebo simulator, and the CHAMP framework for quadrupedal robots.

Sparky is actuated by twelve motors. It also carries an Arduino-based board and a Raspberry Pi: the Arduino-based board sends commands and power directly to the twelve servo motors to make the robot walk, and receives the motor commands from the Raspberry Pi. The Raspberry Pi communicates with a laptop over Wi-Fi. The laptop uses CHAMP to calculate the movements of each motor needed to walk, then sends them to the Raspberry Pi and, in turn, the Arduino and motors. As a result, the robot is fully wireless and can be controlled remotely from the laptop’s keyboard. The robot also has a lidar sensor and a camera feed that are both streamed to the laptop, and the lidar data, together with the positions of its motors, is used to determine the robot’s position relative to its environment. ROS enables the communication between the motors, Arduino, Raspberry Pi, laptop, and sensors, so that they are all able to talk to each other. CHAMP runs on the laptop, where it interprets all sensor data, motor positions, and user commands, and produces movement commands for each individual motor so that the robot walks as desired, along with an estimate of the robot’s position.

For each iteration of the design, I would create a 3D model of Sparky in Autodesk Fusion 360, set up the CHAMP library for the design, and use Gazebo to simulate the robot to help refine the walking/gait parameters. If the robot wouldn’t work in Gazebo even after configuration refinement, I’d start over with a new design. Otherwise, I’d build the robot in real life and again tune the gait, since simulation isn’t exactly indicative of what happens in the real world. Then, by watching the robot in real life, I would come up with improvements for the next design.

As a result of the pandemic, I haven’t had easy access to a 3D printer, which led to an interesting design constraint for the construction of the quadruped. I worked through this constraint by incorporating eclectic kit parts into my design and creating parts on a CNC router that anyone with basic hand tools could easily recreate. My favorite design aspect is Sparky’s legs/feet, which are just quarter-inch wooden dowels inserted into rubber tips meant for hiking poles, held together only by an interference fit. The entire quadruped is made from about $900 worth of materials, with the bulk of the cost coming from the 12 Dynamixel servo motors.

Ultimately, by using ROS, Gazebo, CHAMP, and other open-source tools to iterate faster, programming the quadruped to actually walk only involved writing about 100 lines of new code, installing multiple libraries, and minimally tweaking existing configuration files, although it did take several design iterations for the robot itself to become fully stable. Only by using open-source libraries was I able to tackle the seemingly impossible task of creating and refining a quadruped robot in under six months.
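Part of what makes CHAMP convenient is that, once configured, the walking controller accepts standard geometry_msgs/Twist velocity commands, so driving the robot is a plain ROS publisher. A minimal hedged sketch (topic name per the usual cmd_vel convention; the speed is an arbitrary example):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('sparky_teleop')
pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)

cmd = Twist()
cmd.linear.x = 0.15   # m/s forward; a conservative speed for a hobby quadruped
cmd.angular.z = 0.0   # rad/s yaw

while not rospy.is_shutdown():
    pub.publish(cmd)  # CHAMP converts body velocity into per-leg gait motion
    rate.sleep()
```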

Video: https://drive.google.com/file/d/1OmvBdytu3EWvsaCMV20htjvBw0lzvJC8/view?usp=sharing

https://www.garylvov.com/ https://www.youtube.com/channel/UCC3ApHQWzKKH0EaZwGiQIEA https://github.com/garylvov

16.45: Damage Recovery for Hexapod Robots
Tze Hank Chia

The hexapod robot has a built-in damage-recovery system that uses an evolutionary algorithm called MAP-Elites together with Bayesian optimization, a probabilistic model-based optimizer, to select the best walking gait from over 10,000 solutions (generated in simulation by the evolutionary algorithm) within a limited number of trials (fewer than 20). The robot only requires the traveled distance as a measurement of its performance; it then estimates which solution is most likely to improve on its current gait. After trying out the new walking gait, it updates its estimate of the probability distribution over the solutions (i.e., which one is most likely better), and a new walking gait is selected again. The advantage of this approach is that common recovery methods use reinforcement learning, which consumes a lot of computational power and an extensive number of trials (up to 100), which may be too slow for a robot that must adapt in a hostile environment. Also, pre-designed contingency plans may be limited by the vision of the creator, and require a lot of costly sensors to estimate the damage and the robot’s performance.

The robot also has an object-detection algorithm that uses feature matching on an image to calculate the target’s position in the video. If the image in the camera is to the left, the robot turns in that direction; if the image is too near, the robot moves backwards, and vice versa. The object detection runs on an Android phone and was built using Vuforia, Unity, and C#, and the commands (turn left, turn right, etc.) are sent over Wi-Fi (UDP) to the robot’s onboard microcontroller (an ESP32). The main motors (G15 servos) are from a robotics kit known as RERO and are controlled by another microcontroller (an Arduino Mega). The ESP32 acts as a Wi-Fi module that receives incoming commands and tells the Arduino Mega which motor to actuate.
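A hedged sketch of that adaptation loop, in the spirit of Cully et al.’s Intelligent Trial and Error: a Gaussian process models how real trials deviate from the simulated map, and an upper-confidence-bound score picks the next gait to try. The map and trial function below are random placeholders, not the robot’s actual data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Placeholder map: 10,000 gaits evolved offline by MAP-Elites, each a
# parameter vector with a simulated travel-distance prediction.
gaits = rng.random((10_000, 6))
map_perf = rng.random(10_000)

def trial(gait):
    """Run one real-world trial, return distance travelled (stubbed here)."""
    return float(rng.random())

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3)
X, y = [], []
for _ in range(20):                       # fewer than 20 hardware trials
    if X:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(gaits, return_std=True)
    else:
        mu, sigma = np.zeros(len(gaits)), np.ones(len(gaits))
    # UCB acquisition: map prior corrected by what real trials revealed
    score = map_perf + mu + 0.05 * sigma
    best = int(np.argmax(score))
    d = trial(gaits[best])
    X.append(gaits[best])
    y.append(d - map_perf[best])          # GP models the sim-to-real residual
    if d > 0.9 * map_perf.max():          # stop once performance recovers
        break
```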

Video: https://drive.google.com/file/d/13la4K_Nv69rVbxWKRxIqKF-gEd-5g-cH/view?usp=sharing

https://hankrobot.wordpress.com/ https://github.com/HankRobot https://www.linkedin.com/in/tzehankchia/

17.00: Quadruped with 3D Printed Recirculating Ball Screws
Jeff Underly

I’ve been enamored with Boston Dynamics’ robots since first seeing them more than 10 years ago, but I only really started thinking about making my own when inspired by James Bruton’s OpenDog project on YouTube. His project rationalized a few key concepts that made the build feasible for me, and I began thinking of how to make my own quadruped robot with cost and DIY as my primary considerations. I had the time, so I began designing and printing anything that I could. While a BLDC motor, direct-drive or with a gear reduction, is the ideal joint drive, cost drove me to explore other options. Eventually I landed on gearmotors driving a recirculating ball-screw actuator of my own design that is almost entirely 3D printed. Mechanically I was comfortable with the design side, but this project would significantly push my understanding of embedded programming, which is my main goal. Professionally I’m a mechanical design engineer with a desire to pivot to embedded software engineering or mechatronics. Hobbyist robotics gives me an outlet to pick a project, then develop a roadmap of several new concepts and tools to learn, understand, and implement in an integrated system.

My quadruped utilizes twelve 100-150 mm recirculating ball-screw actuators of my own design. They contain minimum and maximum travel limit switches with provisions for a directly geared encoder. The actuators use 1/8″ Delrin bearing balls on a 4.5 mm pitch, 12 mm pitch-diameter recirculating ball screw with an 11/20-tooth printed T5 belt reduction. I chose a 550 rpm @ 12 V gearmotor, which yields roughly 23 mm/s actuation speed at no load. The leg joints use the same 1/8″ bearings, while the shoulder joints use 6 mm plastic bearings due to the higher moment loads. All joint positions are measured with AMS AS5600 magnetic angle sensors.

The primary controller is an ESP8266, chosen for its Wi-Fi connectivity and computation capacity. For expanded I/O, two STM32F103s handle the six actuators and magnetic angle joint-position sensors of the front and rear halves respectively. The microcontrollers and AS5600 sensors communicate over a multi-bus I2C network. After reviewing several options, I chose the Blynk app as the control interface; Blynk offers a flexible, feature-rich, and simple-to-implement way to pass many control signals over Wi-Fi to the ESP8266. The ESP8266 receives all Blynk control signals and computes the inverse kinematics that generate joint angle setpoints. These setpoints are sent to the respective STM32s, which handle all of the actuator PID positioning.

My primary goal with this project was to push my embedded programming knowledge and to build a foundation for moving to higher-level robotic control systems and sensing. Subjects such as ROS, lidar, environmental awareness, and path-finding are future stretch goals.
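The quoted ~23 mm/s no-load speed follows directly from the listed numbers, which is easy to sanity-check:

```python
motor_rpm = 550          # no-load speed at 12 V
belt_ratio = 11 / 20     # motor pulley teeth / screw pulley teeth
screw_pitch = 4.5        # mm of travel per screw revolution

speed_mm_s = motor_rpm / 60 * belt_ratio * screw_pitch
print(f"{speed_mm_s:.1f} mm/s")  # ~22.7 mm/s, matching the ~23 mm/s above
```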

Video: https://photos.app.goo.gl/DSsRsgbe4zha2qQw9

https://www.thingiverse.com/thing:4890725 https://autode.sk/3gKsfRK https://github.com/DesignCell
