While working in my robotics lab, a common situation arises: I want to run a script or train a neural network that may take anywhere from a few minutes to a few hours. I don’t know how long it will take the program to complete, and I don’t want to have to babysit the terminal it’s running in. I’ve found two good ways to alert myself when these scripts complete.
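One simple approach is to wrap the long-running command so a desktop notification fires when it exits. This is just a sketch of the idea, not the exact methods from the post: the `run_and_notify` helper and the use of `notify-send` (libnotify, present on most Linux desktops) are my assumptions.

```python
import subprocess

def run_and_notify(cmd, title="Job finished"):
    """Run a (possibly long) shell command, then pop a desktop alert.

    Assumes notify-send is installed (standard on most Linux
    desktops); the alert is skipped silently if it isn't.
    """
    result = subprocess.run(cmd, shell=True)
    message = f"'{cmd}' exited with code {result.returncode}"
    try:
        # Fire-and-forget notification; don't let a missing notifier
        # mask the real command's exit status.
        subprocess.run(["notify-send", title, message], check=False)
    except FileNotFoundError:
        pass
    return result.returncode
```

For example, `run_and_notify("python train.py")` lets a multi-hour training run alert you the moment it finishes, without babysitting the terminal.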
When I first started working through the ROS tutorials, one of the first annoyances I faced was the sheer number of terminals I was opening.
In my years working with ROS in a research capacity, I’ve used dozens of development machines, and I’ve settled on a few guidelines for picking out laptops that work well with Linux, ROS, and robots. I’ve updated the guide for newer 2019 laptop models.
You can easily use speech recognition to emit ROS messages and control your robots with your voice. In this post, we’ll learn how to install some popular speech recognition libraries on a ROS machine, setting up the Sphinx libraries and custom code developed by human-robot interaction researchers at UT Austin and elsewhere.
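The core of voice control is mapping a recognized transcript to a robot command. The sketch below is hypothetical (the `speech_to_twist` helper and its keyword table are my assumptions, not the UT Austin code): a real node would publish the resulting pair as a `geometry_msgs/Twist` via `rospy`, but the mapping logic is shown here without requiring a ROS installation.

```python
# Hypothetical keyword -> (linear_x, angular_z) velocity mapping.
# In a real ROS node these pairs would fill a geometry_msgs/Twist
# message published on /cmd_vel.
COMMANDS = {
    "forward": (0.2, 0.0),
    "back": (-0.2, 0.0),
    "left": (0.0, 0.5),
    "right": (0.0, -0.5),
    "stop": (0.0, 0.0),
}

def speech_to_twist(transcript):
    """Return the velocity pair for the first keyword found, else None."""
    for word in transcript.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None
```

Keyword spotting like this is deliberately forgiving: "please turn left" and "left" both map to the same command, which suits noisy speech recognition output.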
In our robotics lab at UT Austin, we often use augmented reality (AR) tags to determine the position and orientation of an object. One good ROS package we use for tracking markers is ar_track_alvar, which can track ArUco-style markers (shown below) and calculate their 6D pose. The package makes detecting AR tags as easy as running a roslaunch file (with some slight configuration tweaks).
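A minimal launch file might look like the sketch below. The marker size and the camera topic names are assumptions you would tweak for your own camera and printed tags; the node and parameter names follow the ar_track_alvar package's documented `individualMarkers` interface.

```
<launch>
  <!-- Side length of the printed marker, in centimeters (assumed value). -->
  <arg name="marker_size" default="5.0" />
  <!-- Swap these topics for your camera driver's actual topics. -->
  <arg name="cam_image_topic" default="/camera/image_raw" />
  <arg name="cam_info_topic" default="/camera/camera_info" />
  <arg name="output_frame" default="/camera_link" />

  <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers"
        respawn="false" output="screen"
        args="$(arg marker_size) 0.08 0.2 $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" />
</launch>
```

Once running, the node publishes each detected tag's 6D pose, which you can echo on the `/ar_pose_marker` topic.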