
Applications

This chapter shows some demos using TurtleBot3. In order to run these demos, you have to install the turtlebot3_applications package.

NOTE : TurtleBot3 has been tested on Ubuntu 16.04 and ROS Kinetic Kame.

Tip : The terminal application can be found with the Ubuntu search icon on the top left corner of the screen. The shortcut key for the terminal is Ctrl-Alt-T.

[Remote PC] Go to the ROS source directory (/home/(user_name)/catkin_ws/src) and clone the turtlebot3_applications repository.

$ cd ~/catkin_ws/src
$ git clone https://github.com/ROBOTIS-GIT/turtlebot3_applications.git

[Remote PC] Run catkin_make to build the new package.

$ cd ~/catkin_ws && catkin_make

TurtleBot Follower Demo

NOTE : The follower demo was implemented using only a 360 Laser Distance Sensor LDS-01. A classification algorithm, fitted in advance with sample positions of people and obstacles, is used to decide how to move. The robot follows a person in front of it within a 50 centimeter range and a 140 degree arc.

NOTE : Running the follower demo in an area with obstacles may not work well. Therefore, it is recommended to run the demo in an open area without obstacles.
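
The actual follower logic lives in follower.py, which is run later in this section. Purely as an illustration of the behavior described above, a minimal rospy node that steers toward the nearest scan return inside the 140 degree / 50 cm window might look like the sketch below. This is not the packaged classifier; the topic names cmd_vel and scan are the TurtleBot3 defaults, and the gains are arbitrary.

#!/usr/bin/env python
# Sketch only: follow the nearest object inside a 140 deg / 0.5 m window.
# The packaged follower.py uses a trained classifier instead of this rule.
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class SimpleFollower:
    def __init__(self):
        self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('scan', LaserScan, self.scan_cb)

    def scan_cb(self, scan):
        nearest_range, nearest_angle = float('inf'), 0.0
        for i, r in enumerate(scan.ranges):
            angle = scan.angle_min + i * scan.angle_increment
            # Normalize to [-pi, pi] so "in front" means around 0 rad.
            angle = math.atan2(math.sin(angle), math.cos(angle))
            # Keep only returns inside +/-70 deg and closer than 0.5 m.
            if abs(angle) <= math.radians(70) and 0.05 < r < 0.5 and r < nearest_range:
                nearest_range, nearest_angle = r, angle
        cmd = Twist()
        if nearest_range < 0.5:
            cmd.linear.x = 0.5 * (nearest_range - 0.25)   # keep roughly 25 cm distance
            cmd.angular.z = 1.5 * nearest_angle            # turn toward the target
        self.cmd_pub.publish(cmd)                          # zero Twist = stop

if __name__ == '__main__':
    rospy.init_node('simple_follower')
    SimpleFollower()
    rospy.spin()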

[TurtleBot] In order to run the demo, a parameter in the LIDAR launch file has to be modified. In the example below, Pluma is used to edit the launch file. In the param tag whose name is frame_id, replace the value base_scan with odom and save the file.

$ pluma ~/catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_lidar.launch

NOTE : The TurtleBot follower demo requires the scikit-learn, NumPy, and SciPy packages.

[Remote PC] Install the scikit-learn, NumPy, and SciPy packages with the commands below.

$ sudo apt-get install python-pip
$ sudo pip install -U scikit-learn numpy scipy
$ sudo pip install --upgrade pip

[Remote PC] When the installation is completed, run roscore on the remote PC with the command below.

$ roscore

[TurtleBot] Launch the TurtleBot3 robot bringup.

$ roslaunch turtlebot3_bringup turtlebot3_robot.launch

[Remote PC] Move to the turtlebot3_follower source directory.

$ cd ~/catkin_ws/src/turtlebot3_applications/turtlebot3_follower/scripts

[Remote PC] Launch turtlebot3_follow_filter with the command below.

$ roslaunch turtlebot3_follow_filter turtlebot3_follow_filter.launch

[Remote PC] Run turtlebot3_follower with the command below.

$ rosrun turtlebot3_follower follower.py

TurtleBot Panorama Demo Using Raspberry Pi Camera Module

NOTE : The turtlebot3_panorama demo uses pano_ros to take snapshots and stitch them together into a panoramic image.

NOTE : The panorama demo requires the raspicam_node package to be installed. Instructions for installing this package can be found at the GitHub link.

NOTE : The panorama demo requires the OpenCV and cv_bridge packages to be installed. Instructions for installing OpenCV can be found at the OpenCV tutorial link.
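
pano_ros handles the snapshot collection and stitching internally. Purely as an illustration of the stitching step, OpenCV's Stitcher can combine a set of saved snapshots as in the sketch below; the file names are placeholders and this is not the demo's own code.

# Illustration only: stitch saved snapshots with OpenCV's Stitcher.
# pano_ros does this internally; the file names below are placeholders.
import cv2

images = [cv2.imread(name) for name in ('snap_0.jpg', 'snap_1.jpg', 'snap_2.jpg')]
stitcher = cv2.Stitcher_create()          # on OpenCV 3.x use cv2.createStitcher()
status, pano = stitcher.stitch(images)
if status == 0:                           # 0 means stitching succeeded
    cv2.imwrite('panorama.jpg', pano)
else:
    print('Stitching failed with status %d' % status)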

[TurtleBot] Launch the Raspberry Pi Camera V2 node.

$ roslaunch turtlebot3_bringup turtlebot3_rpicamera.launch

[Remote PC] Launch the panorama demo with the command below.

$ roslaunch turtlebot3_panorama panorama.launch

[Remote PC] To start the panorama demo, enter the command below.

$ rosservice call turtlebot3_panorama/take_pano 0 360.0 30.0 0.3

The parameters sent to the rosservice call are, in order: the mode for taking the pictures, the panorama angle, the snapshot interval, and the rotational velocity.

[Remote PC] To view the resulting image, enter the command below.

$ rqt_image_view image:=/turtlebot3_panorama/panorama

Automatic Parking

NOTE : The turtlebot3_automatic_parking demo uses a 360 Laser Distance Sensor LDS-01 and reflective tape. The LaserScan topic provides intensity and distance data from the LDS, which TurtleBot3 uses to locate the reflective tape.
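
As a rough sketch of how the intensity data can be used, the high-intensity returns from the reflective tape can be filtered out of the scan as shown below. This is not the packaged automatic_parking.py; the intensity threshold is an assumed value and depends on the tape and the environment.

# Sketch: pick out high-intensity LaserScan returns (e.g. from reflective tape).
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan

INTENSITY_THRESHOLD = 200.0   # assumed value, tune for your setup

def scan_cb(scan):
    ranges = np.asarray(scan.ranges)
    intensities = np.asarray(scan.intensities)
    angles = scan.angle_min + np.arange(len(ranges)) * scan.angle_increment
    mask = (intensities > INTENSITY_THRESHOLD) & np.isfinite(ranges)
    if mask.any():
        # Report the average range and bearing of the reflective returns.
        rospy.loginfo('tape at ~%.2f m, %.1f deg',
                      ranges[mask].mean(), np.degrees(angles[mask].mean()))

rospy.init_node('tape_finder')
rospy.Subscriber('scan', LaserScan, scan_cb)
rospy.spin()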

NOTE : The turtlebot3_automatic_parking demo requires the NumPy package.

[Remote PC] Install the NumPy package with the commands below.

$ sudo apt-get install python-pip
$ sudo pip install numpy
$ sudo pip install --upgrade pip

[Remote PC] Move to the turtlebot3_automatic_parking source directory.

$ cd ~/catkin_ws/src/turtlebot3_applications/turtlebot3_automatic_parking/scripts

[Remote PC] Make the script executable.

$ sudo chmod +x automatic_parking.py

[Remote PC] Run roscore.

$ roscore

[TurtleBot] Bring up basic packages to start TurtleBot3 applications.

$ roslaunch turtlebot3_bringup turtlebot3_robot.launch

[Remote PC] If you have a TurtleBot3 Burger,

$ export TURTLEBOT3_MODEL=burger

If you have a TurtleBot3 Waffle,

$ export TURTLEBOT3_MODEL=waffle

[Remote PC] Run RViz.

$ roslaunch turtlebot3_bringup turtlebot3_remote.launch
$ rosrun rviz rviz -d `rospack find turtlebot3_automatic_parking`/rviz/turtlebot3_automatic_parking.rviz

You can select the LaserScan topic in RViz.

[Remote PC] Run turtlebot3_automatic_parking.py.

$ rosrun turtlebot3_automatic_parking automatic_parking.py  


Automatic Parking Vision

NOTE : The turtlebot3_automatic_parking_vision demo uses the Raspberry Pi camera, so the default platform for this demo is the TurtleBot3 Waffle Pi. Since the robot parks by finding an AR marker on a wall, a printed AR marker should be prepared. The whole process relies on the camera image, so if the demo does not work well, adjust camera parameters such as brightness and contrast.

NOTE : The turtlebot3_automatic_parking_vision demo uses a rectified image produced by the image_proc node. To get a rectified image, the robot needs camera calibration data for the Raspberry Pi camera. (The downloaded turtlebot3 packages already include default calibration data for the Raspberry Pi Camera V2.)

NOTE : The turtlebot3_automatic_parking_vision package requires the ar_track_alvar package.

[Remote PC] Install the ar_track_alvar packages with the following commands.

$ sudo apt-get install ros-kinetic-ar-track-alvar
$ sudo apt-get install ros-kinetic-ar-track-alvar-msgs

[Remote PC] Move to the turtlebot3_automatic_parking_vision source directory.

$ cd ~/catkin_ws/src/turtlebot3_applications/turtlebot3_automatic_parking_vision/nodes

[Remote PC] Make the script executable.

$ sudo chmod +x automatic_parking_vision.py

[Remote PC] Run roscore.

$ roscore

[TurtleBot] Bring up basic packages to start TurtleBot3 applications.

$ roslaunch turtlebot3_bringup turtlebot3_robot.launch

[TurtleBot] Start the Raspberry Pi camera node.

$ roslaunch turtlebot3_bringup turtlebot3_rpicamera.launch

[Remote PC] The Raspberry Pi camera package publishes a compressed image for fast communication. However, image rectification in the image_proc node needs a raw image, so the compressed image must be republished as a raw image.

$ rosrun image_transport republish compressed in:=raspicam_node/image raw out:=raspicam_node/image
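
To check that raw images are actually arriving on the republished topic, a small cv_bridge subscriber can help. This is only a sketch for verification; the topic name follows the republish command above.

# Sketch: confirm that raw images arrive on the republished topic.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_cb(msg):
    # Convert the ROS image to an OpenCV array and log its size.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('got %dx%d image', frame.shape[1], frame.shape[0])

rospy.init_node('raw_image_check')
rospy.Subscriber('raspicam_node/image', Image, image_cb)
rospy.spin()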

[Remote PC] Then, the image rectification should be carried out.

$ ROS_NAMESPACE=raspicam_node rosrun image_proc image_proc image_raw:=image _approximate_sync:=true _queue_size:=20

[Remote PC] Now start the AR marker detection. Before running the launch file, export the TurtleBot3 model used by this example. After the launch file runs, RViz will start automatically with a preset configuration.

$ export TURTLEBOT3_MODEL=waffle_pi
$ roslaunch turtlebot3_automatic_parking_vision turtlebot3_automatic_parking_vision.launch
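
Once the launch file is running, ar_track_alvar publishes the detected marker poses. A minimal subscriber that prints the first detected marker's pose could look like the sketch below; /ar_pose_marker is the ar_track_alvar default topic and is assumed here, and this is not part of the parking node itself.

# Sketch: read the AR marker pose published by ar_track_alvar.
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

def markers_cb(msg):
    if msg.markers:
        m = msg.markers[0]
        p = m.pose.pose.position
        rospy.loginfo('marker %d at x=%.2f y=%.2f z=%.2f', m.id, p.x, p.y, p.z)

rospy.init_node('ar_marker_listener')
rospy.Subscriber('/ar_pose_marker', AlvarMarkers, markers_cb)
rospy.spin()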