EM058 » Autonomous collaborative robot tour guide using DE10-Nano HPS/FPGA
The goal of this project is to develop a prototype of a collaborative autonomous robot guide (hereinafter referred to as the Robot) for educational use.
To achieve this goal, the development is divided into two main parts: the server part and the robotic part. The server part serves as the interaction interface between the operator and the robot. On the server side, the operator creates a task that is added to an execution queue, after which it is transferred to the robot for execution.
The Robot moves from its current location to the destination, avoiding collisions with static and dynamic objects. After reaching the destination point, it initiates the sequence of actions for the tour.
In an age of technological progress, the automation of human labor is an attractive topic for research and development. A particularly suitable candidate for automation is the work of a guide. Robot guides conduct excursions, help visitors navigate a room, and make attending open events more accessible for people with disabilities. Automating this work delivers information to students more effectively and reaches a larger audience, since the robot performs with the same quality throughout the day. To reach a greater number of visitors, museums already make active use of audio guides. However, audio alone is not always a convenient way to convey information: material is perceived better when the audio is supplemented with visual content. Therefore, multimedia systems that reinforce the narration with prepared slides or video sequences on a screen are gradually replacing plain audio guides. To make information accessible to a larger number of visitors in museums and at open events, we introduce the development of a collaborative robot guide.
The object of the development is an autonomous collaborative robot guide designed to conduct various events without direct human participation in its control. The main board is the DE10-Nano, which combines an HPS and an FPGA on one chip.
The HPS core allows us to implement powerful data-processing algorithms locally. For example, in our project the HPS searches for a detour route when the robot encounters static or dynamic objects.
The FPGA facilitates the implementation of interfaces to various peripherals. For example, in our FPGA design we analyze and average the readings from the ultrasonic sensors.
The aim of the project is to develop a prototype of a collaborative autonomous robot guide that includes the electromechanical component, the software that controls the Robot on the DE10-Nano (FPGA and HPS), and the server-side web interface for the initial setup of the robot's movement path.
To achieve this goal it was necessary to perform the following tasks:
The relevance of the work lies in the fact that a collaborative robot guide capable of conducting tours along a given route with pre-prepared information can serve a large number of visitors. The robot can work continuously until its batteries need recharging, without interruption or deterioration in the quality of the excursion. The cost of maintaining such a robot consists mainly of electricity (charging the batteries for autonomous operation) and routine maintenance, since the robot operates in benign conditions. As a result, this development should improve the excursion process in the city's museums without imposing excessive requirements for the robot's integration and operation.
Figure 1 – Connection diagram of electromechanical equipment.
One of the most important reasons for choosing the SoC DE10-Nano board is that it offers all the advantages of an SoC, combining a productive ARM-based HPS with the flexibility of an FPGA. With the help of the bridges between the HPS and FPGA cores, we were able to split the implementation: low-level logic on the FPGA analyzes the situation around the robot, while the HPS controls the direction and speed of rotation of the wheels. Thus, the route-building and motion algorithms on the HPS core always have access to up-to-date information about the surrounding space and can change the robot's direction of movement without loss of performance.
Server-client communication and on-screen display without a third-party microcomputer are possible thanks to the ARM HPS running Linux that is integrated into the DE10-Nano board. Moreover, the Intel FPGA makes it possible to quickly apply changes to the system design and modify the project.
It is worth mentioning that the DE10-Nano board has a wide range of interfaces and tools, which facilitates the design of a system such as ours, since many peripheral devices are used in the project.
DEMO VIDEO: (ﾉ◕ヮ◕)ﾉ*:･ﾟ✧ https://youtu.be/OVOSQYDFLnA
In our project the development of a prototype of the robotic platform consists of three main stages: designing a model of the robot body, assembling it, and integrating the electromechanical components into it.
Let us consider in more detail each of these stages.
1. Designing the model
We decided that the robotic platform would be collaborative: it has no sharp corners on the body, it automatically avoids collisions, and as a result it is safe for people. It does not require a separate territory dedicated to its movement, which allows us to use it in the same room with people. It does not need special operating conditions, such as a certain temperature or humidity. Finally, it is easy to deploy and integrate the platform in a new room, since essentially only the operator's work on the initial loading of the room map is required.
Accordingly, the prototype platform of the autonomous robot guide meets the following requirements:
As can be seen from the image, there are special arches where the main wheels are located and recesses in the lower part where the auxiliary wheels are installed. Holes for the ultrasonic sensors are prepared in the side walls of the platform. There is enough space for both motors with a motor driver and the power supply they require.
2. The creation of the body and the wheeled platform of the prototype
So that the prototype of the wheeled platform could carry substantial weight for long periods, fiberglass and epoxy were chosen as the main materials for its manufacture.
To relieve strong pressure on the motors transmitted by the mass of the structure, it was decided not to connect the wheels directly to the motors, but to use a universal joint between them.
At the next stage, a mounting bracket made of duralumin strips is installed in the platform. This mount serves as an excellent base for placing the required boards.
Since the robotic system is aimed at interacting with listeners, its height should be convenient for the user. Therefore, stiffeners are attached to the body, which not only increase the height, but also improve the stability of the structure.
3. The design and connection of electromechanical equipment to the body of the prototype
The general connection diagram of the components of the robotic system is shown in the block diagram. Let us consider in more detail each of the components of the robotic system.
Terasic's DE10-Nano was chosen as the main board for implementing the robotic system. The Terasic DE10-Nano is a kit that includes a wide range of peripherals and interfaces for receiving and processing information coming in parallel from a large number of sensors. The expansion connectors and the Arduino interface are suitable for adding sensors with digital, serial, and analog interfaces. The kit's distinctive feature is the Intel Cyclone V SoC FPGA, which combines a dual-core ARM Cortex-A9 processor system and an FPGA programmable fabric on one integrated chip.
In the project HC-SR04 ultrasonic sensors are used. The operation of the chosen sensor is based on the principle of echolocation. It emits sound pulses of a certain frequency into space and within a set period of time receives a signal reflected from an obstacle. The delay time between the emission of the sound pulse and the receipt of the reflected signal determines the distance to the object.
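The delay-to-distance conversion just described is simple enough to state exactly. The helper below is our illustration (the function name and the 20 °C speed-of-sound constant are our choices, not project code): the sensor's echo pulse width is the round-trip time, so the one-way distance is half the round trip at the speed of sound.

```python
# Sketch (ours) of the HC-SR04 echo-delay-to-distance conversion.
# Speed of sound ~343 m/s at 20 C, expressed in cm per microsecond.
SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_cm(echo_us: float) -> float:
    """Convert the echo pulse width (microseconds) to distance in cm.
    Divide by 2 because the pulse travels to the obstacle and back."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2
```

For example, an echo pulse of about 583 µs corresponds to an obstacle roughly 10 cm away.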
Two high-power BTS7960 brushed DC motor drivers are used in the project, one for each motor. The BTS7960 driver can control one brushed motor rated from 5.5 V to 27.5 V at up to 10 A of continuous current. Using the BTS7960, we can control the speed of the motor and its direction of rotation, perform braking, and monitor the load the motor is experiencing.
Two IG-42GM 01 gear motors with a 1/24 gear ratio are used in the project. Their supply voltage for continuous operation is 12 V, the torque is 10 kg·cm, and the speed is 325 rpm. The motors are connected to the motor drivers using wires with special terminals at the ends.
The motors are powered by an FB 7.2-12 Alfa (Alarm Force) battery. Its operating voltage is 12 V, so it is connected to the drivers directly through a switch that can cut off power to the motors. A second FB 7.2-12 Alfa battery powers the boards through a DC-DC buck converter based on the XL4015, which steps the 12 V input down to 5.1 V before it is supplied to the boards, so as not to burn them out.
Cubieboard4 is a fourth-generation single-board computer from Cubietech Limited. It is based on the Allwinner A80, an eight-core A15/A7 big.LITTLE processor operating at 2 GHz, with a 64-core PowerVR G6230 graphics processor. Cubieboard4 is equipped with all standard interfaces and wireless modules: HDMI, VGA, 10M/100M/1G Ethernet, 4 x USB 2.0, 1 x USB 3.0 OTG, audio output, microphone input, dual-band Wi-Fi, Bluetooth 4.0, an IR receiver, and a micro SD card slot. The board can run Linux.
The Cubieboard4 and DE10-Nano boards communicate with each other over an Ethernet cable.
An A4Tech PK-910H is used as the web camera. It is equipped with a 2-megapixel sensor and records video at 1920x1080.
This webcam is connected to the Cubieboard4 board via a USB connector.
An Andrea Electronics Array2-SNA SoundMAX SUPERBEAM stereo microphone with a 3.5 mm jack is used. The stereo effect created by its two microphones increases the intelligibility of the speaker in a noisy environment.
This microphone connects to the Cubieboard4 through the MIC jack.
As speakers in the project, computer speakers Defender SPK-35 are used. These computer speakers are powered by a USB port and do not require an additional power supply. At the same time, there is the ability to control the volume level of the reproduced sound using the volume control on the cable.
These speakers connect to the Cubieboard4 board through the EARPHONE connector.
The Dell UltraSharp P2415Q monitor is used as the screen. Thanks to its 3840x2160 resolution, media files are displayed with a sharp image. A professional IPS matrix with a response time of 6 ms provides high-quality color reproduction, and a wide viewing angle allows the monitor to be viewed from the side without visible distortion.
This screen connects to the Cubieboard4 board via the HDMI connector.
In addition to the fairly obvious advantages of integrating a hard processor system and an FPGA, such as high HPS-FPGA data transfer rates (up to 125 Gb/s), a smaller form factor, reduced power consumption, and a simplified power system, this solution can serve as an entry point into a new technology: for specialists who work with ARM and are interested in FPGAs, and vice versa, for those who work with FPGAs but are interested in ARM processors.
1. Configuring and connecting boards
At the initial stage, Linux is installed on the HPS core of the DE10-Nano board. The bridge address space is mapped into memory, which makes it possible to use the FPGA-to-HPS and HPS-to-FPGA bridges. To manage this memory (reading and writing), a program written in C is used together with hwlib, the library provided by Terasic, which defines the static variables and data structures for interacting with the mapped memory.
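The bridge access described above can be sketched in a few lines. This is our illustration, not the project's hwlib-based C code: on the real board the file would be /dev/mem and `base` the lightweight HPS-to-FPGA bridge base address (0xFF200000 on the Cyclone V); here both are parameters so the sketch can run against an ordinary file.

```python
import mmap
import os
import struct

# Illustrative sketch (ours) of reading/writing a bridge-mapped 32-bit
# register from Linux. On the DE10-Nano, path would be "/dev/mem" and
# base the bridge span base address; any page-aligned file works here.

def write_reg(path: str, base: int, offset: int, value: int) -> None:
    """Write a 32-bit little-endian value at base+offset via mmap."""
    fd = os.open(path, os.O_RDWR | getattr(os, "O_SYNC", 0))
    try:
        with mmap.mmap(fd, 4096, offset=base) as m:
            m[offset:offset + 4] = struct.pack("<I", value)
    finally:
        os.close(fd)

def read_reg(path: str, base: int, offset: int) -> int:
    """Read a 32-bit little-endian value at base+offset via mmap."""
    fd = os.open(path, os.O_RDWR)
    try:
        with mmap.mmap(fd, 4096, offset=base) as m:
            return struct.unpack("<I", m[offset:offset + 4])[0]
    finally:
        os.close(fd)
```

In the actual project this access lives in C and is called from Python through Cython, but the memory-mapped register semantics are the same.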
To call the C code that manages the mapped memory from Python, Cython is used, which simplifies wrapping C/C++ modules for Python.
For convenience and to increase development speed, the collaborative robot control algorithms are developed in the high-level language Python 3.
The two Linux-based boards, the DE10-Nano and the Cubieboard4, are connected to each other via an Ethernet cable.
2. Motor control module
The motors are connected via the motor drivers and controlled by software written in Verilog using the Intel Quartus environment, as well as in the high-level programming language Python.
To implement the logic of the wheeled platform and its interaction with other modules over the network, Linux was brought up on the HPS core of the DE10-Nano board. Communication between the HPS and FPGA units is implemented through logical bridges:
At the next stage, a software module was written in C which transmits a control signal to the motors through the bridge in accordance with a given algorithm. This module is called via Cython during the execution of the main control program.
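The control signal written over the bridge can be modeled as a small command word. The bit layout below is our assumption for illustration (the project's actual encoding is not specified here): two bits per motor, one for each rotation direction.

```python
# Model (ours) of a four-bit motor command word written over the
# HPS-FPGA bridge. Bit layout is a hypothetical example:
# bit0/bit1 = left motor forward/backward, bit2/bit3 = right motor.

LEFT_FWD, LEFT_BACK, RIGHT_FWD, RIGHT_BACK = 0b0001, 0b0010, 0b0100, 0b1000

COMMANDS = {
    "forward":    LEFT_FWD | RIGHT_FWD,    # both wheels forward
    "backward":   LEFT_BACK | RIGHT_BACK,  # both wheels backward
    "turn_left":  LEFT_BACK | RIGHT_FWD,   # wheels in opposite directions
    "turn_right": LEFT_FWD | RIGHT_BACK,
    "stop":       0b0000,                  # all drive bits cleared
}

def command_word(name: str) -> int:
    """Return the 4-bit word for a named movement command."""
    return COMMANDS[name]
```

On the FPGA side, a Verilog module would decode these four bits into PWM/direction signals for the two BTS7960 drivers.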
3. Ultrasonic Sensor Control Module
A software module written for the FPGA in Verilog receives data from all the sensors located around the perimeter of the robot body. It reads several values from each sensor within a set time interval. The values are then averaged for each sensor, and the averages are checked against a threshold to determine the presence or absence of obstacles. Averaging levels out the effect of outliers when measuring the distance to an object. The resulting information is grouped by side of the robot and recorded in a 4-bit array, which is read by the HPS software module. Thus, whenever necessary, the script accesses this array and receives up-to-date information on the presence or absence of obstacles on each of the four sides.
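In the project this averaging and packing is done in Verilog on the FPGA; the following is a software model of the same logic, written by us for illustration. The side ordering, bit assignment, and threshold value are assumptions, not the project's actual parameters.

```python
# Software model (ours) of the FPGA averaging/threshold logic: each
# side's samples are averaged, compared against a threshold, and the
# result packed one bit per side into a 4-bit obstacle word.

THRESHOLD_CM = 30  # assumed obstacle distance; the real value is a design choice

def obstacle_word(readings_by_side: dict) -> int:
    """readings_by_side maps a side name to a list of distance samples (cm).
    Returns a 4-bit integer; a set bit means an obstacle on that side."""
    word = 0
    for bit, side in enumerate(("front", "right", "back", "left")):
        samples = readings_by_side[side]
        avg = sum(samples) / len(samples)  # average out measurement outliers
        if avg < THRESHOLD_CM:
            word |= 1 << bit
    return word
```

The HPS-side script then only needs to read this single word over the bridge to know the obstacle situation on all four sides.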
Reading a value from this memory in a Python script on the HPS works the same way as writing: a C function performs the access and is called via Cython.
4. Map building module
The room map is formed by the user via the web interface. The algorithms for processing user actions, forming the map, and building the route run on the server, imposing no additional load on the robot's hardware. The map is divided into cells of a given size, comparable to the size of the robot. The user sets the size of the room through the interface, after which a field of map cells is generated. An image of the floor plan can be placed in the background of the field for the user's convenience. The operator then paints the cells with specific colors, marking the areas accessible for movement and the static obstacles.
After forming the map, the operator can also mark, with a special color, the points between which movement should be carried out; from these the algorithm constructs the shortest path between the two points. The user also has the option to draw the desired trajectory of movement independently.
The distance between the two marked points is determined using the breadth-first search algorithm.
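A minimal sketch of this breadth-first search over the cell map follows. It is our illustration of the technique, not the server's actual code; the grid encoding (0 = free, 1 = obstacle) and 4-connected movement are assumptions consistent with the cell map described above.

```python
from collections import deque

# Grid BFS sketch (ours): returns the list of cells on a shortest path
# from start to goal, or None if the goal is unreachable.
# grid: list of rows; 0 = free cell, 1 = static obstacle.

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Because BFS explores cells in order of increasing distance, the first time the goal is dequeued the reconstructed path is guaranteed to be a shortest one on the unweighted grid.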
After a route is constructed by the algorithm, all new routes are saved in a queue, waiting for the robot to finish its previous task. The operator can remove a task (arrival point) from the queue.
5. Robot control module
To control the movement of the wheeled platform, a class was written. During initialization, the part of the map received from the server and its dimensions are passed to the class. Additionally, the robot's current direction and its position on the received part of the map can be specified; if these data are not provided, they are determined automatically.
Upon receiving the map, an algorithm is initialized that follows the route marked on the map and forms a sequence array storing which movements (move forward/backward, turn) the robot needs to make and in what order.
Iteration over the resulting array occurs in a separate thread. This solution allows the robot to continue receiving control commands throughout its movement.
6. Obstacle avoidance module
This class calls the C module, which reports the presence or absence of an obstacle in the nearest cells based on the sensor readings. The robot then steps into one of the free cells, builds a route to the nearest point where the original route can be resumed, and continues to move along the new route.
In the program, the cell on which an obstacle appeared is marked as inaccessible. After this, a new route is searched from the robot's current position to the next point on the route, and corrections are made to the sequence array of actions in accordance with the new route.
7. Visitor Interaction Module
All the logic for working with the listener is implemented on the Cubieboard4 board. As on the DE10-Nano, the board constantly waits for incoming commands, so all long-running algorithms are executed in a separate thread.
Upon receiving the number of the point at which the visitor is being served, Cubieboard4 makes a request to the main server and receives JSON data describing which pictures need to be shown, what text to play, and in what sequence this needs to be done.
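A plausible shape for that JSON payload and its handling is sketched below. The field names (`point`, `steps`, `image`, `text`) are our assumption for illustration; the real server format may differ.

```python
import json

# Hypothetical per-point content description (field names are ours).
# Each step pairs a slide to show with the text to speak, in order.
payload = json.loads("""
{
  "point": 3,
  "steps": [
    {"image": "hall.png", "text": "Welcome to the main hall."},
    {"image": "exhibit1.png", "text": "This exhibit shows the first model."}
  ]
}
""")

def playback_plan(data: dict) -> list:
    """Return the ordered (image, text) pairs for one route point."""
    return [(step["image"], step["text"]) for step in data["steps"]]
```

The board would walk this plan step by step: display each image on the screen and hand the text to the speech module.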
If an audio track from the standard list is requested, a file previously saved on the board is simply played back; otherwise a sound file is generated from the received text using Yandex SpeechKit.
If a command arrives to wait for a user to appear, Cubieboard4 begins to receive data from the camera and process it with the image recognition module. If one or several people are detected in the video and do not disappear within a certain period of time, the robot offers to start the tour and, upon obtaining the user's consent, plays the sequence of material for the point at which it is currently located.
8. Face Recognition Module
The video is fed to the recognition algorithm. This process is activated when the robot arrives at a point marked for a lecture and expects a visitor. When the algorithm detects a person in front of the camera who does not leave the location within a certain time interval, the robot offers them to start the interaction.
Face recognition is done using the OpenCV library for Python. The classifiers are based on Haar features using a boosted cascade of simple classifiers.
To assess the performance of the developed collaborative robotic system, we have tested the hardware and software modules and analyzed the results.
1. Testing the ultrasonic sensor control module
Initially, each sensor is individually connected to the board to check its operability and the correctness of the reported values. The resulting array from the ultrasonic sensor is transmitted via the FPGA-HPS bridge to the HPS core, where a C program converts the byte array into an integer equal to the distance to the object.
Next, a series of tests is carried out to find the number of distance measurements required to average the obtained value and level out the influence of erroneous measurements; each sensor has its own degree of error. Empirically, it was established that six measurements are sufficient to obtain correct data from all the ultrasonic sensors. The documentation for the HC-SR04 also recommends a period of at least 50 ms between measurements. Thus, the value for each sensor is updated approximately three times per second.
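The quoted update rate follows directly from those two numbers, as this worked check shows (our arithmetic, using the figures stated above):

```python
# Worked check of the sensor update rate: six pings per averaged value,
# at the recommended minimum of 50 ms between HC-SR04 measurements.
MEASUREMENTS_PER_AVERAGE = 6
PERIOD_S = 0.050  # >= 50 ms between pings, per the sensor documentation

update_period_s = MEASUREMENTS_PER_AVERAGE * PERIOD_S  # 0.3 s per averaged value
updates_per_second = 1 / update_period_s               # ~3.3 updates per second
```

So each sensor's averaged distance refreshes every 0.3 s, i.e. roughly three times per second.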
During measurements, it was revealed that the ultrasonic sensor incorrectly measures the distance to porous and soft surfaces, since they absorb ultrasound well. In the next version of the wheeled platform, infrared sensors should be added in addition to the installed ultrasonic sensors.
2. Testing the motor control module
In order to test the correct connection of the motors, a module has been written in C which writes a four-bit number to the memory of the HPS-FPGA bridge, and a software module in Verilog which, in accordance with the received bits, transmits signals to the pins to which the motor drivers are connected. First of all, it is checked that the motors are connected correctly and rotate in the same direction. After this, a series of tests is carried out:
As a result, it was found that when moving from one type of surface to another, as well as with a large height difference between the wheels, the robot deviates from the main direction of movement to one side or the other. Also, by measuring the voltage supplied to the motors, voltage surges were revealed that cause one of the wheels to rotate faster.
The next version of the wheeled platform should be improved by adding an accelerometer and a gyroscope and implementing a gyrocompass to track the robot's deviations from the original trajectory and return it to the original course.
3. Testing the map building module
Unit tests have been written to test the map building module. The tests are written simultaneously with the module's components, so that changes to the project do not break the already finished parts of the module.
First of all, a module for verifying the route drawn by the operator is written. The matrix received from the operator is passed to the module. The algorithm walks through the cells of the route from the starting point to the final one; if the end of the route is reachable, the constructed map is saved, otherwise an error is shown to the operator. Testing of this module covers cases such as a route crossing itself, multiple routes, reachability of the endpoint, and breaks in the user-drawn route.
Next, an algorithm for automatically constructing a route between points is developed. This module is tested on matrices in which the end point of the route is unreachable, or the start or end points are absent or located in the same map cell. When a map is saved, a unique identifier is assigned to the route endpoint; a test also covers the case of a duplicated identifier.
As a result of the tests, the server functions without errors. In the next version of the project, it is necessary to let the operator set several intermediate stop points, as well as support several layers on one map in order to avoid crossing routes.
4. Testing the obstacle avoidance module
Subsequent testing of the obstacle avoidance module takes into account a number of boundary cases of robot behavior. For example, when the obstacle is at the route point at which a turn was to take place, then after detouring around the obstacle and returning to the route, it is necessary to determine the current and required directions of movement and, if necessary, turn the robotic system around. If an obstacle occupies several cells in a row, then each time the robotic system has to look for a new point at which to return to the route. If the obstacle is at the end point of the route, the robotic system should send information about this error to the operator.
In the next version of the obstacle avoidance module, it is necessary to reduce the size of the cells from which the map is built. There are situations where two adjacent obstacles sit near the centers of their cells without occupying them entirely; in the current version the robot counts each such cell as fully inaccessible. Dividing the cells into smaller ones will allow the robot to pass between such obstacles.
5. Testing the face recognition module
Initially, the face recognition module is tested on a pre-recorded video file rather than a stream from the camera. This simplifies the work, since several head positions are recorded in the video at once, and we do not need to constantly repeat the same actions during testing.
During the tests, it was revealed that face recognition works poorly when the camera is directed upward at the face from below. This is because the training dataset contains photos taken with the lens at face level. This determined the location of the camera on the Robot: the camera is mounted on the pole, approximately at the face height of an average person, 1.7 meters.
The module is able to recognize several faces at the same time. It was found empirically that correct recognition and tracking is preserved while up to five faces are present in the frame. With a larger number of faces, some of them begin to drop out of the recognition zone. Most likely this is due to the low resolution of the camera or to the video stream processing algorithm.
In the new version of the robotic system, recognition of the entire human figure and its parts should be added, so that the presence of a visitor in the frame can be determined not only when they are facing the camera.
Testing the correct operation of all modules together constitutes, in our case, integration testing. During this testing, the correctness of processing input and output information while all modules operate simultaneously is checked.
The process of working with the robot begins with the operator. The operator enters a new map into the system, then adds several routes to it, both by drawing them and with the help of the map building module's automatic construction.
In the next step, the operator assigns pictures, text and sound tracks that will be played at the stopping points. After completing all initial settings, the operator sends a command to start the movement to the Robot (De10-nano board).
After receiving the command to start moving, the robot requests the map from the main server, analyzes it, forms the control sequence of commands, and begins to move.
Upon arrival at the point of interaction with the visitor (the end point of the route), the De10-nano board transfers control to the Cubieboard4. The Cubieboard4, in turn, requests the sequence of content to display to the visitor and waits for the interaction to begin. After the audio files have finished playing and the pictures have been shown, control is transferred back to the De10-nano board, and the process repeats until the route queue on the main server is exhausted or a stop signal arrives from the operator.
This paper proposes a comprehensive solution covering the electromechanical and software parts of a collaborative robotic platform. The prototype is designed to satisfy all the stated requirements. The software consists of independent modules implemented on two different boards. In addition, a main server with a web interface was brought up for interaction with the operator. The server stores all the information necessary for the functioning of the collaborative robotic system, in particular the room map, the constructed routes, and the media files.
Testing of the implemented robotic system was carried out both at the level of software logic and at the level of the assembled prototype based on a prototyping board with FPGA.
A further direction in the development of this work is the implementation of the improvement opportunities identified during testing.