Hello World App
- 4 Minutes to read
This section explains the code used in the Create your First App section.
It is a simple app meant to introduce a variety of controllers, including navigation, CV (computer vision), and grasping.
In short, we will tell Gary to scan the room for a cup, navigate to it, and raise it in celebration!
```python
from raya.application_base import RayaApplicationBase
from raya.controllers.navigation_controller import ANG_UNIT, POS_UNIT

CAMERA = 'head_front'
TARGET_OBJECT = 'cup'
MAP = 'unity_apartment'
MODEL = 'apartment_objects'
ARM_NAME = 'right_arm'

class RayaApplication(RayaApplicationBase):

    async def setup(self):
        self.motion = await self.enable_controller('motion')
        self.nav = await self.enable_controller('navigation')
        if not await self.nav.set_map(MAP, wait_localization=True, timeout=3.0):
            self.finish_app()
        self.cv = await self.enable_controller('cv')
        self.detector = await self.cv.enable_model(model='detectors',
                                                   type='object',
                                                   name=MODEL,
                                                   source=CAMERA,
                                                   model_params={})
        self.grasp = await self.enable_controller('grasping')

    async def loop(self):
        await self.motion.rotate(angle=360.0,
                                 angular_velocity=10.0,
                                 ang_unit=ANG_UNIT.DEG,
                                 wait=False)
        resp = await self.detector.find_objects([TARGET_OBJECT],
                                                wait=True,
                                                timeout=40.0)
        if resp and self.motion.is_moving():
            await self.motion.cancel_motion()
        else:
            self.finish_app()
        obj_x = resp[0]['center_point_map'][0]
        obj_y = resp[0]['center_point_map'][1]
        await self.sleep(1.0)
        await self.nav.navigate_close_to_position(x=obj_x, y=obj_y,
                                                  pos_unit=POS_UNIT.METERS,
                                                  wait=True)
        await self.sleep(1.0)
        await self.grasp.pick_object(detector_model=MODEL,
                                     source=CAMERA,
                                     object_name=TARGET_OBJECT,
                                     wait=True)
        self.finish_app()

    async def finish(self):
        if self.motion.is_moving():
            await self.motion.cancel_motion()
```
This is the full code. Let's analyze it section by section:
Imports & Constants
```python
from raya.application_base import RayaApplicationBase
from raya.controllers.navigation_controller import ANG_UNIT, POS_UNIT
```
These imports grant us access to crucial functionality. RayaApplicationBase is a staple in every Ra-Ya app, as it embeds all the functions needed for controlling Gary. The main application class must be called RayaApplication and must inherit from RayaApplicationBase; all of the application's functionality should be written inside this class.
The ANG_UNIT and POS_UNIT enumerations specify whether angles are expressed in degrees or radians, and whether positions are expressed in pixels or meters.
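As a quick illustration of why the unit matters when calling motion functions, here is the conversion between the two angle units (plain Python, no Ra-Ya API involved):

```python
import math

# The app rotates 360 degrees; in radians the same angle is 2*pi.
angle_deg = 360.0
angle_rad = math.radians(angle_deg)
print(round(angle_rad, 4))  # 6.2832
```

Passing the wrong unit enum would make the robot rotate roughly 57 times less (or more) than intended, which is why the API forces an explicit choice.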
```python
CAMERA = 'head_front'
TARGET_OBJECT = 'cup'
MAP = 'unity_apartment'
MODEL = 'apartment_objects'
ARM_NAME = 'right_arm'
```
These five constants define the parameters of this application. The camera we'll use to detect objects is head_front, located on Gary's chin. Since we want to detect and pick up a cup, the target object is “cup”, which is recognizable by the “apartment_objects” detection model. Because this program is meant primarily for the apartment scene in the Ra-Ya Simulator, the map is set to “unity_apartment”. Lastly, we arbitrarily set the default arm to the right one.
setup()
```python
async def setup(self):
```
In any Ra-Ya application, the setup() function is called once at the beginning of the app's execution. This function is the place to prepare all the tools the app might use, including enabling controllers and creating listeners.
```python
self.motion = await self.enable_controller('motion')
self.nav = await self.enable_controller('navigation')
if not await self.nav.set_map(MAP, wait_localization=True, timeout=3.0):
    self.finish_app()
```
In order for Gary to move and rotate, we need to enable the motion controller. By also enabling the navigation controller, we let Gary automatically find paths to specific locations, objects, and zones. For this feature to work, the robot must first be localized by calling set_map(); if localization fails, the app simply finishes.
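The app above gives up after a single failed localization. A minimal sketch of a more forgiving pattern, retrying set_map() a few times before finishing, is shown below. The NavStub class and the retry count are hypothetical stand-ins, not the Ra-Ya API:

```python
import asyncio

class NavStub:
    """Hypothetical navigation controller: fails localization twice, then succeeds."""
    def __init__(self):
        self.attempts = 0
    async def set_map(self, map_name, wait_localization=True, timeout=3.0):
        self.attempts += 1
        return self.attempts >= 3  # localized on the third try

async def localize(nav, map_name, retries=3):
    # Retry a few times before giving up, instead of finishing immediately.
    for _ in range(retries):
        if await nav.set_map(map_name, wait_localization=True, timeout=3.0):
            return True
    return False

print(asyncio.run(localize(NavStub(), 'unity_apartment')))  # True
```

In the real app, a False result from localize() would be the point to call finish_app().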
```python
self.cv = await self.enable_controller('cv')
self.detector = await self.cv.enable_model(model='detectors',
                                           type='object',
                                           name=MODEL,
                                           source=CAMERA,
                                           model_params={})
```
Let’s enable the CV controller and the model to start detecting any objects the specified model can recognize.
```python
self.grasp = await self.enable_controller('grasping')
```
The last thing that needs to be set up is the grasping controller, which is responsible for picking up and placing objects using Gary’s hands.
loop()
```python
async def loop(self):
```
The loop() function is the central part of most Ra-Ya applications. It is called repeatedly, so the code inside it keeps running until the application finishes via finish_app() or another interruption.
```python
await self.motion.rotate(angle=360.0,
                         angular_velocity=10.0,
                         ang_unit=ANG_UNIT.DEG,
                         wait=False)
```
For the robot to scan its entire surroundings, it needs to rotate 360 degrees at a reasonable speed (10 degrees per second). However, we also want to scan for the target object, the cup, while the rotation is in progress. Setting the wait parameter to False means the code after that line keeps running while the motion executes, which lets us handle detections during the rotation.
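This start-then-poll pattern can be sketched with plain asyncio. The rotate_360 and find_cup coroutines below are hypothetical stand-ins for the Ra-Ya calls, not the real API:

```python
import asyncio

async def rotate_360():
    # Stand-in for motion.rotate(..., wait=False): a long-running background motion.
    await asyncio.sleep(0.2)

async def find_cup():
    # Stand-in for detector.find_objects(..., wait=True): resolves on detection.
    await asyncio.sleep(0.05)
    return [{'object_name': 'cup'}]

async def scan():
    rotation = asyncio.create_task(rotate_360())  # start rotating without blocking
    resp = await find_cup()                       # meanwhile, wait for a detection
    if resp and not rotation.done():
        rotation.cancel()                         # cup found: stop the scan early
    return resp

resp = asyncio.run(scan())
print(resp[0]['object_name'])  # cup
```

The real app achieves the same effect without create_task because rotate(..., wait=False) already returns immediately while the motion continues on the robot.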
```python
resp = await self.detector.find_objects([TARGET_OBJECT], wait=True, timeout=40.0)
if resp and self.motion.is_moving():
    await self.motion.cancel_motion()
else:
    self.finish_app()
```
This snippet calls the CV model's find_objects() function with the cup as the target object, as specified earlier. The variable resp stores the detection results. If resp is not empty and Gary is still in the scanning motion, a cup has been recognized and we should proceed to the next step. Otherwise, the scan must have finished without detecting a cup, in which case the program simply terminates.
```python
obj_x = resp[0]['center_point_map'][0]
obj_y = resp[0]['center_point_map'][1]
await self.sleep(1.0)
await self.nav.navigate_close_to_position(x=obj_x, y=obj_y,
                                          pos_unit=POS_UNIT.METERS,
                                          wait=True)
```
Now that a cup has been recognized, its coordinates relative to the apartment map are stored in resp and can be accessed like this. We pass these coordinates to navigate_close_to_position() to tell Gary to approach the cup and automatically face it from a close distance.
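To make the indexing concrete, here is a made-up detection entry shaped like the fields the app reads (the values and the object_name field are illustrative, not real detector output):

```python
# Hypothetical detection result with the fields used by the app.
resp = [{
    'object_name': 'cup',
    'center_point_map': [2.4, -1.1],  # (x, y) in the map frame, meters
}]

obj_x = resp[0]['center_point_map'][0]
obj_y = resp[0]['center_point_map'][1]
print(obj_x, obj_y)  # 2.4 -1.1
```

resp[0] is the first (and here only) detection; center_point_map holds the object's position in map coordinates, which is why it can be fed straight into the navigation call with POS_UNIT.METERS.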
```python
await self.grasp.pick_object(detector_model=MODEL,
                             source=CAMERA,
                             object_name=TARGET_OBJECT,
                             wait=True)
self.finish_app()
```
Finally, it is time to grasp the cup! Simply calling the grasping controller's pick_object() function with the appropriate camera name, model name, and target object gives the robot all the information it needs to safely pick up the cup and raise it. Now the app can be terminated using finish_app().
finish()
This function is the place to disable and close any listeners, scanners, and actions that might still be operating.
In this case, all that needs to be done is a safety check that stops any motion still in progress. That's it!
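The finish() handler from the full listing mirrors this logic. The sketch below exercises it with a hypothetical MotionStub standing in for the Ra-Ya motion controller, so the safety check can be seen in isolation:

```python
import asyncio

class MotionStub:
    """Hypothetical stand-in for the Ra-Ya motion controller."""
    def __init__(self):
        self.cancelled = False
    def is_moving(self):
        return not self.cancelled
    async def cancel_motion(self):
        self.cancelled = True

async def finish(motion):
    # Safety check: stop any motion that might still be in progress,
    # mirroring the app's finish() handler.
    if motion.is_moving():
        await motion.cancel_motion()

motion = MotionStub()
asyncio.run(finish(motion))
print(motion.cancelled)  # True: the leftover rotation was cancelled
```

If the app finished mid-rotation (for example, because no cup was found), this is what guarantees Gary does not keep spinning after the app exits.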