
Getting Started

IndyEye is a cost-effective vision solution for Indy that includes deep-learning modules for object detection (for pick and place) and classification (for inspection). IndyEye also provides automated data collection and a remote-training service to make it easy to develop custom vision algorithms.

Components

IndyEye is distributed as an add-on of IndyCB. If your IndyCB supports IndyEye, you can see the IndyEye logo and the Eye-support USB port on your IndyCB.

IndyEye also includes camera accessories.


Camera accessories of IndyEye

Accessories
  • Camera (1)
  • Lens (2)
  • USB cable (3)
  • Calibration sheet (4)
  • Flange (5)
  • Camera mount (6)
  • Sheet mount (7)
  • Bolts and pins (8): 8 M4×8 bolts, 3 M3×14 bolts, 2 M3×8 pins, 2 M4×6 pins, 3 M4 washers

Installation

  1. Assemble the camera, lens and camera mount, using 3 M3×14 bolts.
  2. Attach the flange to the end joint of Indy, using 2 M4×6 pins and 4 M4×8 bolts.
  3. Attach the camera mount to the flange, using 2 M3×8 pins and 1 M4×8 bolt.
  4. Connect the camera to the Eye-support USB port of IndyCB, using the USB cable.


IndyEye installed on Indy7

Accessing IndyEye

  1. After the camera assembly is fully assembled, connect the Ethernet port of IndyCB to the local network and power on the IndyCB.
  2. IndyEye is set up through a separate web UI. To connect to the web UI, find the IP addresses of IndyEye and IndyCB using a 3rd-party app such as an IP scanner. To identify IndyCB, look for an item named "STEP-TP". To identify IndyEye, look for an item whose manufacturer is NVIDIA.
  3. When IndyEye and IndyCB are turned on, open a web browser and go to <IndyEye IP Address>:8088 to open the IndyEye web UI.

IndyEye web UI

The left side of the IndyEye web UI is the camera panel; images captured from the camera are displayed here. On the right side there are 5 tabs: Robot, Calibration, Data, Detection and Pick.


IndyEye web UI

Connecting to camera

The camera is connected automatically. Click the Connect button only when the connection is lost or you want to reset the exposure setting. The Capture button shows the current view. Check vid to get a continuous video stream from the camera. Adjust the lens focus to get a clear image.

Connecting to robot

  1. On the Robot tab, enter the name and IP address of the robot.
  2. Click Connect.

Control robot

  • When the robot is connected, you can see 7 edit boxes to control the camera viewpoint.
Value Description Unit
x x-axis position of focal point mm
y y-axis position of focal point mm
z z-axis position of focal point mm
u 1st Euler angle θz degree
v 2nd Euler angle θx degree
w 3rd Euler angle θz degree
d distance from focal point to camera mm
  • The functions of buttons on the Robot tab are summarized below.
Button Description
Get Get current camera viewpoint
Move Move camera to the viewpoint represented by the edit boxes above
Start DT Start Direct Teaching Mode
Stop DT Stop Direct Teaching Mode
Home Move to Home position
Disconnect Disconnect from the robot
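As an illustration of how the seven viewpoint values above can define a camera pose, the sketch below composes the three Euler rotations in z-x-z order (as the table suggests) and places the camera at distance d from the focal point along the rotated optical axis. This is a hedged sketch, not the IndyEye implementation; the rotation order and the sign of the offset are assumptions.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def camera_position(x, y, z, u, v, w, d):
    """Place the camera d mm from the focal point (x, y, z), along the
    optical axis given by z-x-z Euler angles u, v, w (degrees).
    The sign of the offset is an illustrative assumption."""
    u, v, w = (math.radians(a) for a in (u, v, w))
    R = matmul(matmul(rot_z(u), rot_x(v)), rot_z(w))
    axis = [R[i][2] for i in range(3)]  # third column: rotated z-axis
    return [x - d * axis[0], y - d * axis[1], z - d * axis[2]]
```

Under this sign convention, with all angles zero the camera sits offset from the focal point by d along the negative z-axis.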

Calibration

Calibration of mounted camera

The camera parameters (focal length, center offset, distortion) and the offset from the end-effector should be calibrated by taking pictures of the Calibration sheet from various viewpoints.


Calibration tab

  1. Move the robot to its home position and open the Calibration tab. Set Calibrate with to "Mounted camera".
  2. Put the calibration sheet in the working space. Make sure that the sheet is clearly visible from the camera.


    Location of calibration sheet and view from camera

  3. Make a seed view list to indicate the motion range for the camera calibration process.
    • Click Clear to empty the viewpoint list.
    • Move the camera to seed viewpoints and add each viewpoint to the list by clicking the Add button. You can use Direct Teaching or another method. (Warning: avoid rotating the last joint too much.)


      4 example seed viewpoints

    • The view list can be saved and loaded with the save and load buttons.
    • Click Blend to to blend the viewpoints. This generates new viewpoints inside the area defined by the seed viewpoints. The number of generated viewpoints can be edited in the box on the right side of the Blend to button.


      4 seed view list and blended view list

  4. Click Calibrate to start the calibration. The robot moves automatically and takes pictures of the calibration sheet. After taking the pictures, the robot stops and IndyEye computes the camera calibration. The computation takes under 1 minute. Wait until the UI flashes "Calibration is done".
  5. Click Save Params to save the calibration result. In the browser window, enter a file name and click OK. The default file name loaded on start-up is "default_cam.yml".


    Enter name for calibration file

  6. After calibration, the camera distortion can be compensated. Click the button labeled "Uncorrected" on the camera panel. The text on the button changes to "Undistorted" and the camera distortion is compensated.

Workspace

After calibration is done, set the workspace using the following sequence.

  1. Put the calibration sheet on the workspace and aim the camera at the calibration sheet.
    • Warning: Set the x-axis of the calibration sheet and the x-axis of the Indy end-effector in opposite directions (figure below). During the picking task, the end-effector orientation will be chosen to be similar to this orientation.
  2. Click Detect WS. The Workspace will be displayed on the camera image.


    Detected workspace(left) and relation between Indy and workspace(right)

  3. Edit the workspace. There are 9 edit boxes which control the position, orientation and dimensions of the workspace. Click Set WS to apply the changes.
  4. Click Save Params to save workspace.

Calibration of fixed camera

  1. Install camera on a fixed location.
  2. Set Calibrate with to "Fixed Camera".
  3. Attach the calibration sheet to the end joint of Indy using the sheet mount, flange, 8 M4×8 bolts, 2 M4×6 pins, 2 M3×8 pins and 3 M4 washers.


    Calibration of fixed camera

  4. Make a seed view list and blend it into many views. This time the calibration sheet moves while the camera is fixed. Make sure the calibration sheet is on the screen in every seed view.
  5. Click Calibrate to start the calibration. Wait until the UI flashes "Calibration is done".
  6. Click Save Params to save calibration result.
  7. Detach calibration sheet from Indy and go to Workspace section to set workspace.

Data collecting

Data collecting for object detection (pick and place)

To train a deep-learning algorithm, you need to collect data. To collect data for object detection, you need a CAD file (.stl). You can use any 3rd-party 3D scanner or CAD software that can create an stl file.


Data collecting screen

  1. Move the robot to its home position and open the Data tab. Set collect type to "Object Detection".
  2. Make the cad list.
    • Click Clear to empty the list
    • Put the prepared CAD files on a USB memory stick and plug it into the Eye-support USB port.
    • Click Add cad to add a cad file. Navigate to the USB memory; typically it is "/media/nvidia/<USB_NAME>/".
    • Select the CAD file you want to add and click OK.
    • Repeat for all CAD files.
  3. Click Save list to save cad list. Enter or select a file name for the cad list and click OK.
  4. Select a cad model in the list and click On Floor. The selected cad model will be rendered at the center of the workspace.
  5. Put the real object on the workspace. Make sure the object perfectly matches the rendering, as in the figure above.
    • On the On-screen location panel, the object rendering position can be adjusted.
    • Edit the left column values to adjust position.
    • Edit the right column values to adjust orientation.
    • The Set Object button applies the changes.
    • Save Orientation button saves the changed orientation permanently.
    • Clear Object removes the object from the screen.


      Adjust On-screen location

  6. Create view list, in the same way as in the Calibration section. This time, the blending number should be 100 or more.
  7. Set dataset and folder.
    • Click Set dataset to create or select the root folder for the new dataset. Under the root folder, "train" and "val" folders are generated automatically.
    • Click Set folder to designate a folder to save one sequence of data. Open the "train" folder, enter a new folder name and click OK. One sequence of data will be stored in this folder.
  8. Click Auto collecting.
    • The robot will move and take pictures of the workpiece automatically. Wait until it is finished.
  9. To add data, change the data folder by clicking Set folder and giving a new name, then repeat Auto collecting. It is desirable to collect images from all viewpoints from which the object would be observed in the actual task. In addition, it is optional but recommended to add folder(s) and collect an additional dataset in the "val" folder, for validation.

Data collecting for classification (inspection)

  1. Move the robot to its home position and open the Data tab. Set collect type to "Inspection".
  2. Enter object (inspection point) name under Inspection Name.
  3. Create view list, in the same way as in the Calibration section.
    • While creating seed viewpoint, aim the inspection point at a close distance and rotate around the inspection point.
    • Blend to 100 or more viewpoints.
  4. Set dataset and folder.
    • Click Set dataset to create or select the root folder for the new dataset. Under the root folder, "train" and "val" folders are generated automatically.
    • Click Set folder to designate a folder to save one sequence of data. Open the "train" folder, enter a new folder name and click OK. One sequence of data will be stored in this folder.
  5. Click Auto collecting.
    • The robot will move and take pictures of workpiece automatically. Wait until finished.
  6. To add data for another object (inspection point), enter another Inspection Name, change the data folder by clicking Set folder and giving a new name, and repeat Auto collecting. It is desirable to collect images from all viewpoints from which the object would be observed in the actual task. In addition, it is optional but recommended to add folder(s) and collect an additional dataset in the "val" folder, for validation.

Deep learning

Training

  1. On the bottom side of Data tab, select network to train.
    • MaskRCNN: Original MaskRCNN; it just detects and segments objects on the image.
    • ResNet: Classifier for inspection.
  2. Click Train.
    • In the file browser, select the dataset to train and click OK.
  3. The training progress is displayed at the bottom of the Data tab. It is also possible to check the state of the training server by clicking State. A message 'done' will be printed after training is over.
    • If you want to stop training in the middle, click Stop.

Getting the trained network

  1. On the bottom side of Data tab, select network to download, and click GET.
  2. On the file browser titled 'Select Trained Config', open the dataset that you trained.
  3. Open 'Models' folder in the dataset directory.
  4. Select the config file to download and click OK (by default, it is 'config.yml').
  5. On the file browser titled 'Download Config To', type or select a name for downloaded configuration. Click 'OK'.
    • The default configuration name is 'config.yml'.
  6. The configuration file, CAD file, and network weights are downloaded.
    • For every overwritten file, a backup file will be generated with a new name, <old_name_bak>.
    • Wait until the model is downloaded and rotating icon disappears.

Detecting object

To set detection parameters, go to Detection tab.

  1. Click the Load button on the Options panel to load a detection configuration. Select the configuration file that you created in the Getting the trained network section. It takes time to initialize the deep-learning algorithm. Wait until loading is done and the rotating icon disappears.
  2. Click Detect to try detection. In the case of MaskRCNN, the first detection takes time for initialization. To modify the detection algorithm, see the Detection tab section below.
  3. MaskRCNN can be accelerated about 3× with SoyNet. Change the Deep learning option to MaskRCNNSoyNet and click Save. Type a new name for the new configuration and click OK.
    • It takes time to convert the model. Wait until conversion is done and the rotating icon disappears.
  4. Now the SoyNet accelerator is applied. Click Detect to test detection.

Detection tab


Detection tab.

  • Object list : List of objects. Same as the cad list in the Data tab.
  • Algorithms : List of available algorithms.
    • Deep learning : Deep learning algorithms
      • MaskRCNN: MaskRCNN object detection.
      • MaskRCNNSoyNet: MaskRCNN object detection, accelerated with SoyNet. SoyNet license is needed to use this module.
      • ResNet: ResNet Classifier for inspection.
    • Pose refinement : Optional algorithms for object pose refinement.
      • PCA: Align the x-axis with the main principal axis of the mask.
      • SilhouetteSimple: Find an accurate pose by matching the silhouette of the object.
      • Workspace: Set the object location to the workspace origin. For test purposes.
    • Post process : Optional post process.
      • Crosscheck: Cross-check among classes. Find best silhouette-matching class.
    • Detect button: Detect the object selected in the Object list.
  • Options : view and edit detection parameters.
    • Get: Get selected parameter value.
    • Set: Set selected parameter with value entered above.
    • Add: Add a new parameter, with name entered above.
    • Delete: Delete selected parameter
    • Save: Save configuration.
    • Load: Load configuration.
    • Example: Limit detection space to the workspace
      • Enter 'check_workspace', click Add
      • Enter 1, click Set
      • Click Save to save the configuration.
      • Only objects inside the workspace will be detected afterward.
Option Used algorithm Value Description
'mrcnn_conf_cut' MaskRCNN 0~1 Detection cutline of MaskRCNN
'rot_range' SilhouetteSimple 0~360 Range of angle for rotating inspection
'track_iter' SilhouetteSimple int Number of angular resolution of rotating inspection (Ex: 8 means 'rot_range' will be divided into 8)
'track_scales' SilhouetteSimple int Number of iterations (Rotating inspection is done repetitively, narrowing the range )
'iou_cut' SilhouetteSimple,Crosscheck 0~1 Lower bound of the silhouette matching rate
'class_group' Crosscheck int array An int array of the number of objects. Cross-examination is performed only among objects set to the same number. (Ex: [0, 0, 1, 1] → First and second, third and fourth are compared to each other)
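The 'rot_range', 'track_iter' and 'track_scales' options describe a coarse-to-fine angle search: each scale divides the current range into 'track_iter' steps, keeps the best angle, and narrows the range, repeating 'track_scales' times. A minimal Python sketch of this idea (the scoring function and the exact narrowing factor are illustrative assumptions, not the IndyEye code):

```python
def coarse_to_fine_angle(score, center=0.0, rot_range=360.0,
                         track_iter=8, track_scales=3):
    """Search for the angle maximizing `score` (higher = better silhouette
    match). Each scale evaluates track_iter + 1 angles across the current
    range, then narrows the range around the best angle. The narrowing
    factor (two steps wide) is an illustrative assumption."""
    best = center
    for _ in range(track_scales):
        step = rot_range / track_iter
        candidates = [best + (k - track_iter / 2) * step
                      for k in range(track_iter + 1)]
        best = max(candidates, key=score)
        rot_range = 2 * step  # narrow the search window around the best
    return best
```

For example, with a score that peaks at 37°, three scales of 8 steps each narrow a full 360° range down to within a few degrees of the peak.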

Pick options

Teaching how to pick object

First, the tool center position (TCP) should be taught.

  1. Put Calibration sheet on the workspace.
  2. In Calibration tab, click Detect WS. You will see workspace with base axes.
  3. Move to Pick tab.
  4. Move robot tool to the origin point of the workspace.
  5. In Tools panel, click Teach to teach the TCP.
  6. Multiple tools can be registered. When detecting an object, the algorithm selects the best tool in terms of orientation.
  7. In the number list, the 1st and 2nd numbers are tool indexes. The 3rd represents symmetry, i.e., 2 means the tool is symmetric under 180° rotation, while 4 means it is symmetric under 90° rotation. After editing the numbers, click Edit to apply the changes.
  8. The 4th–9th numbers are the 6-DoF coordinates of the TCP offset (x, y, z, θx, θy, θz).
  9. Click Save to save TCP list.
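The symmetry number in step 7 can be read as the count of equivalent tool orientations per full turn: a value n means the tool repeats every 360/n degrees. A small illustrative Python sketch (not the IndyEye code):

```python
def equivalent_orientations(symmetry):
    """A symmetry value n means the tool looks the same every 360/n degrees:
    2 -> symmetric under 180 deg, 4 -> symmetric under 90 deg."""
    if symmetry <= 0:
        raise ValueError("symmetry must be a positive integer")
    step = 360.0 / symmetry
    return [k * step for k in range(symmetry)]
```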


    Teaching TCP: place tool center on the origin of workspace.

Next, for each object, the available grip points should be taught.

  1. Put target object on workspace.
  2. In Camera panel, select the object in the list and click Detect. The detection result will be displayed.
  3. On Grips panel, select target object and click Select.
  4. On Tools panel, select TCP to use for teaching grip.
  5. Move robot tool to desirable pick point.
  6. In Grips panel, click Teach to teach pick point.
  7. Multiple pick points can be registered. When detecting an object, the algorithm selects the best pick point in terms of orientation.
  8. The 1st–4th numbers in a pick point are for selecting tools. They are entered automatically when the point is taught.
  9. The 5th–10th numbers are the 6-DoF coordinates of the pick point (x, y, z, θx, θy, θz).
  10. Click Save to save current grip points.


    Teaching grip position: Detect object and place tool on a desirable grip point.

After all teaching processes are finished, you can test a pick.

  1. Test-draw tool position.

    • Select a TCP on the list in Tools panel.
    • Click draw on Tools panel.
    • The tool axis is drawn on the screen. Normally, the tool is not fully visible from the camera.
  2. Test-draw grip position.

    • Put test object on workspace.
    • Select the object on Camera panel and click Detect. The detected object is displayed on the screen.
    • Select the object on the Grips panel, click Select and select one grip point from the list.
    • Click draw on Grips panel.
    • The grip point for the object is drawn on the screen.
  3. Test-Pick

    • Put test object on workspace.
    • Select the object on Camera panel and click Detect. The detected object is displayed on the screen.
    • Click Pick Test.
    • The robot moves the tool to the detected grip point of the object.

TCP/IP communication

Communication protocol

Once the detection setting is done in the web UI, detection can be requested and results can be received using a TCP/IP socket. IndyEye uses JSON strings for TCP/IP communication. The command format is as follows.

key value description
'command' 0 run deep learning algorithm
1 pose refinement and post processing
2 reset detection algorithm
3 request list of detectable object names
'class_tar' int index of target object class. To detect all, give 0
'pose_cmd' float x 6 array Current task position of robot. (optional, only when IndyEye is in 'no robot' mode)
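Using the keys above, a detection request can be assembled as a JSON string before sending it over the socket. A minimal sketch in Python (the helper name is ours; field values are illustrative):

```python
import json

def make_detect_command(class_tar=0, pose_cmd=None):
    """Build an IndyEye command JSON string: command 0 runs the
    deep-learning detector; class_tar 0 targets all classes.
    pose_cmd (6 floats) is only needed in 'no robot' mode."""
    cmd = {"command": 0, "class_tar": int(class_tar)}
    if pose_cmd is not None:
        cmd["pose_cmd"] = list(pose_cmd)
    return json.dumps(cmd)
```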

As a response, IndyEye returns a JSON string in the following format.

key value description
'STATE' int Error state. 0 means detection is successful.
'class_detect' int Index of detected class, starting from 1.
'tool_idx' int Index of selected tool from indyeye.
'Tbe' float x 6 array 6D task position to pick object.
'Tbt' float x 6 array 6D Tool Center Position to pick object.
'Tbo' float x 6 array 6D Position of detected object.
'class_list' string array list of detectable object names. (returned only when command was 3)
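A response can be handled by checking 'STATE' before reading the pose fields. A hedged Python sketch; the sample response values below are made up for illustration:

```python
import json

def parse_detection(rjson):
    """Return (class index, Tbe pick pose) on success, or None when
    'STATE' is nonzero (detection failed)."""
    rdict = json.loads(rjson)
    if rdict.get("STATE", -1) != 0:
        return None
    return rdict["class_detect"], rdict["Tbe"]

# Illustrative response string (values are made up for this example)
sample = json.dumps({"STATE": 0, "class_detect": 1, "tool_idx": 0,
                     "Tbe": [0.35, 0.0, 0.12, 0.0, 180.0, 0.0]})
```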

C++ client

IndyEyeClient is a C++ communication client provided with IndyEye. It is included in NRMK Framework and its contents are as follows.

file/folder content
IndyEyeClient.h header file for IndyEyeClient.
IndyEyeClient.cpp source file for IndyEyeClient.
jsoncpp.cpp source file for JsonCpp library, which is used to interpret json.
json/ directory containing headers for JsonCpp library

IndyEyeClient provides convenient wrapper functions for TCP/IP communication with IndyEye. All functions of IndyEyeClient are listed below.

function argument description
SetIP char* ipaddress Set the IP address of IndyEye
GetClassList - Get detectable object list
Detect int cls, double *pose Detect object and refine detection. cls: target class index, 0 means all class. pose: current task pose of robot, double × 6 array

After calling GetClassList and Detect, the response from IndyEye is saved in the member variables of IndyEyeClient, as follows.

member variable type description
class_detect int class of detected object
Tbe float x 6 array 6D task position to pick object.
Tbt float x 6 array 6D Tool Center Position to pick object.
Tbo float x 6 array 6D Position of detected object.
class_list vector list of detectable object names

Example program with object detection

Pick and place in Python

The following is a Python example of a pick-and-place program, which picks an object and puts it down 5 cm away from the pick position. (To run this code, IndyDCPClient is needed. It is included in NRMK Framework, inside indydcp_client.py. Please refer to the IndyInterfaces section.)

from indydcp_client import IndyDCPClient
import numpy as np
import json

import time
import socket
import sys
bindIP = '192.168.3.107'  # IP of the device this program is running on

# define IndyEye
eyeIP = '192.168.3.107'
taskserverport = 2002

## define Indy
name = 'NRMK-Indy7'
robot_ip = '192.168.3.106'

# define IndyDCPClient
IndyClient = IndyDCPClient(bindIP, robot_ip, name)
IndyClient.connect()


class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)

# define function to wait for robot motion
def wait_moving(IndyClient):
    time.sleep(0.5)
    while not IndyClient.is_move_finished():
        time.sleep(0.5)
        print('robot is moving')

# define eye command function
def run_command(cmd,cls,pose_cmd = None):
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_STREAM) # SOCK_STREAM is TCP socket
    sock.bind((bindIP,0))

    try:
        sock.connect((eyeIP,taskserverport))
        sdict = {'command': int(cmd), 'class_tar': int(cls), }
        if pose_cmd is not None:
            sdict['pose_cmd']= pose_cmd
        sjson = json.dumps(sdict, cls=NumpyEncoder)
        sbuff = sjson.encode()
        sock.send(sbuff)
        print('sent: ',sjson)

        rbuff = sock.recv(1024)
        rjson = "".join(map(chr, rbuff))
        rdict = json.loads(rjson)
        print('received: ', rdict)

    finally:
        sock.close()
    return rdict

# go home
IndyClient.go_home()
wait_moving(IndyClient)

# get current task pose
pose_cur = IndyClient.get_task_pos()

# Do detection
rdict = run_command(cmd=0, cls=0, pose_cmd = pose_cur)

# Get task position for pick
Tbe = np.array(rdict['Tbe'])
Tbe_above = Tbe.copy()
Tbe_above[2] += 0.05

# Go above object
IndyClient.task_move_to(Tbe_above)
wait_moving(IndyClient)

# Get down to object
IndyClient.task_move_to(Tbe)
wait_moving(IndyClient)

# Pick - in this case, tool is connected to DO8
IndyClient.set_smart_do(8, True)
time.sleep(1)

# Go above object
IndyClient.task_move_to(Tbe_above)
wait_moving(IndyClient)

# Make put position, 5 cm away from pick point.
Tput = Tbe.copy()
Tput[1] += 0.05
Tput_above = Tput.copy()
Tput_above[2] += 0.05

# Go above put point
IndyClient.task_move_to(Tput_above)
wait_moving(IndyClient)

# Get down to put point
IndyClient.task_move_to(Tput)
wait_moving(IndyClient)


# put
IndyClient.set_smart_do(8, False)
time.sleep(1)

IndyClient.disconnect()

Object detection in C++

The following is an example of detecting an object in C++. It also shows how to get the list of object names.

#include <iostream>
#include "IndyEyeClient.h"

using namespace std;

int main()
{
    IndyEyeClient eye;

    eye.SetIP("192.168.0.23"); // set IP of IndyEye

    eye.GetClassList(); // get class list
    cout << "class list:";
    for (auto cls_str : eye.class_list) {
        cout << " " << cls_str;
    }
    cout << endl;

    // cls_tar = 0 for all classes
    int cls_tar = 0;

    eye.Detect(cls_tar); // do detection & refinement

    if (eye.STATE == 0) { // no-error check
        // print task position to pick object
        cout << "Tbe: [" << eye.Tbe[0] << "," << eye.Tbe[1] << "," << eye.Tbe[2] << "," \
            << eye.Tbe[3] << "," << eye.Tbe[4] << "," << eye.Tbe[5] << "]" << endl;
    }

    return 0;
}

Advanced app management

App execution

  • Type one of the following commands to execute the app server. You can run it in a terminal (first command) or as a service (second command). (By default, the app is executed with the second method.)
cd ~/Projects/indyeye/src
sudo python3 IndyEye_main.py

or

sudo systemctl start IndyEyeAuto

Background service management

  • To check the status of the app when it is running as a background service, type the command below.
sudo systemctl status IndyEyeAuto
  • To kill the app when it is running as a background service, type the command below.
sudo systemctl kill IndyEyeAuto