Appendices


A: CAN frame definition

B: Coordinate transformations

Many different coordinate systems exist to represent the pose of an object in space. Most of them can be classified into two groups: global systems, which approximate the entire Earth (e.g., WGS-84), and local systems, which best approximate the true local geometry of the Earth. ECEF (Earth-Centered, Earth-Fixed) coordinates form a global Cartesian system whose origin <0,0,0> is at the center of the Earth. The odometry output of the Vision Navigator sensor is given in ECEF coordinates (see Input/Output messages).

 

Given that ECEF coordinates represent the pose of the sensor on a sphere, it is common to use a tangential plane to represent the local pose of the robot. This representation is only accurate up to a certain distance from the origin (83 km on average) but makes tracking the odometry of the robot easier. For a local frame of reference, some outputs are therefore given in ENU (East-North-Up) coordinates. See the figure below for a graphical explanation.

 

ECEF (global) and ENU (local) coordinate systems

 


To convert the ECEF orientation of the Vision Navigator into this local ENU frame, let (q_{ecef→body}) denote the orientation of the sensor body in ECEF coordinates, which can be extracted from the pose output of the FP_A-ODOMETRY message. The rotation from ECEF coordinates to the local frame of reference (ENU) can then be computed from the current position of the sensor on the sphere <x,y,z>.


The Fixposition GNSS Transformation Lib contains several useful functions for these transformations. For example, the function TfEnuEcef() takes an ECEF position coordinate and returns the rotation matrix that transforms from the ECEF frame to the ENU frame. Let's call this rotation matrix (R_{enu→ecef}). As the orientation output of the Vision Navigator sensor is represented as a quaternion, this rotation matrix must first be converted to a quaternion (q_{enu→ecef}). The orientation of the Vision Navigator sensor in ENU coordinates is then the product of (q_{enu→ecef}) and (q_{ecef→body}).
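
As a rough sketch of this computation in C++ with Eigen (RotEnuEcef() and BodyInEnu() are illustrative names, not the library's actual API; the geodetic conversion assumes the WGS-84 ellipsoid; the library's TfEnuEcef() provides equivalent functionality):

    #include <cmath>
    #include <Eigen/Dense>

    // Rotation matrix that maps ECEF vectors into the local ENU frame at the
    // given ECEF position. The WGS-84 geodetic latitude is obtained with a
    // short fixed-point iteration.
    Eigen::Matrix3d RotEnuEcef(const Eigen::Vector3d& ecef) {
        constexpr double a  = 6378137.0;         // WGS-84 semi-major axis [m]
        constexpr double e2 = 6.69437999014e-3;  // WGS-84 first eccentricity squared
        const double lon = std::atan2(ecef.y(), ecef.x());
        const double p   = std::hypot(ecef.x(), ecef.y());
        double lat = std::atan2(ecef.z(), p * (1.0 - e2));  // initial guess
        for (int i = 0; i < 5; ++i) {
            const double n = a / std::sqrt(1.0 - e2 * std::sin(lat) * std::sin(lat));
            lat = std::atan2(ecef.z() + e2 * n * std::sin(lat), p);
        }
        const double sla = std::sin(lat), cla = std::cos(lat);
        const double slo = std::sin(lon), clo = std::cos(lon);
        Eigen::Matrix3d r;
        r << -slo,        clo,       0.0,   // East
             -sla * clo, -sla * slo, cla,   // North
              cla * clo,  cla * slo, sla;   // Up
        return r;
    }

    // Orientation of the body in ENU: q_enu_body = q_enu_ecef * q_ecef_body.
    Eigen::Quaterniond BodyInEnu(const Eigen::Vector3d& pos_ecef,
                                 const Eigen::Quaterniond& q_ecef_body) {
        const Eigen::Quaterniond q_enu_ecef(RotEnuEcef(pos_ecef));
        return q_enu_ecef * q_ecef_body;
    }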


For a more in-depth explanation, please refer to the following:


Extract heading:
The FP_A-ODOMETRY message contains the position and orientation of the Vision Navigator sensor in the ECEF coordinate system. For numerical stability, the sensor uses quaternions to represent these rotations. If the user requires an Euler-angle representation using Roll-Pitch-Yaw angles, the orientation of the sensor must first be converted into a local tangential coordinate system such as ENU or NED. To convert the reference frame from ECEF to ENU, we apply the following transformation:

(R_{body→enu} = R_{ecef→enu} \cdot R_{body→ecef})



where (R_{body→ecef}) is the orientation of the sensor in the ECEF frame and (R_{ecef→enu}) is the orientation of the ECEF frame in the ENU coordinate system (a local tangential plane). To convert (R_{body→enu}) into Euler angles, either a rotation matrix or a quaternion representation can be used. For a rotation matrix, we apply the following equations:

(yaw = atan2(R_{21}, R_{11}))

(pitch = asin(-R_{31}))

(roll = atan2(R_{32}, R_{33}))

where (R_{ij}) denotes the element of (R_{body→enu}) in row i and column j.
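
The same extraction expressed as a minimal C++/Eigen sketch (RotToEulZyx() is an illustrative helper, not a library function):

    #include <cmath>
    #include <Eigen/Dense>

    struct EulZyx { double yaw, pitch, roll; };  // angles in radians

    // Closed-form ZYX extraction; r(i, j) is the element R_{i+1, j+1} above.
    EulZyx RotToEulZyx(const Eigen::Matrix3d& r) {
        return {std::atan2(r(1, 0), r(0, 0)),   // yaw
                std::asin(-r(2, 0)),            // pitch
                std::atan2(r(2, 1), r(2, 2))};  // roll
    }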

To convert a quaternion into yaw, pitch, and roll angles (ZYX order), the following equations can be used:

(yaw = atan2(2(q_w q_z + q_x q_y), 1 - 2(q_y^2 + q_z^2)))

(pitch = asin(2(q_w q_y - q_z q_x)))

(roll = atan2(2(q_w q_x + q_y q_z), 1 - 2(q_x^2 + q_y^2)))

where (q = <q_w, q_x, q_y, q_z>). In this context, the yaw angle with respect to the ENU (East-North-Up) frame represents the heading measured from East towards North. To obtain the heading of the sensor from North in a clockwise direction, compute 90° - yaw. Alternatively, the user can compute the rotation in NED (North-East-Down) coordinates and then extract the Roll-Pitch-Yaw angles with respect to NED, where yaw directly corresponds to the heading in the conventional sense. The function EcefPoseToEnuEul() in the Fixposition GNSS Transformation Lib receives the pose of the sensor in ECEF coordinates and returns the orientation of the robot in Yaw-Pitch-Roll angles using the equations described above.
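
A minimal C++/Eigen sketch of these quaternion equations and the 90° - yaw heading conversion (QuatToEulZyx() and HeadingDeg() are illustrative names, not part of the Fixposition library):

    #include <cmath>
    #include <Eigen/Dense>

    // ZYX Euler angles (yaw, pitch, roll) from a unit quaternion q_enu_body.
    Eigen::Vector3d QuatToEulZyx(const Eigen::Quaterniond& q) {
        const double yaw   = std::atan2(2.0 * (q.w() * q.z() + q.x() * q.y()),
                                        1.0 - 2.0 * (q.y() * q.y() + q.z() * q.z()));
        const double pitch = std::asin(2.0 * (q.w() * q.y() - q.z() * q.x()));
        const double roll  = std::atan2(2.0 * (q.w() * q.x() + q.y() * q.z()),
                                        1.0 - 2.0 * (q.x() * q.x() + q.y() * q.y()));
        return Eigen::Vector3d(yaw, pitch, roll);
    }

    // Heading clockwise from North in degrees, wrapped to [0, 360).
    double HeadingDeg(double yaw_enu_rad) {
        const double heading = 90.0 - yaw_enu_rad * 180.0 / M_PI;
        return std::fmod(heading + 360.0, 360.0);
    }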

 

 

Vision Navigator output coordinate system:
The receiver output is always in the coordinate reference system (geodetic datum) used by the correction data service. In RTK mode, the receiver performs relative positioning with respect to the base coordinates provided by the correction data service. The coordinate reference system of those base coordinates is entirely up to the correction data service provider.

The receiver does not need or use this information. This applies to all ECEF (XYZ) output. For latitude/longitude/height output, the WGS-84 parameters are used to transform ECEF XYZ into latitude/longitude/height. In non-RTK mode, the output position is in WGS-84.

 

 

 

C: Camera FOV Data and Model

*The DFOV is the angle subtended by the diagonal of the camera sensor at the center of the lens.

 

Left: An illustration of the definition of the DFOV, HFOV, and VFOV. Right: A schematic of the Vision Navigator FOV.

 

The STEP file of the FOV model is available on request via [email protected].
 

 

 

D: Antenna selection

The Vision Navigator’s GNSS receivers require signals in the L1 and L2 bands for adequate operation. Based on our internal testing, we recommend helical antennas with a gain of around 35 dB. For reference, the Starter Kit ships with two Hi-Target AH-3232 antennas. The frequency response of the GNSS antennas should cover approximately the 1195-1280 MHz and 1560-1610 MHz bands. In addition, for helical antennas we recommend a noise figure below 1.5 dB.

 

Other antenna types (e.g., patch, short helical) should be evaluated carefully, taking into account their placement with respect to other electrical components and the shape of the ground plane provided for them. Besides using a suitable, high-quality antenna (and appropriate cabling), the placement of the antenna is therefore important. See below and, for example, the following u-blox document: https://content.u-blox.com/sites/default/files/ZED-F9P_IntegrationManual_UBX-18010802.pdf


Antenna assemblies that combine multiple antennas (e.g., GNSS, Wi-Fi, and cellular) in one casing are not recommended. While such antennas work fine for standalone standard (C/A-only) GNSS, they are likely to perform poorly for high-precision GNSS techniques that require carrier-phase measurements, such as RTK. Moreover, the GNSS performance may depend on the activity of the other antennas (Wi-Fi signal strength, cellular band used, etc.).

 

For further analysis, besides evaluating the position-estimate performance of the sensor, one can check the "Receiver RF AGC" value in the advanced "GNSS status" tab of the Web Interface. The values should ideally lie between 20 and 80 percent. For additional information, see also the following u-blox document:

https://www.u-blox.com/sites/default/files/products/documents/GNSS-Antennas_AppNote_%28UBX-15030289%29.pdf


Only active antennas (with a built-in LNA) are suitable for the Vision Navigator.

 

 

 

E: Rostopic output rate accuracy

The output rate of the rostopics published by the Vision Navigator is accurate. However, when inspecting recordings or plotting the received messages, the user might observe the following behavior:

 

Delays in the time of arrival of ROS messages

 

As seen in the figure, the rate of the output messages is not constant when measured by the time of arrival. However, this is the expected behavior of the sensor, because:

  1. The Vision Navigator is not a real-time system.
  2. The communication chain is not guaranteed to be real-time: XVN ROS → TCP → user ROS → bag recording. With this many steps, the time of arrival of the ROS messages is not a meaningful reference.
  3. The user should rely on the timestamp inside the message, not on the time of arrival, as sketched below.
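
For illustration, a minimal ROS 1 C++ subscriber that evaluates the transport delay using the timestamp inside the message rather than the time of arrival (the topic name is a placeholder; use the Vision Navigator's actual odometry topic):

    #include <ros/ros.h>
    #include <nav_msgs/Odometry.h>

    void Callback(const nav_msgs::Odometry::ConstPtr& msg) {
        const ros::Time stamp   = msg->header.stamp;  // time of measurement
        const ros::Time arrival = ros::Time::now();   // arrival time: jittery
        ROS_INFO("transport delay: %.3f s", (arrival - stamp).toSec());
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "odometry_listener");
        ros::NodeHandle nh;
        // Placeholder topic name; subscribe to the sensor's odometry topic.
        ros::Subscriber sub = nh.subscribe("/fixposition/odometry", 10, Callback);
        ros::spin();
        return 0;
    }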

 

 
