
Multidisciplinary Senior Design Conference
Kate Gleason College of Engineering

Rochester Institute of Technology
Rochester, New York 14623

Project Number: P15310

GPS AND INERTIAL NAVIGATION GUIDED TRACKING DEVICE

James Carey, Computer Engineering

Jameel Balenton, Electrical Engineering

Ricardo Quintanilla, Computer Engineering

Matthew Partacz, Mechanical Engineering

Jeffery Mayer, Electrical Engineering

Abstract The focus of our project is to design a system that utilizes GPS and inertial navigation parameters to point an object at a designated target. Our project, sponsored by Spectracom Corporation, features their new, state-of-the-art Geo-PNT module, which provides both GPS and inertial navigation information. A control algorithm running on a microcontroller interprets the Geo-PNT's parameters and, through a servo driver module, points a servo-driven camera mount at a desired location or object based on the target's GPS coordinates. A complete system is developed, which encompasses an enclosure for all components, including a 12V battery, the microcontroller, the servo driver module, and a power regulation board. Ultimately, the end goal is for the system to be transported in a vehicle and for a camera to record data while continuously pointing at the same object or location as the vehicle navigates around it. Spectracom has requested that we fully document our system and provide them with a working demonstration and footage for creating a marketing video.

Figure 1: Geo-PNT

Nomenclature
Geo-PNT – Spectracom's GPS-aided inertial navigation and timing unit that provides our system with accurate positioning (see Figure 1).
Motorized Camera Mount – ServoCity P/T 785 motorized camera mount used for pointing the camera.
Power Control Board – Printed Circuit Board (PCB) that does power regulation and battery status indication.

Copyright © 2015 Rochester Institute of Technology


Servo Driver Module – ServoCity’s 8-Servo motor controller with RJ11 connector for communication with the microcontroller and BEC connector for communication with the Motorized Mount (ServoCity P/T 785).

Enclosure – The “box” where all subcomponents of our system are housed and supported.

Introduction Global Positioning Systems (GPS) have played a significant role in the development of navigation since the 1970s. GPS is a satellite-based network that provides location and time signals to any place on Earth where the receiver has an unobstructed view of four or more satellites. Advances in technology have significantly increased the dependence on GPS for reliable and accurate positioning in aviation, land, and naval navigation.

For this project, GPS data is provided by the Geo-PNT, which is supplied and manufactured by Spectracom Corporation. Spectracom tasked our design team with creating an application to demonstrate the Geo-PNT and its capabilities. In doing so, a promotional video explaining these features and how they assisted us in developing our system was also created and presented to Spectracom for use in a marketing video.

During the design process, it was determined that a camera tracking system would be a reasonable and viable means of demonstrating the Geo-PNT. Our system uses the GPS data provided by the Geo-PNT to determine the control signal for a servo-driven camera mount. The attached camcorder then tracks the desired target based on the target's GPS coordinates, and error is determined by measuring how far the target is from the center pixel of our video footage.

The following sections of this report document how the system was assembled, the sub-systems required, and the results captured, including measured error and accuracy.

Process The design process for our project was thoroughly analyzed through the use of Morphological tables, Pugh charts, and other means of selecting the best options to create the most effective system. Spectracom's requirements limited us only to demonstrating their product by using its output parameters and, upon completion, creating an informative video showing how we utilized the Geo-PNT in our system.

First, we started with possible options that satisfied the requirement of demonstrating the Geo-PNT. It was determined that pointing a camera would automatically generate video footage for the demonstration video requested by Spectracom. Our system accepts real-time position in Earth-Centered-Earth-Fixed (ECEF) coordinates from the Geo-PNT. By using Google Maps and other methods, it was possible to determine a building's or tower's fixed location based on its GPS coordinates. Our system converts the known position's coordinates from latitude, longitude, altitude (LLA) into ECEF coordinates, allowing us to perform vector calculations to obtain a pointing vector describing where the camera should be aimed to remain focused on this particular location as we navigate around it in a vehicle. This solution was also determined to be more cost-effective, feasible, and practical for our student design team. Once this solution was selected, we focused on each component that would be necessary in our system. The components can be broken into categories related to functionality, with each being critical to system performance. Listed below are descriptions of all the major sub-components and how they relate to our overall system, shown in Figure 2.
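As a concrete illustration of the LLA-to-ECEF step, the sketch below shows the standard closed-form WGS-84 conversion described in [3]. It is a minimal sketch: the function name, structure, and use of double precision are our own illustrative choices, not an excerpt of the microcontroller code.

```cpp
#include <cmath>

// WGS-84 ellipsoid constants
const double kSemiMajorAxis  = 6378137.0;          // a, in meters
const double kEccentricitySq = 6.69437999014e-3;   // e^2 (first eccentricity squared)
const double kPi             = 3.14159265358979323846;

struct Ecef { double x, y, z; };   // meters

// Convert geodetic latitude/longitude (degrees) and altitude (meters)
// into ECEF coordinates using the standard WGS-84 equations [3].
Ecef llaToEcef(double latDeg, double lonDeg, double altM) {
    const double lat = latDeg * kPi / 180.0;
    const double lon = lonDeg * kPi / 180.0;
    const double sinLat = std::sin(lat);
    // Prime vertical radius of curvature at this latitude
    const double N = kSemiMajorAxis / std::sqrt(1.0 - kEccentricitySq * sinLat * sinLat);
    Ecef p;
    p.x = (N + altM) * std::cos(lat) * std::cos(lon);
    p.y = (N + altM) * std::cos(lat) * std::sin(lon);
    p.z = (N * (1.0 - kEccentricitySq) + altM) * sinLat;
    return p;
}
```

With the target's known LLA converted to ECEF this way, and the Geo-PNT already reporting our own position in ECEF, the pointing vector is simply the difference of the two points.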


Figure 2: Fully Integrated System

GEO-PNT The Geo-PNT provides positional, navigational, and timing related data, but only certain parts of the data provided by the Geo-PNT were required for this project. The Geo-PNT is an accurate device and therefore it was necessary to tailor later requirements for the pointer and mount to not mask this accuracy. Since this unit was provided by the customer, data acquisition was limited to what it could provide. Additionally, space and power constraints were set on the project due to the hard requirement of using the Geo-PNT.

The Geo-PNT was simply mounted to the bottom of the enclosure and secured so that it would not move. Space was left to enable wire connections to other subassemblies. Since the Geo-PNT was provided, all analysis was done on the data outputs. These outputs included location coordinates and attitude measurements. These data outputs were deemed enough to complete the project. Calibration of the Geo-PNT consisted of pointing the unit due North and driving in a North-East-South-West square several times to fill the filter queue with valid data.

The antenna that provides the Geo-PNT with a GPS signal was considered part of the "purchase" package with the Geo-PNT for this project. It was provided by Spectracom with the Geo-PNT, so our team did not perform any calculations or calibrations for it. It is a standard L1/L2 GPS receiver antenna with practical applications in military and commercial industries. The antenna mount was positioned on the enclosure such that it would not be blocked by anything else, and thus always has a clear view of the sky.

MICROCONTROLLER The microcontroller had some functional requirements that were inferred from the requirements of the Geo-PNT's operation. The Geo-PNT must be in motion for the demonstration to work, which means that the entire solution cannot be too large, or it would limit mounting options. In order to show the precision of the Geo-PNT, the architecture needs to support the same precision as the Geo-PNT provides, so 64-bit floating-point arithmetic also needed to be supported. The algorithm required plenty of trigonometric functions, but no large loops. The mount required an RS-232 interface and the Geo-PNT interface is a standard Ethernet connection, so the microcontroller had to satisfy both.

The Arduino programming and interface procedures are among the easiest with which to work. A large library of example programs is included with the Arduino programming software. This includes sample programs for Ethernet based networking and serial communication. The Ethernet and Serial shields are stackable components that can be placed on top of the Arduino Due. Combining a few code examples proved that both could be used sequentially and with room for a control program. Data can be read in from the Geo-PNT and then used in a control algorithm to generate digital output for the mount.
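To make this data flow concrete, a minimal Arduino-style sketch of the structure described above is shown below: read a line of data from the Geo-PNT over the Ethernet shield, then command the servo driver over a serial port. The IP address, port, and the commented-out parsing and command helpers are placeholders; the Geo-PNT's actual message format and the ServoCity controller's command protocol are not reproduced here.

```cpp
#include <SPI.h>
#include <Ethernet.h>

// Placeholder network settings; the real Geo-PNT address and port differ.
byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x01};
IPAddress geoPnt(192, 168, 1, 100);
const uint16_t geoPntPort = 5025;

EthernetClient client;

void setup() {
  Serial1.begin(9600);              // serial link to the servo driver module
  Ethernet.begin(mac);              // bring up the Ethernet shield (DHCP)
  client.connect(geoPnt, geoPntPort);
}

void loop() {
  // Accumulate one line of position/attitude data (an ASCII stream is assumed here).
  String line;
  while (client.available()) {
    char c = client.read();
    if (c == '\n') break;
    line += c;
  }
  if (line.length() == 0) return;

  // parsePose(), computePanTilt(), and formatServoCommand() stand in for the
  // control algorithm described in the Software section; they are not shown.
  // Pose pose = parsePose(line);
  // PanTilt cmd = computePanTilt(pose);
  // Serial1.print(formatServoCommand(cmd));
}
```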

POWER Based upon the requirement for mobility and the need to power the Geo-PNT, a battery was selected as the best option to power the system. Power was also required for the mount and microcontroller. Since these components require three distinct voltage and current ratings, a power regulation board was created to step the battery voltage down to the required output levels.

Battery: The size and capacity of the battery were largely based upon the ability to supply the required voltages at the rated currents for each device. Table 1 shows the voltage and current requirements for each of the components that determine the battery capacity.

Device                                    Required Voltage [V]                 Required Current [A]   Total Power Consumption [W]
Geo-PNT                                   10 minimum / 30 maximum              2                      20 minimum / 60 maximum
Arduino Mega Microcontroller              5 minimum / 12 maximum / 9 optimum   0.500                  2.5 minimum / 6 maximum
ServoCity PT785 Motorized Camera Mount    6 minimum / 7.2 maximum              3                      18 minimum / 21.6 maximum


Table 1: Sub-Component Power Consumptions Chart

Based upon the limitations set by Table 1, a battery was chosen such that each of the maximum conditions could be met and the battery could still provide ample voltage and current for multiple hours. Thus, the Panasonic LC-RA1212P1 was selected, which provides 12 Vdc at a rated capacity of 12.0 ampere-hours. By adding up each of the maximum currents, it can be seen that our system can draw as much as 6.5A at any given time. By selecting a battery with roughly double the capacity of this maximum current draw, the system should be sustainable and provide hours of usage. Since the values expressed in Table 1 are maximums, our nominal current draw is much less than 6.5A.

The testing phase of the battery was conducted in two stages. The first was a charging test to verify the charging time. The battery requires a significant amount of time to complete a full charge; however, enough charge could be accumulated within the time allocated for the test to conclude that the battery can be charged to the level we need within hours. Once the battery was fully charged, it was cycled using a 1.5A constant-current load, composed of seven 10-Watt resistors giving a combined resistance of 7.647 ohms. The battery provided ample voltage throughout the entire 3-hour test. From the test data acquired, it can be concluded that this battery should supply more than enough voltage and current for an entire run of our system.
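As a quick arithmetic check on this setup (our own numbers, derived from the values above): the resistor bank draws roughly I = V / R = 12 V / 7.647 Ω ≈ 1.57 A from a fully charged battery, and a 3-hour discharge at about 1.5 A removes roughly 1.5 A × 3 h = 4.5 Ah of the battery's rated 12 Ah, consistent with the battery holding its voltage for the full test.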

Power Regulation Circuit Board: The customer of the power regulation board is the senior design team. The intention of the power regulation board is to provide our system with the required voltages and currents described in Table 1. In addition, a battery status indication circuit was designed such that an LED bar graph indicates the battery voltage. The only requirement for the board was to ensure the desired voltage and current outputs to power the system. Physical size was a constraint to ensure the finished board could easily fit into the enclosure and be accessible after mounting.

After thorough testing and analysis, it was determined that the original design would not be attainable, based upon several conditions, including layout and component selection. Since this was realized early enough, it was possible to substitute several DC/DC converters from Keedox to fill the void and still satisfy our requirements. These DC/DC converters now plug into the board and output to the output terminals on the board.

CAMERA In order to satisfy our requirement of recording video while navigating around a designated target, a simple video camera with sufficient optical zoom to focus in on our target and prove accuracy was chosen. A standard Samsung HFX90 video camcorder was purchased and mounted to the motorized camera mount with a Phillips-head screw. Calibration and verification of the camera were performed so that we could map the number of pixels at various zoom levels to a distance measured in inches, which could then be converted into degrees off from the center pixel for error measurement.

VEHICLE One of the customer requirements for the project was for the Geo-PNT module to be in motion, which simulates the typical industry application where the unit is placed on a moving vehicle. Using the Morphological table, several different objects with the ability to move were compared. The top three options were a car, a remote-controlled car, and a bicycle. Based upon the size and weight of the system as a whole, as well as the feasibility of conducting a successful experiment with minimal risk, we chose to use a group member's car, specifically a Chevy Avalanche. This vehicle provided enough space and stability to place our system and conduct our testing.

MOTORIZED CAMERA MOUNT The motorized camera mount's function is to support and drive the camera to point at the desired target location. Based upon the customer and engineering requirements, the motorized camera mount must at a minimum meet the accuracy and range-of-motion criteria listed in Table 2.


Engineering Requirement              Target Value       Limit Value
Degrees of Rotational Motion         2                  -0/+1
Attitude Roll/Pitch Angular Error    0.75 degrees       1 degree
Attitude Accuracy Yaw Error          0.75 degrees       1 degree
Range of Motion in Roll/Pitch        Infinity degrees   180 degrees
Range of Motion in Yaw               Infinity degrees   360 degrees

Table 2: Engineering Requirements – Pointer Mount

Additional considerations included the size, weight, and range of applications in which the mount could potentially be used. The mount was desired to be as small as possible. The smaller the size and weight of the mount, the lower its inertia. This requires less motor torque and allows for quicker angular accelerations. Furthermore, if the motorized camera mount can handle quicker accelerations, it may be used closer to the target being tracked.

When deciding between a two-axis and a three-axis camera mount, a three-axis mount was ruled out due to size, weight, price, and complexity. To further filter the many options for an azimuth mount, additional requirements were added to account for performance, as shown in Table 3. These values were determined from a worst-case scenario; they are not likely to occur and should be seen as an upper bound, or limit, for the mount.

Engineering Requirement    Target Value      Limit Value
Angular Acceleration       0.75 radian/s²    -/+0.25 radian/s²

Table 3: Additional Requirements – Pointer Mount

Knowing this information, a search was conducted for purchase-ready, existing solutions that met all requirements. The mount seen attached to the enclosure in Figure 1 was consequently found. This potential off-the-shelf solution was then compared to what could be custom fabricated. A purchased motorized mount with a gearing ratio of 7:1 was chosen because it would be sufficient to demonstrate the Geo-PNT's capabilities, reduce mechanical lead time, and allow additional time for software development and testing under the current schedule constraints. It also has proven reliability and is less expensive.

With this solution, the inertia of the camera mount was determined along with its maximum torque (τmax), 1281 in-oz from the datasheet. The maximum rotational speed (ω) was also determined to be 37 degrees/second. Thus, limitations could be derived for different acceleration and velocity scenarios; they are summarized in Equation 1, Equation 2, Equation 3, and Equation 4. Note that I is the inertia and r is the distance in feet from the target object. Additional error, up to 0.0025 degrees, may be introduced through gearing backlash.

α_max = τ_max [in-oz] / (192 · I [lb-ft²]) = 0.6979 rad/s²

Equation 1: Angular Acceleration Maximum

a [ft/s²] = α_max · r

Equation 2: Linear Acceleration Limit

a [ft/s²] = 1.4667 · Δv [mph] / Δt

Equation 3: Linear Acceleration

v [mph] = 0.681 · (π/180) · ω [deg/s] · r

Equation 4: Linear Velocity Limit

The camera mount was tested to ensure that its accuracy was within 1.5 degrees, that the gearing backlash was less than 0.1 degrees, and that the mount does not reach stall torque under expected loading.


Figure 3: ECEF to LLA Relationship [2]


ENCLOSURE The function of the enclosure is to contain and protect all electronic components. The enclosure was chosen by the team as the best option to support all of our subcomponents. It was chosen on the premise that it could protect our equipment in the event of wet weather conditions; also, if the hardware were not contained during testing, it could be dropped or fall out of the vehicle, causing heavy damage. When considering which enclosure to use, it was important that the enclosure be rigid enough to protect and support all electronic hardware components, water resistant, and as small as possible.

SOFTWARE Two main programs were written. The first runs on the microcontroller and interprets data from the Geo-PNT to command the mount to point at the desired location; this is the demonstration of the Geo-PNT's power and the simplicity of its use. The second measures the final error of the system as a whole; this is how success is measured for the system. Other smaller programs were written, but they were merely tools with which to reach and test these two end goals.

Motion Control Program
Program Design: The purpose of the program is to demonstrate a use for the Geo-PNT. While not a full SATCOM-On-The-Move application, this simulates one in a way that is easily measured: pointing a camera at a fixed, known location simulates pointing a satellite antenna at a geosynchronous satellite. Using only data as it comes from the Geo-PNT, the program must point correctly regardless of the system's attitude or location.

Pitch and Yaw Calculations: Only pitch and heading are required to point towards a location in three-dimensional space. The chosen focus point is in Earth-Centered-Earth-Fixed (ECEF) coordinates, and the location of the box is in ECEF as well. The orientation of the box is referenced to the local tangent plane in East, North, and Up (ENU). All of these pieces of information must first be converted to the same coordinate frame, namely the frame of the box's orientation, in which the mount exists. From there the steps are simple: convert all ECEF points to ENU and then rotate them to match the Geo-PNT's orientation.

The local tangent plane is a function of the current location on the earth. It is conveniently a 90 degree shift from the longitude and latitude at the current location. The current location is converted to LLA from ECEF [3] to create the transformation matrix to convert to ENU [4] as seen in Figure 3. The second conversion is to get the azimuth and pitch in reference to the mount. The difference between this coordinate plane and ENU is the attitude reported by the Geo-PNT. These pitch, roll, and yaw values make up another transformation matrix to convert both points [5]. Once this is done, the two points are subtracted from each other to form a pointing vector. This vector is used to get the pitch and yaw for the mount [4]. Note that this vector has not changed at all in any of the conversions.
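To make these steps concrete, the sketch below shows the ECEF-difference-to-ENU rotation [4] and the extraction of pan (azimuth) and tilt (elevation) from the resulting pointing vector. It is a simplified sketch: the rotation of the ENU vector into the mount's body frame using the reported yaw, pitch, and roll [5] is assumed to have already been applied, and the exact angle conventions depend on the Geo-PNT's output definitions.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Rotate an ECEF difference vector (target minus observer) into the local
// East-North-Up frame at the observer's geodetic latitude/longitude (radians) [4].
Vec3 ecefDeltaToEnu(const Vec3& d, double lat, double lon) {
    Vec3 enu;
    enu.x = -std::sin(lon) * d.x + std::cos(lon) * d.y;                                   // East
    enu.y = -std::sin(lat) * std::cos(lon) * d.x - std::sin(lat) * std::sin(lon) * d.y
            + std::cos(lat) * d.z;                                                        // North
    enu.z =  std::cos(lat) * std::cos(lon) * d.x + std::cos(lat) * std::sin(lon) * d.y
            + std::sin(lat) * d.z;                                                        // Up
    return enu;
}

// Convert a pointing vector (already rotated from ENU into the mount's frame
// using the reported attitude [5]) into pan and tilt angles in degrees.
void pointingAngles(const Vec3& v, double& panDeg, double& tiltDeg) {
    const double rad2deg = 180.0 / 3.14159265358979323846;
    panDeg  = std::atan2(v.x, v.y) * rad2deg;                   // pan from the mount's forward (y) axis toward its right (x) axis
    tiltDeg = std::atan2(v.z, std::hypot(v.x, v.y)) * rad2deg;  // tilt above the mount's horizontal plane
}
```

As noted above, the pointing vector itself is unchanged by these rotations; only the frame in which its components are expressed changes.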

Test: Testing the program is a complex process. The Geo-PNT must first be calibrated and placed outdoors in order to receive good data. Then a pointing location must be chosen with the same precision as the Geo-PNT provides, and the camera is set up to check whether the desired point is actually centered on screen. To simulate this process, log files were fed from a computer over Ethernet (simulating the Geo-PNT) and various scenarios were taken into account.

Error Measurement Program
The error measurement program is a custom tool built to fulfill the customer and engineering requirement of measuring the error of the system, but it does not have its own specific requirements.


It is not a deliverable, simply the method by which error is determined. This program is the result of a constraint imposed upon the project: there must be a way to measure the error of the system. When the system takes video of objects that may be miles away, the only feasible way to determine error is from that video.

In order to use the tool, a computer with a preconfigured Visual Studio environment is needed. This environment was chosen as the simplest way to quickly and accurately develop the tool, given the non-deliverable nature of the program. The main purpose of this tool is to report the error of the system over time (referenced to the video time) in both a textual format, such as CSV, and a visual format to be used in the customer-directed video.

Error measurements could be done manually, by determining the distance that each pixel in the video represents and comparing them in image or video editing software, or automatically, by developing a video processing program. The latter option was determined to be the only feasible choice, as manually transcribing any respectable length of video would consume an extremely large amount of human resources.

In order to create the error measurement program, several different options were considered. Building this tool from scratch would take too long to be feasible for the project, so a toolkit was found on which to build. The OpenCV toolkit and libraries were chosen because of their ability to process entire videos at once. Once the environment was in place, open-source object tracking software was used to create the initial base of the program. This allowed for the tracking of general moving objects in a video. Since the pointing target's location would always be changing in relation to the system, the theoretical error margin would also be moving, allowing this software to track the target. In order to ensure that the proper object was tracked, sensitivity controls in the program were customized based on prior knowledge of the object being tracked.

Once the objects were being tracked successfully, the software determined the error of the system by first calculating the number of pixels that the object was from the center of the video using the Pythagorean theorem. Next, the degrees of distance that each pixel represented were calculated using a series of tests with the camera at a known zoom and a known distance from the target. Once both of these values were known, the error of the system in degrees from the target was calculated by multiplying the two together. OpenCV functions were then used to plot this error, as well as a visual indicator, over the initial video in order to clearly represent the error visually. This error data was also printed to a CSV file for later analysis and graph creation.

To test this program, various moving objects were used in order to determine a value of sensitivity that would accurately track any object, and also to ensure that the error was correctly determined from the tracked object and the center point of the video.
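The per-frame error calculation can be sketched as follows. The tracked target position is assumed to come from the OpenCV-based tracker (it is stubbed out here), and the degrees-per-pixel value is the calibration constant described above; the file name and numeric value are placeholders rather than our actual test parameters.

```cpp
#include <opencv2/opencv.hpp>
#include <fstream>
#include <cmath>

int main() {
    cv::VideoCapture video("tracking_test.avi");     // placeholder file name
    std::ofstream csv("error.csv");
    const double degreesPerPixel = 0.01;              // placeholder calibration constant
    const double fps = video.get(cv::CAP_PROP_FPS);

    cv::Mat frame;
    int frameIndex = 0;
    while (video.read(frame)) {
        cv::Point2f center(frame.cols / 2.0f, frame.rows / 2.0f);

        // In the real tool this point comes from the object tracker; here it is
        // a stand-in for the tracked target's pixel location in this frame.
        cv::Point2f target = center;

        // Pixel distance from the frame center (Pythagorean theorem), then
        // converted to degrees with the calibration constant.
        const double dx = target.x - center.x;
        const double dy = target.y - center.y;
        const double degreeError = std::sqrt(dx * dx + dy * dy) * degreesPerPixel;

        // Textual output (CSV) and a visual indicator overlaid on the frame.
        csv << frameIndex / fps << "," << degreeError << "\n";
        cv::line(frame, center, target, cv::Scalar(0, 255, 0), 1);
        cv::circle(frame, target, 10, cv::Scalar(0, 0, 255), 2);
        ++frameIndex;
    }
    return 0;
}
```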

Results and Discussion The results captured by our system were in the form of video footage obtained for several different target locations. Testing and results for each of our subcomponents were conducted independently of our overall system results, as discussed in the previous section.

Different locations were chosen to validate our system and verify that we can track any given location based upon its fixed coordinates. Since it was sometimes difficult to obtain accurate coordinates for a target, some of our "target locations" were points in the middle of a parking lot, where we could park with the Geo-PNT and acquire the coordinates of that location. Example target locations we tracked include Mark Ellingson Hall, which is one of the tallest dorm buildings on the RIT campus, a blue emergency beacon in one of the parking lots, and a parked vehicle with an object on the roof that we could track as we navigated around the parking lot.


Figure 4: Error Measurement Example


Figure 5: Ellingson Test Degrees of Error (plot of Degrees Error over Time: error in degrees versus time in seconds, showing the measured Degrees Error against the Acceptable Error threshold)

The above plot is a best-case scenario from a test video tracking Ellingson Hall, and even here we are outside of the acceptable error margin. The pointing algorithm was found to be the most likely source of the problem: the system was consistent and repeatable, but it was clear that the errors came from the pointing algorithm. The system pointed towards the correct general area, but was far from ideal.

Conclusions and Recommendations The mount, battery, enclosure, and PCB all met or surpassed expectations. The azimuth mount was consistent and its behavior could be mapped to a simple linear function. The battery powered the system, with the mount moving, for almost 10 hours of continuous use. The enclosure passed all tests and provided a safe housing for the other subsystems. The realized solution for the pointing algorithm is still recommended, but it did not yield as accurate a result as intended. Although the calculations were off, they are very consistent and fairly close, and can likely be remedied once a missing factor or concept is discovered. Another possible improvement concerns the motor capabilities: since the precision of the Geo-PNT is very high and its loss of accuracy over time is specified to be very low, higher-precision motors could allow for more precise pointing and a more precise measurement of the error.

REFERENCES
[1] "PT785-S Pan & Tilt System." ServoCity. N.p., n.d. Web. 12 May 2015. <https://www.servocity.com/html/pt785-s_pan___tilt_system.html#.VRSekPnF-So>.
[2] "Geodetic Datum." Wikipedia. Wikimedia Foundation, n.d. Web. 12 May 2015.
[3] "Datum Transformations of GPS Positions." U-blox, 5 July 1999. Web. 5 May 2015.
[4] Subirana, J. S., J. J. Zornoza, and M. Hernández-Pajares. "Transformations between ECEF and ENU Coordinates." Navipedia, 13 Jan. 2013. Web. 12 May 2015.
[5] LaValle, Steven M. "Yaw, Pitch, and Roll Rotations." Cambridge University Press, n.d. Web. 12 May 2015.

ACKNOWLEDGEMENTS
Leo Farnand – Faculty Guide
John Fischer – Spectracom Representative
Paul Myers – Spectracom Representative
Dr. Agamemnon Crassidis – Associate Professor
Dr. Josh Faber – Gravitational Computation Expert
Dr. Adriana Becker-Gomez – Faculty Guide/Algorithm Expert
Steve Verzuilli – Motor Applications Expert
