880 results for 090904 Navigation and Position Fixing
Abstract:
Precise, up-to-date and increasingly detailed road maps are crucial for various advanced road applications, such as lane-level vehicle navigation and advanced driver assistance systems. With very high resolution (VHR) imagery now available from digital airborne sources, data acquisition, collection and updating would be greatly facilitated if road details could be automatically extracted from aerial images. In this paper, we propose an effective approach to detecting road lane information from aerial images using object-oriented image analysis. Our algorithm starts by constructing the DSM and true orthophotos from the stereo images. Road lane details are then detected using an object-oriented, rule-based image classification approach. Because other objects share similar spectral and geometrical attributes, the extracted road lanes are filtered against the road surface obtained by a progressive two-class decision classifier. The generated road network is evaluated using datasets provided by the Queensland Department of Main Roads. The evaluation shows completeness values that range between 76% and 98% and correctness values that range between 82% and 97%.
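An object-oriented, rule-based classification of this kind amounts to applying attribute thresholds to segmented image objects. The sketch below is a minimal illustration of the idea only; the attribute names and threshold values are hypothetical and are not the rule set used in the paper.

```python
# Hypothetical rule-based classifier for segmented image objects.
# Lane markings are assumed to be bright, thin and elongated; the
# thresholds below are invented for illustration.

def classify_object(obj):
    """Label one segmented object using simple attribute rules."""
    is_bright = obj["mean_intensity"] > 200       # lane paint is near-white
    is_elongated = obj["length"] / obj["width"] > 5.0
    is_thin = obj["width"] < 0.5                  # metres at ground resolution
    if is_bright and is_elongated and is_thin:
        return "lane_marking"
    return "background"

objects = [
    {"mean_intensity": 230, "length": 12.0, "width": 0.3},  # painted line
    {"mean_intensity": 120, "length": 4.0, "width": 2.0},   # asphalt patch
]
labels = [classify_object(o) for o in objects]
```

In a real object-oriented workflow the rules would be applied after segmentation and combined with contextual filtering against the extracted road surface, as the paper describes.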
Abstract:
The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and cornea. As the principal refractive component of the eye, the cornea is a major element of the eye’s optics. The optical properties of the cornea are known to be susceptible to the pressure exerted by the eyelids. Eyelids that are abnormal due to disease exert altered pressure on the ocular surface because of changes in the shape, thickness or position of the eyelids. Normal eyelids also cause corneal distortions that are most often noticed when they are resting closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution. The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated. Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring after greater angles of downward gaze.
Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature, and position were found to be influential in the magnitude of eyelid-induced corneal changes. Refractively these corneal changes are clinically and optically significant with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Due to the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze should be avoided for at least 30 minutes before corneal and refractive assessment requiring high accuracy. Novel procedures were developed to use a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens to measure eyelid pressure. A hydrostatic calibration system was constructed to convert raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, it was not significant over the measurement time of 30 seconds used for eyelid pressure, as long as the length of the calibration and measurement recordings were matched. The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed. 
Using this system, eyelid pressure increases were observed when the upper eyelid was placed on the sensor, and a significant increase was apparent when the eyelid pressure was increased by pulling the upper eyelid tighter against the eye. For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and pressure sensor over the whole pressure cell width of 1.14 mm. Using thin pressure-sensitive carbon paper placed under the eyelid, a contact imprint was measured and this width was used for the second model of contact. Lastly, as Marx’s line has been implicated as the region of contact with the ocular surface, its width was measured and used as the region of contact for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width) and 55 ± 26 mmHg (Marx’s line). The carbon imprints using Pressurex-micro confirmed previous suggestions that a band of the eyelid margin has primary contact with the ocular surface, and provided the best estimate of the contact region and hence eyelid pressure. Although it is difficult to directly compare the results with previous eyelid pressure measurement attempts, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer. The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure.
Measurements for the upper eyelid combined with estimates of the contact region between the cornea and the eyelid enabled quantification of the upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
Abstract:
Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
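The calibration step described above, converting the sensor's raw digital output to pressure units, can be illustrated with a simple first-order fit to hydrostatic calibration data. The calibration pairs and the linear form below are invented for illustration and are not the actual calibration of this study.

```python
# Hypothetical calibration of a piezoresistive sensor: fit a first-order
# curve (gain and offset) to paired raw counts and applied pressures, then
# use it to convert new readings. All numbers below are made up.
import numpy as np

raw_counts = np.array([10.0, 55.0, 102.0, 148.0, 195.0])  # sensor output
pressure_mmHg = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # applied pressure

gain, offset = np.polyfit(raw_counts, pressure_mmHg, 1)

def counts_to_mmHg(counts):
    """Convert raw sensor counts to pressure using the fitted curve."""
    return gain * counts + offset

reading = counts_to_mmHg(125.0)   # a new raw reading, now in mmHg
```

A hydrostatic rig like the one described would supply the known applied pressures; matching the durations of calibration and measurement recordings (as the related thesis notes) limits the effect of sensor drift.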
Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In “noise-free” environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics are used to assess enhancement performance for intelligibility, not speech recognition, making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally-efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve ASR word accuracy, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction, as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of the clean speech magnitudes which are typically used in ASR feature extraction.
Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy across a wide range of SNRs. Throughout the dissertation, consideration is given to practical implementation in vehicular environments, which resulted in two novel contributions – a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus, which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
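As a rough illustration of the single-channel spectral subtraction this work builds on, the sketch below implements the classic magnitude-domain form, reusing the noisy phase. It is not the LIMA or phase-aware variants the dissertation proposes; the frame length, spectral floor and oracle noise estimate are all illustrative assumptions.

```python
# Minimal magnitude-domain spectral subtraction over a single frame.
# The noise magnitude estimate here is an oracle (taken from the known
# noise) purely so the example is self-contained.
import numpy as np

def spectral_subtract(noisy_frame, noise_mag, floor=0.01):
    """Subtract a noise magnitude estimate from one frame's spectrum."""
    spec = np.fft.rfft(noisy_frame)
    mag, phase = np.abs(spec), np.angle(spec)
    # Clamp to a spectral floor to avoid negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy_frame))

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(256) / 8000)  # 440 Hz tone
noise = 0.1 * rng.standard_normal(256)
noise_mag = np.abs(np.fft.rfft(noise))                   # oracle estimate
enhanced = spectral_subtract(clean + noise, noise_mag)
```

Reusing the noisy phase is exactly the simplification the dissertation challenges: its phase-aware formulation works in the complex frequency domain instead, to obtain more accurate clean-magnitude estimates for ASR features.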
Abstract:
The paper discusses robot navigation from biological inspiration. The authors sought to build a model of the rodent brain that is suitable for practical robot navigation. The core model, dubbed RatSLAM, has been demonstrated to have exactly the same advantages described earlier: it can build, maintain, and use maps simultaneously over extended periods of time and can construct maps of large and complex areas from very weak geometric information. The work contrasts with other efforts to embody models of rat brains in robots. The article describes the key elements of the known biology of the rat brain in relation to navigation, and how the RatSLAM model captures the ideas from biology in a fashion suitable for implementation on a robotic platform. The paper then outlines RatSLAM's performance in two difficult robot navigation challenges, demonstrating how a cognitive robotics approach to navigation can produce results that rival other state-of-the-art approaches in robotics.
Abstract:
Purpose: The purpose of this paper is to gain a better understanding of the types of relationships that exist along the supply chain and the capabilities that are needed to manage them effectively. ---------- Design/methodology/approach: This is exploratory research, as there has been little empirical research into this area. Quantitative data were gathered by using a self-administered questionnaire, using the Australian road freight industry as the context. There were 132 usable responses. Inferential and descriptive analysis, including factor analysis, confirmatory factor analysis and regression analysis, was used to examine the predictive power of relational factors in inter-firm relationships. ---------- Findings: Three factors were identified as having significant influence on relationships: sharing, power and interdependency. “Sharing” is the willingness of the organisation to share resources with other members of the supply chain. “Power” relates to exercising control based on experience, knowledge and position in the supply chain. “Interdependency” is the relative levels of dependency along the supply chain. ---------- Research limitations/implications: The research only looks at the Australian road freight industry; a wider sample including other industries would help to strengthen the generalisability of the findings. ---------- Practical implications: When these factors are correlated to the types of relationship, arm's length, cooperation, collaboration and alliances, managerial implications can be identified. The more road freight businesses place importance on power, the less they will cooperate. The greater the importance of sharing and interdependency, the greater is the likelihood of arm's length relationships. ---------- Originality/value: This paper makes a contribution by describing empirical work conducted in an under-researched but important area – supply chain relationships in the Australian road freight industry.
Abstract:
This paper presents an automated system for 3D assembly of tissue engineering (TE) scaffolds made from biocompatible microscopic building blocks with relatively large fabrication error. It focuses on the pin-into-hole force control developed for this demanding microassembly task. A beam-like gripper with integrated force sensing at a 3 mN resolution and a 500 mN measuring range is designed, and is used to implement an admittance force-controlled insertion using commercial precision stages. Vision-based alignment followed by insertion is complemented by a haptic exploration strategy using force and position information. The system demonstrates fully automated construction of TE scaffolds with 50 microparts whose dimensional error is larger than 5%.
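The admittance force-controlled insertion mentioned above can be sketched as a loop in which the commanded stage velocity is proportional to the force error, so the pin advances until a target contact force is reached. The gains, target force and linear contact model below are invented for illustration and are not the paper's controller.

```python
# Hypothetical admittance-type insertion controller: velocity command is
# proportional to force error. Forces in mN, positions in mm.

def admittance_step(z, f_measured, f_target=3.0, admittance=0.02, dt=0.05):
    """One control step: return the new vertical stage position (mm)."""
    v = admittance * (f_target - f_measured)   # mm/s per mN of force error
    return z + v * dt

# Simulated environment: contact force grows linearly once the pin touches
# the part at z = 1.0 mm (stiffness in mN per mm of interference).
z, stiffness = 0.0, 50.0
for _ in range(2000):
    f = max(0.0, (z - 1.0) * stiffness)
    z = admittance_step(z, f)
# The stage settles where the contact force equals the 3 mN target.
```

The appeal of admittance control for this task is that it converts a stiff position-controlled stage into a compliant one in software, which is what allows insertion despite part dimension errors larger than 5%.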
Abstract:
Starbug is an inexpensive, miniature autonomous underwater vehicle ideal for data collection and ecosystem surveys. Starbug is small enough to be launched by one person without the need for specialised equipment, such as cranes, and it operates with minimal to no human intervention. Starbug was one of the first autonomous underwater vehicles (AUVs) in the world where vision is the primary means of navigation and control. More details of Starbug can be found here: http://www.csiro.au/science/starbug.html
Abstract:
In recent months, extremes of Australia’s weather have killed a number of people and caused millions of dollars in losses. Unlike a manned aircraft or helicopter, which have restricted air time, a UAS or a group of UASs could provide 24-hour coverage of a disaster area, instrumented with infrared cameras to locate distressed people and relay information to emergency services. The solar-powered UAV is capable of carrying a 0.25 kg payload consuming 0.5 W, flying continuously at low altitude for 24 hours, collecting data and creating a spatial distribution. This system, named Green Falcon, is fully autonomous in navigation and power generation. Equipped with solar cells covering its wing, it retrieves energy from the sun to supply power to the propulsion system and control electronics, and charges the battery with the surplus energy. During the night, the only energy available comes from the battery, which discharges slowly until the next morning when a new cycle starts. The prototype airplane was exhibited at the Melbourne Museum from November 2009 to February 2010.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m).
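The "change in GPS position over time" speed estimate compared here can be sketched by dividing the great-circle distance between consecutive fixes by the sampling interval. The coordinates below are invented 1 Hz fixes, not study data.

```python
# Speed from consecutive GPS fixes: haversine distance / sampling interval.
# The two fixes below are hypothetical points roughly 10 m apart.
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

fixes = [(-27.4770, 153.0280), (-27.4770, 153.0281)]  # 1 s apart at 1 Hz
speed = haversine_m(*fixes[0], *fixes[1]) / 1.0       # metres per second
```

The Doppler-shift speed that the study found more accurate comes directly from the receiver's measured carrier frequency shift, so it avoids the position-differencing noise this estimate inherits.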
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, the use of a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners’ speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners’ range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to further improve the effectiveness of the suggested strategy.
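The idea of a weighted factor combining prior and current gradients can be sketched as a simple linear blend driving a speed prediction. The weighting, coefficients and functional form below are invented for illustration; the thesis does not publish this model in the abstract.

```python
# Hypothetical weighted-gradient speed predictor. Gradients in percent;
# positive gradients (uphill) reduce predicted speed, negative (downhill)
# increase it. All coefficients are made up.

def weighted_gradient(current, prior, w_prior=0.3):
    """Blend the current gradient with the one just traversed."""
    return (1 - w_prior) * current + w_prior * prior

def predicted_speed(level_speed, grad_factor, sensitivity=0.1):
    """Scale level-ground speed down on climbs and up on descents."""
    return level_speed * (1 - sensitivity * grad_factor)

g = weighted_gradient(current=5.0, prior=-2.0)   # 5% climb after a descent
v = predicted_speed(level_speed=4.0, grad_factor=g)  # m/s
```

A model of this shape captures the thesis's key observation that level-section speed carries over the influence of the preceding gradient, not just the current one.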
Abstract:
The removal of toxic anions has been achieved using hydrotalcite via two methods: (1) coprecipitation and (2) thermal activation. Hydrotalcites formed via the coprecipitation method, using solutions containing arsenate and vanadate up to pH 10, are able to remove more than 95% of the toxic anions (0.2 M) from solution. The removal of toxic anions in solutions with a pH of >10 reduces the removal uptake percentage to 75%. Raman spectroscopy showed multiple A1 stretching modes of V−O and As−O at 930 and 810 cm−1, assigned to vanadate and arsenate, respectively. Analysis of the intensity and position of the A1 stretching modes helped to identify the vanadate and arsenate species intercalated into the hydrotalcite structure. It has been determined that the 3:1 hydrotalcite structure predominantly intercalates anions into the interlayer region, while the 2:1 and 4:1 hydrotalcite structures show a large portion of anions being removed from solution by adsorption processes. Treatment of carbonate solutions (0.2 M) containing arsenate and vanadate (0.2 M) three times with thermally activated hydrotalcite has been shown to remove 76% and 81% of the toxic anions, respectively. Thermally activated hydrotalcites with Mg:Al ratios of 2:1, 3:1, and 4:1 have all been shown to remove 95% of arsenate and vanadate (25 ppm). At increased concentrations of arsenate and vanadate, the removal uptake percentage decreased significantly, except for the 4:1 thermally activated hydrotalcite. Thermally activated Bayer hydrotalcite has also been shown to be highly effective in the removal of arsenate and vanadate. The thermal activation of the solid residue component (red mud) removes 30% of anions from solution (100 ppm of both anions), while seawater-neutralized red mud removes 70%. The formation of hydrotalcite during the seawater neutralization process removes anions via two mechanisms, rather than the one observed for thermally activated red mud.
Abstract:
Following the completion of the draft Human Genome in 2001, genomic sequence data is becoming available at an accelerating rate, fueled by advances in sequencing and computational technology. Meanwhile, large collections of astronomical and geospatial data have allowed the creation of virtual observatories, accessible throughout the world and requiring only commodity hardware. Through a combination of advances in data management, data mining and visualization, this infrastructure enables the development of new scientific and educational applications as diverse as galaxy classification and real-time tracking of earthquakes and volcanic plumes. In the present paper, we describe steps taken along a similar path towards a virtual observatory for genomes – an immersive three-dimensional visual navigation and query system for comparative genomic data.
Abstract:
This paper presents a new rat animat, a rat-sized bio-inspired robot platform currently being developed for embodied cognition and neuroscience research. The rodent animat is 150 mm x 80 mm x 70 mm and has a differential drive, visual, proximity, and odometry sensors, an x86 PC, and an LCD interface. The rat animat has a bio-inspired rodent navigation and mapping system called RatSLAM which demonstrates the capabilities of the platform and framework. A case study is presented of the robot's ability to learn the spatial layout of a figure-of-eight laboratory environment, including its ability to close physical loops based on visual input and odometry. A firing field plot similar to rodent 'non-conjunctive grid cells' is shown by plotting the activity of an internal network. Having a rodent animat the size of a real rat allows exploration of embodiment issues such as how the robot's sensori-motor systems and cognitive abilities interact. The initial observations concern the limitations of the design as well as its strengths. For example, the visual sensor has a narrower field of view and is located much closer to the ground than for other robots in the lab, which alters the salience of visual cues and the effectiveness of different visual filtering techniques. The small size of the robot relative to corridors and open areas impacts on the possible trajectories of the robot. These perspective and size issues affect the formation and use of the cognitive map, and hence the navigation abilities of the rat animat.
Abstract:
This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
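Evaluating an instantiated likelihood expressed as a Gaussian mixture, as described above, reduces to a weighted sum of Gaussian densities. The sketch below shows the one-dimensional case; the weights, means and variances are placeholders, not parameters learned from real training data.

```python
# Evaluate p(x) under a 1-D Gaussian mixture model. In the filtering
# context described, such likelihoods would weight particles or be fused
# into a non-linear filter update; here we just compute the density.
import numpy as np

def gmm_likelihood(x, weights, means, variances):
    """Weighted sum of Gaussian densities at scalar x."""
    comps = (weights / np.sqrt(2 * np.pi * variances)
             * np.exp(-0.5 * (x - means) ** 2 / variances))
    return comps.sum()

weights = np.array([0.6, 0.4])      # mixture weights, sum to 1
means = np.array([0.0, 3.0])        # component means
variances = np.array([1.0, 0.5])    # component variances
p = gmm_likelihood(1.0, weights, means, variances)
```

In higher dimensions the same form holds with multivariate Gaussian densities, which is what makes GMM likelihoods convenient to integrate into existing non-linear filters.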