992 results for "size accuracy"


Relevance: 20.00%

Abstract:

This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, the camera acquires a sequence of images of the sea floor. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Then, their correspondences are found in the next image of the sequence. Solving the correspondence problem accurately and reliably is a difficult task in computer vision. We consider different alternatives to solve this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize each feature/match pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matches in the next frame. Our aim is then to recover the apparent motion of the camera from these features. Although accurate texture analysis is devoted to the matching procedure, some false matches (known as outliers) may still appear among the correct correspondences.
For this reason, a robust estimation technique is used to estimate the planar transformation (homography) which explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements of a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic grows, local image-alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information becomes available, and the system can readjust its position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting its trajectory when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering provides an adequate framework for dealing with position estimates and their associated covariances. Finally, experimental results are presented. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system; it enables a quantitative measurement of the accumulated errors of the mosaics created in the lab. Results obtained from real sea trials using the URIS underwater vehicle are then shown.
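The robust estimation step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it uses a plain Direct Linear Transform (DLT) fit inside a RANSAC loop, and the iteration count and inlier threshold are hypothetical defaults.

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: solve for the 3x3 homography H that
    # maps src points to dst points, from >= 4 correspondences.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    # Robust estimation: fit H to random 4-point samples, keep the
    # hypothesis with the most inliers, then refit on all inliers.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    n = len(src)
    best = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        proj = np.c_[src, np.ones(n)] @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return homography_dlt(src[best], dst[best])
```

Because false matches (outliers) disagree with any homography fitted to correct correspondences, the largest consensus set isolates the dominant image motion, as described in the abstract.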


The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow-control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments confining the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is an unavoidable limit for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation. This method gives sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations.
To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for evaluating CLRj is also presented. We conclude that, by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
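The formula-based convolution at the heart of this comparison can be illustrated with a small sketch. This is a generic bufferless model with assumed two-state on/off sources, not the ECA itself: the distribution of the total offered load is obtained by convolving the per-source demand distributions, and the congestion probability is the tail mass above the link capacity.

```python
import numpy as np

def source_dist(peak_rate, activity):
    # Two-state on/off source: transmits `peak_rate` bandwidth units
    # with probability `activity`, otherwise nothing.
    d = np.zeros(peak_rate + 1)
    d[0] = 1.0 - activity
    d[peak_rate] = activity
    return d

def aggregate_load(source_dists):
    # Formula-based convolution: the distribution of the total offered
    # load is the convolution of the per-source demand distributions.
    total = np.array([1.0])
    for d in source_dists:
        total = np.convolve(total, d)
    return total

def congestion_probability(load_dist, capacity):
    # Bufferless model: congestion occurs when the offered load
    # exceeds the link capacity (tail mass of the load distribution).
    return load_dist[capacity + 1:].sum()
```

For example, two sources with peak rate 2 and activity 0.5 on a link of capacity 2 congest only when both are active, giving PC = 0.25. The per-connection convolutions accumulate as sources are added, which is exactly the cost the ECA's class grouping and multinomial evaluation avoid.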


This paper examines the relationship between speech discrimination ability and the accuracy and size of vowel maps.


The EU is in the process of negotiating its 2014-20 financial framework. Failure to reach an agreement would imply a delay in the preparation of the strategic plans each member state puts together to explain how it will use Structural and Cohesion Funds. Even if solutions are found – for example annual renewals of the budget based on the previous year's figures – there will be political and institutional costs. EU leaders have too often and too forcefully advocated the use of the EU budget for growth to be able to drop the idea without consequences.

• The overwhelming attention paid to the size of the budget is misplaced. EU leaders should instead aim to make the EU budget more flexible, safeguard it from future political power struggles, and reinforce assessment of the impact of EU-funded growth policies.

• To improve flexibility, a commitment device should be created that places the EU budget above continuous political disagreement. We suggest the creation of a European Growth Fund, on the basis of which the European Commission should be allowed to borrow on capital markets to anticipate pre-allocated EU expenditure, such as Structural and Cohesion Funds. Markets would thus be a factor in EU budget policymaking, with a potentially disciplining effect. Attaching conditionality to this type of disbursement appears legitimate, as capital delivered in this way is a form of assistance.


Wetlands in southern Alberta are often managed to benefit waterfowl and cattle production. Effects on other species usually are not examined. I determined the effect of managed wetlands on upland-nesting shorebirds in southern Alberta by comparing numbers of breeding willets (Catoptrophorus semipalmatus), marbled godwits (Limosa fedoa), and long-billed curlews (Numenius americanus) among areas of managed wetlands, natural wetland basins, and no wetland basins from 1995 to 2000. Surveys were carried out at 21 sites three times each year. Nine to ten of these areas (each 2 km2) were searched for nests annually from 1998–2000. Numbers of willets and marbled godwits and their nests were always highest in areas with managed wetlands, probably because almost all natural wetland basins were dry in this region in most years. Densities of willets seen during pre-incubation surveys averaged 2.3 birds/km2 in areas of managed wetlands, 0.4 in areas of natural wetland basins, and 0.1 in areas with no wetland basins. Nest densities of willets (one search each season) averaged 1.5, 0.9, and 0.3 nests/km2 in areas of managed, natural, and no wetland basins, respectively. Similarly, pre-incubation surveys averaged 1.6, 0.6, and 0.2 godwits/km2 in areas of managed, natural, and no wetland basins, and 1.2, 0.3, and 0.1 godwit nests/km2. For long-billed curlews, pre-incubation surveys averaged 0.1, 0.2, and 0.1 birds/km2, and 0, 0.2, and 0 nests/km2. Nest success was similar in areas with and without managed wetlands. Shallow managed wetlands in this region appear beneficial to willets and marbled godwits, but not necessarily to long-billed curlews. Only 8% of marked willets and godwits with nests in the area were seen or heard during surveys, compared with 29% of pre-laying individuals and 42% of birds with broods. 
This suggests that a low and variable percentage of these birds is counted during breeding bird surveys, likely limiting the ability of such surveys to adequately monitor populations of these species.


The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture–recapture information. It differs from the distance and multiple-observer methods in that not all birds are required to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated as a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused both by the very low detection probabilities of distant individuals and by the very low detection probabilities of individuals with low singing rates.
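The equivalence to closed capture–recapture data can be illustrated with a small simulation. This is a generic M0-style sketch under an assumed homogeneous per-interval singing probability, not the authors' estimator: each bird's detection history over the subintervals is simulated, and abundance is estimated by profiling the M0 likelihood over the population size.

```python
import numpy as np
from math import comb, log

def simulate_histories(n_birds, p_sing, n_intervals=4, seed=1):
    # Each bird sings (and is detected) independently in each
    # subinterval with probability p_sing; birds never detected
    # leave no history, mimicking a real point count.
    rng = np.random.default_rng(seed)
    hist = rng.random((n_birds, n_intervals)) < p_sing
    return hist[hist.any(axis=1)]

def estimate_abundance_m0(histories, n_intervals=4):
    # Closed-population M0 model: one shared detection probability p.
    # Profile the likelihood over N, plugging in p-hat = t / (N * k).
    D = len(histories)          # number of distinct birds detected
    t = int(histories.sum())    # total number of detections
    k = n_intervals
    best_N, best_ll = D, -np.inf
    for N in range(D, 20 * D + 1):
        p = t / (N * k)
        if not 0.0 < p < 1.0:
            continue
        ll = log(comb(N, D)) + t * log(p) + (N * k - t) * log(1.0 - p)
        if ll > best_ll:
            best_N, best_ll = N, ll
    return best_N
```

With a high homogeneous singing probability the estimate lands close to the true population, consistent with the abstract's finding; heterogeneous singing rates violate the shared-p assumption and bias such estimators low.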


Avian communities in cloud forests have high levels of endemism and are at major risk given the accelerated rate of habitat fragmentation. Nevertheless, the response of these communities to changes in fragment size remains poorly understood. We evaluated species richness, bird community density, community composition, and dominance as indicators of the response to fragment size in a fragmented cloud forest landscape in central Veracruz, Mexico. Medium-sized fragments had statistically higher than expected species richness and more even communities, which may be a reflection of the intermediate disturbance hypothesis, in which medium-sized fragments are exploited by both forest and disturbance-associated species. Bird density also reached higher values in medium-sized fragments, which may indicate a carrying capacity in this habitat. However, large cloud forest fragments had a distinct taxonomic and functional composition, attributable to an increased number of understory insectivore species and canopy frugivores. By comparison, omnivorous species associated with human-altered habitats were more abundant in smaller fragments. Hence, although medium-sized cloud forest fragments had higher species richness and high bird density, large forest tracts maintained a distinct avian community composition, particularly of insectivorous and frugivorous species. Furthermore, the underlying response to fragmentation can only be properly addressed when contrasting several community attributes, such as richness, density, composition, and species dominance. Therefore, cloud forest conservation should aim to preserve the remaining large forest fragments to maintain comprehensive avian communities and avoid local extinctions.


Using a flexible chemical box model with full heterogeneous chemistry, intercepts of chemically modified Langley plots have been computed for the 5 years of zenith-sky NO2 data from Faraday in Antarctica (65°S). By using these intercepts as the effective amount in the reference spectrum, drifts in the zero of total vertical NO2 were much reduced. The error in the zero of total NO2 is ±0.03×10¹⁵ molec cm⁻² from one year to another. This error is small enough to determine trends in midsummer and any variability in denoxification between midwinters. The technique also suggests a more sensitive method for determining N2O5 from zenith-sky NO2 data.
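The Langley-plot step can be sketched generically. This is a plain least-squares fit on assumed synthetic data, and it omits the chemical modification (the box-model simulation of diurnal NO2 variation) that the study uses; it only shows how the intercept recovers the amount in the reference spectrum.

```python
import numpy as np

def langley_fit(amf, dscd):
    # Differential slant columns follow dSCD = VCD * AMF - SCD_ref,
    # so a straight-line fit against air-mass factor yields the
    # vertical column (slope) and the amount in the reference
    # spectrum (negated intercept).
    slope, intercept = np.polyfit(amf, dscd, 1)
    return slope, -intercept
```

Once the reference amount is pinned down this way, it can be added back to every differential slant column, which is what suppresses the year-to-year drifts in the zero of total NO2.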


We show that nearly monodisperse crystalline fibrils of dibenzylidene sorbitol can be obtained by preparation in a polymeric solvent subjected to extended shear flow.


Insect returns from the UK's Doppler weather radars were collected in the summers of 2007 and 2008 to ascertain their usefulness in providing information about boundary-layer winds. Such observations could be assimilated into numerical weather prediction models to improve forecasts of convective showers before precipitation begins. Significant numbers of insect returns were observed during daylight hours on a number of days throughout this period, when they were detected at up to 30 km range from the radars and up to 2 km above sea level. The range of detectable insect returns was found to vary with time of year and temperature; there was also a very weak correlation with wind speed and direction. Use of a dual-polarized radar revealed that the insects did not orient themselves at random, but showed distinct evidence of common orientation on several days, sometimes at an angle to their direction of travel. Observation-minus-background residuals of wind profiles showed greater bias and standard deviation than those of other wind measurement types, which may be due to the insects' headings and airspeeds and to imperfect data extraction. The method used here, similar to the Met Office's procedure for extracting precipitation returns, requires further development, as clutter contamination remained one of the largest error contributors. Wind observations derived from the insect returns would then be useful for data assimilation applications.
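Deriving a wind estimate from such radial-velocity returns can be illustrated with a velocity–azimuth display (VAD) fit. This is a textbook sketch under an assumed uniform horizontal wind across the scan ring, not the Met Office's extraction procedure, and it ignores the insect heading/airspeed biases the abstract mentions.

```python
import numpy as np

def vad_wind(azimuth_deg, radial_velocity, elevation_deg=1.0):
    # For a uniform horizontal wind (u, v), the radial velocity on a
    # range ring varies sinusoidally with azimuth:
    #   v_r = (u*sin(az) + v*cos(az)) * cos(elevation)
    # Fit the sinusoid by least squares to recover u and v.
    az = np.radians(azimuth_deg)
    cos_el = np.cos(np.radians(elevation_deg))
    A = np.column_stack([np.sin(az), np.cos(az)])
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(radial_velocity), rcond=None)
    u, v = u / cos_el, v / cos_el
    speed = np.hypot(u, v)
    direction = np.degrees(np.arctan2(-u, -v)) % 360  # met convention: direction wind comes FROM
    return speed, direction
```

Clutter and common insect orientation distort the fitted sinusoid, which is one way the bias and standard deviation noted in the abstract arise.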


Analysis of the vertical velocity of ice crystals, observed with a 1.5 μm Doppler lidar in a continuous sample of stratiform ice clouds over 17 months, shows that the distribution of Doppler velocity varies strongly with temperature, with mean velocities increasing from 0.2 m/s at −40°C to 0.6 m/s at −10°C due to particle growth and broadening of the size spectrum. We examine the likely influence of crystals smaller than 60 μm by forward modelling their effect on the area-weighted fall speed and comparing the results to the lidar observations. The comparison strongly suggests that the concentration of small crystals in most clouds is much lower than measured in situ by some cloud droplet probes. We argue that the discrepancy is likely due to shattering of large crystals on the probe inlet, and that numerous small particles should not be included in numerical weather and climate model parameterizations.
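The forward-modelling argument can be illustrated with a toy calculation. The power-law coefficients below are placeholders, not values from the paper; the point is that area (D²) weighting lets a numerous small-crystal mode drag down the modelled Doppler velocity, so the observed velocities constrain how many small crystals can really be present.

```python
import numpy as np

def area_weighted_fall_speed(diam_mm, number, a=1.0, b=0.6):
    # Lidar Doppler velocity is roughly backscatter-weighted; for ice,
    # backscatter scales with particle area (~D^2). Weight an assumed
    # power-law fall speed v(D) = a * D**b by N(D) * D**2.
    D = np.asarray(diam_mm, dtype=float)
    N = np.asarray(number, dtype=float)
    v = a * D**b
    w = N * D**2
    return float((w * v).sum() / w.sum())
```

Adding, say, a thousand 50 μm crystals per millimetre-sized crystal lowers the area-weighted speed well below that of the large mode alone; since the lidar observations do not show such slow velocities, the in-situ small-crystal concentrations are implausibly high.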


SMPS and DMS500 analysers were used to measure particulate size distributions in the exhaust of a fully annular aero gas turbine engine at two operating conditions, to compare the instruments and analyse sources of discrepancy. A number of different dilution ratios were utilised for the comparative analysis, and a Dekati hot diluter operating at a temperature of 623 K was also utilised to remove volatile PM prior to measurements being made. Additional work focused on observing the effect of varying the sample-line temperatures to ascertain the impact. Explanations are offered for most of the trends observed, although a new, repeatable event identified in the range from 417 K to 423 K – where there was a three-order-of-magnitude increase in the nucleation mode of the sample – requires further study.