964 results for High Precision Positioning


Relevance: 80.00%

Abstract:

Permanent magnet synchronous machines with fractional-slot non-overlapping windings (FSPMSM), also known as tooth-coil winding permanent magnet synchronous machines (TCW PMSM), have been under intensive research during the last decade. Many optimization routines have been described and implemented in the literature to improve the characteristics of this machine type. This paper introduces a new technique for torque ripple minimization in TCW PMSMs and describes the sources of torque harmonics. Low-order torque harmonics can be harmful in a variety of applications, such as direct-drive wind generators, direct-drive light-vehicle electric motors, and some high-precision servo applications. The reduction of the lowest-order torque ripple harmonics (6th and 12th) is realized by a machine geometry optimization technique using finite element analysis (FEA). The presented technique includes stator geometry adjustment in TCW PMSMs with rotor surface permanent magnets and with rotor-embedded permanent magnets. The influence of permanent magnet skewing on torque ripple reduction and cogging torque elimination was also investigated, both separately and in combination with the stator optimization technique. As a result, a reduction of several torque ripple harmonics was attained.
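The 6th- and 12th-order ripple components discussed above can be quantified from any simulated or measured torque waveform with a discrete Fourier transform. The following sketch uses a synthetic waveform with illustrative harmonic amplitudes; it is not the paper's FEA workflow:

```python
import math

def harmonic_amplitude(torque, order):
    """Amplitude of the given harmonic order of a uniformly sampled
    torque waveform (one electrical period) via a direct DFT."""
    n = len(torque)
    re = sum(t * math.cos(2 * math.pi * order * k / n) for k, t in enumerate(torque))
    im = sum(-t * math.sin(2 * math.pi * order * k / n) for k, t in enumerate(torque))
    return 2 * math.sqrt(re * re + im * im) / n

# Synthetic torque: 10 Nm mean plus 6th and 12th harmonic ripple
# (illustrative amplitudes of 0.5 and 0.2 Nm).
N = 360
theta = [2 * math.pi * k / N for k in range(N)]
torque = [10 + 0.5 * math.cos(6 * t) + 0.2 * math.cos(12 * t) for t in theta]

ripple_6 = harmonic_amplitude(torque, 6)    # recovers ~0.5 Nm
ripple_12 = harmonic_amplitude(torque, 12)  # recovers ~0.2 Nm
```

In an optimization loop, these per-harmonic amplitudes would be the quantities driven toward zero by the geometry adjustments.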

Relevance: 80.00%

Abstract:

This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms, using the 1995 Ebola outbreak in the Democratic Republic of Congo (former Zaïre) as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we estimate them with several methods: least squares (LSQ), maximum likelihood, and MCMC. The motivation for choosing MCMC over other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function via the sum of squared residuals and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the sampled posterior chain, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for the SDE formulation is to account for the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. We analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is clearly non-zero, i.e. considerable stochasticity is introduced into the Ebola model, accounting for the situation where we know the model is not exact. As a result, we obtain parameter posteriors with larger variances; the model predictions then show larger uncertainties, in accordance with the assumption of an incomplete model.
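The deterministic part of the workflow described above (an SEIR-type ODE model, a sum-of-squared-residuals likelihood, and random-walk Metropolis sampling) can be sketched as follows. All parameter values, the noise level, and the choice to estimate only the transmission rate beta are illustrative assumptions, not the thesis's actual settings:

```python
import math, random

def seir(beta, sigma=1/9.0, gamma=1/7.0, s0=0.999, e0=0.001, days=60, dt=0.1):
    """Deterministic SEIR in population fractions, forward-Euler integration.
    sigma = 1/latent period, gamma = 1/infectious period (illustrative)."""
    s, e, i, r = s0, e0, 0.0, 0.0
    daily_i = []
    for _ in range(days):
        for _ in range(int(round(1 / dt))):
            ds = -beta * s * i
            de = beta * s * i - sigma * e
            di = sigma * e - gamma * i
            dr = gamma * i
            s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
        daily_i.append(i)
    return daily_i

def ssr(beta, data):
    """Sum of squared residuals between model and observed prevalence."""
    return sum((m - d) ** 2 for m, d in zip(seir(beta), data))

# Synthetic "observations" generated at beta = 0.35 with small Gaussian noise.
random.seed(1)
true_beta = 0.35
data = [x + random.gauss(0, 1e-4) for x in seir(true_beta)]

def log_like(beta):
    # Gaussian likelihood with observation s.d. 1e-4 and a flat prior.
    return -ssr(beta, data) / (2 * 1e-4 ** 2)

# Random-walk Metropolis on beta alone -- a minimal sketch.
beta, chain = 0.5, []
ll = log_like(beta)
for _ in range(2000):
    prop = beta + random.gauss(0, 0.01)
    ll_prop = log_like(prop) if prop > 0 else -math.inf
    if math.log(random.random()) < ll_prop - ll:
        beta, ll = prop, ll_prop
    chain.append(beta)

post_mean = sum(chain[500:]) / len(chain[500:])
r0 = post_mean * 7.0   # basic reproduction number for SEIR: R0 = beta / gamma
```

The chain converges toward the generating value and the posterior mean of beta then yields R0 > 1, consistent with an ongoing outbreak.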

Relevance: 80.00%

Abstract:

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the planet's surface. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within that data. Examining images to identify minerals, rocks, vegetation and other materials is one application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of labelled data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. This thesis concludes that using PCA increases the learning accuracy, especially for classification: SVM classifies the Cuprite data with high precision, while the SOM challenges SVM on datasets with a high level of noise (such as Baffin Island).
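The reduce-then-classify pipeline can be illustrated in miniature. To keep the sketch dependency-free, PCA is implemented by power iteration and a nearest-centroid rule stands in for the SVM; the two-band "spectra" are invented toy data:

```python
import random

def first_pc(data):
    """Leading principal component of mean-centred data via power iteration."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(100):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def project(row, means, v):
    return sum((x - m) * c for x, m, c in zip(row, means, v))

# Toy two-band "spectra" for two minerals (illustrative numbers only).
random.seed(0)
a = [[1 + random.gauss(0, .1), 1 + random.gauss(0, .1)] for _ in range(50)]
b = [[3 + random.gauss(0, .1), 3 + random.gauss(0, .1)] for _ in range(50)]
means, v = first_pc(a + b)

# Nearest-centroid classification in the 1-D PCA feature space.
ca = sum(project(r, means, v) for r in a) / len(a)
cb = sum(project(r, means, v) for r in b) / len(b)
pred = ["a" if abs(project(r, means, v) - ca) < abs(project(r, means, v) - cb)
        else "b" for r in a + b]
accuracy = sum(p == t for p, t in zip(pred, ["a"] * 50 + ["b"] * 50)) / 100
```

With real hyperspectral cubes, the same structure applies: fit the components on the spectra, project each pixel, then train the classifier in the reduced space.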

Relevance: 80.00%

Abstract:

Benzene, toluene, ethylbenzene and the xylene isomers, volatile organic compounds (VOCs) commonly referred to as BTEX, are known to produce harmful effects on human health and on plants, depending on the duration and levels of exposure. Benzene in particular is classified as a carcinogen, and exposure to concentrations above 64 g/m3 can be fatal within 5–10 minutes. Consequently, real-time measurement of BTEX in ambient air is essential to rapidly detect a hazard associated with their emission and to estimate the potential risks to living beings and the environment. In this thesis, a method for the real-time analysis of BTEX in ambient air was developed and validated. The method is based on direct air sampling coupled with tandem mass spectrometry using an atmospheric pressure chemical ionization source (direct APCI-MS/MS). Analytical validation demonstrated the sensitivity (method detection limit, MDL, 1–2 μg/m3), precision (coefficient of variation, CV < 10%), accuracy (> 95%) and selectivity of the method. Ambient air samples from an industrial waste landfill and from various automotive repair shops were analyzed with the developed method. Comparison with results obtained by on-line gas chromatography coupled with a flame ionization detector (GC-FID) showed good agreement. The method's capability for rapid assessment of the potential risks associated with BTEX exposure was demonstrated through a field study with a health risk analysis for workers in three automotive repair shops, and through experiments under simulated atmospheres.

The concentrations measured in the ambient air of the repair shops were 8.9–25 µg/m3 for benzene, 119–1156 µg/m3 for toluene, 9–70 µg/m3 for ethylbenzene and 45–347 µg/m3 for the xylenes. A total environmental daily dose of between 1.46 × 10-3 and 2.52 × 10-3 mg/kg/day was determined for benzene. The cancer risk associated with total environmental benzene exposure estimated for the workers studied was between 1.1 × 10-5 and 1.8 × 10-5. A new APCI-MS/MS method was also developed and validated for the direct analysis of octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5) in air and biogas. D4 and D5 are volatile cyclic siloxanes widely used as solvents in industrial processes and consumer products in place of tropospheric-ozone-precursor VOCs such as BTEX. Their ubiquitous presence in ambient air samples, due to massive use, calls for toxicity studies, and such studies require qualitative and quantitative trace analyses of these compounds. Moreover, the presence of traces of these substances in biogas hinders its use as a renewable energy source by causing costly damage to equipment. Analysis of siloxanes in biogas is therefore essential to determine whether the biogas requires purification before being used for energy production. The method developed in this study has good sensitivity (MDL 4–6 μg/m3), good precision (CV < 10%), good accuracy (> 93%) and high selectivity. It was also shown that, by using this method with hexamethyl-d18-disiloxane as an internal standard, detection and quantification of D4 and D5 in real biogas samples can be achieved with better sensitivity (MDL ~ 2 μg/m3), high precision (CV < 5%) and high accuracy (> 97%).

A variety of biogas samples collected at the sanitary landfill of the Complexe Environnemental de Saint-Michel in Montreal were successfully analyzed with this new method. The measured concentrations were 131–1275 µg/m3 for D4 and 250–6226 µg/m3 for D5. These results represent the first data reported in the literature on the concentrations of the siloxanes D4 and D5 in landfill biogas as a function of waste age.

Relevance: 80.00%

Abstract:

Motion instability is an important issue during the operation of towed underwater vehicles (TUV), and it considerably affects the accuracy of the high-precision acoustic instrumentation housed inside them. Of the various parameters responsible for this, disturbances from the tow-ship are the most significant. The present study focuses on the motion dynamics of an underwater towing system with ship-induced disturbances as the input, and in particular on an innovative system called two-part towing. The methodology involves numerical modelling of the tow system, consisting of the tow-cable and vehicle formulations. A previous study in this direction used a segmental approach for modelling the cable; although that model successfully predicted the heave response of the tow-body, instabilities were observed in the numerical solution. The present study devises a simple approach, the lumped mass spring model (LMSM), for the cable formulation. In this work, the traditional LMSM has been modified in two ways: first, by implementing advanced time integration procedures, and second, by using a modified beam model that needs only translational degrees of freedom to solve the beam equation. A number of time integration procedures, such as Euler, Houbolt, Newmark and HHT-α, were implemented in the traditional LMSM, and the strengths and weaknesses of each scheme were numerically estimated. In most previous studies, the hydrodynamic forces acting on the tow system, such as drag and lift, are approximated as analytical expressions in the velocities. This approach restricts these models to simple cylindrical towed bodies and may not be applicable to modern tow systems, which are diverse in shape and complexity. Hence, in this study, hydrodynamic parameters such as the drag and lift of the tow system are estimated using CFD techniques. To achieve this, a RANS-based CFD code has been developed.

Further, a new convection interpolation scheme for CFD simulation, called BNCUS, which is a blend of cell-based and node-based formulations, was proposed and numerically tested. Because the simulation takes considerable time to solve the fluid dynamic equations, a dedicated parallel computing setup has been developed. Two types of computational parallelism are explored in the current study: a model for shared-memory processors and one for distributed-memory processors. The shared-memory model was used for the structural dynamic analysis of the towing system, while the distributed-memory model was used to solve the fluid dynamic equations.
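The trade-off among the time integration schemes mentioned above can be seen on a single lumped cable node, i.e. an undamped mass-spring oscillator: explicit Euler spuriously gains energy, while the average-acceleration Newmark scheme (beta = 1/4, gamma = 1/2) conserves it. A minimal sketch with illustrative parameters:

```python
def newmark_sdof(m, k, x0, v0, h, steps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of an undamped single-DOF oscillator
    m*x'' + k*x = 0 (one lumped cable node)."""
    x, v = x0, v0
    a = -k / m * x
    for _ in range(steps):
        x_pred = x + h * v + h * h * (0.5 - beta) * a
        x_new = x_pred / (1 + beta * h * h * k / m)   # implicit solve, linear case
        a_new = -k / m * x_new
        v = v + h * ((1 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
    return x, v

def explicit_euler_sdof(m, k, x0, v0, h, steps):
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + h * v, v - h * k / m * x   # RHS uses old x, v
    return x, v

def energy(m, k, x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x

m, k, h, steps = 1.0, 4.0, 0.01, 5000   # illustrative values
e0 = energy(m, k, 1.0, 0.0)
xn, vn = newmark_sdof(m, k, 1.0, 0.0, h, steps)
xe, ve = explicit_euler_sdof(m, k, 1.0, 0.0, h, steps)

drift_newmark = abs(energy(m, k, xn, vn) - e0) / e0   # stays ~0
drift_euler = abs(energy(m, k, xe, ve) - e0) / e0     # grows without bound
```

This is exactly the kind of numerical instability a scheme comparison in a cable model is meant to expose; HHT-α generalizes Newmark by adding controllable numerical damping of high-frequency modes.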

Relevance: 80.00%

Abstract:

Detection of objects in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks store compressed information, requiring less memory and enabling high-speed processing. The hybrid method, which uses block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure values for the hybrid system of 88.74%, 91.09%, 81.66% and 89.90% respectively, demonstrating the efficiency of the proposed system.
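The pixel-based codebook stage can be sketched as follows: each pixel keeps a list of intensity-range codewords learned from training frames, and a new pixel is foreground when no codeword matches. The frame format and tolerance are illustrative simplifications of the cited approach:

```python
def train_codebook(frames, eps=10):
    """Per-pixel codebook: each codeword stores an intensity range [lo, hi].
    frames: list of equal-length grayscale pixel lists (illustrative format)."""
    n = len(frames[0])
    books = [[] for _ in range(n)]
    for frame in frames:
        for i, val in enumerate(frame):
            for cw in books[i]:
                if cw[0] - eps <= val <= cw[1] + eps:
                    cw[0], cw[1] = min(cw[0], val), max(cw[1], val)
                    break
            else:                       # no codeword matched: add a new one
                books[i].append([val, val])
    return books

def foreground_mask(frame, books, eps=10):
    """1 = foreground (no matching codeword), 0 = background."""
    return [0 if any(cw[0] - eps <= val <= cw[1] + eps for cw in books[i]) else 1
            for i, val in enumerate(frame)]

# Training: static background ~50, plus a flickering pixel (index 3)
# alternating between 50 and 200, which gets two codewords.
train = [[50, 52, 49, 50 if t % 2 else 200, 51] for t in range(10)]
books = train_codebook(train)

# New frame: an object (value ~140) covers pixels 0-1; the flicker is not
# falsely detected because both of its states are in the codebook.
mask = foreground_mask([140, 141, 50, 200, 52], books)   # [1, 1, 0, 0, 0]
```

The block-based stage works the same way on per-block descriptors (e.g. 2D-DCT coefficients) instead of single intensities, which is what makes it fast but coarse.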

Relevance: 80.00%

Abstract:

This report examines how to estimate the parameters of a chaotic system given noisy observations of its state behavior. Parameter estimation for chaotic systems is interesting because of possible applications in high-precision measurement and in other signal processing, communication, and control applications involving chaotic systems. We examine theoretical issues regarding parameter estimation in chaotic systems and develop an efficient estimation algorithm. We identify two properties that are helpful for performing parameter estimation on non-structurally stable systems. First, it turns out that most data in a time series of state observations contribute very little information about the underlying parameters of a system, while a few sections of data may be extraordinarily sensitive to parameter changes. Second, for one-parameter families of systems, we demonstrate that there is often a preferred direction in parameter space governing how easily trajectories of one system can "shadow" trajectories of nearby systems. This asymmetry of shadowing behavior in parameter space is proved for certain families of maps of the interval, and numerical evidence indicates that similar results may hold for a wide variety of other systems. Using these two properties, we devise an algorithm for parameter estimation. Standard techniques such as the extended Kalman filter perform poorly on chaotic systems because of divergence problems; the proposed algorithm achieves accuracies several orders of magnitude better than the Kalman filter and has good convergence properties for large data sets.
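The report's shadowing-based algorithm is more sophisticated, but the underlying point that short-horizon prediction errors remain informative in a chaotic system (while long trajectories diverge exponentially) can be illustrated with a one-step fit of the logistic map parameter; all values below are illustrative:

```python
import random

def logistic_series(r, x0=0.3, n=200):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

random.seed(2)
true_r = 3.9
obs = [x + random.gauss(0, 1e-3) for x in logistic_series(true_r)]

def one_step_ssr(r, obs):
    """Sum of squared one-step prediction errors. Long-trajectory comparison
    is useless under chaos, but one-step errors stay well-behaved."""
    return sum((obs[k + 1] - r * obs[k] * (1 - obs[k])) ** 2
               for k in range(len(obs) - 1))

# Coarse grid search over r (a minimal sketch, not the report's algorithm).
grid = [3.5 + 0.001 * i for i in range(500)]
r_hat = min(grid, key=lambda r: one_step_ssr(r, obs))
```

Even with noisy observations of a chaotic orbit, the one-step objective is a smooth quadratic in r, so the parameter is recovered to roughly the grid resolution.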

Relevance: 80.00%

Abstract:

The success of an organization is, in most cases, not shown only through its profits. Today the market value of a company often exceeds what its financial figures alone would indicate, and intellectual capital accounts for a major share of that value. Managing employees with an emphasis on intellectual capital and talent is an urgent task facing human resource managers. The definition of intellectual capital and talent leads us, first, to a high IQ (Intelligence Quotient) and good school and/or university results; but the intellectual capital and talent of an employee must also be linked to ability, high performance and good results. How to manage, attract and keep such employees in organizations is itself something that requires talent. Basic employee skills are no longer sufficient for competitive companies: higher levels of skill are now required, because a growing number of activities involve "knowledge work". Most companies in the world face a great challenge in the coming years: the scarcity of talent. The most competitive companies will be those with the most talented employees. In terms of originality, this paper aims to open a discussion of the relationship between talent attraction, talent retention and innovation as drivers of business competitiveness. The research is based on the categorization methodology defined by Yin (2003), as a single case study carried out in a company specialized in high-precision components. The findings presented here show a strong link between talent attraction, talent retention and innovation.

Relevance: 80.00%

Abstract:

Immature and mature calcretes from an alluvial terrace sequence in the Sorbas basin, southeast Spain, were dated by the U-series isochron technique. The immature horizons consistently produced statistically reliable ages of high precision. The mature horizons typically produced statistically unreliable ages but, because of linear trends in the dataset and low errors associated with each data point, it was still possible to place a best-fit isochron through the dataset to produce an age with low associated uncertainties. It is, however, only possible to prove that these statistically unreliable ages have geochronological significance if multiple isochron ages are produced for a single site, and if these multiple ages are stratigraphically consistent. The geochronological significance of such ages can be further proven if at least one of the multiple ages is statistically reliable. By using this technique to date calcretes that have formed during terrace aggradation and at the terrace surface after terrace abandonment it is possible not only to date the timing of terrace aggradation but also to constrain the age at which the river switched from aggradation to incision. This approach, therefore, constrains the timing of changes in fluvial processes more reliably than any currently used geochronological procedure and is appropriate for dating terrace sequences in dryland regions worldwide, wherever calcrete horizons are present. (c) 2005 University of Washington. All rights reserved.

Relevance: 80.00%

Abstract:

Data for water vapor adsorption and evaporation are presented for a bare soil (sandy loam, clay content 15%) in a southern Spanish olive grove. Water losses and gains were measured using eight high-precision minilysimeters, placed around an olive tree, which had been irrigated until the soil reached field capacity (~0.22 m³ m⁻³). They were subsequently left to dry for 10 days. A pair of lysimeters was situated at each of the main points of the compass (N, E, S, W), at a distance of 1 m (the inner lysimeter set; ILS) and 2 m (the outer lysimeter set; OLS), respectively, from the tree trunk. Distinct periods of moisture loss (evaporation) and moisture gain (vapor adsorption) could be distinguished for each day. Vapor adsorption often started just after noon and generally lasted until the (early) evening. Values of up to 0.7 mm of adsorbed water per day were measured. Adsorption was generally largest for the OLS (up to 100% more on a daily basis), and increased during the dry-down. This was mainly the result of lower OLS surface soil moisture contents (period-average absolute difference ~0.005 m³ m⁻³), as illustrated using various analyses employing a set of micrometeorological equations describing the exchange of water vapor between bare soil and the atmosphere. These analyses also showed that the amount of water vapor adsorbed by soils is very sensitive to changes in atmospheric forcing and surface variables. The use of empirical equations to estimate vapor adsorption is therefore not recommended.

Relevance: 80.00%

Abstract:

As the optical design of spectrometer and radiometer instruments evolves with advances in detector sensitivity, the use of focal-plane detector arrays, and innovations in adaptive optics for large high-altitude telescopes, mid-infrared astronomy and remote sensing have become areas of progressive research in recent years. This research has driven a number of developments in infrared coating performance, particularly by placing increased demands on the spectral imaging requirements of filters to precisely isolate radiation between discrete wavebands and improve photometric accuracy. The spectral design and construction of multilayer filters to accommodate these developments has consequently been a challenging area of thin-film research, aiming to achieve high spectral positioning accuracy, environmental durability and aging stability at cryogenic temperatures, whilst maximizing the far-infrared performance. In this paper we examine the design and fabrication of interference filters for instruments that utilize the mid-infrared N-band (6-15 µm) and Q-band (16-28 µm) atmospheric windows, together with a rationale for the selection of materials, the deposition process, spectral measurements and an assessment of environmental durability performance.

Relevance: 80.00%

Abstract:

We propose a new satellite mission to deliver high quality measurements of upper air water vapour. The concept centres around a LiDAR in limb sounding by occultation geometry, designed to operate as a very long path system for differential absorption measurements. We present a preliminary performance analysis with a system sized to send 75 mJ pulses at 25 Hz at four wavelengths close to 935 nm, to up to 5 microsatellites in a counter-rotating orbit, carrying retroreflectors characterized by a reflected beam divergence of roughly twice the emitted laser beam divergence of 15 µrad. This provides water vapour profiles with a vertical sampling of 110 m; preliminary calculations suggest that the system could detect concentrations of less than 5 ppm. A secondary payload of a fairly conventional medium resolution multispectral radiometer allows wide-swath cloud and aerosol imaging. The total weight and power of the system are estimated at 3 tons and 2,700 W respectively. This novel concept presents significant challenges, including the performance of the lasers in space, the tracking between the main spacecraft and the retroreflectors, the refractive effects of turbulence, and the design of the telescopes to achieve a high signal-to-noise ratio for the high precision measurements. The mission concept was conceived at the Alpbach Summer School 2010.
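The retrieval behind such a differential-absorption measurement follows the standard two-wavelength DIAL relation; the cross sections, path length and concentration below are order-of-magnitude placeholders, not mission specifications:

```python
import math

def dial_number_density(p_on, p_off, sigma_on, sigma_off, path_m):
    """Mean absorber number density (molecules / m^3) along the path from
    on-line and off-line received powers, via the standard DIAL relation
        N = ln(P_off / P_on) / (2 * (sigma_on - sigma_off) * L).
    The factor 2 assumes a two-way (retroreflected) path."""
    return math.log(p_off / p_on) / (2 * (sigma_on - sigma_off) * path_m)

# Illustrative values: 100 km one-way limb path; water-vapour absorption
# cross sections near 935 nm are orders of magnitude only, not validated
# spectroscopy.
sigma_on, sigma_off = 5e-26, 1e-28        # m^2 per molecule
L = 100e3                                  # one-way path length, m
N_true = 1e21                              # molecules / m^3

# Forward model: Beer-Lambert attenuation over the round trip.
tau = 2 * (sigma_on - sigma_off) * L * N_true
p_on, p_off = math.exp(-tau), 1.0

N_est = dial_number_density(p_on, p_off, sigma_on, sigma_off, L)  # recovers N_true
```

Using four wavelengths, as proposed, allows the same ratio to be formed at two different absorption strengths, extending the usable dynamic range over the large humidity gradient between the troposphere and stratosphere.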

Relevance: 80.00%

Abstract:

1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are or may become eutrophic. Direct determination of nutrient content on a wide scale is labour intensive, but the main sources of N and P are well known. This paper describes and tests an export coefficient model for prediction of total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of future changes in land use and to hindcast past water quality, establishing comparative or baseline states for the monitoring of change. 2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments varying from uplands through rolling, fertile lowlands to the flat topography of East Anglia. 3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region containing the catchment, with the assumption that all catchments within that region are similar. Both models gave similar results, with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body. 4. Comparison between land use, stock headage, population numbers and nutrient export for the ten catchments in the pre-war year of 1931, and for 1970 and 1988, shows that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep, and a marked movement of humans to these rural catchments. 5. All of these trends have increased the flows of nutrients, with more than a doubling of both total N and total P loads during the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N, and people of P. Ratios of N to P were high in 1931 and remain little changed, so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
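The export coefficient model described above reduces to a weighted sum of source terms divided by river discharge. A sketch with wholly hypothetical coefficients and catchment data (not the paper's calibrated values):

```python
def nutrient_load(land_use_ha, stock_head, population,
                  land_coeffs, stock_coeffs, person_coeff):
    """Total annual nutrient export (kg/yr) as the sum of each source's
    size (ha, head, or persons) times its export coefficient."""
    load = sum(land_use_ha[u] * land_coeffs[u] for u in land_use_ha)
    load += sum(stock_head[s] * stock_coeffs[s] for s in stock_head)
    load += population * person_coeff
    return load

# Hypothetical catchment; P coefficients in kg/ha/yr, kg/head/yr, kg/person/yr.
land = {"arable": 2000.0, "grass": 3000.0, "rough": 1000.0}
stock = {"cattle": 5000, "sheep": 20000}
p_load = nutrient_load(land, stock, 4000,
                       {"arable": 0.3, "grass": 0.2, "rough": 0.02},
                       {"cattle": 0.35, "sheep": 0.06},
                       0.4)                              # kg P / yr

# Predicted concentration = load / annual river discharge.
discharge_m3 = 60e6                                      # m^3 / yr
concentration_mg_l = p_load * 1e6 / (discharge_m3 * 1e3)  # mg / L
```

Hindcasting then amounts to re-running the same sum with historical land use, headage and population figures, exactly as done for 1931 and 1970 in the paper.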

Relevance: 80.00%

Abstract:

The Sea and Land Surface Temperature Radiometer (SLSTR) is a nine-channel visible and infrared high-precision radiometer designed to provide climate data on global sea and land surface temperatures. The SLSTR payload is destined to fly on the Ocean and Medium-Resolution Land Mission of the ESA/EU Global Monitoring for Environment and Security (GMES) Programme, Sentinel-3, to measure sea and land temperature and topography for near-real-time environmental and atmospheric climate monitoring of the Earth. In this paper we describe the optical layout of the infrared optics in the instrument, the spectral thin-film multilayer design, and the system channel throughput analysis for the combined interference filter and dichroic beamsplitter coatings that discriminate wavelengths at 3.74, 10.85 and 12.0 μm. The rationale for the selection of thin-film materials and the deposition technique, and the environmental testing, including humidity, thermal cycling and ionizing radiation tests, are also described.

Relevance: 80.00%

Abstract:

We present a new speleothem record of atmospheric Δ14C between 28 and 44 ka that offers considerable promise for resolving some of the uncertainty associated with existing radiocarbon calibration curves for this time period. The record is based on a comprehensive suite of AMS 14C ages, using new low-blank protocols, and U–Th ages using high-precision MC-ICPMS procedures. Atmospheric Δ14C was calculated by correcting 14C ages with a constant dead carbon fraction (DCF) of 22.7 ± 5.9%, based on a comparison of stalagmite 14C ages with the IntCal04 (Reimer et al., 2004) calibration curve between 15 and 11 ka. The new Δ14C speleothem record shows similar structure and amplitude to that derived from Cariaco Basin foraminifera (Hughen et al., 2004, 2006), and the match is further improved if the latter is tied to the most recent Greenland ice core chronology (Svensson et al., 2008). These data are, however, in conflict with a previously published 14C data set for a stalagmite record from the Bahamas, GB-89-24-1 (Beck et al., 2001), which likely suffered from 14C analytical blank subtraction issues in the older part of the record. The new Bahamas speleothem Δ14C data do not show the extreme shifts between 44 and 40 ka reported in the previous study (Beck et al., 2001). Causes of the observed structure in the derived atmospheric Δ14C variation based on the new speleothem data are investigated with a suite of simulations using an earth system model of intermediate complexity. Data–model comparison indicates that the major fluctuations in atmospheric Δ14C during marine isotope stage 3 are primarily a function of changes in geomagnetic field intensity, although ocean–atmosphere system reorganisation also played a supporting role.
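The dead-carbon-fraction correction and the conversion to Δ14C used in records of this kind can be written out explicitly, using the conventional Libby (8033 yr) and true (8267 yr) 14C mean-lives; the sample numbers below are illustrative, not values from this study:

```python
import math

def dcf_corrected_age(measured_14c_age, dcf):
    """Correct a speleothem 14C age for its dead carbon fraction.
    Dead carbon dilutes the initial 14C activity by (1 - dcf), making the
    measured age too old by -8033 * ln(1 - dcf) years."""
    return measured_14c_age + 8033 * math.log(1 - dcf)

def delta_14c(corrected_14c_age, cal_age):
    """Conventional atmospheric Delta14C (per mil) from a corrected 14C age
    and an independent calendar age (here, the U-Th age)."""
    return 1000 * (math.exp(-corrected_14c_age / 8033) *
                   math.exp(cal_age / 8267) - 1)

# Illustrative sample: measured 14C age 34,000 yr, DCF 22.7%,
# U-Th calendar age 38,000 yr.
age_corr = dcf_corrected_age(34000, 0.227)   # ~31,930 yr
d14c = delta_14c(age_corr, 38000)            # ~860 per mil
```

The ±5.9% DCF uncertainty propagates directly through the logarithm in the age correction, which is why a constant, well-calibrated DCF is central to the approach.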