818 results for Precision positioning
Abstract:
A key challenge of wide-area kinematic positioning is to overcome the effects of the varying hardware biases in the code signals of the BeiDou system. Based on three geometry-free/ionosphere-free combinations, the elevation-dependent code biases are modelled for all BeiDou satellites. Results from 30-day data sets for five baselines of 533 to 2545 km demonstrate that wide-lane (WL) integer-fixing success rates of 98% to 100% can be achieved within 25 min. For HDOP values of less than 2, the overall RMS statistics show that ionosphere-free WL single-epoch solutions achieve 24 to 50 cm in the horizontal direction. Smoothing over a 20 min moving window reduces the RMS values by a factor of about 2. Given the distance-independent nature of these solutions, the results show the potential for reliable, high-precision positioning services to be provided over a wide area based on a sparsely distributed ground network.
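For readers of the abstract above, the geometry-free, ionosphere-free combination most commonly used for WL ambiguity fixing is the Melbourne-Wübbena observable. The minimal sketch below is an illustration only, not the paper's implementation; the BeiDou B1I/B2I frequencies and the function interface are assumptions.

```python
# Illustrative sketch: Melbourne-Wuebbena geometry-free, ionosphere-free
# combination used to estimate the wide-lane (WL) ambiguity from
# dual-frequency code (P) and phase (L) observations.
# Frequencies assume BeiDou B1I/B2I signals.

C = 299_792_458.0                    # speed of light, m/s
F1, F2 = 1561.098e6, 1207.140e6      # BeiDou B1I / B2I carrier frequencies, Hz
LAMBDA_WL = C / (F1 - F2)            # wide-lane wavelength, ~0.847 m

def wl_float_ambiguity(L1_cyc, L2_cyc, P1_m, P2_m):
    """Return the float WL ambiguity (cycles) for one epoch.

    L1_cyc, L2_cyc : carrier-phase observations in cycles
    P1_m,  P2_m    : code observations in metres
    """
    phi_wl = L1_cyc - L2_cyc                          # WL phase, cycles
    p_nl = (F1 * P1_m + F2 * P2_m) / (F1 + F2)        # narrow-lane code, metres
    return phi_wl - p_nl / LAMBDA_WL                  # float WL ambiguity

# Averaging this observable over, e.g., 25 min of epochs and rounding to the
# nearest integer is the usual way the WL integer is fixed; elevation-dependent
# code biases of the kind modelled in the abstract would bias this average if
# left uncorrected.
```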
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which delivers centimetre-precision results if all the ambiguities in each epoch are correctly fixed to integers. Incorrectly fixed ambiguities, however, can cause positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is usually determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution the missed-detection rate is commonly known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory and is therefore theoretically rigorous. A table of ratio test criteria is computed from extensive data simulations, so that real-time users can determine the ratio test criterion by table look-up. This method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not yet been discussed. In this paper, a general ambiguity validation model is derived from hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulations. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, provided a proper stochastic model is used.
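As a concrete illustration of the acceptance decision discussed in the abstract above, the ratio test compares the weighted squared distances of the two best integer candidates. The sketch below is generic; the threshold value and the look-up table it alludes to are placeholders, not the criteria computed in the paper.

```python
import numpy as np

def ratio_test(a_float, Q_a_inv, z_best, z_second, mu):
    """Illustrative ratio-test acceptance check (not the paper's software).

    a_float  : float ambiguity vector (numpy array)
    Q_a_inv  : inverse of the ambiguity variance-covariance matrix
    z_best   : best integer candidate (e.g. from an ILS search)
    z_second : second-best integer candidate
    mu       : threshold; in the fixed failure rate approach this value is
               looked up from a pre-computed table for the current model
               strength and the tolerated failure rate (e.g. 0.1 %), rather
               than being set to an empirical constant such as 3.
    Returns True if the fixed solution is accepted.
    """
    def sq_norm(z):
        d = a_float - z
        return float(d @ Q_a_inv @ d)

    ratio = sq_norm(z_second) / sq_norm(z_best)   # >= 1 by construction
    return ratio >= mu
```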
Abstract:
Global Navigation Satellite Systems (GNSS)-based observation systems can provide high precision positioning and navigation solutions in real time, in the order of subcentimetre if we make use of carrier phase measurements in the differential mode and deal with all the bias and noise terms well. However, these carrier phase measurements are ambiguous due to unknown, integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in the safety of life or liability-critical applications, such as for vehicle safety positioning and aviation, not only is high accuracy required, but also the reliability requirement is important. This PhD research studies to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. Particularly, we study the case of multiple constellations that include initial to full operations of foreseeable Galileo, GLONASS and Compass and QZSS navigation systems from next few years to the end of the decade. Since real observation data is only available from GPS and GLONASS systems, the simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data from another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single and dual constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme considering the ambiguity validation requirement in the control of the search space size is proposed to improve the search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) method success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving the predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution called SARA is developed. In summary, the study demonstrats that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring significant benefits of multi-GNSS signals to real-time high precision and high reliability positioning services.
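The integer-bootstrapping approximation to the ambiguity success rate mentioned in this abstract has a well-known closed form. The sketch below evaluates it from an ambiguity variance-covariance matrix; it assumes numpy is available and is not the thesis' own code.

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrap_success_rate(Q_a):
    """Integer-bootstrapping success rate for an ambiguity vc-matrix Q_a.

    P = prod_i ( 2*Phi( 1 / (2*sigma_{i|I}) ) - 1 ),
    where sigma_{i|I} are the conditional standard deviations from the
    L D L^T factorisation of Q_a. The value depends on the bootstrapping
    order, and the sharpest approximation is obtained after decorrelating
    Q_a (e.g. with a LAMBDA-style Z-transformation); this sketch simply
    takes Q_a as given.
    """
    Q = np.asarray(Q_a, dtype=float)
    # Cholesky gives Q = G G^T; the diagonal of G holds the conditional
    # standard deviations sigma_{i|I}.
    G = np.linalg.cholesky(Q)
    p = 1.0
    for s in np.abs(np.diag(G)):
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p
```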
Abstract:
The reliability of carrier-phase ambiguity resolution (AR) posed as an integer least-squares (ILS) problem depends on the ambiguity success rate (ASR), which in practice can be well approximated by the success probability of integer bootstrapping. With the current GPS constellation, a sufficiently high ASR of the geometry-based model can only be achieved for a certain percentage of the time, so high reliability of AR cannot be assured with a single constellation. With a dual-constellation system (DCS), for example GPS and BeiDou, which provides more satellites in view, users can expect significant performance benefits such as higher AR reliability and high-precision positioning solutions. Simply using all satellites in view for AR and positioning is a straightforward solution, but does not necessarily lead to the hoped-for high reliability. This paper presents an alternative approach that, instead of using all the satellites, selects a subset of the visible satellites to achieve higher reliability of the AR solutions in a multi-GNSS environment. Traditionally, satellite selection algorithms are mostly based on the position dilution of precision (PDOP) in order to meet accuracy requirements. In this contribution, reliability criteria are introduced for GNSS satellite selection, and a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. The SARA algorithm allows receivers to select a subset of satellites that achieves a high ASR, e.g. above 0.99. Numerical results from simulated dual-constellation cases show that, with the SARA procedure, the percentage of ASR values in excess of 0.99 and the percentage of ratio-test values passing the threshold of 3 are both higher than when all satellites in view are used directly; in the dual-constellation case, the percentages of ASR values (>0.99) and ratio-test values (>3) could be as high as 98.0% and 98.5% respectively, compared with 18.1% and 25.0% without the satellite selection process. It is also worth noting that the implementation of SARA is simple and its computation time is low, so it can be applied in most real-time data processing applications.
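The abstract does not spell out the SARA procedure itself. Purely to illustrate the selection criterion it describes (grow the satellite subset until the ambiguity success rate exceeds a target such as 0.99), a greedy sketch could look like the following; asr_of is a hypothetical stand-in for the model-dependent success-rate computation (for instance the bootstrapping formula sketched after the previous abstract), and the whole routine is not claimed to be the published SARA algorithm.

```python
def select_satellites(sat_ids, asr_of, target_asr=0.99, min_size=5):
    """Greedy satellite-subset selection, illustrative only (not SARA itself).

    sat_ids : iterable of visible satellite identifiers
    asr_of  : callable mapping a satellite subset to its ambiguity success
              rate (e.g. the bootstrapped success rate of that subset's
              ambiguity vc-matrix); this interface is a hypothetical stand-in
              for the underlying GNSS model.
    Starts from min_size satellites and keeps adding the satellite that
    increases the ASR most, stopping once the target (e.g. 0.99) is reached.
    """
    sats = list(sat_ids)
    chosen, remaining = sats[:min_size], sats[min_size:]
    while remaining and asr_of(chosen) < target_asr:
        best = max(remaining, key=lambda s: asr_of(chosen + [s]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```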
Abstract:
A position-sensing technique based on slit projection is proposed, and its sensing principle and application to precision positioning are described. A projection slit illuminated by a collimated laser beam is projected onto the measured object by a lens at a grazing-incidence angle; after reflection from the object surface and imaging by a second lens, an image of the projection slit is formed on a detection double slit. The detection double slit is magnified and imaged onto a dual-quadrant detector, and the light from the projected slit image passing through the two detection slits is received by the two quadrants of the detector, so the position of the measured object is obtained by measuring the light intensities on the two quadrants. Experiments verified the feasibility of this sensing technique, with a repeated position-measurement deviation of less than 32 nm (1σ).
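The abstract does not state the read-out relation. One common way to convert two quadrant intensities into a position signal is the normalised intensity difference scaled by a calibration factor from a known displacement; both the relation and the constant in this sketch are illustrative assumptions, not taken from the paper.

```python
def position_from_quadrants(i1, i2, k_cal):
    """Illustrative read-out for a dual-quadrant detector (not from the paper).

    i1, i2 : light intensities received by the two detector quadrants
    k_cal  : hypothetical calibration factor (metres per unit of normalised
             difference), obtained from a known displacement
    The normalised difference rejects common-mode laser power fluctuations.
    """
    return k_cal * (i1 - i2) / (i1 + i2)
```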
Abstract:
The displacement output of piezoelectric actuators is limited, so flexure mechanisms are usually used to amplify their displacement. The performance of commonly used flexure amplification mechanisms is analysed. A flexure eight-bar amplification mechanism is proposed and studied by finite element analysis and theoretical calculation. To increase the amplification ratio, a two-stage series arrangement is proposed. The overall mechanism has the advantages of a compact structure and high amplification efficiency.
Abstract:
The working principle and design method of a novel five-degree-of-freedom precision positioning stage are presented. The stage uses piezoelectric ceramics as driving elements and flexure guiding mechanisms to realise translation and rotation. The entire stage can be cut from a single block of metal by wire electrical discharge machining, allowing monolithic fabrication and a compact structure. Stiffness formulas for the guiding mechanisms and a design example are given.
Abstract:
The working principle and design method of a novel five-degree-of-freedom precision positioning stage are presented. The stage uses flexure guiding mechanisms to realise translation and rotation, piezoelectric ceramics as driving elements, external nanometre-resolution capacitive sensors as displacement-feedback elements, and a digital PID control method, so that nanometre-level positioning accuracy can be achieved. Stiffness formulas and design examples are given for several types of flexure guiding mechanism.
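As an illustration of the digital PID feedback loop mentioned in this abstract, a minimal discrete PID controller is sketched below; the gains, sample time and sensor/driver interfaces are hypothetical, not taken from the paper.

```python
class DigitalPID:
    """Minimal discrete PID controller (illustrative; gains are hypothetical)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control step: returns the drive command for the piezo amplifier."""
        error = setpoint - measurement          # e.g. nm, from the capacitive sensor
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Sketch of the feedback loop: read the capacitive sensor, compute the command,
# send it to the piezo driver (read_sensor/write_dac are hypothetical I/O stubs).
# pid = DigitalPID(kp=0.8, ki=50.0, kd=0.0, dt=1e-3)
# while True:
#     write_dac(pid.update(setpoint_nm, read_sensor()))
```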
Abstract:
The functional properties of ferroelectric materials, such as switchable polarisation, pyroelectricity, piezoelectricity, high nonlinear optical activity and nonlinear dielectric behaviour, are fundamental to their application in sensors, microactuators, infrared detectors, microwave phase filters and non-volatile memories. In recent years, driven by the industrial need to reduce the size of microelectronic devices while increasing volumetric efficiency, a major research effort has been devoted to developing ferroelectric structures at the micro- and nanometre scale. It is known that size reduction significantly affects the properties of ferroelectric materials. In this context, and considering that ab initio calculations have predicted that structures such as nanorods and nanodisks would exhibit a new type of ferroelectric order, and in the expectation of gaining the knowledge needed to develop a new generation of microelectronic devices, there is great interest in developing fabrication methods for one-dimensional (1D) ferroelectric nanostructures such as nanorods and nanotubes. The fabrication strategies for 1D nanostructures described in the literature so far clearly indicate the difficulties inherent in their preparation. There are two main synthesis routes: i) the top-down approach, in which a given material is reduced in size until a 1D structure is obtained; and ii) the bottom-up approach, in which atoms, ions and molecules are assembled to form a 1D material. The top-down approach generally involves etching techniques, such as electron-beam processing, which, despite allowing high positioning precision and size control, fall short in resolution, are time-consuming and easily introduce defects that degrade the physical properties of these materials. In the bottom-up methodology, the use of template molecules or structures has been the most explored route. 1D structures can also be prepared without templates; in that case, oriented aggregation is promoted by additives that control crystal growth along preferred directions. In this context, two low-cost bottom-up strategies were used in this work to prepare barium titanate (BaTiO3) nanoparticles with controlled morphology: 1) chemical synthesis (in solution and in the vapour phase) using titanate nanotubes (TiNTs) as templates and titanium precursors; and 2) chemical synthesis in solution in the presence of additives. The sodium titanate nanotubes were prepared by hydrothermal synthesis. Since many doubts remained about the structural nature and formation mechanism of the NTs, the initial part of the work was devoted to a systematic study of the synthesis parameters and to the characterisation of their structure and microstructure. It was shown that the NTs have the general formula A2Ti2O5 (A = H+ or Na+), rather than TiO2 (anatase) as argued by several authors in the literature, and that they can be prepared hydrothermally in a strongly alkaline medium using commercial TiO2 in the anatase or rutile form as the titanium source. The lower reactivity of rutile requires higher synthesis temperatures or longer reaction times. The tubular shape results from the hydrothermal treatment itself and not from the subsequent washing and neutralisation steps.
If the NTs are treated in water at 200 °C after the hydrothermal synthesis, they transform into nanorods. One of the main parts of this thesis was the investigation of the role of titanate NTs in the anisotropic growth of BaTiO3. The potential of the NTs to act as templates, in addition to precursors, was tested by reaction with barium hydroxide in solution synthesis and by reaction with an organic barium precursor in the vapour phase. Based on the kinetic studies carried out, as well as on the structural and morphological changes of the samples, it can be concluded that the formation of BaTiO3 from sodium titanate NTs proceeds by two mechanisms, depending on the reaction temperature and time. At low temperature and short reaction times, dendritic BaTiO3 particles with a rather irregular ("wild") surface and a pseudo-cubic structure are formed; these particles form by a topotactic reaction at the boundary of the sodium titanate nanotubes. At higher temperatures and/or longer reaction times, the reaction is controlled by a dissolution-precipitation mechanism, forming tetragonal BaTiO3 dendrites with a more regular ("seaweed") surface. Piezoresponse force microscopy showed that the "seaweed" dendrites have higher piezoelectric activity than the "wild" dendrites, confirming the role played by the structure and by the concentration of lattice defects in the ferroelectric coherence and order of nanostructures. Our results confirm that titanate NTs do not readily act as templates in the solution synthesis of BaTiO3, since the dissolution rate of the NTs under alkaline conditions is higher than the formation rate of BaTiO3. Assuming that the reaction rate of the NTs with the barium precursor is higher in the vapour phase, an organic barium precursor was deposited by chemical vapour deposition onto a film of sodium titanate NTs prepared by electrophoretic deposition. The stability of the NTs under the various reactor conditions was studied. When the NTs are treated at temperatures above 700 °C, they transform into anatase nanorods by an oriented-aggregation mechanism. When the barium precursor is deposited and then calcined at 700 °C in an oxidising O2 atmosphere, the surface of the NTs becomes covered with BaTiO3 nanocrystals regardless of the barium concentration. The role of titanate NTs in the anisotropic vapour-phase growth of BaTiO3 is thus described for the first time. Regarding template-free particle growth in the presence of additives, a systematic study was carried out using five additives of different natures, and the differences between them were systematised on the basis of the structural and morphological differences observed. It is established that additives can act as crystal-growth modifiers, either by changing the growth pattern or by changing the growth kinetics of the crystallographic faces. Among the additives tested, polyacrylic acid was found to adsorb on specific BaTiO3 faces, changing the growth kinetics and inducing oriented aggregation of the particles. Polyvinylpyrrolidone, sodium dodecyl sulfate and hydroxypropyl methylcellulose act more as growth inhibitors than as modifiers of the growth type.
D-fructose increases the activation energy of the nucleation step, so that no BaTiO3 forms under the conditions used with the other additives. This thesis clarifies the role of sodium titanate NTs as precursors and templates in the anisotropic growth of BaTiO3 in solution and in the vapour phase, and also addresses the morphological control of BaTiO3 through the use of additives. The proposed BaTiO3 preparation strategies are low-cost, reproducible and easy to carry out. The results contribute to a better understanding of the size-morphology-property relationship in nanoscale ferroelectric materials with a view to their potential application.
Abstract:
The general objective of this thesis was the seasonal monitoring (on a quarterly time scale) of coastal and estuarine areas along a stretch of the Northern Coast of Rio Grande do Norte, Brazil, an environmentally sensitive region with intense sediment erosion and oil-industry activities, in order to underpin projects for erosion containment and for mitigating the impacts of coastal dynamics. To achieve this general objective, the work was carried out systematically in three stages, which correspond to the specific objectives. The first stage was the implementation of the geodetic reference infrastructure needed for the geodetic surveys of the study area. This included the implementation of the RGLS (Northern Coast of the RN GPS Network), consisting of stations with precise geodetic coordinates and orthometric heights; the positioning of benchmarks and the evaluation of the available gravimetric geoid, for use in precise GPS altimetry; and the development of software for precise GPS altimetry. The second stage was the development and improvement of methodologies for the collection, processing, representation, integration and analysis of coastlines (CL) and digital elevation models (DEM) obtained by geodetic positioning techniques. This stage included the choice of equipment and positioning methods, according to the required precision and the infrastructure implanted, and the definition of the CL indicator and of the geodetic references best suited to precise coastal monitoring. The third stage was the seasonal geodetic monitoring of the study area itself. The epochs of the geodetic surveys were defined by analysing the pattern of sediment dynamics of the area; the surveys were performed so as to compute and locate the areas and volumes of erosion and accretion (areal and volumetric sedimentary balance) along the CL and over the beach and island surfaces throughout the year; and the correlations between the variations measured between surveys (in area and volume) and the action of the coastal dynamic agents were studied. The results allowed an integrated study of the spatial and temporal interrelationships between the causes and consequences of the intense coastal processes operating in the area, in particular the measurement of the variability of erosion, transport, sedimentary balance and sediment supply over the annual cycle of beach construction and destruction. The analysis of the results made it possible to identify the causes and consequences of the severe coastal erosion on exposed beaches and to analyse the recovery of beaches and the accretion occurring in tidal inlets and estuaries. From the perspective of the seasonal variations of the CL, human interventions for erosion containment were proposed with the aim of restoring the previous situation of the beaches undergoing erosion.
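As an illustration of the volumetric sedimentary balance described in the third stage above (not the thesis' own software), the erosion and accretion volumes between two surveys can be obtained by differencing co-registered gridded DEMs; equal cell size in both directions is assumed.

```python
import numpy as np

def sediment_balance(dem_t1, dem_t2, cell_size_m):
    """Erosion/accretion volumes between two co-registered gridded DEMs.

    dem_t1, dem_t2 : 2-D arrays of elevations (m) on the same grid
    cell_size_m    : grid spacing (m), assumed equal in x and y
    Returns (erosion_m3, accretion_m3, net_m3); illustrative only.
    """
    dz = np.asarray(dem_t2, float) - np.asarray(dem_t1, float)
    cell_area = cell_size_m ** 2
    accretion = float(np.nansum(np.where(dz > 0, dz, 0.0)) * cell_area)
    erosion = float(-np.nansum(np.where(dz < 0, dz, 0.0)) * cell_area)
    return erosion, accretion, accretion - erosion
```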
Abstract:
Point positioning from GPS data can provide precision ranging from 100 meters to a few millimeters at the 95% probability level. To reach the best level of accuracy, users need proper equipment and software, as well as access to the GPS products made available by the International GPS Service for Geodynamics (IGS). In this paper, the theory of point positioning using GPS is presented, together with the results of an experiment conducted using data from the Brazilian Active Control System. The results show repeatability better than 5 mm and 10 mm for the N and E baseline components respectively, and 6 mm + 4 ppb (parts per billion) for the vertical. Comparison with the SIRGAS campaign showed results at the same level of uncertainty as the stations used to tie the SIRGAS frame to ITRF94. Precise point positioning is therefore a powerful tool for applications requiring a high level of precision, such as geodynamics.
Abstract:
Among the positioning systems that make up GNSS (Global Navigation Satellite System), GPS is capable of providing low-, medium- and high-precision positioning data. However, GPS observables may be subject to many different types of error, and these systematic errors can degrade the accuracy of GPS positioning. They are mainly related to GPS satellite orbits, multipath and atmospheric effects. In order to mitigate these errors, a semiparametric model and the penalized least squares technique were employed in this study. This is comparable to changing the stochastic model by incorporating error functions, and yields results similar to those obtained by changing the functional model instead. Using this method, the ambiguities and the estimated station coordinates were shown to be more reliable and accurate than with a conventional least squares methodology.
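The abstract does not give the exact formulation. A generic penalized least-squares sketch of a semiparametric model, in which a smooth term absorbs the systematic errors while the coordinates (and ambiguities) remain parametric, is shown below; the second-difference penalty and the weight lam are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def semiparametric_lsq(y, A, lam):
    """Penalized least squares for the semiparametric model y = A x + g + e.

    g is a smooth, epoch-dependent term absorbing unmodelled systematic
    errors (multipath, residual atmospheric delay); its roughness is
    penalized through second differences with weight lam. This is a generic
    partial-spline estimator given as an illustration only.
    Assumes A has full column rank and contains no constant/linear-in-epoch
    columns (those would be absorbed by g and break identifiability).
    """
    y = np.asarray(y, float)
    A = np.asarray(A, float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)           # second-difference operator
    S = np.linalg.inv(np.eye(n) + lam * D.T @ D)  # smoother matrix for g
    R = np.eye(n) - S
    # Estimate the parametric part after "smoothing out" g, then recover g.
    x_hat = np.linalg.solve(A.T @ R @ A, A.T @ R @ y)
    g_hat = S @ (y - A @ x_hat)
    return x_hat, g_hat
```

In this sketch, a large lam forces g towards a smooth trend (most of the residual signal is treated as noise), while a small lam lets g absorb more of the systematic error before the coordinates are estimated.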
Abstract:
This paper presents a discussion of the potential use of high-tech garbage, including electronic waste (e-waste), as a source of mechanisms, sensors and actuators that can be adapted to improve the reality of microprocessor systems labs at low cost. By means of examples, it is shown that entire subsystems withdrawn from high-tech equipment can be easily integrated into an existing laboratory infrastructure. First, a precision positioning mechanism is presented, which was taken from a discarded commercial ink-jet printer and interfaced with a microprocessor board used in the laboratory classes. Secondly, a read/write head and its positioning mechanism were withdrawn from a retired CD/DVD drive and again interfaced with the microprocessor board. Students who have been using these new experiments strongly approve of their inclusion in the lab schedules. © 2011 IEEE.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Plasmons are the collective resonant excitation of conduction electrons. Light-excited plasmons in sub-wavelength nanoparticles are called particle plasmons and are promising candidates for future microsensors because of the strong dependence of the resonance on externally controllable parameters, such as the optical properties of the surrounding medium and the electric charge of the nanoparticles. The extremely high scattering efficiency of particle plasmons allows individual nanoparticles to be observed easily in a microscope. The requirement to collect a statistically relevant number of data points quickly, together with the growing importance of plasmonic (above all gold) nanoparticles for medical applications, has pushed for the development of automated microscopes that can measure in the spectral window of biological tissue (the biological window) from 650 to 900 nm, which until now had been only partially covered. In this work I present the Plasmoscope, which was designed precisely with these requirements in mind, by placing (1) an adjustable slit at the entrance aperture of the spectrometer, which coincides with the image plane of the microscope, and (2) a piezo scanning stage that makes it possible to raster the sample across this narrow slit. This implementation avoids optical elements that absorb in the near-infrared. With the Plasmoscope I study the plasmonic sensitivity of gold and silver nanorods, i.e. the plasmon resonance shift as a function of changes in the surrounding medium. The sensitivity is the measure of how well the nanoparticles can detect material changes in their environment, so it is immensely important to know which parameters influence it. I show here that silver nanorods have a higher sensitivity than gold nanorods within the biological window and, moreover, that the sensitivity increases with the thickness of the rods. I present a theoretical discussion of the sensitivity, identify the material parameters that influence it and derive the corresponding formulas. As a further step, I present experimental data supporting the theoretical finding that, for sensing schemes that also take the linewidth into account, gold nanorods with an aspect ratio of 3 to 4 give the best result. Reliable sensors must exhibit robust repeatability, which I investigate with gold and silver nanorods. The plasmon resonance wavelength depends on the following intrinsic material parameters: electron density, background polarisability and relaxation time. Based on my experimental results, I show that copper-gold alloy nanorods have a red-shifted resonance compared with similarly shaped gold nanorods, and how the linewidth varies with the stoichiometric composition of the alloyed nanoparticles. The dependence of the linewidth on material composition is also investigated using silver-coated and uncoated gold nanorods.
Semiconductor nanoparticles are candidates for efficient photovoltaic devices. The energy conversion requires charge separation, which is measured experimentally with the Plasmoscope by following the light-induced growth dynamics of gold spheres on semiconductor nanorods in a gold-ion solution via the scattered intensity.