882 results for ambiguity advantage
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which delivers centimetre-level positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can cause positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is usually determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution, this missed-detection rate is commonly known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory and is therefore theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in the table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not yet been discussed. In this paper, a general ambiguity validation model is derived from hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulations. The results show that, with a proper stochastic model, the fixed failure rate approach is a more reasonable ambiguity validation method.
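As a rough illustration of the fixed failure rate idea described above, the sketch below looks up a ratio test threshold from a criteria table and applies the test. The table values, function names and example inputs are hypothetical placeholders, not figures from the paper.

```python
# Hypothetical criteria table: for each failure-rate tolerance, thresholds are
# tabulated against model strength (indexed here by the ILS success rate).
# All numbers are placeholders for illustration only.
FF_RATIO_TABLE = {
    0.001: {0.90: 3.5, 0.95: 2.7, 0.99: 1.8},
    0.010: {0.90: 2.4, 0.95: 1.9, 0.99: 1.4},
}

def lookup_threshold(failure_rate, success_rate):
    """Return the tabulated threshold for the nearest success-rate entry."""
    row = FF_RATIO_TABLE[failure_rate]
    nearest = min(row, key=lambda s: abs(s - success_rate))
    return row[nearest]

def ratio_test(q_best, q_second, threshold):
    """Accept the best integer candidate only if the second-best quadratic
    form of the float ambiguities is at least `threshold` times the best."""
    return q_second / q_best >= threshold

# Example: a model with success rate ~0.92 and 0.1% failure-rate tolerance.
accepted = ratio_test(q_best=0.8, q_second=3.1,
                      threshold=lookup_threshold(0.001, 0.92))
```

In the fixed failure rate approach the table itself is generated from simulations of the underlying model, rather than being chosen empirically.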
Abstract:
Global Navigation Satellite System (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, at the sub-centimetre level, if carrier phase measurements are used in differential mode and all bias and noise terms are handled well. However, these carrier phase measurements are ambiguous by an unknown integer number of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. Moreover, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only high accuracy but also high reliability is required. This PhD research aims to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, from the initial to the full operation of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems, from the next few years to the end of the decade. Since real observation data are only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data for an additional constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its AR performance are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that takes the ambiguity validation requirement into account when controlling the search space size is proposed to improve the search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is a sharp approximation to the actual integer least-squares (ILS) success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique with a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and ambiguity validation is effective; the work therefore focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring significant benefits of multi-GNSS signals to real-time high-precision and high-reliability positioning services.
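The bootstrapped success rate referred to above has a well-known closed form in terms of the conditional standard deviations of the (preferably decorrelated) ambiguities. A minimal sketch is given below; the input values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bootstrapping_success_rate(cond_std):
    """Integer bootstrapping success rate
    P = prod_i (2 * Phi(1 / (2 * sigma_i)) - 1),
    where sigma_i are the conditional standard deviations of the
    sequentially conditioned (decorrelated) ambiguities."""
    sigma = np.asarray(cond_std, dtype=float)
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * sigma)) - 1.0))

# Hypothetical conditional standard deviations in cycles:
print(bootstrapping_success_rate([0.05, 0.08, 0.12]))  # close to 1 for a strong model
```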
Abstract:
The question of whether poser race affects the happy categorization advantage (the faster categorization of happy than of negative emotional expressions) has been answered inconsistently. Hugenberg (2005) found the happy categorization advantage only for own-race faces, whereas faster categorization of angry expressions was evident for other-race faces. Kubota and Ito (2007) found a happy categorization advantage for both own-race and other-race faces. These results have vastly different implications for understanding the influence of race cues on the processing of emotional expressions. The current study replicates the results of both prior studies and indicates that face type (computer-generated vs. photographic), presentation duration, and especially stimulus set size influence the happy categorization advantage as well as the moderating effect of poser race.
Abstract:
Piezoelectric composites comprising an active phase of ferroelectric ceramic and a polymer matrix have recently found numerous sensing applications. However, it remains a major challenge to further improve their electromechanical response for advanced applications such as precision control and monitoring systems. We investigated the incorporation of graphene platelets (GnPs) and multi-walled carbon nanotubes (MWNTs), each at various weight fractions, into PZT (lead zirconate titanate)/epoxy composites to produce three-phase nanocomposites. The nanocomposite films show markedly improved piezoelectric coefficients and electromechanical responses (by 50%), as well as an enhancement of ~200% in stiffness. The carbon nanomaterials strengthened the effect of the electric field on the PZT particles by appropriately raising the electrical conductivity of the epoxy. GnPs proved far more promising than MWNTs in improving the poling behavior and dynamic response. The superior dynamic sensitivity of the GnP-reinforced composite may be caused by the GnPs' high load transfer efficiency, arising from their two-dimensional geometry and good compatibility with the matrix. A reduced acoustic impedance mismatch resulting from the improved thermal conductance may also contribute to the higher sensitivity of the GnP-reinforced composite. This research points to the potential of employing GnPs to develop highly sensitive piezoelectric composites for sensing applications.
Abstract:
Purpose The role played by the innate immune system in determining survival from non-small-cell lung cancer (NSCLC) is unclear. The aim of this study was to investigate the prognostic significance of macrophage and mast-cell infiltration in NSCLC. Methods We used immunohistochemistry to identify tryptase+ mast cells and CD68+ macrophages in the tumor stroma and tumor islets in 175 patients with surgically resected NSCLC. Results Macrophages were detected in both the tumor stroma and islets in all patients. Mast cells were detected in the stroma and islets in 99.4% and 68.5% of patients, respectively. Using multivariate Cox proportional hazards analysis, increasing tumor islet macrophage density (P < .001) and tumor islet/stromal macrophage ratio (P < .001) emerged as favorable independent prognostic indicators. In contrast, increasing stromal macrophage density was an independent predictor of reduced survival (P = .001). The presence of tumor islet mast cells (P = .018) and increasing islet/stromal mast-cell ratio (P = .032) were also favorable independent prognostic indicators. Macrophage islet density showed the strongest effect: 5-year survival was 52.9% in patients with an islet macrophage density greater than the median versus 7.7% when less than the median (P < .0001). In the same groups, respectively, median survival was 2,244 versus 334 days (P < .0001). Patients with a high islet macrophage density but incomplete resection survived markedly longer than patients with a low islet macrophage density but complete resection. Conclusion The tumor islet CD68+ macrophage density is a powerful independent predictor of survival from surgically resected NSCLC. The biologic explanation for this and its implications for the use of adjunctive treatment require further study.
Abstract:
Drawing on the fields of philosophy, phenomenology, art history and theory as well as the candidate's own painting practice, this PhD explores the nature of ambiguity and semiosis in contemporary abstract painting. The thesis demonstrates how the aesthetic qualities of pause and rupture, transition and slippage work emergently to break established clichés, habits and intentions in the experiencing of abstract painting and artistic practice.
Abstract:
Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To use these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, and the narrowlane ambiguity integers are then determined with the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, leading to improved reliability of widelane AR; (3) for the narrowlane AR, partial AR is applied to a subset of ambiguities selected according to successively increasing elevation. For fixing a scalar ambiguity, a rounding method with controllable error probability is proposed. The established ionosphere-constrained model can be solved efficiently with a sequential Kalman filter; it can either be reduced to special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the US CORS network. The results show that the new widelane AR scheme achieves a 99.4% successful fixing rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved while rigorously controlling the probability of incorrectly fixed ambiguities.
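For the scalar (narrowlane) rounding step mentioned above, one simple way to bound the rounding error probability, assuming an unbiased normal float estimate, is sketched below. This only illustrates the idea and is not necessarily the paper's exact method; the function name and example numbers are hypothetical.

```python
from scipy.stats import norm

def round_if_safe(a_float, sigma, failure_tolerance):
    """Round a scalar float ambiguity to the nearest integer only if the
    probability of rounding to a wrong integer, computed for an unbiased
    normal estimate with standard deviation sigma, is below the tolerance;
    otherwise keep the float value."""
    p_correct = 2.0 * norm.cdf(0.5 / sigma) - 1.0
    if 1.0 - p_correct <= failure_tolerance:
        return round(a_float), True   # ambiguity fixed
    return a_float, False             # ambiguity kept float

# Example: a narrowlane ambiguity with 0.08-cycle precision and 0.1% tolerance.
print(round_if_safe(3.04, 0.08, 0.001))   # -> (3, True)
```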
Abstract:
The reliability of carrier phase ambiguity resolution (AR) in an integer least-squares (ILS) problem depends on the ambiguity success rate (ASR), which in practice can be well approximated by the success probability of integer bootstrapping solutions. With the current GPS constellation, a sufficiently high ASR of the geometry-based model is only achievable for a certain percentage of the time. As a result, high reliability of AR cannot be assured by a single constellation. With a dual-constellation system (DCS), for example GPS and Beidou, which provides more satellites in view, users can expect significant performance benefits such as improved AR reliability and high-precision positioning solutions. Simply using all the satellites in view for AR and positioning is a straightforward solution, but it does not necessarily lead to the high reliability that is hoped for. This paper presents an alternative approach that selects a subset of the visible satellites, instead of using all of them, to achieve higher reliability of the AR solutions in a multi-GNSS environment. Traditionally, satellite selection algorithms are mostly based on the position dilution of precision (PDOP) in order to meet accuracy requirements. In this contribution, reliability criteria are introduced for GNSS satellite selection, and a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. The SARA algorithm allows receivers to select a subset of satellites that achieves a high ASR, such as above 0.99. Numerical results from simulated dual-constellation cases show that with the SARA procedure, the percentages of ASR values in excess of 0.99 and of ratio-test values passing the threshold of 3 are both higher than when all satellites in view are used directly. In particular, in the dual-constellation case, the percentages of ASR values (>0.99) and ratio-test values (>3) can be as high as 98.0% and 98.5%, respectively, compared with 18.1% and 25.0% without the satellite selection process. It is also worth noting that the implementation of SARA is simple and its computation time is low, so it can be applied in most real-time data processing applications.
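As a schematic illustration of the selection idea (not the actual SARA algorithm), the sketch below searches for a small subset of visible satellites whose ambiguity success rate exceeds a target. Here success_rate_of is an assumed user-supplied function that builds the model for a given subset and returns its ASR.

```python
from itertools import combinations

def select_reliable_subset(sat_ids, success_rate_of, target_asr=0.99, min_sats=5):
    """Schematic subset selection: return the first subset (smallest size
    first) whose ambiguity success rate reaches the target, or fall back
    to all satellites in view if no subset qualifies."""
    for size in range(min_sats, len(sat_ids) + 1):
        scored = [(success_rate_of(subset), subset)
                  for subset in combinations(sat_ids, size)]
        best_asr, best_subset = max(scored)
        if best_asr >= target_asr:
            return list(best_subset), best_asr
    return list(sat_ids), success_rate_of(tuple(sat_ids))
```

The exhaustive search over subsets is only for illustration; the actual SARA procedure is reported in the abstract to be simple and fast enough for real-time use.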
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) the ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces; (2) the probability basis of these two popular tests is analysed for the first time: the difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability; (3) the limitations of the two tests are identified for the first time: both tests may underestimate the failure risk if the model is not strong enough or the float ambiguities fall in particular regions; (4) extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test for some models, while the difference test performs better for others. In particular, for the medium-baseline kinematic model, the difference test outperforms the ratio test; this superiority is independent of the number of frequencies, the observation noise and the satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate leads to a larger performance discrepancy.
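In terms of the two smallest quadratic forms of the float solution, q1 <= q2, the difference test compared in this abstract can be written as in the sketch below (a minimal illustration; the threshold is an input, e.g. from a fixed failure rate table, and the example numbers are hypothetical).

```python
def difference_test(q1, q2, c_diff):
    """Difference test on the two smallest quadratic forms q1 <= q2 of the
    float ambiguities: accept the fixed solution if q2 - q1 >= c_diff.
    (The ratio test instead accepts when q2 / q1 exceeds its threshold,
    which is why, as stated in the abstract, its acceptance region is an
    overlap of ellipsoids while this one is an overlap of half-spaces.)"""
    return q2 - q1 >= c_diff

# Example: with q1 = 1.2, q2 = 9.7 and threshold 6.0, the fix is accepted.
print(difference_test(1.2, 9.7, 6.0))   # -> True
```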
Abstract:
A rapidly changing business environment and legacy IT problems have resulted in many organisations implementing standard package solutions. This 'common systems' approach establishes a common IT and business process infrastructure within organisations, and its increasing dominance raises several important strategic issues: to what extent do common systems impose common business processes and management systems on competing firms, and what is the source of competitive advantage if the majority of firms employ almost identical information systems and business processes? A theoretical framework based on research into legacy systems and earlier IT strategy literature is used to analyse three case studies in the manufacturing, chemical and IT industries. It is shown that the organisations treat common systems as the core of their ability to manage business transactions. To achieve competitive advantage, they are clothing these common systems with information systems designed to capture information about competitors, customers and suppliers, and to provide a basis for sharing knowledge within the organisation and ultimately with economic partners. The importance of these approaches to other organisations and industries is analysed, and an attempt is made to outline the strategic options open to firms beyond the implementation of common business systems.
Abstract:
While past knowledge-based approaches to service innovation have emphasized the role of knowledge integration in the delivery of customer-focused solutions, these approaches do not adequately address the complexities inherent in knowledge acquisition and integration in project-oriented firms. Adopting a dynamic capability framework and building on knowledge-based approaches to innovation, the current study examines how the interplay of learning capabilities and knowledge integration capability impacts service innovation and sustained competitive advantage. This two-stage, multi-sample study finds that entrepreneurial project-oriented service firms, in their quest for competitive advantage through greater innovation, invest in knowledge acquisition and integration capabilities. Implications for theory and practice are discussed and directions for future research are provided.
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the determination of their thresholds is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key question in threshold determination is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method named the threshold function method is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modelling procedure and an approximation procedure. The modelling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modelling and approximation errors are analysed with simulated data so as to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modelling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
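A minimal sketch of the kind of threshold function described above is given below. The rational form and its coefficients are placeholders for illustration only; the paper fits the actual model, and in practice the input success rate would be the integer bootstrapping approximation of the ILS success rate.

```python
def threshold_function(p_success, a=2.0, b=-1.5, c=0.2):
    """Rational-function model mapping the ambiguity success rate to an
    FF-difference-test threshold:
        T(p) = (a + b * p) / (1 + c * p).
    The functional form and the coefficients here are placeholders; the
    paper fits the model for a given failure-rate tolerance."""
    return (a + b * p_success) / (1.0 + c * p_success)

# Example: a model with a bootstrapped success rate of 0.95.
print(threshold_function(0.95))
```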
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of the phase measurements before they are used in the positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but it requires a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate for a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function, is proposed for the FF-difference test. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that, with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
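Putting the pieces of this abstract together, a self-contained sketch of the resulting validation flow might look as follows. The rational threshold model and all numeric coefficients are again placeholders rather than the paper's fitted values.

```python
import numpy as np
from scipy.stats import norm

def ff_difference_validate(q1, q2, cond_std, a=2.0, b=-1.5, c=0.2):
    """Sketch of the workflow: integer bootstrapping success rate ->
    threshold function -> FF-difference test on the two smallest
    quadratic forms q1 <= q2 of the float ambiguities."""
    sigma = np.asarray(cond_std, dtype=float)
    p = float(np.prod(2.0 * norm.cdf(0.5 / sigma) - 1.0))  # IB success rate
    threshold = (a + b * p) / (1.0 + c * p)                 # placeholder model
    return (q2 - q1) >= threshold
```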
Abstract:
Smaller firms are often viewed as resistant to regulation because of the cost burden. However, evidence indicates that, for some, compliance is beneficial under certain conditions. Drawing on data on the attitudes and responses of smaller-firm owner-managers to changes in Australia's harmonising work health and safety context, we report on smaller firms' responses to these changes. Despite uncertainty due to incomplete harmonisation, many owner-managers viewed safety compliance as important and necessary for doing business. Those with negative views still linked positive safety performance to business outcomes. We categorise smaller firms' responses and find that, in this sample, most are Positive Responders. We suggest ways forward for policy-makers to support smaller firms in complying with occupational health and safety regulation.
Abstract:
Australia currently has a small generic and biosimilar industry, despite having a good track record in biomedical research and a sound reputation for producing high-quality but small-volume biological pharmaceuticals. In recent times, Australia has made incremental changes to its regulation of biosimilars: in patent registration, in the use of commercial confidential information, and in remuneration. These improvements, together with Australia's geographical proximity and strong trade relationship with the Asian biocluster, have positioned Australia to take advantage of potential public cost savings from the increased use of biosimilars.