233 results for Acceptance test
at Queensland University of Technology - ePrints Archive
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, their threshold determination is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modelling procedure and an approximation procedure. The modelling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modelling error and approximation error are analysed with simulated data to avoid nuisance biases and unrealistic stochastic model impacts. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modelling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
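As a rough illustrative sketch of the two procedures described above (not the paper's implementation): the IB success rate has a standard closed form in terms of the conditional standard deviations of the decorrelated ambiguities, and a rational threshold function can then be evaluated from it. The coefficients `a`, `b`, `c` below are hypothetical placeholders, since the fitted rational function model is not given in the abstract.

```python
import math

def ib_success_rate(cond_stds):
    """Integer bootstrapping (IB) success rate,
        P_IB = prod_i ( 2 * Phi(1 / (2 * sigma_i)) - 1 ),
    where sigma_i are the conditional standard deviations of the
    (decorrelated) ambiguities and Phi is the standard normal CDF."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def difference_test_threshold(p_success, a=0.1, b=1.0, c=2.0):
    """Hypothetical rational function mapping the (IB-approximated)
    success rate to an FF-difference test threshold; a, b, c are
    placeholder coefficients, not the paper's fitted values."""
    return (a + b * (1.0 - p_success)) / (c * p_success)
```

A stronger model (smaller conditional standard deviations) yields a higher success rate and hence a smaller acceptance threshold, which is the qualitative behaviour the threshold function is meant to capture.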
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in positioning computation. Most existing investigations of ambiguity validation focus on the test statistic. How to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but it employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function, is proposed for the FF-difference test. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time. The difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are also identified for the first time. The two tests may under-evaluate the failure risk if the model is not strong enough or the float ambiguities fall in particular regions. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate leads to a larger performance discrepancy.
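Both tests compare the weighted squared norms R(a) = (a_float − a)ᵀ Q⁻¹ (a_float − a) of the best and second-best integer candidates; the ratio test examines their quotient and the difference test their difference. A minimal sketch, with illustrative thresholds (the actual FF thresholds are model-dependent, as the abstract explains):

```python
def squared_norm(a_float, a_int, q_inv):
    """Weighted squared distance (a_float - a_int)^T Q^-1 (a_float - a_int),
    the integer least-squares residual norm used in ambiguity validation."""
    d = [f - i for f, i in zip(a_float, a_int)]
    return sum(d[r] * q_inv[r][c] * d[c]
               for r in range(len(d)) for c in range(len(d)))

def ratio_test(r_best, r_second, c=3.0):
    """Accept the best candidate if the second-best is at least c times
    worse (illustrative threshold c)."""
    return r_second / r_best >= c

def difference_test(r_best, r_second, d=10.0):
    """Accept if the difference of squared norms exceeds d (illustrative)."""
    return r_second - r_best >= d
```

The same pair of candidates can pass one test and fail the other, which is why the acceptance regions (ellipsoid overlaps versus half-space overlaps) differ.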
Abstract:
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. As design criteria transition from empirical to mechanistic-empirical, soil test methods and equipment that measure properties such as stiffness and modulus, and how they relate to Florida materials, are needed. Requirements for the selected equipment are that it be portable, cost effective, reliable, accurate, and repeatable. A second objective is that the selected equipment measure soil properties without the use of nuclear materials. The current device used to measure soil compaction is the nuclear density gauge (NDG). Equipment evaluated in this research included lightweight deflectometers (LWD) from different manufacturers, a dynamic cone penetrometer (DCP), a GeoGauge, a Clegg impact soil tester (CIST), a Briaud compaction device (BCD), and a seismic pavement analyzer (SPA). Evaluations were conducted over ranges of measured densities and moistures. Testing (Phases I and II) was conducted in a test box and test pits. Phase III testing was conducted on materials found on five construction projects located in the Jacksonville, Florida, area. Phase I analyses determined that the GeoGauge had the lowest overall coefficient of variation (COV). In ascending order of COV were the accelerometer-type LWD, the geophone-type LWD, the DCP, the BCD, and the SPA, which had the highest overall COV. As a result, the BCD and the SPA were excluded from Phase II testing. In Phase II, measurements obtained from the selected equipment were compared to the modulus values obtained by the static plate load test (PLT), the resilient modulus (MR) from laboratory testing, and the NDG measurements. To minimize soil and moisture content variability, a single-spot testing sequence was developed.
At each location, test results obtained from the portable equipment under evaluation were compared to the values from adjacent NDG, PLT, and laboratory MR measurements. Correlations were developed through statistical analysis. Target values were developed for various soils for verification on similar soils that were field tested in Phase III. The single-spot testing sequence was also employed in Phase III, with field testing performed on A-3 and A-2-4 embankments, limerock-stabilized subgrade, limerock base, and graded aggregate base found on Florida Department of Transportation construction projects. The Phase II and Phase III results provided potential trend information for future research, specifically data collection for in-depth statistical analysis of correlations with the laboratory MR for specific soil types under specific moisture conditions. With the collection of enough data, stronger relationships could be expected between measurements from the portable equipment and the MR values. Based on the statistical analyses and the experience gained from extensive use of the equipment, the combination of the DCP and the LWD was selected for in-place soil testing for compaction control acceptance. Test methods and developmental specifications were written for the DCP and the LWD. The developmental specifications include target values for the compaction control of embankment, subgrade, and base materials.
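The COV used above to rank device repeatability is simply the sample standard deviation of repeated readings divided by their mean; a minimal sketch:

```python
import statistics

def coefficient_of_variation(readings):
    """COV = sample standard deviation / mean of repeated readings;
    a lower COV indicates a more repeatable device, as in the
    Phase I ranking described above."""
    return statistics.stdev(readings) / statistics.mean(readings)
```

Because COV is dimensionless, it allows devices that report in different units (deflection, penetration rate, stiffness) to be compared on repeatability alone.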
Abstract:
This research investigates how to obtain accurate and reliable positioning results with global navigation satellite systems (GNSS). The work provides a theoretical framework for reliability control in GNSS carrier phase ambiguity resolution, the key technique for precise GNSS positioning at the centimetre level. The proposed approach includes identification and exclusion procedures for unreliable solutions and hypothesis tests, allowing the reliability of solutions to be controlled in terms of the mathematical models, integer estimation and ambiguity acceptance tests. Extensive experimental results with both simulated and observed data sets effectively demonstrate the reliability performance of the proposed theoretical framework and procedures.
Abstract:
This paper presents findings from an Australian study examining the behavioral correlates and stability of social status for preschool-aged children. The social status of an initial sample of 187 preschool children (94 boys and 93 girls; mean age 62.4 months, SD = 4.22) was determined through sociometric assessment. Children classified as rejected, neglected and popular (n = 70) were selected for observation. Children were observed for a total of 25 minutes over a three-month period while engaging in free play within their preschool centers. Results indicated that children classified as popular were more likely than rejected or neglected children to engage in cooperative play and ongoing connected conversation, and to display positive affect. Popular children were less likely than rejected or neglected children to engage in parallel play, onlooker behavior or alone-directed behavior. Six months after the initial sociometric classification, sociometric interviews were repeated to test for stability and change. Results indicated that preschool-aged children's social status classifications showed a moderate to high rate of stability for those children classified as popular, rejected and neglected.
Abstract:
Mobile phone banking (M-Banking) adoption around the world has been slow, and this has been perpetuated by the limited research that has been undertaken in the area. To address this gap, the study developed a model of antecedents to consumers' intention to use M-Banking, using attitudinal theory as a framework. To test the model, a quantitative web-based survey was undertaken with 314 respondents. The findings show that perceived usefulness, compatibility, perceived risk, perceived cost and attitude are primary determinants of consumer acceptance of M-Banking in an Australian context. The research contributes to an enhanced understanding of the multiple antecedent beliefs underlying customer attitudes and usage intentions that must be considered when introducing technology into the service encounter.
Abstract:
This paper aims to identify and test the key motivators and inhibitors of consumer acceptance of mobile phone banking (M-banking), particularly those that affect the consumer's attitude towards, and intention to use, this self-service banking technology. A web-based survey was undertaken in which respondents completed a questionnaire about their perceptions of M-banking's ease of use, usefulness, cost, risk, compatibility with their lifestyle, and their need for interaction with personnel. Correlation and hierarchical multiple regression analysis, with Sobel tests, were used to determine whether these factors influenced consumers' attitude and intention to use M-banking.
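The Sobel test mentioned above checks whether a mediated (indirect) effect a·b, such as the effect of a perception on intention running through attitude, is significantly different from zero. A minimal sketch of the standard formula; the variable names and example values are illustrative, not the study's estimates:

```python
import math

def sobel_test(a, s_a, b, s_b):
    """Sobel test of a mediated effect: a is the predictor-to-mediator
    path coefficient, b the mediator-to-outcome path coefficient,
    s_a and s_b their standard errors.
    Returns the z statistic and a two-sided p-value."""
    z = (a * b) / math.sqrt(b ** 2 * s_a ** 2 + a ** 2 * s_b ** 2)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # normal CDF
    return z, 2.0 * (1.0 - phi)
```

The path coefficients and their standard errors come from the two regression stages of the hierarchical analysis; the test then treats the product a·b as approximately normal.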
Abstract:
Process modeling is an emergent area of Information Systems research that is characterized by an abundance of conceptual work with little empirical research. To fill this gap, this paper reports on the development and validation of an instrument to measure user acceptance of process modeling grammars. We advance an extended model for a multi-stage measurement instrument development procedure, which incorporates feedback from both expert and user panels. We identify two main contributions: First, we provide a validated measurement instrument for the study of user acceptance of process modeling grammars, which can be used to assist in further empirical studies that investigate phenomena associated with the business process modeling domain. Second, in doing so, we describe in detail a procedural model for developing measurement instruments that ensures high levels of reliability and validity, which may assist fellow scholars in executing their empirical research.
Abstract:
Background: In Booth v Amaca Pty Ltd and Amaba Pty Ltd,1 the New South Wales Dust Diseases Tribunal awarded a retired motor mechanic $326 640 in damages for his malignant pleural mesothelioma, allegedly caused by exposure to asbestos through working with the brake linings manufactured by the defendants. The evidence before the Tribunal was that the plaintiff had been exposed to asbestos prior to working as a mechanic, from home renovations when he was a child and from loading a truck as a youth. However, as a mechanic he had been exposed to asbestos in brake linings on which he worked from 1953 to 1983. Curtis DCJ held at [172] that the asbestos from the brake linings 'materially contributed to [the plaintiff's] contraction of mesothelioma'. This decision was based upon acceptance that the effect of exposure to asbestos on the development of mesothelioma was cumulative, and rejection of the theory that a single fibre of asbestos can cause the disease...
Abstract:
While mobile phones have become ubiquitous in modern society, the use of mobile phones while driving is increasing at an alarming rate despite the associated crash risks. A significant safety concern is that driving while distracted by a mobile phone is more prevalent among young drivers, a less experienced driving cohort with elevated crash risk. The objective of this study was to examine the gap acceptance behavior of distracted young drivers at roundabouts. The CARRS-Q Advanced Driving Simulator was used to test participants on a simulated gap acceptance scenario at roundabouts. Conflicting traffic from the right approach of a four-legged roundabout was programmed as a series of vehicles with the gaps between them proportionately increased from two to six seconds. Thirty-two licensed young drivers drove the simulator under three phone conditions: baseline (no phone conversation), hands-free and handheld phone conversations. Results show that distracted drivers started responding to the gap acceptance scenario at a distance closer to the roundabout and approached the roundabout at slower speeds. They also decelerated at faster rates to reduce their speeds prior to gap acceptance compared to non-distracted drivers. Although accepted gap sizes were not significantly different across phone conditions, differences in the safety margins at various gap sizes, measured by the Post Encroachment Time (PET) between the driven vehicle and the conflicting vehicle, were statistically significant across phone conditions. PETs for distracted drivers were smaller across different gap sizes, suggesting a lower safety margin taken by distracted drivers compared to non-distracted drivers. The results aid in understanding how cognitive distraction resulting from mobile phone conversations while driving influences driving behavior during gap acceptance at roundabouts.
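The PET used above as the safety-margin measure is simply the elapsed time between the driven vehicle clearing the conflict area and the conflicting vehicle reaching it. A minimal sketch, with hypothetical timestamps (the study's simulator logs and conflict-point definitions are not given in the abstract):

```python
def post_encroachment_time(t_exit, t_arrive):
    """PET: time between the driven vehicle leaving the conflict area
    (t_exit) and the conflicting vehicle arriving at it (t_arrive),
    both in seconds. A smaller PET means a smaller safety margin."""
    return t_arrive - t_exit

def mean_pet(events):
    """Mean PET over (t_exit, t_arrive) pairs from gap-acceptance events."""
    pets = [post_encroachment_time(te, ta) for te, ta in events]
    return sum(pets) / len(pets)
```

Comparing mean PET per phone condition at each gap size is one way to operationalize the across-condition comparison the abstract describes.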