384 results for measurement accuracy


Relevance:

20.00%

Publisher:

Abstract:

As a novel sensing element, the fiber Bragg grating (FBG) has been widely researched owing to its advantages of immunity to electrical interference, suitability for distributed measurement, and more. Because an FBG is sensitive to both temperature and strain, highly sensitive FBG temperature sensors can be built to substitute for highly accurate electronic temperature sensors. Although highly sensitive FBG temperature sensors have been reported, few studies have examined their stability. We manufactured a highly sensitive FBG temperature sensor and placed it, together with an ordinary FBG temperature sensor and an electronic crystal temperature sensor, into a stainless steel container filled with water to monitor changes in room temperature. Comparing their readings over two weeks, we found that although the highly sensitive FBG sensor agreed with the electronic crystal sensor much more closely than the ordinary FBG sensor did, it exhibited some small drifts. Because the drifts appeared while the FBG was being further tensioned, they may result from slip at the FBG fixing points. This experience is valuable for applying FBGs to high-accuracy temperature measurement.
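As background, a minimal sketch of how a measured Bragg wavelength shift converts to a temperature change, assuming the standard first-order FBG response; the sensitivity coefficient below is an illustrative assumption, not a value from this study.

```python
# Convert a measured Bragg wavelength shift to a temperature change.
# The linear model delta_lambda = K_T * delta_T is the standard
# first-order FBG response; K_T here is an assumed, illustrative value.

REF_WAVELENGTH_NM = 1550.0   # Bragg wavelength at the reference temperature
K_T_NM_PER_C = 0.010         # assumed sensitivity (~10 pm/degC for a bare FBG)

def temperature_change(measured_wavelength_nm: float) -> float:
    """Return the temperature change (degC) implied by the wavelength shift."""
    delta_lambda = measured_wavelength_nm - REF_WAVELENGTH_NM
    return delta_lambda / K_T_NM_PER_C

if __name__ == "__main__":
    # A 25 pm shift corresponds to a 2.5 degC rise under these assumptions.
    print(f"{temperature_change(1550.025):.2f} degC")
```

High-sensitivity FBG temperature sensors such as the one described above effectively enlarge K_T (for example by strain amplification), which is why slip at the fixing points shows up directly as drift in the inferred temperature.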

Relevance:

20.00%

Publisher:

Abstract:

Objectives: The relationship between performance variability and accuracy in cricket fast bowlers of different skill levels was investigated under three task conditions. Bowlers of different skill levels were examined to determine whether they could adapt movement patterns to maintain accuracy on a bowling skills test.

Design: Eight national, 12 emerging and 12 junior pace bowlers completed an adapted version of the Cricket Australia bowling skills test, performing 30 trials of short (n = 10), good (n = 10) and full (n = 10) length deliveries.

Methods: Bowling accuracy was recorded by digitising ball position relative to the centre of a target. Performance measures were mean radial error (accuracy), variable error (consistency), centroid error (bias), bowling score and ball speed. Changes in radial error across the duration of the skills test were used to record accuracy adjustment in subsequent deliveries.

Results: Elite fast bowlers outperformed developing athletes in speed, accuracy and test scores. Bowlers who were less variable were also more accurate across all delivery lengths. National and emerging bowlers were able to adapt subsequent performance trials within the same bowling session for short length deliveries.

Conclusions: Accuracy and adaptive variability were key components of elite fast-bowling performance and improved with skill level. In this study, only national elite bowlers showed the requisite adaptive variability to bowl a range of lengths to different pitch locations.
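To make the three error measures concrete, here is a minimal sketch using one common formulation, computed from hypothetical digitised (x, y) landing positions relative to the target centre; the study's exact definitions may differ.

```python
import numpy as np

def error_measures(points: np.ndarray) -> dict:
    """Compute accuracy measures from (x, y) landing positions.

    `points` is an (n, 2) array of ball positions relative to the
    target centre at (0, 0). One common formulation:
      - mean radial error: mean distance from the target (accuracy)
      - centroid error: distance of the mean landing point from the
        target (bias)
      - variable error: RMS distance of landings from their own
        centroid (consistency)
    """
    radial = np.linalg.norm(points, axis=1)
    centroid = points.mean(axis=0)
    spread = np.linalg.norm(points - centroid, axis=1)
    return {
        "mean_radial_error": radial.mean(),
        "centroid_error": np.linalg.norm(centroid),
        "variable_error": np.sqrt((spread ** 2).mean()),
    }

# Example: ten simulated deliveries scattered around a biased centroid.
rng = np.random.default_rng(0)
landings = rng.normal(loc=[0.1, 0.2], scale=0.15, size=(10, 2))
print(error_measures(landings))
```

Separating bias (centroid error) from dispersion (variable error) is what lets a study like this distinguish bowlers who are consistently off-target from bowlers who are simply inconsistent.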

Relevance:

20.00%

Publisher:

Abstract:

The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is a vital input for dynamic queue management on metered on-ramps. Accurate and reliable queue information enables the on-ramp queue to be managed adaptively to the actual queue size, minimising the adverse impacts of queue flush while increasing the benefit of ramp metering. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to project the system state (queue size) from the flow-in and flow-out measurements. This projection is then updated through the measurement equation using time occupancies from mid-link and link-entrance loop detectors. The study also proposes a novel single-point correction method, which resets the estimated system state to eliminate counting errors that accumulate over time. In the performance evaluation, the proposed algorithm produced accurate and reliable estimates and consistently outperformed the benchmark Single Occupancy Kalman filter (SOKF) method, improving on SOKF by 62% and 63% on average in estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the algorithm's innovations is well justified by its improved performance in congested ramp conditions, where long queues may significantly compromise the benchmark algorithm's performance.
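As an illustration of the framework, here is a minimal scalar Kalman filter sketch under simplified assumptions (a linear occupancy-to-queue measurement and illustrative noise variances); it follows the general predict/update structure described above rather than the paper's exact formulation.

```python
# Minimal scalar Kalman filter for on-ramp queue estimation.
# State: queue size (vehicles). Prediction uses vehicle conservation;
# the update uses a queue estimate derived from detector occupancy.
# All parameters are illustrative assumptions.

def kalman_queue_step(q_est, p_var, flow_in, flow_out, occ_queue,
                      proc_var=2.0, meas_var=25.0):
    """One predict/update cycle.

    q_est     -- previous queue estimate (vehicles)
    p_var     -- previous estimate variance
    flow_in   -- vehicles entering the ramp during the interval
    flow_out  -- vehicles discharged at the meter during the interval
    occ_queue -- queue size inferred from loop-detector time occupancy
    """
    # Predict: conservation of vehicles on the ramp.
    q_pred = max(q_est + flow_in - flow_out, 0.0)
    p_pred = p_var + proc_var

    # Update: blend the prediction with the occupancy-based measurement.
    gain = p_pred / (p_pred + meas_var)
    q_new = q_pred + gain * (occ_queue - q_pred)
    p_new = (1.0 - gain) * p_pred
    return q_new, p_new

# Example: the queue grows while inflow exceeds the metered outflow.
q, p = 0.0, 10.0
for flow_in, flow_out, occ in [(6, 4, 3), (7, 4, 6), (5, 4, 7)]:
    q, p = kalman_queue_step(q, p, flow_in, flow_out, occ)
    print(f"queue ~ {q:.1f} vehicles (var {p:.1f})")
```

In this sketch, the single-point correction described above would amount to occasionally resetting q directly, for example to zero when the detectors indicate an empty ramp, to purge the counting error that accumulates in the conservation term.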

Relevance:

20.00%

Publisher:

Abstract:

All civil and private aircraft are required to comply with the airworthiness standards set by their national airworthiness authority and must remain in a condition of safe operation throughout their operational life. Aviation accident data show that over 20% of all fatal aviation accidents are due to airworthiness issues, specifically aircraft mechanical failures. Ultimately it is the responsibility of each registered operator to ensure that their aircraft remain in a condition of safe operation, through both effective management of airworthiness activities and effective programme governance of safety outcomes. Typically, the projects within these airworthiness management programmes focus on acquiring, modifying and maintaining the aircraft as a capability supporting the business. Programme governance provides the structure through which the goals and objectives of airworthiness programmes are set, along with the means of attaining them. While the principal causes of failure in many programmes can be traced to inadequate programme governance, many failures in large-scale projects have their root causes in organizational culture, and more specifically in the organizational processes related to decision-making. This paper examines the primary theme of project- and programme-based enterprises and introduces a model for measuring organizational culture in airworthiness management programmes, using measures drawn from 211 respondents in Australian airline programmes. The paper describes the theoretical perspectives applied in modifying an original model to focus it on measuring the organizational culture of airworthiness management programmes, identifies the most important factors needed to explain the relationships among the measures collected, and describes the nature of these factors. The paper concludes by identifying the model that best describes the organizational culture data collected from seven airworthiness management programmes.
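The factor-extraction step can be illustrated with a short sketch; the survey matrix, the number of factors, and the use of scikit-learn's FactorAnalysis are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey responses: 211 respondents x 10 culture measures.
rng = np.random.default_rng(7)
latent = rng.normal(size=(211, 3))             # three underlying factors
loadings = rng.normal(scale=0.8, size=(3, 10))
responses = latent @ loadings + rng.normal(scale=0.5, size=(211, 10))

# Extract the factors that explain the relationships among the measures.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# The loadings show which measures each factor draws on most heavily.
print(np.round(fa.components_, 2))
```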

Relevance:

20.00%

Publisher:

Abstract:

The International Classification of Functioning, Disability and Health (ICF) assumes a biopsychosocial basis for disability and provides a framework for understanding how environmental factors contribute to the experience of disability. To determine the utility of prevalent disability assessment instruments, the authors examined the extent to which a range of such instruments addressed the impact of environmental factors on the individual and whether the instruments designed for different disability groups focused differentially on the environment. Items from 20 widely used disability assessment instruments were linked to the five chapters of the ICF environment component using standardized classification rules. Nineteen of the 20 instruments reviewed measured the environment to varying degrees. It was determined that environmental factors from the Natural Environment and Attitudes chapters were not well accommodated by the majority of instruments. Instruments developed for people with intellectual disabilities had the greatest environmental coverage. Only one instrument provided a relatively comprehensive and economical account of environmental barriers. The authors conclude that ICF classification of environmental factors provides a valuable resource for evaluating the environmental content of existing disability-related instruments, and that it may also provide a useful framework for revising instruments in use and for developing future disability assessment instruments.
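As a simple illustration of the linking exercise, here is a sketch that tallies how many of an instrument's items map to each of the five ICF environment chapters; the item-to-chapter links shown are hypothetical.

```python
from collections import Counter

# The five chapters of the ICF Environmental Factors component.
ICF_CHAPTERS = [
    "Products and Technology",
    "Natural Environment",
    "Support and Relationships",
    "Attitudes",
    "Services, Systems and Policies",
]

# Hypothetical result of applying standardized linking rules to one
# instrument's items.
item_links = {
    "item_01": "Products and Technology",
    "item_02": "Support and Relationships",
    "item_03": "Support and Relationships",
    "item_04": "Services, Systems and Policies",
}

coverage = Counter(item_links.values())
for chapter in ICF_CHAPTERS:
    print(f"{chapter}: {coverage.get(chapter, 0)} item(s)")
```

A tally like this makes gaps visible at a glance; in the hypothetical mapping above, for example, the Natural Environment and Attitudes chapters receive no items, mirroring the kind of shortfall the review reports.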

Relevance:

20.00%

Publisher:

Abstract:

Neutron Compton scattering (NCS) measurements of the anisotropy of the momentum distribution and of the mean Laplacian of the interatomic potential, ⟨∇²V⟩, have been performed using electron-volt neutrons with wave-vector transfers between 24 Å⁻¹ and 98 Å⁻¹. The measured momentum distribution of the atoms displays significantly more anisotropy than a calculation using a model density of states. We have observed anisotropies in ⟨∇²V⟩ for the first time. The results suggest that the atomic potential is harmonic within the graphite planes, but anharmonic for vibrations perpendicular to the planes.
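For background, two standard NCS relations (stated here as context, not taken from the paper): within the impulse approximation the scattering is governed by the atomic momentum distribution, and the width of a Gaussian profile along the scattering direction yields the directional kinetic energy.

```latex
% Compton profile along the scattering direction \hat{q}
% (impulse approximation):
J(y) = \int n(\mathbf{p})\,
       \delta\!\left(y - \mathbf{p}\cdot\hat{\mathbf{q}}\right)\, d^{3}p
% For a Gaussian profile of width \sigma_{\hat{q}}, the kinetic energy
% of atomic motion along \hat{q} is
\left\langle E_{K,\hat{q}} \right\rangle
    = \frac{\hbar^{2}\sigma_{\hat{q}}^{2}}{2M}
% so direction-dependent profile widths signal an anisotropic momentum
% distribution, as reported above.
```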

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the accuracy of using the built-in camera of a smartphone together with free software as an economical way to quantify and analyse light exposure by producing luminance maps from High Dynamic Range (HDR) images. HDR images were captured with an Apple iPhone 4S to cover a wide range of luminances within an indoor and an outdoor scene. The HDR images were then processed with the Photosphere software (Ward, 2010) to produce luminance maps, and individual pixel values were compared with calibrated luminance meter readings. This comparison showed an average luminance error of ~8% between the HDR image pixel values and the luminance meter readings when the range of luminances in the image is limited to approximately 1,500 cd/m².
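A minimal sketch of the comparison reported above: given luminances sampled from an HDR-derived map and the corresponding calibrated meter readings, compute the average relative error. The sample values are hypothetical.

```python
# Compare HDR-derived pixel luminances against luminance meter readings.
# Values are hypothetical; the study's data are not reproduced here.

def average_relative_error(map_cd_m2, meter_cd_m2):
    """Mean absolute relative error (%) of map values vs. meter readings."""
    errors = [abs(m - ref) / ref for m, ref in zip(map_cd_m2, meter_cd_m2)]
    return 100.0 * sum(errors) / len(errors)

hdr_map_samples = [120.0, 480.0, 950.0, 1400.0]   # from the luminance map
meter_readings = [112.0, 455.0, 1010.0, 1350.0]   # calibrated meter

print(f"average error: "
      f"{average_relative_error(hdr_map_samples, meter_readings):.1f}%")
```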

Relevance:

20.00%

Publisher:

Abstract:

While academic interest in destination branding has been gathering momentum since the field emerged in the late 1990s, one important gap in the literature that has received relatively little attention to date is the measurement of destination brand performance. This paper sets out one method for assessing the performance of a destination brand over time. The intent is to present an approach that will appeal to marketing practitioners and that is also conceptually sound. The method is underpinned by Decision Set Theory and the concept of Consumer-Based Brand Equity (CBBE), while the key variables mirror the branding objectives used by many destination marketing organisations (DMOs). The approach is demonstrated by measuring brand performance for Australia in the New Zealand market. The findings are suggested to provide indicators of both i) the success of previous marketing communications and ii) future performance, which can be easily communicated to a DMO's stakeholders.

Relevance:

20.00%

Publisher:

Abstract:

A fundamental proposition is that the accuracy of a designer's tender price forecast is positively correlated with the amount of information available for the project. The paper describes an empirical study of the effect of the quantity of available information on practicing quantity surveyors' forecasting accuracy. The methodology involved the surveyors repeatedly revising tender price forecasts on receipt of chunks of project information. Each of twelve surveyors undertook two projects and selected information chunks from a total of sixteen information types. The analysis indicated marked differences in accuracy between project types and between experts and non-experts. The expert surveyors' forecasts were not significantly improved by information beyond basic building type and size, even after eliminating project type effects. The expert surveyors' forecasts based on knowledge of building type and size alone were, however, of similar accuracy to those of average practitioners pricing full bills of quantities.
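To illustrate the accuracy measure such a study turns on, here is a short sketch tracking the percentage error of successive forecast revisions against the eventual tender price; all figures are invented.

```python
# Track forecast accuracy as information chunks arrive.
# A common accuracy measure is the signed percentage error of the
# forecast against the accepted tender price. Figures are invented.

def percentage_error(forecast: float, tender: float) -> float:
    return 100.0 * (forecast - tender) / tender

tender_price = 2_400_000.0
revisions = [  # (information chunks received so far, forecast)
    (1, 2_050_000.0),   # building type and size only
    (4, 2_210_000.0),
    (9, 2_330_000.0),
    (16, 2_370_000.0),  # all sixteen information types
]

for n_chunks, forecast in revisions:
    err = percentage_error(forecast, tender_price)
    print(f"{n_chunks:>2} chunk(s): forecast error {err:+.1f}%")
```

The study's finding amounts to saying that, for experts, a curve like this flattens almost immediately: accuracy with one chunk (type and size) was already close to that of practitioners pricing full bills of quantities.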

Relevance:

20.00%

Publisher:

Abstract:

Emerging sciences, such as conceptual cost estimating, seem to have to pass through two phases. The first phase involves reducing the field of study to its basic ingredients: from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, into the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming.

All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument between empiricists and deductionists is that, just as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features.

This paper presents some of the facts we have been able to acquire regarding one part of this relationship: accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented; the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. While space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible, so as to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence we have located to date; knowledge of any work done but omitted here would be most welcome. The second part presents an analysis of some recently acquired data pertaining to this growing subject.

Relevance:

20.00%

Publisher:

Abstract:

Several methods of estimating the costs or price of construction projects are now available for use in the construction industry. Owing to the conservative approach of estimators and quantity surveyors, and the fact that the industry is undergoing one of its deepest recessions this century, it is difficult to implement any changes to these processes. Several methods have been tried, tested and probably discarded forever, whereas other methods are still in their infancy. There is also a movement towards greater use of the computer, whichever method is adopted. An important consideration with any method of estimating is the accuracy with which costs can be calculated. Any improvement in this respect will be welcomed by all parties, because existing methods are poor when measured against this criterion. Estimating, particularly by contractors, has always carried some mystique, and many of the processes discussed both in the classroom and in practice are little more than fallacy when properly investigated. What makes an estimator or quantity surveyor good at forecasting the right price? To what extent does human behaviour influence, or have a part to play in, the process? These and other aspects of effective estimating are examined in more detail here.

Relevance:

20.00%

Publisher:

Abstract:

There are no population studies of the prevalence or incidence of child maltreatment in Australia. Child protection data give some understanding but are restricted by system capacity and definitional issues across jurisdictions. Child protection data currently suggest that the number of reports is increasing yearly; the child protection system then becomes focused on investigating all reports, diluting the resources available for those children most in need of intervention. A public health response across multiple agencies enables responses to child safety across the entire population. All families are targeted at the primary level; examples include ensuring all parents know the dangers of shaking a baby or teaching children to say no if a situation makes them uncomfortable. The secondary level of prevention targets families with a number of risk factors, for example subsidised child care so that children are not left unsupervised after school when both parents have to be at work, or home visiting for drug-addicted parents to ensure children are cared for. The tertiary response then becomes the responsibility of the child protection system and is reserved for those children for whom abuse and neglect are identified. This model requires that child safety be seen in a broader context than just the child protection system, and health professionals are increasingly identified as an important component of the public health framework.

If all injury is viewed as preventable and considered along a continuum from 'accidental' to 'inflicted', it becomes possible to conceptualise child maltreatment in an injury context. Parental intent may not be to cause harm to the child, but through lack of insight or concern about risk, the potential for injury is high. The mechanisms of unintentional and intentional injury overlap, and some suggest that by segregating child abuse (with the possible exception of sexual abuse) from unintentional injury, child abuse is excluded from the broader injury prevention initiative that is gaining momentum in the community.

This research uses a public health perspective, specifically that of injury prevention, to consider the problem of child abuse. The study employed a mixed-method design incorporating secondary data analysis, data linkage and structured interviews with different professional groups. Datasets from the Queensland Injury Surveillance Unit (QISU) and the Department of Child Safety (DCS) were evaluated. Coded injury data were grouped by intent: records with a code indicating the emergency department (ED) presentation was due to child abuse, records with a code indicating the injury was possibly due to abuse, and records whose intent code indicated the injury was unintentional and not due to abuse. Primary data collection from ED records was undertaken and information recoded to assess reliability and completeness. ED data (QISU) were linked to DCS data to examine concordance and data quality. Factors influencing the collection and collation of these data were identified through structured interviews and analysed using qualitative methods.

Secondary analysis of QISU data indicated that records lacking specific information on the injury event were more likely to carry an intent code indicating abuse than records with specific information on the event. Codes for abuse appeared in only 1.2% of the 84,765 records analysed; unintentional injury was the most commonly coded intent (95.3%). In the group with a definite abuse code assigned at triage, 83% linked to a DCS record, and cases whose documentation indicated police involvement were significantly more likely to link to a DCS record than those without such documentation. Of the records coded with an unintentional injury code, 22% linked to a DCS record; cases assigned an urgent triage category were more likely to link than those assigned a resuscitation category, and children who presented to regional or remote hospitals were more likely to link to a DCS record than those presenting to urban hospitals. Twenty-nine per cent of cases with a code indicating possible abuse linked to a DCS record. Among cases whose documentation indicated police involvement, a code for unspecified activity (compared with a code indicating involvement in a sporting activity) and age under 12 months (compared with the 13-17 year age group) were significantly associated with linkage to a DCS record. Only 13% of records contained documentation indicating that child abuse and neglect were considered in the diagnosis of the injury, despite almost half of the sample having a code of abuse or possible abuse.

Doctors and nurses were confident in their knowledge of the process for reporting child maltreatment but less confident about identifying child abuse and neglect and deciding what should be reported. Many were concerned about the implications of reporting, for the child and family and for themselves. A number were concerned about the implications of not reporting, mostly for the wellbeing of the child and, for a few, in terms of their legal obligations as mandatory reporters. The outcomes of this research will help improve knowledge of the barriers to effective surveillance of child abuse in emergency departments. This will, in turn, support better identification and reporting practices, more reliable official statistical collections, and the potential to flag high-risk cases to ensure that adequate departmental responses are initiated.

Relevance:

20.00%

Publisher:

Abstract:

Previous research employing indirect measures of arch structure, such as those derived from footprints, has indicated that obesity results in a "flatter" foot type. In the absence of radiographic measures, however, definitive conclusions regarding the osseous alignment of the foot cannot be made. We determined the effect of body mass index (BMI) on radiographic and footprint-based measures of arch structure. The research was a cross-sectional study in which radiographic and footprint-based measures of foot structure were made in 30 subjects (10 males, 20 females), in addition to standard anthropometric measures of height, weight and BMI. Multiple (univariate) regression analysis demonstrated that both BMI (β = 0.39, t(26) = 2.12, p = 0.04) and radiographic arch alignment (β = 0.51, t(26) = 3.32, p < 0.01) were significant predictors of footprint-based measures of arch height after controlling for all variables in the model (R² = 0.59, F(3,26) = 12.3, p < 0.01). In contrast, radiographic arch alignment was not significantly associated with BMI (β = −0.03, t(26) = −0.13, p = 0.89) when Arch Index and age were held constant (R² = 0.52, F(3,26) = 9.3, p < 0.01). Adult obesity does not influence osseous alignment of the medial longitudinal arch but selectively distorts footprint-based measures of arch structure. Footprint-based measures of arch structure should be interpreted with caution when comparing groups of varying body composition.
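For readers wanting to reproduce this style of analysis, here is a minimal sketch of the two regression models using statsmodels; the column names and the generated data are hypothetical stand-ins for the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data for the study's measurements.
rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame({
    "bmi": rng.normal(26, 4, n),          # body mass index
    "radio_arch": rng.normal(20, 5, n),   # radiographic arch alignment
    "age": rng.normal(35, 10, n),
})
# Footprint arch measure influenced by both BMI and osseous alignment.
df["footprint_arch"] = (0.4 * df["bmi"] + 0.5 * df["radio_arch"]
                        + rng.normal(0, 3, n))

# Model 1: footprint-based arch height predicted by BMI and
# radiographic alignment, controlling for age.
m1 = smf.ols("footprint_arch ~ bmi + radio_arch + age", data=df).fit()

# Model 2: is radiographic alignment associated with BMI once the
# footprint measure (Arch Index stand-in) and age are held constant?
m2 = smf.ols("radio_arch ~ bmi + footprint_arch + age", data=df).fit()

print(m1.summary().tables[1])
print(m2.summary().tables[1])
```

The study's conclusion corresponds to BMI being significant in the first model but not the second: extra soft tissue changes the footprint without moving the bones.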