973 results for "Error in substance"


Relevance: 80.00%

Abstract:

Travel time estimation and prediction on motorways has long been a topic of research. Prediction modeling generally assumes that the estimation is perfect; however, no matter how good the prediction model is, errors in estimation can significantly degrade the accuracy and reliability of the prediction. Models have been proposed to estimate travel time from loop detector data. Generally, detectors are closely spaced (say 500 m) and travel time can be estimated accurately. However, detectors are not always reliable, and even during normal operating conditions a few detectors malfunction, increasing the spacing between the functional detectors. Under such conditions, the error in travel time estimation is significantly larger and generally unacceptable. This research evaluates the in-practice travel time estimation model under different traffic conditions. It is observed that the existing models fail to estimate travel time accurately when detector spacing is large and during congestion shoulder periods. To address this issue, an innovative Hybrid model that considers only loop data for travel time estimation is proposed. The model is tested using simulation and validated with real Bluetooth data from the Pacific Motorway, Brisbane. Results indicate that during non-free-flow conditions and with larger detector spacing, the Hybrid model provides a significant improvement in the accuracy of travel time estimation.
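For illustration, the sketch below implements a simple piecewise (midpoint) speed-based travel time estimate of the kind the abstract describes as the in-practice approach; it is not the proposed Hybrid model, and the detector positions and spot speeds are hypothetical.

```python
# Minimal sketch of speed-based travel time estimation from loop detectors
# (the in-practice style of model the abstract evaluates, NOT the Hybrid model).
# Detector positions and spot speeds below are hypothetical.

def travel_time_midpoint(positions_m, speeds_ms):
    """Estimate corridor travel time by assigning each detector's spot speed
    to the half-segments on either side of it (midpoint method)."""
    if len(positions_m) != len(speeds_ms) or len(positions_m) < 2:
        raise ValueError("need matching positions and speeds for at least 2 detectors")
    boundaries = [positions_m[0]]
    for a, b in zip(positions_m, positions_m[1:]):
        boundaries.append((a + b) / 2.0)             # midpoint between detectors
    boundaries.append(positions_m[-1])
    total = 0.0
    for i, v in enumerate(speeds_ms):
        segment = boundaries[i + 1] - boundaries[i]  # metres covered at speed v
        total += segment / max(v, 0.1)               # guard against zero speed
    return total

# Hypothetical 3 km corridor with detectors every 500 m, except for a 1.5 km
# gap where two detectors have failed -- the situation the abstract highlights.
positions = [0, 500, 1000, 2500, 3000]               # metres
speeds = [27.0, 25.0, 12.0, 24.0, 26.0]              # m/s (12 m/s = congestion)
print(f"estimated travel time: {travel_time_midpoint(positions, speeds):.0f} s")
```

With a large gap, the congested spot speed is extrapolated over a long segment, which is exactly where the estimation error grows.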

Relevance: 80.00%

Abstract:

Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty in OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting and very precise detector alignment, is required at field sizes at least < 12 mm and, more conservatively, < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
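The practical-definition logic can be illustrated with a short sketch: interpolate a tabulated OPF-versus-field-size curve and report how much the OPF shifts when the field size is in error by 1 mm. The OPF table below is an illustrative placeholder, not measured or Monte Carlo data from this study.

```python
import numpy as np

# Sketch of the "practical definition" test: how much does the output factor
# (OPF) change if the field size is wrong by 1 mm? The OPF values below are
# illustrative placeholders, NOT measured or simulated data from the study.
field_mm = np.array([4, 6, 8, 10, 12, 15, 20, 30, 50, 100], dtype=float)
opf      = np.array([0.55, 0.72, 0.82, 0.87, 0.90, 0.93, 0.95, 0.97, 0.99, 1.00])

def opf_at(size_mm):
    return np.interp(size_mm, field_mm, opf)

def opf_change_percent(size_mm, error_mm=1.0):
    """Relative OPF change (%) caused by an error_mm error in field size."""
    return 100.0 * abs(opf_at(size_mm + error_mm) - opf_at(size_mm)) / opf_at(size_mm)

for s in [8, 10, 12, 15, 20, 30]:
    print(f"{s:>3} mm field: {opf_change_percent(s):.2f} % OPF change per 1 mm error")
```

Under the abstract's criterion, fields where this change exceeds the acceptable OPF uncertainty (1%) fall into the very small field regime.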

Relevance: 80.00%

Abstract:

The finite element method (FEM) relies on an approximate function fitted to a governing equation and minimizes the residual error in the integral sense in order to generate solutions to boundary value problems (nodal solutions). Consequently, FEM does not simultaneously provide accurate displacement and force solutions both at the nodes and along an element, especially under element loads, which are ubiquitous in practice. If the displacement and force solutions are strictly confined to an element's or member's ends (the nodal response), structural safety along the element (member) is inevitably ignored, which can hinder the design of a structure for both serviceability and ultimate limit states. Although the continuous element deflection and force solutions can be approximated by discrete nodal solutions through mesh refinement of an element (member), this workaround hampers effective and efficient structural assessment as well as whole-domain accuracy in assessing the structural safety of a structure. To this end, this paper presents an effective, robust and innovative approach that generates accurate nodal and element solutions in both the displacement and force fields. Its salient and unique feature is its versatility: it accounts for accurate linear and second-order elastic displacement and force solutions continuously along an element as well as at its nodes. The significance of this paper lies in shifting from nodal responses alone (robust global system analysis) to both nodal and element responses (sophisticated element formulation).
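A minimal sketch of the underlying issue (not the paper's element formulation): a single two-node bar element under a uniform axial load recovers the exact nodal displacement through equivalent nodal loads, yet the displacement and axial force recovered from the shape functions alone are wrong in the element interior. The material properties and load below are hypothetical.

```python
import numpy as np

# Sketch: a 2-node bar element under a uniform axial load q gives the exact
# nodal displacement via equivalent nodal loads, but the interior displacement
# and member force from the shape functions alone miss the element-load
# contribution. Properties are hypothetical, SI units.
E, A, L, q = 200e9, 1e-3, 2.0, 1e4

# One linear element, node 1 fixed: (EA/L) u2 = qL/2
u2_fem = (q * L / 2.0) / (E * A / L)

def u_exact(x):                          # exact field for a fixed-free bar
    return q / (E * A) * (L * x - x**2 / 2.0)

def u_fem_interp(x):                     # linear shape-function interpolation
    return u2_fem * x / L

def N_exact(x):                          # exact axial force
    return q * (L - x)

N_fem = E * A * u2_fem / L               # constant force recovered from the element

for x in [0.0, L / 2.0, L]:
    print(f"x={x:4.1f} m  u_fem={u_fem_interp(x):.3e}  u_exact={u_exact(x):.3e}  "
          f"N_fem={N_fem:8.1f} N  N_exact={N_exact(x):8.1f} N")
# Nodal values match exactly; the mid-element displacement and the force field do not.
```

Mesh refinement can mask this only by converting the interior response into additional nodal responses, which is the inefficiency the abstract points out.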

Relevance: 80.00%

Abstract:

Measurement of discrimination against 18O during dark respiration in plants is currently accepted as the only reliable method of estimating the partitioning of electrons between the cytochrome and alternative pathways. In this paper, we review the theory of the technique and its application to a gas-phase system. We extend it to include sampling effects and show that the isotope discrimination factor, D, is calculated as −dln(1 + δ)/dlnO*, where δ is the isotopic composition of the substrate oxygen and O* = [O2]/[N2] in a closed chamber containing tissue respiring in the dark. It is not necessary to integrate the expression but, if the integrated form is used, the resultant regression should not be constrained through the origin. This is important since any error in D will have significant effects on the estimation of the flux of electrons through the two pathways.
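A minimal sketch of the calculation as stated above: D is the negated slope of ln(1 + δ) against ln O*, fitted by ordinary least squares with a free intercept, i.e. not forced through the origin. The chamber samples below are hypothetical.

```python
import numpy as np

# Sketch: D = -d ln(1 + delta) / d ln(O*), estimated as the negated slope of a
# linear fit with a free intercept (not constrained through the origin).
# The chamber samples below are hypothetical, not data from the paper.
O_star = np.array([0.268, 0.262, 0.255, 0.249, 0.243])        # [O2]/[N2] in the closed chamber
delta  = np.array([0.0242, 0.0247, 0.0252, 0.0257, 0.0262])   # isotopic composition of substrate O2

x = np.log(O_star)
y = np.log(1.0 + delta)
slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares, intercept left free
D = -slope
print(f"discrimination factor D = {D * 1000:.1f} per mil")
```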

Relevance: 80.00%

Abstract:

Participation in drinking games (DGs) by university students is often associated with heavy drinking and negative social and health impacts. Although research in Australia indicates that university students tend to drink at risky levels, there is a paucity of literature on DGs among students, especially those residing at regional universities. This research examined drinking among female college students of white background. Eighteen female students participated in face-to-face, in-depth interviews to describe their DG experiences. Most women played DGs for social and monetary reasons, with many drinking high volumes of alcohol during the game. Excessive drinking was linked with the type of beverage consumed. Despite knowing the health risks associated with DGs, there was a strong social imperative for these young women to play these games. Research and public health initiatives to better understand and address problematic drinking activities in rural and regional Australia have tended to ignore women and the dominant white populations, whose heavy drinking has been largely restricted to private spheres.

Relevance: 80.00%

Abstract:

Inspired by similar reforms introduced in New Zealand, Canada and the United States, the Commonwealth, with the co-operation of the States, seeks in the Personal Property Securities Bill 2008 (the Bill) to introduce a central repository of recorded information reflecting particular security interests in personal property in Australia. Specifically, the interest recorded is an interest in personal property provided for by a transaction that in substance secures the payment or the performance of an obligation. In addition to providing notification of the use of the personal property as collateral to secure the payment of monies or the performance of an obligation, the Bill proposes to introduce a regime for prioritising interests in the same collateral. Central to this prioritisation are the concepts of a ‘perfected security interest’ and an ‘unperfected security interest’. Relevantly, a perfected security interest in collateral has priority over an unperfected security interest in the same collateral. The proposed mechanisms rely on the fundamental integer of personal property, which is defined as any property other than land. Recognising that property may take a tangible as well as an intangible form, the Bill reflects an appreciation of the fact that some property may have a tangible form which may act as collateral, while simultaneously the same property may involve other property, namely intangible property in the form of intellectual property rights, which in its own right may be the subject of a ‘security agreement’. An example set out in the Commentary on the Consultation Draft of the Bill (the Commentary) indicates the practical implications for certain property which has multiple profiles for the purposes of the Bill. This submission is concerned with the presumptions made in relation to the interface between tangible property and intangible property arising from the same personal property, as set out in s 30 of the Bill.

Relevance: 80.00%

Abstract:

This thesis presents an empirical study of the effects of topology on cellular automata rule spaces. The classical definition of a cellular automaton is restricted to that of a regular lattice, often with periodic boundary conditions. This definition is extended to allow for arbitrary topologies. The dynamics of cellular automata within the triangular tessellation were analysed when transformed to 2-manifolds of topological genus 0, genus 1 and genus 2. Cellular automata dynamics were analysed from a statistical mechanics perspective. The sample sizes required to obtain accurate entropy calculations were determined by an entropy error analysis, which tracked the error in the computed entropy as the sample size increased. Each cellular automata rule space was sampled repeatedly and the selected cellular automata were simulated over many thousands of trials for each topology. This resulted in an entropy distribution for each rule space. The computed entropy distributions are indicative of the cellular automata dynamical class distribution. Through the comparison of these dynamical class distributions using the E-statistic, it was identified that such topological changes cause these distributions to alter. This is a significant result which implies that both global structure and local dynamics play an important role in defining the long-term behaviour of cellular automata.
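The entropy versus sample-size analysis can be sketched as follows; the elementary rule-110 automaton on a one-dimensional ring stands in for the triangular-tessellation automata studied in the thesis, and the block length and sample sizes are arbitrary choices.

```python
import numpy as np
from collections import Counter

# Sketch of an entropy/sample-size analysis in the spirit of the thesis: the
# elementary rule-110 CA on a 1D ring stands in for the triangular-tessellation
# automata. We estimate the Shannon entropy of length-3 blocks and watch the
# estimate settle as the number of sampled blocks grows.
rng = np.random.default_rng(0)

def step_rule110(state):
    left, right = np.roll(state, 1), np.roll(state, -1)
    pattern = 4 * left + 2 * state + right
    rule = np.array([0, 1, 1, 1, 0, 1, 1, 0])      # rule 110 lookup table
    return rule[pattern]

def block_entropy(samples):
    counts = Counter(samples)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Run the CA and collect length-3 blocks from the space-time diagram.
n, steps = 256, 400
state = rng.integers(0, 2, n)
blocks = []
for _ in range(steps):
    state = step_rule110(state)
    for i in range(0, n - 3, 3):
        blocks.append(tuple(state[i:i + 3]))

# Entropy estimate vs sample size: the error analysis picks the sample size
# beyond which the estimate stops drifting.
for m in [100, 1000, 10000, len(blocks)]:
    print(f"{m:>6} blocks: H = {block_entropy(blocks[:m]):.3f} bits")
```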

Relevance: 80.00%

Abstract:

Background: Hot air ballooning incidents are relatively rare; however, when they do occur, they are likely to result in a fatality or serious injury. Human error is commonly cited as the cause of hot air ballooning incidents; however, error in itself is not an explanation for safety failures. This research aims to identify, and establish the relative importance of, factors contributing to hot air ballooning incidents. Methods: Twenty-two Australian Ballooning Federation (ABF) incident reports were thematically coded using a bottom-up approach to identify causal factors. Subsequently, 69 balloonists (mean 19.51 years’ experience) participated in a survey to identify additional causal factors and to rate (out of seven) the perceived frequency of each previously identified causal factor and its potential impact on ballooning operations. Perceived associated risk was calculated by multiplying the mean perceived frequency and impact ratings. Results: Incident report coding identified 54 causal factors within nine higher-level areas: Attributes, Crew resource management, Equipment, Errors, Instructors, Organisational, Physical Environment, Regulatory body and Violations. Overall, ‘weather’, ‘inexperience’ and ‘poor/inappropriate decisions’ were rated as having the greatest perceived associated risk. Discussion: Although errors were nominated as a prominent cause of hot air ballooning incidents, the physical environment and personal attributes are also particularly important for safe hot air ballooning operations. In identifying a range of causal factors, the areas of weakness surrounding ballooning operations have been defined; it is hoped that targeted safety and training strategies can now be put in place to remove these contributing factors and reduce the chance of pilot error.
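The risk index described above can be reproduced in a few lines: perceived associated risk is the mean frequency rating multiplied by the mean impact rating, both on the 1–7 scale used in the survey. The factors and ratings below are illustrative only, not the survey data.

```python
# Sketch of the risk index described in the abstract: perceived associated risk
# = mean frequency rating x mean impact rating (both on 1-7 scales).
# The factors and ratings below are illustrative, not the survey data.
ratings = {
    "weather":                      {"frequency": [5, 6, 5, 6], "impact": [6, 7, 6, 6]},
    "inexperience":                 {"frequency": [4, 5, 5, 4], "impact": [6, 6, 5, 6]},
    "poor/inappropriate decisions": {"frequency": [4, 4, 5, 5], "impact": [6, 6, 6, 7]},
}

def mean(xs):
    return sum(xs) / len(xs)

risk = {f: mean(r["frequency"]) * mean(r["impact"]) for f, r in ratings.items()}
for factor, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{factor:<30} perceived risk = {score:.1f}")
```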

Relevance: 80.00%

Abstract:

Many species of bat use ultrasonic frequency-modulated (FM) pulses to measure the distance to objects by timing the emission and reception of each pulse. Echolocation is mainly used in flight. Since the flight speed of bats often exceeds 1% of the speed of sound, Doppler effects will lead to compression of the time between emission and reception as well as an elevation of the echo frequencies, resulting in a distortion of the perceived range. This paper describes the consequences of these Doppler effects on the ranging performance of bats using different pulse designs. The consequences of Doppler effects on ranging performance described in this paper assume bats to have a very accurate ranging resolution, which is feasible with a filterbank receiver. By modeling two receiver types, it was first established that the effects of Doppler compression are virtually independent of the receiver type. Then a cross-correlation model was used to investigate the effects of flight speed on Doppler tolerance and range–Doppler coupling separately. This paper further shows how pulse duration, bandwidth, function type, and harmonics influence Doppler tolerance and range–Doppler coupling. The influence of each signal parameter is illustrated using calls of several bat species. It is argued that range–Doppler coupling is a significant source of error in bat echolocation, and various strategies that bats could employ to deal with this problem, including the use of range-rate information, are discussed.
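The basic geometry can be sketched with the standard constant-velocity approximation (this is not the paper's filterbank or cross-correlation receiver model): a bat closing on a stationary target receives a time-compressed echo, so naive two-way ranging underestimates the true range. The flight speeds and target range below are hypothetical.

```python
# Sketch of the constant-velocity Doppler geometry behind the abstract: a bat
# flying straight at a stationary target at speed v receives a time-compressed
# echo, so naive two-way ranging (range = c * delay / 2) underestimates range.
C = 343.0                                  # speed of sound in air, m/s (approx.)

def echo_delay(range_m, speed_ms):
    """Two-way echo delay for a bat closing on a stationary target."""
    return 2.0 * range_m / (C + speed_ms)

def perceived_range(range_m, speed_ms):
    return C * echo_delay(range_m, speed_ms) / 2.0

def doppler_factor(speed_ms):
    """Ratio of received to emitted frequency for the closing geometry."""
    return (C + speed_ms) / (C - speed_ms)

true_range = 2.0                           # metres, hypothetical
for v in [0.0, 3.4, 6.9, 10.0]:            # roughly 0 %, 1 %, 2 %, 3 % of c
    err_mm = 1000.0 * (true_range - perceived_range(true_range, v))
    print(f"v = {v:4.1f} m/s: range error = {err_mm:5.1f} mm, "
          f"echo frequency x {doppler_factor(v):.4f}")
```

Even at 1% of the speed of sound the range error is on the order of centimetres, which is large compared with the millimetre-scale resolution the paper assumes.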

Relevance: 80.00%

Abstract:

The dynamic nature of tissue temperature and of subcutaneous properties such as blood flow, fatness and metabolic rate leads to variation in local skin temperature. Therefore, we investigated the effects of using multiple regions of interest when calculating weighted mean skin temperature from four local sites. Twenty-six healthy males completed a single trial in a thermoneutral laboratory (mean ± SD: 24.0 (1.2) °C; 56 (8) % relative humidity; < 0.1 m/s air speed). Mean skin temperature was calculated from four local sites (neck, scapula, hand and shin) in accordance with International Standards using digital infrared thermography. A 50 × 50 mm square, defined by strips of aluminium tape, created six unique regions of interest (top left quadrant, top right quadrant, bottom left quadrant, bottom right quadrant, centre quadrant and the entire region) at each of the local sites. The largest potential error in weighted mean skin temperature was calculated using a combination of (a) the coolest and (b) the warmest regions of interest at each of the local sites. Significant differences between the six regions of interest were observed at the neck (P < 0.01), scapula (P < 0.001) and shin (P < 0.05), but not at the hand (P = 0.482). The largest difference (± SEM) at each site was as follows: neck 0.2 (0.1) °C; scapula 0.2 (0.0) °C; shin 0.1 (0.0) °C; and hand 0.1 (0.1) °C. The largest potential error (mean ± SD) in weighted mean skin temperature was 0.4 (0.1) °C (P < 0.001) and the associated 95% limits of agreement for these differences were 0.2 to 0.5 °C. Although we observed differences in local and mean skin temperature based on the region of interest employed, these differences were minimal and are not considered physiologically meaningful.
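A minimal sketch of the weighted mean skin temperature calculation and the "largest potential error" comparison described above. The four-site weights used here are those commonly quoted for the ISO 9886 scheme (neck, scapula and shin 0.28; hand 0.16); treat them as an assumption, and all temperatures are illustrative rather than study data.

```python
# Sketch: weighted mean skin temperature from four sites, and the largest
# potential error obtained by comparing the coolest with the warmest region of
# interest at each site. Weights follow the commonly quoted ISO 9886 four-site
# scheme (an assumption here); all temperatures are illustrative.
WEIGHTS = {"neck": 0.28, "scapula": 0.28, "hand": 0.16, "shin": 0.28}

# Coolest and warmest region-of-interest temperature (degrees C) at each site.
roi_cool = {"neck": 34.1, "scapula": 33.8, "hand": 32.5, "shin": 31.9}
roi_warm = {"neck": 34.3, "scapula": 34.0, "hand": 32.6, "shin": 32.0}

def mean_skin_temperature(site_temps):
    return sum(WEIGHTS[site] * t for site, t in site_temps.items())

t_cool = mean_skin_temperature(roi_cool)
t_warm = mean_skin_temperature(roi_warm)
print(f"mean Tsk (coolest ROIs): {t_cool:.2f} C")
print(f"mean Tsk (warmest ROIs): {t_warm:.2f} C")
print(f"largest potential error: {t_warm - t_cool:.2f} C")
```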

Relevance: 80.00%

Abstract:

Urbanisation significantly changes the characteristics of a catchment as natural areas are transformed into impervious surfaces such as roads, roofs and parking lots. The increased fraction of impervious surfaces leads to changes in the stormwater runoff characteristics, whilst a variety of anthropogenic activities common to urban areas generate a range of pollutants such as nutrients, solids and organic matter. These pollutants accumulate on catchment surfaces and are removed and transported by stormwater runoff, thereby contributing pollutant loads to receiving waters. In summary, urbanisation influences the stormwater characteristics of a catchment, including hydrology and water quality. Due to the growing recognition that stormwater pollution is a significant environmental problem, the implementation of mitigation strategies to improve the quality of stormwater runoff is becoming increasingly common in urban areas. A scientifically robust stormwater quality treatment strategy is an essential requirement for effective urban stormwater management. The efficient design of treatment systems is closely dependent on the state of knowledge of the primary factors influencing stormwater quality. In this regard, stormwater modelling outcomes provide designers with important guidance and datasets which significantly underpin the design of effective stormwater treatment systems. Therefore, the accuracy of modelling approaches and the reliability of modelling outcomes are of particular concern. This book discusses the inherent complexity and key characteristics of urban hydrology and stormwater quality, based on the influence exerted by a range of rainfall and catchment characteristics. A comprehensive field sampling and testing programme in relation to pollutant build-up, an urban catchment monitoring programme in relation to stormwater quality and the outcomes of advanced statistical analyses provided the platform for the knowledge creation. Two case studies and two real-world applications are discussed to illustrate the translation of the knowledge created into practical use in relation to the role of rainfall and catchment characteristics in urban stormwater quality. An innovative rainfall classification based on stormwater quality was developed to support the effective and scientifically robust design of stormwater treatment systems. Underpinned by the rainfall classification methodology, a reliable approach for design rainfall selection is proposed in order to optimise stormwater treatment based on both stormwater quality and quantity. This is a paradigm shift from the common approach, where stormwater treatment systems are designed based solely on stormwater quantity data. Additionally, how pollutant build-up and stormwater runoff quality vary with a range of catchment characteristics was also investigated. Based on the study outcomes, it can be concluded that the use of only a limited number of catchment parameters, such as land use and impervious surface percentage, as is the case in current modelling approaches, could result in appreciable error in water quality estimation. Influential catchment characteristics that should be incorporated into modelling also include urban form and the distribution of impervious surface area.

The knowledge created through the research investigations discussed in this monograph is expected to make a significant contribution to engineering practice, such as hydrologic and stormwater quality modelling, stormwater treatment design and urban planning, as the study outcomes provide practical approaches and recommendations for urban stormwater quality enhancement. Furthermore, this monograph also demonstrates how fundamental knowledge of stormwater quality processes can be translated to provide guidance on engineering practice, illustrates the comprehensive application of multivariate data analysis techniques, and offers a paradigm for the integrative use of computer models and mathematical models to derive practical outcomes.

Relevance: 80.00%

Abstract:

The ability to estimate the expected Remaining Useful Life (RUL) is critical to reduce maintenance costs, operational downtime and safety hazards. In most industries, reliability analysis is based on Reliability Centred Maintenance (RCM) and lifetime distribution models. In these models, the lifetime of an asset is estimated using failure time data; however, statistically sufficient failure time data are often difficult to obtain in practice due to fixed time-based replacement and the small populations of identical assets. When condition indicator data are available in addition to failure time data, one of the alternative approaches to the traditional reliability models is Condition-Based Maintenance (CBM). Covariate-based hazard modelling is one such CBM approach. There are a number of covariate-based hazard models; however, little work has been done to evaluate the performance of these models in asset life prediction using various condition indicators and levels of data availability. This paper reviews two covariate-based hazard models, the Proportional Hazard Model (PHM) and the Proportional Covariate Model (PCM). To assess these models' performance, the expected RUL is compared to the actual RUL. The outcomes demonstrate that both models achieve convincingly good results in RUL prediction; however, PCM has a smaller absolute prediction error. In addition, PHM shows an over-smoothing tendency compared to PCM when condition data change suddenly. Moreover, the case studies show that PCM is not biased in the case of small sample sizes.
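A minimal sketch of the covariate-based hazard idea in the spirit of PHM (not the paper's fitted models): a Weibull baseline hazard scaled by exp(β·z) for a condition indicator z, with the expected RUL at the current age obtained by numerically integrating the conditional survival function. All parameters and the indicator value are hypothetical, and the covariate is held at its current value for the projection, which is a simplification.

```python
import numpy as np

# Sketch of a proportional-hazards RUL estimate: h(t) = h0(t) * exp(beta * z),
# with a Weibull baseline h0. Expected RUL given survival to age_now is the
# integral of S(t)/S(age_now) for t > age_now. All values are hypothetical.
shape, scale = 2.5, 1200.0       # Weibull baseline hazard parameters (hours)
beta, z_now = 0.8, 1.4           # covariate coefficient and current indicator value
age_now, horizon = 600.0, 10000.0

grid = np.linspace(1e-6, age_now + horizon, 20000)
baseline = (shape / scale) * (grid / scale) ** (shape - 1.0)
hazard = baseline * np.exp(beta * z_now)

# Cumulative hazard by the trapezoidal rule, then the survival function.
cum_hazard = np.concatenate(
    ([0.0], np.cumsum(0.5 * (hazard[1:] + hazard[:-1]) * np.diff(grid))))
survival = np.exp(-cum_hazard)

# Expected RUL given survival to age_now.
s_now = np.interp(age_now, grid, survival)
after = grid >= age_now
cond_surv = survival[after] / s_now
expected_rul = float(np.sum(0.5 * (cond_surv[1:] + cond_surv[:-1]) * np.diff(grid[after])))
print(f"expected RUL at {age_now:.0f} h: {expected_rul:.0f} h")
```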

Relevance: 80.00%

Abstract:

Introduction and Aims: Wastewater analysis provides a non-intrusive way of measuring drug use within a population. We used this approach to determine daily use of conventional illicit drugs [cannabis, cocaine, methamphetamine and 3,4-methylenedioxymethamphetamine (MDMA)] and emerging illicit psychostimulants (benzylpiperazine, mephedrone and methylone) in two consecutive years (2010 and 2011) at an annual music festival. Design and Methods: Daily composite wastewater samples, representative of the festival, were collected from the on-site wastewater treatment plant and analysed for drug metabolites. Data over the 2 years were compared using the Wilcoxon matched-pairs test. Data from the 2010 festival were compared with data collected at the same time from a nearby urban community using equivalent methods. Results: Conventional illicit drugs were detected in all samples, whereas emerging illicit psychostimulants were found only on specific days. The estimated per capita consumption of MDMA, cocaine and cannabis was similar between the two festival years. Statistically significant (P < 0.05; Z = −2.0–2.2) decreases were observed in use of methamphetamine and one emerging illicit psychostimulant (benzylpiperazine). Only consumption of MDMA was elevated at the festival compared with the nearby urban community. Discussion and Conclusions: Rates of substance use at this festival remained relatively consistent over the two monitoring years. Compared with the urban community, drug use among festival goers was only elevated for MDMA, confirming its popularity in music settings. Our study demonstrated that wastewater analysis can objectively capture changes in substance use at a music setting without raising major ethical issues. It would potentially allow effective assessments of drug prevention strategies in such settings in the future.
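The standard wastewater back-calculation behind such estimates can be sketched as follows (not necessarily the exact parameters used in this study): the daily biomarker load in the influent is converted to parent-drug consumption with a correction factor (molar-mass ratio divided by the excretion fraction) and normalised by the population served. All numbers below, including the correction factor, are illustrative assumptions.

```python
# Sketch of the standard wastewater back-calculation (illustrative parameters,
# not the study's): per-capita consumption = biomarker concentration x daily
# flow x correction factor / population.
def per_capita_consumption_mg(conc_ng_per_L, flow_L_per_day, correction, population):
    """mg of parent drug consumed per person per day."""
    daily_load_mg = conc_ng_per_L * flow_L_per_day * 1e-6   # ng/L * L = ng; ng -> mg
    return daily_load_mg * correction / population

# Hypothetical festival day: 2000 ng/L of an MDMA biomarker in 800,000 L of
# wastewater, an assumed correction factor of 1.5, and 20,000 attendees.
print(f"{per_capita_consumption_mg(2000.0, 8.0e5, 1.5, 20000):.3f} mg/person/day")
```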

Relevance: 80.00%

Abstract:

There is an error in the JANAF (1985) data on the standard enthalpy, Gibbs energy and equilibrium constant for the formation of C2H2(g) from the elements. The error has arisen from an incorrect expression used for computing these parameters from the heat capacity, the entropy and the relative heat content. Presented in this paper are the corrected values of the enthalpy and Gibbs energy of formation and the corresponding equilibrium constant.
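The consistency check implied here rests on two standard relations: ΔfG° = ΔfH° − TΔfS° and log10 K = −ΔfG°/(RT ln 10). The sketch below evaluates them at 298.15 K using approximate textbook values for C2H2(g), graphite and H2(g); these illustrate the relations only and are not the corrected values reported in the paper.

```python
import math

# Sketch of the consistency check: dG_f = dH_f - T * dS_f and
# log10 K = -dG_f / (R T ln 10). The 298.15 K inputs are approximate textbook
# values for C2H2(g), graphite and H2(g), not the paper's corrected data.
R = 8.314            # J / (mol K)
T = 298.15           # K

dHf = 226.7e3        # J/mol, enthalpy of formation of C2H2(g), approximate
S_C2H2, S_C, S_H2 = 200.9, 5.74, 130.68   # J/(mol K), standard entropies, approximate

dSf = S_C2H2 - 2 * S_C - S_H2             # 2 C(graphite) + H2(g) -> C2H2(g)
dGf = dHf - T * dSf
log10K = -dGf / (R * T * math.log(10))

print(f"dS_f    = {dSf:6.1f} J/(mol K)")
print(f"dG_f    = {dGf / 1000:6.1f} kJ/mol")
print(f"log10 K = {log10K:6.1f}")
```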