959 results for estimate


Relevance: 10.00%

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators enable more precise assessment of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox, where it achieves a better fit than a state space model with linear and Gaussian assumptions.
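
The Monte Carlo estimation this abstract describes can be sketched as a bootstrap particle filter; the degradation and observation models below (gamma increments, Gaussian measurement noise, all numerical values) are hypothetical stand-ins, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=1000):
    """Bootstrap particle filter for a hypothetical nonlinear, non-Gaussian
    degradation model:
        state:       x_t = x_{t-1} + gamma increment   (irreversible growth)
        observation: y_t = x_t + Gaussian noise        (indirect indicator)
    Returns the posterior mean of the direct indicator at each step."""
    x = np.zeros(n_particles)          # particles for the hidden crack depth
    estimates = []
    for y in obs:
        # propagate: monotone (irreversible) degradation increments
        x = x + rng.gamma(2.0, 0.05, size=n_particles)
        # weight by observation likelihood (sd 0.2 assumed)
        w = np.exp(-0.5 * ((y - x) / 0.2) ** 2)
        w /= w.sum()
        estimates.append(float(np.dot(w, x)))
        # resample to avoid weight degeneracy
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

Continuous time and state need no discretisation of the indicator here; only the inspection times at which observations arrive matter.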

Relevance: 10.00%

Abstract:

We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of the results and of extensions that could be incorporated into our approach.
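
The importance-sampling shortcut described above (re-weighting posterior samples instead of re-running MCMC after each cohort) can be sketched as follows; the one-parameter logistic dose-toxicity model and all numerical values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-parameter dose-toxicity model: P(tox | dose) = logistic(theta * dose).
def lik(theta, doses, tox):
    p = 1.0 / (1.0 + np.exp(-theta[:, None] * doses))
    return np.prod(np.where(tox, p, 1 - p), axis=1)

# Draw once from the prior; as each cohort's data arrive, re-weight the same
# samples by the likelihood rather than drawing fresh MCMC samples.
theta = rng.normal(1.0, 1.0, size=20000)        # prior samples
doses = np.array([0.5, 0.5, 1.0])               # doses administered so far
tox   = np.array([False, False, True])          # observed toxicities
w = lik(theta, doses, tox)
w /= w.sum()
post_mean = float(np.dot(w, theta))             # posterior mean of theta
ess = 1.0 / float(np.sum(w ** 2))               # effective sample size diagnostic
```

The effective sample size tells you when the weights have degenerated and a full posterior refresh is warranted.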

Relevance: 10.00%

Abstract:

The proportion of functional sequence in the human genome is currently a subject of debate. The most widely accepted figure is that approximately 5% is under purifying selection. In Drosophila, estimates are an order of magnitude higher, though this corresponds to a similar quantity of sequence. These estimates depend on the difference between the distribution of genomewide evolutionary rates and that observed in a subset of sequences presumed to be neutrally evolving. Motivated by the widening gap between these estimates and experimental evidence of genome function, especially in mammals, we developed a sensitive technique for evaluating such distributions and found that they are much more complex than previously apparent. We found strong evidence for at least nine well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least seven classes in an alignment of four mammals, including human. We also identified at least three rate classes in human ancestral repeats. By positing that the largest of these ancestral repeat classes is neutrally evolving, we estimate that the proportion of nonneutrally evolving sequence is 30% of human ancestral repeats and 45% of the aligned portion of the genome. However, we also question whether any of the classes represent neutrally evolving sequences and argue that a plausible alternative is that they reflect variable structure-function constraints operating throughout the genomes of complex organisms.
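
Resolving "rate classes" within a distribution of evolutionary rates amounts to mixture modelling. A minimal two-component 1-D Gaussian-mixture EM conveys the idea (illustrative only; the paper's technique is more sensitive and resolves many more classes):

```python
import numpy as np

def em_two_normals(x, iters=200):
    """Two-component 1-D Gaussian-mixture EM: a toy stand-in for resolving
    rate classes in a genome-wide distribution of evolutionary rates."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])                        # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each class for each rate estimate
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return pi, mu, var
```

Positing that one fitted class is the neutral one, the other classes' mixing weights give the non-neutral proportion.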

Relevance: 10.00%

Abstract:

As regional and continental carbon balances of terrestrial ecosystems become available, it is becoming clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC), organized in soil monitoring networks (SMNs), are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while the latter use both systematic (usually grid-based) and stratified repeated sampling campaigns (at 5–10 year intervals) with densities of one site per 10–1,040 km². For paired sites, multiple samples are taken at each site to allow statistical analysis, while for single sites, composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow re-measurement with updated techniques. Information on land management and, where possible, land use history should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of the SMN, provided detailed land management records are available. Such studies are still rare, as most SMNs have been implemented recently or are in progress.
Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha⁻¹ for the prediction of SOC stock changes on individual sites to 11.72 Mg C ha⁻¹, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward way to attribute SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach).

Keywords: Soil monitoring networks - Soil organic carbon - Modeling - Sampling design
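
The spatially implicit upscaling mentioned in the conclusion reduces to an area-weighted average over soil/land-use/climate strata; a minimal sketch with hypothetical units:

```python
def national_soc_change(strata):
    """Spatially implicit upscaling: area-weighted mean SOC change over
    soil/land-use/climate strata.
    `strata` is a list of (area_ha, delta_soc_mg_c_per_ha) pairs (illustrative)."""
    total_area = sum(area for area, _ in strata)
    return sum(area * delta for area, delta in strata) / total_area
```

Each stratum's delta would come from the SMN sites (or a process-based SOC model run at those sites) falling within it.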

Relevance: 10.00%

Abstract:

Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of pest absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
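
The value of negative surveillance data under imperfect detection follows from a simple Bayes update. A minimal sketch (independent surveys and a fixed per-survey sensitivity are simplifying assumptions; the paper's hierarchical spatio-temporal model is far richer):

```python
def posterior_infested(prior, sensitivity, n_negative_surveys):
    """P(district infested | n negative surveys), assuming surveys are
    independent with a fixed per-survey detection sensitivity
    (illustrative values, not the paper's fitted parameters)."""
    miss = (1.0 - sensitivity) ** n_negative_surveys  # P(all surveys miss | infested)
    return prior * miss / (prior * miss + (1.0 - prior))
```

With a 10% prior and 30% per-survey sensitivity, five negative surveys pull the infestation probability below 2%, which is the kind of "value of absence information" the program provides.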

Relevance: 10.00%

Abstract:

Objective: To examine the prevalence of multiple types of maltreatment (MTM), potentially confounding factors and associations with depression, anxiety and self-esteem among adolescents in Viet Nam. Methods: In 2006 we conducted a cross-sectional survey of 2591 students (aged 12–18 years; 52.1% female) from randomly selected classes in eight secondary schools in urban (Hanoi) and rural (Hai Duong) areas of northern Viet Nam (response rate, 94.7%). Sequential multiple regression analyses were performed to estimate the relative influence of individual, family and social characteristics and of eight types of maltreatment, including physical, emotional and sexual abuse and physical or emotional neglect, on adolescent mental health. Findings: Females reported more neglect and emotional abuse, whereas males reported more physical abuse, but no statistically significant difference was found between genders in the prevalence of sexual abuse. Adolescents were classified as having nil (32.6%), one (25.9%), two (20.7%), three (14.5%) or all four (6.3%) maltreatment types. Linear bivariate associations between MTM and depression, anxiety and low self-esteem were observed. After controlling for demographic and family factors, MTM showed significant independent effects. The proportions of the variance explained by the models ranged from 21% to 28%. Conclusion: The combined influence of adverse individual and family background factors and of child maltreatment upon mental health in adolescents in Viet Nam is consistent with research in non-Asian countries. Emotional abuse was strongly associated with each health indicator. In Asian communities where child abuse is often construed as severe physical violence, it is important to emphasize the equally pernicious effects of emotional maltreatment.

Relevance: 10.00%

Abstract:

In this work a novel hybrid approach is presented that uses a combination of both time domain and frequency domain solution strategies to predict the power distribution within a lossy medium loaded within a waveguide. The problem of determining the electromagnetic fields evolving within the waveguide and the lossy medium is decoupled into two components: one for computing the fields in the waveguide including a coarse representation of the medium (the exterior problem), and one for a detailed resolution of the lossy medium (the interior problem). A previously documented cell-centred Maxwell's equations numerical solver can be used to resolve the exterior problem accurately in the time domain. Thereafter the discrete Fourier transform can be applied to the computed field data around the interface of the medium to estimate the frequency domain boundary condition information that is needed for closure of the interior problem. Since only the electric fields are required to compute the power distribution generated within the lossy medium, the interior problem can be resolved efficiently using the Helmholtz equation. A consistent cell-centred finite-volume method is then used to discretise this equation on a fine mesh, and the underlying large, sparse, complex matrix system is solved for the required electric field using the Krylov-subspace-based GMRES iterative solver. It will be shown that the hybrid solution methodology works well when a single frequency is considered in the evaluation of the Helmholtz equation in a single mode waveguide. A restriction of the scheme is that the material needs to be sufficiently lossy, so that any waves penetrating the material are absorbed.
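
The interior solve described above — a sparse, complex Helmholtz system handed to GMRES — can be illustrated in one dimension. The grid, wavenumber and loss values below are arbitrary, and SciPy's finite-difference setup stands in for the paper's cell-centred finite-volume discretisation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# 1-D illustration: discretise E'' + k^2 (1 + i*sigma) E = 0 on a fine grid,
# with a Dirichlet value at the interface (standing in for the DFT-derived
# boundary data from the exterior solve) and E = 0 deep inside the lossy medium.
n, h, k, sigma = 200, 0.01, 20.0, 0.5
k2 = (k ** 2) * (1.0 + 1j * sigma)          # complex wavenumber of lossy medium
main = np.full(n, -2.0 / h ** 2 + k2, dtype=complex)
off = np.full(n - 1, 1.0 / h ** 2, dtype=complex)
A = diags([off, main, off], [-1, 0, 1], format="csr")
b = np.zeros(n, dtype=complex)
b[0] = -(1.0 / h ** 2) * (1.0 + 0.0j)       # interface field value folded into RHS
E, info = gmres(A, b, restart=200)          # info == 0 signals convergence
power = 0.5 * sigma * np.abs(E) ** 2        # dissipated-power proxy per cell
```

The "sufficiently lossy" restriction is visible directly: with sigma = 0.5 the field magnitude decays by several orders before reaching the far boundary, so the artificial E = 0 truncation does no harm.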

Relevance: 10.00%

Abstract:

An initialisation process is a key component in modern stream cipher design. A well-designed initialisation process should ensure that each key-IV pair generates a different key stream. In this paper, we analyse two ciphers, A5/1 and Mixer, for which this does not happen due to state convergence. We show how the state convergence problem occurs and estimate the effective key-space in each case.
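
State convergence is easy to demonstrate with a toy, deliberately non-injective state-update function: counting the image of the full state set after each initialisation round gives the effective state space. The update below is a contrived illustration, not the A5/1 or Mixer update:

```python
def effective_states(update, n_bits, steps):
    """Count the distinct internal states remaining after `steps` rounds of a
    state-update function, starting from all 2**n_bits loading states."""
    states = set(range(2 ** n_bits))
    for _ in range(steps):
        states = {update(s) for s in states}
    return len(states)

MASK = 0xFF
# A deliberately lossy (non-injective) toy update: e.g. both 0 and 1 map to 0,
# so distinct key-IV loading states merge during the initialisation rounds.
toy_update = lambda s: (s * (s >> 1)) & MASK
```

Once two loading states have merged, every subsequent round preserves the collision, so the effective key space can only shrink as initialisation proceeds.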

Relevance: 10.00%

Abstract:

The role of ions in the production of atmospheric particles has gained wide interest due to their profound impact on climate. Away from anthropogenic sources, molecules are ionized by alpha radiation from radon exhaled from the ground and by cosmic gamma radiation from space. These molecular ions quickly form into 'cluster ions', typically smaller than about 1.5 nm. Using our measurements and the published literature, we present evidence that cluster ion concentrations in forest areas are consistently higher than outside. Since alpha radiation cannot penetrate more than a few centimetres of soil, radon present deep in the ground cannot directly contribute to the measured cluster ion concentrations. We propose an additional mechanism whereby radon, which is water soluble, is brought up by trees and plants through the uptake of groundwater and released into the atmosphere by transpiration. We estimate that, in a forest comprising eucalyptus trees spaced 4 m apart, approximately 28% of the radon in the air may be released by transpiration. Considering that 24% of the earth's land area is still covered in forests, these findings have potentially important implications for atmospheric aerosol formation and climate.

Relevance: 10.00%

Abstract:

In an Australian context, the term hooning refers to risky driving behaviours such as illegal street racing and speed trials, as well as behaviours that involve unnecessary noise and smoke, which include burnouts, donuts, fishtails, drifting and other skids. Hooning receives considerable negative media attention in Australia, and since the 1990s all Australian jurisdictions have implemented vehicle impoundment programs to deal with the problem. However, there is limited objective evidence of the road safety risk associated with hooning behaviours. Attempts to estimate the risk associated with hooning are limited by official data collection and storage practices, and the willingness of drivers to admit to their illegal behaviour in the event of a crash. International evidence suggests that illegal street racing is associated with only a small proportion of fatal crashes; however, hooning in an Australian context encompasses a broader group of driving behaviours than illegal street racing alone, and it is possible that the road safety risks will differ with these behaviours. There is evidence from North American jurisdictions that vehicle impoundment programs are effective for managing drink driving offenders, and drivers who continue to drive while disqualified or suspended, both during and post-impoundment. However, these programs used impoundment periods of 30–180 days (depending on the number of previous offences). In Queensland the penalty for a first hooning offence is 48 hours, while the vehicle can be impounded for up to 3 months for a second offence, or permanently for a third or subsequent offence within three years. Thus, it remains unclear whether similar effects will be seen for hooning offenders in Australia, as no evaluations of vehicle impoundment programs for hooning have been published.
To address these research needs, this program of research consisted of three complementary studies designed to: (1) investigate the road safety implications of hooning behaviours in terms of the risks associated with the specific behaviours, and the drivers who engage in these behaviours; and (2) assess the effectiveness of current approaches to dealing with the problem; in order to (3) inform policy and practice in the area of hooning behaviour. Study 1 involved qualitative (N = 22) and quantitative (N = 290) research with drivers who admitted engaging in hooning behaviours on Queensland roads. Study 2 involved a systematic profile of a large sample of drivers (N = 834) detected and punished for a hooning offence in Queensland, and a comparison of their driving and crash histories with a randomly sampled group of Queensland drivers with the same gender and age distribution. Study 3 examined the post-impoundment driving behaviour of hooning offenders (N = 610) to examine the effects of vehicle impoundment on driving behaviour. The theoretical framework used to guide the research incorporated expanded deterrence theory, social learning theory, and driver thrill-seeking perspectives. This framework was used to explore factors contributing to hooning behaviours, and interpret the results of the aspects of the research designed to explore the effectiveness of vehicle impoundment as a countermeasure for hooning. Variables from each of the perspectives were related to hooning measures, highlighting the complexity of the behaviour. This research found that the road safety risk of hooning behaviours appears low, as only a small proportion of the hooning offences in Study 2 resulted in a crash. However, Study 1 found that hooning-related crashes are less likely to be reported than general crashes, particularly when they do not involve an injury, and that higher frequencies of hooning behaviours are associated with hooning-related crash involvement. 
Further, approximately one fifth of drivers in Study 1 reported being involved in a hooning-related crash in the previous three years, which is comparable to general crash involvement among the general population of drivers in Queensland. Given that hooning-related crashes represented only a sub-set of crash involvement for this sample, this suggests that there are risks associated with hooning behaviour that are not apparent in official data sources. Further, the main evidence of risk associated with the behaviour appears to relate to the hooning driver, as Study 2 found that these drivers are likely to engage in other risky driving behaviours (particularly speeding and driving vehicles with defects or illegal modifications), and have significantly more traffic infringements, licence sanctions and crashes than drivers of a similar (i.e., young) age. Self-report data from the Study 1 samples indicated that Queensland’s vehicle impoundment and forfeiture laws are perceived as severe, and that many drivers have reduced their hooning behaviour to avoid detection. However, it appears that it is more common for drivers to have simply changed the location of their hooning behaviour to avoid detection. When the post-impoundment driving behaviour of the sample of hooning offenders was compared to their pre-impoundment behaviour to examine the effectiveness of vehicle impoundment in Study 3, it was found that there was a small but significant reduction in hooning offences, and also for other traffic infringements generally. 
As Study 3 was observational, it was not possible to control for extraneous variables, and it is therefore possible that some of this reduction was due to other factors, such as a reduction in driving exposure, the effects of changes to Queensland's Graduated Driver Licensing scheme that were implemented during the study period and affected many drivers in the offender sample due to their age, or the extension of vehicle impoundment to other types of offences in Queensland during the post-impoundment period. However, there was a protective effect observed, in that hooning offenders did not show the increase in traffic infringements in the post period that occurred within the comparison sample. This suggests that there may be some effect of vehicle impoundment on the driving behaviour of hooning offenders, and that this effect is not limited to their hooning driving behaviour. To be more confident in these results, it is necessary to measure driving exposure during the post periods to control for issues such as offenders being denied access to vehicles. While it was not the primary aim of this program of research to compare the utility of different theoretical perspectives, the findings of the research have a number of theoretical implications. For example, it was found that only some of the deterrence variables were related to hooning behaviours, and sometimes in the opposite direction to predictions. Further, social learning theory variables had stronger associations with hooning. These results suggest that a purely legal approach to understanding hooning behaviours, and to designing and implementing countermeasures intended to reduce these behaviours, is unlikely to be successful. This research also had implications for policy and practice, and a number of recommendations were made throughout the thesis to improve the quality of relevant data collection practices.
Some of these changes have already occurred since the expansion of the application of vehicle impoundment programs to other offences in Queensland. It was also recommended that the operational and resource costs of these laws should be compared to the road safety benefits in ongoing evaluations of effectiveness to ensure that finite traffic policing resources are allocated in a way that produces maximum road safety benefits. However, as the evidence of risk associated with the hooning driver is more compelling than that associated with hooning behaviour, it was argued that the hooning driver may represent the better target for intervention. Suggestions for future research include ongoing evaluations of the effectiveness of vehicle impoundment programs for hooning and other high-risk driving behaviours, and the exploration of additional potential targets for intervention to reduce hooning behaviour. As the body of knowledge regarding the factors contributing to hooning increases, along with the identification of potential barriers to the effectiveness of current countermeasures, recommendations for changes in policy and practice for hooning behaviours can be made.

Relevance: 10.00%

Abstract:

BACKGROUND - High-density lipoprotein (HDL) protects against arterial atherothrombosis, but it is unknown whether it protects against recurrent venous thromboembolism. METHODS AND RESULTS - We studied 772 patients after a first spontaneous venous thromboembolism (average follow-up 48 months) and recorded the end point of symptomatic recurrent venous thromboembolism, which developed in 100 of the 772 patients. The relationship between plasma lipoprotein parameters and recurrence was evaluated. Plasma apolipoproteins AI and B were measured by immunoassays for all subjects. Compared with those without recurrence, patients with recurrence had lower mean (±SD) levels of apolipoprotein AI (1.12±0.22 versus 1.23±0.27 mg/mL, P<0.001) but similar apolipoprotein B levels. The relative risk of recurrence was 0.87 (95% CI, 0.80 to 0.94) for each increase of 0.1 mg/mL in plasma apolipoprotein AI. Compared with patients with apolipoprotein AI levels in the lowest tertile (<1.07 mg/mL), the relative risk of recurrence was 0.46 (95% CI, 0.27 to 0.77) for the highest-tertile patients (apolipoprotein AI >1.30 mg/mL) and 0.78 (95% CI, 0.50 to 1.22) for midtertile patients (apolipoprotein AI of 1.07 to 1.30 mg/mL). Using nuclear magnetic resonance, we determined the levels of 10 major lipoprotein subclasses and HDL cholesterol for 71 patients with recurrence and 142 matched patients without recurrence. We found a strong trend for association between recurrence and low levels of HDL particles and HDL cholesterol. CONCLUSIONS - Patients with high levels of apolipoprotein AI and HDL have a decreased risk of recurrent venous thromboembolism. © 2007 American Heart Association, Inc.

Relevance: 10.00%

Abstract:

The Lockyer Valley in southeast Queensland supports important and intensive irrigation which is dependent on the quality and availability of groundwater. Prolonged drought conditions from ~1997 resulted in a depletion of the alluvial aquifers, and concern for the long-term sustainability of this resource. By 2008, many areas of the valley were at < 20% of storage. Some relief occurred with rain events in early 2009, then in December 2010 - January 2011, most of southeast Queensland experienced unprecedented flooding. These storm-based events have caused a shift in research focus from investigations of drought conditions and mitigation to flood response analysis. For the alluvial aquifer system of the valley, a preliminary assessment of groundwater observation bore data, prior to and during the flood, indicates that there is a spatially variable aquifer response. While water levels in some bores screened in unconfined shallow aquifers have recovered by more than 10 m within a short period of time (months), others show only a small or moderate response. Measurements of pre- and post-flood groundwater levels and high-resolution time-series records from data loggers are considered within the framework of a 3D geological model of the Lockyer Valley using the Groundwater Visualisation System (GVS). Groundwater level fluctuations covering both drought and flood periods are used to estimate groundwater recharge using the water table fluctuation (WTF) method, supplemented by estimates derived using chloride mass balance. The presentation of hydraulic and recharge information in a 3D format has considerable advantages over the traditional 2D presentation of data. The 3D approach allows the distillation of multiple types of information (topographic, geological, hydraulic and spatial) into one representation that provides valuable insights into the major controls of groundwater flow and recharge.
The influence of aquifer lithology on the spatial variability of groundwater recharge is also demonstrated.
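
The water table fluctuation method mentioned above estimates recharge as the specific yield times the water-table rise; a minimal sketch with hypothetical bore readings (declines attributed to drainage are simply ignored in this illustration):

```python
def wtf_recharge(specific_yield, water_levels):
    """Water-table fluctuation (WTF) method: recharge over the record is the
    specific yield times the sum of water-table rises between readings.
    `water_levels` are successive heads in metres (illustrative values)."""
    rises = [b - a for a, b in zip(water_levels, water_levels[1:]) if b > a]
    return specific_yield * sum(rises)  # recharge in metres of water
```

Applied bore by bore, this is what exposes the spatially variable flood response: a 10 m recovery and a modest one yield very different recharge estimates for the same event.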

Relevance: 10.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
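
Stripped of constants and logarithmic factors, the bound stated in prose can be written schematically as:

```latex
\Pr\{\operatorname{sign}(f(X)) \neq Y\}
  \;\le\; \widehat{\mathrm{err}}(f)
  \;+\; O\!\left(A^{3}\sqrt{\frac{\log n}{m}}\right)
```

where \widehat{\mathrm{err}}(f) is the error estimate computed from squared error on the m training patterns; the exact constants and the suppressed log A and log m factors are as in the paper. The key point is that the complexity term grows with the weight-magnitude bound A, not with the number of weights.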

Relevance: 10.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
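
Computing the maximal discrepancy is equivalent to empirical risk minimization over the sample with the second half of the labels flipped; for a toy class of 1-D threshold classifiers it can be done by enumeration (the hypothesis class and data here are illustrative, not from the paper):

```python
import itertools

def maximal_discrepancy(xs, ys, thresholds):
    """Maximal discrepancy for a toy class of 1-D threshold classifiers:
    max over the class of (error on first half - error on second half).
    Equivalently, ERM on the sample with second-half labels flipped."""
    n = len(xs) // 2
    best = -1.0
    for t, sgn in itertools.product(thresholds, (1, -1)):
        pred = [sgn if x > t else -sgn for x in xs]
        e1 = sum(p != y for p, y in zip(pred[:n], ys[:n])) / n
        e2 = sum(p != y for p, y in zip(pred[n:], ys[n:])) / n
        best = max(best, e1 - e2)
    return best
```

The returned value is the data-based complexity penalty: a rich class can fit the half-flipped labels well and pays a large penalty, while a simple class cannot.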

Relevance: 10.00%

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
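
The core of GPOMDP is an eligibility-trace recursion, z_{t+1} = β z_t + ∇log μ(u_t | θ, y_t), combined with a running average of reward times trace. On a toy two-armed bandit (a one-state POMDP, with rewards chosen for illustration) the estimator reads:

```python
import numpy as np

def gpomdp_bandit(theta, beta, T, rng):
    """GPOMDP on a toy 2-armed bandit (a degenerate one-state POMDP):
        z_{t+1}     = beta * z_t + grad log mu(u_t | theta)
        Delta_{t+1} = running average of r_{t+1} * z_{t+1}
    Rewards are illustrative: arm 0 pays 1, arm 1 pays 0."""
    p = np.exp(theta) / np.exp(theta).sum()      # softmax policy over two arms
    z = np.zeros(2)                              # eligibility trace
    delta = np.zeros(2)                          # gradient estimate
    for t in range(T):
        u = rng.choice(2, p=p)
        glogp = -p.copy()
        glogp[u] += 1.0                          # grad of log softmax at action u
        z = beta * z + glogp
        r = 1.0 if u == 0 else 0.0
        delta += (r * z - delta) / (t + 1)       # incremental running average
    return delta
```

With θ = (0, 0) the true average-reward gradient is (0.25, −0.25); because the bandit is memoryless, any β only inflates variance here, whereas in a genuine POMDP larger β trades variance against the bias governed by the mixing time.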