953 results for function estimation
Abstract:
This paper presents an innovative prognostics model based on health state probability estimation, embedded in a closed-loop diagnostic and prognostic system. To select an appropriate classifier for health state probability estimation in the proposed prognostic model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault levels of three faults in an HP-LNG pump. Two sets of impeller-rubbing data were then employed to predict pump remnant life based on the estimation of discrete health state probabilities, exploiting the strong classification capability of support vector machines (SVMs) together with a feature selection technique. The results were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
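As a rough illustration of the classification step described above (not the authors' pipeline; the classifier settings, placeholder features and state labels are all assumptions), discrete health state probability estimation with an SVM plus feature selection might look like this:

    # Sketch: discrete health state probabilities from an SVM classifier.
    # Placeholder vibration-style features and integer health-state labels
    # (0 = healthy ... 3 = severe fault); not the paper's actual data.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))          # placeholder feature matrix
    y = rng.integers(0, 4, size=200)        # placeholder health-state labels

    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=8),        # stand-in for the feature selection step
        SVC(probability=True),              # Platt scaling yields state probabilities
    )
    model.fit(X, y)

    # P(state | features) for a new observation; remnant life prediction would
    # track how this distribution drifts toward the severe states over time.
    print(model.predict_proba(X[:1]))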
Abstract:
To date, attempts to regenerate a complete tooth, including the critical periodontal tissues associated with the tooth root, have not been successful. Controversy still exists regarding the origin of the cell source for cellular cementum (epithelial or mesenchymal). This disagreement may be partially due to a lack of understanding of the events leading to the initiation and development of tooth roots and supportive tissues, such as the cementum. Osterix (OSX) is a transcription factor essential for osteogenesis, but its role in cementogenesis has not been addressed. In the present study, we first documented a close relationship between the temporal and spatial expression pattern of OSX and the formation of cellular cementum. We then generated 3.6 Col 1-OSX transgenic mice, which displayed accelerated cementum formation compared with WT controls. Importantly, conditional deletion of OSX in mesenchymal cells with two different Cre systems (the 2.3 kb Col 1 and an inducible CAG-CreER) led to a sharp reduction in cellular cementum formation (including cementum mass and mineral deposition rate) and in the expression of dentin matrix protein 1 (DMP1) by cementocytes. However, deletion of the OSX gene after cellular cementum had formed did not alter the properties of the mature cementum, as evaluated by backscattered SEM and resin-cast SEM. Transient transfection of Osx into cementoblasts in vitro significantly inhibited cell proliferation and increased cell differentiation and mineralization. Taken together, these data support 1) the mesenchymal origin of cellular cementum (from PDL progenitor cells); 2) the vital role of OSX in controlling the formation of cellular cementum; and 3) the limited remodeling of cellular cementum in adult mice.
Abstract:
Background: The increasing popularity and use of the internet makes it an attractive option for providing health information and treatment, including for alcohol and other drug (AOD) use. There is limited research examining how people identify and access information about AOD use online, or how they assess the usefulness of the information presented. This study examined the strategies that individuals used to identify and navigate a range of AOD websites, along with their attitudes concerning presentation and content. Methods: Members of the general community in Brisbane and Roma (Queensland, Australia) were invited to participate in a 30-minute search of the internet for sites related to AOD use, followed by a focus group discussion. Fifty-one people participated in the study across nine focus groups. Results: Participants spent a maximum of 6.5 minutes on any one website, and less if the user was under 25 years of age. Time spent was as little as 2 minutes if the website was not the first accessed. Participants recommended that AOD-related websites have an engaging home or index page that quickly and accurately portrays the site's objectives and provides clear navigation options. Website content should closely match the title and description of the site used by internet search engines. Participants supported the development of a portal for AOD websites, suggesting that it would greatly facilitate access and navigation. Treatment programs delivered online were initially viewed with caution, apparently owing to limited understanding of what constitutes online treatment, including its potential efficacy. Conclusions: A range of recommendations arise from this study regarding the design and development of websites, particularly those related to AOD use. These include prudent use of text and information on any one webpage, careful use of graphics and colours, and clear, uncluttered navigation options. Implications for future website development are discussed.
Abstract:
Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination, but the methodology is applicable to other design objectives such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model, and the approach relies on a convenient estimator of the evidence of each model that is essentially a function of importance sampling weights. Other methods for this task, such as quadrature, which is often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem-specific tuning. We demonstrate the methodology on three applications, including discriminating between models for the decline in motor neuron numbers in patients suffering from neurological diseases such as motor neuron disease.
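To make the evidence estimator concrete, here is a minimal sketch (the Bernoulli models, uniform prior and absence of a resample/move step are my simplifications, not the authors' algorithm): with self-normalized particle weights, the one-step-ahead predictive of each new discrete observation is a weighted average of particle likelihoods, and the running product of these predictives estimates the model evidence.

    # Sketch: SMC-style evidence estimation from importance sampling weights
    # for binary (discrete) data; models and priors are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.binomial(1, 0.7, size=50)             # simulated discrete data

    n = 5000
    theta = rng.uniform(size=n)                   # M1: theta ~ Uniform(0, 1)
    W = np.full(n, 1.0 / n)                       # normalized particle weights
    log_Z1, log_Z2 = 0.0, 0.0                     # log-evidence accumulators

    for yt in y:
        lik = theta**yt * (1 - theta)**(1 - yt)   # p(y_t | theta_i) under M1
        log_Z1 += np.log(np.sum(W * lik))         # p(y_t | y_1:t-1) from weights
        W *= lik
        W /= W.sum()                              # reweight (no resample/move step)
        log_Z2 += np.log(0.5)                     # M2: theta fixed at 0.5

    # Posterior model probabilities under equal prior model weights.
    z = np.exp([log_Z1, log_Z2])
    print(z / z.sum())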
Abstract:
Precise identification of the time at which a change in a hospital outcome occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods for the survival time of a clinical procedure in the presence of patient mix, in a Bayesian framework. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to surgery using a Weibull accelerated failure time regression model. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, together with the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulation, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time CUSUM control charts for different magnitude scenarios. The proposed estimator performs better when a longer follow-up period (censoring time) is applied. In comparison with the alternative built-in CUSUM estimator, the Bayesian estimator yields more accurate and precise estimates, advantages that are reinforced by the probability quantification, flexibility and generalizability of the Bayesian change point detection model.
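A stripped-down version of the change point idea follows (an exponential survival model with conjugate gamma priors standing in for the paper's Weibull AFT regression and MCMC machinery; the data, priors and censoring time are all assumptions). With right censoring, the exponential likelihood depends only on the event count and total exposure, so the posterior over the change point location can be enumerated in closed form.

    # Sketch: posterior over a step-change point in mean survival time, using
    # an exponential likelihood with right censoring and a Gamma(a, b) prior
    # on the hazard rate before and after the change.
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(2)
    true_tau = 60
    t = np.r_[rng.exponential(10.0, true_tau),      # mean survival 10 before
              rng.exponential(4.0, 100 - true_tau)] # mean survival 4 after
    c = np.minimum(t, 20.0)                         # censor follow-up at 20 units
    d = (t <= 20.0).astype(float)                   # 1 = event observed

    a, b = 1.0, 1.0                                 # Gamma prior on the rate

    def log_marginal(deaths, exposure):
        # log of the exponential likelihood integrated against the Gamma prior
        return (a * np.log(b) - gammaln(a)
                + gammaln(a + deaths) - (a + deaths) * np.log(b + exposure))

    log_post = np.array([
        log_marginal(d[:k].sum(), c[:k].sum()) + log_marginal(d[k:].sum(), c[k:].sum())
        for k in range(1, len(t))
    ])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print("posterior mode of change point:", 1 + post.argmax())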
Abstract:
A system is described for calculating volume from a sequence of multiplanar 2D ultrasound images. Ultrasound images are captured using a video digitising card (Hauppauge Win/TV card) installed in a personal computer, and regions of interest are transformed into 3D space using position and orientation data obtained from an electromagnetic device (Polhemus Fastrak). The accuracy of the system was assessed by scanning 10 water-filled balloons (13-141 ml), 10 kidneys (147-200 ml) and 16 fetal livers (8-37 ml) in water using an Acuson 128XP/10 (5 MHz curvilinear probe). Volume was calculated using the ellipsoid, planimetry, tetrahedral and ray tracing methods and compared with the actual volume measured by weighing (balloons) and water displacement (kidneys and livers). The mean percentage error for the ray tracing method was 0.9 ± 2.4%, 2.7 ± 2.3% and 6.6 ± 5.4% for balloons, kidneys and livers, respectively. So far the system has been used clinically to scan fetal livers and lungs, neonate brain ventricles and adult prostate glands.
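Of the four algorithms, planimetry is the simplest to illustrate (a generic sketch, not the system's implementation; the example areas and slice spacing are invented): the traced region-of-interest areas are summed and multiplied by the inter-slice spacing.

    # Sketch: planimetry volume estimate from parallel 2D slices.
    # areas_mm2 would come from traced regions of interest; spacing is assumed.
    areas_mm2 = [310.0, 542.5, 608.2, 571.9, 433.0, 212.4]  # example slice areas
    spacing_mm = 2.5                                        # inter-slice distance

    volume_ml = sum(areas_mm2) * spacing_mm / 1000.0        # 1 ml = 1000 mm^3
    print(f"estimated volume: {volume_ml:.1f} ml")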
Abstract:
Sixteen formalin-fixed foetal livers were scanned in vitro using a new system for estimating volume from a sequence of multiplanar 2D ultrasound images. Three different scan techniques (radial, parallel and slanted) and four volume estimation algorithms (ellipsoid, planimetry, tetrahedral and ray tracing) were used. Actual liver volumes were measured by water displacement. Twelve of the sixteen livers also received x-ray computed tomography (CT) and magnetic resonance (MR) scans, and these volumes were calculated using voxel counting and planimetry. The percentage accuracy (mean ± SD) was 5.3 ± 4.7%, −3.1 ± 9.6% and −0.03 ± 9.7% for ultrasound (radial scans, ray-traced volumes), MR and CT (voxel counting), respectively. The new system may be useful for accurately estimating foetal liver volume in utero.
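The voxel-counting calculation used for the CT and MR volumes is similarly direct (a generic sketch; the segmentation mask and voxel dimensions here are assumed, not taken from the study):

    # Sketch: voxel-counting volume from a binary segmentation mask.
    import numpy as np

    mask = np.zeros((40, 256, 256), dtype=bool)   # stand-in segmentation mask
    mask[10:30, 80:170, 90:180] = True            # fake segmented organ

    voxel_mm3 = 1.5 * 0.9 * 0.9                   # slice thickness x in-plane spacing
    print(f"volume: {mask.sum() * voxel_mm3 / 1000.0:.1f} ml")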
Abstract:
The accuracy of measuring the mechanical properties of a material using instrumented nanoindentation at extremely small penetration depths relies heavily on the determination of the contact area of the indenter. Our experiments demonstrated that the conventional area function can lead to significant error when the contact depth is below 40 nm, owing to the singularity in the first derivative of the function in this region and the resulting unreasonably sharp peak on the function curve. In this paper, we propose a new area function for calculating the contact area in indentations where the contact depth varies from 10 to 40 nm. The experimental results show that the new area function produces better results than the conventional function.
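For reference, the conventional area function in instrumented nanoindentation is typically the Oliver-Pharr polynomial (whether the paper used exactly this form is an assumption on my part):

    A(h_c) = C_0 h_c^{2} + C_1 h_c + C_2 h_c^{1/2} + C_3 h_c^{1/4} + \cdots + C_8 h_c^{1/128}

with C_0 \approx 24.56 for an ideal Berkovich tip. The fractional-power terms give dA/dh_c contributions proportional to h_c^{-1/2}, h_c^{-3/4} and so on, which diverge as h_c \to 0; this is the derivative singularity at small contact depths that the abstract describes.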
Abstract:
Background: An in-depth understanding of crash risks is essential for prevention and safety promotion programmes. Traditionally, in-depth investigations of crash risks are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of the two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data, with a sketch of the modelling step given below. Results: Interpretation of the results for different combinations of interacting factors shows that poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the quasi-induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
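A minimal version of the modelling step referred to above (illustrative only; the variable names, factor levels, counts and the use of statsmodels are my assumptions, not the study's code):

    # Sketch: Poisson log-linear model of crash counts over a factor
    # cross-classification, plus a quasi-induced exposure ratio.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical cross-tabulated crash counts: at-fault vs not-at-fault
    # motorcyclist involvements by lighting condition.
    df = pd.DataFrame({
        "lighting": ["day", "day", "night", "night"],
        "at_fault": [1, 0, 1, 0],
        "count":    [120, 310, 95, 80],
    })

    # Interaction terms reveal which factor combinations are over-represented.
    fit = smf.glm("count ~ lighting * C(at_fault)",
                  data=df, family=sm.families.Poisson()).fit()
    print(fit.summary())

    # Quasi-induced exposure: not-at-fault involvements proxy for exposure,
    # so the at-fault / not-at-fault ratio per condition is a relative risk.
    tab = df.pivot(index="lighting", columns="at_fault", values="count")
    print(tab[1] / tab[0])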
Abstract:
In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.
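To make the rate-change models concrete, a toy sketch of how branch rates are drawn under the two relaxation schemes compared above (the parameter values and path length are arbitrary assumptions):

    # Sketch: branch rates along a root-to-tip path of 8 branches under the
    # two relaxed-clock schemes compared in the study.
    import numpy as np

    rng = np.random.default_rng(3)
    n, mean_rate, sigma = 8, 1e-3, 0.3

    # Uncorrelated exponential: each branch rate drawn independently.
    uncorrelated = rng.exponential(mean_rate, size=n)

    # Autocorrelated lognormal: each branch rate centred on its parent's rate
    # (the -sigma^2/2 offset keeps the expected child rate equal to the parent).
    rates = [mean_rate]
    for _ in range(n - 1):
        rates.append(rng.lognormal(np.log(rates[-1]) - sigma**2 / 2, sigma))

    print(uncorrelated)
    print(np.array(rates))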
Abstract:
The estimation of phylogenetic divergence times from sequence data is an important component of many molecular evolutionary studies. There is now a general appreciation that the procedure of divergence dating is considerably more complex than that initially described in the 1960s by Zuckerkandl and Pauling (1962, 1965). In particular, there has been much critical attention toward the assumption of a global molecular clock, resulting in the development of increasingly sophisticated techniques for inferring divergence times from sequence data. In response to the documentation of widespread departures from clocklike behavior, a variety of local- and relaxed-clock methods have been proposed and implemented. Local-clock methods permit different molecular clocks in different parts of the phylogenetic tree, thereby retaining the advantages of the classical molecular clock while casting off the restrictive assumption of a single, global rate of substitution (Rambaut and Bromham 1998; Yoder and Yang 2000).
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch lengths has received little attention. I examine this implicit assumption of all molecular dating methods on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to "hidden" rate heterogeneity and to changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
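The RY-coding referred to above is a simple recoding of nucleotides into purines and pyrimidines; a minimal sketch (the function name is mine):

    # Sketch: RY-coding a nucleotide sequence (A, G -> R; C, T -> Y), which
    # discards transitions within purines/pyrimidines and keeps transversions.
    RY = str.maketrans("AGCT", "RRYY")

    def ry_code(seq: str) -> str:
        return seq.upper().translate(RY)

    print(ry_code("atggctaacctt"))  # -> "RYRRYYRRYYYY"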
Abstract:
Proteasomes are cylindrical particles made up of a stack of four heptameric rings. In animal cells the outer rings are made up of 7 different types of alpha subunits, and the inner rings are composed of 7 of 10 possible different beta subunits. Regulatory complexes can bind to the ends of the cylinder. We have investigated aspects of the assembly, activity and subunit composition of core proteasome particles and 26S proteasomes, the localization of proteasome subpopulations, and the possible role of phosphorylation in determining proteasome localization, activities and association with regulatory components.