913 results for Relative errors
Abstract:
Background: Considering the broad variation in the expression of housekeeping genes among tissues and experimental situations, studies using quantitative RT-PCR require strict definition of adequate endogenous controls. For glioblastoma, the most common type of tumor in the central nervous system, there was no previous report regarding this issue. Results: Here we show that, amongst seven frequently used housekeeping genes, TBP and HPRT1 are adequate references for glioblastoma gene expression analysis. Evaluation of the expression levels of 12 target genes utilizing different endogenous controls revealed that the normalization method applied may introduce errors in the estimation of relative quantities. Genes whose expression levels do not significantly differ between tumor and normal tissues can be considered either increased or decreased if unsuitable reference genes are applied. Most importantly, genes showing significant differences in expression levels between tumor and normal tissues can be missed. We also demonstrate that the Holliday Junction Recognizing Protein, a novel DNA repair protein overexpressed in lung cancer, is extremely overexpressed in glioblastoma, with a median change of about 134-fold. Conclusion: Altogether, our data show the relevance of prior validation of candidate control genes for each experimental model and indicate TBP plus HPRT1 as suitable references for studies of glioblastoma gene expression.
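The abstract does not spell out the quantification formula, but relative quantification against validated reference genes is commonly done with the 2^-ddCt method, normalizing the target gene's Ct to the mean Ct of the reference genes (here TBP and HPRT1). A minimal sketch with entirely hypothetical Ct values:

```python
def relative_expression(ct_target_sample, ct_refs_sample,
                        ct_target_control, ct_refs_control):
    """Fold change of a target gene via the 2^-ddCt method,
    normalizing to the mean Ct of several reference genes."""
    # Arithmetic mean of reference-gene Ct values; since Ct is a log2
    # scale, this corresponds to the geometric mean of expression levels.
    ref_sample = sum(ct_refs_sample) / len(ct_refs_sample)
    ref_control = sum(ct_refs_control) / len(ct_refs_control)
    d_ct_sample = ct_target_sample - ref_sample      # dCt in tumor sample
    d_ct_control = ct_target_control - ref_control   # dCt in normal tissue
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: tumor vs. normal, normalized to TBP and HPRT1
fold = relative_expression(22.0, [25.0, 27.0], 29.0, [25.5, 27.5])
```

With these made-up numbers the target comes out roughly 90-fold overexpressed; the point is only to show how an unsuitable reference Ct shifts `dd_ct` and hence the reported fold change.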
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg^2, the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Delta(m-M)_0 ~ 0.02 mag and Delta E(B-V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
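The SFH recovery described above rests on minimising a chi-squared-like statistic between the observed and model CMDs. The abstract does not specify the exact statistic, so the following is only a hedged illustration (function names and the Pearson-style form are assumptions): the model Hess diagram is a weighted sum of partial models, one per age bin, and the weights play the role of SFR(t).

```python
def chi2_like(obs_counts, partial_models, sfr_weights):
    """Chi-squared-like statistic between an observed Hess diagram
    (binned CMD star counts) and a composite model built as a weighted
    sum of partial models, one partial model per age bin."""
    chi2 = 0.0
    for j, n_obs in enumerate(obs_counts):
        # Composite model count in CMD bin j
        m = sum(w * pm[j] for w, pm in zip(sfr_weights, partial_models))
        if m > 0:
            chi2 += (n_obs - m) ** 2 / m  # Pearson-style term per bin
    return chi2
```

Minimising `chi2_like` over the non-negative `sfr_weights` then yields the recovered SFR(t); crowding and incompleteness enter through the simulated partial models.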
Abstract:
The mandible has a mixed embryological origin, and its growth is associated with the secondary cartilage of the condyle process (CP). In this area, growth depends on an array of intrinsic and extrinsic factors that influence protein metabolism. In the present study, we used an adolescent rat model to evaluate the growth and development of the CP under conditions of pre- and postnatal protein deficiency, combined with or without the stress of severe burn injury (BI). We found that protein deficiency severely undermined the growth of the CP, by altering the thickness of its constituent layers. BI is also capable of affecting CP growth, although the effect is less severe than protein deficiency. Interestingly, the summed effect of protein deficiency and BI on the CP is less severe than protein deficiency alone. A possible explanation is that the increased carbohydrates in a hypoproteic diet stimulate the production of endogenous insulin and protein synthesis, which partially compensates for the loss of lean body mass caused by BI.
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdos-Renyi or Barabasi-Albert type. First, we investigate situations in which one of the agents has a different (higher or lower) probability of observation error. It is shown that the influence of this special agent over the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. If the degree of this agent is taken as a fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide the analytical solution of the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error promotes the loss of quality in the estimation of the agents in the other communities.
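As a rough sketch of the kind of dynamics described (the paper's actual update rule, blending weights, and parameters are not given here, so everything below is an assumption for illustration), agents on an Erdos-Renyi graph can blend a noisy observation of a scalar quantity with the average of their neighbours' current models:

```python
import random

def simulate(n=50, p=0.1, theta=1.0, sigma=0.5, steps=200, seed=0):
    """Agents on an Erdos-Renyi G(n, p) graph each keep a model of a
    scalar theta. Each step, every agent averages its own noisy
    observation with the mean model of its neighbours."""
    rng = random.Random(seed)
    # Build Erdos-Renyi adjacency lists
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    models = [0.0] * n
    for _ in range(steps):
        new = []
        for i in range(n):
            obs = theta + rng.gauss(0.0, sigma)  # noisy observation
            if nbrs[i]:
                avg = sum(models[j] for j in nbrs[i]) / len(nbrs[i])
            else:
                avg = models[i]                  # isolated agent keeps its own model
            new.append(0.5 * (obs + avg))        # blend observation and neighbours
        models = new
    # Mean squared error of the final models relative to the truth
    return sum((m - theta) ** 2 for m in models) / n

err = simulate()
```

Averaging over neighbours pools independent observations, so the final model error falls well below the single-observation variance sigma^2; giving one agent a larger error probability would degrade this pooling in proportion to its degree, as the abstract reports.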
Abstract:
This paper describes a new and simple method to determine the molecular weight of proteins in dilute solution, with an error smaller than ~10%, by using the experimental data of a single small-angle X-ray scattering (SAXS) curve measured on a relative scale. This procedure does not require the measurement of SAXS intensity on an absolute scale and does not involve a comparison with another SAXS curve determined from a known standard protein. The proposed procedure can be applied to monodisperse systems of proteins in dilute solution, either in monomeric or multimeric state, and it has been successfully tested on SAXS data experimentally determined for proteins with known molecular weights. It is shown here that the molecular weights determined by this procedure deviate from the known values by less than 10% in each case, and the average error for the test set of 21 proteins was 5.3%. Importantly, this method allows for an unambiguous determination of the multimeric state of proteins with known molecular weights.
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error and mathematical regulatory networks, showing how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator in independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
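The correction in the independent (regression) case can be illustrated with the classical errors-in-variables attenuation result: OLS on a noisy regressor shrinks the slope toward zero by the reliability ratio, and if the measurement-error variance is known (e.g. estimated from microarray replicates), a method-of-moments correction undoes the bias. This is a generic sketch, not the paper's exact estimator:

```python
import random

def corrected_ols_slope(w, y, sigma_u2):
    """OLS slope of y on a noisy regressor w = x + u, corrected for
    attenuation using the known measurement-error variance sigma_u2."""
    n = len(w)
    mw = sum(w) / n
    my = sum(y) / n
    s_wy = sum((wi - mw) * (yi - my) for wi, yi in zip(w, y)) / n
    s_ww = sum((wi - mw) ** 2 for wi in w) / n
    naive = s_wy / s_ww                       # attenuated toward zero
    return naive * s_ww / (s_ww - sigma_u2)   # method-of-moments correction

# Simulated example: true slope beta = 2, regressor observed with noise
rng = random.Random(1)
beta, sigma_u = 2.0, 0.8
x = [rng.gauss(0.0, 1.0) for _ in range(20000)]
w = [xi + rng.gauss(0.0, sigma_u) for xi in x]          # noisy regressor
y = [beta * xi + rng.gauss(0.0, 0.3) for xi in x]       # clean response
b_corr = corrected_ols_slope(w, y, sigma_u ** 2)
```

Here the naive slope would be attenuated by roughly 1/(1 + 0.64) of the true value, while the corrected estimate recovers a slope near 2; the same idea extends to the autoregressive (time series) case mentioned in the abstract.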
Abstract:
The thermo-solvatochromism of 2,6-dibromo-4-[(E)-2-(1-methylpyridinium-4-yl)ethenyl] phenolate, MePMBr(2), has been studied in mixtures of water, W, with ionic liquids, ILs, in the temperature range of 10 to 60 degrees C, where feasible. The objectives of the study were to test the applicability of a recently introduced solvation model and to assess the relative importance of solute-solvent solvophobic interactions. The ILs were 1-allyl-3-alkylimidazolium chlorides, where the alkyl groups are methyl, 1-butyl, and 1-hexyl, respectively. The equilibrium constants for the interaction of W and the ILs were calculated from density data; they were found to be linearly dependent on N(C), the number of carbon atoms of the alkyl group; the van't Hoff equation (log K versus 1/T) applied satisfactorily. Plots of the empirical solvent polarities, E(T)(MePMBr(2)) in kcal mol(-1), versus the mole fraction of water in the binary mixture, chi(w), showed non-linear, i.e., non-ideal behavior. The dependence of E(T)(MePMBr(2)) on chi(w) has been conveniently quantified in terms of solvation by W, IL, and the "complex" solvent IL-W. The non-ideal behavior is due to preferential solvation by the IL and, more efficiently, by IL-W. The deviation from linearity increases as a function of increasing N(C) of the IL, and is stronger than that observed for solvation of MePMBr(2) by aqueous 1-propanol, a solvent whose lipophilicity is 12.8 to 52.1 times larger than those of the ILs investigated. The dependence on N(C) is attributed to solute-solvent solvophobic interactions, whose relative contribution to solvation is presumably greater than that in mixtures of water and 1-propanol.
Abstract:
The reconstruction of the physical environments of Amazonian areas is of great interest for determining the dynamic evolution of the Amazon drainage basin. However, few studies have emphasized the Quaternary deposits in this region, mostly due to the lack of natural exposures imposed by the low topography. This work integrates facies analysis, radiocarbon dating, delta(13)C, delta(15)N, and C/N of a 124 m-thick core from an area located at the mouth of the Amazon River, northeastern Amazonia. The study records deposits up to 50,795 (14)C yr B.P. in age, which formed in a variety of depositional environments including fluvial channel, tidal flat, outer estuarine basin to shallow marine, inner estuarine basin, estuarine channel, and lagoon. Facies interpretation was significantly improved with the inclusion of delta(13)C, delta(15)N, and C/N analyses of organic matter extracted from the sediments. The obtained values conform to a transitional, mostly estuarine paleosetting that evolved during successive relative sea-level fluctuations. The results suggest fluvial deposition between 40,950 (+/- 590) and 50,795 (14)C yr B.P., with a rise in relative sea level that commenced between 35,567 (+/- 649) and 39,079 (+/- 1114) (14)C yr B.P. An overall transgression took place until 29,340 (+/- 340) (14)C yr B.P., after which the relative sea level dropped, favoring valley rejuvenation and incision. From this time up to 10,479 (+/- 34) (14)C yr B.P., a rise in relative sea level filled the valley with estuarine deposits. After 10,479 (+/- 34) (14)C yr B.P., the estuary was replaced by a lagoon. At the end of the Holocene, the coastline prograded approximately 45 km northward, replacing the lagoon with a lake system. Despite the influence of eustatic fluctuations, regional tectonics played a significant role in creating new space where these Late Pleistocene and Holocene sediments accumulated. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident-timing apparatus. Different visual stimulus speeds were used during the random practice. 33 children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec. of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from that adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
Abstract:
The aim of this study was to investigate the effects of knowledge of results (KR) frequency and task complexity on motor skill acquisition. The task consisted of throwing a bocha ball to place it as close as possible to the target ball. 120 students ages 11 to 73 years were assigned to one of eight experimental groups according to knowledge of results frequency (25, 50, 75, and 100%) and task complexity (simple and complex). Subjects performed 90 trials in the acquisition phase and 10 trials in the transfer test. The results showed that knowledge of results given at a frequency of 25% resulted in a lower absolute error than 50% and a lower variable error than the 50, 75, and 100% frequencies, but no effect of task complexity was found.
Abstract:
Medication administration errors (MAE) are the most frequent kind of medication error. Errors with antimicrobial drugs (AD) are relevant because they may interfere with patient safety and with the development of microbial resistance. The aim of this study is to analyze the AD errors detected in a Brazilian multicentric study of MAE. It was a descriptive and exploratory study carried out in clinical units of five Brazilian teaching hospitals. The hospitals were investigated during 30 days. MAE were detected by the observation technique and classified into categories: wrong route (WR), wrong patient (WP), wrong dose (WD), wrong time (WT), and unordered drug (UD). AD with MAE were classified by the Anatomical-Therapeutic-Chemical (ATC) Classification System. AD with a narrow therapeutic index (NTI) were identified. A descriptive statistical analysis was performed using SPSS version 11.5 software. A total of 1500 errors were observed, 277 (18.5%) of them involving AD. The types of AD error were: WT 87.7%, WD 6.9%, WR 1.5%, UD 3.2%, and WP 0.7%. The number of AD found was 36. The most frequent ATC classes were fluoroquinolones (13.9%), combinations of penicillins (13.9%), macrolides (8.3%), and third-generation cephalosporins (5.6%). The parenteral dosage form was associated with 55.6% of AD. 16.7% of AD were NTI drugs; 47.4% of WD and 21.8% of WT errors involved NTI drugs. This study shows that these errors should be considered potential areas for improvement in the medication process and patient safety, and that there is a need to develop rational use of AD.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch, whose normalized residuals are higher than a specified threshold value, and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator which increases the V - theta state vector for the inclusion of suspicious parameters. Stage 3 enables the validation of the estimation obtained in Stage 2, and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach to deal with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally the proposed approach is confirmed on tests performed on the Hydro-Quebec TransEnergie network.
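The Identification Index of Stage 1 is simple enough to state directly: it is the fraction of measurements adjacent to a branch whose normalized residuals exceed a specified threshold. A minimal sketch (the threshold and residual values below are hypothetical):

```python
def identification_index(adjacent_residuals, threshold=3.0):
    """Identification Index (II) of a branch: the fraction of
    measurements adjacent to the branch whose normalized residuals
    exceed the given threshold."""
    if not adjacent_residuals:
        return 0.0
    flagged = sum(1 for r in adjacent_residuals if abs(r) > threshold)
    return flagged / len(adjacent_residuals)

# Hypothetical normalized residuals around two branches
ii_suspect = identification_index([4.1, 3.5, 0.7, 5.0])  # 3 of 4 exceed 3.0
ii_clean = identification_index([0.4, 1.1, 0.9])         # none exceed 3.0
```

Branches with a high II would then have their parameters added to the augmented state vector in Stage 2.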
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of probes zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were accomplished and demonstrated the effectiveness of the employed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) at every m produced items and deciding, at each inspection, whether the fraction of conforming items was reduced or not. If the inspected item is non-conforming, the production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, a probabilistic model is developed that classifies the examined item repeatedly until a conforming or b non-conforming classifications are observed. The first event that occurs (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain the expression of the average cost of the control system, which can be optimized by three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first consists of a simple preventive policy: the production system is adjusted at every n produced items (no inspection is performed). The second classifies the examined item repeatedly a fixed number of times, r, and considers it conforming if most classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and classifies the examined item as conforming if most classifications were conforming. On the other hand, depending on the degree of errors and costs, the preventive policy can be, on average, more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B. V. All rights reserved.
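The final classification of the repeated scheme, first to a conforming or b non-conforming results, can be computed by a short recursion over the counts observed so far. This is a sketch assuming each single classification independently returns "conforming" with probability p (the paper's full average-cost expression via the ergodic Markov chain is not reproduced here):

```python
from functools import lru_cache

def prob_final_conforming(p, a, b):
    """Probability that the repeated-classification scheme declares an
    item conforming: classifications are repeated until either a
    'conforming' or b 'non-conforming' results are observed, and each
    single classification returns 'conforming' with probability p."""
    @lru_cache(maxsize=None)
    def f(c, nc):
        # c conforming and nc non-conforming results seen so far
        if c == a:
            return 1.0
        if nc == b:
            return 0.0
        return p * f(c + 1, nc) + (1 - p) * f(c, nc + 1)
    return f(0, 0)

# Single classification (a = b = 1) reduces to p itself
p1 = prob_final_conforming(0.9, 1, 1)
# Racing to 2 results sharpens a 0.9-accurate classifier:
# sequences CC, CNC, NCC -> 0.81 + 2 * 0.81 * 0.1 = 0.972
p2 = prob_final_conforming(0.9, 2, 2)
```

Raising a and b thus trades extra classifications per inspected item for a lower rate of diagnosis errors, which is exactly the trade-off the average-cost optimization over (m, a, b) resolves.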
Abstract:
The procedure for on-line process control by attributes consists of inspecting a single item at every m produced items. On the basis of the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles about on-line process control have considered stopping the production process for an adjustment when the inspected item is non-conforming (the production is then restarted in control; here this is denominated a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is decided at every m produced items. If a preventive adjustment is decided upon, then no item is inspected; otherwise, the m-th item is inspected: if it conforms, the production goes on, otherwise an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.