974 results for mean-variance efficiency


Relevance: 30.00%

Publisher:

Abstract:

Simian rotavirus SA-11, experimentally seeded, was recovered from raw domestic sewage by a two-step concentration procedure: filtration through a positively charged microporous filter (Zeta Plus 60 S) followed by ultracentrifugation, effecting an 8,000-fold concentration. By this method, a mean recovery of 81 ± 7.5% of the SA-11 virus was achieved.

Relevance: 30.00%

Publisher:

Abstract:

Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
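The quantities involved can be made concrete with a short sketch. This is not the authors' algorithm (their baseline is reconstructed downward from the early plateau phase); it only illustrates, on a noise-free toy dataset, how a regression over the log-linear window yields a PCR efficiency value and how that value enters the starting-concentration estimate. Function names are mine.

```python
import numpy as np

def pcr_efficiency(cycles, fluorescence):
    """Fit a line to log10(fluorescence) over a log-linear window.

    The slope s gives the per-cycle amplification factor E = 10**s
    (E = 2 would be a perfectly doubling reaction)."""
    slope, _ = np.polyfit(cycles, np.log10(fluorescence), 1)
    return 10.0 ** slope

def starting_concentration(f_q, c_q, efficiency):
    """Back-calculate the starting amount N0 from the fluorescence F_q
    observed at quantification cycle C_q: N0 = F_q / E**C_q."""
    return f_q / efficiency ** c_q

# Noise-free log-linear phase: N0 = 1e-3, true efficiency E = 1.9
cycles = np.arange(15, 21)
fluor = 1e-3 * 1.9 ** cycles

E = pcr_efficiency(cycles, fluor)
n0 = starting_concentration(fluor[0], cycles[0], E)
```

Because E enters as E**C_q, a small baseline-induced error in E is raised to the power of the quantification cycle, which is exactly the exponential error propagation described above.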

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Pain is a major issue after burns, even when large doses of opioids are prescribed. The study focused on the impact of a pain protocol using hypnosis on pain intensity, anxiety, clinical course, and costs. METHODS: All patients admitted to the ICU, aged >18 years, with an ICU stay >24 h, willing to try hypnosis, and treated according to the standardized pain protocol were included. Pain was scored on the Visual Analog Scale (VAS) (mean of multiple daily recordings), and basal and procedural opioid doses were recorded. Clinical outcome and economic data were retrieved from hospital charts and the hospital information system, respectively. Treated patients were matched with controls for sex, age, and burned surface area. FINDINGS: Forty patients were admitted from 2006 to 2007: 17 met exclusion criteria, leaving 23 patients, who were matched with 23 historical controls. Altogether, patients were 36 ± 14 years old with burns covering 27 ± 15% of body surface area (BSA). The first hypnosis session was performed after a median of 9 days. The protocol resulted in the early delivery of higher opioid doses/24 h (p<0.0001), followed by a later reduction, with lower pain scores (p<0.0001), less procedure-related anxiety, fewer procedures under anaesthesia, reduced total grafting requirements (p=0.014), and lower hospital costs per patient. CONCLUSION: A pain protocol including hypnosis reduced pain intensity, improved opioid efficiency, reduced anxiety, and improved wound outcome while reducing costs. The protocol-guided use of opioids improved patient care without side effects, while hypnosis had significant psychological benefits.

Relevance: 30.00%

Publisher:

Abstract:

In this report, the efficiency of Adultrap under field conditions is compared with that of a CDC backpack aspirator and of MosquiTRAP. An urban dengue-endemic area of Rio de Janeiro was selected to evaluate the efficiency of mosquito traps in capturing Aedes aegypti females. Adultrap and the aspirator captured similar numbers of Ae. aegypti females, with the former showing high specificity for gravid individuals (93.6%). A subsequent mark-release-recapture experiment was conducted to evaluate Adultrap and MosquiTRAP efficiency concomitantly. With a 6.34% recapture rate, MosquiTRAP captured a higher mean number of female Ae. aegypti per trap than Adultrap (χ² = 14.26; df = 1; p < 0.05). However, some MosquiTRAPs (28.12%) contained immature Ae. aegypti after 18 days of exposure in the field and could thus act as oviposition sites for female mosquitoes. Both trapping methods, designed to collect gravid Ae. aegypti females, seem efficient and reliable, and may aid routine Ae. aegypti surveillance.
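For readers unfamiliar with the test reported above, a minimal sketch of a one-degree-of-freedom chi-square comparison of capture counts between two trap types. The counts below are invented for illustration and are not the study's data.

```python
import math

def chisq_equal_split(count_a, count_b):
    """Chi-square goodness-of-fit test of two counts against a 50:50
    expectation (df = 1). The p-value uses the df = 1 identity
    P(X > s) = erfc(sqrt(s / 2))."""
    expected = (count_a + count_b) / 2.0
    stat = ((count_a - expected) ** 2 + (count_b - expected) ** 2) / expected
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical recaptures: one trap type catches twice as many females
stat, p = chisq_equal_split(80, 40)
```

A significant result (p < 0.05) indicates that the two traps do not capture females at the same rate, which is the comparison made in the mark-release-recapture experiment.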

Relevance: 30.00%

Publisher:

Abstract:

The goal of our study is to assess the diagnostic performance of procalcitonin (PCT) in septic shock alongside another biomarker, C-reactive protein (CRP). Results: Fifty-four septic patients were assessed; 66% were males, with a mean age of 63 years. Eighty-eight percent were diagnosed with septic shock and 11% with severe sepsis. Seventy-six percent were medical patients. Blood cultures were positive in 42.5%. Sepsis origin: respiratory 46%, neurological 5%, digestive 37%, and urinary 3%. The average SOFA score was 10.4. Conclusions: PCT and CRP have the same efficiency in early sepsis diagnosis. The diagnostic efficiency of PCT and CRP combined is significant but small. We suggest using both when sepsis is suspected.

Relevance: 30.00%

Publisher:

Abstract:

This paper investigates a simple procedure for robustly estimating the mean of an asymmetric distribution. The procedure removes the observations that are larger or smaller than certain limits and takes the arithmetic mean of the remaining observations, the limits being determined with the help of a parametric model, e.g., the Gamma, the Weibull, or the Lognormal distribution. The breakdown point, the influence function, the (asymptotic) variance, and the contamination bias of this estimator are explored and compared numerically with those of competing estimators.
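A minimal sketch of the idea, assuming the Lognormal as the parametric model (the paper also considers the Gamma and the Weibull) and a simple moment fit; this illustrates the shape of the procedure, not the paper's exact estimator.

```python
import numpy as np
from statistics import NormalDist

def parametric_trimmed_mean(x, alpha=0.05):
    """Fit a Lognormal to the sample, set cut-offs at its alpha and
    1 - alpha quantiles, drop observations outside them, and return
    the arithmetic mean of what remains."""
    x = np.asarray(x, dtype=float)
    logs = np.log(x)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    # Lognormal quantiles via the normal quantile of the logs
    z = NormalDist().inv_cdf(1 - alpha)
    lo, hi = np.exp(mu - z * sigma), np.exp(mu + z * sigma)
    return x[(x >= lo) & (x <= hi)].mean()

# One gross outlier among well-behaved observations
sample = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]
robust = parametric_trimmed_mean(sample)
```

Because the cut-offs come from a model fitted to the full (possibly contaminated) sample, heavy contamination widens them; the paper's breakdown-point and contamination-bias analysis quantifies exactly this effect.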

Relevance: 30.00%

Publisher:

Abstract:

Genetic diversity might increase the performance of social groups by improving task efficiency or disease resistance, but direct experimental tests of these hypotheses are rare. We manipulated the level of genetic diversity in colonies of the Argentine ant Linepithema humile, and then recorded the short-term task efficiency of these experimental colonies. The efficiency of low and high genetic diversity colonies did not differ significantly for any of the following tasks: exploring a new territory, foraging, moving to a new nest site, or removing corpses. The tests were powerful enough to detect large effects, but may have failed to detect small differences. Indeed, observed effect sizes were generally small, except for the time to create a trail during nest emigration. In addition, genetic diversity had no statistically significant impact on the number of workers, males and females produced by the colony, but these tests had low power. Higher genetic diversity also did not result in lower variance in task efficiency and productivity. In contrast to genetic diversity, colony size was positively correlated with the efficiency at performing most tasks and with colony productivity. Altogether, these results suggest that genetic diversity does not strongly improve short-term task efficiency in L. humile, but that worker number is a key factor determining the success of this invasive species.

Relevance: 30.00%

Publisher:

Abstract:

OBJECT: The aim of this study was to evaluate the long-term safety and efficacy of bilateral contemporaneous deep brain stimulation (DBS) in patients who have levodopa-responsive parkinsonism with untreatable motor fluctuations. Bilateral pallidotomy carries a high risk of corticobulbar and cognitive dysfunction. Deep brain stimulation offers new alternatives with major advantages such as reversibility of effects, minimal permanent lesions, and adaptability to individual needs, changes in medication, side effects, and evolution of the disease. METHODS: Patients in whom levodopa-responsive parkinsonism with untreatable severe motor fluctuations had been clinically diagnosed underwent bilateral pallidal magnetic resonance image-guided electrode implantation while receiving a local anesthetic. Pre- and postoperative evaluations at 3-month intervals included Unified Parkinson's Disease Rating Scale (UPDRS) scoring, Hoehn and Yahr staging, 24-hour self-assessments, and neuropsychological examinations. Six patients with a mean age of 55 years (range 42-67 years), a mean duration of disease of 15.5 years (range 12-21 years), a mean "on/off" Hoehn and Yahr stage score of 3/4.2 (range 3-5), and a mean "off" time of 40% (range 20-50%) underwent bilateral contemporaneous pallidal DBS, with a minimum follow-up period of 24 months (range 24-30 months). The mean dose of levodopa in these patients could not be changed significantly after the procedure, and pergolide was added after 12 months in five patients because of recurring fluctuations despite adjustments in stimulation parameters. All but two patients had no fluctuations until 9 months. Two of the patients reported barely perceptible fluctuations at 12 months and two at 15 months; however, two patients remained without fluctuations at 2 years.
The mean improvements in the UPDRS motor score in the off time and the activities of daily living (ADL) score were more than 50%; the mean off time decreased from 40 to 10%, and the mean dyskinesia and complication of treatment scores were reduced to one-third until pergolide was introduced at 12 months. No significant improvement in "on" scores was observed. A slight worsening after 1 year was observed and three patients developed levodopa- and stimulation-resistant gait ignition failure and minimal fluctuations at 1 year. Side effects, which were controlled by modulation of stimulation, included dysarthria, dystonia, and confusion. CONCLUSIONS: Bilateral pallidal DBS is safe and efficient in patients who have levodopa-responsive parkinsonism with severe fluctuations. Major improvements in motor score, ADL score, and off time persisted beyond 2 years after the operation, but signs of decreased efficacy started to be seen after 12 months.

Relevance: 30.00%

Publisher:

Abstract:

Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure.
Moreover, the technique presented is not computationally expensive, so it seems well suited to implementation in an operational environment.
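The geometry underlying steps 1 and 3 can be sketched with the standard effective-Earth-radius beam-height formula. The paper replaces the default refraction assumption (ke = 4/3) with trajectories computed from recent radiosondes, which in this sketch would amount to supplying a different ke (or a full refractivity profile); this is an illustration of the geometry, not the paper's implementation.

```python
import math

EARTH_RADIUS_KM = 6371.0

def beam_height_km(range_km, elev_deg, antenna_km=0.0, ke=4.0 / 3.0):
    """Height of the beam centre above the antenna datum under the
    effective-Earth-radius model: the ray is treated as a straight
    line over an Earth of radius ke * Re."""
    re = ke * EARTH_RADIUS_KM
    theta = math.radians(elev_deg)
    h = math.sqrt(range_km ** 2 + re ** 2
                  + 2.0 * range_km * re * math.sin(theta)) - re
    return h + antenna_km

# A 0.5 deg beam is already roughly 1.5 km high at 100 km range, which
# is why low-level blockage and clutter checks need realistic trajectories
h100 = beam_height_km(100.0, 0.5)
```

Anomalous propagation corresponds to conditions where the effective ke differs sharply from 4/3, bending the beam toward the ground, which is what the radiosonde-based trajectories are meant to capture.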

Relevance: 30.00%

Publisher:

Abstract:

Preface The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation problems of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process - the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function for the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately appears: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators based on bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
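The mechanics of a characteristic-function estimator are easy to demonstrate on a toy case. The sketch below matches an empirical CF to a model CF by least squares over a frequency grid, for a plain Gaussian rather than the thesis's stochastic volatility jump-diffusion CF (whose closed form is the thesis's main analytical result); the function names, frequency grid, and tolerances are mine.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(u, x):
    """Empirical characteristic function of sample x on the grid u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def normal_cf(u, mu, sigma2):
    """Characteristic function of N(mu, sigma2)."""
    return np.exp(1j * u * mu - 0.5 * sigma2 * u ** 2)

def ecf_estimate(x, u=None):
    """Minimise the summed squared CF distance over (mu, log sigma2)."""
    if u is None:
        u = np.linspace(0.1, 2.0, 40)
    emp = empirical_cf(u, x)

    def loss(theta):
        model = normal_cf(u, theta[0], np.exp(theta[1]))
        return np.sum(np.abs(emp - model) ** 2)

    res = minimize(loss, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))

rng = np.random.default_rng(42)
sample = rng.normal(loc=1.0, scale=2.0, size=4000)
mu_hat, s2_hat = ecf_estimate(sample)
```

Replacing the sum over a grid with a weighted integral over frequencies gives the "continuous" version, and replacing the Gaussian CF with the joint unconditional CF of the jump-diffusion extends the same scheme to the models studied here.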

Relevance: 30.00%

Publisher:

Abstract:

In a series of seminal articles in 1974, 1975, and 1977, J. H. Gillespie challenged the notion that the "fittest" individuals are those that produce on average the highest number of offspring. He showed that in small populations, the variance in fecundity can determine fitness as much as mean fecundity. One likely reason why Gillespie's concept of within-generation bet hedging has been largely ignored is the general consensus that natural populations are of large size. As a consequence, essentially no work has investigated the role of fecundity variance in the evolutionarily stable state of life-history strategies. While typically large, natural populations also tend to be subdivided into local demes connected by migration. Here, we integrate Gillespie's measure of selection for within-generation bet hedging into the inclusive fitness and game theoretic measure of selection for structured populations. The resulting framework demonstrates that selection against high variance in offspring number is a potent force in large, but structured populations. More generally, the results highlight that variance in offspring number will directly affect various life-history strategies, especially those involving kin interaction. The selective pressures on three key traits are directly investigated here, namely within-generation bet hedging, helping behaviors, and the evolutionarily stable dispersal rate. The evolutionary dynamics of all three traits are markedly affected by variance in offspring number, although to a different extent and under different demographic conditions.
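Gillespie's central quantity can be stated compactly. As his within-generation bet-hedging results are usually summarized (notation mine), a genotype with mean fecundity mu and fecundity variance sigma^2 in a population of size N has an effective fitness of approximately

```latex
w_{\mathrm{eff}} \;\approx\; \mu - \frac{\sigma^2}{N}
```

so the variance penalty sigma^2/N vanishes as N grows, which is why bet hedging was dismissed for large panmictic populations, but remains substantial when the relevant N is the size of a small local deme — the regime analyzed here.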

Relevance: 30.00%

Publisher:

Abstract:

This paper focused on four alternatives for the analysis of experiments in square lattice with respect to the estimation of variance components and some genetic parameters: 1) intra-block analysis with adjusted treatments and blocks within replications unadjusted; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within replications adjusted; 4) lattice analysis as complete randomized blocks, utilizing the adjusted treatment means obtained from the analysis with recovery of inter-block information, and taking as the mean square of the error the mean effective variance of that same analysis. For the four alternatives, the estimators and estimates of the variance components and heritability coefficients were obtained. The classification of the material was also studied. The present study suggests that, for each experiment and depending on the objectives of the analysis, one should determine which alternative is preferable, mainly in cases where a negative estimate is obtained for the variance component due to the effects of blocks within adjusted replications.

Relevance: 30.00%

Publisher:

Abstract:

Schizophrenia is postulated to be the prototypical dysconnection disorder, in which hallucinations are the core symptom. Due to high heterogeneity in methodology across studies and in the clinical phenotype, it remains unclear whether the structural brain dysconnection is global or focal and whether clinical symptoms result from this dysconnection. In the present work, we attempt to clarify this issue by studying a population considered a homogeneous genetic subtype of schizophrenia, namely the 22q11.2 deletion syndrome (22q11.2DS). Cerebral MRIs were acquired for 46 patients and 48 age- and gender-matched controls (aged 6-26; mean age 15.20 ± 4.53 and 15.28 ± 4.35 years, respectively). Using the Connectome Mapper pipeline (connectomics.org), which combines structural and diffusion MRI, we created a whole-brain network for each individual. Graph theory was used to quantify the global and local properties of the brain network organization for each participant. A global degree loss of 6% was found in patients' networks, along with an increased characteristic path length. After identifying and comparing hubs, a significant loss of degree was found in 58% of the patients' hubs. Based on Allen's brain network model for hallucinations, we explored the association between local efficiency and symptom severity. Negative correlations were found in Broca's area (p < 0.004) and Wernicke's area (p < 0.023), and a positive correlation was found in the dorsolateral prefrontal cortex (DLPFC) (p < 0.014). In line with the dysconnection findings in schizophrenia, our results provide preliminary evidence for a targeted alteration in the organization of brain network hubs in individuals at genetic risk for schizophrenia. The study of specific disorganization in language, speech and thought regulation networks sharing similar network properties may help in understanding their role in the hallucination mechanism.
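The two graph measures compared above are straightforward to compute on a binary network. A self-contained sketch on a toy unweighted graph standing in for the structural connectomes (helper names and the toy edges are mine):

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph (dict: node -> set)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all ordered node pairs
    (assumes a connected graph)."""
    n = len(adj)
    total = sum(d for u in adj
                for v, d in shortest_paths(adj, u).items() if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj, node):
    """Global efficiency of the subgraph induced by node's neighbours:
    mean of 1/d over neighbour pairs (0 when fewer than 2 neighbours)."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    sub = {u: adj[u] & nbrs for u in nbrs}
    pairs = len(nbrs) * (len(nbrs) - 1)
    inv = 0.0
    for u in nbrs:
        dist = shortest_paths(sub, u)
        inv += sum(1.0 / d for v, d in dist.items() if v != u)
    return inv / pairs

# Tiny toy network: two triangles joined through node 2
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

cpl = characteristic_path_length(adj)
eff2 = local_efficiency(adj, 2)
```

A degree loss at hubs lengthens shortest paths (raising the characteristic path length), while local efficiency captures how well a node's neighbourhood stays connected without it — the quantity correlated with symptom severity above.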

Relevance: 30.00%

Publisher:

Abstract:

Plant growth analysis presents difficulties related to the statistical comparison of growth rates, and the analysis of variance of primary data could guide the interpretation of results. The objective of this work was to evaluate the analysis of variance of data from distinct harvests of an experiment, focusing especially on the homogeneity of variances and the choice of an adequate ANOVA model. Data from five experiments covering different crops and growth conditions were used. Of the total number of variables, 19% were originally homoscedastic, 60% became homoscedastic after logarithmic transformation, and 21% remained heteroscedastic after transformation. Data transformation did not affect the F test in one experiment, whereas in the other experiments transformation modified the F test, usually reducing the number of significant effects. Even when transformation did not alter the F test, mean comparisons led to divergent interpretations. The mixed ANOVA model, considering harvest as a random effect, reduced the number of significant effects for every factor whose F test was modified by this model. Examples illustrate that the analysis of variance of primary variables provides a tool for identifying significant differences in growth rates. The analysis of variance imposes restrictions on the experimental design, thereby eliminating some advantages of the functional growth analysis.
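A small sketch of the homoscedasticity workflow described above, on synthetic harvest data whose variance grows with the mean — the usual pattern that the logarithmic transformation removes. The data and the choice of Bartlett's test are mine, not the paper's.

```python
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(7)
# Three "harvests" of plant mass: on the raw scale the variance grows
# with the mean; on the log scale all groups share sigma = 0.4
harvests = [rng.lognormal(mean=m, sigma=0.4, size=30)
            for m in (0.0, 1.0, 2.0)]

stat_raw, p_raw = bartlett(*harvests)
stat_log, p_log = bartlett(*[np.log(h) for h in harvests])
```

A tiny p_raw flags heteroscedasticity on the original scale; after the log transformation the evidence against a common variance largely disappears, which is the situation in which the transformed-scale ANOVA is preferable.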

Relevance: 30.00%

Publisher:

Abstract:

The objective of this work was to evaluate a generalized response function of radiation use efficiency (RUE) in rice to the atmospheric CO2 concentration [f(CO2)]. Experimental data on RUE at different CO2 concentrations were collected from rice trials performed in several locations around the world. RUE data were then normalized so that RUE at the current CO2 concentration was equal to 1. The response function was obtained by fitting normalized RUE versus CO2 concentration to a Morgan-Mercer-Flodin (MMF) function, using Marquardt's method to estimate the model coefficients. Goodness of fit was measured by the standard deviation of the estimated coefficients, the coefficient of determination (R²), and the root mean square error (RMSE). The f(CO2) describes a nonlinear sigmoidal response of RUE in rice as a function of the atmospheric CO2 concentration, which has an ecophysiological basis and therefore yields a robust function that can be easily coupled to rice simulation models, besides covering the range of CO2 emissions for the next generation of climate scenarios for the 21st century.
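A sketch of the fitting step, with CO2 expressed in thousands of ppm for numerical conditioning. The data points below are generated from a known MMF curve with invented coefficients (not the paper's data), and `curve_fit` with `method='lm'` is SciPy's Levenberg-Marquardt, the family of methods to which Marquardt's method belongs.

```python
import numpy as np
from scipy.optimize import curve_fit

def mmf(x, a, b, c, d):
    """Morgan-Mercer-Flodin sigmoid: a is the lower asymptote,
    c the upper asymptote, b and d control position and steepness."""
    return (a * b + c * x ** d) / (b + x ** d)

# Synthetic normalized-RUE "observations" on a known MMF curve
# (x = CO2 in thousands of ppm; coefficients are illustrative)
x = np.array([0.33, 0.38, 0.45, 0.55, 0.66, 0.80, 1.00])
y = mmf(x, 0.5, 0.0277, 1.3, 6.0)

p0 = [0.6, 0.03, 1.2, 5.0]      # starting values for a, b, c, d
params, _ = curve_fit(mmf, x, y, p0=p0, method="lm", maxfev=20000)
rmse = float(np.sqrt(np.mean((y - mmf(x, *params)) ** 2)))
```

With real, noisy RUE data the same call would also return the covariance matrix, from which the standard deviations of the estimated coefficients reported above are obtained.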