64 results for probability and reinforcement proportion


Relevance:

100.00%

Publisher:

Abstract:

Dystrophin, the product of the Duchenne muscular dystrophy (DMD) gene, was studied in muscle from 16 human fetuses at risk for the disease. Eleven high-risk (greater than 95% probability) and five low-risk (less than 25% probability) fetuses were studied with antibodies raised to different regions of the protein. All low-risk fetuses showed a similar pattern to that of normal fetuses of a comparable age: using Western blot analysis, a protein was detected of similar size and abundance to that of normal fetuses (i.e. smaller molecular weight than that of adult muscle); immunocytochemistry showed uniform sarcolemmal staining in fetuses older than 18 weeks gestation and differential staining of myotubes at different stages of development (distinguished by size) in younger fetuses (less than 15 weeks gestation). In contrast, Western blot analysis of high-risk fetuses detected low levels of dystrophin in 4 cases; 7 fetuses had no detectable protein. Immunocytochemistry with some dystrophin antibodies showed weak staining of the sarcolemma and around central nuclei in younger fetuses; in older fetuses there was little sarcolemmal staining with any antibody, other than occasional positive fibres. These results indicate that careful study of dystrophin in fetuses at risk for DMD can be used to establish the clinical phenotype and provide additional information for future family counselling.

Relevance:

100.00%

Publisher:

Abstract:

Classic financial agency theory recommends compensation through stock options rather than shares to counteract excessive risk aversion in agents. In a setting where any kind of risk taking is suboptimal for shareholders, we show that excessive risk taking may occur for one of two reasons: risk preferences or incentives. Even when compensated through restricted company stock, experimental CEOs take large amounts of excessive risk. This contradicts classical financial theory, but can be explained through risk preferences that are not uniform over the probability and outcome spaces, and in particular, risk seeking for small probability gains and large probability losses. Compensation through options further increases risk taking as expected. We show that this effect is driven mainly by the personal asset position of the experimental CEO, thus having deleterious effects on company performance.

Relevance:

40.00%

Publisher:

Abstract:

The purpose of Research Theme 4 (RT4) was to advance understanding of the basic science issues at the heart of the ENSEMBLES project, focusing on the key processes that govern climate variability and change, and that determine the predictability of climate. Particular attention was given to understanding linear and non-linear feedbacks that may lead to climate surprises, and to understanding the factors that govern the probability of extreme events. Improved understanding of these issues will contribute significantly to the quantification and reduction of uncertainty in seasonal to decadal predictions and projections of climate change. RT4 exploited the ENSEMBLES integrations (stream 1) performed in RT2A as well as undertaking its own experimentation to explore key processes within the climate system. It worked at the cutting edge of problems related to climate feedbacks, the interaction between climate variability and climate change (especially how climate change pertains to extreme events) and the predictability of the climate system on a range of time-scales. The statistical methodologies developed for extreme event analysis are new and state-of-the-art. The RT4-coordinated experiments, which were conducted with six different atmospheric GCMs forced by common time-invariant sea surface temperature (SST) and sea-ice fields (removing some sources of inter-model variability), are designed to help understand model uncertainty (rather than scenario or initial-condition uncertainty) in predictions of the response to greenhouse-gas-induced warming. RT4 links strongly with RT5 on the evaluation of the ENSEMBLES prediction system and feeds back its results to RT1 to guide improvements in the Earth system models and, through its research on predictability, to steer the development of methods for initialising the ensembles.

Relevance:

40.00%

Publisher:

Abstract:

Individual identification via DNA profiling is important in molecular ecology, particularly in the case of noninvasive sampling. A key quantity in determining the number of loci required is the probability of identity (PIave), the probability of observing two copies of any profile in the population. Previously this has been calculated assuming no inbreeding or population structure. Here we introduce formulae that account for these factors, whilst also accounting for relatedness structure in the population. These formulae are implemented in API-CALC 1.0, which calculates PIave for either a specified value or a range of values of F_IS and F_ST.
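
The baseline case of this calculation is well known; below is a minimal sketch (not API-CALC itself) of the standard probability-of-identity formula for an outbred, unstructured population, with independent loci combined by multiplication. The F_IS and F_ST corrections introduced in the paper are not reproduced here.

```python
# Sketch: baseline probability of identity with no inbreeding or
# population structure, PI = 2*(sum p_i^2)^2 - sum p_i^4 per locus
# (the classical formula that the paper's formulae extend).

def pi_locus(freqs):
    """Probability that two random outbred individuals share a genotype at one locus."""
    s2 = sum(p ** 2 for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 2 * s2 ** 2 - s4

def pi_multilocus(loci):
    """Combine independent loci by multiplying per-locus PI values."""
    pi = 1.0
    for freqs in loci:
        pi *= pi_locus(freqs)
    return pi
```

For a locus with two equally frequent alleles, `pi_locus` gives 0.375, so roughly ten such loci are needed before the multi-locus PI drops below 1e-4.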

Relevance:

40.00%

Publisher:

Abstract:

Based on the potential benefits of cis-9, trans-11 conjugated linoleic acid (CLA) for human health, there is a need to develop effective strategies for enhancing milk fat CLA concentrations. In this experiment, the effect of forage type and level of concentrate in the diet on milk fatty acid composition was examined in cows given a mixture of fish oil and sunflower oil. Four late-lactation Holstein-British Friesian cows were used in a 4 x 4 Latin-square experiment with a 2 x 2 factorial arrangement of treatments and 21-day experimental periods. Treatments consisted of grass (G) or maize (M) silage supplemented with low (L) or high (H) levels of concentrates (65:35 and 35:65 forage:concentrate ratios, on a dry matter (DM) basis, respectively) offered as a total mixed ration at a restricted level of intake (20 kg DM per day). Lipid supplements (30 g/kg DM) containing fish oil and sunflower oil (2:3, w/w) were offered during the last 14 days of each experimental period. Treatments had no effect on total DM intake, milk yield, milk constituent output or milk fat content, but milk protein concentrations were lower (P<0.05) for G than M diets (mean 43.0 and 47.3 g/kg, respectively). Compared with grass silage, maize silage resulted in milk fat containing higher (P<0.05) amounts of C12:0, C14:0, trans C18:1 and long-chain (>= C20) n-3 polyunsaturated fatty acids (PUFA) and lower (P<0.05) levels of C18:0 and trans C18:2. Increases in the proportion of concentrate in the diet elevated (P<0.05) C18:2 n-6 and long-chain (>= C20) n-3 PUFA content, but reduced (P<0.05) the amount of C18:3 n-3. Concentrations of trans-11 C18:1 in milk were independent of forage type, but tended (P<0.10) to be lower for high-concentrate diets (mean 7.2 and 4.0 g/100 g fatty acids for L and H, respectively).
Concentrations of trans-10 C18:1 were higher (P<0.05) in milk from maize compared with grass silage (mean 10.3 and 4.1 g/100 g fatty acids, respectively) and increased in response to high levels of concentrates in the diet (mean 4.1 and 10.3 g/100 g fatty acids for L and H, respectively). Forage type had no effect (P>0.05) on total milk conjugated linoleic acid (CLA) (2.7 and 2.8 g/100 g fatty acids for M and G, respectively) or cis-9, trans-11 CLA content (2.2 and 2.4 g/100 g fatty acids). Feeding high-concentrate diets tended (P<0.10) to decrease total CLA (3.3 and 2.2 g/100 g fatty acids for L and H, respectively) and cis-9, trans-11 CLA (2.9 and 1.7 g/100 g fatty acids) concentrations and to increase milk trans-9, cis-11 CLA and trans-10, cis-12 CLA content. In conclusion, the basal diet is an important determinant of milk fatty acid composition when a supplement of fish oil and sunflower oil is given.

Relevance:

40.00%

Publisher:

Abstract:

The capability of a feature model of immediate memory (Nairne, 1990; Neath, 2000) to predict and account for a relationship between absolute and proportion scoring of immediate serial recall when memory load is varied (the list-length effect, LLE) is examined. The model correctly predicts the novel finding of an LLE in immediate serial order memory similar to that observed with free recall and previously assumed to be attributable to the long-term memory component of that procedure (Glanzer, 1972). The usefulness of formal models as predictive tools and the continuity between short-term serial order and longer term item memory are considered.

Relevance:

40.00%

Publisher:

Abstract:

Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice.
Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects; and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.
Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38, 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58, 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34, 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than the intervention.
Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.

Relevance:

40.00%

Publisher:

Abstract:

The application of automatic segmentation methods in lesion detection is desirable. However, such methods are restricted by intensity similarities between lesioned and healthy brain tissue. Using multi-spectral magnetic resonance imaging (MRI) modalities may overcome this problem, but it is not always practicable. In this article, a lesion detection approach requiring a single MRI modality is presented, which is an improved method based on a recent publication. This new method assumes that a low similarity should be found in the regions of lesions when the likeness between an intensity-based fuzzy segmentation and location-based tissue probabilities is measured. The use of a normalized similarity measurement enables the current method to fine-tune the threshold for lesion detection, thus maximizing the possibility of reaching high detection accuracy. Importantly, an extra cleaning step is included in the current approach, which removes enlarged ventricles from detected lesions. The performance investigation using simulated lesions demonstrated that not only were the majority of lesions well detected but also normal tissues were identified effectively. Tests on images acquired in stroke patients further confirmed the strength of the method in lesion detection. When compared with the previous version, the current approach showed a higher sensitivity in detecting small lesions and had fewer false positives around the ventricle and the edge of the brain.
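
The article's actual similarity measure and cleaning step are defined in the cited publication; as a toy illustration only, flagging voxels where a fuzzy segmentation and a tissue probability map disagree could be sketched as follows. The function name, the min/max similarity and the threshold are assumptions for illustration, not the authors' method.

```python
# Toy sketch (not the published algorithm): voxels where an
# intensity-based fuzzy membership and a location-based tissue
# probability disagree get a low normalized similarity and are
# flagged as lesion candidates.

def candidate_lesions(fuzzy_seg, tissue_prob, threshold=0.5):
    """Flag voxels whose normalized similarity falls below a tunable threshold."""
    flags = []
    for f, t in zip(fuzzy_seg, tissue_prob):
        hi = max(f, t)
        sim = min(f, t) / hi if hi > 0 else 1.0  # normalized to [0, 1]
        flags.append(sim < threshold)
    return flags
```

The tunable `threshold` plays the role the abstract describes: a normalized measure lets the cut-off be fine-tuned rather than fixed in intensity units.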

Relevance:

40.00%

Publisher:

Abstract:

Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently from scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample-average) score over an archive of forecast-observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
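
For the Brier score, the empirical decomposition the abstract refers to is the standard Murphy decomposition; a minimal sketch follows. The binning scheme (rounding forecasts to one decimal) is an assumption for illustration, and, as the abstract warns, these sample estimates of reliability and resolution are biased, so treat them as illustrative.

```python
# Sketch: empirical Murphy decomposition of the Brier score,
# Brier ~= reliability - resolution + uncertainty, with outcomes
# grouped by issued forecast probability.
from collections import defaultdict

def brier_decomposition(probs, outcomes):
    """Return (reliability, resolution, uncertainty) sample estimates."""
    n = len(probs)
    bins = defaultdict(list)
    for p, o in zip(probs, outcomes):
        bins[round(p, 1)].append(o)  # group outcomes by forecast bin
    base = sum(outcomes) / n  # climatological base rate
    rel = sum(len(os) * (p - sum(os) / len(os)) ** 2
              for p, os in bins.items()) / n
    res = sum(len(os) * (sum(os) / len(os) - base) ** 2
              for os in bins.values()) / n
    unc = base * (1 - base)
    return rel, res, unc
```

When every forecast coincides with its bin representative, the identity rel - res + unc recovers the mean Brier score exactly; with coarser binning it only approximates it, which is one face of the estimation problem the paper analyses.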

Relevance:

40.00%

Publisher:

Abstract:

This paper examines the auction process for residential properties that, whilst unsuccessful at auction, sold subsequently. The empirical analysis considers both the probability of sale and the premium of the subsequent sale price over the guide price, reserve and opening bid. The findings highlight that the final achieved sale price is influenced by key price variables revealed both prior to and during the auction itself. Factors such as auction participation, the number of individual bidders and the number of bids are significant in a number of the alternative specifications.

Relevance:

40.00%

Publisher:

Abstract:

We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
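
The two scoring rules named here are standard; a minimal sketch of their sample averages follows (the encompassing test statistics themselves are derived in the paper and are not reproduced). Both scores are negatively oriented (lower is better), and probabilities must lie strictly in (0, 1) for the logarithmic score.

```python
# Sketch: sample averages of the quadratic (Brier) and logarithmic
# scoring rules over binary forecast-outcome pairs.
import math

def quadratic_score(probs, outcomes):
    """Mean quadratic (Brier) score."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_score(probs, outcomes):
    """Mean logarithmic score (average negative log likelihood)."""
    return -sum(o * math.log(p) + (1 - o) * math.log(1 - p)
                for p, o in zip(probs, outcomes)) / len(probs)
```

A well-calibrated, sharp forecast beats an uninformative one under both rules, which is what makes them usable as the loss functions behind encompassing tests.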

Relevance:

40.00%

Publisher:

Abstract:

We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly-reported point predictions in surveys of professional forecasters.
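
One simple way a point prediction can be read off a reported histogram is as the mean of the implied distribution; a minimal sketch, where the (lower, upper, probability) bin format and the placement of mass at bin midpoints are assumptions for illustration rather than the survey's convention:

```python
# Sketch: implied point prediction from a histogram forecast, taking
# the mean with probability mass placed at bin midpoints.

def histogram_mean(bins):
    """`bins` is a list of (lower, upper, probability) triples summing to 1."""
    return sum((lo + hi) / 2.0 * p for lo, hi, p in bins)
```

Whether such derived values line up with directly elicited point predictions is precisely the coherence question the paper examines.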