947 results for PROBABILISTIC FORECASTS


Relevance: 10.00%

Abstract:

The Institute of Public Health in Ireland (IPH) produces population prevalence estimates and forecasts for a number of chronic conditions among adults. IPH has now applied the methodology to examine health conditions and injuries among young children across the island of Ireland. This short report is a supplement to a previous IPH report that examines health conditions among three-year-olds in the Republic of Ireland. It provides estimates of the prevalence of injuries that required hospital admission or treatment among three-year-olds in the Republic of Ireland in 2011. The analysis identifies risk factors associated with child injuries and provides estimates of the prevalence of these conditions for each of the 34 administrative cities and counties.

Relevance: 10.00%

Abstract:

The Institute of Public Health in Ireland (IPH) produces population prevalence estimates and forecasts for a number of chronic conditions among adults. IPH has now applied the methodology to examine health conditions among young children across the island of Ireland. This report uses information collected from parents in the Millennium Cohort Study (MCS) along with population data collected in the 2011 Northern Ireland Census to estimate the prevalence of any longstanding condition, asthma, eczema, sight problems and hearing problems among seven-year-olds in Northern Ireland in 2011. The analysis identifies risk factors associated with each condition and provides estimates of the prevalence of these conditions for each of the 11 Local Government Districts. A report on health conditions among three-year-olds in the Republic of Ireland has previously been published by the IPH. See the Chronic Conditions Hub for more details.

Relevance: 10.00%

Abstract:

Boundaries for delta, representing a "quantitatively significant" or "substantively impressive" distinction, have not been established, analogous to the boundary of alpha, usually set at 0.05, for the stochastic or probabilistic component of "statistical significance". To determine what boundaries are being used for the "quantitative" decisions, we reviewed pertinent articles in three general medical journals. For each contrast of two means, contrast of two rates, or correlation coefficient, we noted the investigators' decisions about stochastic significance, stated in P values or confidence intervals, and about quantitative significance, indicated by interpretive comments. The boundaries between impressive and unimpressive distinctions were best formed by a ratio of greater than or equal to 1.2 for the smaller to the larger mean in 546 comparisons of two means; by a standardized increment of greater than or equal to 0.28 and odds ratio of greater than or equal to 2.2 in 392 comparisons of two rates; and by an r value of greater than or equal to 0.32 in 154 correlation coefficients. Additional boundaries were also identified for "substantially" and "highly" significant quantitative distinctions. Although the proposed boundaries should be kept flexible, indexes and boundaries for decisions about "quantitative significance" are particularly useful when a value of delta must be chosen for calculating sample size before the research is done, and when the "statistical significance" of completed research is appraised for its quantitative as well as stochastic components.
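The boundary values above can be reproduced with simple effect-size arithmetic. The Python sketch below computes the three indexes named in the abstract and checks them against the reported boundaries; the exact formula used for the "standardized increment" (difference of rates divided by the standard deviation implied by the average rate) and the orientation of the mean ratio are illustrative assumptions, not taken from the paper.

```python
import math

def mean_ratio(m1, m2):
    """Ratio of the larger to the smaller mean (assumed orientation)."""
    lo, hi = sorted([m1, m2])
    return hi / lo

def standardized_increment(p1, p2):
    """|p1 - p2| / sqrt(P * (1 - P)), with P the average of the two rates (assumed formula)."""
    p = (p1 + p2) / 2
    return abs(p1 - p2) / math.sqrt(p * (1 - p))

def odds_ratio(p1, p2):
    """Odds ratio for two rates."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Hypothetical example: event rates of 30% and 15% in two groups.
p1, p2 = 0.30, 0.15
print(standardized_increment(p1, p2))   # ~0.36, above the 0.28 boundary
print(odds_ratio(p1, p2))               # ~2.43, above the 2.2 boundary
print(mean_ratio(12.0, 9.5))            # ~1.26, above the 1.2 boundary
```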

Relevance: 10.00%

Abstract:

The Government’s Action Plan for Jobs contained the following commitment regarding a review of apprenticeship: “Initiate a review of the apprenticeship training model, including costs, duration and demand with a view to providing an updated model of training that delivers the necessary skilled workforce to service the needs of a rapidly changing economy and ensures appropriate balance between supply and demand.” The first stage of the review process involves the preparation of this background issues paper which, inter alia, provides a factual description of the current system of apprenticeship (including the governance arrangements, trends and forecasts in relation to recruitment, and identified strengths and weaknesses of the model) and proposes a range of possible options for change.

Relevance: 10.00%

Abstract:

With the aim of determining the prevalence of asymptomatic Plasmodium spp. infection by thick smear and PCR and its association with demographic and epidemiological characteristics in the village of Nuevo Tay, Tierralta, Córdoba, Colombia, a cross-sectional population study was carried out, using random probabilistic sampling. Venous blood samples were taken from 212 people on day 0 for thick smear and PCR. Clinical follow-up and thick smears were carried out on days 14 and 28. The prevalence of Plasmodium spp. infection was 17.9% (38/212; 95% CI: 12.5-23.3%) and the prevalence of asymptomatic Plasmodium spp. infection was 14.6% (31/212; 95% CI: 9.6-19.6%). Plasmodium vivax was found more frequently (20/31; 64.5%) than Plasmodium falciparum (9/31; 29%) and mixed infections (2/31; 6.5%). A significantly higher prevalence of asymptomatic infection was found in men (19.30%) than in women (9.18%) (prevalence ratio: 2.10; 95% CI: 1.01-4.34; p = 0.02). People who developed symptoms had a significantly higher parasitemia on day 0 than those who remained asymptomatic, of 1,881.5 ± 3,759 versus 79 ± 106.9 (p = 0.008). PCR detected 50% more infections than the thick smears. The presence of asymptomatic Plasmodium spp. infection highlights the importance of carrying out active searches amongst asymptomatic populations residing in endemic areas.
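As a small illustration of the prevalence arithmetic reported above, the sketch below recomputes the point estimates and approximate 95% confidence intervals from the raw counts; a normal-approximation (Wald) interval is assumed here, since the authors' exact interval method is not stated in the abstract.

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald-type 95% confidence interval (assumed method)."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(38, 212)   # any Plasmodium spp. infection
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")   # ~17.9% (12.8%-23.1%)

p, lo, hi = prevalence_ci(31, 212)   # asymptomatic infection
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")   # ~14.6% (9.9%-19.4%)
```

The slight differences from the intervals quoted above suggest the authors used a different interval method.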

Relevance: 10.00%

Abstract:

Early detection of breast cancer (BC) with mammography may cause overdiagnosis and overtreatment, detecting tumors which would remain undiagnosed during a lifetime. The aims of this study were: first, to model invasive BC incidence trends in Catalonia (Spain) taking into account reproductive and screening data; and second, to quantify the extent of BC overdiagnosis. We modeled the incidence of invasive BC using a Poisson regression model. Explanatory variables were: age at diagnosis and cohort characteristics (completed fertility rate, percentage of women that use mammography at age 50, and year of birth). This model was also used to estimate the background incidence in the absence of screening. We used a probabilistic model to estimate the expected BC incidence if women in the population used mammography as reported in health surveys. The difference between the observed and expected cumulative incidences provided an estimate of overdiagnosis. Incidence of invasive BC increased, especially in cohorts born from 1940 to 1955. The biggest increase was observed in these cohorts between the ages of 50 and 65 years, where the final BC incidence rates more than doubled the initial ones. Dissemination of mammography was significantly associated with BC incidence and overdiagnosis. Our estimates of overdiagnosis ranged from 0.4% to 46.6%, for women born around 1935 and 1950, respectively. Our results support the existence of overdiagnosis in Catalonia attributed to mammography usage, and the limited malignant potential of some tumors may play an important role. Women should be better informed about this risk. Research should be oriented towards personalized screening and risk assessment tools.
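The modelling step described above can be sketched in code. The example below fits a Poisson regression of case counts on age and cohort covariates with a log person-years offset; the data frame, variable names and values are synthetic illustrations, not the Catalan registry data, and the model is a simplified stand-in for the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic age-by-cohort strata (all names and values are illustrative assumptions).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": np.tile(np.arange(40, 80, 5), 4),
    "birth_year": np.repeat([1935, 1940, 1950, 1955], 8),
    "completed_fertility": np.repeat([2.7, 2.3, 2.0, 1.7], 8),
    "mammo_use_50": np.repeat([0.10, 0.30, 0.60, 0.75], 8),
    "person_years": 50_000,
})
df["cases"] = rng.poisson(lam=0.002 * df["person_years"] * (df["age"] / 60))

# Poisson regression of counts on age and cohort characteristics,
# with person-years entering as a log offset (exposure).
model = smf.glm(
    "cases ~ age + completed_fertility + mammo_use_50 + birth_year",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print(model.summary())
```

Predicting from such a fitted model with the mammography-use covariate set to zero would give the kind of background-incidence estimate the abstract refers to.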

Relevance: 10.00%

Abstract:

Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known, single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
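A minimal sketch of the compositional idea, assuming a centred log-ratio (clr) transform, a toy death-density series and a crude drift forecast; the full CoDA machinery and the compositional Lee-Carter model discussed in the paper are not reproduced here.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions (rows sum to 1)."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def clr_inv(y):
    """Inverse clr: exponentiate and re-close so each row sums to 1."""
    ex = np.exp(y)
    return ex / ex.sum(axis=-1, keepdims=True)

# Toy death densities over 5 age groups for 3 successive years (illustrative values).
d = np.array([
    [0.02, 0.08, 0.20, 0.40, 0.30],
    [0.02, 0.07, 0.19, 0.41, 0.31],
    [0.01, 0.07, 0.18, 0.42, 0.32],
])
y = clr(d)
drift = (y[-1] - y[0]) / 2            # crude average annual drift in clr space
forecast = clr_inv(y[-1] + drift)     # one-step-ahead forecast, back on the simplex
print(forecast, forecast.sum())       # the unit sum constraint is honoured
```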

Relevance: 10.00%

Abstract:

Background The 'database search problem', that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that provides analysts and discussants with additional modes of interaction, concise representation, and coherent communication.
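The core probabilistic argument can be illustrated numerically without a Bayesian network. The sketch below computes, under strong simplifying assumptions (a uniform prior of 1/N over potential sources, independent matching, a non-source matching with probability gamma, and the other n-1 database members excluded by the search), the posterior probability that the single matching database member is the source; this restates the standard formulaic result, not the authors' network models, and all numbers are hypothetical.

```python
def posterior_source_given_search(N, n, gamma):
    """P(the single matching database member is the source | search outcome).

    Likelihood terms (a common factor (1 - gamma)**(n - 1) cancels):
      matching member is the source:            1
      a specific person outside the database:   gamma
      another (excluded) database member:       0
    """
    return 1.0 / (1.0 + (N - n) * gamma)

# Hypothetical numbers: 1,000,000 potential sources, a database of 10,000 profiles,
# and analytical characteristics with a frequency of 1 in 10 million.
print(posterior_source_given_search(N=1_000_000, n=10_000, gamma=1e-7))  # ~0.91
```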

Relevance: 10.00%

Abstract:

We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently, training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual words representation, in all cases, using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
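The pipeline described above can be sketched as follows. scikit-learn has no pLSA implementation, so non-negative matrix factorization with a Kullback-Leibler loss is used here as a closely related stand-in, and the visual-word counts and scene labels are synthetic; this illustrates the topic-then-classifier structure, not the authors' code or data.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, vocab_size, n_topics = 200, 300, 10
X_counts = rng.poisson(lam=1.0, size=(n_images, vocab_size))  # bag-of-visual-words counts
y = rng.integers(0, 4, size=n_images)                          # 4 synthetic scene classes

# "Topic" discovery: KL-loss NMF as a pLSA-like decomposition of the count matrix.
nmf = NMF(n_components=n_topics, beta_loss="kullback-leibler",
          solver="mu", init="nndsvda", max_iter=500)
Z = nmf.fit_transform(X_counts)
Z = Z / Z.sum(axis=1, keepdims=True)      # per-image topic distribution vectors

# Discriminative classifier trained on the topic vectors.
X_tr, X_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```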

Relevance: 10.00%

Abstract:

We investigated procedural learning in 18 children with basal ganglia (BG) lesions or dysfunctions of various aetiologies, using a visuo-motor learning test, the Serial Reaction Time (SRT) task, and a cognitive learning test, the Probabilistic Classification Learning (PCL) task. We compared patients with early onset (<1 year old, n=9), later onset (>6 years old, n=7) or progressive disorder (idiopathic dystonia, n=2). All patients showed deficits in both visuo-motor and cognitive domains, except those with idiopathic dystonia, who displayed preserved classification learning skills. Impairments seem to be independent of the age of onset of the pathology. As far as we know, this study is the first to investigate motor and cognitive procedural learning in children with BG damage. Procedural impairments were documented whatever the aetiology of the BG damage/dysfunction and the time of pathology onset, thus supporting the claim of very early skill learning development and a lack of plasticity in the case of damage.

Relevance: 10.00%

Abstract:

A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analyzing compositional time series consists in the application of an initial transform to break the positive and unit sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
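A minimal sketch of this general approach, assuming the additive log-ratio (alr) transform and, for simplicity, a separate univariate ARIMA per transformed coordinate rather than the multivariate ARIMA models discussed in the paper; the three-part compositional series is synthetic.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def alr(x):
    """Additive log-ratio transform, using the last part as the reference."""
    return np.log(x[:, :-1] / x[:, -1:])

def alr_inv(y):
    """Inverse alr: append the reference coordinate, exponentiate, close to unit sum."""
    z = np.exp(np.column_stack([y, np.zeros(len(y))]))
    return z / z.sum(axis=1, keepdims=True)

# Synthetic 3-part compositional time series (30 time points, rows sum to 1).
rng = np.random.default_rng(1)
raw = np.abs(rng.normal([5.0, 3.0, 2.0], 0.3, size=(30, 3)))
comp = raw / raw.sum(axis=1, keepdims=True)

y = alr(comp)                             # break the positive and unit sum constraints
forecasts = np.column_stack([
    ARIMA(y[:, j], order=(1, 0, 0)).fit().forecast(steps=5)
    for j in range(y.shape[1])
])
print(alr_inv(forecasts))                 # forecasts back-transformed onto the simplex
```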

Relevance: 10.00%

Abstract:

PURPOSE: All kinds of blood manipulations aim to increase the total hemoglobin mass (tHb-mass). To establish tHb-mass as an effective screening parameter for detecting blood doping, the knowledge of its normal variation over time is necessary. The aim of the present study, therefore, was to determine the intraindividual variance of tHb-mass in elite athletes during a training year emphasizing off, training, and race seasons at sea level. METHODS: tHb-mass and hemoglobin concentration ([Hb]) were determined in 24 endurance athletes five times during a year and were compared with a control group (n = 6). An analysis of covariance was used to test the effects of training phases, age, gender, competition level, body mass, and training volume. Three error models, based on 1) a total percentage error of measurement, 2) the combination of a typical percentage error (TE) of analytical origin with an absolute SD of biological origin, and 3) between-subject and within-subject variance components as obtained by an analysis of variance, were tested. RESULTS: In addition to the expected influence of performance status, the main results were that the effects of training volume (P = 0.20) and training phases (P = 0.81) on tHb-mass were not significant. We found that within-subject variations mainly have an analytical origin (TE approximately 1.4%) and a very small SD (7.5 g) of biological origin. CONCLUSION: tHb-mass shows very low individual oscillations during a training year (<6%), and these oscillations are below the expected changes in tHb-mass due to erythropoietin (EPO) application or blood infusion (approximately 10%). The high stability of tHb-mass over a period of 1 year suggests that it should be included in an athlete's biological passport and analyzed by recently developed probabilistic inference techniques that define subject-based reference ranges.
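As an illustration of the second error model mentioned above, the sketch below combines a typical analytical percentage error with an absolute biological SD into an approximate individual reference range for tHb-mass; the additive-variance combination, the 1.96 coverage factor and the 900 g baseline are assumptions for illustration, not the authors' passport algorithm.

```python
import math

def reference_range(baseline_g, te_pct=1.4, bio_sd_g=7.5, z=1.96):
    """Approximate individual reference range for tHb-mass (assumed error model)."""
    analytical_sd = baseline_g * te_pct / 100.0            # typical error of analytical origin
    total_sd = math.sqrt(analytical_sd**2 + bio_sd_g**2)   # combine analytical and biological variances
    return baseline_g - z * total_sd, baseline_g + z * total_sd

lo, hi = reference_range(900.0)   # hypothetical 900 g baseline
print(f"{lo:.0f}-{hi:.0f} g")     # roughly 871-929 g, i.e. about +/-3%, well below the ~10% doping effect
```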

Relevance: 10.00%

Abstract:

BACKGROUND Drugs for inhalation are the cornerstone of therapy in obstructive lung disease. We have observed that up to 75% of patients do not perform a correct inhalation technique. The inability of patients to correctly use their inhaler device may be a direct consequence of insufficient or poor inhaler technique instruction. The objective of this study is to test the efficacy of two educational interventions to improve the inhalation techniques in patients with Chronic Obstructive Pulmonary Disease (COPD). METHODS This study uses both a multicenter patients' preference trial and a comprehensive cohort design with 495 COPD-diagnosed patients selected by a non-probabilistic method of sampling from seven Primary Care Centers. The participants will be divided into two groups and five arms. The two groups are: 1) the patients' preference group with two arms and 2) the randomized group with three arms. In the preference group, the two arms correspond to the two educational interventions (Intervention A and Intervention B) designed for this study. In the randomized group the three arms comprise: intervention A, intervention B and a control arm. Intervention A is written information (a leaflet describing the correct inhalation techniques). Intervention B is written information about inhalation techniques plus training by an instructor. Every patient in each group will be visited six times during the year of the study at the health care center. DISCUSSION Our hypothesis is that the application of two educational interventions in patients with COPD who are treated with inhaled therapy will increase the number of patients who perform a correct inhalation technique by at least 25%. We will evaluate the effectiveness of these interventions on patient inhalation technique improvement, considering that it will be adequate and feasible within the context of clinical practice.

Relevance: 10.00%

Abstract:

Study on the likelihood and prevalence of COPD among patients in a family medicine practice over a year, covering 2012 and the first two months of 2013. In a health centre practice of about 1500 patients, the probabilistic evolution was studied every six months according to the theory of Laplace. The study analyses COPD, its symptoms, aetiology, clinical consultation and treatment in Family Medicine.

Relevance: 10.00%

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single and fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic using Bayes' theorem and provides a probability distribution for a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued since quantitative information such as peak area and height data are not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
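The contrast between the two strategies can be sketched for a single locus under strong simplifying assumptions (independent allele draws from stated frequencies, a uniform prior on N, qualitative data only). The code below illustrates the deterministic minimum-N rule against a Monte Carlo posterior over N; it is not the authors' Bayesian network models or scoring-rule analysis, and the allele frequencies are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
allele_freqs = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # hypothetical locus
observed_distinct = 4                                     # distinct alleles seen in the mixed stain

def minimum_contributors(k_distinct):
    """Deterministic rule: each contributor supplies at most two alleles per locus."""
    return math.ceil(k_distinct / 2)

def posterior_n(k_distinct, max_n=6, sims=50_000):
    """Monte Carlo estimate of P(N | k distinct alleles) under a uniform prior on 1..max_n."""
    likelihood = np.zeros(max_n)
    for n in range(1, max_n + 1):
        draws = rng.choice(len(allele_freqs), size=(sims, 2 * n), p=allele_freqs)
        distinct = np.array([len(set(row)) for row in draws])
        likelihood[n - 1] = np.mean(distinct == k_distinct)
    return likelihood / likelihood.sum()

print("minimum N:", minimum_contributors(observed_distinct))           # 2
print("posterior over N = 1..6:", np.round(posterior_n(observed_distinct), 3))
```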