989 results for Uncertainty Modelling
Abstract:
We re-mapped the soils of the Murray-Darling Basin (MDB) in 1995-1998 with a minimum of new fieldwork, making the most of existing data. We collated existing digital soil maps and used inductive spatial modelling to predict soil types from those maps combined with environmental predictor variables. Lithology, Landsat Multi Spectral Scanner (Landsat MSS), the 9-s digital elevation model (DEM) of Australia and derived terrain attributes, all gridded to 250-m pixels, were the predictor variables. Because the basin-wide datasets were very large, data mining software was used for modelling. Rule induction by data mining was also used to define the spatial domain of extrapolation for the extension of soil-landscape models from existing soil maps. Procedures to estimate the uncertainty associated with the predictions and the quality of information for the new soil-landforms map of the MDB are described. (C) 2002 Elsevier Science B.V. All rights reserved.
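A minimal sketch of the inductive modelling step described above, assuming hypothetical gridded covariates (lithology class, Landsat MSS bands, DEM-derived elevation and slope) and soil-type labels sampled from the existing maps; the original study used dedicated rule-induction data mining software, for which a decision-tree classifier stands in here.

```python
# Sketch: predict soil type from gridded environmental covariates by rule
# induction (a decision tree stands in for the data mining software used in
# the original study). All data below are simulated placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical training pixels sampled from existing soil maps: one row per
# 250-m pixel; columns are lithology class, two Landsat MSS bands, and
# elevation and slope derived from the 9-s DEM.
X_train = np.column_stack([
    rng.integers(0, 5, 1000),      # lithology class code
    rng.uniform(0, 255, 1000),     # Landsat MSS band (reflectance counts)
    rng.uniform(0, 255, 1000),     # Landsat MSS band (reflectance counts)
    rng.uniform(50, 600, 1000),    # elevation (m)
    rng.uniform(0, 15, 1000),      # slope (degrees)
])
y_train = rng.integers(0, 8, 1000)  # soil-type labels from the existing maps

# Induce classification rules, then extrapolate to covariates of unmapped pixels.
model = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)
X_unmapped = X_train[:10]            # stand-in for unmapped pixels
print(model.predict(X_unmapped))
```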
Abstract:
Producer decision-making under uncertainty is characterized using indirect objective functions. The characterization is for the class of producers with continuous and nondecreasing preferences over stochastic incomes who face both price and production uncertainty. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Functional magnetic resonance imaging (FMRI) analysis methods can be broadly divided into hypothesis-driven and data-driven approaches. The former are used in the majority of FMRI studies, where a specific haemodynamic response is modelled using knowledge of event timing during the scan and is tested against the data with a t test or a correlation analysis. These approaches often lack the flexibility to account for variability in the haemodynamic response across subjects and brain regions, which is of specific interest in high-temporal-resolution event-related studies. Current data-driven approaches attempt to identify components of interest in the data, but do not use any physiological information to discriminate these components. Here we present a hypothesis-driven approach that extends Friman's maximum correlation modelling method (NeuroImage 16, 454-464, 2002) and is specifically focused on discriminating the temporal characteristics of event-related haemodynamic activity. Test analyses, on both simulated and real event-related FMRI data, will be presented.
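A minimal sketch of the hypothesis-driven step described above, assuming a hypothetical event-timing vector, a canonical double-gamma haemodynamic response and a simulated voxel time series; the maximum correlation modelling extension itself is not reproduced.

```python
# Sketch: model the expected haemodynamic response from event timing and
# test it against a voxel time series with a correlation analysis.
# Event timing, HRF parameters and the voxel signal are all simulated.
import numpy as np
from scipy.stats import gamma, pearsonr

TR, n_scans = 2.0, 200
t = np.arange(0, 30, TR)

# Canonical double-gamma HRF (standard parameter choices, assumed here).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Hypothetical event timing: one stimulus every 20 scans.
stimulus = np.zeros(n_scans)
stimulus[::20] = 1.0
regressor = np.convolve(stimulus, hrf)[:n_scans]

# Simulated voxel time series: scaled regressor plus noise.
rng = np.random.default_rng(1)
voxel = 2.0 * regressor + rng.normal(0.0, 0.5, n_scans)

r, p = pearsonr(regressor, voxel)
print(f"correlation r = {r:.3f}, p = {p:.2e}")
```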
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and representing fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
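The efficiency condition imposed on the recursive version can be written as a simple no-arbitrage rule: under a cumulative emissions constraint, abatement is allocated over time so that the carbon price grows at the interest rate. A sketch in generic notation (symbols are not taken from the EPPA documentation):

```latex
% Carbon price path implied by inter-temporal efficiency: the CO2 price p_t
% rises at the interest rate r from its initial level p_0.
p_{t} = p_{0}\,(1+r)^{t},
\qquad\text{equivalently}\qquad
\frac{p_{t+1} - p_{t}}{p_{t}} = r .
```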
Abstract:
Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess, from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of League Tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. This includes systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice, and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved, with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions on 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is in the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
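A minimal sketch of the Monte Carlo step described above, assuming hypothetical distributions for incremental cost and DALYs saved; the ACE-MH study derives these inputs from trial, survey and costing data rather than the toy parameters used here.

```python
# Sketch: 95% uncertainty interval for a cost-effectiveness ratio
# (A$ per DALY saved) by Monte Carlo simulation. The input distributions
# are illustrative placeholders, not ACE-MH estimates.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

incremental_cost = rng.normal(9_000_000, 1_500_000, n_draws)  # A$ per year
dalys_saved = rng.lognormal(np.log(1_000), 0.3, n_draws)      # DALYs per year

cost_per_daly = incremental_cost / dalys_saved
median = np.median(cost_per_daly)
lo, hi = np.percentile(cost_per_daly, [2.5, 97.5])
print(f"A${median:,.0f}/DALY saved (95% UI A${lo:,.0f} to A${hi:,.0f})")
```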
Abstract:
Objective: To assess, from a health sector perspective, the incremental cost-effectiveness of cognitive behavioural therapy (CBT) and selective serotonin reuptake inhibitors (SSRIs) for the treatment of major depressive disorder (MDD) in children and adolescents, compared to 'current practice'. Method: The health benefit is measured as a reduction in disability-adjusted life years (DALYs), based on effect size calculations from meta-analysis of randomised controlled trials. An assessment on second stage filter criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') is also undertaken to incorporate additional factors that impact on resource allocation decisions. Costs and benefits are tracked for the duration of a new episode of MDD arising in eligible children (age 6-17 years) in the Australian population in the year 2000. Simulation-modelling techniques are used to present a 95% uncertainty interval (UI) around the cost-effectiveness ratios. Results: Compared to current practice, CBT by public psychologists is the most cost-effective intervention for MDD in children and adolescents at A$9000 per DALY saved (95% UI A$3900 to A$24 000). SSRIs and CBT by other providers are less cost-effective but likely to be less than A$50 000 per DALY saved (> 80% chance). CBT is more effective than SSRIs in children and adolescents, resulting in a greater total health benefit (DALYs saved) than could be achieved with SSRIs. Issues that require attention for the CBT intervention include equity concerns, ensuring an adequate workforce, funding arrangements and acceptability to various stakeholders. Conclusions: Cognitive behavioural therapy provided by a public psychologist is the most effective and cost-effective option for the first-line treatment of MDD in children and adolescents. However, this option is not currently accessible to all patients and will require change in policy to allow more widespread uptake. It will also require 'start-up' costs and attention to ensuring an adequate workforce.
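The incremental ratio reported above has the usual form, comparing each intervention against current practice (generic notation, not taken from the paper):

```latex
% Incremental cost-effectiveness ratio: extra cost of the intervention
% relative to current practice per extra DALY averted.
\mathrm{ICER}
  = \frac{\Delta C}{\Delta E}
  = \frac{C_{\text{intervention}} - C_{\text{current practice}}}
         {\mathrm{DALY}_{\text{current practice}} - \mathrm{DALY}_{\text{intervention}}} .
```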
Abstract:
Objective: To analyze from a health sector perspective the cost-effectiveness of dexamphetamine (DEX) and methylphenidate (MPH) interventions to treat childhood attention deficit hyperactivity disorder (ADHD), compared to current practice. Method: Children eligible for the interventions are those aged between 4 and 17 years in 2000, who had ADHD and were seeking care for emotional or behavioural problems, but were not receiving stimulant medication. To determine health benefit, a meta-analysis of randomized controlled trials was performed for DEX and MPH, and the effect sizes were translated into utility values. An assessment on second stage filter criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') is also undertaken to incorporate additional factors that impact on resource allocation decisions. Simulation modelling techniques are used to present a 95% uncertainty interval (UI) around the incremental cost-effectiveness ratio (ICER), which is calculated in cost (in A$) per DALY averted. Results: The ICER for DEX is A$4100/DALY saved (95% UI: negative to A$14 000) and for MPH is A$15 000/DALY saved (95% UI: A$9100-22 000). DEX is more costly than MPH for the government, but much less costly for the patient. Conclusions: MPH and DEX are cost-effective interventions for childhood ADHD. DEX is more cost-effective than MPH, although if MPH were listed at a lower price on the Pharmaceutical Benefits Scheme it would become more cost-effective. Increased uptake of stimulants for ADHD would require policy change. However, the medication of children and wider availability of stimulants may concern parents and the community.
Abstract:
Objective: To assess from a health sector perspective the incremental cost-effectiveness of interventions for generalized anxiety disorder (cognitive behavioural therapy [CBT] and serotonin and noradrenaline reuptake inhibitors [SNRIs]) and panic disorder (CBT, selective serotonin reuptake inhibitors [SSRIs] and tricyclic antidepressants [TCAs]). Method: The health benefit is measured as a reduction in disability-adjusted life years (DALYs), based on effect size calculations from meta-analyses of randomised controlled trials. An assessment on second stage filters ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') is also undertaken to incorporate additional factors that impact on resource allocation decisions. Costs and benefits are calculated for a period of one year for the eligible population (prevalent cases of generalized anxiety disorder/panic disorder identified in the National Survey of Mental Health and Wellbeing, extrapolated to the Australian population in the year 2000 for those aged 18 years and older). Simulation modelling techniques are used to present 95% uncertainty intervals (UI) around the incremental cost-effectiveness ratios (ICERs). Results: Compared to current practice, CBT by a psychologist on a public salary is the most cost-effective intervention for both generalized anxiety disorder (A$6900/DALY saved; 95% UI A$4000 to A$12 000) and panic disorder (A$6800/DALY saved; 95% UI A$2900 to A$15 000). Cognitive behavioural therapy results in a greater total health benefit than the drug interventions for both anxiety disorders, although equity and feasibility concerns for CBT interventions are also greater. Conclusions: Cognitive behavioural therapy is the most effective and cost-effective intervention for generalized anxiety disorder and panic disorder. However, its implementation would require policy change to enable more widespread access to a sufficient number of trained therapists for the treatment of anxiety disorders.
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
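A sketch of the mixture structure described above in generic notation (the paper's exact parameterisation, including the random hospital effects, is not reproduced): the population survival function is a weighted combination of an 'acute' and a 'chronic' Weibull component.

```latex
% Two-component Weibull survival mixture: pi is the mixing proportion and
% component j has shape gamma_j and scale lambda_j; the EM algorithm
% estimates pi and the component parameters.
S(t) = \pi\, S_{1}(t) + (1 - \pi)\, S_{2}(t),
\qquad
S_{j}(t) = \exp\!\left\{ -\lambda_{j}\, t^{\gamma_{j}} \right\}, \quad j = 1, 2 .
```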
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as an experiment analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
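A minimal sketch of one of the simpler method classes listed above, a quantitative (position weight) matrix: a candidate peptide is scored by summing position-specific residue weights, and peptides above a chosen threshold are predicted binders. The matrix values, peptide and threshold below are invented for illustration.

```python
# Sketch: score a 9-mer peptide against a quantitative matrix for
# MHC-binding prediction. Weights and threshold are illustrative only.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(7)

# Hypothetical quantitative matrix: one weight per residue per position.
matrix = rng.normal(0.0, 1.0, size=(9, len(AMINO_ACIDS)))

def score_peptide(peptide: str) -> float:
    """Sum the position-specific weights of each residue in a 9-mer."""
    return sum(matrix[i, AMINO_ACIDS.index(aa)] for i, aa in enumerate(peptide))

peptide = "SIINFEKLY"    # hypothetical 9-mer
threshold = 2.0          # illustrative cut-off for predicted binders
s = score_peptide(peptide)
print(f"score = {s:.2f}, predicted binder: {s > threshold}")
```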
Abstract:
Phylogenetic trees can provide a stable basis for a higher-level classification of organisms that reflects evolutionary relationships. However, some lineages have a complex evolutionary history that involves explosive radiation or hybridisation. Such histories have become increasingly apparent with the use of DNA sequence data for phylogeny estimation and explain, in part, past difficulties in producing stable morphology-based classifications for some groups. We illustrate this situation by using the example of tribe Mirbelieae (Fabaceae), whose generic classification has been fraught for decades. In particular, we discuss a recent proposal to combine 19 of the 25 Mirbelieae genera into a single genus, Pultenaea sens. lat., and how we might find stable and consistent ways to squeeze something as complex as life into little boxes for our own convenience. © CSIRO.
Abstract:
The solidification of intruded magma in porous rocks can result in the following two consequences: (1) the heat release due to the solidification of the interface between the rock and intruded magma and (2) the mass release of the volatile fluids in the region where the intruded magma is solidified into the rock. Traditionally, the intruded magma solidification problem is treated as a moving interface (i.e. the solidification interface between the rock and intruded magma) problem to consider these consequences in conventional numerical methods. This paper presents an alternative new approach to simulate thermal and chemical consequences/effects of magma intrusion in geological systems, which are composed of porous rocks. In the proposed new approach and algorithm, the original magma solidification problem with a moving boundary between the rock and intruded magma is transformed into a new problem without the moving boundary but with the proposed mass source and physically equivalent heat source. The major advantage in using the proposed equivalent algorithm is that a fixed mesh of finite elements with a variable integration time-step can be employed to simulate the consequences and effects of the intruded magma solidification using the conventional finite element method. The correctness and usefulness of the proposed equivalent algorithm have been demonstrated by a benchmark magma solidification problem. Copyright (c) 2005 John Wiley & Sons, Ltd.
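A sketch of the fixed-mesh idea described above in generic notation (not the paper's exact formulation): the latent heat released as the intruded magma solidifies enters the energy equation as an equivalent source term driven by the local solid fraction, so no explicit interface needs to be tracked.

```latex
% Energy balance with an equivalent heat source replacing the tracked
% solidification interface: f_s is the local solid fraction, L the latent
% heat, rho the density, c_p the specific heat and k the conductivity.
\rho c_{p}\, \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k\, \nabla T \right)
  + \rho L\, \frac{\partial f_{s}}{\partial t},
\qquad 0 \le f_{s} \le 1 .
```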
Abstract:
Carbon monoxide, the chief killer in fires, and other species are modelled for a series of enclosure fires. The conditions emulate building fires where CO is formed in the rich, turbulent, nonpremixed flame and is transported frozen to lean mixtures by the ceiling jet which is cooled by radiation and dilution. Conditional moment closure modelling is used and computational domain minimisation criteria are developed which reduce the computational cost of this method. The predictions give good agreement for CO and other species in the lean, quenched-gas stream, holding promise that this method may provide a practical means of modelling real, three-dimensional fire situations. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
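For context, the quantity transported in conditional moment closure is the mean of each species mass fraction conditioned on the mixture fraction; a sketch in standard notation, not specific to this study:

```latex
% Conditional moment closure: the modelled unknown is the mean mass fraction
% Y_i conditioned on the mixture fraction xi taking the sample-space value eta.
Q_{i}(\eta; \mathbf{x}, t)
  \equiv \left\langle\, Y_{i}(\mathbf{x}, t) \;\middle|\; \xi(\mathbf{x}, t) = \eta \,\right\rangle .
```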
Abstract:
Pollution by polycyclic aromatic hydrocarbons (PAHs) is widespread due to unsuitable disposal of industrial waste. PAHs are mostly listed as priority pollutants by environmental protection authorities worldwide. Phenanthrene, a typical PAH, was selected as the target compound in this paper. The PAH-degrading mixed culture, named ZM, was collected from a petroleum-contaminated river bed. This culture was injected into phenanthrene solutions at different concentrations to quantify the biodegradation process. Results show near-complete removal of phenanthrene within three days of biodegradation when the initial phenanthrene concentration is low. When the initial concentration is high, the removal rate is increased but 20%-40% of the phenanthrene remains at the end of the experiment. The biomass shows a peak on the third day due to the combined effects of microbial growth and decay. Another peak is evident for cases with a high initial concentration, possibly due to production of an intermediate metabolite. The pH generally decreased during biodegradation because of the production of organic acid. Two phenomenological models were designed to simulate the phenanthrene biodegradation and biomass growth. A relatively simple model that does not consider the intermediate metabolite and its inhibition of phenanthrene biodegradation cannot fit the observed data. A modified Monod model that considers an intermediate metabolite (organic acid) and its inhibitory effect reasonably depicts the experimental results.
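A minimal sketch of a Monod-type model with an inhibiting intermediate, assuming hypothetical parameter values and a simple inhibition term; the modified model used in the study may differ in form.

```python
# Sketch: Monod-type phenanthrene biodegradation with an inhibiting
# intermediate metabolite (organic acid). Parameter values and the
# inhibition form are illustrative assumptions, not fitted values.
from scipy.integrate import solve_ivp

mu_max, K_s = 0.8, 5.0   # max specific growth rate (1/day), half-saturation (mg/L)
Y, k_d = 0.5, 0.05       # biomass yield (-), decay rate (1/day)
K_i, alpha = 2.0, 0.3    # inhibition constant (mg/L), metabolite yield (-)

def rhs(t, state):
    S, X, I = state                                  # substrate, biomass, intermediate
    mu = mu_max * S / (K_s + S) / (1.0 + I / K_i)    # Monod rate damped by the metabolite
    dS = -mu * X / Y
    dX = mu * X - k_d * X
    dI = alpha * mu * X                              # intermediate accumulates with growth
    return [dS, dX, dI]

sol = solve_ivp(rhs, (0, 10), [50.0, 1.0, 0.0])      # 10-day run, S0 = 50 mg/L
print("phenanthrene remaining after 10 days:", round(float(sol.y[0, -1]), 2), "mg/L")
```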