866 results for Best evidence rule
Abstract:
The purpose of this study is to develop a decision-making system to evaluate the risks in E-Commerce (EC) projects. Competitive software businesses have the critical task of assessing risk across the software system development life cycle. This can be conducted on the basis of conventional probabilities, but often only limited appropriate information is available, so a complete set of probabilities cannot be obtained. In such problems, where the analysis is highly subjective and relates to vague, incomplete, uncertain or inexact information, the Dempster-Shafer (DS) theory of evidence offers a potential advantage. We use a direct way of reasoning in a single step (i.e., extended DS theory) to develop a decision-making system to evaluate the risk in EC projects. This consists of five stages: 1) establishing the knowledge base and setting rule strengths; 2) collecting evidence and data; 3) mapping evidence and rule strength to a mass distribution for each rule, i.e., the first half of the single-step reasoning process; 4) combining the prior mass with the masses from the different rules, i.e., the second half of the single-step reasoning process; and 5) evaluating the belief interval for the best-supported decision on the EC project. We test the system using potential risk factors associated with EC development, and the results indicate that the system is a promising way of assisting an EC project manager in identifying potential risk factors and the corresponding project risks.
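For readers unfamiliar with the mechanics behind stages 3-5, the following is a minimal sketch in Python of Dempster-Shafer evidence combination and belief-interval evaluation. The frame of discernment, rule names and mass values are illustrative assumptions for a two-outcome risk assessment, not the knowledge base or rule strengths used in the study.

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions ({frozenset: mass})."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence: combination undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}  # renormalise

def belief_interval(m, hypothesis):
    """Return (Bel(H), Pl(H)): lower and upper support for a hypothesis."""
    bel = sum(v for s, v in m.items() if s <= hypothesis)   # subsets of H
    pl = sum(v for s, v in m.items() if s & hypothesis)     # sets intersecting H
    return bel, pl

# Illustrative frame: is the EC project high risk or low risk?
frame = frozenset({"high_risk", "low_risk"})
# Hypothetical masses obtained from two rules (evidence support x rule strength).
m_rule1 = {frozenset({"high_risk"}): 0.6, frame: 0.4}
m_rule2 = {frozenset({"high_risk"}): 0.3, frozenset({"low_risk"}): 0.2, frame: 0.5}

m = combine_dempster(m_rule1, m_rule2)
print(belief_interval(m, frozenset({"high_risk"})))  # approx. (0.68, 0.91)
```

The belief interval printed at the end is the kind of output stage 5 refers to: the gap between Bel and Pl reflects how much evidence remains uncommitted between the competing risk hypotheses.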
Abstract:
In this paper, we investigate the problem encountered by Dempster's combination rule in view of Dempster's original combination framework. We first show that the root of Dempster's combination rule (defined and named by Shafer) is Dempster's original idea on evidence combination. We then argue that Dempster's original idea on evidence combination is, in fact, richer than what has been formulated in the rule. We conclude that, by strictly following what Dempster has suggested, there should be no counterintuitive results when combining evidence.
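For context, Dempster's rule of combination (as formulated and named by Shafer) combines two basic probability assignments m_1 and m_2 over a common frame of discernment as

\[
(m_1 \oplus m_2)(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - K},
\qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \quad A \neq \emptyset .
\]

This is the standard statement, included here only as a reference point for the discussion: the normalisation by 1 - K discards all conflicting mass, and when K approaches 1 this renormalisation is what produces the counterintuitive combinations the abstract refers to.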
Abstract:
Dealing with uncertainty problems in intelligent systems has attracted a lot of attention in the AI community. Quite a few techniques have been proposed. Among them, the Dempster-Shafer theory of evidence (DS theory) has been widely appreciated. In DS theory, Dempster's combination rule plays a major role. However, it has been pointed out that the application domains of the rule are rather limited and the application of the theory sometimes gives unexpected results. We have previously explored the problem with Dempster's combination rule and proposed an alternative combination mechanism in generalized incidence calculus. In this paper we give a comprehensive comparison between generalized incidence calculus and the Dempster-Shafer theory of evidence. We first prove that these two theories have the same ability in representing evidence and combining DS-independent evidence. We then show that the new approach can deal with some dependent situations while Dempster's combination rule cannot. Various examples in the paper show the ways of using generalized incidence calculus in expert systems.
Abstract:
Poverty alleviation lies at the heart of contemporary international initiatives on development. The key to development is the creation of an environment in which people can develop their potential, leading productive, creative lives in accordance with their needs, interests and faith. This entails, on the one hand, protecting the vulnerable from things that threaten their survival, such as inadequate nutrition, disease, conflict, natural disasters and the impact of climate change, thereby enhancing the poor’s capabilities to develop resilience in difficult conditions. On the other hand, it also requires a means of empowering the poor to act on their own behalf, as individuals and communities, to secure access to resources and the basic necessities of life such as water, food, shelter, sanitation, health and education. ‘Development’, from this perspective, seeks to address the sources of human insecurity, working towards ‘freedom from want, freedom from fear’ in ways that empower the vulnerable as agents of development (not passive recipients of benefaction).
Recognition of the magnitude of the problems confronted by the poor and failure of past interventions to tackle basic issues of human security led the United Nations (UN) in September 2000 to set out a range of ambitious, but clearly defined, development goals to be achieved by 2015. These are known as the Millennium Development Goals (MDGs). The intention of the UN was to mobilise multilateral international organisations, non-governmental organisations and the wider international community to focus attention on fulfilling earlier promises to combat global poverty. This international framework for development prioritises: the eradication of extreme poverty and hunger; achieving universal primary education; promoting gender equality and empowering women; reducing child mortality; improving maternal health; combating HIV/AIDS, malaria and other diseases; ensuring environmental sustainability; and developing a global partnership for development. These goals have been mapped onto specific targets (18 in total) against which outcomes of associated development initiatives can be measured and the international community held to account. If the world achieves the MDGs, more than 500 million people will be lifted out of poverty. However, the challenges the goals represent are formidable. Interim reports on the initiative indicate a need to scale up efforts and accelerate progress.
Only MDG 7, Target 11 explicitly identifies shelter as a priority, specifying the need to secure ‘by 2020 a significant improvement in the lives of at least 100 million slum dwellers’. This raises a question over how Habitat for Humanity’s commitment to tackling poverty housing fits within this broader international framework designed to alleviate global poverty. From an analysis of HFH case studies, this report argues that the processes by which Habitat for Humanity tackles poverty housing directly engage with the agenda set by the MDGs. This should not be regarded as a beneficial by-product of the delivery of decent, affordable shelter, but rather understood in terms of the ways in which Habitat for Humanity has translated its mission and values into a participatory model that empowers individuals and communities to address the interdependencies between inadequate shelter and other sources of human insecurity. What housing can deliver is as important as what housing itself is.
Examples of the ways in which Habitat for Humanity projects engage with the MDG framework include the incorporation of sustainable livelihoods strategies, the upgrading of basic infrastructure and the promotion of models of good governance. This includes housing projects that have also offered training to young people in skills used in the construction industry, microfinanced loans for women to start up their own home-based businesses, and the provision of food gardens. These play an important role in lifting families out of poverty and ensuring the sustainability of HFH projects. Studies of the impact of improved shelter and security of livelihood upon family life and the welfare of children show higher rates of participation in education, more time dedicated to study and greater individual achievement. Habitat for Humanity projects also typically incorporate measures to upgrade the provision of basic sanitation facilities and supplies of safe, potable drinking water. These measures not only directly help reduce mortality rates (e.g. diarrheal diseases account for around 2 million deaths annually in children under 5), but also, when delivered through HFH project-related ‘community funds’, empower the poor to mobilise community resources, develop local leadership capacities and even secure de facto security of tenure from government authorities.
In the process of translating its mission and values into practical measures, HFH has developed a range of innovative practices that deliver much more than housing alone. The organisation’s participatory model enables both direct beneficiaries and the wider community to tackle the insecurities they face, unlocking latent skills and enterprise, and building sustainable livelihood capabilities. HFH plays an important role as a catalyst for change, delivering through the vehicle of housing the means to address the primary causes of poverty itself. Its contribution to wider development priorities deserves better recognition. In calibrating the success of HFH projects in terms of units completed or renovated alone, the significance of the process by which HFH realises these outcomes is often not sufficiently acknowledged, both within the organisation and externally. As the case studies developed in the report illustrate, the methodologies Habitat for Humanity employs to address the issue of poverty housing within the developing world place the organisation at the centre of a global strategic agenda to address the root causes of poverty through community empowerment and the transformation of structures of governance.
Given this, the global network of HFH affiliates constitutes a unique organisational framework to facilitate sharing resources, ideas and practical experience across a diverse range of cultural, political and institutional environments. This said, it is apparent that work needs to be done to better facilitate the pooling of experience and lessons learnt from across its affiliates. Much is to be gained from learning from less successful projects, sharing innovative practices, identifying strategic partnerships with donors, other NGOs and CBOs, and engaging with the international development community on how housing fits within a broader agenda to alleviate poverty and promote good governance.
Abstract:
The results of three experiments investigating the role of deductive inference in Wason's selection task are reported. In Experiment 1, participants received either a standard one-rule problem or a task containing a second rule, which specified an alternative antecedent. Both groups of participants were asked to select those cards that they considered were necessary to test whether the rule common to both problems was true or false. The results showed a significant suppression of q card selections in the two-rule condition. In addition there was weak evidence for both decreased p selection and increased not-q selection. In Experiment 2 we again manipulated number of rules and found suppression of q card selections only. Finally, in Experiment 3 we compared one- and two-rule conditions with a two-rule condition where the second rule specified two alternative antecedents in the form of a disjunction. The q card selections were suppressed in both of the two-rule conditions but there was no effect of whether the second rule contained one or two alternative antecedents. We argue that our results support the claim that people make inferences about the unseen side of the cards when engaging with the indicative selection task.
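As background to the card labels used above (standard textbook analysis of the task, not taken from the paper itself): the indicative rule has the form \( p \rightarrow q \), which is falsified only by a card showing \( p \wedge \neg q \); the normatively correct selections are therefore the p and not-q cards, while turning the q card is uninformative, which is why suppression of q selections is the effect of interest.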
Abstract:
This article applies the panel stationarity test with a break proposed by Hadri and Rao (2008) to examine whether 14 macroeconomic variables of OECD countries can be best represented as a random walk or as stationary fluctuations around a deterministic trend. In contrast to previous studies, based essentially on visual inspection of the break type or just applying the most general break model, we use a model selection procedure based on BIC. We do this for each time series so that heterogeneous break models are allowed for in the panel. Our results suggest, overwhelmingly, that if we account for a structural break, cross-sectional dependence and choose the break models to be congruent with the data, then the null of stationarity cannot be rejected for all the 14 macroeconomic variables examined in this article. This is in sharp contrast with the results obtained by Hurlin (2004), using the same data but a different methodology.
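For reference, the BIC used for this break-model selection is the standard information criterion, stated here in its generic form rather than as a formula specific to the Hadri and Rao procedure:

\[
\mathrm{BIC} = -2 \ln \hat{L} + k \ln T ,
\]

where \(\hat{L}\) is the maximised likelihood of a candidate break model, \(k\) is the number of estimated parameters and \(T\) is the number of time-series observations; for each series, the break specification with the smallest BIC is retained.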
Abstract:
Galactic bulge planetary nebulae show evidence of mixed chemistry with emission from both silicate dust and polycyclic aromatic hydrocarbons (PAHs). This mixed chemistry is unlikely to be related to carbon dredge-up, as third dredge-up is not expected to occur in the low-mass bulge stars. We show that the phenomenon is widespread and is seen in 30 of the 40 nebulae in our sample, selected on the basis of their infrared flux. Hubble Space Telescope (HST) images and Ultraviolet and Visual Echelle Spectrograph (UVES) spectra show that the mixed chemistry is not related to the presence of emission-line stars, as it is in the Galactic disc population. We also rule out interaction with the interstellar medium (ISM) as the origin of the PAHs. Instead, a strong correlation is found with morphology and the presence of a dense torus. A chemical model is presented which shows that hydrocarbon chains can form within oxygen-rich gas through gas-phase chemical reactions. The model predicts two layers, one at A_V ~ 1.5, where small hydrocarbons form from reactions with C+, and one at A_V ~ 4, where larger chains (and by implication, PAHs) form from reactions with neutral, atomic carbon. These reactions take place in a mini-photon-dominated region (PDR). We conclude that the mixed-chemistry phenomenon occurring in the Galactic bulge planetary nebulae is best explained through hydrocarbon chemistry in an ultraviolet (UV)-irradiated, dense torus.
Abstract:
We present a new, detailed analysis of late-time mid-infrared observations of the Type II-P supernova (SN) 2003gd. At about 16 months after the explosion, the mid-IR flux is consistent with emission from 4 × 10^-5 M_⊙ of newly condensed dust in the ejecta. At 22 months emission from pointlike sources close to the SN position was detected at 8 and 24 μm. By 42 months the 24 μm flux had faded. Considerations of luminosity and source size rule out the ejecta of SN 2003gd as the main origin of the emission at 22 months. A possible alternative explanation for the emission at this later epoch is an IR echo from preexisting circumstellar or interstellar dust. We conclude that, contrary to the claim of Sugerman and coworkers, the mid-IR emission from SN 2003gd does not support the presence of 0.02 M_⊙ of newly formed dust in the ejecta. There is, as yet, no direct evidence that core-collapse supernovae are major dust factories.
Abstract:
In many environmental valuation applications standard sample sizes for choice modelling surveys are impractical to achieve. One can improve data quality using more in-depth surveys administered to fewer respondents. We report on a study using high quality rank-ordered data elicited with the best-worst approach. The resulting "exploded logit" choice model, estimated on 64 responses per person, was used to study the willingness to pay for external benefits by visitors for policies which maintain the cultural heritage of alpine grazing commons. We find evidence supporting this approach and reasonable estimates of mean WTP, which appear theoretically valid and policy informative. © The Author (2011).
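As background, the 'exploded logit' (rank-ordered logit) model referred to here factors the probability of an observed ranking of J alternatives into a sequence of standard logit choices; in generic notation (not the paper's own),

\[
P(r_1 \succ r_2 \succ \cdots \succ r_J) = \prod_{j=1}^{J-1} \frac{\exp(V_{r_j})}{\sum_{k=j}^{J} \exp(V_{r_k})},
\]

where \(V_{r_j}\) is the systematic utility of the alternative ranked j-th. Best-worst responses supply the ranking information that is 'exploded' into these pseudo-observations.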
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detectable at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
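For readers unfamiliar with the measure, the incremental cost-effectiveness ratio compared against the £30,000 threshold above is the standard ratio (generic definition, not specific to this study):

\[
\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0},
\]

where \(C_1, E_1\) are the expected costs and QALYs of the pathway under evaluation and \(C_0, E_0\) those of its comparator; a pathway is conventionally judged cost-effective when its ICER falls below the willingness-to-pay threshold, here £30,000 per QALY.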
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
Abstract:
Glaucoma is characterized by a typical appearance of the optic disc and peripheral visual field loss. However, diagnosis may be challenging even for an experienced clinician due to wide variability among normal and glaucomatous eyes. Standard automated perimetry is routinely used to establish the diagnosis of glaucoma. However, there is evidence that substantial retinal ganglion cell damage may occur in glaucoma before visual field defects are seen. The introduction of newer imaging devices such as confocal scanning laser ophthalmoscopy, scanning laser polarimetry and optical coherence tomography for measuring structural changes in the optic nerve head and retinal nerve fiber layer seems promising for early detection of glaucoma. New functional tests may also help in the diagnosis. However, there is no evidence that a single measurement is superior to the others and a combination of tests may be needed for detecting early damage in glaucoma. © 2010 Expert Reviews Ltd.
Abstract:
Why do firms pay dividends? To answer this question, we use a hand-collected data set of companies traded on the London stock market between 1825 and 1870. As tax rates were effectively zero, the capital market was unregulated, and there were no institutional stockholders, we can rule out these potential determinants ex ante. We find that, even though they were legal, share repurchases were not used by firms to return cash to shareholders. Instead, our evidence provides support for the information–communication explanation for dividends, while providing little support for agency, illiquidity, catering, or behavioral explanations. © The Authors 2013. Published by Oxford University Press [on behalf of the European Finance Association]. All rights reserved.
Abstract:
Education has a powerful and long-term effect on people’s lives and therefore should be based on evidence of what works best. This assertion warrants a definition of what constitutes good research evidence. Two research designs that are often thought to come from diametrically opposed fields, single-subject research designs and randomised controlled trials, are described, and common features, such as the use of probabilistic assumptions and the aim of discovering causal relations, are delineated. Differences between the two research designs are also highlighted, and this is used as the basis to set out how these two research designs might better be used to complement one another. Recommendations for future action are made accordingly.
Abstract:
The present study investigated the longitudinal relationship between alcohol consumption at age 13 and at age 16. Alcohol-specific measures were frequency of drinking, amount consumed at last use and alcohol-related harms. Self-report data were gathered from 1113 high school students at T1 and 981 students at T2. Socio-demographic data were gathered, as was information on context of use, alcohol-related knowledge and attitudes, four domains of aggression and delay reward discounting. Results indicated that any consumption of alcohol, even supervised consumption, at T1 was associated with significantly poorer outcomes at T2. In other words, compared to those still abstinent at age 13, those engaging in alcohol use in any context reported significantly more frequent drinking, more alcohol-related harms and more units consumed at last use at age 16. Results also support the relationship between higher levels of physical aggression at T1 and a greater likelihood of more problematic alcohol use behaviours at T2. The findings support other evidence suggesting that abstinence in early adolescence has better longitudinal outcomes than supervised consumption of alcohol. These results suggest support for current guidance on adolescent drinking in the United Kingdom (UK).
Abstract:
For more than fifty years, evidence has accrued regarding the efficacy of applied behaviour analysis-based interventions for individuals with autism spectrum disorders (ASD). Despite this history of empirical evidence, some researchers and ASD experts are still reluctant to accept behavioural interventions as best practice for ASD. In this paper, we consider both randomised controlled trials and single-subject experimental designs as forms of evidence-based practice (EBP). The specific application of these methods to ASD research is considered. In an effort to provide scientifically based evidence for interventions for ASD, EBP standards have been debated without a consensus being achieved. Service users of ASD interventions need access to sound empirical evidence to choose appropriate programmes for those they care for with ASD, rather than putting their hopes in therapies backed by pseudoscience and celebrity endorsements.