877 results for decision analysis
Abstract:
OBJECTIVE: We aim to explore how health surrogates of patients with dementia proceed in decision making, which considerations are decisive, and whether family surrogates and professional guardians decide differently. METHODS: We conducted an experimental vignette study using think aloud protocol analysis. Thirty-two family surrogates and professional guardians were asked to decide on two hypothetical case vignettes, concerning a feeding tube placement and a cardiac pacemaker implantation in patients with end-stage dementia. They had to verbalize their thoughts while deciding. Verbalizations were audio-recorded, transcribed, and analyzed according to content analysis. By experimentally changing variables in the vignettes, the impact of these variables on the outcome of decision making was calculated. RESULTS: Although only 25% and 31% of the relatives gave their consent to the feeding tube and pacemaker placement, respectively, 56% and 81% of the professional guardians consented to these life-sustaining measures. Relatives decided intuitively, referred to their own preferences, and focused on the patient's age, state of wellbeing, and suffering. Professional guardians showed a deliberative approach, relied on medical and legal authorities, and emphasized patient autonomy. Situational variables such as the patient's current behavior and the views of health care professionals and family members had higher impacts on decisions than the patient's prior statements or life attitudes. CONCLUSIONS: Both the process and outcome of surrogate decision making depend heavily on whether the surrogate is a relative or not. These findings have implications for the physician-surrogate relationship and legal frameworks regarding surrogacy. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
We study induced aggregation operators. The analysis begins with a review of some basic concepts such as the induced ordered weighted averaging (IOWA) operator and the induced ordered weighted geometric (IOWG) operator. We then analyze the problem of decision making with the Dempster-Shafer theory of evidence and suggest the use of induced aggregation operators in this setting. We focus on the aggregation step and examine some of its main properties, including the distinction between descending and ascending orders and different families of induced operators. Finally, we present an illustrative example that compares the results obtained with different types of aggregation operators.
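For readers unfamiliar with the operator mentioned above, the IOWA operator reorders the arguments by an order-inducing variable before applying the weighting vector. The following minimal Python sketch illustrates that standard definition; the function name, weights and data are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def iowa(pairs, weights):
    """Induced ordered weighted averaging (IOWA) operator.

    pairs   : sequence of (u, a) tuples, where u is the order-inducing
              variable and a is the argument value to be aggregated.
    weights : weighting vector w of the same length, with sum(w) == 1.
    """
    pairs = sorted(pairs, key=lambda p: p[0], reverse=True)  # descending order by u
    b = np.array([a for _, a in pairs])                      # reordered arguments
    w = np.asarray(weights, dtype=float)
    return float(w @ b)

# Hypothetical example: three arguments induced by importance scores u
print(iowa([(0.7, 60), (0.9, 40), (0.4, 80)], [0.5, 0.3, 0.2]))  # -> 54.0
```

The IOWG variant mentioned in the abstract would replace the weighted sum with a weighted geometric mean of the same reordered arguments.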
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is used as a benchmark model. PNN are a neural-network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that they can easily be used in decision support systems dealing with problems of automatic classification. Support vector machines are an implementation of the principles of statistical learning theory for classification tasks. They have recently been applied successfully to a range of environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
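A minimal sketch of the PNN idea described above (a Parzen-window density estimate per class combined with a Bayesian decision rule), assuming a Gaussian kernel; the bandwidth, function name and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network as a Parzen-window classifier.

    For each class, the pattern layer estimates the class-conditional density
    with a Gaussian kernel of bandwidth sigma; the decision layer then picks
    the class with the highest prior-weighted density (MAP decision rule).
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)              # squared distances to class samples
            density = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
            prior = len(Xc) / len(X_train)
            scores.append(prior * density)
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy spatial example: two classes of 2-D points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(X, y, np.array([[0.2, 0.1], [2.8, 3.1]])))
```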
Abstract:
This empirical work applies a duration model to the study of the factors determining the privatization of local water services. I assess how the factors determining the privatization decision evolve over time. A sample of 133 Spanish municipalities over the six terms of office held during the 1980-2002 period is analyzed. A dynamic neighboring effect is hypothesized and successfully tested: in a first stage, private water supply firms may try to expand into regions where no service has yet been privatized, in order to spread across the region, thanks to scale advantages, once established there. Other factors influencing the privatization decision evolve over the two decades under study, from the priority of fixing old infrastructure to concerns about service efficiency. Some complementary results regarding political and budgetary factors are also obtained.
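The abstract does not specify the estimator, so the following is only a hedged sketch of how a time-to-privatization duration analysis might look in Python, using a Cox proportional hazards model from the lifelines package on synthetic data; all variable names and values are illustrative assumptions, not the author's specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 133  # same sample size as the study; the values themselves are synthetic
neighbor_share = rng.uniform(0, 1, n)        # share of neighboring municipalities already privatized
budget_deficit = rng.normal(0.05, 0.02, n)   # illustrative fiscal covariate

# Synthetic time-to-privatization (in terms of office), shorter where neighbors privatize
baseline = rng.exponential(6, n)
duration = np.clip(baseline * np.exp(-1.0 * neighbor_share), 1, 6).round()
event = (duration < 6).astype(int)           # municipalities still public after 6 terms are censored

df = pd.DataFrame({"duration": duration, "privatized": event,
                   "neighbor_share": neighbor_share, "budget_deficit": budget_deficit})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="privatized")
cph.print_summary()  # a hazard ratio above 1 for neighbor_share would indicate a neighboring effect
```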
Abstract:
The authors discuss the findings of the international literature on referrals between ambulatory physicians. There are still few studies on this problem, and the methodologies used are often too different to allow valid comparisons. The available results raise more questions about the determinants of the referral process than they answer. This can be explained by the multidimensional nature of the factors involved in the decision to refer a patient to another practitioner, and in particular by the complex interaction between the characteristics of the patient, the practitioner, and the health care system itself.
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have supplanted the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste and noise, together with health problems, are the result of this ongoing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
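The self-organising maps mentioned above can be sketched with a short training loop; the following numpy example is only an illustration with an assumed grid size, decay schedule and random stand-in data, not the thesis implementation.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organising map: online training with a Gaussian neighbourhood."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of every neuron, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)                 # linearly decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 1e-3    # shrinking neighbourhood radius
            # Best-matching unit (BMU): neuron whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the map grid
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)               # pull the neighbourhood towards x
            step += 1
    return weights

# Toy high-dimensional data standing in for socio-economic indicators per commune
data = np.random.default_rng(1).random((200, 8))
som = train_som(data)
print(som.shape)  # (10, 10, 8): one prototype vector per map cell
```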
Abstract:
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.
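The abstract describes the individualization decision as a function of (i) the probability that the person is the source and (ii) the decision maker's loss function. The numeric sketch below illustrates that expected-loss comparison and the implied probability threshold; the two-action framing follows the abstract, but the loss values and probability are purely illustrative assumptions.

```python
# States: H = "the selected person is the source of the crime stain", not-H.
# Actions: individualize or abstain. The loss values below are illustrative only.
loss = {
    ("individualize", "H"): 0.0,        # correct individualization
    ("individualize", "not-H"): 100.0,  # false individualization (most serious error)
    ("abstain", "H"): 10.0,             # missed individualization
    ("abstain", "not-H"): 0.0,          # correctly refraining
}

def expected_loss(action, p_source):
    """Expected loss of an action given P(H) = p_source."""
    return loss[(action, "H")] * p_source + loss[(action, "not-H")] * (1 - p_source)

p = 0.95  # assumed posterior probability that the person is the source
el_ind, el_abs = expected_loss("individualize", p), expected_loss("abstain", p)
print(el_ind, el_abs, "individualize" if el_ind < el_abs else "abstain")

# Individualizing minimizes expected loss when p exceeds the threshold implied by the losses:
# p > l_false_individualization / (l_false_individualization + l_missed) = 100 / 110 ~ 0.909 here.
print(100.0 / (100.0 + 10.0))
```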
Abstract:
The development of new rail systems in the first part of the 21st century is the result of a wide range of trends that are making it increasingly difficult to maintain regional mobility using the two dominant intercity travel modes, auto and air. These trends include the changing character of the economic structure of industry: the North American industrial structure is moving rapidly from a manufacturing base to a service-based economy. This is increasing the need for business travel, while the increase in disposable income due to higher salaries has promoted increased social and tourist travel. Another trend is the change in the regulatory environment. The trend towards deregulation has dramatically reduced the willingness of the airlines to operate from smaller airports, and the level of service has fallen due to the creation of hub-and-spoke systems. While new air technology such as regional jets may mitigate this trend to some degree at medium-size airports, smaller airports will continue to lose out. Finally, increasing environmental concerns have reduced the ability of the automobile to meet intercity travel needs because of increased suburban congestion and limited highway capacity in big cities. Against this background, the rail mode offers new options due, first, to existing rail rights-of-way offering direct access into major cities and, in most cases, having significant capacity available and, second, to a revolution in vehicle technology that makes new rail rolling stock faster and less expensive to purchase and operate. This study is designed to evaluate the potential for rail service to make an important contribution to maintaining regional mobility over the next 30 to 50 years in Iowa. The study evaluates the potential for rail service on three key routes across Iowa and assesses the impact of new train technology in reducing costs and improving rail service. The study also considers the potential for developing the system on an incremental basis. The service analysis and recommendations do not involve current Amtrak intercity service; that service is presumed to continue on its current route and schedule. The study builds from data and analyses that have been generated for the Midwest Rail Initiative (MWRI) Study. For example, the zone system and the operating and capital unit cost assumptions are derived from the MWRI study. The MWRI represents a cooperative effort between nine Midwest states, Amtrak and the Federal Railroad Administration (FRA), which contracted with Transportation Economics & Management Systems, Inc. to evaluate the potential for a regional rail system. The system is to offer modern, frequent, higher-speed train service to the region, with Chicago as the connecting hub. Exhibit 1-1 illustrates the size of the system and how the Iowa route fits into the whole; the map represents the system including the decision on the Iowa route derived from the current study.
Abstract:
The Bridges Decision Support Model is a geographic information system (GIS) that assembles existing data on archaeological sites, surveys, and their geologic contexts to assess the risk of bridge replacement projects encountering 13,000- to 150-year-old Native American sites. This project identifies critical variables for assessing prehistoric site potential, examines the quality of available data about these variables, and applies the data to creating a decision support framework for use by the Iowa Department of Transportation (Iowa DOT) and others. An analysis of previous archaeological surveys indicates that subsurface testing to discover buried sites became increasingly common after 1980, but did not become routine until after the adoption, in 1993, of guidelines recommending such testing. Even then, the average depth of testing has been relatively shallow. Alluvial deposits of sufficient age, deposited in depositional environments conducive to human habitation, are considerably thicker than archaeologists have routinely tested.
Abstract:
This article employs a unique data set - covering 25 popular votes on foreign, European and immigration/asylum policy held between 1992 and 2006 in Switzerland - in order to examine the conditional impact of context upon utilitarian, cultural, political and cognitive determinants of individual attitudes toward international openness. Our results reveal clear patterns of cross-level interactions between individual determinants and the project-related context of the vote. Thus, although party cues and political competence have a strong impact on individuals' support for international openness, this impact is substantially mediated by the type of coalition that is operating within the party elite. Similarly, subjective utilitarian and cultural considerations influence the voters' decision in interaction with the content of the proposal submitted to the voters as well as with the framing of the voting campaign.
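The cross-level interactions described above (individual determinants whose effect depends on the vote-level context) are commonly captured with interaction terms in a pooled logit or multilevel model. The sketch below, on synthetic data with invented variable names, is only an illustration of that modeling idea, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for pooled vote-level survey data; names are illustrative.
rng = np.random.default_rng(5)
n = 3000
df = pd.DataFrame({
    "party_cue_pro": rng.integers(0, 2, n),   # individual determinant: respondent's party endorses openness
    "elite_unified": rng.integers(0, 2, n),   # vote-level context: unified elite coalition behind the proposal
    "econ_winner": rng.integers(0, 2, n),     # utilitarian determinant
})
# Generate votes so that the party-cue effect is stronger when the elite is unified
lin = -0.5 + 0.4 * df["econ_winner"] + (0.3 + 1.0 * df["elite_unified"]) * df["party_cue_pro"]
df["vote_open"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# Cross-level interaction: the effect of the party cue depends on the elite coalition
model = smf.logit("vote_open ~ party_cue_pro * elite_unified + econ_winner", data=df).fit()
print(model.summary())
```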
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
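The review contrasts single-value regression predictions with stochastic simulations that yield multiple realizations for risk mapping. The short sketch below shows how an exceedance-probability map can be derived from a stack of realizations; the randomly generated fields are stand-ins for the output of a geostatistical simulation, not actual Chernobyl fallout data, and the threshold is an illustrative assumption.

```python
import numpy as np

# Stand-in for stochastic-simulation output: 100 equally probable realizations
# of a contamination field on a 50 x 50 grid (values in kBq/m2).
rng = np.random.default_rng(7)
realizations = rng.lognormal(mean=3.0, sigma=0.8, size=(100, 50, 50))

threshold = 37.0  # illustrative decision threshold in kBq/m2

# Probabilistic (risk) map: for each grid cell, the fraction of realizations
# exceeding the threshold, i.e. an estimate of P(contamination > threshold | data).
exceedance_probability = (realizations > threshold).mean(axis=0)

# Cells where the risk of exceeding the threshold is judged unacceptably high
risk_area = exceedance_probability > 0.9
print(exceedance_probability.shape, risk_area.sum())
```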
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. Since indoor radon is a multivariate process, it was important at first to define the influence of each factor, and in particular the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but proved not to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and the multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. Existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. An approach of increasing method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for reproducing the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation; support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
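Among the exploratory tools listed above, the General Regression Neural Network (GRNN) is equivalent to Nadaraya-Watson kernel regression. A minimal numpy sketch follows; the Gaussian kernel, bandwidth and toy data are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """General Regression Neural Network (Nadaraya-Watson kernel regression):
    the prediction is a kernel-weighted average of the training targets."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances to training samples
        w = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))  # weighted average of targets
    return np.array(preds)

# Toy example: radon-like measurements at 2-D coordinates
rng = np.random.default_rng(3)
X = rng.random((100, 2))
y = 100 + 300 * X[:, 0] + rng.normal(0, 20, 100)       # synthetic concentrations
print(grnn_predict(X, y, np.array([[0.2, 0.5], [0.8, 0.5]])))
```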
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at the development of predictive models and at the understanding of IRC in Switzerland, taking into account as much information as possible in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase we performed univariate analyses of IRC for different variables, namely the detector type, building category, foundation, year of construction, the average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared to earlier constructions, and we observed a further drop of IRC after 1970. We also found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which permits the definition of geological classes that are coherent in terms of IRC. We performed a soil-gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the usefulness of SRC for IRC prediction is limited. The second part of the project was dedicated to the predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose. We could explain up to 33% of the variance of the log-transformed IRC across Switzerland, which is a good performance compared to previous attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests allow the role of each IRC predictor to be determined in a multidimensional setting. We found spatial information such as geology, altitude and coordinates to have a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability that IRC exceed 300 Bq/m3. In addition, we developed a confidence index in order to provide an estimate of the uncertainty of the map.
All methods allow the easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts at the same time for different architectural situations and for geological and geographical conditions. For the communication of the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation. This communication could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of being above a given IRC, together with a corresponding index of confidence. Taking into account the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
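The abstract describes explaining part of the variance of log-transformed IRC with ensemble regression trees and deriving the local probability of exceeding 300 Bq/m3. The sketch below illustrates that kind of workflow with scikit-learn on synthetic data; the feature names and data are invented, and the exceedance probability here is derived from the spread of the tree ensemble as a rough stand-in for the thesis's kernel-based estimate.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the radon data; column names are illustrative assumptions.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "x_coord": rng.uniform(0, 300, n),           # coordinates (km)
    "y_coord": rng.uniform(0, 200, n),
    "altitude": rng.uniform(200, 2000, n),
    "year_built": rng.integers(1850, 2010, n),
    "has_basement": rng.integers(0, 2, n),
})
log_irc = (4.0 + 0.001 * df["altitude"] - 0.005 * (df["year_built"] - 1900)
           + 0.3 * df["has_basement"] + rng.normal(0, 1.0, n))

X_train, X_test, y_train, y_test = train_test_split(df, log_irc, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("explained variance (R2):", r2_score(y_test, rf.predict(X_test)))
print(dict(zip(df.columns, rf.feature_importances_.round(3))))  # role of each predictor

# Per-dwelling probability of exceeding 300 Bq/m3, estimated from the spread of
# predictions across the individual trees (a rough uncertainty proxy).
tree_preds = np.stack([t.predict(X_test.to_numpy()) for t in rf.estimators_])
p_exceed = (np.exp(tree_preds) > 300).mean(axis=0)
print(p_exceed[:5])
```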