995 results for Statistical decision


Relevance:

20.00%

Publisher:

Abstract:

To understand human behavior, it is important to know under what conditions people deviate from selfish rationality. This study explores the interaction of natural survival instincts and internalized social norms using data on the sinking of the Titanic and the Lusitania. We show that time pressure appears to be crucial when explaining behavior under extreme conditions of life and death. Even though the two vessels and the composition of their passengers were quite similar, the behavior of the individuals on board was dramatically different. On the Lusitania, selfish behavior dominated (which corresponds to the classical homo oeconomicus); on the Titanic, social norms and social status (class) dominated, which contradicts standard economics. This difference could be attributed to the fact that the Lusitania sank in 18 minutes, creating a situation in which the short-run flight impulse dominates behavior. On the slowly sinking Titanic (2 hours, 40 minutes), there was time for socially determined behavioral patterns to re-emerge. To our knowledge, this is the first time that these shipping disasters have been analyzed in a comparative manner with advanced statistical (econometric) techniques using individual data of the passengers and crew. Knowing human behavior under extreme conditions allows us to gain insights about how varied human behavior can be depending on differing external conditions.
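
The abstract does not specify the exact econometric model; purely as an illustration of the kind of individual-level survival comparison it describes, a minimal sketch with hypothetical variable names and synthetic data (not the authors' dataset) might look like this:

```python
# Illustrative only: a logit of survival on passenger attributes, with the ship
# indicator interacted with class and gender to contrast the two disasters.
# Variable names and data are hypothetical, not the study's records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ship": rng.choice(["Titanic", "Lusitania"], size=n),
    "first_class": rng.integers(0, 2, size=n),
    "female": rng.integers(0, 2, size=n),
    "age": rng.normal(35, 12, size=n).clip(1, 80),
})
# Synthetic outcome: class and gender matter more on the slowly sinking ship.
slow = (df["ship"] == "Titanic").astype(int)
logit_p = (-0.5 + slow * (0.8 * df["first_class"] + 1.2 * df["female"])
           + (1 - slow) * 0.2 * df["female"] - 0.01 * df["age"])
df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Interaction terms let the effect of class and gender differ by ship.
model = smf.logit("survived ~ ship * (first_class + female) + age", data=df).fit()
print(model.summary())
```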

Relevance:

20.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how the crash process gives rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
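
The study's simulation is not reproduced here; as a minimal sketch of the underlying idea, assuming sites generate crashes as independent Bernoulli trials with small, unequal probabilities (Poisson trials), one can show that heterogeneous risk plus low exposure alone produces an apparent excess of zeros relative to a single Poisson:

```python
# Minimal sketch of the excess-zeros argument: crash counts are simulated as
# Poisson trials (independent Bernoulli events with small, unequal per-vehicle
# risks).  With low exposure and heterogeneous risk, zeros exceed what a single
# Poisson with the same mean predicts -- no dual-state (safe/unsafe) process is
# needed.  All parameter values are illustrative assumptions, not study figures.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 20_000
exposure = 5_000                                   # vehicles per observation period
p_crash = rng.lognormal(mean=np.log(5e-5), sigma=1.2, size=n_sites)  # unequal risks

counts = rng.binomial(exposure, p_crash)           # crash count per site-period

mean_count = counts.mean()
observed_zero_share = (counts == 0).mean()
poisson_zero_share = np.exp(-mean_count)           # P(0) under Poisson with same mean

print(f"mean crashes per site-period : {mean_count:.3f}")
print(f"observed share of zeros      : {observed_zero_share:.3f}")
print(f"Poisson-implied zero share   : {poisson_zero_share:.3f}")
```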

Relevance:

20.00%

Publisher:

Abstract:

Now in its second edition, this book describes tools that are commonly used in transportation data analysis. The first part of the text provides statistical fundamentals while the second part presents continuous dependent variable models. With a focus on count and discrete dependent variable models, the third part features new chapters on mixed logit models, logistic regression, and ordered probability models. The last section provides additional coverage of Bayesian statistical modeling, including Bayesian inference and Markov chain Monte Carlo methods. Data sets are available online to use with the modeling techniques discussed.
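
As a flavour of the count-model material described (illustrative only, with synthetic data rather than one of the book's datasets), fitting Poisson and negative binomial regressions with statsmodels might look like this:

```python
# Illustrative only: Poisson and negative binomial count models of the kind
# covered in the text, fitted to synthetic crash-count data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"aadt_thousands": rng.uniform(1, 30, n),   # traffic volume (000s)
                   "lanes": rng.integers(2, 7, n)})
mu = np.exp(-1.0 + 0.05 * df["aadt_thousands"] + 0.1 * df["lanes"])
# Poisson-gamma mixture generates overdispersed (negative binomial) counts.
df["crashes"] = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))

poisson_fit = smf.poisson("crashes ~ aadt_thousands + lanes", data=df).fit()
negbin_fit = smf.negativebinomial("crashes ~ aadt_thousands + lanes", data=df).fit()
print(poisson_fit.summary())
print(negbin_fit.summary())
```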

Relevance:

20.00%

Publisher:

Abstract:

Most infrastructure project developments are complex in nature, particularly in the planning phase. During this stage, many vague alternatives are tabled, from the strategic to the operational level. Human judgement and decision making are characterised by biases, errors and the use of heuristics. These factors are intangible and hard to measure because they are subjective and qualitative in nature. The problem with human judgement becomes more complex when a group of people is involved. The variety of stakeholders may cause conflict due to differences in personal judgements. Hence, the available alternatives increase the complexity of the decision making process. Therefore, it is desirable to find ways of enhancing the efficiency of decision making to avoid misunderstandings and conflict within organisations. As a result, numerous attempts have been made to solve problems in this area by leveraging technologies such as decision support systems. However, most construction project management decision support systems concentrate only on model development and neglect fundamentals of computing such as requirements engineering, data communication, data management and human-centred computing. Thus, decision support systems are complicated and are less efficient in supporting the decision making of project team members. It is desirable for decision support systems to be simpler, to provide a better collaborative platform, to allow for efficient data manipulation, and to adequately reflect user needs. In this chapter, a framework for a more desirable decision support system environment is presented. Some key issues related to decision support system implementation are also described.

Relevance:

20.00%

Publisher:

Abstract:

The field of collaborative health planning faces significant challenges due to the lack of effective information and systems, and the absence of a framework for making informed decisions. These challenges have been magnified by the rise of the healthy cities movement; consequently, there have been more frequent calls for localised, collaborative and evidence-driven decision-making. Some studies have reported that the use of decision support systems (DSS) for planning healthy cities may lead to increased collaboration between stakeholders and the general public, improved accuracy and quality of decision-making processes, and improved availability of data and information for health decision-makers. These links have not yet been fully tested, and only a handful of studies have evaluated the impact of DSS on stakeholders, policy-makers and health planners. This study suggests a framework for developing healthy cities and introduces an online Geographic Information Systems (GIS)-based DSS for improving collaborative health planning. It also presents preliminary findings of an ongoing case study conducted in the Logan-Beaudesert region of Queensland, Australia. These findings highlight perceptions of decision-making prior to the implementation of the DSS intervention. Further, the findings help us to understand the potential role of the DSS in improving collaborative health planning practice.

Relevance:

20.00%

Publisher:

Abstract:

This paper explores the interplay between individual values, espoused organisational values and the values of the organisational culture in practice in light of a recent Royal Commission in Queensland, Australia, which highlighted systematic failures in patient care. The lack of congruence among values at these levels impacts upon the ethical decision making of health managers. The presence of institutional ethics regimes such as the Public Sector Ethics Act 1994 (Qld) and agency codes of conduct is not sufficient to counteract the negative influence of informal codes of practice that undermine espoused organisational values and community standards. The ethical decision-making capacity of health care managers remains at the front line in the battle against unethical and unprofessional practice.

What is known about the topic? Value congruence theory focusses on the conflicts between individual and organisational values. Congruence between individual values, espoused values and values expressed in everyday practice can only be achieved by ensuring that such shared values are an ever-present factor in managerial decision making.

What does this paper add? The importance of value congruence in building and sustaining a healthy organisational culture is confirmed by the evidence presented in the Bundaberg Hospital Inquiry. The presence of strong individual values among staff and strong espoused values in line with community expectations, backed up by legislation and ethics regimes, was not, in itself, sufficient to ensure a healthy organisational culture and prevent unethical, and possibly illegal, behaviour.

What are the implications for practitioners? Managers must incorporate ethics in decision making to establish and maintain the nexus between individual and organisational values that is a vital component of a healthy organisational culture.

Relevance:

20.00%

Publisher:

Abstract:

In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from the sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; the sole reliance on these procedures has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
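
As a small illustration of the contrast Breiman drew (not material from the article or the seminars), comparing a classical statistical classifier with one of Breiman's own data mining algorithms on the same synthetic data might look like this:

```python
# Illustrative comparison of a classical statistical classifier (logistic
# regression) with a data-mining algorithm (Breiman's random forest) on the
# same synthetic data.  Dataset and scores are illustrative, not results
# reported by the HPC & Research Support group.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2_000, n_features=20, n_informative=5,
                           random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1_000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:20s} mean CV accuracy = {scores.mean():.3f}")
```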

Relevance:

20.00%

Publisher:

Abstract:

The development of locally based healthcare initiatives, such as community health coalitions that focus on capacity-building programs and multi-faceted responses to long-term health problems, has become an increasingly important part of the public health landscape. As a result of their complexity and the level of investment involved, it has become necessary to develop innovative ways to help manage these new healthcare approaches. Geographical Information Systems (GIS) have been suggested as one innovative approach that will allow community health coalitions to better manage and plan their activities. The focus of this paper is to provide a commentary on the use of GIS as a tool for community coalitions and to discuss some of the potential benefits and issues surrounding the development of these tools.

Relevance:

20.00%

Publisher:

Abstract:

This overview focuses on the application of chemometrics techniques to the investigation of soils contaminated by polycyclic aromatic hydrocarbons (PAHs) and metals, because these two important and very diverse groups of pollutants are ubiquitous in soils. The salient features of various studies carried out in the micro- and recreational environments of humans are highlighted in the context of the various multivariate statistical techniques available across discipline boundaries that have been effectively used in soil studies. Particular attention is paid to techniques employed in the geosciences that may be effectively utilized for environmental soil studies; classical multivariate approaches that may be used in isolation or as complementary methods to these are also discussed. Chemometrics techniques widely applied in atmospheric studies for identifying sources of pollutants, or for determining the importance of contaminant source contributions to a particular site, have seen little use in soil studies but may be effectively employed in such investigations. Suitable programs are also available for suggesting mitigating measures in cases of soil contamination, and these are also considered. Specific techniques reviewed include pattern recognition techniques such as Principal Components Analysis (PCA), Fuzzy Clustering (FC) and Cluster Analysis (CA); geostatistical tools such as variograms, Geographical Information Systems (GIS), contour mapping and kriging; and source identification and contribution estimation methods such as Positive Matrix Factorisation (PMF) and Principal Component Analysis on Absolute Principal Component Scores (PCA/APCS). Mitigating measures to limit or eliminate pollutant sources may be suggested through the use of ranking analysis and multi-criteria decision-making (MCDM) methods. These methods are mainly represented in this review by studies employing the Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and its associated graphic output, Geometrical Analysis for Interactive Aid (GAIA).
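
As a simple illustration of one of the pattern recognition techniques mentioned, a PCA applied to a hypothetical sites-by-contaminants data matrix (synthetic values, not data from any of the reviewed studies) might look like this:

```python
# Minimal PCA sketch on a hypothetical soil data matrix (sites x contaminant
# concentrations).  Values are synthetic; in a real study the columns would be
# measured PAH and metal concentrations at each sampling site.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
contaminants = ["naphthalene", "pyrene", "benzo[a]pyrene", "Pb", "Zn", "Cd"]
n_sites = 60

# Two latent "sources" (e.g., combustion-related PAHs vs. traffic metals)
# drive correlated concentrations across sites.
source_strength = rng.lognormal(size=(n_sites, 2))
loadings = np.array([[1.0, 0.1], [0.9, 0.2], [0.8, 0.1],   # PAHs load on source 1
                     [0.1, 1.0], [0.2, 0.9], [0.1, 0.8]])  # metals load on source 2
X = source_strength @ loadings.T + rng.normal(scale=0.1, size=(n_sites, 6))

X_std = StandardScaler().fit_transform(X)      # autoscale before PCA
pca = PCA(n_components=2).fit(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
for comp, vec in enumerate(pca.components_, start=1):
    top = sorted(zip(contaminants, vec), key=lambda t: abs(t[1]), reverse=True)
    print(f"PC{comp} dominant loadings:", [(c, round(w, 2)) for c, w in top[:3]])
```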

Relevance:

20.00%

Publisher:

Abstract:

This article explores how the boards of small firms actually undertake to perform strategic tasks. Board strategic involvement has seldom been investigated in the context of small firms. We seek to make a contribution by investigating antecedents of board strategic involvement. The antecedents are “board working style” and “board quality attributes”, which go beyond the board composition features of board size, CEO duality, the ratio of non-executive to executive directors, and ownership. Hypotheses were tested on a sample of 497 Norwegian firms (with 5 to 30 employees). Our results show that board working style and board quality attributes, rather than board composition features, enhance board strategic involvement. Moreover, board quality attributes outperform board working style in fostering board strategic involvement.

Relevance:

20.00%

Publisher:

Abstract:

The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure. If left unfixed, a leak in a tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
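
The paper's own decision model is not reproduced here; purely as an illustrative sketch, the expected-cost comparison at the core of a decision-tree approach, with hypothetical probabilities, prices and durations, might look like this:

```python
# Hedged sketch of a decision-tree style expected-cost comparison for the
# "repair now" vs. "defer repair" choice.  All probabilities, prices and
# durations are hypothetical, not values from the case study.

OUTAGE_MW = 350            # generation capacity lost during the repair outage

def outage_cost(hours, price_per_mwh, repair_cost):
    """Total cost of an outage: lost revenue plus the direct repair cost."""
    return OUTAGE_MW * hours * price_per_mwh + repair_cost

# Chance nodes: future electricity price scenarios with assumed probabilities.
price_scenarios = [(0.6, 40.0), (0.3, 90.0), (0.1, 300.0)]   # (probability, $/MWh)

# Decision branch 1: repair immediately (short outage, single leak).
repair_now = sum(p * outage_cost(hours=24, price_per_mwh=price, repair_cost=50_000)
                 for p, price in price_scenarios)

# Decision branch 2: defer the repair.  With some probability the leak erodes
# adjacent tubes (interactive failure), lengthening the outage and the repair.
p_spread = 0.4
defer = sum(p * ((1 - p_spread) * outage_cost(36, price, 50_000)
                 + p_spread * outage_cost(96, price, 200_000))
            for p, price in price_scenarios)

print(f"expected cost, repair now : ${repair_now:,.0f}")
print(f"expected cost, defer      : ${defer:,.0f}")
```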

Relevance:

20.00%

Publisher:

Abstract:

Introduction Among the many requirements for establishing community health, a healthy urban environment stands out as a significant one. A healthy urban environment constantly changes and improves community well-being and expands community resources. Efforts to promote such an environment must therefore include the creation of structures and processes that actively work to dismantle existing community inequalities. In general, these processes are hard to manage; therefore, they require reliable planning and decision support systems. Current and previous practice shows that the use of decision support systems in planning for healthy communities has significant impacts on those communities. These impacts include, but are not limited to: increasing collaboration between stakeholders and the general public; improving the accuracy and quality of the decision-making process; enhancing healthcare services; and improving data and information availability for health decision-makers and service planners. For these reasons, this study investigates the challenges and opportunities of planning for healthy communities, with the specific aim of examining the effectiveness of participatory planning and decision systems in supporting the planning of such communities.

Methods This study introduces a recently developed methodology based on an online participatory decision support system. This new decision support system contributes to solving environmental and community health problems and to planning for healthy communities. The system also provides a powerful and effective platform for stakeholders and interested members of the community to establish an empowered society and a transparent, participatory decision-making environment.

Results The paper discusses the preliminary findings from the literature review of this decision support system in a case study of Logan City, Queensland.

Conclusion The paper concludes with future research directions and the applicability of this decision support system to health service planning elsewhere.

Relevance:

20.00%

Publisher:

Abstract:

This chapter investigates the challenges and opportunities associated with planning for a competitive city. The chapter is based on the assumption that a healthy city is a fundamental prerequisite for a competitive city. Thus, it is critical to examine the local determinants of health and factor these into any planning efforts. The main focus of the chapter is on the role of e-health planning, utilising web-based geographic decision support systems. The proposed novel decision support system would provide a powerful and effective platform for stakeholders to access essential data for decision-making purposes. The chapter also highlights the need for a comprehensive information framework to guide the process of planning for healthy cities, and discusses the prospects and constraints of such an approach. In summary, this chapter outlines the potential of an information science-based framework and suggests practical planning methods, as part of a broader e-health approach, for improving the health characteristics of competitive cities.

Relevance:

20.00%

Publisher:

Abstract:

Information overload and mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the problems of overload and mismatch, neither of these approaches alone can provide a satisfactory solution. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter the sheer volume of incoming information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both the term-based and pattern-based information filtering models.
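
The paper's rough analysis and pattern taxonomy models are not reproduced here; purely to illustrate the two-stage idea, a toy filter that uses a cheap term-overlap threshold as the first stage and a stricter pattern (term co-occurrence) score as the second might look like this:

```python
# Toy two-stage filter illustrating only the general idea: a cheap term-based
# first stage discards clearly irrelevant documents (overload), then a stricter
# pattern-based second stage ranks the survivors (mismatch).  This is NOT the
# paper's rough analysis or pattern taxonomy mining model.

PROFILE_TERMS = {"statistical", "decision", "model", "crash", "count"}
# "Patterns" here are simply term sets that must co-occur in a document.
PROFILE_PATTERNS = [{"statistical", "decision"}, {"crash", "count", "model"}]

def stage_one(doc_tokens, min_overlap=2):
    """Cheap term-overlap test: keep the document only if enough profile terms appear."""
    return len(PROFILE_TERMS & set(doc_tokens)) >= min_overlap

def stage_two(doc_tokens):
    """Pattern score: fraction of profile patterns fully contained in the document."""
    tokens = set(doc_tokens)
    matched = sum(1 for pattern in PROFILE_PATTERNS if pattern <= tokens)
    return matched / len(PROFILE_PATTERNS)

documents = {
    "d1": "statistical decision model for crash count data".split(),
    "d2": "a statistical note on unrelated survey sampling".split(),
    "d3": "gardening tips for winter".split(),
}

survivors = {d: toks for d, toks in documents.items() if stage_one(toks)}
ranked = sorted(survivors, key=lambda d: stage_two(survivors[d]), reverse=True)
print("passed stage one:", sorted(survivors))
print("ranking after stage two:", [(d, stage_two(survivors[d])) for d in ranked])
```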