915 results for Uncertainty in Illness Theory


Relevance: 100.00%

Abstract:

The three concepts named in the title occupy a central place in economic theory, and their relationship probes, above all, the limits of what economics can know. What do we know about economic decisions? On what information are they based? Can economic decisions be put on a "scientific" footing? Since the question of uncertainty appeared in the 1920s, everything has been said about it: it has been examined philosophically and mathematically, and countless theoretical and practical aspects have been discussed. Why, then, take up the topic yet again? The answer is simple: because the question is genuinely fundamental in every respect and relevant at all times. It is said that in Roman triumphal processions a slave always rode on the victor's chariot, continually reminding the general, intoxicated by his triumph, that he too was only a man and should not forget it. Economic decision-makers must likewise be reminded, again and again, that economic decisions are made under uncertainty. There is a very tight limit on how far economic processes can be understood and controlled, and that limit is set by the inherent uncertainty of the processes themselves. It must be murmured continually into decision-makers' ears: they too are only human, and their knowledge is therefore severely limited. In "bold" decisions the outcome is uncertain; error, however, can be taken as certain. / === / In this article the author presents some remarks on the application of probability theory to financial decision-making. From a mathematical point of view, the risk-neutral measures used in finance are a version of the separating hyperplanes used in optimization theory and in general equilibrium theory. They are therefore probabilities only in a formal sense; interpreting them as probabilities is a misleading analogy that leads to wrong decisions.
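The point that risk-neutral measures are formal constructs rather than beliefs can be made concrete in the standard one-period binomial model (a textbook illustration added here, not taken from the article). With up factor u, down factor d and riskless gross return R satisfying d < R < u, no-arbitrage forces a unique "probability"

q = \frac{R - d}{u - d}, \qquad 0 < q < 1,

and any claim paying X_u in the up state and X_d in the down state is priced as

\Pi_0 = \frac{1}{R}\,\bigl(q\,X_u + (1 - q)\,X_d\bigr).

Here q is fixed entirely by the market parameters (u, d, R) and carries no information about the actual likelihood of the up move, which is exactly why reading it as a probability can mislead.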

Relevance: 100.00%

Abstract:

We introduce the concept of a TUU-game, a transferable utility game with uncertainty. In a TUU-game there is uncertainty regarding the payoffs of coalitions. One out of a finite number of states of nature materializes and, conditional on the state, the players are involved in a particular transferable utility game. We consider the case without ex ante commitment possibilities and propose the Weak Sequential Core as a solution concept. We characterize the Weak Sequential Core and show that it is non-empty if all ex post TU-games are convex.
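As a concrete illustration of the convexity condition behind the non-emptiness result, the following minimal Python sketch (hypothetical names and data, not from the paper) checks supermodularity of a TU-game stored as a map from coalitions to worths, i.e. v(S ∪ T) + v(S ∩ T) ≥ v(S) + v(T) for all coalitions S and T:

from itertools import chain, combinations

def all_coalitions(players):
    """Every subset of the player set, as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(players, r)
                                for r in range(len(players) + 1))]

def is_convex(v, players):
    """Supermodularity check: v(S|T) + v(S&T) >= v(S) + v(T) for all S, T."""
    for S in all_coalitions(players):
        for T in all_coalitions(players):
            if v[S | T] + v[S & T] < v[S] + v[T]:
                return False
    return True

# Hypothetical two-player game; the grand coalition's surplus makes it convex.
players = {1, 2}
v = {frozenset(): 0.0, frozenset({1}): 1.0,
     frozenset({2}): 1.0, frozenset({1, 2}): 3.0}
print(is_convex(v, players))  # True

In a TUU-game the same check would be run on each ex post TU-game, one per state of nature.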

Relevance: 100.00%

Abstract:

Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it will cost-effectively aid in reducing the uncertainty of a model's prediction. Collecting salinity as well as temperature data could help reduce predictive uncertainty in a variable-density model. However, before numerical models can be created, rigorous testing of the modeling code needs to be completed. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied to two different hydrological analyses to explore the capacity of a variable-density model to guide data collection.

The first analysis tested a linear method to guide data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements, at different locations within a simulated carbonate platform, for predicting movement of the saltwater interface were assessed. Results from the method showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case. Results also indicated that a linear method could be used to quantify data worth in a nonlinear model.

The second hydrological analysis utilized a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared to multiple kinds of data to (1) calibrate and verify the model, and (2) explore the potential for the model to be used to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can give insight into submarine groundwater discharge and can be used to guide data collection.
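The "linear method" for data worth in the first analysis is in the spirit of first-order second-moment (FOSM) predictive uncertainty analysis. The Python sketch below (hypothetical matrices; a generic construction, not the study's implementation) scores a candidate measurement by how much it reduces the variance of a linearized prediction s = y'p, given a prior parameter covariance C and observation sensitivities X with noise covariance R:

import numpy as np

def predictive_variance(y, X, C, R):
    """FOSM variance of prediction s = y @ p after collecting data
    d = X @ p + noise (noise covariance R), with prior parameter
    covariance C. X=None means no data are collected."""
    if X is None:
        return float(y @ C @ y)
    S = X @ C @ X.T + R                        # innovation covariance
    Cpost = C - C @ X.T @ np.linalg.solve(S, X @ C)
    return float(y @ Cpost @ y)

# Hypothetical setup: 3 parameters, one candidate concentration observation
# and one candidate temperature observation (sensitivities made up).
C = np.diag([1.0, 0.5, 2.0])
y = np.array([1.0, -0.5, 0.3])
X_conc = np.array([[0.9, 0.1, 0.4]]); R_conc = np.array([[0.01]])
X_temp = np.array([[0.1, 0.2, 0.05]]); R_temp = np.array([[0.01]])

base = predictive_variance(y, None, C, None)
for name, X, R in [("concentration", X_conc, R_conc),
                   ("temperature", X_temp, R_temp)]:
    worth = base - predictive_variance(y, X, C, R)
    print(f"{name}: predictive variance reduction = {worth:.4f}")

In this made-up example the concentration observation, being more sensitive to the parameters the prediction depends on, has the greater worth, the same qualitative comparison the study reports.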

Relevance: 100.00%

Abstract:

The relationship between uncertainty and firms' risk-taking behaviour has been a focus of investigation since early discussions of the nature of enterprise activity. Here, we focus on how firms' perceptions of environmental uncertainty, and of the risks involved, shape their willingness to undertake green innovation. The analysis is based on a cross-sectional survey of UK food companies undertaken in 2008. The results reinforce the relationship between perceived environmental uncertainty and perceived innovation risk, and emphasise the importance of macro-uncertainty in shaping firms' willingness to undertake green innovation. The perceived (market-related) riskiness of innovation also positively influences the probability of innovating, suggesting either a proactive approach to stimulating market disruption or an opportunistic approach to innovation leadership.

Relevance: 100.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as n grows dramatically. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
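To fix ideas, a latent structure model of the PARAFAC type represents the joint pmf of p categorical variables as a finite mixture of product kernels, p(x) = Σ_h λ_h Π_j ψ_j[h, x_j]. A minimal Python sketch of evaluating such a pmf, with hypothetical dimensions:

import numpy as np

rng = np.random.default_rng(0)

def parafac_pmf(lam, psi, x):
    """Latent-class (PARAFAC) pmf at cell x:
    p(x) = sum_h lam[h] * prod_j psi[j][h, x[j]]."""
    p = 0.0
    for h in range(len(lam)):
        term = lam[h]
        for j, psi_j in enumerate(psi):
            term *= psi_j[h, x[j]]
        p += term
    return p

# Hypothetical example: 3 variables with 2, 3 and 2 levels, k = 2 classes.
# lam sums to one, as does each row of each psi[j].
levels, k = [2, 3, 2], 2
lam = np.array([0.6, 0.4])
psi = [rng.dirichlet(np.ones(d), size=k) for d in levels]
print(parafac_pmf(lam, psi, (0, 2, 1)))

The minimal k achieving an exact representation of this form is the nonnegative rank of the probability tensor, the quantity Chapter 2 relates to the support of a log-linear model.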

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
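For intuition about what "optimal Gaussian approximation" means in the Kullback-Leibler sense (a generic fact illustrated here, not the chapter's derivation for log-linear models): among all Gaussians q, the minimizer of KL(p || q) is the one matching the mean and covariance of p. A toy Python sketch with a skewed stand-in posterior:

import numpy as np

rng = np.random.default_rng(1)

# A Gamma(3, 1) stands in for an exact but non-Gaussian posterior.
samples = rng.gamma(shape=3.0, scale=1.0, size=100_000)

# The KL(p || q)-optimal Gaussian matches p's first two moments.
mu, sigma2 = samples.mean(), samples.var()
print(f"optimal Gaussian approximation: N({mu:.3f}, {sigma2:.3f})")  # ~ N(3, 3)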

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
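The waiting-time idea can be illustrated generically (a toy Python sketch, not the chapter's estimator): given any time series, record the gaps between successive exceedances of a high threshold; a heavy concentration of short gaps reflects temporal dependence in the extremes.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical series with temporal dependence: a Gaussian AR(1).
n, phi = 50_000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

u = np.quantile(x, 0.99)          # high threshold
times = np.flatnonzero(x > u)     # exceedance times
waits = np.diff(times)            # waiting times between exceedances
print(f"mean wait: {waits.mean():.1f}, "
      f"share of back-to-back exceedances: {(waits == 1).mean():.2f}")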

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
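The first of these approximation classes, random subsets of data, can be sketched as follows (hypothetical Gaussian-mean target and a generic subsampling Metropolis kernel, not the chapter's specific construction). Each step replaces the full-data log-likelihood with a rescaled sum over a random subset, so iterations are cheaper but the transition kernel, and hence the stationary distribution, is only approximate:

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: n Gaussian observations with unknown mean theta
# (flat prior implied, so the posterior is driven by the likelihood).
n, m = 10_000, 500
data = rng.normal(loc=1.5, scale=1.0, size=n)

def approx_loglik(theta):
    """Log-likelihood from m random points, rescaled by n/m."""
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

def mh_step(theta, step=0.05):
    """Metropolis step with the noisy, subsampled log-likelihood."""
    prop = theta + step * rng.standard_normal()
    if np.log(rng.uniform()) < approx_loglik(prop) - approx_loglik(theta):
        return prop
    return theta

theta, chain = 0.0, []
for _ in range(2_000):
    theta = mh_step(theta)
    chain.append(theta)
print(f"approximate posterior mean: {np.mean(chain[500:]):.3f}")  # near 1.5

Choosing the subset size m is exactly the tolerated-error-versus-budget tradeoff the chapter's framework is designed to assess.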

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
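For reference, the truncated-normal (Albert and Chib) data augmentation sampler for probit regression mentioned above has the following minimal form (hypothetical rare-event data and a generic Gaussian prior; a sketch of the standard algorithm, not the chapter's code):

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Hypothetical rare-event data: large n, few successes.
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-2.5, 0.5])             # intercept makes successes rare
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

tau2 = 100.0                                  # prior: beta ~ N(0, tau2 * I)
V = np.linalg.inv(X.T @ X + np.eye(p) / tau2) # posterior covariance given z
L = np.linalg.cholesky(V)

beta = np.zeros(p)
for it in range(1_000):
    # 1. Latent z_i ~ N(x_i' beta, 1), truncated positive iff y_i = 1.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2. beta | z ~ N(V X'z, V).
    beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
print("final draw of beta:", beta)

In the regime simulated here (roughly 1% successes), successive draws of the intercept are highly autocorrelated, which is the slow mixing the chapter quantifies via the spectral gap.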

Relevance: 100.00%

Abstract:

Purpose: Current thinking about 'patient safety' emphasises the causal relationship between the work environment and the delivery of clinical care. This research draws on the theory of Normal Accidents to extend this analysis and better understand the 'organisational factors' that threaten safety. Methods: Ethnographic research methods were used, with observations of the operating department setting over 18 months and interviews with 80 members of hospital staff. The setting for the study was the operating department of a large teaching hospital in the North-West of England. Results: The work of the operating department is determined by interdependent, 'tightly coupled' organisational relationships between hospital departments, based upon the timely exchange of information, services and resources required for the delivery of care. Failures within these processes, manifest as 'breakdowns' in inter-departmental relationships, lead to situations of constraint, rapid change and uncertainty in the work of the operating department that require staff to break with established routines and work under increased time and emotional pressures. This means that staff focus on working quickly, as opposed to working safely. Conclusion: Analysis of safety needs to move beyond a focus on the immediate work environment and individual practice to consider the more complex and deeply structured organisational systems of hospital activity. For departmental managers, the scope for service planning to control for safety may be limited, as the structured 'real world' situation of service delivery is shaped by inter-departmental and organisational factors that are perhaps beyond the scope of departmental management.

Relevance: 100.00%

Abstract:

Knowledge of the efficacy of an intervention for disease control on an individual farm is essential to make good decisions on preventive healthcare, but the uncertainty in outcome associated with undertaking a specific control strategy has rarely been considered in veterinary medicine. The purpose of this research was to explore the uncertainty in the change in disease incidence and financial benefit that could occur on different farms when two effective farm management interventions are undertaken. Bovine mastitis was used as an example disease, and the research was conducted using data from an intervention study as prior information within an integrated Bayesian simulation model. Predictions were made of the reduction in clinical mastitis within 30 days of calving on 52 farms, attributable to the application of two herd interventions previously reported as effective: rotation of dry cow pasture and differential dry cow therapy. Results indicated important degrees of uncertainty in the predicted reduction in clinical mastitis for individual farms when either intervention was undertaken; the 95% credible intervals for the reduction in clinical mastitis incidence were substantial and of clinical relevance. The large uncertainty associated with the predicted reduction in clinical mastitis attributable to the interventions resulted in important variability in possible financial outcomes for each farm. The uncertainty in outcome associated with farm control measures illustrates the difficulty facing a veterinary clinician when making an on-farm decision and highlights the importance of iterative herd health procedures (continual evaluation, reassessment and adjusted interventions) to optimise health in an individual herd.
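The style of prediction described, propagating posterior uncertainty in an intervention's effect through to farm-level outcomes, can be sketched generically in Python (all numbers hypothetical, not the study's model or estimates):

import numpy as np

rng = np.random.default_rng(5)
n_sim = 20_000

# Hypothetical posterior for the intervention's rate ratio, centred on
# a ~30% reduction in clinical mastitis but with substantial uncertainty.
rate_ratio = rng.lognormal(mean=np.log(0.7), sigma=0.25, size=n_sim)

baseline = 12.0                    # cases per 100 cows within 30 d of calving
cases_avoided = baseline * (1.0 - rate_ratio)
cost_per_case = 250.0              # hypothetical cost of one clinical case
benefit = cases_avoided * cost_per_case

lo, hi = np.percentile(cases_avoided, [2.5, 97.5])
print(f"95% credible interval, cases avoided: ({lo:.1f}, {hi:.1f})")
print(f"mean financial benefit per 100 cows: {benefit.mean():.0f}")

Because the rate-ratio posterior is wide, the credible interval for cases avoided (and hence for financial benefit) spans clinically meaningful values, including little or no benefit, which mirrors the decision problem the authors describe.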

Relevance: 100.00%

Abstract:

This Leadership Academy Workshop presentation focused on 'Trust and Leadership in the Downturn', with particular reference to the public sector and to education. The presentation discussed a range of definitions of trust, including the view of Mayer, Davis and Schoorman (1995) that trust can be described as 'the willingness of a person to be vulnerable to the actions of another, based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that action'. The presentation then focused on the reasons why this relational psychological state is important, particularly in an economic recession when people were facing job cuts and economic uncertainty in a wider political and social environment characterised by cynicism and a downturn in trust. If trust is defined in part as a belief in the honesty, competence and benevolence of others, it tends to act like 'social glue', cushioning difficult situations and enabling actions to take place easily that otherwise would not be possible. A worrying state of affairs has recently been developing across the world, however, in the economic downturn, as reported in the Edelman Trust Barometer for 2009, which recorded a marked diminution of trust in corporations, businesses and government as a result of the credit crunch. While the US and parts of Europe were showing recovery from a generalised loss of trust by mid-year 2009, the UK was not. It seems that social attitudes in Britain may be hardening: from being a nation of sceptics we may be becoming a nation of cynics. For example, 69% of the population surveyed by Edelman trusted the government less than they had six months earlier. In this situation, there is a need to promote positive measures to build trust, including the establishment of more transparent and honest business practices and practices to ensure that employees are treated well. Following the presentation, a workshop was held to discuss the nature of a possible loss of trust in the downturn in the UK and its implications for leadership practices and development.

Relevance: 100.00%

Abstract:

This study investigates the 'self' of six Irish working-class women, all parenting alone and all returned to the field of adult education. Bourdieu's concepts of habitus, field and capital are the backdrop for the study of the 'self', which is viewed through his lens. The study commenced in September 2012 and concluded in August 2014, in a small urban educational setting in an Irish city. All of the women in the study are single parents; most of them did not complete second-level education, and none had experienced adult or third-level education. Their ages range from 30 to 55 years. The study pursues the women's motivations for returning to education, the challenges they faced throughout the journey, and their experiences, views and perspectives on adult education. The methodology chosen for this research is critical ethnography, and as an emerging ethnographer I was able to view the phenomena from both an emic (insider) and an etic (outsider) perspective. The critically oriented approach is a branch of qualitative research; it is a holistic and humanistic approach that is cyclical and reflective. The critical ethnographic case studies that developed are theoretically framed in critical theory and critical pedagogy. The data were collected from classroom observations (recorded in a journal) and interviews (both individual and group). The women's life experiences inform their sense of self, and their capital reserves derive from their experience of habitus. The study also attempts to understand the delivery of the programmes and how it can shape the journey of the adult learners. The analysis of the interviews, observations, field notes and reflective journals demonstrates what the women have to say about their new journey in adult education; this information informs best practice for adult education programmes. The study also considers the complexity of habitus and the many forms of capital. The disposition of adults returning to education is one of the major themes of this study: findings reflect the uncertainty of that return but also underline how the women unshackled themselves from some of the constraints of a restricted view of self. Witnessing this new habitus forming was the core of their transformational possibility becoming real. The study provides a unique contribution to knowledge in that it utilises Bourdieusian concepts and theories not only as theoretical tools but as conceptual tools for analysis. It examines transformative pedagogy in the field of adult education and offers important recommendations for future policy and practice.

Relevance: 100.00%

Abstract:

This thesis presents four essays in energy economics. The first essay investigates one of the workhorse models of resource economics, the Hotelling model of an inter-temporally optimizing resource-extracting firm. The Hotelling model provides a convincing theory of fundamental concepts like resource scarcity, but very few empirical validations of the model have been conducted. This essay attempts to validate the model empirically by first extending it to include exploration activity and market power, and then using a newly constructed data set for the uranium mining industry to test whether a major resource-extracting firm in the industry follows the theory's predictions. The results show that the theory is rejected in all considered settings. The second and third essays investigate the difference in market outcomes under spot-market-based trade as compared with long-term-contract-based trade in oligopolistic markets with investments. The second essay shows analytically, in an electricity market setting, that investments and consumer welfare may be higher under spot-market-based trade than under long-term contracts. The third essay proposes techniques for solving large-scale models of this kind empirically, exploring the practicability of the approach in an application to the international metallurgical coal market. The final essay investigates the influence of policy uncertainty on investment decisions. With France debating the role of nuclear technology, this essay analyses how policy uncertainty regarding nuclear power in France may affect the French and European power sectors. Applying a stochastic model of the European power system, the analysis shows that the costs of uncertainty in this particular application are rather low compared with the overall costs of a nuclear phase-out.
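For reference, the testable core of the basic Hotelling model (the standard textbook statement, before the essay's extensions for exploration and market power) is that the scarcity rent grows at the rate of interest. For a price-taking firm with constant marginal extraction cost c, discount rate r and initial stock S_0,

\max_{\{q_t\}} \; \sum_{t=0}^{T} \frac{(p_t - c)\, q_t}{(1+r)^t}
\quad \text{s.t.} \quad \sum_{t=0}^{T} q_t \le S_0
\qquad \Longrightarrow \qquad
p_t - c = (p_0 - c)(1+r)^t

along any interior optimum: the net price must rise at rate r, which is the kind of restriction an empirical validation can confront with data.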

Relevance: 100.00%

Abstract:

Introduction: The training of nursing students in the context of clinical practice is characterised by educational experiences subject to various emotional stresses (stress, ambivalence, frustration, conflict), which at times leave the student very vulnerable. Not all students, however, use the same strategies to minimise these experiences and their negative effects on their health and well-being. Objective: To analyse nursing students' perceptions of the determinants of their health status and well-being in clinical practice. Methods: Exploratory research. Results: The results reveal the complexity of the teaching/learning process in clinical practice, identifying determinants that limit and/or promote the health and well-being of students and that contribute, or fail to contribute, to their motivation, self-confidence and learning. All students value the presence of the following dimensions: affective-emotional (humanisation in learning experiences); relational dynamics (interactions developed with all stakeholders); methods used (professional competence of the clinical supervisor and teacher); school curriculum (adaptation of learning to theory); and socialisation to the profession (becoming a nurse). Conclusions: Although all students identify the dimensions described as fundamental to learning in clinical practice, the study results are dichotomous and ambivalent. Second- and third-year students report a low perception of these dimensions in clinical practice; for them the dimensions are a source of concern and uncertainty in learning and a limit on their health and well-being. For fourth-year students, these dimensions are perceived as present, as sources of motivation and learning, and as catalysts promoting their health and well-being.

Relevance: 100.00%

Abstract:

We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
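In generic form (a standard construction consistent with the abstract; the paper's specific functionals may differ), the local invertibility constraint \det\nabla\varphi > 0 can be enforced on the total potential energy E either by an exterior penalty, which permits but charges violations,

\min_{\varphi} \; E(\varphi) \;+\; \frac{1}{\epsilon} \int_{\Omega} \bigl(\min\{\det\nabla\varphi,\, 0\}\bigr)^2 \, dx,

or by an interior (barrier) penalty, which keeps iterates strictly feasible,

\min_{\varphi} \; E(\varphi) \;-\; \epsilon \int_{\Omega} \log\bigl(\det\nabla\varphi\bigr)\, dx,

with both formulations recovering the constrained minimizer as the penalization is enforced, \epsilon \to 0, which is the convergence behaviour the numerical results report.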

Relevance: 100.00%

Abstract:

The microtube is a simple and cheap emitter that was widely used throughout the world in the early days of drip irrigation. Because its discharge depends on its length, the length can be adjusted according to the pressure distribution along the lateral line. This not only counters the pressure loss due to pipe friction but also makes the microtube suitable for undulating and hilly conditions, where pressure in the lateral line varies considerably with differences in elevation. This is the major problem facing the designer: emitter flow changes as the acting pressure head changes. In this study, a novel micro-sprinkler system is proposed that uses the microtube as the emitter, where the length of the microtube can be varied in response to pressure changes along the lateral to give uniformity of emitter discharges. The objective of this work is to develop and validate empirical and semi-theoretical equations for the emitter hydraulics. Laboratory testing of two microtube emitters of different diameters, over a range of pressures and discharges, was used to develop the equations relating pressure and discharge, and pressure and length, for these emitters. The equations proposed will be used in the design of the micro-sprinkler system, to determine the length of microtube required to give the nominal discharge for any given pressure. The semi-theoretical approach underlined the importance of accurate measurement of the microtube diameter and the uncertainty in the estimation of the friction factor for these tubes.
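A semi-theoretical relation of the kind developed here can be sketched in Python from the Darcy-Weisbach equation with a laminar or Blasius (smooth-pipe) friction factor (hypothetical numbers; a generic construction, not the paper's fitted equations). Given the head h available at a lateral outlet and the nominal discharge q, the required microtube length follows from h = f (L/D) v^2 / (2g):

import math

def required_length(h, q, D, nu=1.0e-6, g=9.81):
    """Microtube length (m) that dissipates head h (m) at discharge
    q (m^3/s), via Darcy-Weisbach with a laminar (f = 64/Re) or
    Blasius (f = 0.316 Re^-0.25) friction factor."""
    A = math.pi * D ** 2 / 4.0
    v = q / A                              # mean velocity (m/s)
    Re = v * D / nu                        # Reynolds number
    f = 64.0 / Re if Re < 2300 else 0.316 * Re ** -0.25
    return 2.0 * g * h * D / (f * v ** 2)

# Hypothetical example: 1.0 mm microtube, 10 m head, 5 L/h nominal discharge.
q = 5.0 / 1000.0 / 3600.0                  # 5 L/h in m^3/s
print(f"required length: {required_length(h=10.0, q=q, D=0.001):.2f} m")  # ~1.7 m

Entrance and minor losses, and the uncertainty in the friction factor for such small tubes, are neglected here; accounting for them is precisely where the paper's laboratory calibration and the reported sensitivity to microtube diameter come in.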