939 results for Extreme Value Theory
Abstract:
The study provides an overview of the possible applications of game theory to climate change. The characteristics of games are adapted to climate and carbon topics. The roles of uncertainty, probability, the marginal value of adaptation, common pool resources, and related concepts are tailored to the context of international relations and the challenge of global warming.
Abstract:
This article assesses agency-theory problems that contributed to the failure of shopping centers. The negative effects of the financial and economic downturn that started in 2008 were accentuated in emerging markets such as Romania. Several shopping centers were closed or sold through bankruptcy proceedings or forced execution. Ten of these failed shopping centers were selected in order to assess the agency-theory problems contributing to their failure; a qualitative multiple case-study method is used. Results suggest that in all of the cases the risk-averse behavior of the external investor (principal) led to risk-sharing problems and subsequently to the failure of the shopping centers. In some of the cases, moral hazard (a lack of know-how and experience on the part of the developer, acting as agent) as well as adverse selection problems could be identified. The novelty of the topic for the shopping center industry and the empirical evidence confer significant academic and practical value on the article.
Abstract:
We consider various lexicographic allocation procedures for coalitional games with transferable utility where the payoffs are computed in an externally given order of the players. The common feature of these methods is that if the allocation is in the core, it is an extreme point of the core. We first investigate the general relationships between these allocations and obtain two hierarchies on the class of balanced games. Secondly, we focus on assignment games and sharpen some of these general relationships. Our main result is the coincidence of the sets of lemarals (vectors of lexicographic maxima over the set of dual coalitionally rational payoff vectors), lemacols (vectors of lexicographic maxima over the core) and extreme core points. As byproducts, we show that, similarly to the core and the coalitionally rational payoff set, the dual coalitionally rational payoff set of an assignment game is also determined by the individual and mixed-pair coalitions, and we present an efficient and elementary way to compute these basic dual coalitional values. This provides a way to compute the Alexia value (the average of all lemacols) with no need to obtain the whole coalitional function of the dual assignment game.
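The following is a minimal illustrative sketch (Python with numpy/scipy; the game values, the player order, and the sequential-LP approach are assumptions made for illustration, not the efficient elementary procedure described above). It computes a lemacol, i.e. the vector of lexicographic maxima over the core, for a toy 2x2 assignment game by solving one linear program per player in the given order.

    # Sketch: compute a lemacol of a toy 2x2 assignment game via sequential LPs.
    # Not the paper's elementary procedure; purely illustrative.
    import numpy as np
    from scipy.optimize import linprog, linear_sum_assignment

    a = np.array([[5.0, 8.0],
                  [7.0, 6.0]])              # pair values a[i][j] for seller i, buyer j
    rows, cols = linear_sum_assignment(-a)  # optimal matching (maximize total value)
    total = a[rows, cols].sum()             # worth of the grand coalition

    # Variables x = (u1, u2, v1, v2): seller payoffs u, buyer payoffs v.
    # Core: efficiency, non-negativity, and u_i + v_j >= a[i][j] for every mixed pair.
    n_var = 4
    A_ub, b_ub = [], []
    for i in range(2):
        for j in range(2):
            row = [0.0] * n_var
            row[i] = -1.0            # -u_i
            row[2 + j] = -1.0        # -v_j
            A_ub.append(row)
            b_ub.append(-a[i, j])    # -(u_i + v_j) <= -a[i][j]
    A_eq, b_eq = [[1.0] * n_var], [total]

    order = [0, 1, 2, 3]             # externally given order: u1, u2, v1, v2
    fixed = {}                       # payoffs already fixed at their lexicographic maxima
    for k in order:
        c = [0.0] * n_var
        c[k] = -1.0                  # maximize x[k] (linprog minimizes)
        extra_eq = [[1.0 if t == idx else 0.0 for t in range(n_var)] for idx in fixed]
        extra_b = list(fixed.values())
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq + extra_eq, b_eq=b_eq + extra_b,
                      bounds=[(0, None)] * n_var)
        fixed[k] = res.x[k]

    print("lemacol for this order:", [round(fixed[t], 3) for t in range(n_var)])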
Innovációs kalandozások az elmélettől a stratégiáig = Innovation adventuring from theory to strategy
Abstract:
The aim of the article is to bring the reader closer to the topics of innovation and innovation management. The study starts its treatment of innovation from the foundations of the theory of the firm and arrives at concrete strategic considerations by the end of the article. Drawing on a wide range of domestic and international literature, it explores the roots of innovation in the theory of the firm. The literature used is not left at a purely theoretical level, as the study translates these theoretical concepts into real, practice-oriented business language. Its aim is to distil the theories scattered across the field and to synthesize them with modern management principles. The article examines innovation from the perspective of corporate value creation. It finds that innovation integrates the teachings of numerous theories of the firm, as a result of which the strategic implications can also span a wide spectrum. As innovation induces change in the organization, complex optimization dilemmas emerge which, in a turbulent economic environment with ever shorter reaction times, pose a growing challenge for managers. The article presents these dilemmas in a discussion-opening spirit, synthesizing theory and practice. ____ The aim of the article is to bring the reader closer to the topic of innovation management through several theories of the firm and their strategic implications. This study is based on a wide range of international literature, which is interpreted in a practice-oriented way. The article summarizes the scattered information on innovation and makes an effort to synthesize this knowledge. The perspective taken is mainly that of corporate value creation. As innovation brings change into the organization, company leaders have to face complex economic optimization questions. These management dilemmas are not easy to solve in a turbulent business environment. The article seeks to highlight these strategic-level issues around innovation management by synthesizing theoretical knowledge with business implications.
Abstract:
Heterogeneity of labour and its implications for the Marxian theory of value has been one of the most controversial issues in the literature of Marxist political economy. The adoption of Marx's conjecture of a uniform rate of surplus value leads to a simultaneous determination of the values of common commodities and of labour commodities of different types, together with the uniform rate of surplus value. The determination of these variables can be formally represented as a parametric eigenvalue problem. Morishima's and Bródy's earlier results are analysed and given new interpretations in the light of the suggested procedure. The main questions are also addressed in a more general context. The analysis is extended to the problem of a segmented labour market as well.
Abstract:
This study focuses on empirical investigations and seeks implications by utilizing three different methodologies to test various aspects of trader behavior. The first methodology utilizes Prospect Theory to determine trader behavior during periods of extreme wealth contraction. Secondly, a threshold model to examine the sentiment variable is formulated, and thirdly a study is made of the contagion effect and trader behavior. The connection between consumers' sense of financial well-being or sentiment and stock market performance has been studied at length. However, without data on actual versus experimental performance, implications based on this relationship are meaningless. The empirical agenda included examining a proprietary file of daily trader activities over a five-year period. Overall, during periods of extreme wealth-altering conditions, traders "satisfice" rather than choose the "best" alternative. A trader's degree of loss aversion depends on his/her prior investment performance. A model that explains the behavior of traders during periods of turmoil is developed. Prospect Theory and the data file influenced the design of the model. Additional research included testing a model that permitted the data to signal the crisis through a threshold model. The third empirical study sought to investigate the existence of contagion caused by declining global wealth effects using evidence from the mining industry in Canada. Contagion, where a financial crisis begins locally and subsequently spreads elsewhere, has been studied in terms of correlations among similar regions. The results provide support for Prospect Theory in two out of the three empirical studies. The dissertation emphasizes the need for specifying precise, testable models of investors' expectations by providing tools to identify paradoxical behavior patterns. True enhancements in this field must include empirical research utilizing reliable data sources to mitigate data mining problems and allow researchers to distinguish between expectations-based and risk-based explanations of behavior. Through this type of research, it may be possible to systematically exploit "irrational" market behavior.
Abstract:
This case study examines the factors that shaped the identity and landscape of a small island-urban-village between the north and south forks of the Middle River and north of an urban area in Broward County, Florida. The purpose of the study is to understand how Wilton Manors was transformed from a “whites only” enclave to the contemporary upscale, diverse, and third gayest city in the U.S., by positing that a dichotomy for urban places exists between their exchange value, as seen by Logan and Molotch, and the use value produced through everyday activity, according to Lefebvre. Qualitative methods were used to gather evidence for reaching conclusions about the relationship among the worldview of residents, the tension between exchange value and use value in the restructuration of the city, and the transformation of Wilton Manors at the end of the 1990s. Semi-structured, in-depth interviews were conducted with 21 contemporary participants. In addition, thirteen taped CDs of selected members of founding families, recorded in the 1970s, were analyzed using a grounded theory approach. My findings indicate that Wilton Manors’ residents share a common worldview which incorporates social inclusion as a use value and individual agency in the community. This shared worldview can be traced to selected city pioneers whose civic mindedness helped shape the city’s identity and laid the foundation for future restructuration. Currently, residents’ quality of life, reflected in the city’s use value, is more significant than exchange value as a primary force in the decisions that are made about the city’s development. With innovative ideas, buildings emulating the new urban mixed-use design, and a reputation as the third gayest city in the United States, Wilton Manors reflects a worldview in which residents protect use value as primary over market value in the decisions they make that shape their city, though not without contestation.
Abstract:
This study explored the relationship between workplace discrimination climate and team effectiveness through three serial mediators: collective value congruence, team cohesion, and collective affective commitment. As more individuals of marginalized groups diversify the workforce and as more organizations move toward team-based work (Cannon-Bowers & Bowers, 2010), it is imperative to understand how employees perceive their organization’s discriminatory climate as well as its effect on teams. An archival dataset consisting of 6,824 respondents was used, resulting in 332 work teams with five or more members each. The data were collected as part of an employee climate survey administered in 2011 throughout the United States’ Department of Defense. The results revealed that the indirect effect through M1 (collective value congruence) and M2 (team cohesion) best accounted for the relationship between workplace discrimination climate (X) and team effectiveness (Y). That is, on average, teams that reported a greater climate for workplace discrimination also reported less collective value congruence with their organization (a1 = -1.07, p < .001). With weaker shared perceptions of value congruence there is less team cohesion (d21 = .45, p < .001), and with less team cohesion there is less team effectiveness (b2 = .57, p < .001). In addition, because of theoretical overlap, this study makes the case for studying workplace discrimination under the broader construct of workplace aggression within the I/O psychology literature. Exploratory and confirmatory factor analyses found that workplace discrimination based on five types of marginalized-group membership (race/ethnicity, gender, religion, age, and disability) was best explained by a three-factor model comprising career obstruction based on age and disability bias (CO), verbal aggression based on multiple types of bias (VA), and differential treatment based on racial/ethnic bias (DT). There was initial support for the claim that workplace discrimination items covary not only by type but also by form (i.e., nonviolent aggressive behaviors). Therefore, the form of workplace discrimination is just as important as the type when studying climate perceptions and team-level effects. Theoretical and organizational implications are also discussed.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
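The following is a minimal illustrative sketch (Python with numpy; the dimensions and parameter values are assumptions made for illustration). It builds the probability tensor implied by a PARAFAC-style latent class representation of a multivariate categorical pmf; the proposed collapsed Tucker decomposition itself is not reproduced here.

    # Sketch: PARAFAC-style latent class representation of a categorical joint pmf,
    # p(x1,...,xp) = sum_h nu_h * prod_j lambda_{j,h}(x_j). Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    p, d, k = 3, 4, 2                                # p variables, d levels each, k latent classes

    nu = rng.dirichlet(np.ones(k))                   # latent class weights
    lam = rng.dirichlet(np.ones(d), size=(p, k))     # lam[j, h] = pmf of x_j given class h

    # The joint pmf is a sum of k rank-1 tensors, so its nonnegative rank is at most k.
    P = np.zeros((d,) * p)
    for h in range(k):
        P += nu[h] * np.einsum('a,b,c->abc', lam[0, h], lam[1, h], lam[2, h])

    assert np.isclose(P.sum(), 1.0)                  # a valid joint probability tensor
    print(P.shape, P.sum())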
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
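The following is a minimal illustrative sketch (Python with numpy/scipy; the data, design matrix, and Gaussian prior are assumptions made for illustration). It shows a generic Laplace-style Gaussian approximation to the posterior of a small Poisson log-linear model; it is not the optimal Gaussian approximation under Diaconis--Ylvisaker priors derived in Chapter 4.

    # Sketch: Laplace-style Gaussian approximation to the posterior of a small
    # Poisson log-linear model (independence model for a 2x2 table). Illustrative only;
    # not the optimal approximation or the Diaconis--Ylvisaker prior of Chapter 4.
    import numpy as np
    from scipy.optimize import minimize

    y = np.array([18.0, 7.0, 12.0, 3.0])          # cell counts of a 2x2 table, flattened
    X = np.array([[1, 0, 0],                       # design: intercept, row effect, column effect
                  [1, 0, 1],
                  [1, 1, 0],
                  [1, 1, 1]], dtype=float)
    tau2 = 10.0                                    # variance of a vague Gaussian prior on beta

    def neg_log_post(beta):
        eta = X @ beta
        loglik = np.sum(y * eta - np.exp(eta))     # Poisson log-likelihood (up to constants)
        logprior = -0.5 * np.dot(beta, beta) / tau2
        return -(loglik + logprior)

    beta_map = minimize(neg_log_post, x0=np.zeros(3), method="BFGS").x

    # Gaussian approximation: mean at the posterior mode, covariance from the inverse
    # Hessian of the negative log posterior evaluated at the mode.
    mu_hat = np.exp(X @ beta_map)
    H = X.T @ (mu_hat[:, None] * X) + np.eye(3) / tau2
    Sigma = np.linalg.inv(H)
    print("approximate posterior mean:", beta_map)
    print("approximate posterior covariance:", Sigma)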
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
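The following is a minimal illustrative sketch (Python with numpy; the simulated series and threshold level are assumptions made for illustration). It shows the basic ingredient of the framework described above: the waiting times between exceedances of a high threshold in a time-indexed series. The max-stable velocity processes and the formal definition of tail dependence are not reproduced here.

    # Sketch: waiting times between exceedances of a high threshold in a toy series.
    # Clustering of extremes shows up as an excess of very short gaps; under weak
    # extremal dependence the gaps look roughly geometric. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_t(df=3, size=10_000)      # heavy-tailed toy series
    u = np.quantile(x, 0.98)                   # high threshold (98th percentile)

    exceed_times = np.flatnonzero(x > u)       # indices of threshold exceedances
    waiting_times = np.diff(exceed_times)      # gaps between successive exceedances

    print("number of exceedances:", exceed_times.size)
    print("mean / median waiting time:", waiting_times.mean(), np.median(waiting_times))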
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
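The following is a minimal illustrative sketch (Python with numpy; the model, subsample size, and step size are assumptions made for illustration). It shows one generic kind of approximating kernel of the sort considered: a random-walk Metropolis step whose log-likelihood is evaluated on a random subset of the data and rescaled. The error-versus-computational-budget analysis of Chapter 6 is not reproduced.

    # Sketch: approximate Metropolis step using a rescaled subsample log-likelihood
    # for a normal-mean model with a flat prior. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(loc=1.5, scale=1.0, size=50_000)   # toy data with unknown mean theta
    m = 500                                              # subsample size per iteration

    theta, chain = 0.0, []
    for _ in range(5_000):
        prop = theta + 0.01 * rng.standard_normal()      # random-walk proposal
        idx = rng.choice(data.size, size=m, replace=False)
        scale = data.size / m                            # rescale subsample to full-data size
        cur_ll = scale * np.sum(-0.5 * (data[idx] - theta) ** 2)
        prop_ll = scale * np.sum(-0.5 * (data[idx] - prop) ** 2)
        if np.log(rng.uniform()) < prop_ll - cur_ll:     # approximate MH acceptance
            theta = prop
        chain.append(theta)

    print("approximate posterior mean of theta:", np.mean(chain[1_000:]))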
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
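The following is a minimal illustrative sketch (Python with numpy/scipy; the data sizes, iteration counts, and intercept-only model are assumptions made for illustration). It implements the standard truncated-normal (Albert-Chib) data augmentation Gibbs sampler for a probit model in a rare-event setting like the one described above, and reports the lag-1 autocorrelation of the draws as a crude mixing diagnostic; the spectral-gap analysis of Chapter 7 is not reproduced.

    # Sketch: truncated-normal data augmentation Gibbs sampler for an intercept-only
    # probit model, y_i ~ Bernoulli(Phi(beta)), with large n and few successes.
    # Illustrative only; high lag-1 autocorrelation indicates slow mixing.
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(3)
    n, n_success = 10_000, 20                  # large sample, rare successes
    y = np.zeros(n)
    y[:n_success] = 1.0

    beta, draws = 0.0, []
    for _ in range(2_000):
        # Latent utilities z_i ~ N(beta, 1), truncated to z_i > 0 if y_i = 1 and z_i < 0 otherwise.
        a = np.where(y == 1, -beta, -np.inf)   # standardized lower bounds
        b = np.where(y == 1, np.inf, -beta)    # standardized upper bounds
        z = truncnorm.rvs(a, b, loc=beta, scale=1.0, random_state=rng)
        beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))   # beta | z under a flat prior
        draws.append(beta)

    draws = np.array(draws[500:])
    lag1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
    print("lag-1 autocorrelation of beta draws:", round(lag1, 3))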
Abstract:
Waterways have many more ties with society than their role as a medium for the transportation of goods alone. Waterway systems offer society many kinds of socio-economic value. Waterway authorities responsible for management and (re)development need to optimize the public benefits of the investments made. However, due to the many trade-offs in the system, these agencies have multiple options for achieving this goal. Because they can invest resources in a great many different ways, they need a way to calculate the efficiency of the decisions they make. Transaction cost theory, and the analysis that goes with it, has emerged as an important means of justifying efficiency decisions in the economic arena. To improve our understanding of the value-creating and coordination problems facing waterway authorities, such a framework is applied to this sector. This paper describes the findings for two cases, which reflect two common multi-trade-off situations in waterway (re)development. Our first case study focuses on the Miami River, an urban revitalized waterway. The second case describes the Inner Harbour Navigation Canal in New Orleans, a canal and lock in an industrialized zone that needs an upgrade to keep pace with market developments. The transaction cost framework appears to be useful in exposing a wide variety of value-creating opportunities and the resistances that come with them. These insights can offer infrastructure managers guidance on how to seize these opportunities.
Abstract:
The richness of dance comes from the need to work with an individual body. Still, the body of the dancer belongs to a plural context, crossed by artistic and social traditions, which locate the artists in a given field. We claim that role conflict is an essential component of the structure of collective artistic creativity. We address the production of discourse in a British dance company, with data that stems from the ethnography ‘Dance and Cognition’, directed by David Kirsh at the University of California together with Wayne McGregor-Random Dance. Our Critical Discourse Analysis is based on multiple interviews with the dancers and the choreographer. Our findings show how creativity in dance seems to be empirically observable, and thus embodied and distributed, shaped by the dance habitus of the particular social context.
Abstract:
Introduction
This paper outlines an innovative approach to auditing and evaluating the content of a management and leadership module for undergraduate nursing students after their final management clinical placement. Evaluations of teaching normally take place at the end of a module and therefore do not properly reflect the value of the teaching in relation to subsequent practical clinical experience.
Aim
This audit and evaluation sought to explore both the practical value of the teaching and learning and the degree to which the teaching reflected the NMC Standards of Education and Learning (2010, domain 3).
Methods
Having piloted the evaluative tool with an earlier cohort of nursing students, the evaluation combined a quantitative assessment employing a Personal Response System (n = 172) with a qualitative dimension (n = 116) delivering paper-based comments and reflections from students on the value and practicality of the module's theoretical teaching for their final clinical management experience. The quantitative audit data were analysed for frequencies and cross-tabulations, and the qualitative audit data were thematically analysed.
Results
Results suggest that a significant proportion of the students appreciated the standard of teaching but, more importantly, ‘valued or highly valued’ the teaching and learning in relation to how it helped to significantly inform their management placement experience. A smaller proportion of the students highlighted limitations and areas in which further improvement could be made to the teaching and learning in the module.
Conclusion
The students' evaluation of the practical value of the teaching and learning in the theoretical management module was significantly positive. This proved a useful auditing approach for assessing the theoretical teaching against students' Level 3 clinical experience, and it facilitated significant recommendations for developing the teaching and learning to better reflect the practice needs of nursing students.
Abstract:
This thesis makes use of the unique reregulation of the pharmaceutical monopoly in Sweden to critically examine intraindustry firm heterogeneity. It contributes to existing divestiture research by studying the dynamics between reconfigurations of value constellations and their effects on the value creation of divested pharmacies. Because the findings showed that the predominant theory of intraindustry firm heterogeneity could not explain firm performance, the value constellation concept was applied, as it captured the phenomena. A patterned finding showed how reconfigurations of value constellations in a reregulated market characterized by strict rules, regulations, and high competition did not generate additional value for firms in the short term. My study reveals that value creation is hampered in situations where rules and regulations significantly affect firms' ability to reconfigure their value constellations. The key practical implication is an alternative perspective on fundamental aspects of the reregulation and on how policy-makers may impede firm performance and the intended creation of new value, not only for firms but for society as a whole.
Abstract:
Purpose: There is a need for theory development within the field of humanitarian logistics to understand logistics needs in the different stages of a crisis and how to meet them. This paper aims to discuss three dimensions identified in logistics and organization theories and how they relate to three different cases of humanitarian logistics operations: the regional concept of the International Federation of Red Cross and Red Crescent Societies, the development and working of the United Nations Joint Logistics Centre, and the coordination challenges of military logistics in UN-mandated peacekeeping operations. The purpose is to build a framework to be used in further studies. Design/methodology/approach: A framework for the study of humanitarian logistics along three dimensions is developed, followed by a discussion of the chosen cases in relation to these dimensions. The framework will be used as a basis for the case studies to be undertaken for the purpose of understanding and identifying new questions and needs for other or revised concepts from theory. Findings: The paper shows the relevance of a wide literature to the issues pertinent to humanitarian logistics. There is considerable promise in the extant literature on logistics, SCM and coordination, but this needs to be confronted with the particular issues seen in the humanitarian logistics setting to achieve further theory development. Originality/value: The major contribution of the paper lies in the breadth of theoretical perspectives presented and combined in a preliminary theoretical framework. This is applied more specifically in the three case studies described in the paper.