941 results for Characteristic Initial Value Problem


Relevance:

30.00%

Publisher:

Abstract:

AIM: MRI and PET with 18F-fluoro-ethyl-tyrosine (FET) have been increasingly used to evaluate patients with gliomas. Our purpose was to assess the additive value of MR spectroscopy (MRS), diffusion imaging and dynamic FET-PET for glioma grading. PATIENTS, METHODS: 38 patients (aged 42 ± 15 years, F/M ratio 0.46) with untreated, histologically proven brain gliomas were included. All underwent conventional MRI, MRS, diffusion sequences, and FET-PET within 3-4 weeks. The performance of the tumour FET time-activity curve, the early-to-middle SUVmax ratio, the choline/creatine ratio and the ADC histogram distribution pattern for glioma grading was assessed against histology. Combinations of these parameters, and their respective odds, were also evaluated. RESULTS: The tumour time-activity curve reached the best accuracy (67%) when taken alone to distinguish between low- and high-grade gliomas, followed by ADC histogram analysis (65%). Combining the time-activity curve with ADC histogram analysis improved sensitivity from 67% to 86% and specificity from 63-67% to 100% (p < 0.008). On multivariate logistic regression analysis, a negative slope of the tumour FET time-activity curve nevertheless remained the best predictor of high-grade glioma (odds 7.6, SE 6.8, p = 0.022). CONCLUSION: The combination of dynamic FET-PET and diffusion MRI reached good performance for glioma grading. FET-PET/MR may be highly relevant in the initial assessment of primary brain tumours.

Relevance:

30.00%

Publisher:

Abstract:

This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.
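
The decision-theoretic core of this analysis can be made concrete with a short sketch. The function below is purely illustrative (the zero loss assigned to correct decisions, the function name and the numerical loss values are assumptions introduced here, not taken from the paper): it compares the expected loss of 'individualizing' against 'not individualizing', given the probability that the selected person is the source.

```python
def individualize(p_source: float, loss_false_ind: float, loss_missed_ind: float) -> bool:
    """Return True if 'individualize' has lower expected loss than 'do not individualize'.

    p_source        : probability that the selected person is the source of the crime stain
    loss_false_ind  : loss attached to individualizing someone who is not the source
    loss_missed_ind : loss attached to not individualizing the true source
    (correct decisions are assigned zero loss -- an illustrative assumption)
    """
    expected_loss_individualize = (1.0 - p_source) * loss_false_ind
    expected_loss_not_individualize = p_source * loss_missed_ind
    return expected_loss_individualize < expected_loss_not_individualize

# With a false individualization judged 100 times worse than a missed one,
# individualizing is preferred only when p_source exceeds 100/101 (about 0.99).
print(individualize(0.995, loss_false_ind=100.0, loss_missed_ind=1.0))  # True
print(individualize(0.95,  loss_false_ind=100.0, loss_missed_ind=1.0))  # False
```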

Relevance:

30.00%

Publisher:

Abstract:

Executive Summary

I. Survey
The Task Force conducted a wide-ranging survey of more than 9,000 licensed Iowa attorneys and judges to obtain their input on a variety of civil justice system topics. The survey results helped inform the Task Force of problem areas in Iowa's civil justice system.

II. Two-Tier Justice System
The Task Force recommends a pilot program based on a two-tier civil justice system. A two-tier system would streamline litigation processes, including rules of evidence and discovery disclosures, and reduce the litigation costs of certain cases falling below a threshold dollar value.

III. One Judge/One Case and Date Certain for Trial
Some jurisdictions in Iowa have adopted one judge/one case and date certain for trial in certain cases. The assignment of one judge to each case for the life of the matter and the establishment of dates certain for civil trials could enhance Iowans' access to the courts, improve judicial management, promote consistency and adherence to deadlines, and reduce discovery excesses.

IV. Discovery Processes
Reforms addressing inefficient discovery processes will reduce delays in and costs of litigation. Such measures include adopting an aspirational purpose for discovery rules to "secure the just, speedy, and inexpensive determination of every action," holding discovery proportional to the size and nature of the case, requiring initial disclosures, limiting the number of expert witnesses, and enforcing existing rules.

V. Expert Witness Fees
The Task Force acknowledges the probable need to revisit the statutory additional daily compensation limit for expert witness fees. Leaving the compensation level to the discretion of the trial court is one potential solution.

VI. Jurors
Additions to the standard juror questionnaire would provide a better understanding of potential jurors' backgrounds and suitability for jury service. The Task Force encourages the adoption of more modern juror educational materials and video. Rehabilitation of prospective jurors who express an unwillingness or inability to be fair should include a presumption of dismissal.

VII. Video and Teleconferencing Options
When court resources are constrained both by limited numbers of personnel and by budget cuts, it is logical to look to video and teleconferencing technology to streamline the court process and reduce costs. The judicial branch should embrace technological developments in ways that will not compromise the fairness, dignity, solemnity, and decorum of judicial proceedings.

VIII. Court-Annexed Alternative Dispute Resolution (ADR)
Litigants and practitioners in Iowa are generally satisfied with the current use of private, voluntary ADR for civil cases. There is concern, however, that maintaining the status quo may have steep future costs. Court-annexed ADR is an important aspect of any justice system reform effort, and the Task Force perceives both benefits and detriments to reforming this aspect of the Iowa civil justice system.

IX. Relaxed Requirement of Findings of Fact and Conclusions of Law
A rule authorizing parties to waive findings of fact and conclusions of law could expedite resolution of nonjury civil cases.

X. Business (Specialty) Courts
Specialty business courts have achieved widespread support across the country. In addition, specialty courts provide excellent vehicles for implementing or piloting other court innovations that may be useful in a broader court system context. A business specialty court should be and could be piloted in Iowa within the existing court system framework of the Iowa Judicial Branch.

The appendix, included as a separate document, is 176 pages.

Relevance:

30.00%

Publisher:

Abstract:

Preface The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task, because the variance process is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and so it became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. In the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time-series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question follows naturally: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators based on bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
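
As a rough illustration of the estimation idea described in this preface (not the thesis's actual implementation), the sketch below fits parameters by minimizing a weighted distance between the empirical characteristic function of a series and a model characteristic function given in closed form. A normal characteristic function stands in for the joint unconditional characteristic function derived in the thesis, and the continuum of characteristic-function arguments is replaced by a small grid; both are simplifying assumptions made here for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(x, u):
    """Empirical characteristic function of the sample x at the arguments u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def ecf_objective(theta, x, u, model_cf, weights):
    """Weighted squared distance between empirical and model characteristic functions."""
    diff = empirical_cf(x, u) - model_cf(u, theta)
    return float(np.sum(weights * np.abs(diff) ** 2))

# Stand-in model with a closed-form characteristic function (a normal law), used here
# only to show the mechanics; the thesis works with the joint unconditional CF of the
# affine stochastic volatility jump-diffusion model instead.
def normal_cf(u, theta):
    mu, sigma = theta
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.1, 0.5, size=5_000)      # simulated "returns"
u = np.linspace(-5.0, 5.0, 41)            # discrete grid replacing the continuum of arguments
w = np.exp(-u ** 2)                       # weight function damping large |u|

fit = minimize(ecf_objective, x0=[0.0, 1.0], args=(x, u, normal_cf, w), method="Nelder-Mead")
print(fit.x)   # should land near the simulated values mu = 0.1, |sigma| = 0.5
```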

Relevance:

30.00%

Publisher:

Abstract:

This investigation was initiated to determine the causes of a rutting problem that occurred on Interstate 80 in Adair County. I-80 from Iowa 25 to the Dallas County line was opened to traffic in November, 1960. The original pavement consisted of 4-1/2" of asphalt cement concrete over 12" of rolled stone base and 12" of granular subbase. A 5-1/2" overlay of asphalt cement concrete was placed in 1964. In 1970-1972, the roadway was resurfaced with 3" of asphalt cement concrete. In 1982, an asphalt cement concrete inlay, designed for a 10-year life, was placed in the eastbound lane. The mix designs for all courses met or exceeded all current criteria being used to formulate job mixes. Field construction reports indicate that asphalt usage, densities, field voids and filler bitumen determinations were well within specification limits on a very consistent basis. Field laboratory reports indicate that laboratory voids for the base courses were within the prescribed limits for the base course and below the prescribed limits for the surface course. Instructional memorandums do indicate that extreme caution should be exercised when the voids are at or near the lower limits and traffic is not minimal. There is also a provision that provides for field voids controlling when there is a conflict between laboratory voids and field voids. It appears that contract documents do not adequately address the directions that must be taken when this conflict arises, since it can readily be shown that laboratory voids must be in the very low or dangerous range if field voids are to be kept below the maximum limit under the current density specifications. A rut depth survey of January, 1983, identified little or no rutting on this section of roadway. Cross sections obtained in October, 1983, identified rutting which ranged from 0 to 0.9" with a general trend of the rutting to increase from a value of approximately 0.3" at MP 88 to a rut depth of 0.7" at MP 98. No areas of significant rutting were identified in the inside lane. Structural evaluation with the Road Rater indicated adequate structural capacity and also indicated that the longitudinal subdrains were functioning properly to provide adequate soil support values. Two pavement sections taken from the driving lane indicated very little distortion in the lower 7" base course. Essentially all of the distortion had occurred in the upper 2" base course and the 1-1/2" surface course. Analysis of cores taken from this section of Interstate 80 indicated very little densification of either the surface or the upper or lower base courses. The asphalt cement content of both the Type B base courses and the Type A surface course was substantially higher than the intended asphalt cement content. The only explanation for this is that the salvaged material contained a greater percent of asphalt cement than initial extractions indicated. The penetration and viscosity of the blend of new asphalt cement and the asphalt cement recovered from the salvaged material were relatively close to that intended for this project. The 1983 ambient temperatures were extremely high from June 20 through September 10. The rutting is a result of a combination of adverse factors including (1) high asphalt content, (2) the difference between laboratory and field voids, (3) lack of intermediate-sized crushed particles, and (4) high ambient temperatures.
The high asphalt content in the 2" upper base course produced an asphalt concrete mix that did not exhibit satisfactory resistance to deformation under heavy loading. The majority of the rutting resulted from distortion of the 2" upper base lift. Heater planing is recommended as an interim corrective action. The further recommendation is to design for a 20-year alternative by removing 2-1/2" of material from the driving lane by milling and replacing it with 2-1/2" of asphalt concrete with improved stability. This would be followed by placing 1-1/2" of high-quality resurfacing on the entire roadway. Other recommendations include improved density and stability requirements for asphalt concrete on high-traffic roadways.

Relevance:

30.00%

Publisher:

Abstract:

Background: Screening for elevated blood pressure (BP) in children has been advocated to identify hypertension early. However, identification of children with sustained elevated BP is challenging because of high BP variability. The value of an elevated BP measurement during childhood and adolescence for predicting future elevated BP is not well described. Objectives: We assessed the positive (PPV) and negative (NPV) predictive values of high BP for sustained elevated BP in cohorts of children of the Seychelles, a rapidly developing island state in the African region. Methods: Serial school-based surveys of weight, height, and BP were conducted yearly between 1998 and 2006 among all students of the country in four school grades: kindergarten (G0, mean age (SD): 5.5 (0.4) yr), G4 (9.2 (0.4) yr), G7 (12.5 (0.4) yr) and G10 (15.6 (0.5) yr). We constituted three cohorts of children examined twice at a 3-4 year interval: 4,557 children examined at G0 and G4, 6,198 at G4 and G7, and 6,094 at G7 and G10. The same automated BP measurement devices were used throughout the study. BP was measured twice at each exam and averaged. Obesity and elevated BP were defined using the CDC criteria (BMI ≥ 95th sex- and age-specific percentile) and the NHBPEP criteria (BP ≥ 95th sex-, age-, and height-specific percentile), respectively. Results: The prevalence of obesity was 6.1% at G0, 7.1% at G4, 7.5% at G7, and 6.5% at G10. The prevalence of elevated BP was 10.2% at G0, 9.9% at G4, 7.1% at G7, and 8.7% at G10. Among children with elevated BP at the initial exam, the PPV for still having elevated BP at follow-up was low but increased with age: 13% between G0 and G4, 19% between G4 and G7, and 27% between G7 and G10. Among obese children with elevated BP, the PPV was higher: 33%, 35% and 39%, respectively. Overall, the probability for children with normal BP to remain in that category 3-4 years later (NPV) was 92%, 95%, and 93%, respectively. By comparison, the PPV for children initially obese to remain obese was much higher, at 71%, 71%, and 62% (G7-G10), respectively. The NPV (i.e. the probability of remaining at normal weight) was 94%, 96%, and 98%, respectively. Conclusion: During childhood and adolescence, an elevated BP measured on one occasion is a weak predictor of sustained elevated BP 3-4 years later. In obese children, it is a better predictor.
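
The predictive values reported above follow directly from 2x2 follow-up counts. A minimal sketch (the counts below are illustrative and merely chosen to reproduce the reported 13% PPV and 92% NPV for the G0-G4 cohort; they are not the study's actual cell counts):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from a 2x2 follow-up table.

    tp: elevated BP at both exams        fp: elevated at the first exam only
    tn: normal BP at both exams          fn: normal at the first exam, elevated later
    """
    ppv = tp / (tp + fp)   # P(elevated at follow-up | elevated at first exam)
    npv = tn / (tn + fn)   # P(normal at follow-up   | normal at first exam)
    return ppv, npv

# Illustrative counts only: 100 children elevated at G0 of whom 13 are still elevated
# at G4, and 900 children normal at G0 of whom 828 are still normal at G4.
print(predictive_values(tp=13, fp=87, tn=828, fn=72))   # (0.13, 0.92)
```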

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper here intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that the latter two requirements (i) and (ii) should not be a cause of concern.
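
One common way to formalize this point, under simplifying assumptions introduced here (equal prior probabilities over N+1 potential sources and independence of matching events), is the following; it is an illustration of the abstract's conclusion, not a quotation of the paper's own derivation:

```latex
% Minimal formalization under simplifying assumptions (not the paper's own derivation).
% Suppose N+1 individuals could be the source of the stain, each with the same prior
% probability, and let gamma be the probability that a person unrelated to the stain
% nevertheless shows the matching characteristics.  After the search, the suspect
% matches and the other n-1 database members are excluded.  Then
\[
  \Pr(\text{suspect is the source} \mid E)
    = \frac{1}{1 + \gamma\,(N - n + 1)}
    \;\ge\; \frac{1}{1 + \gamma N},
\]
% i.e. excluding the remaining n-1 database members can only raise the posterior
% relative to a case without a database search, so nothing in this simple model calls
% for dividing the likelihood ratio by the size n of the searched database.
```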

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To evaluate the prognostic and predictive value of Ki-67 labeling index (LI) in a trial comparing letrozole (Let) with tamoxifen (Tam) as adjuvant therapy in postmenopausal women with early breast cancer. PATIENTS AND METHODS: Breast International Group (BIG) trial 1-98 randomly assigned 8,010 patients to four treatment arms comparing Let and Tam with sequences of each agent. Of 4,922 patients randomly assigned to receive 5 years of monotherapy with either agent, 2,685 had primary tumor material available for central pathology assessment of Ki-67 LI by immunohistochemistry and had tumors confirmed to express estrogen receptors after central review. The prognostic and predictive value of centrally measured Ki-67 LI on disease-free survival (DFS) was assessed among these patients using proportional hazards modeling, with Ki-67 LI values dichotomized at the median value of 11%. RESULTS: Higher values of Ki-67 LI were associated with adverse prognostic factors and with worse DFS (hazard ratio [HR; high:low] = 1.8; 95% CI, 1.4 to 2.3). The magnitude of the treatment benefit for Let versus Tam was greater among patients with high tumor Ki-67 LI (HR [Let:Tam] = 0.53; 95% CI, 0.39 to 0.72) than among patients with low tumor Ki-67 LI (HR [Let:Tam] = 0.81; 95% CI, 0.57 to 1.15; interaction P = .09). CONCLUSION: Ki-67 LI is confirmed as a prognostic factor in this study. High Ki-67 LI levels may identify a patient group that particularly benefits from initial Let adjuvant therapy.
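
A minimal sketch of the kind of analysis described (proportional hazards modelling with a marker dichotomized at its median and a treatment-by-marker interaction). The data, the column names and the choice of the lifelines library are assumptions made here for illustration; this is not the trial's code or data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500

# Synthetic stand-in data (NOT the BIG 1-98 data): follow-up time, event indicator,
# Ki-67 labeling index, and treatment arm (1 = letrozole, 0 = tamoxifen).
df = pd.DataFrame({
    "time": rng.exponential(8.0, n),
    "event": rng.integers(0, 2, n),
    "ki67": rng.gamma(2.0, 8.0, n),
    "letrozole": rng.integers(0, 2, n),
})

# Dichotomize Ki-67 at the sample median (the trial dichotomized at the median of 11%).
df["ki67_high"] = (df["ki67"] > df["ki67"].median()).astype(int)
# Treatment-by-marker interaction term for the heterogeneity (predictive value) test.
df["letrozole_x_ki67_high"] = df["letrozole"] * df["ki67_high"]

cph = CoxPHFitter()
cph.fit(df[["time", "event", "ki67_high", "letrozole", "letrozole_x_ki67_high"]],
        duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios are exp(coef); the interaction row is the heterogeneity test
```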

Relevance:

30.00%

Publisher:

Abstract:

Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following the assessment of content validity, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
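
Two quantities mentioned above, Cronbach's alpha and a per-respondent exposure index combining intensity and frequency, can be sketched briefly. The data are random placeholders and the product-sum form of the index is an assumption (the abstract does not give the exact IEEC formula):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(0)
# Placeholder responses: 205 respondents x 19 scenarios, each rated for intensity and
# frequency on a 0-4 scale (random data, so alpha will be near zero; correlated real
# questionnaire responses would give values such as the 0.882 reported above).
intensity = rng.integers(0, 5, size=(205, 19)).astype(float)
frequency = rng.integers(0, 5, size=(205, 19)).astype(float)

# One plausible per-respondent exposure index: sum over scenarios of intensity x frequency
# (an assumed form; the IEEC formula itself is not spelled out in the abstract).
ieec = (intensity * frequency).sum(axis=1)

print(cronbach_alpha(intensity))
print(ieec[:5])
```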

Relevance:

30.00%

Publisher:

Abstract:

Summary This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested, and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement on firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to get control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies {e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation}, which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies which see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.

Relevance:

30.00%

Publisher:

Abstract:

General Introduction These three chapters, while fairly independent from each other, study economic situations in incomplete contract settings. They are the product both of the academic freedom my advisors granted me (and in this sense reflect my personal interests) and of their interested feedback. The content of each chapter can be summarized as follows. Chapter 1: Inefficient durable-goods monopolies. In this chapter we study the efficiency of an infinite-horizon durable-goods monopoly model with a finite number of buyers. We find that, while all pure-strategy Markov Perfect Equilibria (MPE) are efficient, there also exist previously unstudied inefficient MPE where high-valuation buyers randomize their purchase decision while trying to benefit from low prices which are offered once a critical mass has purchased. Real-time delay, an unusual monopoly distortion, is the result of this attrition behavior. We conclude that neither technological constraints nor concern for reputation are necessary to explain inefficiency in monopolized durable-goods markets. Chapter 2: Downstream mergers and producer's capacity choice: why bake a larger pie when getting a smaller slice? In this chapter we study the effect of downstream horizontal mergers on the upstream producer's capacity choice. Contrary to conventional wisdom, we find a non-monotonic relationship: horizontal mergers induce a higher upstream capacity if the cost of capacity is low, and a lower upstream capacity if this cost is high. We explain this result by decomposing the total effect into two competing effects: a change in hold-up and a change in bargaining erosion. Chapter 3: Contract bargaining with multiple agents. In this chapter we study a bargaining game between a principal and N agents when the utility of each agent depends on all agents' trades with the principal. We show, using the Potential, that equilibrium payoffs coincide with the Shapley value of the underlying coalitional game with an appropriately defined characteristic function, which under common assumptions coincides with the principal's equilibrium profit in the offer game. Since the problem accounts for differences in information and agents' conjectures, the outcome can be either efficient (e.g. public contracting) or inefficient (e.g. passive beliefs).
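
Since Chapter 3 ties equilibrium payoffs to the Shapley value of a coalitional game, a minimal sketch of the Shapley value computation may help. The toy characteristic function below is a hypothetical example introduced here, not a game taken from the thesis:

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Shapley value: each player's marginal contribution averaged over all join orders.

    players : list of player labels
    v       : characteristic function, frozenset of players -> worth of that coalition
    """
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining the players before it in this order
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: value[p] / n_orders for p in players}

# Toy game: a principal P needs at least one of two agents A, B to generate a surplus of 1.
def v(coalition):
    return 1.0 if "P" in coalition and ({"A", "B"} & coalition) else 0.0

print(shapley_value(["P", "A", "B"], v))   # {'P': 0.666..., 'A': 0.166..., 'B': 0.166...}
```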

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to study cultural, functional and value diversity, their relationship to innovativeness and learning, and to offer means for managing diversity. In addition, interviews with line managers are used to examine how diversity is currently experienced in the case organisation. By identifying the current state of diversity in the organisation, improvements to diversity management can be proposed. The research and data-collection method used is the qualitative focus group interview. The study produced a clear picture of the significance of cultural, functional and value diversity for organisational innovativeness and learning, and identified means for managing these types of diversity. An important finding of the study is that diversity has a positive effect on organisational innovativeness when it is managed effectively and when the organisational environment supports open discussion and the sharing of opinions. When examining the current state of diversity in the case organisation, it was found that the problem is not a lack of diversity, but rather that the organisation does not know how to exploit it. The organisation does not support the free expression and use of different views and opinions, and the exploitation of diversity is therefore incomplete. In the interviews, changing the culture in a more open direction and improving managers' leadership skills were seen as important factors in making better use of diversity.

Relevance:

30.00%

Publisher:

Abstract:

In the classical theorems of extreme value theory the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. The vast majority of the literature on the subject deals with affine normalization. We argue that more general normalizations are natural from a mathematical and physical point of view and work them out. The problem is approached using the language of renormalization-group transformations in the space of probability densities. The limit distributions are fixed points of the transformation and the study of its differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
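
A compact way to write down the renormalization-group transformation alluded to above (the notation is chosen here for illustration and may differ from the paper's conventions):

```latex
% For n i.i.d. variables with distribution function F, the maximum has distribution F^n.
% Rescaling the block maximum through a normalization map g_n (affine, g_n(x) = a_n x + b_n,
% in the classical theory) defines the transformation
\[
  (T_n F)(x) = \bigl[ F\bigl( g_n(x) \bigr) \bigr]^{n} .
\]
% Limit laws are the fixed points T_n F = F; with affine g_n these are the classical
% max-stable (Gumbel, Fréchet, Weibull) families, while more general normalizations admit
% further fixed points.  Linearizing T_n around a fixed point gives the local structure of
% the domain of attraction and the finite-size corrections.
```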

Relevance:

30.00%

Publisher:

Abstract:

This book explores the Russian synthesis that occurred in Russian economic thought between 1890 and 1920. This includes all the attempts at synthesis between classical political economy and marginalism; the labour theory of value and marginal utility; and value and prices. The various ways in which Russian economists have approached these issues have generally been addressed in a piecemeal fashion in the history of economic thought literature. This book returns to the primary sources in the Russian language, translating many into English for the first time, and offers the first comprehensive history of the Russian synthesis. The book first examines the origins of the Russian synthesis by determining the conditions of reception in Russia of the various theories of value involved: the classical theories of value of Ricardo and Marx on one side; the marginalist theories of prices of Menger, Walras and Jevons on the other. It then reconstructs the three generations of the Russian synthesis: the first (Tugan-Baranovsky); the second, the mathematicians (Dmitriev, Bortkiewicz, Shaposhnikov, Slutsky, etc.); and the last (Yurovsky), with an emphasis on Tugan-Baranovsky's initial impetus. This volume is suitable for those studying economic theory and philosophy as well as those interested in the history of economic thought.