952 results for Nonhomogeneous initial-boundary-value problems
Abstract:
A general formulation of boundary conditions for semiconductor-metal contacts follows from a phenomenological procedure sketched here. The resulting boundary conditions, which incorporate only physically well-defined parameters, are used to study the classical unipolar drift-diffusion model for the Gunn effect. The analysis of its stationary solutions reveals the presence of bistability and hysteresis for a certain range of contact parameters. When no stable stationary solution exists, several types of Gunn effect are predicted to occur in the model, depending on the values of the injecting-contact parameters appearing in the boundary condition. In this way, the critical role played by contacts in the Gunn effect is clearly established.
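For reference, a minimal sketch of the unipolar drift-diffusion model typically meant here, in one space dimension (the abstract gives no formulas, so the normalisation, the N-shaped drift velocity v(E) and the contact function j_c are assumptions):

\[ \varepsilon\,\frac{\partial E}{\partial x} = e\,(n - N_D), \qquad \frac{\partial n}{\partial t} + \frac{\partial}{\partial x}\Big( n\,v(E) - D\,\frac{\partial n}{\partial x} \Big) = 0, \qquad J(0,t) = j_c\big(E(0,t)\big), \]

where n is the electron density, E the electric field, N_D the doping density, and the last relation is an injecting-contact boundary condition of the phenomenological type the abstract describes, with j_c encoding the contact parameters.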
Abstract:
Background: Screening of elevated blood pressure (BP) in children has been advocated to identify hypertension early. However, identification of children with sustained elevated BP is challenging due to the high BP variability. The value of an elevated BP measure during childhood and adolescence for the prediction of future elevated BP is not well described. Objectives: We assessed the positive (PPV) and negative (NPV) predictive value of high BP for sustained elevated BP in cohorts of children of the Seychelles, a rapidly developing island state in the African region. Methods: Serial school-based surveys of weight, height, and BP were conducted yearly between 1998 and 2006 among all students of the country in four school grades (kindergarten [G0, mean age (SD): 5.5 (0.4) yr], G4 [9.2 (0.4) yr], G7 [12.5 (0.4) yr] and G10 [15.6 (0.5) yr]). We constituted three cohorts of children examined twice at a 3-4 year interval: 4,557 children examined at G0 and G4, 6,198 at G4 and G7, and 6,094 at G7 and G10. The same automated BP measurement devices were used throughout the study. BP was measured twice at each exam and averaged. Obesity and elevated BP were defined using the CDC criteria (BMI ≥ 95th sex- and age-specific percentile) and the NHBPEP criteria (BP ≥ 95th sex-, age-, and height-specific percentile), respectively. Results: Prevalence of obesity was 6.1% at G0, 7.1% at G4, 7.5% at G7, and 6.5% at G10. Prevalence of elevated BP was 10.2% at G0, 9.9% at G4, 7.1% at G7, and 8.7% at G10. Among children with elevated BP at the initial exam, the PPV for still having elevated BP at follow-up was low but increased with age: 13% between G0 and G4, 19% between G4 and G7, and 27% between G7 and G10. Among obese children with elevated BP, the PPV was higher: 33%, 35% and 39%, respectively. Overall, the probability for children with normal BP to remain in that category 3-4 years later (NPV) was 92%, 95%, and 93%, respectively. By comparison, the PPV for children initially obese to remain obese was much higher at 71%, 71%, and 62% (G7-G10), respectively. The NPV (i.e. the probability of remaining at normal weight) was 94%, 96%, and 98%, respectively. Conclusion: During childhood and adolescence, having an elevated BP at one occasion is a weak predictor of sustained elevated BP 3-4 years later. In obese children, it is a better predictor.
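For clarity on the quantities reported here: in this longitudinal setting the PPV is the probability that a child classified positive (elevated BP, or obese) at the initial exam is still positive at the follow-up exam, and the NPV is the probability that a child classified negative initially is still negative at follow-up,

\[ \mathrm{PPV} = \frac{TP}{TP + FP}, \qquad \mathrm{NPV} = \frac{TN}{TN + FN}, \]

so, for example, the reported PPV of 27% between G7 and G10 means that 27% of children with elevated BP at G7 still had elevated BP at G10.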
Abstract:
The first phase of this research involved an effort to identify the issues relevant to gaining a better understanding of the County Engineering profession. A related objective was to develop strategies to attract responsible, motivated and committed professionals to pursue County Engineering positions. In an era where a large percentage of County Engineers are reaching retirement age, the shrinking employment pool may eventually jeopardize the quality of secondary road systems not only in Iowa, but nationwide. As we move toward the 21st century, in an era of declining resources, it is likely that professional staff members in charge of secondary roads will find themselves working with less flexible budgets for the construction and maintenance of roads and bridges. It was important to understand the challenges presented to them, and the degree to which those challenges will demand greater expertise in prioritizing resource allocations for the rehabilitation and maintenance of the 10 million miles of county roads nationwide. Only after understanding what a county engineer is and what this person does will it become feasible for the profession to begin "selling itself", i.e., attracting a new generation of County Engineers. Reaching this objective involved examining the responsibilities, goals, and, sometimes, the frustrations experienced by those persons in charge of secondary road systems in the nine states that agreed to participate in the study. The second phase of this research involved addressing ways to counter the problems associated with the exodus of County Engineers who are reaching retirement age. Many of the questions asked participants to compare the advantages and disadvantages of public sector work with the private sector. Based on interviews with nearly 50 County Engineers and feedback from 268 who returned surveys for the research, issues relevant to the profession were analyzed and recommendations were made to the profession as it prepares to attract a new generation. It was concluded that state and regional associations for County Engineers, together with the National Association of County Engineers, are best situated to present opportunities for continued professional development. This factor is appealing for those who are interested in competitive advantages as professionals. While salaries in the public sector may not be able to effectively compete with those offered by the private sector, it was concluded that this is only one factor of concern to those who are in the business of "public service". It was concluded, however, that Boards of Supervisors and their equivalents in other states will need to more clearly understand the value of the contributions made by County Engineers. Then the selling points the profession can hope to capitalize on can focus on the strength of state organizations and a strong national organization that act as clearinghouses of information and advocates for the profession, as well as anchors that provide opportunities for staying current on issues and technologies.
Abstract:
The design of satisfactory supporting and expansion devices for highway bridges is a problem which has concerned bridge design engineers for many years. The problems associated with these devices have been emphasized by the large number of short span bridges required by the current expanded highway program of expressways and interstate highways. The initial objectives of this investigation were: (1) To review and make a field study of devices used for the support of bridge superstructures and for provision of floor expansion; (2) To analyze the forces or factors which influence the design and behavior of supporting devices and floor expansion systems; and (3) To ascertain the need for future research particularly on the problems of obtaining more economical and efficient supporting and expansion devices, and determining maximum allowable distance between such devices. The experimental portion was conducted to evaluate one of the possible simple and economical solutions to the problems observed in the initial portion. The investigation reported herein is divided into four major parts or phases as follows: (1) A review of literature; (2) A survey by questionnaire of design practice of a number of state highway departments and consulting firms; (3) Field observation of existing bridges; and, (4) An experimental comparison of the dynamic behavior of rigid and elastomeric bearings.
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a 'blind' planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
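A minimal sketch of the kind of novelty-pruned blind search this describes, at width 1 (the state representation as sets of atoms and the problem interface are assumptions for illustration):

from collections import deque

def iw1(initial_state, goal_test, successors):
    # Breadth-first search that keeps a generated state only if it makes
    # some atom true for the first time in the search (novelty 1). This
    # prunes the explored state space to a size linear in the number of
    # atoms, so the search is blind yet runs in low polynomial time.
    seen_atoms = set(initial_state)            # states are frozensets of atoms
    frontier = deque([(initial_state, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            new_atoms = nxt - seen_atoms       # atoms never reached before
            if new_atoms:                      # prune non-novel states
                seen_atoms |= new_atoms
                frontier.append((nxt, plan + [action]))
    return None                                # goal has width greater than 1

Problems whose (single-atom) goals have width 1 are solved by this pruned search; higher widths correspond to tracking novelty over tuples of atoms rather than single atoms.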
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average sojourn time and of its variability, they form an estimate of the upper bound of the sojourn time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average sojourn time than a population of risk-neutral or highly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Moreover, we show that the providers' decisions make the weighted average sojourn time converge towards the market's benchmark waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect performance and the subjects' decisions. In particular, a provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity-adjustment decisions it has already taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this traditional work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are when accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature related to queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work that spans several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g. service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
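A minimal sketch of the adaptive-expectations update and the risk-averse facility choice described above (the linear update rule, the weight parameter, and the mean-plus-multiple-of-spread form of the upper bound are assumptions for illustration; the thesis's exact specification may differ):

def update_expectation(previous, new_information, weight):
    # Adaptive expectations: 'reactive' customers put a high weight on new
    # information, 'conservative' customers a low one.
    return weight * new_information + (1 - weight) * previous

def choose_facility(perceived_mean, perceived_spread, risk_aversion):
    # Join the facility with the lowest estimated upper bound on sojourn
    # time; risk_aversion = 0 reduces to choosing on the mean alone.
    bound = {f: perceived_mean[f] + risk_aversion * perceived_spread[f]
             for f in perceived_mean}
    return min(bound, key=bound.get)

For example, a reactive customer (weight 0.8) who perceived a mean sojourn time of 10 and just experienced 16 would update the perception to 0.8*16 + 0.2*10 = 14.8.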
Abstract:
PURPOSE: To evaluate the prognostic and predictive value of Ki-67 labeling index (LI) in a trial comparing letrozole (Let) with tamoxifen (Tam) as adjuvant therapy in postmenopausal women with early breast cancer. PATIENTS AND METHODS: Breast International Group (BIG) trial 1-98 randomly assigned 8,010 patients to four treatment arms comparing Let and Tam with sequences of each agent. Of 4,922 patients randomly assigned to receive 5 years of monotherapy with either agent, 2,685 had primary tumor material available for central pathology assessment of Ki-67 LI by immunohistochemistry and had tumors confirmed to express estrogen receptors after central review. The prognostic and predictive value of centrally measured Ki-67 LI on disease-free survival (DFS) was assessed among these patients using proportional hazards modeling, with Ki-67 LI values dichotomized at the median value of 11%. RESULTS: Higher values of Ki-67 LI were associated with adverse prognostic factors and with worse DFS (hazard ratio [HR; high:low] = 1.8; 95% CI, 1.4 to 2.3). The magnitude of the treatment benefit for Let versus Tam was greater among patients with high tumor Ki-67 LI (HR [Let:Tam] = 0.53; 95% CI, 0.39 to 0.72) than among patients with low tumor Ki-67 LI (HR [Let:Tam] = 0.81; 95% CI, 0.57 to 1.15; interaction P = .09). CONCLUSION: Ki-67 LI is confirmed as a prognostic factor in this study. High Ki-67 LI levels may identify a patient group that particularly benefits from initial Let adjuvant therapy.
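In sketch form, the analysis reported here is standard proportional-hazards modelling with Ki-67 LI dichotomized at its median,

\[ h(t \mid x) = h_0(t)\,e^{\beta x}, \qquad x = \mathbf{1}\{\text{Ki-67 LI} > 11\%\}, \]

so the reported HR (high:low) of 1.8 corresponds to \( e^{\beta} = 1.8 \), i.e. the hazard of a DFS event is 1.8 times as high for tumours above the median Ki-67 LI as for those below it.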
Abstract:
Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validity, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
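For reference, the reliability statistic used here is the standard Cronbach's alpha; for the k = 19 scenario items,

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right), \]

where \( \sigma^2_{Y_i} \) is the variance of item i and \( \sigma^2_X \) the variance of the total score; the reported value for the full instrument is \( \alpha = 0.882 \).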
Abstract:
Forensic scientists face increasingly complex inference problems for evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach, based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem and generalize this scenario to n traces.
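In sketch form, the quantity being computed is the standard likelihood ratio

\[ \mathrm{LR} = \frac{\Pr(E \mid H_p, I)}{\Pr(E \mid H_d, I)}, \]

where E is the observed evidence (here, the two traces), \( H_p \) and \( H_d \) are the competing propositions, and I the background information; in the Bayesian-network approach the two conditional probabilities are obtained by instantiating E in the network and propagating, rather than by algebraic derivation.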
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility, and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that the supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions. We apply extreme-value theory and analyze the effects of tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high, and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as the demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
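A sketch of the kind of jump-diffusion demand process the first paper describes (the exact specification is not given in the abstract; the geometric form, drift \( \mu \), volatility \( \sigma \), Poisson intensity \( \lambda \) and i.i.d. jump sizes \( Y_i \) are assumptions):

\[ \frac{dD_t}{D_{t^-}} = \mu\,dt + \sigma\,dW_t + d\!\left(\sum_{i=1}^{N_t}(Y_i - 1)\right), \]

where \( W_t \) is a standard Brownian motion and \( N_t \) a Poisson counting process with intensity \( \lambda \), independent of \( W_t \); setting \( \lambda = 0 \) recovers the constant-volatility base case, so the jump term isolates the additional lead-time cost attributed to sudden forecast changes.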
Abstract:
This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders (`manipulation'), providing guidance and correcting misinformation (`education'), being transparent about corporate activities and policies (`information'), alleviating stakeholder concerns (`placation'), and accessing stakeholder opinion (`consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with `true' dialogue. This study also finds evidence that activities aimed at redistributing power (`partnership'), involving stakeholders in internal corporate processes (`participation'), and demonstrating corporate responsibility (`stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies incline a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes its relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies which see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
The present study was done with two different servo-systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. For the second servo-system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, to find the best parameters of a servo-hydraulic system with a flexible load. DE guarantees fast convergence and accurate solutions regardless of the initial conditions of the parameters. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters while keeping the same gains will cause overshoot or even loss of system stability. The highly non-linear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain-scheduling. In the present research, an acceleration feedback was used to compensate for the lack of damping in the hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction is used. The design procedure for the controller and experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the error of position reference tracking. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a non-linear simulation model built in Matlab Simulink and then implemented in a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track a flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated by using the Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
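A minimal sketch of the DE/rand/1/bin variant with box constraints used for this kind of parameter identification (the cost function, here a placeholder for the simulation-versus-measurement error, and all hyperparameter values are assumptions):

import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9, generations=200):
    # DE/rand/1/bin: mutate with a scaled difference of two random vectors,
    # apply binomial crossover, then keep the trial only if it improves.
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    rng = np.random.default_rng(0)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # respect boundary limits
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # at least one component
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fitness[i]:                   # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)]

Here cost(x) would simulate the servo-hydraulic model with parameter vector x and return the error against measured responses, and bounds is an array of (lower, upper) pairs, one per unknown parameter.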
Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions since they are all parts of a single tube. The present research is a part of a study on the unsteady dynamics of an organic Rankine cycle power plant and it will be a part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated using the mass flows into and out of the boiler during a time step. A special method of fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements. The values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and the calculation of these nodal values in a dynamic state. The initial state of the boiler was received from a steady process model that is not a part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperature and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was done. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements. This is why the only way was to determine whether the obtained results were intuitively reasonable and whether the results changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in input values. The differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for the temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly goes below that of the tube wall or the process fluid.
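In sketch form, the Ledinegg analysis mentioned above treats the internal (boiler) pressure-drop characteristic as the third-degree polynomial

\[ \Delta p(\dot{m}) = a_3\,\dot{m}^3 + a_2\,\dot{m}^2 + a_1\,\dot{m} + a_0, \]

with coefficients \( a_i \) depending on the operating point (they are not given in the abstract). Static stability at an operating point then requires the usual Ledinegg condition that the internal characteristic be steeper than the external (supply) one, \( \partial(\Delta p_{\mathrm{int}})/\partial \dot{m} > \partial(\Delta p_{\mathrm{ext}})/\partial \dot{m} \), which the study reports reduces to a second-degree polynomial criterion in the preheater enthalpy change.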
Abstract:
Values and value processes are said to be needed in every organization nowadays, as the world is changing and companies have to have something to "keep it together". Organizational values, which are approved and used by the personnel, could be the key. Every organization has values. But what is the real value of values? The greatest and most crucial challenge is the feasibility of the value process. The main point in this thesis is to study how organizational members at different hierarchical levels perceive values and value processes in their organizations. This includes themes such as how values are disseminated, the targets of value processing, factors that affect the process, problems that occur during the value implementation, and improvements that could be made when organizational values are implemented. These subjects are studied from the perspective of organizational members (both managers and employees); individuals in the organizations. The aim is to get the insider perspective on value processing, from multiple hierarchical levels. In this research I study three different organizations (forest industry, bank and retail cooperative) and their value processes. The data is gathered from the companies by interviewing personnel in the head office and at the local level. The individuals are seen as members of organizations, and the cultural aspect is topical throughout the whole study. Values and cultures are seen as the 'actuality of reality' of organizations, interpreted by organizational members. The three case companies were chosen because they represented different lines of business and they all implemented value processing differently. Since the emphasis in this study is at the local level, the similar size of the local units was also an important factor. Values are in 'fashion' - but what does the fashion tell us about the real corporate practices? In annual reports companies emphasize the importance and power of official values. But what is the real 'point' of values? Values are publicly respected and advertised, but still it seems that the words do not meet the deeds. There is a clear conflict between theoretical, official and substantive organizational values: in the value processing from words to real action. This contradiction in value processing is studied through individual perceptions in this study. I study the kinds of perceptions organizational members have when values are processed from the head office to the local level: the official value process is studied from the individual's perspective. Value management has been studied more since the 1990s. The emphasis has usually been on managers: how they consider the values in organizations and what effects this has on the management. Recent literature has emphasized values as tools for improving company performance. Value implementation as a process has been studied through 'good' and 'bad' examples, as if one successful value process could be copied to all organizations. Each company is different, with different cultures and personnel, so no all-powerful way of processing values exists. In this study, the organizational members' perceptions at different hierarchical levels are emphasized. Still, managers are also interviewed; this is done since managerial roles in value dissemination are crucial. Organizational values cannot be well disseminated without management; this has been proved in several earlier studies (e.g. Kunda 1992, Martin 1992, Parker 2000).
Recent literature has not sufficiently emphasized the individual's (organizational member's) role in value processing. Organizations consist of different individuals with personal values, at all hierarchical levels. The aim in this study is to let the individual take the floor. Very often the value process is described starting from the value definition and ending at dissemination, and the real results are left without attention. I wish to contribute to this area. Values are published officially in annual reports etc. as a 'goal', just like profits. Still, the results/implementation of value processing is rarely followed up, at least in official reports. This is a very interesting point: why do companies espouse values, if there is no real control or feedback after the processing? In this study, the personnel in three different companies are asked to give an answer. In the empirical findings, there are several results which bring new aspects to the research area of organizational values. The targets of value processing, the factors affecting value processing, the management's roles and the problems in value implementation are presented through the individual's perspective. The individual's perceptions in value processing are a recurring theme throughout the whole study. A comparison between the three companies with diverse value processes makes the research complete.