931 results for Future value prediction


Relevance: 80.00%

Abstract:

Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and to address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested, and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest, in the firm" (Carroll, 1993:22), with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset, current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first-order, second-order, third-order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to the existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing licence to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimise their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contribution drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, and examines the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorising has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can prevent the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering) and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, oriented as it is towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking, and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.

Relevance: 80.00%

Abstract:

Both the competitive environment and the internal structure of an industrial organization are typically included in descriptions of the strategic management processes of the firm, but less attention has been paid to the interdependence between these views. This research therefore focuses on explaining the particular conditions of an industry change which lead managers to realign the firm with respect to its environment in order to generate competitive advantage. The research question that directs the development of the theoretical framework is: why do firms outsource some of their functions? The three general stages of the analysis are related to the following research topics: (i) understanding the forces that shape the industry; (ii) estimating the impacts of transforming customer preferences, rivalry, and changing capability bases on the relevance of existing assets and activities, and on the emergence of new business models; and (iii) developing optional structures for future value chains and understanding the general boundaries for market emergence. The defined research setting contributes to the managerial research questions "Why do firms reorganize their value chains?" and "Why and how are decisions made?" Combining Transaction Cost Economics (TCE) and the Resource-Based View (RBV) within an integrated framework makes it possible to evaluate two dimensions of a company's resources, namely their strategic value and transferability. The final restructuring decision is made based on an analysis of the actual business potential of the outsourcing, in which benefits and risks are evaluated. The firm focuses on the risk of opportunism, hold-up problems, pricing, and the opportunities to reach a complete contract, and finally on the direct benefits and risks for financial performance. The supplier analyzes the business potential of an activity outside the specific customer, the amount of customer-specific investment, the service provider's competitive position, its ability to generate revenue gains in generic segments, and its long-term dependence on the customer.
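As a rough illustration of the integrated TCE/RBV screening described above, the sketch below scores an activity on the two resource dimensions and returns a provisional sourcing recommendation. The scales, thresholds, and example activities are invented for illustration and are not the framework's actual operationalisation.

```python
# Illustrative two-dimensional TCE/RBV screen: each activity is scored on
# strategic value (RBV) and transferability (TCE), and a provisional
# sourcing recommendation follows. Scales and thresholds are assumptions.

def screen_activity(strategic_value: float, transferability: float) -> str:
    """Return a provisional sourcing recommendation (scores on 0..1)."""
    if strategic_value >= 0.5:
        # Strategically valuable resources are candidates to keep in-house;
        # TCE adds that poor transferability (hold-up risk) reinforces this.
        return "keep in-house" if transferability < 0.5 else "keep, consider partnering"
    # Low strategic value: outsourcing is attractive if the activity
    # transfers cleanly (complete contracts, low asset specificity).
    return "outsource" if transferability >= 0.5 else "outsource with safeguards"

activities = {
    "component logistics": (0.2, 0.8),   # hypothetical scores
    "core product design": (0.9, 0.3),
}
for name, (value, transfer) in activities.items():
    print(f"{name}: {screen_activity(value, transfer)}")
```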

Relevance: 80.00%

Abstract:

"Thèse présentée à la Faculté des études supérieures en vue de l'obtention du grade de Docteur en droit (LL.D.)"

Relevance: 80.00%

Abstract:

In this thesis, we present a novel approach that combines both reuse and prediction of dynamic instruction sequences, called Reuse through Speculation on Traces (RST). Our technique allows the dynamic identification of instruction traces that are redundant or predictable, and the reuse (speculative or not) of these traces. RST addresses an issue present in Dynamic Trace Memoization (DTM): traces not being reused because some of their inputs are not ready for the reuse test. In previous studies, these traces were measured to be 69% of all reusable traces. One of the main advantages of RST over simply combining a value prediction technique with an unrelated reuse technique is that RST does not require extra tables to store the values to be predicted; applying reuse and value prediction in unrelated mechanisms at the same time may require a prohibitive amount of table storage. In RST, the values are already stored in the Trace Memoization Table, and there is no extra cost in reading them compared with a non-speculative trace reuse technique. The input context of each trace (the input values of all instructions in the trace) already stores the values for the reuse test, and these may also be used for prediction. Our main contributions include: (i) a speculative trace reuse framework that can be adapted to different processor architectures; (ii) a specification of the modifications needed in a superscalar, superpipelined processor to implement our mechanism; (iii) a study of the implementation issues related to this architecture; (iv) a study of the performance limits of our technique; (v) a performance study of a realistic, constrained implementation of RST; and (vi) simulation tools, representing a superscalar, superpipelined processor in detail, that can be used in other studies. In a constrained architecture with realistic confidence, our RST technique achieves average speedups (harmonic means) of 1.29 over the baseline architecture without reuse and 1.09 over a non-speculative trace reuse technique (DTM).
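A toy software model can make the RST lookup concrete. This is a sketch under stated assumptions: the real mechanism is a hardware table in a superscalar pipeline, and the structures and field names below are illustrative, not the thesis design.

```python
# Toy model of the RST idea: a trace memoization table maps a trace's input
# context to its recorded outputs. If all inputs are ready and match, the
# trace is reused non-speculatively; if some inputs are not yet ready, the
# stored values serve as predictions and reuse proceeds speculatively, to
# be verified when the late inputs arrive. Names are illustrative only.

memo_table = {
    # (trace_id, input_context) -> output_context
    ("t1", (3, 7)): (10, 21),
}

def lookup(trace_id, inputs, ready):
    """inputs: operand values (the stored value stands in where not ready)."""
    hit = memo_table.get((trace_id, tuple(inputs)))
    if hit is None:
        return None, False  # no reuse: execute the trace normally
    speculative = not all(ready)
    # Speculative reuse must later be verified against the late inputs;
    # on a misprediction the pipeline would squash, as in value prediction.
    return hit, speculative

outputs, spec = lookup("t1", [3, 7], ready=[True, False])
print(outputs, "speculative" if spec else "non-speculative")
```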

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Abstract:

The ability to predict future rewards or threats is crucial for survival. Recent studies have addressed the prediction of future events by the hippocampus. Hippocampal neurons exhibit robust selectivity for spatial location; thus, the activity of hippocampal neurons represents a cognitive map of space during navigation, as well as during planning and recall. Spatial selectivity allows the hippocampus to be involved in the formation of spatial and episodic memories, including the sequential ordering of events. On the other hand, the discovery of reverberatory activity in multiple forebrain areas during slow-wave and REM sleep underscored the role of sleep in the consolidation of recently acquired memory traces. To date, there are no studies addressing whether neuronal activity in the hippocampus during sleep can predict regular environmental shifts. The aim of the present study was to investigate the activity of neuronal populations in the hippocampus during sleep sessions intercalated with spatial exploration periods in which the location of reward changed in a predictable way. To this end, we chronically implanted 32-channel multielectrode arrays in the CA1 region of the hippocampus in three male rats of the Wistar strain. In order to activate different neuronal subgroups at each cycle of the task, we exposed the animals to four spatial exploration sessions in a 4-arm elevated maze in which reward was delivered in a single arm per session. Reward location changed regularly at every session in a clockwise manner, traversing all the arms by the end of the daily recordings. Animals were recorded for 2-12 consecutive days. During spatial exploration of the 4-arm elevated maze, 67.5% of the recorded neurons showed firing rate differences across the maze arms. Furthermore, an average of 42% of the neurons showed increased correlation (R > 0.3) between neuronal pairs in each arm. This allowed us to sort representative neuronal subgroups for each maze arm and to analyse the activity of these subgroups across sleep sessions. We found that neuronal subgroups sorted by firing rate differences during spatial exploration sustained these differences across sleep sessions. This was not the case with neuronal subgroups sorted according to synchrony (correlation). In addition, the correlation levels between sleep sessions and waking patterns sampled in each arm were larger for the entire population of neurons than for the rate or synchrony subgroups. Neuronal activity during sleep, whether of the entire population or of the subgroups, did not show different correlations among the four maze arms. On the other hand, we verified that neuronal activity during pre-exploration sleep sessions was significantly more similar to the activity patterns of the target arm than neuronal activity during post-exploration sleep sessions. In other words, neuronal activity during sleep that precedes the task reflects the location of reward more strongly than neuronal activity during sleep that follows the task. Our results suggest that neuronal activity during sleep can predict regular environmental changes.
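A minimal sketch of the kind of population-vector comparison described above, on synthetic data: correlate mean firing-rate vectors sampled during sleep with those sampled in each maze arm, and apply the R > 0.3 pairwise-synchrony criterion used to sort subgroups. All numbers are simulated; the study used chronic 32-channel recordings.

```python
# Sketch: correlate a sleep-session firing-rate vector with the waking
# rate vector of each maze arm, then flag neuron pairs with R > 0.3.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 20
arm_rates = {arm: rng.gamma(2.0, 2.0, n_neurons) for arm in "ABCD"}
# Toy pre-task sleep activity resembling the (hypothetical) target arm B:
sleep_rates = arm_rates["B"] + rng.normal(0.0, 1.0, n_neurons)

for arm, rates in arm_rates.items():
    r = np.corrcoef(sleep_rates, rates)[0, 1]
    print(f"sleep vs arm {arm}: R = {r:.2f}")

# Pairwise synchrony criterion (R > 0.3) over binned spike counts in arm A:
spikes = rng.poisson(arm_rates["A"], size=(100, n_neurons))  # 100 time bins
corr = np.corrcoef(spikes.T)
pairs = np.argwhere(np.triu(corr > 0.3, k=1))
print(f"{len(pairs)} neuron pairs with R > 0.3 in arm A")
```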

Relevance: 80.00%

Abstract:

Background: The sequencing and publication of the cattle genome and the identification of single nucleotide polymorphism (SNP) molecular markers have provided new tools for animal genetic evaluation and genomically-enhanced selection. These new tools aim to increase the accuracy and scope of selection while decreasing the generation interval. The objective of this study was to evaluate the enhancement of accuracy obtained by the use of genomic information (Clarifide® - Pfizer) in the genetic evaluation of Brazilian Nellore cattle. Review: The application of genome-wide association studies (GWAS) is recognized as one of the most practical approaches to modern genetic improvement. Genomic selection is perhaps most suited to the improvement of traits with low heritability in zebu cattle. The primary interest in livestock genomics has been to estimate the effects of all the markers on the chip, conduct cross-validation to determine accuracy, and apply the resulting information in GWAS, either alone [9] or in combination with bull test and pedigree-based genetic evaluation data. The cost of SNP50K genotyping, however, limits the commercial application of GWAS based on all the SNPs on the chip. Nevertheless, reasonable predictability and accuracy can be achieved in GWAS by using an assay that contains an optimally selected predictive subset of markers, as opposed to all the SNPs on the chip. The best way to integrate genomic information into genetic improvement programs is to include it in traditional genetic evaluations. This approach combines traditional expected progeny differences, based on phenotype and pedigree, with genomic breeding values based on the markers. Including the different sources of information in a multiple-trait genetic evaluation model is working with excellent results for within-breed dairy cattle selection. However, given the wide genetic diversity of zebu breeds, the high-density panel used for genomic selection in dairy cattle (the Illumina BovineSNP50 array) appears insufficient for across-breed genomic predictions and selection in beef cattle. Today there is only one breed-specific targeted SNP panel, with genomic predictions developed using animals from across the entire population of the Nellore breed (www.pfizersaudeanimal.com), which enables genomically-enhanced selection. Genomic profiles are a way to enhance our current selection tools to achieve more accurate predictions for younger animals. Material and Methods: We analyzed age at first calving (AFC), accumulated productivity (ACP), stayability (STAY) and heifer pregnancy at 30 months (HP30) in Nellore cattle, fitting two different animal models: (1) a traditional single-trait model, and (2) a two-trait model in which the genomic breeding value, or molecular value prediction (MVP), was included as a correlated trait. All mixed model analyses were performed using the statistical software ASREML 3.0. Results: Genetic correlation estimates between AFC, ACP, STAY, HP30 and their respective MVPs ranged from 0.29 to 0.46. Results also showed increases of 56%, 36%, 62% and 19% in the estimated accuracy of AFC, ACP, STAY and HP30 when MVP information was included in the animal model. Conclusion: Depending upon the trait, integration of MVP information into the genetic evaluation resulted in accuracy increases of 19% to 62% as compared to accuracy from traditional genetic evaluation. GE-EPD (genomically-enhanced expected progeny differences) will be an effective tool to enable faster genetic improvement through more dependable selection of young animals.
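The flavour of the reported accuracy gains can be illustrated with a classical selection-index calculation that combines a traditional evaluation with an MVP. This is not the two-trait ASREML model fitted in the study; the source accuracies and the assumption that the two sources are correlated only through the true breeding value are illustrative, chosen to echo the reported genetic correlations of 0.29-0.46.

```python
# Illustrative selection-index view of the gain from adding a genomic MVP
# to a traditional evaluation (standardised sources; numbers are assumed).
import numpy as np

r_trad, r_mvp = 0.40, 0.46          # assumed accuracies of each source alone
rho = r_trad * r_mvp                # source-source correlation via the breeding value

P = np.array([[1.0, rho], [rho, 1.0]])  # (co)variance matrix of the sources
g = np.array([r_trad, r_mvp])           # covariance of each source with the breeding value
b = np.linalg.solve(P, g)               # selection-index weights
accuracy = np.sqrt(g @ b)               # accuracy of the combined index
print(f"combined accuracy {accuracy:.2f} "
      f"(+{100 * (accuracy / r_trad - 1):.0f}% over the traditional evaluation)")
```

Under these assumed inputs the combined accuracy comes out near 0.56, a gain of about 40% over the traditional source alone, within the 19-62% range reported above.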

Relevance: 80.00%

Abstract:

Despite the impact of red blood cell (RBC) life-spans in some disease areas, such as diabetes or the anemia of chronic kidney disease, there is no consensus on how best to describe the process quantitatively. Several models have been proposed to explain the elimination process of RBCs: a random destruction process, a homogeneous life-span model, or a series of 4 transit compartments. The aim of this work was to explore the different models that have been proposed in the literature, and modifications to those. The impact of choosing the right model on the prediction of future outcomes - in the above-mentioned areas - was also investigated. Data from both indirect (clinical data) and direct (biotin-labeled data) life-span measurement methods were analyzed using non-linear mixed effects models. The analysis showed that: (1) predictions from non-steady-state data will depend on the RBC model chosen; (2) the transit compartment model, which considers variation in life-span across the RBC population, describes RBC survival data better than the random destruction or homogeneous life-span models; and (3) the additional incorporation of random destruction patterns, although improving the description of the RBC survival data, does not appear to provide a marked improvement when describing clinical data.
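The three competing life-span models translate into distinct RBC survival functions, compared in the sketch below (parameter values are illustrative assumptions, not fitted estimates). A chain of 4 transit compartments with equal rates implies an Erlang(4), i.e. gamma-with-shape-4, life-span distribution.

```python
# Survival functions for the three RBC life-span models named above.
import numpy as np
from scipy.stats import gamma

t = np.linspace(0.0, 240.0, 5)   # days
mean_lifespan = 115.0            # assumed mean RBC life-span in days

# 1) Random destruction: constant hazard, exponential survival.
s_random = np.exp(-t / mean_lifespan)
# 2) Homogeneous life-span: every cell lives exactly `mean_lifespan` days.
s_fixed = (t < mean_lifespan).astype(float)
# 3) Series of 4 transit compartments: Erlang(4) life-span distribution.
k = 4
s_transit = gamma.sf(t, a=k, scale=mean_lifespan / k)

for row in zip(t, s_random, s_fixed, s_transit):
    print("t=%5.0f  random=%.2f  fixed=%.2f  transit=%.2f" % row)
```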

Relevance: 80.00%

Abstract:

PURPOSE: The goal of this study was to analyse a possible association of admission blood glucose with the hospital mortality of polytraumatised patients, and to develop an outcome prediction model for this patient group. METHODS: The outcome of adult polytraumatised patients admitted to the University Hospital of Berne, Switzerland, between 2002 and 2004, with an ISS ≥ 17 and more than one severely injured organ system, was retrospectively analysed. RESULTS: The inclusion criteria were met by 555 patients, of whom 108 (19.5%) died. Following multiple regression analysis, hyperglycaemia proved to be an independent predictor of hospital mortality (P < 0.0001). After inclusion of admission blood glucose, the calculated mortality prediction model performed better than currently described models (P < 0.0001, AUC 0.924). CONCLUSION: In this retrospective, single-centre study of polytraumatised patients, admission blood glucose proved to be an independent predictor of hospital mortality in a regression analysis controlling for age, gender, injury severity and other laboratory parameters. A reliable mortality prediction model based on admission blood glucose could be established for polytraumatised patients. This observation may be helpful in improving the precision of future outcome prediction models for polytraumatised patients. These observations warrant further prospective evaluation.
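A minimal sketch of this kind of mortality model - a logistic regression of hospital death on admission glucose and covariates, summarised by an AUC - on simulated data. The coefficients, units, and cohort below are assumptions, not the study's data.

```python
# Logistic mortality model with admission glucose plus covariates, and AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 555                                      # cohort size mirroring the study
glucose = rng.normal(9.0, 3.0, n)            # admission glucose, mmol/L (assumed)
age = rng.normal(45.0, 18.0, n)
iss = rng.integers(17, 60, n)                # injury severity score >= 17
logit = -9.0 + 0.35 * glucose + 0.03 * age + 0.05 * iss   # invented effects
died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([glucose, age, iss])
model = LogisticRegression(max_iter=1000).fit(X, died)
auc = roc_auc_score(died, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.3f}")
```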

Relevance: 80.00%

Abstract:

The aim of this thesis is to examine in depth the use of models (conceptual and numerical) as prediction and analysis tools for hydrogeological studies, mainly from the point of view of mine drainage. First, the basic concepts and the usual ranges of parameter variation in the modelling of groundwater flow and particle transport are developed, together with the recommended modelling process, analysing each of its steps one by one; this material is based on the author's experience and contrasted against the available bibliography. Second, MODFLOW is described as a modelling tool, assessing the advantages of its most common pre/post-processing software (Processing MODFLOW, Mod CAD and Visual MODFLOW). Third, the criteria and parameters required to develop a conceptual model are introduced - numerical discretisation, the definition of boundary and initial conditions, and all the factors (anthropic or natural) that affect the system - covering the model creation process, data entry, model execution, convergence criteria, calibration, and the retrieval of results, as implemented in Visual MODFLOW. Fourth, five practical cases in which the author applied MODFLOW and the different pre/post-processing software (Processing MODFLOW, Mod CAD and Visual MODFLOW) are analysed, describing for each one the objectives, the conceptual model defined, the discretisation, the parametric definition, the sensitivity analysis, the results obtained, and the prediction of future states. Fifth, a program developed by the author is presented which improves the facilities offered by Mod CAD and Visual MODFLOW, expanding the modelling possibilities and connections to other computers. Next, a series of solutions to the most typical problems that may appear during modelling with MODFLOW is presented. Finally, the conclusions and recommendations reached are set out, with the purpose of assisting the development of hydrogeological models, both conceptual and numerical.
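For a flavour of the kind of numerical model discussed, the sketch below builds a minimal steady-state MODFLOW model for a mine-dewatering scenario using FloPy, a scripted alternative to the Processing MODFLOW / Visual MODFLOW interfaces used in the thesis. Grid dimensions, hydraulic properties, the well flux, and the executable name are all assumptions.

```python
# Minimal single-layer MODFLOW-2005 model with a dewatering well (FloPy).
import numpy as np
import flopy

m = flopy.modflow.Modflow(modelname="drain_demo", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=20, ncol=20,
                               delr=50.0, delc=50.0, top=100.0, botm=0.0)

ibound = np.ones((1, 20, 20), dtype=int)
ibound[:, :, 0] = -1                      # fixed-head boundary on the west edge
bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=95.0)
lpf = flopy.modflow.ModflowLpf(m, hk=5.0)                 # hydraulic conductivity, m/d
wel = flopy.modflow.ModflowWel(                           # dewatering well, m3/d
    m, stress_period_data={0: [[0, 10, 10, -500.0]]})
pcg = flopy.modflow.ModflowPcg(m)                         # solver
oc = flopy.modflow.ModflowOc(m)                           # output control
m.write_input()        # m.run_model() additionally requires mf2005 on the PATH
```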

Relevance: 80.00%

Abstract:

Hybrid wind-diesel power systems have great potential for providing energy supply to remote communities.
Compared with traditional diesel systems, hybrid power plants provide many advantages, such as extra energy capacity for the micro-grid, reduced pollution and greenhouse-gas emissions, and a hedge against the risk of unexpected fuel price increases. This dissertation aims to provide novel insights for assessing and optimizing hybrid wind-diesel power systems under the related uncertainties. Since wind power can neither be controlled nor accurately predicted, the energy harvested from a wind turbine may be considered a stochastic variable. This uncertain nature of the wind energy source results in serious problems for both the operation and the value assessment of the hybrid wind-diesel power system. On the one hand, regulating the uncertain power injected from wind turbines is a difficult task when operating the hybrid system. On the other hand, the economic profit of a hybrid wind-diesel system is achieved directly through the energy delivered to the power grid from the wind energy. Therefore, the uncertainty of wind resources increases the difficulty of estimating the total benefits in the planning stage. The main shortcoming of the traditional deterministic model is that it does not consider future uncertainty when making the dispatch decision, and thus does not provide flexible operational actions in response to uncertain future scenarios. Performance analysis and computer simulation of the San Cristobal Wind Project demonstrate that wind power uncertainty, control strategies, energy storage, and the wind turbine power curve have a significant impact on the performance of the system. In this dissertation, the relationship between option pricing theory and the decision-making process is discussed. A real option model is developed and presented through practical examples for assessing the value of hybrid wind-diesel power systems. Results show that operational options can provide additional value to the hybrid power system when this operational flexibility is correctly utilized. This framework can be applied to optimizing short-term dispatch decisions, considering the path-dependent nature of the optimal dispatch policy, given the plausible future realizations of the wind power production. Compared with existing valuation and optimization methods, the results of the numerical case study show that the dispatch policy resulting from the proposed optimization model performs remarkably well in minimizing the total fuel consumption of the wind-diesel system. In order to make optimal decisions, power plant operators and managers should not focus only on the direct outcome of each operational action, nor should they make deterministic decisions. The correct way is to manage the power system dynamically, taking into consideration the conditional future value of each option in the face of uncertainty.
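The value of operational flexibility under wind uncertainty can be illustrated with a toy two-stage dispatch problem: commit diesel capacity now, then top up with more expensive last-minute generation once the wind realisation is known. All quantities below are invented for illustration; the dissertation's real-options model is far richer.

```python
# Two-stage dispatch under wind uncertainty: commit diesel, then use
# costly recourse generation once the wind scenario is realised.
demand_kw = 100.0
fuel_cost = 0.30                      # $ per kWh of committed diesel generation
peak_premium = 1.5                    # last-minute diesel costs 50% more
wind_scenarios = {"low": (20.0, 0.5), "high": (80.0, 0.5)}  # (kW, probability)

def expected_cost(committed_kw: float) -> float:
    """Expected cost of committing diesel now and topping up after wind is known."""
    cost = 0.0
    for wind_kw, prob in wind_scenarios.values():
        shortfall = max(0.0, demand_kw - committed_kw - wind_kw)
        cost += prob * (committed_kw * fuel_cost + shortfall * fuel_cost * peak_premium)
    return cost

# A deterministic worst-case plan commits 80 kW (expected cost $24.00);
# valuing the recourse option finds a cheaper commitment in expectation.
best = min(range(0, 101, 10), key=expected_cost)
print(f"commit {best} kW of diesel; expected cost ${expected_cost(best):.2f}")
```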

Relevance: 80.00%

Abstract:

This study analyses the use of risk management in selected small and medium-sized enterprises (SMEs) in the city of São Bernardo do Campo. The analysis of business risk is of growing importance, and it can contribute strongly to business continuity. The ability to manage business risks in the face of inevitable uncertainty, with a forward-looking valuation of outcomes, is a substantial source of competitive advantage. This value-generating process provides the discipline and tools for managing business risks, allowing the creation of value for the organisation. Risk analysis methodologies are, for the most part, applied to large corporations. One of the motivations of this work is to verify the degree of usefulness of these methodologies for the SMEs in São Bernardo do Campo chosen for the research. The study was developed through bibliographic research and exploratory research in the chosen companies. After the research, a qualitative analysis was carried out using the case study method. Finally, it is concluded that the companies surveyed in São Bernardo do Campo can obtain significant advantages by implementing risk management methodologies. All of the companies surveyed are more than ten years old and consider it important to safeguard the continuity of their business.


Relevance: 80.00%

Abstract:

On a global level, population growth and the rise of the middle class lead to a growing demand for material resources. The built environment has an enormous impact on this scarcity. In addition, a surplus of construction and demolition waste is a common problem. The construction industry claims to recycle 95% of this waste, but this is in fact mainly downcycling. Moving towards the circular economy, the quality of reuse becomes increasingly important. Buildings are material warehouses that can contribute to this high-quality reuse. However, several aspects of achieving this are unknown, and more insight into the potential for high-quality reuse of building materials is needed. An instrument has therefore been developed that determines the circularity of construction waste in order to maximise high-quality reuse. The instrument is based on three principles: 'product and material flows in the end-of-life phase', 'future value of secondary materials and products', and 'the success of repetition in a new life cycle'. These principles are further divided into a number of criteria to which values and weighting factors are assigned. A degree of circularity can then be determined as a percentage. A case study of a typical 1970s building was carried out. For concrete, the circularity is increased from 25% to 50% by mapping out the potential for high-quality reuse. During the development of the instrument it became clear that some criteria are difficult to measure. Accurate and reliable data are limited, and assumptions had to be made. To increase the reliability of the instrument, experts have reviewed it several times. In the long term, the instrument can be used as a tool for quantitative research to reduce the amount of construction and demolition waste and to contribute to the reduction of raw material scarcity.
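A minimal sketch of the weighted-criteria scoring on which such an instrument rests: each criterion receives a value and a weighting factor, and the degree of circularity is the weighted average expressed as a percentage. The criterion names, scores, and weights below are invented examples, not the instrument's actual criteria.

```python
# Degree of circularity as a weighted average of criterion scores.
criteria = {
    # name: (score in 0..1, weighting factor); all values are invented
    "disassembly potential":       (0.6, 0.3),
    "future value of material":    (0.5, 0.4),
    "reuse success in next cycle": (0.4, 0.3),
}

total_weight = sum(w for _, w in criteria.values())
circularity = sum(s * w for s, w in criteria.values()) / total_weight
print(f"degree of circularity: {100 * circularity:.0f}%")
```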

Relevance: 80.00%

Abstract:

In this study, we propose a novel method to predict the solvent accessible surface areas of transmembrane residues. For both transmembrane alpha-helix and beta-barrel residues, the correlation coefficients between the predicted and observed accessible surface areas are around 0.65. On the basis of the predicted accessible surface areas, residues exposed to the lipid environment or buried inside the protein can be identified by using certain cutoff thresholds. We have extensively examined our approach using different definitions of accessible surface area and a variety of sets of control parameters. Given that experimentally determining the structures of membrane proteins is very difficult, while membrane proteins are in fact abundant in nature, our approach is useful for theoretically modeling membrane protein tertiary structures, particularly for modeling the assembly of transmembrane domains. This approach can be used to annotate the membrane proteins in proteomes, providing extra structural and functional information.
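A sketch of the cutoff-based classification step described above: residues whose predicted relative accessible surface area exceeds a threshold are labelled lipid-exposed, the rest buried. The threshold and example values are assumptions; the study calibrates its cutoffs against observed accessibilities.

```python
# Classify residues as exposed or buried from predicted relative ASA.
CUTOFF = 0.20   # assumed relative-ASA threshold, not the paper's value

predicted_rasa = {"L12": 0.45, "A13": 0.08, "F14": 0.31, "G15": 0.05}

for residue, rasa in predicted_rasa.items():
    label = "exposed" if rasa >= CUTOFF else "buried"
    print(f"{residue}: rASA={rasa:.2f} -> {label}")
```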