910 results for Event-based Model
Abstract:
Chronic graft-versus-host disease (cGvHD) is the leading cause of late nonrelapse mortality (transplant-related mortality) after hematopoietic stem cell transplant. Given the wide range of treatment options for cGvHD, assessment of the associated costs and efficacy can help clinicians and health care providers allocate health care resources more efficiently. OBJECTIVE: The purpose of this study was to assess the cost-effectiveness of extracorporeal photopheresis (ECP) compared with rituximab (Rmb) and with imatinib (Imt) in patients with cGvHD at 5 years from the perspective of the Spanish National Health System. METHODS: The model assessed the incremental cost-effectiveness/utility ratio of ECP versus Rmb or Imt for 1000 hypothetical patients by using microsimulation cost-effectiveness techniques. Model probabilities were obtained from the literature. Treatment pathways and adverse events were evaluated taking clinical opinion and published reports into consideration. Local data on costs (2010 Euros) and health care resource utilization were validated by the clinical authors. Probabilistic sensitivity analyses were used to assess the robustness of the model. RESULTS: The greater efficacy of ECP resulted in a gain of 0.011 to 0.024 quality-adjusted life-years in the first year and 0.062 to 0.094 at year 5 compared with Rmb or Imt. The results showed that the higher acquisition cost of ECP versus Imt was compensated for at 9 months by greater efficacy; this higher cost was partially compensated for (€517) by year 5 versus Rmb. After 9 months, ECP was dominant (cheaper and more effective) compared with Imt. The incremental cost-effectiveness ratio of ECP versus Rmb was €29,646 per life-year gained and €24,442 per quality-adjusted life-year gained at year 2.5. Probabilistic sensitivity analysis confirmed the results. The main study limitation was that only small studies were available for the indirect comparison of relative treatment effects. CONCLUSION: ECP as a third-line therapy for cGvHD is a more cost-effective strategy than Rmb or Imt.
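A minimal sketch of the microsimulation and ICER calculation the abstract describes, in Python (the per-patient costs, utilities, and discount rate are illustrative placeholders, not the study's Spanish cost data):

```python
import random

def simulate_patient(annual_cost, annual_utility, years=5, seed=None, discount=0.03):
    """Toy microsimulation of one patient: accumulate discounted costs and QALYs."""
    rng = random.Random(seed)
    total_cost = total_qaly = 0.0
    for year in range(years):
        d = 1.0 / (1.0 + discount) ** year
        # small per-patient variation mimics sampling of model parameters
        total_cost += d * annual_cost * rng.uniform(0.9, 1.1)
        total_qaly += d * annual_utility * rng.uniform(0.9, 1.1)
    return total_cost, total_qaly

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A versus B (EUR per QALY)."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# 1000 hypothetical patients per arm, as in the study; inputs are placeholders
n = 1000
ecp = [simulate_patient(30000, 0.75, seed=i) for i in range(n)]
rmb = [simulate_patient(25000, 0.73, seed=i + n) for i in range(n)]
mean = lambda xs: sum(xs) / len(xs)
print(icer(mean([c for c, _ in ecp]), mean([q for _, q in ecp]),
           mean([c for c, _ in rmb]), mean([q for _, q in rmb])))
```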
Abstract:
Behavior-based navigation of autonomous vehicles requires the recognition of navigable areas and potential obstacles. In this paper we describe a model-based object recognition system which is part of an image interpretation system intended to assist the navigation of autonomous vehicles operating in industrial environments. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. The system has been implemented using a rule-based cooperative expert system.
Abstract:
We describe a model-based object recognition system which is part of an image interpretation system intended to assist the navigation of autonomous vehicles. The system is intended to operate in man-made environments. Behavior-based navigation of autonomous vehicles involves the recognition of navigable areas and potential obstacles. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. The system has been implemented using CEES, the C++ embedded expert system shell developed in the Systems Engineering and Automatic Control Laboratory (University of Girona) as a specific rule-based problem-solving tool. It has been especially conceived for supporting cooperative expert systems and uses the object-oriented programming paradigm.
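A minimal sketch, in Python, of the rule-based interpretation step (the descriptor names, thresholds, and labels are hypothetical; the actual system is built on the CEES C++ shell, which is not reproduced here):

```python
# Each rule tests low-level descriptors of an image region and, when its
# conditions hold, proposes a label for that region.
RULES = [
    ("road",     lambda r: r["texture"] < 0.2 and r["touches_vanishing_point"]),
    ("wall",     lambda r: r["vertical_edges"] > 0.6),
    ("obstacle", lambda r: r["height"] > 0.3 and not r["touches_vanishing_point"]),
]

def interpret(region):
    """Fire the first rule whose conditions match; report 'unknown' otherwise."""
    for label, condition in RULES:
        if condition(region):
            return label
    return "unknown"

# Hypothetical region descriptors produced by the low-level vision stage
region = {"texture": 0.1, "vertical_edges": 0.2, "height": 0.0,
          "touches_vanishing_point": True}
print(interpret(region))  # -> "road"
```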
Abstract:
This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the nontrivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized midpoint algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true midpoint algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted by the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
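For reference, the Gurson-Tvergaard yield function underlying the constitutive model discussed above takes the standard published form

$$\Phi = \frac{\sigma_{\mathrm{eq}}^{2}}{\sigma_{y}^{2}} + 2 q_{1} f^{*} \cosh\!\left(\frac{3 q_{2}\,\sigma_{m}}{2\,\sigma_{y}}\right) - 1 - q_{3} f^{*2} = 0,$$

where $\sigma_{\mathrm{eq}}$ is the macroscopic von Mises equivalent stress, $\sigma_{m}$ the hydrostatic stress, $\sigma_{y}$ the matrix yield stress, $f^{*}$ the effective void volume fraction, and $q_{1}, q_{2}, q_{3}$ the Tvergaard parameters.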
Abstract:
Chemical-looping combustion (CLC) is a novel combustion technology with inherent separation of the greenhouse gas CO2. The technique typically employs a dual fluidized bed system in which a metal oxide serves as a solid oxygen carrier that transfers oxygen from the combustion air to the fuel. The oxygen carrier loops between the air reactor, where it is oxidized by the air, and the fuel reactor, where it is reduced by the fuel. Hence, air is not mixed with the fuel, and the outgoing CO2 is not diluted by nitrogen, which makes it possible to collect the CO2 from the flue gases once the water vapor is condensed. CLC has been proposed as a promising and energy-efficient carbon capture technology, since it can increase power station efficiency while imposing only a low energy penalty for the carbon capture. The outcome of a comprehensive literature study concerning the current status of CLC development is presented in this thesis. In addition, a steady-state model of the CLC process, based on the conservation equations of mass and energy, was developed. The model was used to determine the process conditions and to calculate the reactor dimensions of a 100 MWth CLC system with bunsenite (NiO) as the oxygen carrier and methane (CH4) as the fuel. This study was carried out within the Oxygen Carriers and Their Industrial Applications research project (2008–2011), funded by the Tekes Functional Materials programme. I would like to thank Tekes and the participating companies for funding, and all project partners for good and pleasant cooperation.
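A minimal sketch of the kind of steady-state mass balance such a model builds on, assuming the overall fuel-reactor reaction CH4 + 4 NiO → CO2 + 2 H2O + 4 Ni (the degree of oxygen-carrier conversion per loop is an assumed placeholder):

```python
# Steady-state oxygen-carrier circulation estimate for a methane-fired CLC unit.
P_TH  = 100e6    # thermal power [W], as in the 100 MWth case study
LHV   = 50.0e6   # lower heating value of CH4 [J/kg]
M_CH4 = 0.01604  # molar mass of CH4 [kg/mol]
M_NIO = 0.07469  # molar mass of NiO [kg/mol]
X     = 0.5      # oxygen-carrier conversion per loop (assumed)

m_fuel = P_TH / LHV         # fuel feed [kg/s]
n_fuel = m_fuel / M_CH4     # fuel feed [mol/s]
n_nio  = 4.0 * n_fuel       # NiO reduced per second (4 mol NiO per mol CH4)
m_circ = n_nio * M_NIO / X  # required solids circulation rate [kg/s]
print(f"fuel feed: {m_fuel:.2f} kg/s, NiO circulation: {m_circ:.0f} kg/s")
```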
Abstract:
A new damage model based on a micromechanical analysis of cracked [±θ/90_n]_s laminates subjected to multiaxial loads is proposed. The model predicts the onset and accumulation of transverse matrix cracks in uniformly stressed laminates, the effect of matrix cracks on the stiffness of the laminate, and the ultimate failure of the laminate. The model also accounts for the effect of ply thickness on ply strength. Predictions of the elastic properties of several laminates under multiaxial loads are presented.
Abstract:
Fusarium Head Blight (FHB) is a disease of great concern in wheat (Triticum aestivum). Due to its relatively narrow susceptible phase and strong environmental dependence, the pathosystem is well suited for modeling. In the present work, a mechanistic model for estimating an infection index of FHB was developed. The model is process-based, driven by rates, rules and coefficients that estimate the dynamics of flowering, airborne inoculum density and infection frequency. The latter is a function of temperature during an infection event (IE), which is defined based on a combination of daily records of precipitation and mean relative humidity. The daily infection index is the product of the daily proportion of susceptible tissue available, the infection frequency and the spore cloud density. The model was evaluated with an independent dataset of epidemics recorded in experimental plots (five years and three planting dates) at Passo Fundo, Brazil. Four models that use different factors were tested, and the results showed that all were able to explain the variation in disease incidence and severity. A model that uses a correction factor to extend host susceptibility, together with daily spore cloud density to account for post-flowering infections, was the most accurate, explaining 93% of the variation in disease severity and 69% in disease incidence according to regression analysis.
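In symbols (notation assumed here), the daily infection index described above is

$$\mathrm{INF}_{d} = S_{d} \times F_{d}(T) \times D_{d},$$

where $S_{d}$ is the daily proportion of susceptible tissue available, $F_{d}(T)$ the temperature-dependent infection frequency during an infection event, and $D_{d}$ the spore cloud density.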
Abstract:
Calcium oxide looping is a carbon dioxide sequestration technique that utilizes the partially reversible reaction between limestone and carbon dioxide in two interconnected fluidised beds, the carbonator and the calciner. Flue gases from a combustor are fed into the carbonator, where calcium oxide reacts with the carbon dioxide in the gases at a temperature of 650 ºC. The calcium oxide is transformed into calcium carbonate, which is circulated to the regenerative calciner, where the calcium carbonate is converted back into calcium oxide and a stream of pure carbon dioxide at the higher temperature of 950 ºC. Calcium oxide looping has been shown to have a low impact on overall process efficiency and could easily be retrofitted to existing power plants. This master's thesis was carried out within the EU-funded project CaOling, as part of the Lappeenranta University of Technology deliverable on reactor modelling and scale-up tools. The thesis concentrates on creating the first model frame and identifying the physically relevant phenomena governing the process.
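The two reactions exploited by the loop, as described above, are

$$\mathrm{CaO} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} \quad \text{(carbonator, exothermic, about } 650\,^{\circ}\mathrm{C}\text{)}$$
$$\mathrm{CaCO_3} \rightarrow \mathrm{CaO} + \mathrm{CO_2} \quad \text{(calciner, endothermic, about } 950\,^{\circ}\mathrm{C}\text{)}$$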
Abstract:
The purpose of this study is to view credit risk from the financier's point of view in a theoretical framework. Results and findings of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank's internal use, or a risk management tool for a company to indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four different company categories and employs their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier's credit risk, define five significant risk classes and produce a risk classification model. The study is confidential until 15.10.2012.
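A minimal sketch of an accounting-ratio scoring model of the kind described (the ratios, weights, and class cut-offs below are illustrative placeholders, not the study's confidential model):

```python
def credit_score(ratios):
    """Weighted linear score over accounting ratios; a higher score means
    lower credit risk. Weights are illustrative placeholders."""
    weights = {
        "return_on_assets":  0.35,  # profitability
        "quick_ratio":       0.25,  # liquidity
        "equity_ratio":      0.25,  # capital structure
        "cash_flow_to_debt": 0.15,  # cash flow
    }
    return sum(weights[k] * ratios[k] for k in weights)

def risk_class(score):
    """Map a score to one of five risk classes (cut-offs assumed; 1 = lowest risk)."""
    cutoffs = [0.1, 0.2, 0.3, 0.4]
    return 1 + sum(score < c for c in cutoffs)

example = {"return_on_assets": 0.08, "quick_ratio": 1.2,
           "equity_ratio": 0.45, "cash_flow_to_debt": 0.30}
print(risk_class(credit_score(example)))
```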
Abstract:
This Master's thesis investigates the performance of the Olkiluoto 1 and 2 APROS model in the case of fast transients. The thesis includes a general description of the Olkiluoto 1 and 2 nuclear power plants and of their most important safety systems. The theoretical background of the APROS code as well as the scope and content of the Olkiluoto 1 and 2 APROS model are also described. The event sequences of the anticipated operational transients considered in the thesis are presented in detail, as they form the basis for the analysis of the APROS calculation results. The calculated fast operational transients comprise loss-of-load cases and two cases related to an inadvertent closure of one main steam isolation valve. As part of the thesis work, inaccurate initial data values found in the original 1-D reactor core model were corrected, and the input data needed for the creation of a more accurate 3-D core model were defined. The analysis of the APROS calculation results showed that while the main results were in good agreement with the measured plant data, some differences were also detected. These differences were found to be caused by deficiencies and uncertainties in the calculation model. According to the results, the reactor core and the feedwater systems cause most of the differences between the calculated and measured values. Based on these findings, it will be possible to develop the APROS model further, making it a reliable and accurate tool for the analysis of operational transients and possible plant modifications.
Abstract:
The application of Extreme Value Theory (EVT) to model the probability of occurrence of extremely low Standardized Precipitation Index (SPI) values increases our knowledge of the occurrence of extremely dry months. This sort of analysis can be carried out by means of two approaches: block maxima (BM; associated with the Generalized Extreme Value distribution) and peaks-over-threshold (POT; associated with the Generalized Pareto distribution). Each of these procedures has its own advantages and drawbacks. Thus, the main goal of this study is to compare the performance of BM and POT in characterizing the probability of occurrence of extreme dry SPI values obtained from the weather station of Ribeirão Preto-SP (1937-2012). According to the goodness-of-fit tests, both BM and POT can be used to assess the probability of occurrence of the aforementioned extreme dry monthly SPI values. However, the scalar measures of accuracy and the return level plots indicate that POT provides the better-fitting distribution. The study also indicated that the uncertainties in the parameter estimates of a probabilistic model should be taken into account when the probability associated with a severe/extreme dry event is under analysis.
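A minimal sketch of the two fitting approaches on a monthly SPI series, in Python (synthetic data stand in for the Ribeirão Preto record; scipy's genextreme and genpareto implement the GEV and Generalized Pareto distributions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spi = rng.normal(0.0, 1.0, size=12 * 76)  # synthetic monthly SPI, 76 years
dry = -spi                                 # sign-flip so dry extremes are maxima

# Block maxima: annual maxima fitted with the Generalized Extreme Value dist.
block_max = dry.reshape(-1, 12).max(axis=1)
c_gev, loc_gev, scale_gev = stats.genextreme.fit(block_max)

# Peaks-over-threshold: excesses over a high threshold, Generalized Pareto.
u = np.quantile(dry, 0.95)
excess = dry[dry > u] - u
c_gp, loc_gp, scale_gp = stats.genpareto.fit(excess, floc=0.0)

# 50-year return level from the GEV fit (one block per year)
rl_50 = stats.genextreme.ppf(1 - 1 / 50, c_gev, loc=loc_gev, scale=scale_gev)
print(f"50-year return level (GEV): {rl_50:.2f}")
```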
Abstract:
Transportation and warehousing are large and growing sectors of society, and their efficiency is of high importance. Transportation also accounts for a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions under the Kyoto protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, individual simulation approaches have weaknesses of their own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows a different method to be used to overcome the weaknesses of another. It is important to choose the correct approach (or combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (which can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. In this research it is argued that simulation-based decision support systems need to take various issues into account to constitute a functioning decision support system. The actual simulation model can be constructed using any (or multiple) approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework that simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model from the user, but at the same time the user needs to be able to run the appropriate scenarios in order to analyze the problems correctly. This study recommends that simulation modelers start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with Geographic Information Systems (GIS).
Abstract:
Combating climate change is one of the key tasks of humanity in the 21st century. One of its leading causes is carbon dioxide emissions from the use of fossil fuels. Renewable energy sources should be used instead of relying on oil, gas, and coal. In Finland, a significant amount of energy is produced from wood, and the use of wood chips is expected to increase significantly in the future, by over 60%. The aim of this research is to improve understanding of the costs of wood chip supply chains. This is done using simulation as the main research method. The simulation model combines agent-based modelling and discrete-event simulation to imitate the wood chip supply chain. This thesis concentrates on the use of simulation-based decision support systems in strategic decision-making. The simulation model is part of a decision support system which connects the model to databases and also provides a graphical user interface for the decision-maker. The main analysis conducted with the decision support system compares a traditional supply chain with a supply chain utilizing specialized containers. According to the analysis, the container supply chain achieves lower costs than the traditional supply chain. A container supply chain can also be scaled up more easily thanks to faster emptying operations. Initially, the container operations would supply only part of the fuel needs of a power plant and would complement the current supply chain. The model can be expanded to include intermodal supply chains, since, owing to increased future demand, there will not be enough wood chips located close to current and future power plants.
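A minimal sketch of the discrete-event half of such a hybrid model, comparing the delivery throughput of the two chain types (simpy is assumed as the engine; all durations are illustrative placeholders):

```python
import simpy

def truck(env, name, unload_time, completed):
    """One truck cycling between the terminal and the power plant."""
    while True:
        yield env.timeout(2.0)          # drive loaded to the plant [h]
        yield env.timeout(unload_time)  # empty at the plant [h]
        yield env.timeout(2.0)          # drive back and reload [h]
        completed[name] = completed.get(name, 0) + 1

env = simpy.Environment()
completed = {}
env.process(truck(env, "traditional", 1.00, completed))  # slow emptying
env.process(truck(env, "container",   0.25, completed))  # fast container swap
env.run(until=24 * 7)                                    # one week of operation
print(completed)  # the container chain completes more deliveries per week
```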