914 results for Dynamic analysis


Relevance: 30.00%

Abstract:

This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools. Techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and was used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered on the behaviour of the small scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial sized unit, the selection of particle size, and operability in different load scenarios.
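The abstract does not give the model equations; as a minimal sketch of the kind of semi-empirical carbonation rate law and CO2 balance such a carbonator model couples, the snippet below assumes an apparent first-order rate limited by a maximum sorbent conversion. All symbols and parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Illustrative, assumed parameters (not from the thesis)
k_s   = 0.35   # apparent carbonation rate constant, 1/s
X_max = 0.30   # maximum sorbent conversion after repeated cycling, -
y_CO2 = 0.12   # CO2 mole fraction entering the carbonator, -
y_eq  = 0.01   # equilibrium CO2 mole fraction at carbonator temperature, -
F_CaO = 50.0   # molar flow of circulating CaO, mol/s
F_CO2 = 10.0   # molar flow of CO2 entering, mol/s
tau   = 60.0   # mean solids residence time in the carbonator, s

def carbonation_conversion(t_res):
    """Sorbent conversion X(t) from dX/dt = k_s * (X_max - X) * (y_CO2 - y_eq)."""
    return X_max * (1.0 - np.exp(-k_s * (y_CO2 - y_eq) * t_res))

# Simple steady-state CO2 balance: capture limited by sorbent uptake and inlet CO2
X = carbonation_conversion(tau)
co2_captured = min(F_CaO * X, F_CO2)          # mol/s of CO2 bound as CaCO3
capture_efficiency = co2_captured / F_CO2

print(f"Sorbent conversion X = {X:.3f}")
print(f"CO2 capture efficiency = {capture_efficiency:.2%}")
```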

Relevance: 30.00%

Abstract:

Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific to tumours and can also accumulate in inflammatory lesions as well as in normal physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time-activity curves in a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammation, tumour and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time-activity curves of voxels in these areas and compared them with the widely used static parameter, the standardized uptake value (SUV), using the Gaussian mixture model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If further explored, the proposed methods could prove useful in reducing false-positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
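As an illustration only (the thesis' actual features and pipeline are not specified in the abstract), the sketch below clusters simple dynamic features derived from voxel time-activity curves with scikit-learn's GaussianMixture and KMeans; the feature choices (area under the curve, late-frame slope, peak uptake) and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for dynamic PET data: one time-activity curve (TAC) per voxel
n_voxels, n_frames = 500, 20
t = np.linspace(0, 60, n_frames)                 # minutes post-injection
tacs = rng.random((n_voxels, 1)) * np.log1p(t) + rng.normal(0, 0.05, (n_voxels, n_frames))

# Simple dynamic features per voxel (illustrative choices)
auc        = np.trapz(tacs, t, axis=1)                                   # area under the TAC
late_slope = (tacs[:, -1] - tacs[:, n_frames // 2]) / (t[-1] - t[n_frames // 2])
peak       = tacs.max(axis=1)                                            # static, SUV-like surrogate

features = StandardScaler().fit_transform(np.column_stack([auc, late_slope, peak]))

# Cluster voxels into (putatively) metabolically different groups
gmm_labels    = GaussianMixture(n_components=3, random_state=0).fit_predict(features)
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

print(np.bincount(gmm_labels), np.bincount(kmeans_labels))
```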

Relevance: 30.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
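The following toy sketch (in Python, not RVC-CAL, and not the thesis' actual tooling) illustrates the basic dataflow notions the text relies on: actors that fire only when their firing rule is satisfied, FIFO queues as the only communication, and a quasi-static schedule expressed as a precomputed firing sequence guarded by run-time firing rules.

```python
from collections import deque

class Actor:
    """A dataflow actor: consumes tokens from input FIFOs, produces to output FIFOs."""
    def __init__(self, name, inputs, outputs, consume, produce, fn):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.consume, self.produce, self.fn = consume, produce, fn

    def can_fire(self):
        # Firing rule: enough tokens available on every input port
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def fire(self):
        args = [[q.popleft() for _ in range(n)] for q, n in zip(self.inputs, self.consume)]
        results = self.fn(*args)
        for q, tokens in zip(self.outputs, results):
            q.extend(tokens)

# Network: source -> q1 -> doubler -> q2 -> sink
q1, q2 = deque(), deque()
source  = Actor("source",  [],   [q1], [],  [2], lambda: ([1, 2],))
doubler = Actor("doubler", [q1], [q2], [1], [1], lambda xs: ([2 * xs[0]],))
sink    = Actor("sink",    [q2], [],   [1], [],  lambda xs: print("out:", xs[0]) or ())

# A quasi-static schedule: one precomputed static firing sequence, guarded at
# run time only by the firing rules that cannot be resolved statically.
static_schedule = [source, doubler, doubler, sink, sink]
for actor in static_schedule:
    if actor.can_fire():
        actor.fire()
```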

Relevance: 30.00%

Abstract:

Due to rising total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has recently been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances which lead to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, the Finite Element Analysis and the Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the structure for the stator of an outer rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines, is studied.
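As a hedged illustration of the modal analysis underlying such a study (the thesis' actual finite element models are far larger), the undamped natural frequencies follow from the generalized eigenvalue problem K·φ = ω²·M·φ; the 2-DOF mass and stiffness values below are arbitrary assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.linalg import eigh

# Arbitrary 2-DOF lumped model (illustrative values, not from the thesis)
M = np.diag([2.0, 1.0])                 # mass matrix, kg
K = np.array([[ 6.0e4, -2.0e4],
              [-2.0e4,  2.0e4]])        # stiffness matrix, N/m

# Generalized eigenvalue problem  K phi = omega^2 M phi
eigvals, eigvecs = eigh(K, M)           # eigvals = omega^2, sorted ascending
omega = np.sqrt(eigvals)                # natural angular frequencies, rad/s
freqs = omega / (2 * np.pi)             # natural frequencies, Hz

for i, (f, phi) in enumerate(zip(freqs, eigvecs.T), start=1):
    print(f"Mode {i}: {f:7.2f} Hz, shape {phi / np.max(np.abs(phi))}")
```

In an experimental modal analysis the same frequencies and mode shapes would instead be identified from measured frequency response functions, and the comparison with the numerical model is what validates the layered-sheet representation.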

Relevance: 30.00%

Abstract:

Emerging markets have come to play a significant role in the world, not only due to their strong economic growth but also because they have been able to foster an increasing number of innovative, high-technology-oriented firms. However, as these markets continue to change and develop, many companies in emerging markets still struggle with their competitiveness and innovativeness. To improve competitive capabilities, many scholars have come to favor interfirm cooperation, which is perceived to help companies access new knowledge and complementary resources and, by so doing, enable them to catch up quickly with Western competitors. Despite numerous attempts by strategic management scholars, the research field remains very fragmented and lacks understanding of how and when interfirm cooperation contributes to firm performance and competitiveness in emerging markets. Furthermore, the reasons why interfirm R&D sometimes succeeds and fails at other times frequently remain unidentified. This thesis combines the extant literature on competitive and cooperative strategy, dynamic capabilities, and R&D cooperation while studying interfirm R&D relationships in and between Russian manufacturing companies. Employing primary survey data, the thesis presents numerous novel findings regarding the effect of R&D cooperation and of different types of R&D partner on firms' exploration and exploitation performance. Utilizing a competitive strategy framework enables these effects to be explained in more detail, and especially why interfirm cooperation, regardless of its potential, has had only a modest effect on the general competitiveness of emerging market firms. This thesis contributes especially to the strategic management literature and presents a more holistic perspective on the usefulness of cooperative strategy in emerging markets. It provides a framework through which it is possible to assess the potential impacts of different R&D cooperation partners and to clarify the causal relationships between cooperation, performance, and long-term competitiveness.

Relevance: 30.00%

Abstract:

In today’s global industrial service business, markets are dynamic, and finding new ways of creating value for customers has become more and more challenging. Customer orientation is needed because of the demanding after-sales business, which is both quickly changing and stochastic in nature. In the after-sales business, customers require fast and reliable service for their spare part needs. The objective of this thesis is to clarify this challenging after-sales business environment and to find ways to increase customer satisfaction through a balanced measurement system that helps to identify possible targets for reducing order cycle times in the Spare Part Supply business line of Outotec (Filters), a large metals and minerals company. In the case study, internal documents, data and numerical calculations, together with qualitative interviews with persons in key roles in the Spare Part Supply organization, are used to analyze the performance of different processes in the spare parts delivery function. The chosen performance measurement tool is the Balanced Scorecard, which is slightly modified to better suit a lead time study from the customer's perspective. The findings show that many different processes in spare parts supply face various challenges in achieving the desired lead time levels, and that these processes' problems tend to accumulate. The findings also show that putting effort into supply-side challenges and the visibility of information flows should give the best results.

Relevance: 30.00%

Abstract:

This study aimed to examine the time course of endothelial function after a single handgrip exercise session combined with blood flow restriction in healthy young men. Nine participants (28±5.8 years) completed a single session of bilateral dynamic handgrip exercise (20 min with 60% of the maximum voluntary contraction). To induce blood flow restriction, a cuff was placed 2 cm below the antecubital fossa in the experimental arm. This cuff was inflated to 80 mmHg before initiation of exercise and maintained through the duration of the protocol. The experimental arm and control arm were randomly selected for all subjects. Brachial artery flow-mediated dilation (FMD) and blood flow velocity profiles were assessed using Doppler ultrasonography before initiation of the exercise, and at 15 and 60 min after its cessation. Blood flow velocity profiles were also assessed during exercise. There was a significant increase in FMD 15 min after exercise in the control arm compared with before exercise (64.09%±16.59%, P=0.001), but there was no change in the experimental arm (-12.48%±12.64%, P=0.252). FMD values at 15 min post-exercise were significantly higher for the control arm in comparison to the experimental arm (P=0.004). FMD returned to near baseline values at 60 min after exercise, with no significant difference between arms (P=0.424). A single handgrip exercise bout provoked an acute increase in FMD 15 min after exercise, returning to near baseline values at 60 min. This response was blunted by the addition of an inflated pneumatic cuff to the exercising arm.
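For reference, flow-mediated dilation is conventionally expressed as the percentage increase in brachial artery diameter from baseline to post-stimulus peak; this is the standard definition rather than anything specific to this study, and the diameters in the sketch below are assumed for illustration.

```python
def fmd_percent(baseline_diameter_mm: float, peak_diameter_mm: float) -> float:
    """Flow-mediated dilation as the percent change from baseline artery diameter."""
    return 100.0 * (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm

# Illustrative, assumed diameters in mm (not data from the study)
print(f"FMD = {fmd_percent(4.00, 4.28):.1f}%")   # -> FMD = 7.0%
```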

Relevance: 30.00%

Abstract:

Gravitational phase separation is a common unit operation found in most large-scale chemical processes. The need for phase separation can arise, for example, from product purification or the protection of downstream equipment. In gravitational phase separation, the phases separate without the application of an external force. This is achieved in vessels where the flow velocity is lowered substantially compared to pipe flow. If the velocity is low enough, the denser phase settles towards the bottom of the vessel while the lighter phase rises. To find optimal configurations for gravitational phase separator vessels, several different geometrical and internal design features were evaluated based on simulations using the OpenFOAM computational fluid dynamics (CFD) software. The studied features included inlet distributors, vessel dimensions, demister configurations and gas phase outlet configurations. Simulations were conducted as single-phase steady-state calculations. For comparison, additional simulations were performed as dynamic single- and two-phase calculations. The steady-state single-phase calculations provided indications of the preferred configurations for most of the above-mentioned features. The results of the dynamic simulations supported the use of the computationally faster steady-state model as a practical engineering tool. However, the two-phase model provides more realistic results, especially for flows where a single phase does not determine the flow characteristics.
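As a rough, hedged sketch of the sizing logic behind such vessels (not the CFD models used in the work), the terminal settling velocity of a small droplet in the Stokes regime is v_t = g·d²·(ρ_p − ρ_f)/(18·μ), and the continuous-phase velocity must be low enough for the droplet to settle within the vessel. All fluid properties and dimensions below are illustrative assumptions.

```python
g = 9.81  # m/s^2

def stokes_settling_velocity(d, rho_p, rho_f, mu):
    """Terminal settling velocity of a sphere in the Stokes regime (Re < ~1), m/s."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# Illustrative assumptions: 200 micron water droplets settling in a light oil
d, rho_p, rho_f, mu = 200e-6, 998.0, 850.0, 5e-3
v_t = stokes_settling_velocity(d, rho_p, rho_f, mu)

# Reynolds number check for Stokes-regime validity
Re = rho_f * v_t * d / mu
print(f"v_t = {v_t * 1000:.2f} mm/s, Re = {Re:.3f}")

# Horizontal separator: a droplet must settle a height H while the continuous
# phase traverses a length L at velocity u  ->  require  u <= v_t * L / H
H, L = 1.0, 4.0
u_max = v_t * L / H
print(f"Maximum allowable horizontal velocity: {u_max * 1000:.2f} mm/s")
```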

Relevance: 30.00%

Abstract:

This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering the 1995 Ebola outbreak in the Democratic Republic of Congo, the former Zaïre, as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation behind choosing MCMC over other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in the deterministic Ebola model, we compute the likelihood function by the sum of squared residuals and estimate the parameters using the LSQ and MCMC methods. We sample the parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the sampled posterior chain, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors. Moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different; (2) the model error covariance matrix differs from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions show larger uncertainties, in accordance with the assumption of an incomplete model.
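A minimal sketch of the deterministic SEIR backbone such a study builds on is shown below; the parameter values are illustrative assumptions, not the estimates obtained in the thesis. For this standard formulation the basic reproduction number is R0 = β/γ, and a sum-of-squares residual of the kind mentioned in the abstract can be formed against observed onset data.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    """Deterministic SEIR ODEs."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE =  beta * S * I / N - sigma * E
    dI =  sigma * E - gamma * I
    dR =  gamma * I
    return dS, dE, dI, dR

# Illustrative assumed parameters (not the thesis' MCMC estimates)
N     = 5_000_000
beta  = 0.35      # transmission rate, 1/day
sigma = 1 / 10    # 1 / mean incubation period, 1/day
gamma = 1 / 7     # 1 / mean infectious period, 1/day

y0 = (N - 1, 0, 1, 0)
t  = np.linspace(0, 200, 201)
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T

R0 = beta / gamma
print(f"R0 = {R0:.2f}, epidemic peak: {I.max():.0f} infectious on day {t[I.argmax()]:.0f}")

def ssq_residuals(observed_incidence, modelled_incidence):
    """Sum-of-squared-residuals objective used to form a Gaussian-type likelihood."""
    return np.sum((observed_incidence - modelled_incidence) ** 2)
```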

Relevance: 30.00%

Abstract:

The purpose of this qualitative research is to study how international new ventures (INVs) change internally during initial internationalization. Based on the analysis of seven INV firms, a framework illustrating this change process is developed. This research also builds on earlier theories and creates a solid combination of existing theories to explain the phenomenon. INV firms internationalize more rapidly and aggressively than traditional MNEs. At the same time, external and internal drivers cause changes in INVs' culture, resources, capabilities, strategic management, and output decisions inside the company. Organizational learning and resource acquisition through international business networks explain how INVs are able to cope with the dynamic high-technology industry and adapt. The internationalization of INVs proceeds through several phases, which may be passed through rapidly due to network effects and INVs' special characteristics. The results of this research revealed that the INVs' internal change process proceeds through four phases: the pre-incorporation phase, the product development phase, the internationalization and growth phase, and the maturation phase. INVs' culture, resources, capabilities, strategic management, and outputs change significantly during initial internationalization, and INVs develop from small start-ups into fully established companies.

Relevance: 30.00%

Abstract:

This article evaluates the impacts of the tariffs imposed on Brazilian soluble coffee, mainly by European countries, as of the 1990s. More particularly, it verifies whether the imposition of discriminatory trade tariffs by the European Union, and of non-discriminatory ones by some Eastern European countries, is reflected in the international demand for this commodity. For this purpose, dynamic models of global demand for Brazilian soluble coffee were estimated for the 1995-2003 period using data from the International Coffee Organization. The findings suggest that the existing tariffs significantly account for the reduction of the Brazilian share of soluble coffee in the world market.
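The exact specification is not given in the abstract; as a hedged sketch of a typical dynamic (partial-adjustment) demand equation with a lagged dependent variable and a tariff-inclusive price term, the snippet below uses statsmodels on synthetic data. The variable names, coefficients and data are assumptions, not the article's model or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Synthetic quarterly data standing in for 1995-2003 importer-level observations
n = 36
price  = rng.uniform(1.5, 3.0, n)           # FOB price of Brazilian soluble coffee
tariff = rng.choice([0.0, 0.09, 0.12], n)   # ad valorem tariff rate
income = np.linspace(100, 130, n)           # importer income index
log_q = np.empty(n)
log_q[0] = 5.0
for t in range(1, n):                       # assumed data-generating process
    log_q[t] = (1.5 + 0.6 * log_q[t - 1] - 0.8 * np.log(price[t] * (1 + tariff[t]))
                + 0.3 * np.log(income[t]) + rng.normal(0, 0.05))

df = pd.DataFrame({"log_q": log_q,
                   "log_q_lag": np.r_[np.nan, log_q[:-1]],
                   "log_price_tariff": np.log(price * (1 + tariff)),
                   "log_income": np.log(income)}).dropna()

X = sm.add_constant(df[["log_q_lag", "log_price_tariff", "log_income"]])
model = sm.OLS(df["log_q"], X).fit()
print(model.params)  # short-run price elasticity; long-run = beta_price / (1 - beta_lag)
```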

Relevance: 30.00%

Abstract:

The increasing amount of electricity production based on renewable energy sources has set high load control requirements for power grid balance markets. The essential grid balance between electricity consumption and generation is currently hard to achieve economically with new-generation solutions. Therefore, conventional combustion-based power generation is examined in this thesis as a solution to the foregoing issue. Circulating fluidized bed (CFB) technology is known to have sufficient scale to act as a large grid balancing unit. Although the load change rate of a CFB unit is known to be moderately high, a supplementary repowering solution is evaluated in this thesis to maximize the load change rate. The repowering heat duty is delivered to the CFB feed water preheating section by a smaller gas turbine (GT) unit. Consequently, steam extraction preheating may be decreased and a large amount of the gas turbine exhaust heat may be utilized in the CFB process to reach the maximum plant electrical efficiency. Earlier studies of repowering have focused on efficiency improvements and retrofitting to maximize plant electrical output. This study, however, presents the CFB load change improvement possibilities achieved with supplementary GT heat. The repowering study is prefaced with a literature and theory review of both processes to maximize the accuracy of the research. Both dynamic and steady-state simulations performed with the APROS simulation tool are used to evaluate the effects of repowering on CFB unit operation. Finally, a conceptual-level analysis is completed to compare the repowered plant performance to state-of-the-art CFB performance. Based on the performed simulations, considerable improvements to the CFB process parameters are achieved with repowering. Consequently, the results show the potential for higher ramp rates with repowered CFB technology, which makes the plant better suited to the grid balance markets.
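As a back-of-the-envelope sketch of the repowering idea (the numbers are illustrative assumptions, not results from the APROS simulations): the GT exhaust heat recovered into feed water preheating displaces extraction steam, which can then remain in the steam turbine and contribute to a larger, faster load response.

```python
# Illustrative, assumed values (not from the thesis or its APROS models)
m_exh     = 120.0   # GT exhaust mass flow, kg/s
cp_exh    = 1.1     # exhaust specific heat, kJ/(kg K)
T_exh_in  = 520.0   # exhaust temperature into the preheater train, deg C
T_exh_out = 160.0   # exhaust temperature after heat recovery, deg C

# Heat duty recovered from the GT exhaust into feed water preheating
Q_recovered = m_exh * cp_exh * (T_exh_in - T_exh_out)        # kW

# Extraction steam displaced: Q / (h_extraction - h_drain), with assumed enthalpies
h_extraction = 3050.0  # kJ/kg, extraction steam
h_drain      = 700.0   # kJ/kg, heater drain condensate
m_extraction_saved = Q_recovered / (h_extraction - h_drain)  # kg/s

print(f"Recovered heat duty: {Q_recovered / 1000:.1f} MW")
print(f"Extraction steam displaced: {m_extraction_saved:.1f} kg/s")
```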

Relevance: 30.00%

Abstract:

This study presents an understanding of how a U.S.-based, international MBA school has been able to achieve competitive advantage within a relatively short period of time. A framework is built to comprehend how the dynamic capability and value co-creation theories are connected, and to understand how dynamic capabilities have enabled value co-creation to happen between the school and its students, leading to such a competitive advantage for the school. The data collection method followed a qualitative single-case study with a process perspective. Seven semi-structured interviews were conducted in September and October of 2015; one interviewee was a current employee of the MBA school, and the other six were graduates and/or former employees of the school. In addition, the researcher has worked as a recruiter at the MBA school, which helped to build bridges between the empirical findings and form them into a coherent whole. Data analysis was conducted by first identifying themes from the interviews, after which a narrative was written and a causal network model was built. Thus, a combination of thematic analysis, narrative and grounded theory was used as the data analysis method. This study finds that value co-creation is enabled by the dynamic capabilities of the MBA school; conversely, the capabilities would not be dynamic if value co-creation did not take place. Thus, this study shows that even though the two theories represent different levels of analysis, they are intertwined and together they can help to explain competitive advantage. The MBA case school's dynamic capabilities are identified as its sales & marketing capabilities and its international market creation capabilities. The study thus finds that the MBA school does not only co-create value with existing students (customers) in the school setting; instead, most of the value co-creation happens between the school and the student cohorts (network) already in the recruiting phase. Therefore, as a theoretical implication, the network should be considered part of the context. The main value created seems to lie in the MBA case school's international setting and networks. MBA schools around the world can learn from this study: schools should try to find their own niche and specialize, based on their own values and capabilities. With a differentiating focus and unique, practical content, schools can and should be well marketed and proactively sold in order to receive more student applications and enhance competitive advantage. Even though an MBA school can effectively be treated as a business, as the study shows, the main emphasis should still be on providing quality education. Good content combined with efficient marketing can be the winning combination for an MBA school.

Relevance: 30.00%

Abstract:

Our surrounding landscape is in a constantly dynamic state, but recently the rate of change and its effects on the environment have increased considerably. In terms of the impact on nature, this development has not been entirely positive; rather, it has caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for stopping this detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to the difficulty of detecting many valuable landscape components, and their consequent neglect. To support knowledge extraction, various digital environmental data sources may be of substantial help, but only if all the relevant background factors are known and the data is processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components by using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline in its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological, which allows the obtained results to be applied in various contexts. Models based on statistical dependencies and correlations of multiple variables, which are able to extract desired properties from a large mass of initial data, are emphasized in the dissertation. In addition, the included papers combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics as well as pointing out their temporal dynamics. The results of the dissertation emphasize the multidimensionality and dynamics of landscapes, which need to be understood in order to recognize their ecologically valuable components. This not only requires knowledge about the emergence of these components and an understanding of the data used, but also a focus of the observations on minute details that are able to indicate the existence of fragmented and partly overlapping landscape targets. In addition, this pinpoints the fact that most of the existing classifications are, as such, too generalized to provide all the required details, although they can be utilized at various steps along a longer processing chain. The dissertation also emphasizes the importance of landscape history as a factor which both creates and preserves ecological values, and which sets an essential standpoint for understanding present landscape characteristics. The obtained results are significant both for the preservation of semi-natural grasslands and for general methodological development, giving support to a science-based framework for evaluating ecological values and guiding spatial planning.
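The dissertation's models are described only at a general level; as a hedged illustration of the kind of multi-variable statistical model the text refers to, the sketch below trains a random forest on raster-derived predictors to flag potential semi-natural grassland cells. The predictor names, labels and data are assumptions, not the dissertation's variables or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Synthetic stand-ins for raster-derived predictors per grid cell (assumed variables)
n = 2000
ndvi_summer  = rng.uniform(0.2, 0.9, n)      # vegetation index from satellite imagery
slope_deg    = rng.uniform(0.0, 25.0, n)     # terrain slope from a DEM
dist_to_farm = rng.uniform(0.0, 5000.0, n)   # distance to nearest farmstead, m
grazed_1960s = rng.integers(0, 2, n)         # historical land-use indicator

# Synthetic label: "valuable grassland" more likely on historically grazed, open slopes
p = 1 / (1 + np.exp(-(2.0 * grazed_1960s - 0.002 * dist_to_farm + 0.05 * slope_deg - 1.0)))
label = rng.random(n) < p

X = np.column_stack([ndvi_summer, slope_deg, dist_to_farm, grazed_1960s])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.3, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```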

Relevance: 30.00%

Abstract:

Transmission system operators and distribution system operators are experiencing new challenges in terms of reliability, power quality, and cost efficiency. Although the potential of energy storage to face those challenges is recognized, the economic implications are still obscure, which introduces risk into the business models. This thesis aims to investigate the technical and economic value indicators of lithium-ion battery energy storage systems (BESS) in grid-scale applications. In order to do that, a comprehensive lithium-ion BESS performance model with estimation of degradation effects is developed. The model development process involves a literature review on lifetime modelling, the use and modification of progress from a previous study, the building of additional system parts, and their integration into a complete tool. The constructed model is capable of describing the dynamic behaviour of the BESS voltage, state of charge, temperature and capacity loss. In addition to the model, five control strategies for a BESS unit providing primary frequency regulation are implemented. Questions related to BESS dimensioning and the end-of-life (EoL) criterion are also addressed. Simulations are performed with one month of real frequency data acquired from Fingrid. The lifetime and cost-benefit analysis of the simulation results allows the control strategies to be compared and the preferable one to be determined. Finally, the study performs a sensitivity analysis of economic profitability with varying size, EoL criterion and system price. The research reports that a BESS can be profitable in certain cases and presents recommendations.
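A minimal sketch of the kind of primary frequency regulation control and state-of-charge bookkeeping such a BESS model includes is given below; the droop constant, deadband, capacity and frequency signal are illustrative assumptions, not the thesis' tuned strategies or Fingrid data.

```python
import numpy as np

# Illustrative assumptions (not the thesis' parameters or Fingrid data)
E_rated   = 1.0         # usable energy capacity, MWh
P_rated   = 1.0         # rated power, MW
f_nominal = 50.0        # Hz
deadband  = 0.05        # Hz
full_activation = 0.5   # Hz deviation at which full power is requested
dt_h      = 1.0 / 3600  # one-second time step, in hours

rng = np.random.default_rng(7)
freq = f_nominal + np.cumsum(rng.normal(0, 0.002, 3600))  # synthetic 1-h frequency trace

soc = 0.5
soc_log = []
for f in freq:
    df = f - f_nominal
    if abs(df) <= deadband:                    # inside deadband: no activation
        p = 0.0
    else:                                      # linear droop up to full activation
        p = -np.clip(df / full_activation, -1.0, 1.0) * P_rated
    # Positive p = discharge (under-frequency support); update the state of charge
    soc = np.clip(soc - p * dt_h / E_rated, 0.0, 1.0)
    soc_log.append(soc)

print(f"Final SoC after one hour of regulation: {soc_log[-1]:.3f}")
```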