973 results for semi-parametric model
Abstract:
Biotechnology has been recognized as a key strategic technology for industrial growth, and the industry is heavily dependent on basic research. Finland continues to rank in the top 10 of Europe's most innovative countries in terms of tax policy, education system, infrastructure and the number of patents issued. Despite these excellent statistics, the resulting innovation output falls short of expectations. Research on the issues hindering output creation has already been done, and the identifiable weaknesses in Finland's national innovation system are the non-existent growth of entrepreneurship and the lack of internationalization. Finland has been shown to possess all the enablers addressed by innovation policy tools, but it lacks the incentives and rewards to push those enablers, such as knowledge and human capital, forward. Science parks are the largest operators among research institutes in the Finnish science and technology system. They exist to speed up the commercialization of biotechnology innovations, which typically involve technological uncertainty, technical and business inexperience, and high technology costs. Managing innovation purely internally is a rather dated approach; the current trend is towards an open innovation model with strong triple helix linkages. The evident problems in innovation management within the biotechnology industry are examined through a case study, including analysis of semi-structured interviews with biotechnology and business experts from the Turku School of Economics. The interview results supported the theoretical implications as well as the conclusions derived from a pilot survey of companies inside the Turku Science Park network. One major issue that Finland's national innovation system struggles with is that it is technology driven rather than business pulled.
Another problem is the university evaluation scale, which focuses on the number of graduates and other short-term factors when it should put more emphasis on long-term cooperation success, such as triple helix connections with interaction and knowledge distribution. The results of this thesis indicate that structural changes are indeed required in Finland's national innovation system and innovation policy in order to generate successful biotechnology companies and innovation output. There is a lack of joint outputs and shared measures of success, a lack of experienced people, a lack of language skills, a lack of business knowledge and a lack of growth companies.
Abstract:
Recently, Small Modular Reactors (SMRs) have attracted increased public discussion. While large nuclear power plant new build projects face challenges, attention is turning to small modular reactors. One particular project challenge arises in the area of nuclear licensing, which plays a significant role in new build projects, affecting their quality as well as their costs and schedules. This dissertation - positioned in the field of nuclear engineering but with a significant section in the field of systems engineering - examines nuclear licensing processes and their suitability for the characteristics of SMRs. The study investigates the licensing processes in selected countries, as well as in other safety-critical industries. Viewing the licensing processes and their separate licensing steps in terms of SMRs, the study adopts two different analysis theories for review and comparison. The primary data consist of a literature review, semi-structured interviews, and questionnaire responses concerning licensing processes and practices. The result of the study is a recommendation for a new, optimized licensing process for SMRs. The most important SMR-specific feature, in terms of licensing, is the modularity of the design. Here modularity refers to multi-module SMR designs, which create new challenges in the licensing process. As this study focuses on Finland, the main features of the new licensing process are adapted to the current Finnish licensing process, aiming to achieve the main benefits with minimal modifications to the current process. The application of the new licensing process is developed using Systems Engineering, Requirements Management, and Project Management practices and tools. Nuclear licensing involves a large amount of data and documentation which needs to be managed in a suitable manner throughout the new build project and then during the whole life cycle of the nuclear power plant.
To enable a smooth licensing process and therefore ensure the success of the new build nuclear power plant project, management processes and practices play a significant role. This study contributes to the theoretical understanding of how licensing processes are structured and how they are put into action in practice. The findings clarify the suitability of different licensing processes and their selected licensing steps for SMR licensing. The results combine the most suitable licensing steps into a new licensing process for SMRs. The results are also extended to the concept of licensing management practices and tools.
Abstract:
In this Master’s thesis, agent-based modeling is used to analyze phenomena related to maintenance strategy. The main research question was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? The main outcome of this study is thus an analysis of how profitability can be increased in an industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model answered the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine and, under certain conditions, also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in more accurate machine condition measurement systems.
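A minimal sketch of the kind of condition-based trade-off such an agent-based model can explore. The `simulate` helper, its wear distribution, and all pricing numbers are illustrative assumptions, not the thesis model:

```python
import random

def simulate(threshold, periods=200, price=20.0, seed=1):
    """Toy condition-based maintenance policy: the machine's condition
    degrades stochastically each period; when it falls to `threshold`
    the owner pays a fixed maintenance `price` and the condition is
    restored to 1.0. Owner profit = condition-dependent revenue minus
    maintenance spend."""
    rng = random.Random(seed)
    condition, profit = 1.0, 0.0
    for _ in range(periods):
        profit += 10.0 * condition            # revenue scales with condition
        condition -= rng.uniform(0.05, 0.15)  # stochastic wear
        if condition <= threshold:
            profit -= price                   # time-based pricing: fixed fee
            condition = 1.0
    return profit
```

Sweeping `threshold` over a grid locates the owner's optimal maintenance point; pricing variants (fixed fee versus a share of the owner's value) can then be compared in the same loop.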
Abstract:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered on the behaviour of the small scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial sized unit, the selection of particle size, and operability in different load scenarios.
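As a back-of-the-envelope illustration of the carbonator balance such a model must close, here is a lumped sketch with assumed symbols and bounds; the thesis's actual 1-D dynamic model resolves this along the riser height:

```python
def capture_efficiency(f_ca, x_max, f_co2, e_eq=0.95):
    """CO2 capture efficiency of a carbonator, limited either by the
    carbonation equilibrium (e_eq, illustrative value) or by the
    circulating sorbent's carrying capacity f_ca * x_max relative to
    the incoming CO2 flow.
    f_ca : CaO circulation rate [mol/s]
    x_max: average maximum carbonation conversion of the sorbent [-]
    f_co2: CO2 molar flow entering the carbonator [mol/s]"""
    return min(e_eq, f_ca * x_max / f_co2)
```

With ample sorbent circulation the equilibrium limit binds; as the sorbent deactivates (falling `x_max`) capture drops, which is why make-up lime flow and particle size matter in scale-up.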
Abstract:
BCM (Business Continuity Management) is a holistic management process aiming at ensuring business continuity and building organizational resilience. Maturity models offer organizations a tool for evaluating their current maturity in a certain process. In recent years BCM has been subject to international ISO standardization, while organizations' interest in benchmarking their state of BCM against standards, and in using maturity models for these assessments, has increased. However, although new standards have been introduced, very little research attention has been paid to reviewing the existing BCM maturity models - especially in the light of the new ISO 22301 standard for BCM. In this thesis the existing BCM maturity models are carefully evaluated to determine whether they could be improved. To accomplish this, the compliance of the existing models with the ISO 22301 standard is measured and a framework for assessing a maturity model's quality is defined. After carefully evaluating the existing frameworks for maturity model development and evaluation, the approach suggested by Becker et al. (2009) was chosen as the basis for the research. In addition to the procedural model, a set of seven research guidelines proposed by the same authors was applied, drawing on the design-science research guidelines of Hevner et al. (2004). Furthermore, the existing models' form and function were evaluated to address their usability. Based on this evaluation, the existing BCM maturity models were found to have shortcomings in each dimension of the evaluation. Utilizing the best of the existing models, a draft version of an enhanced model was developed. This draft model was then iteratively developed through six semi-structured interviews with BCM professionals in Finland, with the aim of validating and improving it.
As a result, a final version of the enhanced BCM maturity model was developed, conforming to the seven key clauses of the ISO 22301 standard and the maturity model development guidelines suggested by Becker et al. (2009).
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards to accelerate internal innovations and outwards to take unused knowledge to the external environment. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM, and Xerox suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process involving several domains of management, and its terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct, participative observation of the environment of the case company; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple factors, with a stronger influence of those related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation.
The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and assess the selection of modes. Measuring the impact of open innovation could take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
The objective of this thesis is to understand how to create, develop and systematically manage a successful place brand. The thesis explains the phenomenon of place brands and place branding and presents different sub-categories of place branding. The theoretical part provides a wide overview of the prevailing literature on place branding, place brand development and place brand management, which forms the basis of the thesis' theoretical framework. The empirical evidence is gathered from a case living area, developed by a single construction company with a significant role in the Finnish construction industry, through semi-structured in-depth interviews with the new living area's carefully selected stakeholder groups. The empirical data is then analyzed and reflected against the theoretical findings. After examining the case living area, the thesis presents a new living area branding process model based on the prevailing theories and empirical findings.
Abstract:
Searching for effective Smad3 gene-based gene therapies for hepatic fibrosis, we constructed siRNA expression plasmids targeting the rat Smad3 gene and then delivered these plasmids into hepatic stellate cells (HSCs). The effect of siRNAs on the mRNA levels of Smad2, Smad3, Smad4, and collagens I-α1, III-α1 and IV-α1 (Col1α1, Col3α1, Col4α1, respectively) was determined by RT-PCR. Eighty adult male Sprague-Dawley rats were randomly divided into three groups. Twice a week for 8 weeks, the untreated hepatic fibrosis model (N = 30) and the treated group (N = 20) were injected subcutaneously with 40% (v/v) carbon tetrachloride (CCl4)-olive oil (3 mL/kg), and the normal control group (N = 30) was injected with olive oil (3 mL/kg). In the 4th week, the treated rats were injected subcutaneously with liposome-encapsulated plasmids (150 µg/kg) into the right liver lobe under general anesthesia once every 2 weeks, and the untreated rats were injected with the same volume of buffer. At the end of the 6th and 8th weeks, liver tissue and sera were collected. Pathological changes were assessed by a semi-quantitative scoring system (SSS), and a radioimmunoassay was used to establish a serum liver fibrosis index (type III procollagen, type IV collagen, laminin, and hyaluronic acid). The mRNA expression levels of the above-cited genes were reduced in the HSCs transfected with the siRNA expression plasmids. Moreover, in the treated group, fibrosis evaluated by the SSS was significantly reduced (P < 0.05) and the serum indices were greatly improved (P < 0.01). These results suggest that Smad3 siRNA expression plasmids have an anti-fibrotic effect.
Abstract:
The hydration kinetics of the transgenic corn types flint DKB 245PRO, semi-flint DKB 390PRO, and dent DKB 240PRO were studied at temperatures of 30, 40, 50, and 67 °C. The concentrated parameters model was used, and it fit the experimental data well for all three cultivars. The chemical composition of the corn kernels was also evaluated. The corn cultivar influenced the initial rate of absorption and the equilibrium water concentration, and the dent corn absorbed more water than the other cultivars at all four temperatures analyzed. The effect of hydration on kernel texture was also studied; no significant difference was observed in the deformation force required for the three corn types after longer hydration periods.
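The concentrated-parameters (lumped) model referenced above is commonly written as a first-order approach to the equilibrium moisture content; a sketch follows, with parameter names and any example values being illustrative rather than the fitted values from the study:

```python
import math

def moisture(t, x0, xeq, k):
    """Concentrated-parameters hydration model:
    dX/dt = k * (Xeq - X)  =>  X(t) = Xeq - (Xeq - X0) * exp(-k * t)
    t  : hydration time
    x0 : initial moisture content
    xeq: equilibrium moisture content
    k  : rate constant (increases with temperature)"""
    return xeq - (xeq - x0) * math.exp(-k * t)
```

Fitting `k` and `xeq` per cultivar and temperature then quantifies the differences in initial absorption rate and equilibrium water concentration that the study reports.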
Abstract:
Phenomena in the cyber domain, especially threats to security and privacy, have become an increasingly heated topic, addressed by different writers and scholars at an increasing pace - both nationally and internationally. However, little public research has been done on the subject of cyber intelligence. The main research question of the thesis was: to what extent is the applicability of cyber intelligence acquisition methods circumstantial? The study was conducted in a sequential manner, starting with defining the concept of intelligence in the cyber domain and identifying its key attributes, followed by identifying the range of intelligence methods in the cyber domain, the criteria influencing their applicability, and the types of operatives utilizing cyber intelligence. The methods and criteria were refined into a hierarchical model. Existing conceptions of cyber intelligence were mapped through an extensive literature study of a wide variety of sources. The established understanding was further developed through 15 semi-structured interviews with experts of different backgrounds, whose wide range of points of view substantially enhanced the perspective on the subject. Four of the interviewed experts participated in a relatively extensive survey based on the constructed hierarchical model of cyber intelligence, which was formulated into an AHP hierarchy and executed in the Expert Choice Comparion online application. It was concluded that intelligence in the cyber domain is an endorsing, cross-cutting intelligence discipline that adds value to all aspects of conventional intelligence; that it bears a substantial number of characteristic traits, both advantageous and disadvantageous; and that the applicability of cyber intelligence methods is partly circumstantially limited.
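An AHP hierarchy like the one behind the Expert Choice survey reduces each pairwise comparison matrix to priority weights via its principal eigenvector. A minimal power-iteration sketch (the example matrix in the test is invented for illustration):

```python
def ahp_weights(pairwise, iters=100):
    """Priority vector of an AHP pairwise comparison matrix (a list of
    rows, where entry [i][j] says how much more important criterion i is
    than j), approximated by power iteration toward the principal
    eigenvector and normalized to sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(row[j] * w[j] for j in range(n)) for row in pairwise]
        total = sum(w)
        w = [wi / total for wi in w]   # renormalize each iteration
    return w
```

For a perfectly consistent 2x2 matrix `[[1, 2], [0.5, 1]]` the weights converge to `[2/3, 1/3]`; real expert judgments are rarely consistent, which is why AHP tools also report a consistency ratio.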
Abstract:
We study the phonon dispersion, cohesive and thermal properties of the rare gas solids Ne, Ar, Kr, and Xe, using a variety of potentials obtained from different approaches, such as fitting to crystal properties, purely ab initio calculations for molecules and dimers, ab initio calculations for the solid crystalline phase, or a combination of ab initio calculations and fitting to either gas phase data or solid state properties. We explore whether potentials derived with a certain approach have any obvious benefit over the others in reproducing the solid state properties. In particular, we study the phonon dispersion, isothermal and adiabatic bulk moduli, thermal expansion, and elastic (shear) constants as a function of temperature. Anharmonic effects on thermal expansion, specific heat, and bulk moduli have been studied using lowest-order anharmonic perturbation theory in the high temperature limit within the nearest-neighbor central force (nncf) model as developed by Shukla and MacDonald [4]. In our study, we find that potentials based on fitting to crystal properties have some advantage, particularly for Kr and Xe, in reproducing the thermodynamic properties over an extended range of temperatures, but agreement of the phonon frequencies with the measured values is not guaranteed. For the lighter element Ne, the LJ potential, which is based on fitting to gas phase data, produces the best results for the thermodynamic properties; however, the Eggenberger potential for Ne, which is based on combining ab initio quantum chemical calculations and molecular dynamics simulations, produces results in better agreement with the measured dispersion and elastic (shear) values. For Ar, the Morse-type potential, which is based on fourth-order Møller-Plesset perturbation theory (MP4) ab initio calculations, yields the best results for the thermodynamic properties, elastic (shear) constants, and phonon dispersion curves.
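Of the potentials compared, the Lennard-Jones form is the simplest; a sketch of the pair potential, where ε (well depth) and σ (length scale) are the parameters fitted to gas phase data:

```python
def lennard_jones(r, epsilon, sigma):
    """Lennard-Jones pair potential:
    V(r) = 4 * eps * ((sigma/r)**12 - (sigma/r)**6).
    The minimum V = -epsilon sits at r = 2**(1/6) * sigma; the r**-12
    term models short-range repulsion, the r**-6 term the van der Waals
    attraction that binds the rare gas crystal."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

Summing this over lattice neighbors gives the static cohesive energy; phonon frequencies follow from its second derivatives at the equilibrium spacing.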
Abstract:
Research interest in female coaches as role models has recently emerged in the coaching literature. Social learning theory (Bandura, 1963; 1977; 1986) has also emerged as an essential framework for explaining learning through modeling. Previous research has examined the coach as a role model, as well as gender differences between coaches. Several authors, reaching different conclusions, have studied the significance of gender as an influence on role modeling. Whitaker and Molstad (1988) conducted a study focusing on the coach as a role model and found that, when the results of high school and college aged athletes were combined, the female coach was considered to be a superior role model. The current research used a social learning theory framework to examine the benefits and intricacies of the modeling relationship between female adolescent athletes and influential female coaches. To accomplish this task, the formative experiences of thirteen adolescent female athletes were examined. Each athlete took part in a semi-structured interview focused on extracting the salient features of the coach that the athlete identified as most influential in her personal development. The data from these interviews were qualitatively analyzed using case studies, from which a template emerges in which the coach/athlete relationship can be seen as an essential construct through which caring and strong role models can have lasting effects on the lives, values, and successes of adolescent female athletes.
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
Abstract:
This paper studies a dynamic-optimizing model of a semi-small open economy with sticky nominal prices and wages. The model exhibits exchange rate overshooting in response to money supply shocks. The predicted variability of nominal and real exchange rates is roughly consistent with that of G7 effective exchange rates during the post-Bretton Woods era.
Abstract:
Financial assets are often modeled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset, and sometimes also of certain model parameters. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is very attractive because it admits semi-analytical formulas for certain derivatives, as well as a degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These algorithms aim to accelerate the well-known algorithm of Broadie and Kaya (2006); to do so, we use, among other tools, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to accelerate it: instead of using the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm improves on the first: instead of using the true density of the integrated variance, we use Smith's (2007) approximation. This improvement reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method, but still seeks to accelerate the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance. According to Stewart et al. (2007), a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) can be approximated by a single gamma random variable.
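The core step of the third algorithm is gamma moment matching; a minimal sketch, where the target mean and variance of the time-integrated variance would come from the Heston model's known conditional moments:

```python
def gamma_match(mean, var):
    """Match a Gamma(shape, scale) distribution to a target mean and
    variance. Since mean = shape * scale and var = shape * scale**2,
    it follows that shape = mean**2 / var and scale = var / mean."""
    shape = mean * mean / var
    scale = var / mean
    return shape, scale
```

Sampling the integrated variance then reduces to one `random.gammavariate(shape, scale)` draw per step, avoiding the characteristic-function inversion of the original Broadie-Kaya scheme.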