910 results for Process models


Relevance:

30.00%

Publisher:

Abstract:

One important metaphor drawn from biological theories and used to investigate organizational and business strategy issues is the metaphor of heredity; an area requiring further investigation is the extent to which the characteristics of blueprints inherited from the parent organization help explain the subsequent development of spawned ventures. To shed light on the tension between inherited patterns and the new trajectory that may characterize spawned ventures' development, we propose a model aimed at investigating which blueprint elements might exert an effect on business model design choices and to what extent their persistence (or abandonment) determines subsequent business model innovation. Under the assumption that academic and corporate institutions transmit different genes to their spin-offs, we expect heterogeneity in the elements that affect business model design choices and their subsequent evolution. For this reason, we carry out a twofold analysis in the biotech (meta)industry: under a multiple-case research design, the business model, and especially the fundamental design elements and themes that scholars have identified to decompose the construct, are thoroughly analysed. Our purpose is to isolate the dimensions of the business model that may have been the object of legacy and those along which an experimentation and learning process is more likely to happen, bearing in mind that differences between academic and corporate spin-offs might not be as evident as expected, especially considering that business model innovation may occur.

Relevance:

30.00%

Publisher:

Abstract:

The market's challenges lead firms to collaborate with other organizations in order to create joint ventures, alliances, and consortia, which are defined as "interorganizational networks" (IONs) (Provan, Fish and Sydow, 2007). Some of these IONs are managed through shared participant governance (Provan and Kenis, 2008): a team composed of entrepreneurs and/or directors of each firm in the ION. The research focuses on this kind of management team and is based on an input-process-output model: input variables (work group diversity, intra-team friendship network density) have a direct influence on the process (team identification, shared leadership, interorganizational trust, team trust, and intra-team communication network density), which in turn influences the team outputs, namely individual innovation behaviours and team effectiveness (team performance, work group satisfaction, and ION affective commitment). Data were collected on a sample of 101 entrepreneurs grouped in 28 ION governance teams, and the research hypotheses were tested through path analysis and multilevel models. As expected, trust in the team and shared leadership are positively and directly related to team effectiveness, while team identification and interorganizational trust are indirectly related to the team outputs. Friendship network density among team members has positive effects on trust in the team and on communication network density and, through communication network density, it also improves the teammates' ION affective commitment. Shared leadership and its effects on team effectiveness are fostered by higher levels of team identification and weakened by higher levels of work group diversity, specifically gender diversity. Finally, communication network density and shared leadership at the individual level are related to the frequency of individual innovative behaviours. The dissertation's results give a broader and more precise indication of how to manage interfirm networks through "shared" forms of governance.
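
As a minimal illustrative sketch (not the dissertation's actual code), one mediated path of this input-process-output chain — friendship density → communication density → ION affective commitment — could be estimated with two regressions; the variable names and data below are invented placeholders for the study's survey measures and its path-analysis and multilevel machinery.

```python
# Hypothetical sketch of one path in the input-process-output model:
# friendship density -> communication density -> ION affective commitment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 101
friendship = rng.normal(size=n)
communication = 0.5 * friendship + rng.normal(size=n)   # process variable
commitment = 0.4 * communication + rng.normal(size=n)   # output variable
df = pd.DataFrame({"friendship": friendship,
                   "communication": communication,
                   "commitment": commitment})

# Step 1: input -> process (the a-path).
a_path = smf.ols("communication ~ friendship", data=df).fit()
# Step 2: process -> output, controlling for the input (the b-path).
b_path = smf.ols("commitment ~ communication + friendship", data=df).fit()

indirect = a_path.params["friendship"] * b_path.params["communication"]
print(f"indirect (mediated) effect: {indirect:.3f}")
```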

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events arise in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate, and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation are proposed to justify the observations. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damaged area are proposed and tested. The first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
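
As an illustrative sketch of the evolutionary idea (not the thesis's calibrated method), the fragment below updates a magnitude estimate as the P-wave window expands, using a generic scaling law with hypothetical coefficients.

```python
# Illustrative evolutionary magnitude estimate: re-estimate M as the P-wave
# window expands. The scaling law M = A*log10(Pd) + B*log10(R) + C and its
# coefficients are hypothetical placeholders, not the thesis's calibration.
import numpy as np

A, B, C = 1.0, 1.4, 5.9          # hypothetical scaling coefficients

def evolutionary_magnitude(displacement, dt, hypocentral_km, windows=(2, 4, 6, 8)):
    """Return (window length, magnitude estimate) pairs for expanding windows."""
    estimates = []
    for w in windows:
        n = int(w / dt)
        pd_peak = np.max(np.abs(displacement[:n]))     # peak displacement so far
        m = A * np.log10(pd_peak) + B * np.log10(hypocentral_km) + C
        estimates.append((w, m))
    return estimates

# Synthetic record standing in for real data (100 Hz, displacement in metres).
dt = 0.01
t = np.arange(0, 10, dt)
trace = 1e-3 * t * np.sin(2 * np.pi * t)               # growing displacement pulse

for window, mag in evolutionary_magnitude(trace, dt, hypocentral_km=100.0):
    print(f"{window} s window -> M ~ {mag:.2f}")
```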

Relevance:

30.00%

Publisher:

Abstract:

Our research asked two main questions: how do the characteristics of professional service firms allow them to innovate successfully, exploiting through exploring, by combining internal and external factors of innovation, and how do these ambidextrous organisations perceive these factors; and how do successful innovators in professional service firms use corporate entrepreneurship models in their new service development processes? With the goal of shedding light on innovation in professional knowledge-intensive business service firms (PKIBS), we conducted a qualitative analysis of ten globally acting law firms providing business legal services. We analyse the internal and external factors of innovation that are critical for PKIBS' innovation, and we suggest how these firms become ambidextrous in a changing environment. Our findings show that these firms have a particular type of ambidexterity due to their specific characteristics. As PKIBS are highly dependent on their human capital, their governance structure, and the high expectations of their clients, their ambidexterity is structural and contextual at the same time. In addition, we suggest three types of corporate entrepreneurship model that international PKIBS use to enhance innovation in turbulent environments. We looked at how law firms going through turbulent environments were using corporate entrepreneurship activities as part of their strategies to be more innovative. Using a visual mapping methodology, we identified three types of innovation pattern in the law firms. We suggest that corporate entrepreneurship models depend on the successful application of three main elements: who participates in corporate entrepreneurship initiatives; the formal processes that enhance these initiatives; and the policies applied to this type of behaviour.

Relevance:

30.00%

Publisher:

Abstract:

Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark of much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation, and financial markets. The empirical failure of New Keynesian models is partially due to the Rational Expectations (RE) paradigm, which entails a tight structure on the dynamics of the system. Under this hypothesis, the agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm, which can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the lag length of the reduced form is the same as in the 'best' statistical model.
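
In assumed notation (not necessarily the paper's), the construction can be sketched as follows for the case where the agents' statistical model is a VAR(k):

```latex
% Sketch under assumed notation: x_t stacks the observables of the
% Euler-equation system, e.g. output gap, inflation, policy rate.
\[
  A\,x_t \;=\; B\,\mathbb{E}_t x_{t+1} \;+\; C\,x_{t-1} \;+\; u_t
  \qquad \text{(baseline structural form)}
\]
% Agents' statistical model: a VAR(k), with implied one-step-ahead forecast.
\[
  x_t \;=\; \Phi_1 x_{t-1} + \dots + \Phi_k x_{t-k} + \varepsilon_t ,
  \qquad
  \mathbb{E}^{\mathrm{QRE}}_t x_{t+1} \;=\; \Phi_1 x_t + \dots + \Phi_k x_{t-k+1} .
\]
% Substituting the VAR forecast for the RE term yields the pseudo-structural form
\[
  (A - B\,\Phi_1)\,x_t \;=\; (C + B\,\Phi_2)\,x_{t-1} + B\,\Phi_3\,x_{t-2}
  + \dots + B\,\Phi_k\,x_{t-k+1} + u_t ,
\]
% whose reduced-form dynamics are inherited from the agents' statistical model.
```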

Relevance:

30.00%

Publisher:

Abstract:

Logistics involves planning, managing, and organizing the flows of goods from the point of origin to the point of destination in order to meet some requirements. Logistics and transportation aspects are very important and represent a relevant cost for producing and shipping companies, but also for public administrations and private citizens. The optimization of resources and the improvement in the organization of operations are crucial for all branches of logistics, from operations management to transportation. As this work shows, optimization techniques, models, and algorithms are important methods for solving the new and increasingly complex problems arising in different segments of logistics. Many operations management and transportation problems belong to the class of optimization problems called Vehicle Routing Problems (VRPs). In this work, we consider several real-world deterministic and stochastic problems that are included in the wide class of VRPs, and we solve them by means of exact and heuristic methods. We treat three classes of real-world routing and logistics problems. First, we deal with one of the most important tactical problems arising in the management of bike sharing systems, the Bike sharing Rebalancing Problem (BRP). Second, we propose models and algorithms for real-world earthwork optimization problems. Third, we describe the 3D printing (3DP) process and highlight several optimization issues in 3DP; among these, we define the problem related to tool path definition in the 3DP process, the 3D Routing Problem (3DRP), which is a generalization of the arc routing problem. We present an ILP model and several heuristic algorithms to solve the 3DRP.
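
As a generic illustration of the heuristic side of this problem class (not one of the thesis's algorithms), here is a minimal nearest-neighbour construction heuristic for a capacitated VRP on invented data:

```python
# Minimal nearest-neighbour construction heuristic for a capacitated VRP.
# Purely illustrative data; the thesis uses exact (ILP) and more advanced
# heuristic methods for its BRP, earthwork, and 3DRP variants.
import math

depot = (0.0, 0.0)
customers = {1: (2, 1), 2: (-1, 3), 3: (4, -2), 4: (1, -4), 5: (-3, -1)}
demand = {1: 3, 2: 4, 3: 2, 4: 5, 5: 3}
capacity = 8

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

routes, unserved = [], set(customers)
while unserved:
    load, pos, route = 0, depot, []
    while True:
        feasible = [c for c in unserved if load + demand[c] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c]))  # greedy choice
        route.append(nxt)
        load += demand[nxt]
        pos = customers[nxt]
        unserved.remove(nxt)
    routes.append(route)

print(routes)  # one list of customers per vehicle route
```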

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a process-based modelling approach to quantify carbon uptake by lichens and bryophytes at the global scale. Based on the modelled carbon uptake, potential global rates of nitrogen fixation, phosphorus uptake, and chemical weathering by the organisms are estimated. In this way, the significance of lichens and bryophytes for global biogeochemical cycles can be assessed. The model uses gridded climate data and key properties of the habitat (e.g. disturbance intervals) to predict the processes which control net carbon uptake, namely photosynthesis, respiration, water uptake, and evaporation. It relies on equations used in many dynamical vegetation models, which are combined with concepts specific to lichens and bryophytes, such as poikilohydry or the effect of water content on CO2 diffusivity. To incorporate the great functional variation of lichens and bryophytes at the global scale, the model parameters are characterised by broad ranges of possible values instead of a single, globally uniform value. The predicted terrestrial net uptake of 0.34 to 3.3 Gt/yr of carbon and the global patterns of productivity are in accordance with empirically derived estimates. Based on the simulated estimates of net carbon uptake, further impacts of lichens and bryophytes on biogeochemical cycles are quantified at the global scale, with a focus on three processes: nitrogen fixation, phosphorus uptake, and chemical weathering. The presented estimates take the form of potential rates, meaning that the amount of nitrogen and phosphorus needed by the organisms to build up biomass is quantified, also accounting for resorption and leaching of nutrients. Subsequently, the potential phosphorus uptake on bare ground is used to estimate chemical weathering by the organisms, assuming that they release weathering agents to obtain phosphorus. The predicted requirement ranges from 3.5 to 34 Tg/yr for nitrogen and from 0.46 to 4.6 Tg/yr for phosphorus. Estimates of chemical weathering are between 0.058 and 1.1 km³/yr of rock. These values have a realistic order of magnitude and support the notion that lichens and bryophytes have the potential to play an important role in global biogeochemical cycles.
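
A toy sketch of the central balance (not the thesis's equations): net uptake as photosynthesis minus respiration, with an invented water-saturation factor standing in for the effects of poikilohydry and of water content on CO2 diffusivity.

```python
# Toy net-carbon-balance sketch: net uptake = photosynthesis - respiration,
# modulated by thallus water content. All functional forms and parameter
# values are invented placeholders, not the thesis's model.
import numpy as np

def co2_diffusivity_factor(water_saturation):
    # Too dry: metabolically inactive. Too wet: CO2 diffusion is suppressed.
    activity = np.clip(water_saturation / 0.3, 0.0, 1.0)
    diffusion = 1.0 - 0.8 * np.clip(water_saturation - 0.7, 0.0, 0.3) / 0.3
    return activity * diffusion

def net_uptake(light, temp_c, water_saturation,
               p_max=1.0, q10=2.0, r_ref=0.1):
    gpp = p_max * (light / (light + 100.0))            # light saturation curve
    resp = r_ref * q10 ** ((temp_c - 10.0) / 10.0)     # Q10 respiration
    return co2_diffusivity_factor(water_saturation) * gpp - resp

for w in (0.1, 0.5, 0.9):
    print(f"saturation {w:.1f}: net uptake {net_uptake(500.0, 15.0, w):+.3f}")
```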

Relevance:

30.00%

Publisher:

Abstract:

In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These have contributed to a new understanding of the processes that take place at the molecular level during the nucleation and growth of crystals. The effects of such pre-nucleation structures on the process of biomineralisation are not yet sufficiently understood. The mechanisms by which biomolecular modifiers, such as peptides, may interact with pre-nucleation structures and thereby influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analysing the formation of pre-nucleation structures in the presence of modifiers. The present work describes an approach to analysing the interaction of peptides with the dissolved constituents of the emerging crystals by means of molecular dynamics simulations.

To enable informative simulations, the quality of existing force fields with respect to the description of oligoglutamates interacting with calcium ions in aqueous solution was examined in a first step. Large discrepancies between established force fields became apparent, and none of the examined force fields provided a realistic description of the ion pairing of these complex ions. A strategy for optimising existing biomolecular force fields in this respect was therefore developed. Relatively small changes to the parameters governing the ion-peptide van der Waals interactions were sufficient to obtain a reliable model for the system under study.

Comprehensive sampling of the phase space of these systems poses a particular challenge because of the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution. The method of biasing potential replica exchange molecular dynamics simulations was therefore tuned for the sampling of oligoglutamates, and peptides of different chain lengths were simulated in the presence of calcium ions. Using sketch-map analysis, numerous stable ion-peptide complexes that could influence the formation of pre-nucleation structures were identified in the simulations. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions, which resemble some of the calcium-calcium distances in those phases of calcium oxalate crystals grown in the presence of oligoglutamates. The analogy between the calcium-ion distances in dissolved ion-peptide complexes and in calcium oxalate crystals could point to the significance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
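
For orientation, the ion-peptide van der Waals term that such force-field adjustments typically target has the standard Lennard-Jones form; the scaling below is a generic illustration, not the parameterisation developed in this work.

```latex
\[
  V_{\mathrm{LJ}}(r) \;=\; 4\,\varepsilon_{ij}
  \left[ \left( \frac{\sigma_{ij}}{r} \right)^{12}
       - \left( \frac{\sigma_{ij}}{r} \right)^{6} \right],
\qquad
  \varepsilon_{ij} \;\to\; \lambda\,\varepsilon_{ij},
\]
% Here i denotes the Ca^{2+} ion and j a peptide atom type; a modest scaling
% factor lambda (or a shift of sigma_ij) is tuned against reference data on
% ion pairing.
```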

Relevance:

30.00%

Publisher:

Abstract:

Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities in order to approve the use of such models in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, help the chemist to better understand patterns and regularities and to relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
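
The comparison between the two validation schemes can be illustrated generically; this sketch uses scikit-learn on synthetic data as a stand-in for the paper's datasets, descriptors, and workflow.

```python
# Generic illustration of k-fold cross-validation vs. external test set
# validation for a (Q)SAR-style classifier. Synthetic features stand in for
# chemical descriptors; this is not the paper's workflow or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=30, random_state=42)
model = RandomForestClassifier(random_state=42)

# k-fold cross-validation: every compound is used for testing exactly once.
cv_scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# External test set validation: a single held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=42)
model.fit(X_tr, y_tr)
print(f"external test set accuracy: {model.score(X_te, y_te):.3f}")
```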

Relevance:

30.00%

Publisher:

Abstract:

This dissertation consists of three self-contained papers that are related to two main topics. In particular, the first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In the chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the costs arising from a potential labor market reform from a government point of view. To analyze the various effects of changes to the unemployment benefits system, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In the chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. To perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, where firms enter and exit markets, and then extend our analysis to the open-economy case.

In the chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine the effects of aggregate shocks on the distribution of unemployment rates in OECD member countries.

In all three chapters we model systems that behave randomly and operate on stochastic processes. We therefore exploit stochastic calculus, which establishes clear methodological links between the chapters.

Relevance:

30.00%

Publisher:

Abstract:

The first chapter of this work aims to provide a brief overview of the history of our Universe, in the context of string theory and considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate for describing the theory of inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ, with the purpose of studying in detail deviations of the cosmological observables, which can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.
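
For reference, the standard slow-roll quantities that link such a scalar potential to the observables quoted here (spectral index and tensor modes) are, in reduced Planck units:

```latex
\[
  \epsilon \;=\; \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^{2},
\qquad
  \eta \;=\; M_P^2\,\frac{V''}{V},
\]
\[
  n_s \;\simeq\; 1 - 6\epsilon + 2\eta,
\qquad
  r \;\simeq\; 16\,\epsilon ,
\]
% evaluated at horizon exit; slow roll requires epsilon << 1 and |eta| << 1.
```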

Relevance:

30.00%

Publisher:

Abstract:

Cross-cultural comparisons may increase our understanding of different models of substance use treatment and help identify consistent associations between patients' characteristics, treatment conditions, and outcomes.

Relevance:

30.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
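
As a minimal sketch of the two generic corrections named here (with invented signals and parameters, not the study's actual methods): undo a first-order sensor lag, then estimate and remove the transport delay via cross-correlation.

```python
# Generic transient-data processing sketch: invert a first-order sensor lag,
# then align a transport-delayed signal using the cross-correlation peak.
import numpy as np

dt = 0.01                                        # 100 Hz sampling
t = np.arange(0, 5, dt)
true_sig = np.where(t > 1.0, 1.0, 0.0)           # step event at the engine

# Simulated measurement: 0.2 s transport delay plus first-order lag (tau = 0.3 s).
delay_n = int(0.2 / dt)
tau = 0.3
delayed = np.concatenate([np.zeros(delay_n), true_sig[:-delay_n]])
meas = np.zeros_like(true_sig)
for i in range(1, len(t)):                       # discrete first-order lag filter
    meas[i] = meas[i-1] + (dt / tau) * (delayed[i] - meas[i-1])

# 1) Undo the sensor lag: the filter implies x ~ y + tau * dy/dt.
delagged = meas + tau * np.gradient(meas, dt)

# 2) Estimate the transport delay from the cross-correlation peak and align.
xc = np.correlate(delagged - delagged.mean(),
                  true_sig - true_sig.mean(), mode="full")
est_delay = int(np.argmax(xc)) - (len(t) - 1)
aligned = np.roll(delagged, -est_delay)

print(f"estimated transport delay: {est_delay * dt:.2f} s")
```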

Relevance:

30.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and the data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, is proposed and demonstrated. Separate from the issue of extrapolation is that of preventing the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies are used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during the search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and results in lower emissions and better efficiency, is intended to improve rather than replace the manual calibration process.
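
Statistical leverage, the quantity constrained here, comes from the hat matrix of the training data; the sketch below shows on invented data how a candidate point's leverage can be computed and compared with the training distribution to flag extrapolation (a generic illustration, not the paper's implementation).

```python
# Leverage-based extrapolation check: a candidate point whose leverage exceeds
# the range seen in the training data lies outside the trained region.
# Data and threshold choice are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                  # training inputs (e.g. commanded params)
X1 = np.column_stack([np.ones(len(X)), X])     # add intercept column
XtX_inv = np.linalg.inv(X1.T @ X1)

def leverage(points):
    """h(x) = x (X'X)^{-1} x' per row: diagonal of the hat matrix on new data."""
    P1 = np.column_stack([np.ones(len(points)), points])
    return np.einsum("ij,jk,ik->i", P1, XtX_inv, P1)

h_train = leverage(X)
threshold = h_train.max()                      # simplest possible cutoff

candidates = np.array([[0.5, -0.2, 0.1, 0.3],  # inside the training cloud
                       [4.0,  4.0, 4.0, 4.0]]) # far outside it
for h in leverage(candidates):
    status = "ok" if h <= threshold else "extrapolating"
    print(f"h = {h:.3f} ({status})")
```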

Relevance:

30.00%

Publisher:

Abstract:

The development of novel implants in orthopaedic trauma surgery is based on limited datasets from cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner, founded on a large anatomical database consisting of more than 2,000 datasets of bones extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this anatomical database. 48 Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position on the fibula. A biomechanical comparison of the designed implant with the current gold standard in the treatment of distal fibular fractures (a locking 1/3 tubular plate) was then conducted. Finally, a clinical surveillance study was performed to evaluate the grade of implant fit achieved. The results showed that, with a virtual anatomical database, it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to 1/3 tubular plates in 4-point bending tests. The clinical application showed a very high degree of primary implant fit; only in a small minority of cases was further intra-operative implant bending necessary. The goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database was therefore attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high grade of anatomical fit.