980 results for dynamic set


Relevance:

20.00%

Publisher:

Abstract:

The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the 1990s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when only the selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model assumes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and update policies of the populations. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and the synchronisation problems.
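The takeover-time argument above is easy to reproduce numerically. The sketch below is illustrative only (not the thesis code); the population size, neighbourhood definitions and binary tournament rule are assumptions. It grows a single best individual under selection alone and records the fraction of copies per generation: the panmictic population saturates roughly logistically, the ring roughly linearly, and the toroidal grid roughly quadratically in time, which is the distinction the abstract draws.

```python
import random

def takeover(pop_size, neighbors, steps=2000):
    """Growth of a single best individual under selection only (binary tournament,
    synchronous update). Returns the fraction of best copies after each step."""
    best = [0] * pop_size
    best[0] = 1                                        # one copy of the best individual
    curve = []
    for _ in range(steps):
        nxt = best[:]
        for i in range(pop_size):
            a, b = random.choices(neighbors(i), k=2)   # tournament inside the neighbourhood
            nxt[i] = max(best[a], best[b])
        best = nxt
        curve.append(sum(best) / pop_size)
        if curve[-1] == 1.0:
            break
    return curve

N, side = 1024, 32                                     # 32 x 32 torus for the grid case
panmictic = takeover(N, lambda i: range(N))
ring      = takeover(N, lambda i: [(i - 1) % N, i, (i + 1) % N])
grid      = takeover(N, lambda i: [i, (i - side) % N, (i + side) % N,
                                   i - i % side + (i - 1) % side,
                                   i - i % side + (i + 1) % side])
print(len(panmictic), len(ring), len(grid))            # takeover is fastest in the panmictic case
```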

Relevance:

20.00%

Publisher:

Abstract:

Active magnetic bearings have recently been intensively developed because non-contact support has several advantages over conventional bearings. Due to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings are improving. However, additional bearings, known as retainer bearings, still have a vital role in applications of active magnetic bearings. The most crucial moment when the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings due to component or power failure. Without appropriate knowledge of the retainer bearings, there is a chance that a drop-down situation will be fatal to an active-magnetic-bearing-supported rotor system. This study introduces a detailed simulation model of a rotor system in order to describe a rotor drop-down situation on the retainer bearings. The introduced simulation model couples a finite element model, reduced with component mode synthesis, with detailed bearing models. Electrical components and electromechanical forces are not in the focus of this study. The research reviews the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described using two ball bearing models, which include damping and stiffness properties, the oil film, the inertia of the rolling elements, and friction between the races and the rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model is an extension of the first and describes the behavior of a cageless bearing; in it, each ball is described using two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure. Using the verified bearing models, the effects of the parameters of the rotor system on its dynamics during emergency stops are examined. As shown in this study, the misalignment of the retainer bearings has a significant influence on the behavior of the rotor system in a drop-down situation. A stability map of the rotor system as a function of the rotational speed of the rotor and the misalignment of the retainer bearings is presented. In addition, the effects of the parameters of the simulation procedure and of the rotor system on the dynamics of the system are studied.
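As a rough illustration of what the first (ideal-cage) bearing model computes at each time step, the sketch below sums Hertzian point-contact forces over evenly spaced balls for a given rotor displacement. It is not the thesis model (which also includes damping, the oil film, ball inertia and race friction), and the stiffness, clearance and ball count are made-up values.

```python
import numpy as np

def retainer_bearing_force(x, y, n_balls=12, clearance=0.15e-3,
                           k_hertz=1.0e9, exponent=1.5):
    """Quasi-static radial force (N) that a retainer bearing with an ideal cage
    exerts on a dropped rotor displaced by (x, y) metres from the bearing centre.
    Hertzian point contact: F = k * delta^1.5 on each loaded ball."""
    angles = np.arange(n_balls) * 2.0 * np.pi / n_balls      # ideal cage: evenly spaced balls
    delta = x * np.cos(angles) + y * np.sin(angles) - clearance  # ball compressions
    delta = np.clip(delta, 0.0, None)                        # unloaded balls carry no force
    f = k_hertz * delta ** exponent
    return np.sum(f * np.cos(angles)), np.sum(f * np.sin(angles))

# Example: rotor resting 0.25 mm off-centre after a drop
print(retainer_bearing_force(0.25e-3, 0.0))
```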

Relevance:

20.00%

Publisher:

Abstract:

Despite the rapid change in today's business environment there are relatively few studies about corporate renewal. This study aims, for its part, at filling that research gap by studying the concepts of strategy, corporate renewal, innovation and corporate venturing. Its purpose is to enhance our understanding of how established companies operating in a dynamic and global environment can benefit from their corporate venturing activities. The theoretical part approaches the research problem at the corporate and venture levels. Firstly, it focuses on mapping the determinants of strategy and suggests using industry, location, resources, knowledge, structure and culture, market, technology and business model to assess the environment, and using these determinants to optimize the speed and magnitude of change. Secondly, it concludes that the choice of innovation strategy depends on the type and dimensions of innovation, and suggests assessing market, technology and business model, as well as the novelty and complexity related to each of them, in order to choose an optimal context for developing innovations further. Thirdly, it directs attention to the processes through which corporate renewal takes place. On the corporate level these processes are identified as strategy formulation, strategy formation and strategy implementation. On the venture level the renewal processes are identified as learning, leveraging and nesting. The theoretical contribution of this study, the framework of strategic corporate venturing, joins corporate- and venture-level management issues together and concludes that strategy processes and linking processes are the mechanisms through which continuous corporate renewal takes place. The framework of strategic corporate venturing proposed by this study is a new way to illustrate the role of corporate venturing as a purposefully built, different view of a company's business environment. The empirical part extended the framework by enhancing our understanding of the link between corporate renewal and corporate venturing in its real-life environment in three Finnish companies: Metso, Nokia and TeliaSonera. Characterizing the companies' environment with the determinants of strategy identified in this study provided a structured way to analyze their competitive position and the renewal challenges that they are facing. More importantly, the case studies confirmed that a link between corporate renewal and corporate venturing exists, and found that the link is not as straightforward as indicated by the theory. Furthermore, the case studies enhanced the framework by indicating a sequence according to which the processes work. Firstly, the induced strategy processes, strategy formulation and strategy implementation, set the scene for the corporate venturing context and management processes and leave strategy formation to the venture. Only after that can strategies formed by ventures come back to the corporate level, and, if found viable at the corporate level, be formalized through formulation and implementation. With the help of the framework of strategic corporate venturing the link between corporate renewal and corporate venturing can be found and managed. The suggested response to the continuous need for change is continuous renewal, i.e. institutionalizing corporate renewal in the strategy processes of the company. As far as benefiting from venturing is concerned, the answer lies in deliberately managing venturing in a context different from the mainstream businesses and in establishing efficient linking processes to exploit the renewal potential of individual ventures.

Relevance:

20.00%

Publisher:

Abstract:

Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climate studies. As historians of the Earth, "reconstructers" try to decipher the past. Since they have known that continents move, geologists have been trying to retrieve the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are included in a larger ensemble associating continental and oceanic crust: the tectonic plates. Unfortunately, mainly for technical and historical reasons, this idea still does not receive a sufficient echo among the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed all along the scope of our study (ranging from Eocene to Cambrian times), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, the lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolized by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved and follow a consistent geodynamic evolution through time while always forming an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model. This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a barely surmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatics tools specifically devoted to storing, analyzing and managing data and associated attributes spatially referenced on the Earth. By developing the PaleoDyn database in the ArcGIS software we converted the mass of scattered data offered by the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools we both facilitated the reconstruction work (task automation) and enhanced the model by greatly increasing the kinematic control of plate motions thanks to plate velocity models. Based on the 340 properly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate changes. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, all along the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to cluster along a band going from the Northern Pacific through northern South America, the Central Atlantic, Northern Africa and Central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. In the absence of a non-identified methodological bias, we interpreted it as the potential secular influence of the Moon on plate motions. The oceanic realms are the cornerstone of our model and we attached particular interest to reconstructing them in detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolised by synthetic isochrons whose ages are known. We also reconstruct the margins (active or passive), the mid-ocean ridges and the intra-oceanic subductions. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering a better precision than all previously existing ones.
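The kinematic core of such reconstructions is the finite rotation of plates about Euler poles. The snippet below is a minimal sketch of that single operation (Rodrigues' rotation of a point on the unit sphere); the pole location and rotation angle in the example are arbitrary, not values from the PaleoDyn model.

```python
import numpy as np

def rotate_about_euler_pole(lat, lon, pole_lat, pole_lon, angle_deg):
    """Rotate a point (lat, lon, in degrees) about an Euler (rotation) pole by a
    finite angle, returning the reconstructed position in degrees.
    Rodrigues' rotation formula applied to unit vectors."""
    def to_xyz(la, lo):
        la, lo = np.radians(la), np.radians(lo)
        return np.array([np.cos(la) * np.cos(lo), np.cos(la) * np.sin(lo), np.sin(la)])

    p = to_xyz(lat, lon)
    k = to_xyz(pole_lat, pole_lon)                 # rotation axis (Euler pole)
    a = np.radians(angle_deg)
    r = p * np.cos(a) + np.cross(k, p) * np.sin(a) + k * np.dot(k, p) * (1 - np.cos(a))
    return np.degrees(np.arcsin(r[2])), np.degrees(np.arctan2(r[1], r[0]))

# e.g. restore a present-day point by rotating it back about an assumed stage pole
print(rotate_about_euler_pole(10.0, -30.0, pole_lat=60.0, pole_lon=-40.0, angle_deg=-12.5))
```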

Relevance:

20.00%

Publisher:

Abstract:

The three essays constituting this thesis focus on financing and cash management policy. The first essay aims to shed light on why firms issue debt so conservatively. In particular, it examines the effects of shareholder and creditor protection on capital structure choices. It starts by building a contingent claims model in which financing policy results from a trade-off between tax benefits, contracting costs and agency costs. In this setup, controlling shareholders can divert part of the firm's cash flows as private benefits at the expense of minority shareholders. In addition, shareholders as a class can behave strategically at the time of default, leading to deviations from the absolute priority rule. The analysis demonstrates that investor protection is a first-order determinant of firms' financing choices and that conflicts of interest between firm claimholders may help explain the level and cross-sectional variation of observed leverage ratios. The second essay focuses on the practical relevance of agency conflicts. Despite the theoretical development of the literature on agency conflicts and firm policy choices, the magnitude of manager-shareholder conflicts is still an open question. This essay proposes a methodology for quantifying these agency conflicts. To do so, it examines the impact of managerial entrenchment on corporate financing decisions. It builds a dynamic contingent claims model in which managers do not act in the best interest of shareholders, but rather pursue private benefits at the expense of shareholders. Managers have discretion over financing and dividend policies; however, shareholders can remove the manager at a cost. The analysis demonstrates that entrenched managers restructure less frequently and issue less debt than is optimal for shareholders. I take the model to the data and use observed financing choices to provide firm-specific estimates of the degree of managerial entrenchment. Using structural econometrics, I find costs of control challenges of 2-7% on average (0.8-5% at the median). The estimates of the agency costs vary with variables that one expects to determine managerial incentives. In addition, these costs are sufficient to resolve the low- and zero-leverage puzzles and explain the time series of observed leverage ratios. Finally, the analysis shows that governance mechanisms significantly affect the value of control and firms' financing decisions. The third essay is concerned with the time trend in corporate cash holdings documented by Bates, Kahle and Stulz (BKS, 2003). BKS find that firms' cash holdings doubled from 10% to 20% over the 1980 to 2005 period. This essay provides an explanation of this phenomenon by examining the effects of product market competition on firms' cash holdings in the presence of financial constraints. It develops a real options model in which cash holdings may be used to cover unexpected operating losses and avoid inefficient closure. The model generates new predictions relating cash holdings to firm and industry characteristics such as the intensity of competition, cash flow volatility, or financing constraints. The empirical examination shows strong support for the model's predictions. In addition, it shows that the time trend in cash holdings documented by BKS can be at least partly attributed to a competition effect.
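For readers unfamiliar with contingent-claims capital structure models, the toy calculation below shows the basic trade-off they formalize: debt adds a tax shield but raises expected default costs, so firm value peaks at an interior coupon. It is a deliberately stylized sketch with made-up parameters, not the model developed in the first essay (which adds controlling-shareholder diversion and strategic default).

```python
from math import erf, sqrt
import numpy as np

tax_rate, r, default_cost = 0.35, 0.05, 0.25
ebit, sigma = 100.0, 0.30                 # expected cash flow and its relative volatility (assumed)

def firm_value(coupon):
    # probability that realized cash flow falls below the coupon (toy default rule)
    p_default = 0.5 * (1 + erf((coupon / ebit - 1) / (sigma * sqrt(2))))
    tax_shield = tax_rate * coupon / r * (1 - p_default)
    distress = default_cost * (ebit / r) * p_default
    return (1 - tax_rate) * ebit / r + tax_shield - distress

coupons = np.linspace(0, 120, 241)
best = coupons[int(np.argmax([firm_value(c) for c in coupons]))]
print(f"value-maximising coupon ≈ {best:.1f} (an interior optimum, not maximum debt)")
```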

Relevance:

20.00%

Publisher:

Abstract:

Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex, but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers towards finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for the purposes of motivating and benchmarking. The earlier research in the field of R&D performance analysis has generally focused either on the activities and the relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of the essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement that can be found in the literature are to some extent adaptable also to the development of measurement systems and the selection of measures in R&D activities. However, it is necessary to emphasize the special aspects related to the measurement of R&D performance in a way that makes the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches for R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Second, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their decision making on R&D with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features in that it provides guidelines through novel types of approaches. The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating dealing with the versatility and challenges of R&D performance analysis, as well as with the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations. Throughout the research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as the practical point of view.

Relevance:

20.00%

Publisher:

Abstract:

The improvement of the dynamics of flexible manipulators such as log cranes often requires advanced control methods. This thesis discusses the vibration problems in the cranes used in commercial forestry machines. Two control methods, adaptive filtering and semi-active damping, are presented. The adaptive filter uses a fraction of the lowest natural frequency of the crane as its filtering frequency. The payload estimation algorithm, the filtering of the control signal and the algorithm for calculating the lowest natural frequency of the crane are presented. The semi-active damping method is based on pressure feedback: the pressure vibration, scaled with a suitable gain, is added to the control signal of the valve of the lift cylinder to suppress vibrations. The adaptive filter cuts off high-frequency impulses coming from the operator, and semi-active damping suppresses the crane's oscillation, which is often caused by an external disturbance. In field tests performed on the crane, a correctly tuned (25% tuning) adaptive filter reduced pressure vibration by 14-17% and semi-active damping correspondingly by 21-43%. Applying these methods requires auxiliary transducers, installed at specific points in the crane, and electronically controlled directional control valves.
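A minimal sketch of the two ideas, under assumed dynamics and gains (the equivalent-stiffness model, the tuning factor applied to the estimated natural frequency, and the pressure-feedback sign and gain are all placeholders, not the thesis values): the operator command is low-pass filtered at a fraction of the estimated lowest natural frequency, and a scaled pressure deviation is added to the valve command.

```python
import numpy as np

def natural_frequency(payload_kg, k_eff=2.5e5):
    """Rough estimate of the crane's lowest natural frequency (Hz) from an
    equivalent mass-spring model; k_eff and the mass lumping are assumptions."""
    m_eff = payload_kg + 0.3 * 900.0              # payload plus an assumed share of the boom mass
    return np.sqrt(k_eff / m_eff) / (2 * np.pi)

def control_step(u_operator, p_meas, p_ref, state, dt=0.01,
                 tuning=0.25, damping_gain=0.02):
    """One sample of the combined scheme: low-pass the operator command at a
    fraction (`tuning`) of the lowest natural frequency, then add scaled
    pressure feedback to damp the oscillation."""
    f_c = tuning * natural_frequency(state["payload_kg"])
    alpha = dt / (dt + 1.0 / (2 * np.pi * f_c))   # first-order IIR low-pass coefficient
    state["u_filt"] += alpha * (u_operator - state["u_filt"])
    u_damp = damping_gain * (p_meas - p_ref)      # semi-active damping term (sign assumed)
    return state["u_filt"] + u_damp

state = {"payload_kg": 400.0, "u_filt": 0.0}
valve_cmd = control_step(u_operator=0.6, p_meas=11.2e6, p_ref=11.0e6, state=state)
print(valve_cmd)
```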

Relevance:

20.00%

Publisher:

Abstract:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that allows a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and of the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was made. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility of comparing them with measurements. Therefore the only way was to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the directions of the changes, and that the order of magnitude of the timescale of the changes is also as expected. The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very large alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
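The Ledinegg check described above can be sketched as follows: fit the internal pressure-drop characteristic with a third-degree polynomial in the mass flow rate and look for a negative-slope branch. The coefficients below are invented for illustration (and give a monotonic curve, matching the finding that no oscillations occurred in the example case); in the actual model the characteristic would come from the boiler calculation itself.

```python
import numpy as np

m_dot = np.linspace(0.2, 2.0, 200)                     # kg/s, assumed operating range
dp = 0.8 * m_dot**3 - 2.4 * m_dot**2 + 2.6 * m_dot     # bar, made-up cubic characteristic

coeffs = np.polyfit(m_dot, dp, 3)                      # would normally come from the boiler model
slope = np.polyval(np.polyder(coeffs), m_dot)          # d(dp)/d(m_dot) along the range

if np.any(slope < 0):
    print("negative-slope branch found: Ledinegg (static) instability is possible")
else:
    print("dp increases monotonically with flow: no Ledinegg instability in this range")
```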

Relevance:

20.00%

Publisher:

Abstract:

Firms operating in a changing environment need structures and practices that provide flexibility and enable rapid response to changes. Given the challenges they face in attempting to keep up with market needs, they have to continuously improve their processes and products, and develop new products to match market requirements. Success in changing markets depends on the firm's ability to convert knowledge into innovations, and consequently their internal structures and capabilities have an important role in innovation activities. According to the dynamic capability view of the firm, firms thus need dynamic capabilities in the form of assets, processes and structures that enable strategic flexibility and support entrepreneurial opportunity sensing and exploitation. Dynamic capabilities are also needed in conditions of rapid change in the operating environment, and in activities such as new product development and expansion to new markets. Despite the growing interest in these issues and the theoretical developments in the field of strategy research, there are still very few empirical studies, and large-scale empirical studies in particular, that provide evidence that firms' dynamic capabilities are reflected in performance differences. This thesis represents an attempt to advance the research by providing empirical evidence of the linkages between the firm's dynamic capabilities and performance in internationalization and innovation activities. The aim is thus to increase knowledge and enhance understanding of the organizational factors that explain interfirm performance differences. The study is in two parts: the first part is the introduction, and the second part comprises five research publications covering the theoretical foundations of the dynamic capability view and the subsequent empirical analyses. Quantitative research methodology is used throughout. The thesis contributes to the literature in several ways. While a lot of prior research on dynamic capabilities is conceptual in nature, or conducted through case studies, this thesis introduces empirical measures for assessing the different aspects, and uses large-scale sampling to investigate the relationships between them and performance indicators. The dynamic capability view is further developed by integrating theoretical frameworks and research traditions from several disciplines. The results of the study provide support for the basic tenets of the dynamic capability view. The empirical findings demonstrate that the firm's ability to renew its knowledge base and other intangible assets, its proactive, entrepreneurial behavior, and the structures and practices that support operational flexibility are positively related to performance indicators.

Relevance:

20.00%

Publisher:

Abstract:

This research deals with the dynamic modeling of gas-lubricated tilting pad journal bearings provided with spring-supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed which can be used for the time-dependent simulation of a special type of tilting pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed geometry variations in time). On the basis of the literature, different transformations have been used in the model to simplify the calculation. The numerical simulation is used to solve the non-stationary behaviour of the gas film. The simulation results were compared with results from the literature for a stationary case (steady running conditions) and were found to be equal. In addition, comparisons were made with a number of stationary and non-stationary bearing tests, which were performed at Lappeenranta University of Technology using bearings designed with the simulation program. A study was also made, using numerical simulation and the literature, of the influence of the different bearing parameters on the stability of the bearing. Comparison work was done with the literature on tilting pad gas bearings. This bearing type is rarely used; one literature reference has studied the same bearing type as that used at LUT. A new design of tilting pad gas bearing is introduced. It is based on a stainless steel body and electron beam welding of the bearing parts. It has good operating characteristics and is easier to tune and faster to manufacture than traditional constructions. It is also suitable for large serial production.
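For orientation, a toy version of the governing film equation is sketched below: the one-dimensional transient compressible Reynolds equation in non-dimensional form, time-marched explicitly while the pad geometry is varied externally. The bearing number, squeeze number, film-shape function and boundary conditions are all placeholder assumptions, not the thesis model or its parameter values.

```python
import numpy as np

# sigma * d(P H)/dt = d/dX( P H^3 dP/dX ) - Lam * d(P H)/dX   (non-dimensional, isothermal)
nx = 101
dx, dt, n_steps = 1.0 / (nx - 1), 1.0e-4, 100_000
Lam, sigma = 1.5, 10.0                            # assumed bearing and squeeze numbers
X = np.linspace(0.0, 1.0, nx)
P = np.ones(nx)                                   # ambient (non-dimensional) pressure at t = 0

def film_thickness(t):
    tilt = 0.05 * np.sin(1.0 * t)                 # externally imposed pad-tilt variation
    return 1.0 + 0.2 * (1.0 - X) + 0.3 * (0.5 - X) * tilt   # assumed converging wedge

for step in range(n_steps):
    H = film_thickness(step * dt)
    q = P * H**3 * np.gradient(P, dx)                        # Poiseuille term
    rhs = np.gradient(q, dx) - Lam * np.gradient(P * H, dx)  # plus wedge/convection term
    P[1:-1] += dt / sigma * rhs[1:-1] / H[1:-1]              # d(PH)/dt ≈ H dP/dt for slow H
    P[0] = P[-1] = 1.0                                       # ambient pressure at the pad edges

print("peak non-dimensional film pressure:", round(float(P.max()), 3))
```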

Relevance:

20.00%

Publisher:

Abstract:

The objective of the thesis is to structure and model the factors that contribute to, and can be used in evaluating, project success. The purpose of this thesis is to enhance the understanding of three research topics: the goal-setting process, success evaluation and the decision-making process are studied in the context of a project, a business unit and its business environment. To achieve the objective three research questions are posed: 1) how to set measurable project goals, 2) how to evaluate project success and 3) how to affect project success with managerial decisions. The main theoretical contribution comes from deriving a synthesis of these research topics, which have mostly been discussed apart from each other in prior research. The research strategy of the study has features from at least the constructive, nomothetical and decision-oriented research approaches. This strategy guides the theoretical and empirical parts of the study. Relevant concepts and a framework are composed on the basis of prior research contributions within the problem area. A literature review is used to derive constructs of factors within the framework; they are related to project goal setting, success evaluation and decision making. On this basis, the case study method is applied to complement the framework. The empirical data includes one product development program, three construction projects, as well as one organization development, one hardware/software and one marketing project in their contexts. In two of the case studies the analytic hierarchy process is used to formulate a hierarchical model that returns a numerical evaluation of the degree of project success. It has its origin in the solution idea, which in turn has its foundation in the notion of project success. The achieved results are condensed in the form of a process model that integrates project goal setting, success evaluation and decision making. The process of project goal setting is analysed as part of an open system that includes a project, the business unit and its competitive environment. Four main constructs of factors are suggested. First, the project characteristics and requirements are clarified. The second and third constructs comprise the components of client/market segment attractiveness and the sources of competitive advantage; together they determine the competitive position of a business unit. Fourth, the relevant goals and the situation of a business unit are clarified to stress their contribution to the project goals. Empirical evidence is gained on the exploitation of increased knowledge and on the reaction to changes in the business environment during a project to ensure project success. The relevance of a successful project to a company or a business unit tends to increase the higher the reference level of the project goals is set. However, normal performance, or sometimes performance below this normal level, is intentionally accepted. Success measures make project success quantifiable. There are result-oriented, process-oriented and resource-oriented success measures. The study also links result measurements to enablers that portray the key processes. The success measures can be classified into success domains determining the areas on which success is assessed. Empirical evidence is gained on six success domains: strategy, project implementation, product, stakeholder relationships, learning situation and company functions. However, some project goals, like safety, can be assessed using success measures that belong to two success domains. For example, a safety index is used for assessing occupational safety during a project, which is related to project implementation, while product safety requirements are connected to the product characteristics and thus to the product-related success domain. Strategic success measures can be used to weave the project phases together, and empirical evidence on their static nature is gained. In order-oriented projects the project phases are often contractually divided between different suppliers or contractors; a project from the supplier's perspective can represent only a part of the "whole project" viewed from the client's perspective. Therefore static success measures are mostly used within the contractually agreed project scope and duration. Proof is also acquired of the dynamic use of operational success measures: they help to focus on the key issues during each project phase. Furthermore, it is shown that the original success domains and success measures, their weights and target values can change dynamically. New success measures can replace the old ones to correspond better with the emphasis of the particular project phase. This adjustment concentrates on the key decision milestones. As a conclusion, the study suggests a combination of static and dynamic success measures. Their linkage to an incentive system can make project management proactive, enable fast feedback and enhance the motivation of the personnel. It is argued that the sequence of effective decisions is closely linked to the dynamic control of project success. According to the definition used, effective decisions aim at adequate decision quality and decision implementation. The findings support the view that project managers construct and use a chain of key decision milestones to evaluate and affect success during a project. These milestones can be seen as part of the business processes. Different managers prioritise the key decision milestones to a varying degree; divergent managerial perspectives, power, responsibilities and involvement during a project offer some explanation for this. Finally, the study introduces the use of Hard Gate and Soft Gate decision milestones. The managers may use the former milestones to provide decision support on result measurements and ad hoc critical conditions; in the latter milestones they may make an intermediate success evaluation also on the basis of other types of success measures, like process and resource measures.
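Since two of the case studies use the analytic hierarchy process, the sketch below shows how such a hierarchical model can return one number for the degree of project success: pairwise comparisons of success domains yield priority weights (principal eigenvector), which then weight the domain scores. The domains are taken from the abstract, but the comparison matrix and scores are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (principal right eigenvector, as in the analytic hierarchy process)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

domains = ["strategy", "project implementation", "product", "stakeholder relationships"]
pairwise = [[1,   3,   2,   5],     # hypothetical judgements on Saaty's 1-9 scale
            [1/3, 1,   1/2, 2],
            [1/2, 2,   1,   3],
            [1/5, 1/2, 1/3, 1]]
scores = np.array([0.8, 0.6, 0.7, 0.9])   # hypothetical per-domain success scores, 0..1

w = ahp_weights(pairwise)
print(dict(zip(domains, np.round(w, 3))))
print("degree of project success:", round(float(w @ scores), 3))
```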

Relevance:

20.00%

Publisher:

Abstract:

The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favourable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In a normal mode of operation, the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself, so a normal mode of operation usually requires the supply of supplementary heat, increasing the overall process operating cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behaviour are highly important steps in configuring a proper device for industrial applications. The reverse flow reactor (RFR), a forced unsteady-state reactor, corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. As its main disadvantage, besides its advantages, the RFR presents the 'wash out' phenomenon, i.e. emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR that is not affected by uncontrollable emissions of unconverted reactants. In this respect the reactor network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the feeding position of the reactants. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behaviour with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behaviour of the RN may give the information necessary to overcome all the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behaviour characterized by a set of spatio-temporal patterns, chaotic changes in concentration and travelling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of the contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Paying attention to the above-mentioned aspects is important when higher activity, even at low feeding temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the reactor pseudo- or steady-state performance (regarding conversion, selectivity and thermal behaviour) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge about the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behaviour is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort; usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices which are capable of maintaining auto-thermal behaviour in the case of low exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of the suitable catalyst types for the process, the mathematical approach for modelling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach has been used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation allowed a simplification of the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal approach was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving a sustained auto-thermal behaviour in the case of the low exothermic reaction of SCR of NOx with ammonia and low-temperature gas feeding. Besides the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, the inlet flow rate and the initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and an optimisation of the operation, but also because the switching time is the main operating parameter, and an appropriate choice of this parameter enables the fulfilment of the process constraints. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in usual operation and from the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics by taking into account perspectives that had not yet been analysed. The experimental investigation of the RN revealed a good agreement between the data obtained by model simulation and those obtained experimentally.
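To make the heat-trapping idea concrete, the toy simulation below marches a quasi-steady gas phase through a one-dimensional bed with a slow solid-phase energy balance and reverses the flow direction periodically. It is only a sketch of the mechanism, not the thesis model (no NH3 adsorption, no rigorous kinetics), and every parameter value is an assumption chosen to make the effect visible.

```python
import numpy as np

n, dz = 100, 0.01                    # cells, cell length [m]
dt, t_end = 0.5, 3600.0              # solid-phase time step and simulated time [s]
switch_period = 300.0                # flow-reversal period [s]
u, tau_heat = 0.5, 30.0              # gas velocity [m/s], gas-solid heat-exchange time constant [s]
T_feed, dT_ad = 400.0, 40.0          # cold feed [K], adiabatic temperature rise [K]
k0, Ea_R = 1.0e7, 9000.0             # assumed Arrhenius pre-factor [1/s] and Ea/R [K]

T = np.full(n, 650.0)                # preheated solid bed [K]
t = 0.0
while t < t_end:
    forward = int(t / switch_period) % 2 == 0
    Ts = T if forward else T[::-1]   # solid profile seen in the current flow direction
    Tg, x, gas = np.empty(n), 1.0, T_feed
    for i in range(n):               # quasi-steady gas march through the bed
        k = k0 * np.exp(-Ea_R / Ts[i])
        dx = x * (1.0 - np.exp(-k * dz / u))          # reactant converted in this cell
        gas = gas + (Ts[i] - gas) * 0.5 + dT_ad * dx  # heat pick-up from solid + reaction heat
        x -= dx
        Tg[i] = gas
    Ts = Ts + (Tg - Ts) * dt / tau_heat               # slow solid-phase energy balance
    T = Ts if forward else Ts[::-1]
    t += dt

print(f"outlet conversion {1.0 - x:.2f}, bed T range {T.min():.0f}-{T.max():.0f} K after 1 h")
```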

Relevance:

20.00%

Publisher:

Abstract:

Belt-drive systems have been, and still are, the most commonly used form of power transmission in various applications of different scale and use. The peculiar features of the dynamics of belt-drives include highly nonlinear deformation, large rigid body motion, dynamic contact through a dry friction interface between the belt and the pulleys with sticking and slipping zones, cyclic tension of the belt during operation, and creeping of the belt against the pulleys. The life of the belt-drive is critically related to these features, and therefore a model that can be used to study the correlations between the initial values and the responses of the belt-drive is a valuable source of information for the belt-drive development process. Traditionally, finite element models of belt-drives consist of a large number of elements, which may lead to computational inefficiency. In this research, the beneficial features of the absolute nodal coordinate formulation are utilized in the modeling of belt-drives in order to fulfil the following requirements for the successful and efficient analysis of belt-drive systems: the exact modeling of the rigid body inertia during an arbitrary rigid body motion, the consideration of the effect of shear deformation, the exact description of the highly nonlinear deformations, and a simple and realistic description of the contact. Distributed contact forces and high-order beam and plate elements based on the absolute nodal coordinate formulation are applied to the modeling of belt-drives in two- and three-dimensional cases. According to the numerical results, a realistic behaviour of the belt-drives can be obtained with a significantly smaller number of elements and degrees of freedom in comparison to the previously published finite element models of belt-drives. The results of the examples demonstrate the functionality and suitability of the absolute nodal coordinate formulation for the computationally efficient and realistic modeling of belt-drives. This study also introduces an approach to avoid the problems related to the use of the continuum mechanics approach in the definition of the elastic forces in the absolute nodal coordinate formulation. This approach is applied to a new computationally efficient two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation. The proposed beam element uses a linear displacement field neglecting higher-order terms and a reduced number of nodal coordinates, which leads to fewer degrees of freedom in a finite element.
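The essential idea of the absolute nodal coordinate formulation, interpolating global positions and position gradients directly so that rigid-body motion is represented exactly, can be shown with the simplest planar, gradient-deficient (cable-type) element. The thesis itself uses higher-order shear-deformable beam and plate elements, so the sketch below is only a minimal illustration of the formulation, not of those elements.

```python
import numpy as np

def ancf_position(e, xi, L):
    """Global position on a planar gradient-deficient ANCF (cable) element.

    e : 8 nodal coordinates [r1, dr1/dx, r2, dr2/dx], each a 2-vector, stacked.
    xi: dimensionless coordinate along the element, 0..1.
    L : element length in the reference configuration.
    Cubic interpolation of the *global* position; rigid-body motion is exact."""
    s1 = 1 - 3 * xi**2 + 2 * xi**3
    s2 = L * (xi - 2 * xi**2 + xi**3)
    s3 = 3 * xi**2 - 2 * xi**3
    s4 = L * (-xi**2 + xi**3)
    S = np.kron(np.array([s1, s2, s3, s4]), np.eye(2))   # 2 x 8 shape-function matrix
    return S @ e

# A straight element of length 1 m, rigidly translated and rotated by 30 degrees:
L, th = 1.0, np.radians(30)
r1 = np.array([2.0, 1.0])
d = np.array([np.cos(th), np.sin(th)])                   # global slope (gradient) vector
e = np.hstack([r1, d, r1 + L * d, d])
print(ancf_position(e, 0.5, L))                          # midpoint: exactly r1 + 0.5 * L * d
```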

Relevance:

20.00%

Publisher:

Abstract:

Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent the endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. The patient-based simulation consisted of a left ventricle with a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and of left ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
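A minimal sketch of the surface representation described above (not the authors' implementation): control-point radii of the endocardial wall, parameterised by circumferential angle and a long-axis coordinate, are fitted with a bicubic spline and then evaluated anywhere on the smooth surface. The control-point values and the parameterisation are assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

theta = np.linspace(0, 2 * np.pi, 9)          # circumferential control points
z = np.linspace(0.0, 8.0, 6)                  # long-axis control points [cm], apex to base
rng = np.random.default_rng(0)
radii = 2.0 + 0.4 * np.sin(z / 8.0 * np.pi)[None, :] + 0.05 * rng.standard_normal((9, 6))

endo = RectBivariateSpline(theta, z, radii, kx=3, ky=3)   # bicubic fit, radius = f(theta, z)

# Evaluate the smooth surface on a fine grid and convert to Cartesian points:
th_f, z_f = np.linspace(0, 2 * np.pi, 72), np.linspace(0.0, 8.0, 40)
R = endo(th_f, z_f)                                       # (72, 40) radii
X = R * np.cos(th_f)[:, None]
Y = R * np.sin(th_f)[:, None]
Z = np.broadcast_to(z_f, R.shape)
print(X.shape, Y.shape, Z.shape)
```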

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to evaluate the effects of gibberellic acid, the biostimulant Crop Set and girdling on berry size increase and yield of 'Thompson Seedless' grape in the São Francisco Valley. The experiment was carried out during two production cycles (2001 and 2002) at the Bebedouro Experimental Field, belonging to Embrapa Semi-Árido, in Petrolina-PE, Brazil. A randomized block design was used, with 12 treatments and 3 replications. The treatments corresponded to the application of gibberellic acid at five stages of vine development, at doses of 10 + 15 + 15 + 50 + 50 mg L-1, of the biostimulant Crop Set at two doses of 0.1 and 0.2%, and of trunk girdling, applied alone and in combination. The combined treatments of girdling + gibberellic acid and girdling + gibberellic acid + Crop Set stood out as those producing the greatest cluster and berry weight and size, with significant differences relative to the control. However, the girdling wound did not heal completely, causing the death of some plants, so caution is recommended when performing it. Although no significant effect of the treatments on yield was observed, an increase of 63% was noted for the girdling + gibberellic acid treatment relative to the 2001 production cycle.