73 results for Ecological dynamic


Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is the development of a multibody dynamic model matching the observed movements of the lower limb of a skier performing the skating technique in cross-country style. During the construction of this model, the equation of motion was formulated using the Euler-Lagrange approach with multipliers, applied to a multibody system in three dimensions. The lower limb of the skate skier and the ski were described by three bodies, one representing the ski and two representing the natural movements of the skier's leg. The resulting system has 13 joint constraints due to the interconnection of the bodies and four prescribed kinematic constraints to account for the movements of the leg, leaving the number of degrees of freedom equal to one. The push-off force exerted by the skate skier was taken directly from measurements made on-site in the ski tunnel at the Vuokatti facilities (Finland) and was input into the model as a continuous function. The resulting velocities and movement of the ski, the center of mass of the skier, and the variation of the skating angle were then studied to understand the response of the model to the variation of important parameters of the skating technique. This allowed a comparison of the model results with the real movement of the skier. Further developments can be made to this model to better approximate the real movement of the leg, for example by changing the constraints to include the behavior of the real leg joints and muscle actuation. As mentioned in the introduction of this thesis, a multibody dynamic model can be used to provide relevant information to ski designers and to obtain optimized values of the given variables, which athletes can use to improve their performance.
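A sketch of the formulation assumed above (the standard Euler-Lagrange equations of motion with Lagrange multipliers; the symbols are generic and not taken from the thesis):

$$
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \boldsymbol{\Phi}_{\mathbf{q}}^{T}\boldsymbol{\lambda} = \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t), \qquad \boldsymbol{\Phi}(\mathbf{q},t) = \mathbf{0}
$$

Here M is the generalized mass matrix, q the generalized coordinates, Φ the constraint vector with Jacobian Φ_q, λ the Lagrange multipliers, and Q the generalized forces. If each of the three bodies is described by six independent coordinates, the 13 joint constraints and 4 prescribed kinematic constraints leave 3 × 6 − 17 = 1 degree of freedom, consistent with the count stated above.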

Relevance:

20.00%

Publisher:

Abstract:

Family businesses are among the longest-lived and most prevalent institutions in the world, and they are an important source of economic development and growth. Ownership is a key to the business life of the firm and also one of the main keys in the definition of a family business. There is only little research on portfolio entrepreneurship or portfolio businesses within the family business context. The absence of empirical evidence on the long-term relationship between family ownership and portfolio development presents an important gap in the family business literature. This study deals with ownership changes in family businesses and the development of portfolios in the family business, and it is positioned within the conversation on family business, growth, ownership, management, and strategy. The study contributes to and expands the existing body of theory on family business and ownership. From a theoretical point of view, it combines and integrates insights from the fields of portfolio entrepreneurship, ownership, and family business. This cross-fertilization produces interesting empirical and theoretical findings that can constitute a basis for solid contributions to the understanding of ownership dynamics and portfolio entrepreneurship in family firms. The research strategy chosen for this study represents longitudinal, qualitative, hermeneutic, and deductive approaches. The empirical part of the study uses a case study approach with an embedded design, that is, multiple levels of analysis within a single study. The study consists of two cases and begins with a pilot case which forms a preunderstanding of the phenomenon. The pilot case develops the methodological approach to be built on in the main case, and the main case deepens the understanding of the phenomenon. This study develops and tests a research method for family business portfolio development, focusing on investigating how ownership changes influence family business structures over time. The study reveals the linkages between dimensions of ownership and how they give rise to portfolio business development within the context of the family business. The empirical results suggest that family business ownership is dynamic and that owners use ownership as a tool for creating business portfolios.

Relevance:

20.00%

Publisher:

Abstract:

Preparation of optically active compounds is of high importance in modern medicinal chemistry. Despite recent advances in the field of asymmetric synthesis, resolution of racemates still remains the most utilized way to prepare single enantiomers on an industrial scale due to its cost-efficiency and simplicity. Enzymatic kinetic resolution (KR) of racemates is a classical method for the separation of enantiomers. One of its drawbacks is that the yield of the target enantiomer is limited to 50%. Dynamic kinetic resolution (DKR) allows yields of up to 100% to be reached through in situ racemization of the less reactive enantiomer. In the first part of this thesis, a number of half-sandwich ruthenium complexes were prepared and evaluated as catalysts for the racemization of optically active secondary alcohols. A leading catalyst, Bn5CpRu(CO)2Cl, was identified. The discovered catalyst was extensively characterized by applying it to the DKR of a broad range of secondary alcohols over a wide range of reaction loadings (1 mmol – 1 mol). A cost-efficient, chromatography-free procedure for the preparation of this catalyst was developed. Further, detailed kinetic and mechanistic studies of the racemization reactions were performed. Comparison of racemization rates in the presence of the Bn5CpRu(CO)2Cl and Ph5CpRu(CO)2Cl catalysts reveals that the performance of the catalytic system can be adjusted by matching the electronic properties of the catalysts and the substrates. Moreover, a dependence of the rate-limiting step on the electronic properties of the reagents was observed, and important conclusions about the reaction mechanism were drawn. Finally, an alternative approach to the DKR of amines based on spatially separated vessels was addressed. This procedure allows the combination of a thermolabile enzyme with racemization catalysts that are active only at high temperatures.
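A minimal kinetic sketch of the 50% versus 100% yield argument above (a toy first-order model with illustrative rate constants, not data from the thesis):

```python
# Toy first-order kinetics: the enzyme converts the R enantiomer fast (kR) and
# the S enantiomer slowly (kS). With in situ racemization (krac), S is
# continuously converted back to R, so the yield of the desired product
# approaches 100% instead of the 50% ceiling of a plain kinetic resolution.
def simulate(krac, kR=1.0, kS=0.01, dt=1e-3, t_end=20.0):
    R = S = 0.5   # racemic substrate, total = 1.0
    P = 0.0       # desired product, formed from R only
    t = 0.0
    while t < t_end:
        rac = krac * (S - R)          # net S -> R racemization flux
        dR = rac - kR * R
        dS = -rac - kS * S
        dP = kR * R
        R += dR * dt; S += dS * dt; P += dP * dt
        t += dt
    return P

print(f"KR  (no racemization):  yield = {simulate(krac=0.0):.2f}")
print(f"DKR (fast racemization): yield = {simulate(krac=10.0):.2f}")
```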

Relevance:

20.00%

Publisher:

Abstract:

Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. In addition, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
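A minimal sketch of the kind of stress recovery such an approach enables (illustrative array sizes and random placeholder data, not the thesis implementation): with modal superposition, the stress history of a detail can be recovered as a product of pre-computed modal stresses and the modal coordinates produced by the dynamic simulation.

```python
import numpy as np

# With a floating frame of reference and modal superposition, the elastic
# deformation of a flexible body is q_f(t) = Phi @ eta(t). If unit-mode
# stresses at the detail's nodes have been pre-computed from the FE model,
# the stress history of the detail is a cheap matrix product instead of a
# full FE solution at every time step.
n_modes, n_steps, n_stress = 8, 1000, 3          # assumed sizes
eta = np.random.rand(n_steps, n_modes)           # modal coordinates from the dynamic simulation (placeholder)
S_detail = np.random.rand(n_modes, n_stress)     # pre-computed modal stresses at the detail, e.g. sx, sy, txy (placeholder)

stress_history = eta @ S_detail                  # (n_steps, n_stress) stress history for fatigue assessment
print(stress_history.shape)
```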

Relevance:

20.00%

Publisher:

Abstract:

Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated into a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
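A conceptual toy of dynamically clustered, distributed monitoring (a hypothetical 4x4 mesh with an injected hotspot; this is not the thesis's SystemC environment, only an illustration of the idea that clusters form around congested routers without a centralized controller):

```python
# Each router in a 4x4 mesh keeps a local load counter; a cluster is formed
# dynamically around any router whose load exceeds a threshold, and the
# cluster head aggregates the loads of its neighbours so that routing or
# mapping decisions can steer traffic away from the congested region.
SIZE, THRESHOLD = 4, 8
load = {(x, y): 0 for x in range(SIZE) for y in range(SIZE)}   # packets queued per router
load[(1, 2)] = 12                                              # injected hotspot for illustration

def neighbours(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) in load]

# Dynamic clustering: every over-threshold router becomes a cluster head
# covering itself and its direct neighbours.
clusters = {head: [head] + neighbours(*head)
            for head, l in load.items() if l > THRESHOLD}

for head, members in clusters.items():
    avg = sum(load[m] for m in members) / len(members)
    print(f"cluster head {head}: members={members}, average load={avg:.1f}")
```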

Relevance:

20.00%

Publisher:

Abstract:

Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific to tumours and can also accumulate in inflammatory lesions as well as normal physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory, and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time-activity curves obtained with a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region which could differentiate between inflammation, tumour, and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time-activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian Mixture Model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we also correlated the dynamic features with clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If the proposed methods are explored further, they can prove useful in reducing false positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
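A minimal sketch of the clustering step described above (synthetic feature values and hypothetical feature names; scikit-learn is used here as a stand-in for whatever implementation the thesis actually uses):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Each row is a voxel, each column a feature derived from its time-activity
# curve (e.g. late-frame slope, area under the curve, time to peak).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([1.0, 0.2, 5.0], 0.1, (100, 3)),   # "tumour-like" voxels (synthetic)
    rng.normal([0.2, 0.1, 2.0], 0.1, (100, 3)),   # "inflammation-like" voxels (synthetic)
])

gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(features)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(gmm_labels[:5], km_labels[:5])
```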

Relevance:

20.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved for dataflow to become more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
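A minimal toy of the dataflow idea discussed above (plain Python queues and firing rules, not RVC-CAL; node names and token values are purely illustrative):

```python
from collections import deque

# Nodes communicate only through FIFO queues, and a node may fire as soon as
# its firing rule (enough input tokens) is satisfied, independently of all
# other nodes.
q_in, q_mid, q_out = deque([1, 2, 3, 4]), deque(), deque()

def scale(factor):                 # node: consumes 1 token, produces 1 token
    if q_in:                       # firing rule: at least one input token
        q_mid.append(q_in.popleft() * factor)
        return True
    return False

def pairwise_sum():                # node: firing rule requires 2 tokens
    if len(q_mid) >= 2:
        q_out.append(q_mid.popleft() + q_mid.popleft())
        return True
    return False

# A trivial dynamic scheduler: keep trying to fire nodes until nothing fires.
while scale(10) | pairwise_sum():
    pass
print(list(q_out))                 # [30, 70]
```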

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances which lead to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, the numerical modeling approach, Finite Element Analysis, and Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how the binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure is studied; the wheel is to be used as the stator structure of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines.
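A sketch of the modal model underlying both finite element and experimental modal analysis (a generic three-degree-of-freedom example with assumed mass and stiffness values, not the thesis model): natural frequencies and mode shapes follow from the generalized eigenvalue problem K·phi = omega²·M·phi.

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 2.0, 2.0])                       # assumed lumped masses [kg]
K = np.array([[ 4e5, -2e5,    0],
              [-2e5,  4e5, -2e5],
              [   0, -2e5,  2e5]])                 # assumed stiffness matrix [N/m]

w2, modes = eigh(K, M)                             # eigenvalues are omega^2, columns of modes are mode shapes
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(freqs_hz)                                    # undamped natural frequencies [Hz]
```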

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use, because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model that was developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections were calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and the osculation number, influence ultimate bearing performance. Distributed defects, such as waviness of the inner and outer ring, and localized defects, such as inner and outer ring defects, are taken into consideration in the proposed model. Simulation results were verified against results obtained by applying the formula for spherical roller bearing radial deflection and against commercial bearing analysis software. Following model verification, a numerical simulation was carried out successfully for a full rotor-bearing system to demonstrate the application of this newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured to predicted behaviors for equivalent systems.
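A simplified sketch of the nonlinear Hertzian force summation described above (illustrative stiffness, clearance, and displacement values; the 10/9 load-deflection exponent is the value commonly used for roller line contact and is an assumption here, not necessarily the thesis's exact formulation):

```python
import numpy as np

# The bearing force is obtained by summing nonlinear Hertzian contact forces
# over the rollers: F = Kc * delta**(10/9), where only rollers with positive
# compression carry load.
Kc = 2.0e9          # assumed contact stiffness [N/m^(10/9)]
clearance = 20e-6   # assumed diametral clearance [m]
n_rollers = 21
ex, ey = 30e-6, 5e-6                        # assumed relative displacement of the inner ring [m]

angles = 2 * np.pi * np.arange(n_rollers) / n_rollers
delta = ex * np.cos(angles) + ey * np.sin(angles) - clearance / 2   # compression per roller
delta = np.maximum(delta, 0.0)                                      # unloaded rollers carry no force
F = Kc * delta ** (10 / 9)
Fx, Fy = np.sum(F * np.cos(angles)), np.sum(F * np.sin(angles))
print(Fx, Fy)                                                       # resultant bearing force components [N]
```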

Relevância:

20.00% 20.00%

Publicador:

Resumo:

In the literature, CO2 liquefaction has been studied extensively with steady-state modeling. Steady-state modeling gives an overview of the process, but it does not give information about process behavior during transients. In this master's thesis, three dynamic models of CO2 liquefaction were built and tested: a straight multi-stage compression model and two compression-liquid pumping models, one with and one without cold energy recovery. The models were made with the Apros software and were also used to verify that Apros is capable of modeling phase changes and the supercritical state of CO2. The models were verified against compressor manufacturer data and simulation results presented in the literature. Of the models made in this thesis, the straight compression model was found to be the most energy efficient and the fastest to react to transients. Apros was also found to be a capable tool for dynamic liquefaction modeling.
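A back-of-the-envelope sketch of why staging matters in such compression models (ideal-gas, isentropic stages with intercooling back to suction temperature; all property values are rough assumptions and are not taken from the Apros models):

```python
import math

# Specific compression work for compressing CO2 from p1 to p2 in n stages of
# equal pressure ratio, with intercooling back to T1 after each stage.
R, M, gamma = 8.314, 0.044, 1.29       # J/(mol K), kg/mol, approximate heat capacity ratio of CO2
T1, p1, p2 = 293.15, 1.0e5, 60.0e5     # assumed suction temperature [K] and pressures [Pa]

def specific_work(n_stages):
    ratio = (p2 / p1) ** (1.0 / n_stages)          # equal pressure ratio per stage
    w_stage = (gamma / (gamma - 1)) * (R / M) * T1 * (ratio ** ((gamma - 1) / gamma) - 1)
    return n_stages * w_stage                      # J/kg of CO2

for n in (1, 2, 3, 4):
    print(f"{n} stage(s): {specific_work(n) / 1e3:.0f} kJ/kg")
```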

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master's thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis the direct market is depicted by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is depicted by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen's cointegration tests, and the short-run interrelationships are examined with Granger-causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period, which indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland is not mature enough to be considered a separate market from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
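A minimal sketch of the two tests with synthetic data (statsmodels is used here as a stand-in; the actual thesis data are the Finnish housing, securitized real estate, and OMX Helsinki Cap series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

# Johansen's test looks for a long-run cointegrating relation; Granger tests
# look for short-run lead-lag effects between the (differenced) series.
rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(size=300))                    # shared stochastic trend
data = pd.DataFrame({
    "stocks": common + rng.normal(size=300),                # synthetic stock index
    "reits":  0.8 * common + rng.normal(size=300),          # synthetic securitized real estate index
})

joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)                         # compare against joh.cvt critical values

gc = grangercausalitytests(data[["reits", "stocks"]].diff().dropna(), maxlag=2)
```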

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The purpose of this thesis is to find the optimal heat recovery solution for Wärtsilä's dynamic district heating power plant in the German energy market, where the government pays subsidies for CHP plants in order to increase their share of domestic power production to 25% by 2020. Dozens of different heat recovery connections have been simulated in order to determine the most efficient ones. The purpose is also to study the feasibility of different heat recovery connections in a dynamic district heating power plant in the German market, taking into consideration the day-ahead electricity prices, district heating network temperatures, and CHP subsidies accordingly. The auxiliary cooling, dynamic operation, and cost efficiency of the power plant are also investigated.