79 results for dynamic configuration


Relevance: 20.00%

Abstract:

The objective of this thesis is the development of a multibody dynamic model matching the observed movements of the lower limb of a skier performing the skating technique in cross-country style. During the construction of this model, the equation of motion was formulated using the Euler-Lagrange approach with multipliers, applied to a multibody system in three dimensions. The lower limb of the skate skier and the ski were described by three bodies: one representing the ski and two representing the natural movements of the skier's leg. The resulting system has 13 joint constraints due to the interconnection of the bodies and four prescribed kinematic constraints to account for the movements of the leg, leaving the number of degrees of freedom equal to one. The push-off force exerted by the skate skier was taken directly from measurements made on-site in the ski tunnel at the Vuokatti facilities (Finland) and was input into the model as a continuous function. The resulting velocities and movement of the ski, the center of mass of the skier, and the variation of the skating angle were then studied to understand the response of the model to variations in important parameters of the skating technique. This allowed a comparison of the model results with the real movement of the skier. The model can be developed further to better approximate the real movement of the leg, for example by changing the constraints to include the behavior of the real leg joints and muscle actuation. As mentioned in the introduction of this thesis, a multibody dynamic model can provide relevant information to ski designers and yield optimized values of the given variables, which athletes can use to improve their performance.
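The Euler-Lagrange formulation with multipliers mentioned above can be written, for a constrained multibody system, in the following standard form (generic symbols, not necessarily the thesis's exact notation):

```latex
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{\Phi}_{\mathbf{q}}^{\mathrm{T}}\,\boldsymbol{\lambda} = \mathbf{Q}, \qquad \mathbf{\Phi}(\mathbf{q},t) = \mathbf{0}
```

Here M is the system mass matrix, q the generalized coordinates, Q the generalized forces (including the measured push-off force), and λ the Lagrange multipliers enforcing the constraint vector Φ, which collects the 13 joint constraints and the 4 prescribed kinematic constraints. With three spatial bodies and six coordinates per body, the 17 constraints leave the single degree of freedom mentioned in the abstract.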

Relevance: 20.00%

Abstract:

Family businesses are among the longest-lived and most prevalent institutions in the world, and they are an important source of economic development and growth. Ownership is key to the business life of the firm and is also one of the main criteria in the definition of a family business. There is only little research on portfolio entrepreneurship or portfolio business within the family business context, and the absence of empirical evidence on the long-term relationship between family ownership and portfolio development presents an important gap in the family business literature. This study deals with changes in family business ownership and the development of portfolios in the family business, and it is positioned within the conversation on family business, growth, ownership, management, and strategy. The study contributes to and expands the existing body of theory on family business and ownership. From a theoretical point of view, it combines and integrates insights from the fields of portfolio entrepreneurship, ownership, and family business. This cross-fertilization produces interesting empirical and theoretical findings that can constitute a basis for solid contributions to the understanding of ownership dynamics and portfolio entrepreneurship in family firms. The research strategy chosen for this study represents longitudinal, qualitative, hermeneutic, and deductive approaches. The empirical part of the study uses a case study approach with an embedded design, that is, multiple levels of analysis within a single study. The study consists of two cases: it begins with a pilot case, which forms a preunderstanding of the phenomenon and develops the methodological approach to be built on in the main case, and the main case then deepens the understanding of the phenomenon. The study develops and tests a research method for family business portfolio development, focusing on investigating how ownership changes influence family business structures over time.
It reveals the linkages between dimensions of ownership and how they give rise to portfolio business development within the context of the family business. The empirical results suggest that family business ownership is dynamic and that owners use ownership as a tool for creating business portfolios.

Relevance: 20.00%

Abstract:

Preparation of optically active compounds is of high importance in modern medicinal chemistry. Despite recent advances in the field of asymmetric synthesis, resolution of racemates still remains the most utilized way to prepare single enantiomers on an industrial scale, due to its cost-efficiency and simplicity. Enzymatic kinetic resolution (KR) of racemates is a classical method for the separation of enantiomers. One of its drawbacks is that the yield of the target enantiomer is limited to 50%. Dynamic kinetic resolution (DKR) allows yields of up to 100% to be reached through in situ racemization of the less reactive enantiomer. In the first part of this thesis, a number of half-sandwich ruthenium complexes were prepared and evaluated as catalysts for the racemization of optically active secondary alcohols. A leading catalyst, Bn5CpRu(CO)2Cl, was identified and extensively characterized by applying it to the DKR of a broad range of secondary alcohols over a wide range of reaction loadings (1 mmol to 1 mol). A cost-efficient, chromatography-free procedure for the preparation of this catalyst was developed. Further, detailed kinetic and mechanistic studies of the racemization reactions were performed. Comparison of racemization rates in the presence of the Bn5CpRu(CO)2Cl and Ph5CpRu(CO)2Cl catalysts reveals that the performance of the catalytic system can be adjusted by matching the electronic properties of the catalysts and the substrates. Moreover, the rate-limiting step was observed to depend on the electronic properties of the reagents, and important conclusions about the reaction mechanism were drawn. Finally, an alternative approach to the DKR of amines, based on spatially separated vessels, was addressed. This procedure allows a thermolabile enzyme to be combined with racemization catalysts that are active only at high temperatures.
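The yield advantage of DKR over plain KR can be illustrated with a minimal kinetic sketch. The rate constants below are illustrative values, not fitted to the thesis data: the enzyme consumes only the fast-reacting (R)-enantiomer, while the racemization catalyst continuously replenishes it from (S).

```python
# Minimal kinetic sketch of why DKR beats plain KR: the enzyme consumes only
# the fast-reacting (R)-enantiomer, while the racemization catalyst keeps
# replenishing it from (S). Rate constants are illustrative, not thesis data.
k_enz = 1.0   # enzymatic conversion rate of (R), 1/h

def simulate(k_rac, t_end=20.0, dt=1e-3):
    """Explicit-Euler integration of the R/S/product mass balance."""
    R, S, P = 0.5, 0.5, 0.0            # start from the racemate
    for _ in range(int(t_end / dt)):
        rac = k_rac * (S - R)          # racemization pushes R and S together
        dR = (-k_enz * R + rac) * dt
        dS = -rac * dt
        dP = k_enz * R * dt            # product forms only from (R)
        R, S, P = R + dR, S + dS, P + dP
    return P

yield_kr = simulate(k_rac=0.0)    # kinetic resolution: no racemization
yield_dkr = simulate(k_rac=10.0)  # dynamic kinetic resolution
print(f"KR yield ~{yield_kr:.2f}, DKR yield ~{yield_dkr:.2f}")
```

With racemization switched off, the product plateaus at the 50% ceiling of plain KR; with fast racemization, the yield approaches 100%, which is the point of the DKR strategy.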

Relevance: 20.00%

Abstract:

Configuration management is often seen as an enabler for the main IT Service Management (ITSM) processes, such as incident and problem management. A decent level of quality of IT configuration data is required in order to carry out the routines of these processes. This case study examines the state of configuration management in a multinational organization and aims at identifying methods for its improvement. The author spent five months with this company in order to collect different sources of evidence and to make observations. The main source of data for this study is interviews with some of the key employees of the assigned organization who are involved in the ITSM processes. The study concludes that the maturity level of the existing configuration management process is repeatable but intuitive, and it outlines the principal requirements for improvement. A match between the requirements identified in the organization and the requirements stated in the ISO/IEC 20000 standard indicates the possibility of adopting ITIL guidelines as a method for configuration management process improvement. The outcome of the study is a set of recommendations for improvement that considers the process, the information model, and the information system for configuration management in the case organization.
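The information model mentioned above centers on configuration items (CIs) and the relations between them. The sketch below is a hypothetical minimal CI model (class and attribute names are invented for illustration, not taken from the case organization); it shows the kind of dependency query that incident and problem management rely on.

```python
from dataclasses import dataclass, field

# Hypothetical minimal configuration-item (CI) information model; names are
# illustrative, not from the case organization or any specific CMDB product.
@dataclass
class ConfigurationItem:
    ci_id: str
    ci_type: str                 # e.g. "server", "application", "service"
    owner: str
    status: str = "active"
    relations: list = field(default_factory=list)   # CIs this item depends on

def impacted_by(cis, failed_id):
    """Walk dependency relations to list CIs affected by a failed CI --
    the kind of impact query incident and problem management need."""
    impacted, frontier = set(), [failed_id]
    while frontier:
        current = frontier.pop()
        for ci in cis:
            if current in ci.relations and ci.ci_id not in impacted:
                impacted.add(ci.ci_id)
                frontier.append(ci.ci_id)
    return sorted(impacted)

cis = [
    ConfigurationItem("db01", "server", "ops"),
    ConfigurationItem("erp", "application", "it", relations=["db01"]),
    ConfigurationItem("billing", "service", "finance", relations=["erp"]),
]
print(impacted_by(cis, "db01"))   # ['billing', 'erp']
```

The value of such a model depends directly on data quality: if the `relations` entries are stale, the impact query silently misses affected services, which is why the abstract ties process maturity to configuration data quality.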

Relevance: 20.00%

Abstract:

Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and to speed up the product development process. Furthermore, taking operator influence into account requires real-time models, i.e., simplified and computationally efficient models. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research revisits the theoretical background of multibody dynamic simulation and the finite element method to find suitable components for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the stress history relevant for fatigue assessment of a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
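The efficiency of the modal superposition step can be sketched as follows: during the simulation only a handful of modal coordinates are tracked, and the detail stress history is recovered as a weighted sum of pre-computed modal stress fields. All numbers below are illustrative, not from the thesis.

```python
import numpy as np

# Sketch of modal stress recovery: the dynamic simulation tracks only the
# modal coordinates q(t); the stress at a welded detail is recovered as
# sigma(t) = sum_i modal_stress_i * q_i(t). Numbers are illustrative.
n_modes, n_steps = 3, 4

# Modal stress at the hot spot of a (hypothetical) welded detail for each
# retained mode, pre-computed once from a fine sub-model (MPa per unit
# modal coordinate).
modal_stress = np.array([120.0, -45.0, 8.0])

# Modal coordinate histories from the multibody simulation (one row per step).
q = np.array([
    [0.00,  0.00,  0.0],
    [0.01,  0.02,  0.1],
    [0.03, -0.01,  0.2],
    [0.02,  0.00, -0.1],
])

# Stress history at the detail: a single matrix-vector product per detail,
# instead of a full finite element solve at every time step.
sigma = q @ modal_stress
print(sigma)
```

The point of the combination described in the abstract is visible here: the expensive spatial problem (the modal stress field of the sub-model) is solved once, and the time-dependent part reduces to cheap linear algebra that can run during, or after, the dynamic simulation.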

Relevance: 20.00%

Abstract:

The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated into a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores, as well as between the other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems; these can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long, so the system should be designed to minimize the amount of energy-inefficient long-distance communication. This thesis presents a dynamically clustered, distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems.
The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
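The dynamic clustering idea can be sketched in a few lines: each node attaches itself to its nearest live monitor, so clusters re-form locally after a monitor fault without any centralized control. The mesh size and monitor placement below are illustrative, not the thesis's configuration.

```python
# Sketch of dynamically clustered monitoring on a 4x4 mesh NoC: every node
# reports to its nearest live monitor (Manhattan distance), so clusters can
# re-form locally after a monitor fault without centralized control.
# Mesh size and monitor placement are illustrative assumptions.
def cluster(nodes, monitors):
    """Map each node to its closest live monitor; every node can compute
    this from its own coordinates, so no central controller is needed."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return {n: min(monitors, key=lambda m: dist(n, m)) for n in nodes}

nodes = [(x, y) for x in range(4) for y in range(4)]
monitors = [(0, 0), (3, 3)]

before = cluster(nodes, monitors)
print(before[(1, 0)])        # (0, 0): closer to the lower-left monitor

# Monitor (0, 0) becomes faulty; its nodes re-cluster to the survivor.
after = cluster(nodes, [(3, 3)])
print(after[(1, 0)])         # (3, 3)
```

Because the re-clustering decision is purely local, a fault in one monitor never requires global coordination, which matches the abstract's goal of minimizing energy-inefficient long-distance communication.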

Relevance: 20.00%

Abstract:

During a possible loss-of-coolant accident in a BWR, a large amount of steam will be released from the reactor pressure vessel into the suppression pool. The steam will condense in the suppression pool, causing dynamic and structural loads on the pool. The formation and break-up of bubbles can be measured by visual observation using a suitable pattern recognition algorithm. The aim of this study was to improve, in MATLAB, the preliminary pattern recognition algorithm developed by Vesa Tanskanen in his doctoral dissertation. Video material from the PPOOLEX test facility, recorded during thermal stratification and mixing experiments, was used as a reference in the development of the algorithm. The developed algorithm consists of two parts: the pattern recognition of the bubbles and the analysis of the recognized bubble images. The bubble recognition works well, although some errors appear due to the complex structure of the pool. The results of the image analysis were reasonable; the volume and the surface area of the bubbles were, however, not evaluated. Chugging frequencies calculated using the FFT agreed well with the oscillation frequencies measured in the experiments. The pattern recognition algorithm works in the conditions it was designed for; if the measurement configuration is changed, some modifications will be needed. Numerous improvements are proposed for the future 3D equipment.
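The FFT step of the image analysis can be sketched as follows: a time series derived from the recognized bubbles (here a synthetic bubble-area signal oscillating at 2 Hz; the frame rate and frequency are assumed values, not PPOOLEX data) is transformed, and the dominant peak gives the chugging frequency.

```python
import numpy as np

# Sketch of extracting a chugging frequency from a bubble time series with
# the FFT, as in the image-analysis step; the 2 Hz signal is synthetic and
# the 50 fps frame rate is an assumed value, not PPOOLEX data.
fs = 50.0                       # camera frame rate, frames/s
t = np.arange(0, 10, 1 / fs)    # 10 s of video
area = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)   # bubble area, 2 Hz oscillation

# Remove the mean so the zero-frequency bin does not dominate the spectrum.
spectrum = np.abs(np.fft.rfft(area - area.mean()))
freqs = np.fft.rfftfreq(len(area), d=1 / fs)
dominant = freqs[np.argmax(spectrum)]
print(f"dominant chugging frequency: {dominant:.1f} Hz")
```

With 10 s of video the frequency resolution is 0.1 Hz, so a 2 Hz oscillation lands exactly on a bin; for real, noisy bubble signals a window function and peak interpolation would sharpen the estimate.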

Relevance: 20.00%

Abstract:

Wastes and side streams in the mining industry, as well as various anthropogenic wastes, often contain valuable metals in such concentrations that their recovery may be economically viable. These raw materials are collectively called secondary raw materials. The recovery of metals from these materials is also environmentally favorable, since many of the metals, for example heavy metals, are hazardous to the environment. This has been noticed by legislative bodies, and strict regulations for handling both mining and anthropogenic wastes have been developed, mainly in the last decade. In the mining and metallurgy industry, important secondary raw materials include, for example, steelmaking dusts (recoverable metals e.g. Zn and Mo), zinc plant residues (Ag, Au, Ga, Ge, In), and waste slurry from Bayer process alumina production (Ga, REE, Ti, V). Among anthropogenic wastes, waste electrical and electronic equipment (WEEE), including LCD screens and fluorescent lamps, is clearly the most important from a metals recovery point of view. Metals commonly recovered from WEEE include, for example, Ag, Au, Cu, Pd, and Pt. In LCD screens indium, and in fluorescent lamps REEs, are possible target metals. Hydrometallurgical processing routes are highly suitable for the treatment of complex and/or low-grade raw materials, as secondary raw materials often are. These solid or liquid raw materials often contain large amounts of base metals; thus, in order to recover the valuable metals, which are present in small concentrations, highly selective separation methods, such as hydrometallurgical routes, are needed. In addition, hydrometallurgical processes are seen as more environmentally friendly, and they have a lower energy consumption than pyrometallurgical processes. In this thesis, solvent extraction and ion exchange are the most important hydrometallurgical separation methods studied.
Solvent extraction is a mainstream unit operation in the metallurgical industry for all kinds of metals, but practical applications of ion exchange are not as widespread. However, ion exchange is known to be particularly suitable for dilute feed solutions and complex separation tasks, which makes it a viable option, especially for processing secondary raw materials. The recovery of valuable metals was studied with five different raw materials, comprising liquid and solid side streams from metallurgical industries and WEEE. Recovery of high-purity (99.7%) In from LCD screens was achieved by leaching with H2SO4, extracting In and Sn into D2EHPA, and selectively stripping In into HCl. In the solvent extraction stage, In was also concentrated from 44 mg/L to 6.5 g/L. Ge was recovered as a side product from two different base metal process liquors with an N-methylglucamine-functional chelating ion exchange resin (IRA-743). Based on equilibrium and dynamic modeling, a mechanism for this moderately complex adsorption process was suggested. Eu and Y were leached with high yields (91% and 83%) by 2 M H2SO4 from a fluorescent lamp precipitate from a waste treatment plant. The waste also contained significant amounts of other REEs, such as Gd and Tb, but these were not leached by common mineral acids under ambient conditions. Zn was selectively leached over Fe from steelmaking dusts with a controlled acidic leaching method, in which the pH was not allowed to drop below 3 but was held as close to it as possible. Mo was also present in the other studied dust and was leached more effectively with pure water than with the acidic methods. Good yield and selectivity in the solvent extraction of Zn were achieved with D2EHPA; however, Fe needs to be eliminated in advance, either by the controlled leaching method or, for example, by precipitation. A 100% pure Mo/Cr product was obtained with a quaternary ammonium salt (Aliquat 336) directly from the water leachate, without pH adjustment (pH 13.7).
A Mo/Cr mixture was also obtained from H2SO4 leachates with the hydroxyoxime LIX 84-I and trioctylamine (TOA), but the purities were 70% at most; with Aliquat 336, again an over 99% pure mixture was obtained. High selectivity for Mo over Cr was not achieved with any of the studied reagents. An Ag-NaCl solution was purified from divalent impurity metals with the aminomethylphosphonic-acid-functional Lewatit TP-260 ion exchange resin. A novel preconditioning method, named controlled partial neutralization, using conjugate bases of weak organic acids, was applied to control the pH in the column and thus avoid capacity losses and precipitation. Counter-current SMB was shown to be a better process configuration than either batch column operation or the cross-current operation conventionally used in the metallurgical industry. The raw materials used in this thesis were also evaluated from an economic point of view, and the precipitate from the waste fluorescent lamp treatment process was clearly shown to be the most promising.
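The concentration effect reported for the In solvent extraction stage follows from the standard single-contact mass balance. The distribution ratio D below is an assumed illustrative value, not a measured one from the thesis.

```python
# Sketch of the standard single-contact solvent-extraction mass balance that
# underlies concentration steps like the In recovery above; the distribution
# ratio D is an assumed illustrative value, not a measured one.
def extraction_fraction(D, v_aq, v_org):
    """Fraction extracted in one contact: E = D / (D + V_aq/V_org),
    where D = [M]_org / [M]_aq at equilibrium."""
    return D / (D + v_aq / v_org)

D = 200.0                   # assumed distribution ratio (e.g. In into D2EHPA)
v_aq, v_org = 100.0, 1.0    # a high aqueous-to-organic ratio concentrates the metal

E = extraction_fraction(D, v_aq, v_org)
print(f"extracted: {E:.1%}, concentration factor: {E * v_aq / v_org:.0f}")
```

The trade-off is visible in the formula: a large aqueous-to-organic phase ratio concentrates the metal into the small organic volume, but only if D is high enough to keep the extracted fraction acceptable; otherwise multiple counter-current stages are used.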

Relevance: 20.00%

Abstract:

Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, 18F-FDG, the most widely used radiotracer, is not specific to tumours: it can also accumulate in inflammatory lesions as well as in normal, physiologically active tissues, complicating diagnosis and treatment planning for physicians. Malignant, inflammatory, and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time activity curves in a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammation, tumour, and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian Mixture Model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we correlated the dynamic features with clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissue from inflammatory regions, and some dynamic features also correlate positively with clinical metrics. If further explored, these methods could help reduce false-positive tumour detections and lead to real-world applications for tumour diagnosis and contouring.
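The clustering step can be sketched with plain NumPy: voxel time-activity curves (TACs) are treated as feature vectors and grouped with K-means. The two TAC shapes below are synthetic stand-ins (a rising, accumulating curve and a washout curve), not real tracer kinetics from the study.

```python
import numpy as np

# Sketch of clustering voxel time-activity curves (TACs) with K-means, as an
# alternative to a single static SUV. The two TAC shapes are synthetic
# stand-ins for accumulating (tumour-like) and washout (inflammation-like)
# kinetics; they are not real tracer data from the study.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10)
rising = t                          # tracer keeps accumulating
washout = 1.0 - t                   # early peak, then clearance
tacs = np.vstack([rising + 0.05 * rng.standard_normal((20, 10)),
                  washout + 0.05 * rng.standard_normal((20, 10))])

def kmeans(X, init, iters=20):
    """Plain NumPy K-means; `init` lists the row indices of the start centers."""
    centers = X[init].astype(float)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(init))])
    return labels

# Initialize with one curve from each half of the data (illustrative choice).
labels = kmeans(tacs, init=[0, 20])
print(labels[:5], labels[-5:])
```

The whole curve shape, rather than a single late-time value like SUV, drives the grouping here, which is the motivation the abstract gives for dynamic features.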

Relevance: 20.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only communication allowed between the nodes, making the dependencies between the nodes, and thereby also the parallelism, explicit. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as the description shows how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking: the model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation; instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
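The execution model described above can be sketched in a few lines: nodes communicate only through FIFO queues and may fire as soon as sufficient inputs are available, independently of every other node. This is a generic dataflow sketch in Python, not RVC-CAL; the node names and the trivial scheduler are illustrative.

```python
from collections import deque

# Generic sketch of the dataflow execution model (not RVC-CAL): nodes
# communicate only via FIFO queues and fire when every input queue holds
# at least one token. Names and the trivial scheduler are illustrative.
class Node:
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.inputs = [deque() for _ in range(n_inputs)]
        self.output = deque()

    def can_fire(self):
        return all(self.inputs)          # one token needed per input port

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]
        self.output.append(self.fn(*tokens))

# A two-node graph: square each value, then sum pairs of squares.
square = Node(lambda x: x * x, 1)
adder = Node(lambda a, b: a + b, 2)

for v in [1, 2, 3, 4]:
    square.inputs[0].append(v)

# Fire `square` while it has tokens, routing its results alternately to the
# two input ports of `adder` (queues are the only channel between nodes).
port = 0
while square.can_fire():
    square.fire()
    adder.inputs[port].append(square.output.popleft())
    port = 1 - port

while adder.can_fire():
    adder.fire()

print(list(adder.output))   # [5, 25]
```

The `while ... can_fire()` loops are exactly the run-time firing-rule checks whose cost quasi-static scheduling removes: once analysis proves that `square` always fires four times before `adder` fires twice, the checks can be replaced by a fixed, pre-computed sequence.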

Relevance: 20.00%

Abstract:

Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances and thereby in poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements so as to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, Finite Element Analysis, and Experimental Modal Analysis are applied to build a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how the binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the stator structure of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines, is studied.
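The modal-model step described above reduces, numerically, to an eigenvalue problem: natural frequencies follow from K v = w^2 M v. The two-DOF lumped model below is purely illustrative (its masses and the interlayer stiffness are assumed values, not properties of the studied steel elements), but it shows how a stiffer binding raises the natural frequencies.

```python
import numpy as np

# Sketch of the numerical modal analysis step: natural frequencies of an
# illustrative two-DOF lumped model from the generalized eigenproblem
# K v = w^2 M v. Mass and stiffness values are assumed, not thesis data.
m = 1.0          # kg, lumped mass per layer
k = 1.0e4        # N/m, interlayer binding stiffness

M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

# Reduce to a standard symmetric eigenproblem via M^(-1/2) K M^(-1/2).
Minv_sqrt = np.diag(1 / np.sqrt(np.diag(M)))
w2, _ = np.linalg.eigh(Minv_sqrt @ K @ Minv_sqrt)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(np.round(freqs_hz, 1))   # natural frequencies in Hz, ascending
```

Doubling `k` scales both frequencies by sqrt(2), which is the lumped-model analogue of how the binding method shifts the modal behavior of the layered elements relative to a homogeneous plate.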

Relevance: 20.00%

Abstract:

Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections are calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and the osculation number, influence ultimate bearing performance. Distributed defects, such as waviness of the inner and outer rings, and localized defects, such as inner and outer ring surface defects, are taken into consideration in the proposed model. Simulation results were verified against results obtained by applying the formula for spherical roller bearing radial deflection and against commercial bearing analysis software. Following model verification, a numerical simulation was carried out for a full rotor-bearing system to demonstrate the application of the newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured and predicted behaviors for equivalent systems.
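The nonlinear Hertzian force calculation can be sketched as follows. The point-contact form F = K * delta^(3/2) and the parameter values below are illustrative assumptions, not the thesis's exact load-deflection relation or bearing data; the sketch only shows how roller position and diametral clearance enter the force summation.

```python
import math

# Sketch of a nonlinear Hertzian bearing-force summation: the point-contact
# form F = K * delta^(3/2) with an assumed contact stiffness K. The exact
# relation and all parameter values in the thesis may differ.
def contact_force(delta, K=1.0e9):
    """Contact force (N) for contact deflection delta (m); zero if separated."""
    return K * max(delta, 0.0) ** 1.5

def bearing_force(x, n_rollers=20, clearance=20e-6, K=1.0e9):
    """Radial force for a pure-x inner-ring displacement x: each roller's
    deflection depends on its angular position and the diametral clearance."""
    total = 0.0
    for i in range(n_rollers):
        phi = 2 * math.pi * i / n_rollers      # roller position angle
        delta = x * math.cos(phi) - clearance / 2
        total += contact_force(delta, K) * math.cos(phi)
    return total

# The clearance makes the force-displacement relation strongly nonlinear:
print(f"{bearing_force(20e-6):.2f} N  vs  {bearing_force(40e-6):.2f} N")
```

The clearance term is why only part of the roller set carries load at any instant; as the abstract notes, diametral clearance and the number of rollers are among the parameters that most influence the resulting bearing behavior.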