853 results for Photorefractive dynamic holograms


Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is the development of a multibody dynamic model matching the observed movements of the lower limb of a skier performing the skating technique in cross-country style. In the construction of this model, the equation of motion was formulated using the Euler-Lagrange approach with multipliers applied to a three-dimensional multibody system. The lower limb of the skate skier and the ski were described by three bodies: one representing the ski and two representing the natural movements of the skier's leg. The resulting system has 13 joint constraints due to the interconnection of the bodies and four prescribed kinematic constraints to account for the movements of the leg, leaving the number of degrees of freedom equal to one. The push-off force exerted by the skate skier was taken directly from measurements made on-site in the ski tunnel at the Vuokatti facilities (Finland) and was input into the model as a continuous function. The resulting velocities and movement of the ski, the center of mass of the skier, and the variation of the skating angle were then studied to understand the response of the model to variations in important parameters of the skating technique, which allowed the model results to be compared with the real movement of the skier. Further developments can be made to this model to bring the results closer to the real movement of the leg, for example by changing the constraints to include the behavior of the real leg joints and muscle actuation. As mentioned in the introduction of this thesis, a multibody dynamic model can be used to provide relevant information to ski designers and to obtain optimized values of the given variables, which athletes can use to improve their performance.
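
As a sketch of the formulation just described (generic multibody notation, not the exact symbols of the thesis), the equations of motion with Lagrange multipliers and the constraint equations take the form:

```latex
% Constrained equations of motion (Euler-Lagrange with multipliers) in generic
% multibody notation; symbols are illustrative, not the thesis' exact ones.
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
  + \boldsymbol{\Phi}_{\mathbf{q}}^{\mathsf{T}}(\mathbf{q},t)\,\boldsymbol{\lambda}
  = \mathbf{Q}\bigl(\mathbf{q},\dot{\mathbf{q}},t\bigr),
\qquad
\boldsymbol{\Phi}(\mathbf{q},t) = \mathbf{0}
```

With three spatial bodies (six generalized coordinates each), 13 joint constraints and 4 prescribed driving constraints, the count 3 × 6 − 13 − 4 = 1 recovers the single degree of freedom stated above.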

Relevance:

20.00%

Publisher:

Abstract:

Family businesses are among the longest-lived and most prevalent institutions in the world, and they are an important source of economic development and growth. Ownership is key to the business life of the firm and is also one of the main elements in the definition of a family business. There is only little research on portfolio entrepreneurship or portfolio businesses within the family business context, and the absence of empirical evidence on the long-term relationship between family ownership and portfolio development presents an important gap in the family business literature. This study deals with changes in family business ownership and the development of portfolios in the family business, and it is positioned within the conversation on family business, growth, ownership, management and strategy. The study contributes to and expands the existing body of theory on family business and ownership. From a theoretical point of view, it combines and integrates insights from the fields of portfolio entrepreneurship, ownership, and family business. This cross-fertilization produces interesting empirical and theoretical findings that can constitute a basis for solid contributions to the understanding of ownership dynamics and portfolio entrepreneurship in family firms. The research strategy chosen for this study represents longitudinal, qualitative, hermeneutic, and deductive approaches. The empirical part of the study uses a case study approach with an embedded design, that is, multiple levels of analysis within a single study. The study consists of two cases: it begins with a pilot case, which forms a pre-understanding of the phenomenon and develops the methodological approach for the main case, and the main case then deepens the understanding of the phenomenon. The study develops and tests a research method for family business portfolio development, focusing on how ownership changes influence family business structures over time. It reveals the linkages between the dimensions of ownership and how they give rise to portfolio business development within the context of the family business. The empirical results suggest that family business ownership is dynamic and that owners use ownership as a tool for creating business portfolios.

Relevance:

20.00%

Publisher:

Abstract:

Preparation of optically active compounds is of high importance in modern medicinal chemistry. Despite recent advances in the field of asymmetric synthesis, resolution of racemates still remains the most utilized way to prepare single enantiomers on an industrial scale, owing to its cost-efficiency and simplicity. Enzymatic kinetic resolution (KR) of racemates is a classical method for separating enantiomers; one of its drawbacks is that the yield of the target enantiomer is limited to 50%. Dynamic kinetic resolution (DKR) allows yields of up to 100% to be reached by in situ racemization of the less reactive enantiomer. In the first part of this thesis, a number of half-sandwich ruthenium complexes were prepared and evaluated as catalysts for the racemization of optically active secondary alcohols. A leading catalyst, Bn5CpRu(CO)2Cl, was identified and extensively characterized through its application to the DKR of a broad range of secondary alcohols over a wide range of reaction loadings (1 mmol – 1 mol). A cost-efficient, chromatography-free procedure for the preparation of this catalyst was developed. Further, detailed kinetic and mechanistic studies of the racemization reactions were performed. Comparison of racemization rates in the presence of the Bn5CpRu(CO)2Cl and Ph5CpRu(CO)2Cl catalysts reveals that the performance of the catalytic system can be adjusted by matching the electronic properties of the catalysts and the substrates. Moreover, a dependence of the rate-limiting step on the electronic properties of the reagents was observed, and important conclusions about the reaction mechanism were drawn. Finally, an alternative approach to the DKR of amines based on spatially separated vessels was addressed. This procedure allows a thermolabile enzyme to be combined with racemization catalysts that are active only at high temperatures.
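
A minimal kinetic sketch of why in situ racemization lifts the 50% yield ceiling of a plain kinetic resolution; the first-order model and rate constants below are illustrative, not values measured in this work:

```python
# Minimal sketch: DKR vs. plain KR with first-order kinetics and illustrative
# rate constants (not measured values from this thesis).
import numpy as np
from scipy.integrate import solve_ivp

k_R = 1.0     # enzymatic conversion rate of the fast-reacting enantiomer, 1/h
k_S = 0.01    # conversion rate of the slow-reacting enantiomer, 1/h
k_rac = 5.0   # racemization rate (R <-> S), 1/h; set to 0.0 for plain KR

def rates(t, y):
    R, S, P = y  # fast enantiomer, slow enantiomer, product
    return [-k_R * R - k_rac * (R - S),
            -k_S * S + k_rac * (R - S),
             k_R * R + k_S * S]

sol = solve_ivp(rates, (0.0, 24.0), [0.5, 0.5, 0.0])
print(f"product yield after 24 h: {sol.y[2, -1]:.2f}")  # approaches 1.0 when k_rac > 0
```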

Relevance:

20.00%

Publisher:

Abstract:

Modern machine structures are often fabricated by welding. From a fatigue point of view, the structural details, and especially the welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts from which to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
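
A minimal sketch of the stress-recovery idea behind such a modal-superposition approach, assuming precomputed modal stress shapes for the detail; the array names and data below are illustrative placeholders, not the thesis implementation:

```python
# Sketch of stress recovery by modal superposition: the stress history of a
# structural detail is reconstructed from the flexible body's modal coordinate
# histories and precomputed modal stress shapes (illustrative data only).
import numpy as np

n_modes, n_steps, n_stress = 6, 1000, 3  # modes, time steps, stress components at the detail

# Modal stress shapes at the detail (e.g. from a sub-model FE solution), one column per mode
Phi_sigma = np.random.rand(n_stress, n_modes)   # placeholder values

# Modal coordinate histories from the floating-frame-of-reference simulation
q = np.random.rand(n_modes, n_steps)            # placeholder values

# Stress history at the detail: sigma(t) = Phi_sigma @ q(t), evaluated for all steps at once
sigma_history = Phi_sigma @ q                   # shape (n_stress, n_steps)
print(sigma_history.shape)
```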

Relevance:

20.00%

Publisher:

Abstract:

Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as the other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems; these can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
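
A toy sketch of what cluster-based monitoring on a mesh NoC can look like; the mesh size, cluster partitioning and congestion threshold below are illustrative assumptions, not the structure implemented in the thesis:

```python
# Toy sketch: each cluster monitor aggregates local buffer occupancy and reports
# congested routers, so traffic management or task mapping can avoid them.
from collections import defaultdict

MESH = 8                    # 8x8 mesh of routers (assumed size)
CLUSTER = 4                 # cluster size: 4x4 sub-meshes (assumed partitioning)
CONGESTION_LIMIT = 0.8      # buffer occupancy fraction considered congested (assumed)

def cluster_of(x, y):
    """Map a router coordinate to its cluster id."""
    return (x // CLUSTER, y // CLUSTER)

def monitor(occupancy):
    """occupancy[(x, y)] in [0, 1]; return congested routers grouped per cluster."""
    report = defaultdict(list)
    for (x, y), load in occupancy.items():
        if load > CONGESTION_LIMIT:
            report[cluster_of(x, y)].append((x, y))
    return dict(report)

# Example: one hot spot in the lower-left cluster
occupancy = {(x, y): 0.2 for x in range(MESH) for y in range(MESH)}
occupancy[(1, 2)] = 0.95
print(monitor(occupancy))   # {(0, 0): [(1, 2)]}
```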

Relevance:

20.00%

Publisher:

Abstract:

A surgical technique for the treatment of ununited anconeal process in dogs, consisting of osteotomy and dynamic distraction of the proximal part of the ulna using a linear external skeletal fixator, was evaluated. In all cases the osteotomy was distracted 1 mm per day after surgery until the desired distraction had been achieved. Eight dogs (9 joints) diagnosed with ununited anconeal process were treated. The success of the procedure was determined by comparing the clinical signs of lameness and the degree of arthrosis at the time of diagnosis with those 6 months after the surgical intervention. Radiographic union occurred in 88.9% of the affected joints between 21 and 42 days after the surgical procedure. Clinically, six elbows were classified as good, two as satisfactory and one as unsatisfactory. Six months after surgery two elbows had no arthrosis, one had Grade 1, two had Grade 2 and one had Grade 3. It is concluded that the combination of ulnar osteotomy and dynamic distraction of the olecranon by a linear external skeletal fixator is a feasible procedure for the treatment of ununited anconeal process in dogs.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an approach to the problem of moving a robot manipulator with minimum cost along a specified geometric path in the presence of obstacles. The main idea is to express obstacle avoidance in terms of the distances between potentially colliding parts. The optimal traveling time and the minimum mechanical energy of the actuators are considered together to build a multiobjective function. A simple numerical example involving a Cartesian manipulator arm with two degrees of freedom is described.
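
One way to write down such a multiobjective problem (generic notation, not necessarily the paper's exact formulation): travel time and actuator energy are weighted together, and obstacle avoidance appears as minimum-distance constraints between potentially colliding parts:

```latex
% Illustrative formulation in generic notation (not necessarily the paper's symbols)
J = w_{1}\,T
  + w_{2}\int_{0}^{T}\boldsymbol{\tau}^{\mathsf{T}}(t)\,\dot{\mathbf{q}}(t)\,\mathrm{d}t
\quad\text{subject to}\quad
d_{ij}\bigl(\mathbf{q}(t)\bigr) \ge d_{\min}
\ \ \text{for all potentially colliding pairs } (i,j)
```

Here T is the traveling time, tau the actuator torques/forces, q the joint coordinates constrained to the specified geometric path, and d_ij the distance between parts i and j.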

Relevance:

20.00%

Publisher:

Abstract:

This work presents a geometric nonlinear dynamic analysis of plates and shells using eight-node hexahedral isoparametric elements. The main features of the present formulation are: (a) the element matrices are obtained using reduced integration with hourglass control; (b) an explicit Taylor-Galerkin scheme is used to carry out the dynamic analysis, solving the corresponding equations of motion in terms of velocity components; (c) the Truesdell stress rate tensor is used; (d) the vector processing facilities available in modern supercomputers are exploited. The results obtained are comparable with previous solutions in terms of accuracy and computational performance.
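
For orientation only, a generic second-order Taylor expansion in time of the velocity field indicates the flavour of an explicit one-step update of this kind; this is an illustrative sketch, not the paper's exact discrete scheme:

```latex
% Generic Taylor-series-in-time update of the velocity components (illustrative)
\mathbf{v}^{\,n+1} = \mathbf{v}^{\,n} + \Delta t\,\dot{\mathbf{v}}^{\,n}
  + \tfrac{1}{2}\,\Delta t^{2}\,\ddot{\mathbf{v}}^{\,n},
\qquad
\mathbf{M}\,\dot{\mathbf{v}}^{\,n} = \mathbf{F}_{\mathrm{ext}}^{\,n} - \mathbf{F}_{\mathrm{int}}^{\,n}
```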

Relevance:

20.00%

Publisher:

Abstract:

A frequency-domain method for nonlinear analysis of structural systems with viscous, hysteretic, nonproportional and frequency-dependent damping is presented. The nonlinear effects and the nonproportional damping are taken into account through pseudo-force terms. The uncoupled equations in modal coordinates are solved iteratively. The treatment of initial conditions in the frequency domain, which is necessary for handling the uncoupled equations, is addressed first.
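
As an illustration of the pseudo-force idea (generic notation, not necessarily the paper's symbols; xi_n and omega_n are the modal damping ratio and natural frequency, Q_n the modal coordinate, phi_n the mode shape, and P the pseudo-force vector), the nonlinear and nonproportional-damping contributions are moved to the right-hand side and updated iteratively, so each modal equation remains uncoupled in the frequency domain:

```latex
% Pseudo-force iteration per modal coordinate in the frequency domain (illustrative)
\bigl(-\omega^{2} + 2\,\mathrm{i}\,\xi_{n}\,\omega_{n}\,\omega + \omega_{n}^{2}\bigr)\,
Q_{n}^{(k+1)}(\omega)
  = \boldsymbol{\phi}_{n}^{\mathsf{T}}
    \Bigl[\mathbf{F}(\omega) - \mathbf{P}^{(k)}(\omega)\Bigr],
\qquad n = 1,\dots,N
```

Here P^(k) collects the pseudo-forces (nonlinear effects and nonproportional damping) evaluated from the response of iteration k, and the iteration is repeated until the modal coordinates converge.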

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this work is to analyze the importance of the gas-solid interphase transfer of turbulent kinetic energy for the accuracy of the fluid dynamic prediction of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure, and velocity fields of both phases in 2-D axisymmetric cylindrical coordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered continuous and fully interpenetrating. CFB processes are essentially turbulent. The effective stress on each phase is modeled as that of a Newtonian fluid, where the effective gas viscosity is calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase are calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed, and the results were compared with experimental data available in the literature. The results were obtained with a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
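
For orientation, a gas-phase turbulent kinetic energy budget of the kind referred to above can be written schematically as follows (illustrative, generic two-fluid notation; the specific closure of the interphase term is exactly what the paper's analysis examines):

```latex
% Schematic gas-phase k-equation in a two-fluid model (illustrative notation)
\frac{\partial}{\partial t}\bigl(\alpha_{g}\rho_{g}k\bigr)
 + \nabla\cdot\bigl(\alpha_{g}\rho_{g}\mathbf{u}_{g}k\bigr)
 = \nabla\cdot\!\Bigl(\alpha_{g}\tfrac{\mu_{t}}{\sigma_{k}}\nabla k\Bigr)
 + \alpha_{g}G_{k} - \alpha_{g}\rho_{g}\varepsilon + \Pi_{k}
```

Here Pi_k denotes the gas-solid interphase exchange of turbulent kinetic energy, whose importance for the predicted fluid dynamics is the question examined in this work.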

Relevance:

20.00%

Publisher:

Abstract:

Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific to tumours and can also accumulate in inflammatory lesions as well as in normal, physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time-activity curves in a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammation, tumour and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time-activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian mixture model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If these proposed methods are explored further, they could prove useful in reducing false positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
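
A sketch of clustering voxels by simple time-activity-curve (TAC) features, in the spirit of the approach described above; the particular features, frame times and data below are illustrative placeholders, not the definitions used in the thesis:

```python
# Cluster voxels by dynamic TAC features with a Gaussian mixture model and K-means
# (illustrative features and synthetic placeholder data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

n_voxels, n_frames = 500, 20
t = np.linspace(1.0, 60.0, n_frames)                 # frame mid-times in minutes (assumed)
tac = np.abs(np.random.randn(n_voxels, n_frames))    # placeholder dynamic PET data

# Example dynamic features per voxel: late/early uptake ratio, late-phase slope, TAC area
late_early_ratio = tac[:, -5:].mean(axis=1) / (tac[:, :5].mean(axis=1) + 1e-9)
late_slope = np.polyfit(t[-10:], tac[:, -10:].T, 1)[0]
auc = np.trapz(tac, t, axis=1)
features = np.column_stack([late_early_ratio, late_slope, auc])

gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(features)
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(gmm_labels), np.bincount(km_labels))
```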

Relevance:

20.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
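
A toy illustration of the dataflow execution model described above: actors communicate only over FIFO queues and may fire once enough tokens are available, which is what makes the parallelism explicit. The actor below and the precomputed firing loop are illustrative sketches in Python, not RVC-CAL code or the scheduling strategies of the thesis:

```python
# Toy dataflow actor with a FIFO-based firing rule (illustrative, not RVC-CAL).
from collections import deque

class Actor:
    def __init__(self, name, inputs, outputs, needed, fn):
        self.name, self.inputs, self.outputs, self.needed, self.fn = \
            name, inputs, outputs, needed, fn

    def can_fire(self):
        # Firing rule: every input queue must hold at least `needed` tokens.
        return all(len(q) >= self.needed for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs for _ in range(self.needed)]
        for q in self.outputs:
            q.append(self.fn(tokens))

src_q, out_q = deque(), deque()
double = Actor("double", [src_q], [out_q], 1, lambda toks: 2 * toks[0])

# A trivial precomputed firing sequence guarded by a single run-time check,
# standing in for the idea of a quasi-static schedule.
src_q.extend([1, 2, 3])
while double.can_fire():
    double.fire()
print(list(out_q))   # [2, 4, 6]
```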