65 results for Computerized Dynamic Posturography
Abstract:
This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European Debt Crisis of 2010. The seven test economies are all part of the European Monetary Union but are in different economic states. The time-varying hedge ratio was constructed from conditional variances and correlations estimated with multivariate GARCH models. Three different underlying portfolios were used: national equity markets, government bond markets, and a combination of the two. These portfolios were hedged with credit default swaps. The empirical part includes in-sample and out-of-sample analyses constructed with both constant and dynamic models. In almost every case, the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable hedging instruments and act more as speculative instruments.
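For context, the minimum-variance hedge ratio that such conditional-moment models deliver is usually written in the standard textbook form below; this is a generic expression, not a formula quoted from the thesis.

```latex
h^{*}_{t} \;=\; \frac{\operatorname{Cov}_{t-1}\!\left(r_{p,t},\, r_{cds,t}\right)}{\operatorname{Var}_{t-1}\!\left(r_{cds,t}\right)}
        \;=\; \rho_{t}\,\frac{\sigma_{p,t}}{\sigma_{cds,t}}
```

Here $r_{p,t}$ is the return on the underlying equity or bond portfolio, $r_{cds,t}$ the return on the credit default swap position, and the conditional variances, $\sigma_{p,t}^2$ and $\sigma_{cds,t}^2$, and correlation $\rho_t$ come from the multivariate GARCH (e.g. DCC) estimates.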
Abstract:
This master's thesis examines the relationship between dynamic capabilities and operational-level innovations. In addition, measures for the concept of dynamic capabilities are developed. The study was carried out in the magazine publishing industry, which is considered favourable for examining dynamic capabilities since the sector is characterized by rapid change. As a basis for the study and the measure development, a literature review was conducted. Data for the empirical section were gathered by a survey targeted at the editors-in-chief of Finnish consumer magazines. The relationship between dynamic capabilities and innovation was examined using multiple linear regression. The results indicate that dynamic capabilities have an effect on the emergence of radical innovations. No effect of environmental dynamism on radical innovations was detected. Likewise, the effect of dynamic capabilities on innovation was not greater in a turbulent operating environment.
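As a rough sketch of one plausible form of such an analysis (the variable names, the synthetic data, and the interaction term for testing the moderating role of environmental dynamism are assumptions, not the thesis's actual survey measures), a moderated multiple linear regression could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data; each row is one magazine's response.
rng = np.random.default_rng(1)
n = 120
dc = rng.normal(size=n)       # hypothetical dynamic capabilities score
dyn = rng.normal(size=n)      # hypothetical environmental dynamism score
innov = 0.4 * dc + 0.1 * dyn + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({
    "radical_innovation": innov,
    "dynamic_capabilities": dc,
    "environmental_dynamism": dyn,
})

# Main effects plus an interaction term to test whether dynamic capabilities
# matter more in a turbulent (dynamic) operating environment.
model = smf.ols(
    "radical_innovation ~ dynamic_capabilities * environmental_dynamism",
    data=df,
).fit()
print(model.summary())
```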
Abstract:
The objective of this thesis is the development of a multibody dynamic model matching the observed movements of the lower limb of a skier performing the skating technique in cross-country skiing. During the construction of this model, the equations of motion were formulated using the Euler–Lagrange approach with multipliers applied to a three-dimensional multibody system. The lower limb of the skate skier and the ski were described using three bodies: one representing the ski and two representing the natural movements of the skier's leg. The resulting system has 13 joint constraints due to the interconnection of the bodies and four prescribed kinematic constraints to account for the movements of the leg, leaving one degree of freedom. The push-off force exerted by the skate skier was taken directly from measurements made on-site in the ski tunnel at the Vuokatti facilities (Finland) and was input into the model as a continuous function. The resulting velocities and motion of the ski, the skier's center of mass, and the variation of the skating angle were then studied to understand how the model responds to variations in important parameters of the skating technique. This allowed a comparison of the model results with the real movement of the skier. Further developments can be made to this model to bring the results closer to the real movement of the leg, for example by changing the constraints to include the behavior of the real leg joints and muscle actuation. As mentioned in the introduction of this thesis, a multibody dynamic model can be used to provide relevant information to ski designers and to obtain optimized values of the given variables, which athletes can use to improve their performance.
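For reference, the Euler–Lagrange (Lagrange multiplier) formulation for such constrained multibody systems is commonly written in the generic textbook form below; it is not copied from the thesis.

```latex
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{C}_{\mathbf{q}}^{\mathrm{T}}\,\boldsymbol{\lambda} = \mathbf{Q}\!\left(\mathbf{q},\dot{\mathbf{q}},t\right),
\qquad
\mathbf{C}(\mathbf{q},t) = \mathbf{0}
```

Here $\mathbf{q}$ are the generalized coordinates, $\mathbf{M}$ the mass matrix, $\mathbf{C}$ the vector collecting the 13 joint and 4 prescribed kinematic constraints, $\mathbf{C}_{\mathbf{q}}$ the constraint Jacobian, $\boldsymbol{\lambda}$ the Lagrange multipliers, and $\mathbf{Q}$ the applied forces such as the measured push-off force. With three spatial bodies (18 coordinates) and 17 constraint equations, one degree of freedom remains, as stated above.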
Abstract:
Family businesses are among the longest-lived and most prevalent institutions in the world, and they are an important source of economic development and growth. Ownership is central to the business life of the firm and is also a main element in the definition of a family business. There is only little research on portfolio entrepreneurship or portfolio businesses within the family business context. The absence of empirical evidence on the long-term relationship between family ownership and portfolio development presents an important gap in the family business literature. This study deals with ownership changes and the development of portfolios in family businesses and is positioned within the conversation on family business, growth, ownership, management, and strategy. The study contributes to and expands the existing body of theory on family business and ownership. From a theoretical point of view, it combines and integrates insights from the fields of portfolio entrepreneurship, ownership, and family business. This cross-fertilization produces interesting empirical and theoretical findings that can constitute a basis for solid contributions to the understanding of ownership dynamics and portfolio entrepreneurship in family firms. The research strategy chosen for this study represents longitudinal, qualitative, hermeneutic, and deductive approaches. The empirical part of the study uses a case study approach with an embedded design, that is, multiple levels of analysis within a single study. The study consists of two cases and begins with a pilot case, which forms a pre-understanding of the phenomenon. The pilot case develops the methodological approach to be built on in the main case, and the main case deepens the understanding of the phenomenon. The study develops and tests a research method for family business portfolio development, focusing on how ownership changes influence family business structures over time. It reveals the linkages between dimensions of ownership and how they give rise to portfolio business development within the context of the family business. The empirical results suggest that family business ownership is dynamic and that owners use ownership as a tool for creating business portfolios.
Abstract:
Preparation of optically active compounds is of high importance in modern medicinal chemistry. Despite recent advances in the field of asymmetric synthesis, resolution of racemates remains the most utilized way to prepare single enantiomers on an industrial scale due to its cost-efficiency and simplicity. Enzymatic kinetic resolution (KR) of racemates is a classical method for the separation of enantiomers. One of its drawbacks is that the yield of the target enantiomer is limited to 50%. Dynamic kinetic resolution (DKR) allows yields of up to 100% to be reached through in situ racemization of the less reactive enantiomer. In the first part of this thesis, a number of half-sandwich ruthenium complexes were prepared and evaluated as catalysts for the racemization of optically active secondary alcohols. A leading catalyst, Bn5CpRu(CO)2Cl, was identified. The discovered catalyst was extensively characterized through its application to the DKR of a broad range of secondary alcohols at a wide range of reaction scales (1 mmol – 1 mol). A cost-efficient, chromatography-free procedure for the preparation of this catalyst was developed. Further, detailed kinetic and mechanistic studies of the racemization reactions were performed. Comparison of racemization rates in the presence of the Bn5CpRu(CO)2Cl and Ph5CpRu(CO)2Cl catalysts reveals that the performance of the catalytic system can be adjusted by matching the electronic properties of the catalysts and the substrates. Moreover, a dependence of the rate-limiting step on the electronic properties of the reagents was observed. Important conclusions about the reaction mechanism were drawn. Finally, an alternative approach to the DKR of amines based on spatially separated vessels was addressed. This procedure allows the combination of a thermolabile enzyme with racemization catalysts that are active only at high temperatures.
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. In addition, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research revisits the theoretical background of multibody dynamic simulation and the finite element method to find suitable building blocks for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history of a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
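One common way to sketch the idea of modal stress recovery, under standard assumptions and not as the thesis's exact derivation, is the following:

```latex
\mathbf{u}_f(t) \approx \boldsymbol{\Phi}\,\mathbf{p}(t),
\qquad
\boldsymbol{\sigma}(\mathbf{x},t) \approx \sum_{i=1}^{m} \boldsymbol{\sigma}_i(\mathbf{x})\, p_i(t)
```

Here $\boldsymbol{\Phi}$ contains the $m$ retained deformation modes of the flexible body in the floating frame of reference, $\mathbf{p}(t)$ are the modal coordinates obtained from the dynamic simulation, and $\boldsymbol{\sigma}_i(\mathbf{x})$ are pre-computed modal stress fields, which can be evaluated, for example, in a refined sub-model of the welded detail. The stress history then follows from the modal coordinate histories without re-solving a full finite element model at every time step.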
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components in such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network raises issues related to congestion and system control, for instance. Additionally, faults can cause problems in multiprocessor systems. These can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To address the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
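As a very small toy sketch of the general idea of cluster-based monitoring used for task mapping (the mesh size, clustering scheme, and load metric are illustrative assumptions; this is not the thesis's monitoring architecture or its SystemC environment):

```python
import random

MESH = 8        # 8x8 mesh of processing cores (assumed for the sketch)
CLUSTER = 4     # each monitoring cluster covers a 4x4 region of cores

random.seed(0)
load = {(x, y): random.random() for x in range(MESH) for y in range(MESH)}

def cluster_of(core):
    x, y = core
    return (x // CLUSTER, y // CLUSTER)

def cluster_load(cid):
    members = [c for c in load if cluster_of(c) == cid]
    return sum(load[c] for c in members) / len(members)

def map_task():
    # A cluster-level monitor aggregates only its own members' load; a new task
    # is steered first to the least loaded cluster, then to its least loaded core,
    # so no single centralized controller needs a global view.
    clusters = {cluster_of(c) for c in load}
    target_cluster = min(clusters, key=cluster_load)
    target_core = min((c for c in load if cluster_of(c) == target_cluster), key=load.get)
    load[target_core] += 0.1   # book-keep the newly mapped task
    return target_core

print(map_task())
```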
Abstract:
Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific for tumours and can also accumulate in inflammatory lesions as well as in normal, physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory, and normal tissues are known to have different pathways for glucose metabolism, which could be evident as different characteristics of the time-activity curves obtained from a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammatory, tumour, and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time-activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian mixture model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. We also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If further explored, the proposed methods could help reduce false-positive tumour detections and lead to real-world applications for tumour diagnosis and contouring.
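As a rough sketch of that kind of voxel-wise analysis (synthetic data and hypothetical feature choices, not the thesis's actual features), one might cluster simple time-activity-curve descriptors like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for dynamic PET data: one time-activity curve (TAC) per voxel.
rng = np.random.default_rng(0)
frame_times = np.linspace(0.5, 40.0, 20)                 # mid-frame times in minutes
tac = rng.gamma(shape=2.0, scale=1.0, size=(500, 20))    # 500 voxels x 20 frames

# Hypothetical dynamic features per voxel: area under the curve, overall slope,
# and time to peak activity.
auc = np.trapz(tac, frame_times, axis=1)
slope = np.polyfit(frame_times, tac.T, 1)[0]
time_to_peak = frame_times[np.argmax(tac, axis=1)]
features = np.column_stack([auc, slope, time_to_peak])

# Group voxels into metabolically different clusters with GMM and K-means.
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(features)
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(gmm_labels), np.bincount(km_labels))
```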
Abstract:
The significance of services as business and human activities has increased dramatically throughout the world in the last three decades. Becoming an increasingly competitive and efficient service provider while still being able to provide unique value opportunities for customers requires new knowledge and ideas. Part of this knowledge is created and utilized in daily activities in every service organization, but not all of it, and therefore an emerging phenomenon in the service context is information awareness. Terms like big data and the Internet of Things are not only modern buzzwords; they also describe urgent requirements for new types of competences and solutions. As the amount of information increases and the systems processing it become more efficient and intelligent, it is human understanding and objectives that may become separated from the automated processes and technological innovations. This is an important challenge and the core driver for this dissertation: what kind of information is created, possessed, and utilized in the service context, and even more importantly, what information exists but is not acknowledged or used? In this dissertation the focus is on the relationship between service design and service operations. Reframing this relationship refers to viewing the service system from an architectural perspective. The selected perspective allows analysing the relationship between design activities and operational activities as an information system while maintaining a tight connection to existing service research contributions and approaches. This innovative approach is supported by a research methodology that relies on design science theory. The methodological process supports the construction of a new design artifact based on existing theoretical knowledge, the creation of new innovations, and the testing of the design artifact's components in real service contexts. The relationship between design and operations is analysed in health care and social care service systems. Existing contributions in service research tend to abstract services and service systems as value-creation, working, or interactive systems. This dissertation adds an important information processing system perspective to the research. The main contribution focuses on the following argument: only part of the service information system is automated and computerized, whereas a significant part of information processing is embedded in human activities, communication, and ad hoc reactions. The results indicate that the relationship between service design and service operations is more complex and dynamic than existing scientific and managerial models tend to suggest. Both activities create, utilize, mix, and share information, making service information management a necessary but relatively unknown managerial task. On the architectural level, service-system-specific elements seem to disappear, but access to more general information elements and processes can be found. While this dissertation focuses on conceptual-level design artifact construction, the results also provide very practical implications for service providers. Personal, visual, and hidden service activities, and more importantly all changes that take place in any service system, also have an information dimension. Making this information dimension visible and prioritizing the processed information based on service dimensions is likely to provide new opportunities to enhance activities and to offer a new type of service potential for customers.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
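To make the dataflow model of computation described above concrete, here is a minimal, purely illustrative sketch of actors connected by FIFO queues that fire only when their firing rule is satisfied, together with a simple dynamic scheduler; it is not RVC-CAL and does not reproduce the thesis's tooling.

```python
from collections import deque

class Actor:
    def __init__(self, name, inputs, outputs, needed, fire):
        self.name = name
        self.inputs = inputs      # list of input FIFO queues
        self.outputs = outputs    # list of output FIFO queues
        self.needed = needed      # tokens required per input to fire
        self.fire = fire          # function: consumed tokens -> produced tokens

    def can_fire(self):
        # Firing rule: enough tokens on every input queue.
        return all(len(q) >= self.needed for q in self.inputs)

    def step(self):
        tokens = [q.popleft() for q in self.inputs for _ in range(self.needed)]
        for q, t in zip(self.outputs, self.fire(tokens)):
            q.append(t)

# A two-actor pipeline: scale each sample, then keep a running sum.
source = deque([1, 2, 3, 4])
a_to_b = deque()
b_out = deque()

scale = Actor("scale", [source], [a_to_b], 1, lambda t: [2 * t[0]])

total = [0]
def accumulate(t):
    total[0] += t[0]
    return [total[0]]
acc = Actor("acc", [a_to_b], [b_out], 1, accumulate)

# A simple dynamic scheduler: repeatedly fire any actor whose firing rule holds.
actors = [scale, acc]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.step()

print(list(b_out))   # running sums of the doubled input stream: [2, 6, 12, 20]
```

A quasi-static scheduler, in contrast, would pre-compute fixed firing sequences for regions of the graph and keep only a few run-time decisions, which is the kind of reduction the abstract describes.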