79 results for Dynamic Adjustment
Abstract:
Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
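As a rough illustration of the dynamically clustered, distributed monitoring idea described above, the following Python sketch models a small mesh of routers whose congestion is aggregated per cluster by local monitors rather than by a central controller. It is not the thesis's SystemC environment; all class names and parameters are hypothetical.

```python
# Minimal sketch (hypothetical, not the thesis's SystemC environment): a mesh NoC where
# each router reports its buffer occupancy to a local cluster monitor, so congestion is
# observed per cluster without any centralized controller.
import random

WIDTH, HEIGHT, CLUSTER = 4, 4, 2   # 4x4 mesh, 2x2 monitoring clusters (assumed sizes)

class Router:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.occupancy = 0            # packets currently buffered

    def cluster_id(self):
        return (self.x // CLUSTER, self.y // CLUSTER)

class ClusterMonitor:
    """Aggregates congestion inside one cluster; no global controller needed."""
    def __init__(self):
        self.routers = []

    def average_load(self):
        return sum(r.occupancy for r in self.routers) / len(self.routers)

routers = {(x, y): Router(x, y) for x in range(WIDTH) for y in range(HEIGHT)}
monitors = {}
for r in routers.values():
    monitors.setdefault(r.cluster_id(), ClusterMonitor()).routers.append(r)

# Inject some random traffic and let each monitor report cluster-level congestion.
for _ in range(50):
    random.choice(list(routers.values())).occupancy += 1
for cid, mon in sorted(monitors.items()):
    print(f"cluster {cid}: average buffer occupancy {mon.average_load():.1f}")
```

In the same spirit, a transaction-level simulator would attach such monitors to routers and let routing or task-mapping decisions consult only the local cluster's view.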
Abstract:
People are one of the most important resources of a corporation, which therefore has to continuously seek an ever more diverse and international workforce. Inpatriation is another way of utilizing foreign expertise in a corporation. An inpatriate is a person on an international assignment at the headquarters of a corporation, sent there either from a subsidiary abroad or from a third country outside the corporation. Strengthening the social network of the inpatriate and their family contributes to the adjustment process and, furthermore, to the success of the work assignment. As social networking sites are currently the fastest developing personal networking tools in the world, it is interesting to see how they can help in inpatriate adjustment. The objective of this thesis is to explore the potential of social networking sites (SNS) in inpatriate adjustment. The main objective can be divided into three sub-objectives: 1. What are SNS used for during the inpatriate assignment? 2. What are the inpatriates’ motivations to use SNS? 3. Could the three facets of adjustment (work, interaction and general) be gained through SNS? This qualitative study utilizes the theme interview data collection method and the thematic analysis approach for analysing the interview data. From the interviews with five Indian inpatriates in Finland, the most mentioned uses of SNS were related to participating (sharing opinions and recommendations, discussing things, and connecting to friends, family and colleagues) and consuming (collecting information for work and free time); the least mentioned use of SNS was producing (posting videos, photos and updates). An interesting finding was that the five interviewees did not use SNS for purely entertainment motives at all during their assignment. This thesis found that all three facets of adjustment could potentially be gained through SNS.
Abstract:
To manage foreign operations, companies must often send their employees on international assignments. Repatriating these expatriates can be difficult because they have been forgotten during their posting and their new experiences are not utilised. In addition to the possible difficulties in organisational repatriation, the returnee can suffer from readjustment problems after a lengthy stay abroad has changed their habits and even identity. This thesis examines the repatriation experience of Finnish assignees returning from Russia. The purpose of the study is to understand how the repatriation experience influences their readjustment to work in Finland. This experience is influenced by many factors, including personal and situational changes, the repatriation process, job and organisational factors, and the individual’s motives. The theoretical background of the study is founded on two models of repatriation adjustment. A refined, holistic theoretical framework for the study is created. It describes the formation of the repatriation experience and its importance for readjustment to work and retention. The qualitative research approach is suitable for the thesis, which examines the returnees’ personal experiences and feelings: a qualitative case study aims to explain the phenomenon in depth and comprehensively. The data was collected in summer 2013 through semi-standardised interviews with eight Finnish repatriates who had returned from Russia within the last two years. The data was analysed by structuring the interview transcripts using template analysis. The results supported earlier literature and suggest that the re-entry remains a challenging phase for both the individual and the company. For some, adjusting to a new job was difficult for various reasons. The repatriates underwent personal change and development and felt it was for the better. Many repatriates criticised the company’s repatriation process upon return. Finding a suitable return job was not straightforward; instead, the returnees had to be active in finding a new position. Many assignees had only modest career-related motives regarding the assignment and they had realistic expectations about the return. Therefore they were not particularly surprised or dissatisfied when they were not actively offered positions or support by the company. The significance of motives stood out even more than the theory predicted. As predicted, motives are linked to the expectations of employees. Moreover, if the employees are motivated to remain in the company, they can partly tolerate a negative repatriation experience. Despite the complexity of the return and readjustment, the assignment as a whole was seen as a rewarding experience by all participants.
Abstract:
Wastes and side streams in the mining industry and different anthropogenic wastes often contain valuable metals in such concentrations that their recovery may be economically viable. These raw materials are collectively called secondary raw materials. The recovery of metals from these materials is also environmentally favorable, since many of the metals, for example heavy metals, are hazardous to the environment. This has been noticed by legislative bodies, and strict regulations for handling both mining and anthropogenic wastes have been developed, mainly in the last decade. In the mining and metallurgy industry, important secondary raw materials include, for example, steelmaking dusts (recoverable metals e.g. Zn and Mo), zinc plant residues (Ag, Au, Ga, Ge, In) and waste slurry from Bayer process alumina production (Ga, REE, Ti, V). Among anthropogenic wastes, waste electrical and electronic equipment (WEEE), including LCD screens and fluorescent lamps, is clearly the most important from a metals recovery point of view. Metals that are commonly recovered from WEEE include, for example, Ag, Au, Cu, Pd and Pt. In LCD screens indium, and in fluorescent lamps REEs, are possible target metals. Hydrometallurgical processing routes are highly suitable for the treatment of complex and/or low-grade raw materials, as secondary raw materials often are. These solid or liquid raw materials often contain large amounts of base metals, for example. Thus, in order to recover the valuable metals, which are present in small concentrations, highly selective separation methods, such as hydrometallurgical routes, are needed. In addition, hydrometallurgical processes are seen as more environmentally friendly and have lower energy consumption compared to pyrometallurgical processes. In this thesis, solvent extraction and ion exchange are the most important hydrometallurgical separation methods studied. Solvent extraction is a mainstream unit operation in the metallurgical industry for all kinds of metals, but for ion exchange, practical applications are not as widespread. However, ion exchange is known to be particularly suitable for dilute feed solutions and complex separation tasks, which makes it a viable option, especially for processing secondary raw materials. The recovery of valuable metals was studied with five different raw materials, which included liquid and solid side streams from metallurgical industries and WEEE. Recovery of high-purity (99.7%) In from LCD screens was achieved by leaching with H2SO4, extracting In and Sn into D2EHPA, and selectively stripping In into HCl. In was also concentrated in the solvent extraction stage from 44 mg/L to 6.5 g/L. Ge was recovered as a side product from two different base metal process liquors with an N-methylglucamine functional chelating ion exchange resin (IRA-743). Based on equilibrium and dynamic modeling, a mechanism for this moderately complex adsorption process was suggested. Eu and Y were leached with high yields (91 and 83%) by 2 M H2SO4 from a fluorescent lamp precipitate from a waste treatment plant. The waste also contained significant amounts of other REEs such as Gd and Tb, but these were not leached with common mineral acids in ambient conditions. Zn was selectively leached over Fe from steelmaking dusts with a controlled acidic leaching method, in which the pH did not drop below 3 but was held as close to it as possible. Mo was also present in the other studied dust and was leached with pure water more effectively than with the acidic methods.
Good yield and selectivity in the solvent extraction of Zn were achieved with D2EHPA. However, Fe needs to be eliminated in advance, either by the controlled leaching method or, for example, by precipitation. A 100% pure Mo/Cr product was achieved with a quaternary ammonium salt (Aliquat 336) directly from the water leachate, without pH adjustment (pH 13.7). A Mo/Cr mixture was also obtained from H2SO4 leachates with the hydroxyoxime LIX 84-I and trioctylamine (TOA), but the purities were 70% at most. However, with Aliquat 336, an over 99% pure mixture was again obtained. High selectivity for Mo over Cr was not achieved with any of the studied reagents. An Ag-NaCl solution was purified from divalent impurity metals with the aminomethylphosphonic acid functional Lewatit TP-260 ion exchange resin. A novel preconditioning method, named controlled partial neutralization, using conjugate bases of weak organic acids, was used to control the pH in the column to avoid capacity losses or precipitation. Counter-current SMB operation was shown to be a better process configuration than either batch column operation or the cross-current operation conventionally used in the metallurgical industry. The raw materials used in this thesis were also evaluated from an economic point of view, and the precipitate from a waste fluorescent lamp treatment process was clearly shown to be the most promising.
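To make the reported enrichment of indium concrete, the following Python sketch works through a simple solvent extraction mass balance. Only the 44 mg/L feed concentration comes from the abstract; the flow rates and recovery are assumptions chosen for illustration.

```python
# Illustrative sketch only (all numbers except the reported feed of 44 mg/L In are
# assumptions, not the thesis's data): a solvent extraction mass balance showing how a
# metal is concentrated from a large aqueous feed into a much smaller organic volume.
feed_conc_mg_L = 44.0        # In in the aqueous feed (reported in the abstract)
aqueous_flow_L_h = 100.0     # assumed aqueous flow
organic_flow_L_h = 0.68      # assumed organic flow (high aqueous/organic phase ratio)
recovery = 0.99              # assumed overall extraction efficiency

metal_in_mg_h = feed_conc_mg_L * aqueous_flow_L_h
metal_to_organic_mg_h = recovery * metal_in_mg_h
loaded_organic_mg_L = metal_to_organic_mg_h / organic_flow_L_h

print(f"loaded organic: {loaded_organic_mg_L / 1000:.1f} g/L")  # ~6.4 g/L with these assumptions
```

With these assumed flows the loaded organic ends up in the g/L range, which is the mechanism behind a concentration step from 44 mg/L to several g/L: the extracted metal is transferred into a far smaller organic volume before stripping.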
Abstract:
Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific to tumours and can also accumulate in inflammatory lesions as well as normal physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time activity curves in a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region which could differentiate between inflammation, tumour and normal tissues using this functional information within the radiotracer uptake areas. We developed different dynamic features from the time activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian Mixture Model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If these proposed methods are explored further, they could prove useful in reducing false positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
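The clustering step described above can be illustrated with a short Python sketch: synthetic time activity curves are reduced to two simple dynamic features and grouped with both K-means and a Gaussian mixture model. The features, data and parameters are hypothetical and only stand in for the methods developed in the thesis.

```python
# Minimal sketch (synthetic data, hypothetical features; not the thesis's pipeline):
# cluster voxel time-activity curves (TACs) from a dynamic PET scan using simple
# dynamic features, with both K-means and a Gaussian mixture model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(1, 60, 30)                      # assumed frame mid-times (minutes)
# Synthetic TACs: "tumour-like" voxels keep accumulating, "inflammation-like" plateau.
tumour = np.outer(rng.uniform(0.8, 1.2, 200), 0.05 * t)
inflam = np.outer(rng.uniform(0.8, 1.2, 200), 2.0 * (1 - np.exp(-t / 10)))
tacs = np.vstack([tumour, inflam]) + rng.normal(0, 0.05, (400, t.size))

# Two illustrative dynamic features per voxel: late-frame slope and time-to-peak.
slope = np.polyfit(t[-10:], tacs[:, -10:].T, 1)[0]
time_to_peak = t[np.argmax(tacs, axis=1)]
features = np.column_stack([slope, time_to_peak])

print("K-means cluster sizes:", np.bincount(KMeans(n_clusters=2, n_init=10).fit_predict(features)))
print("GMM cluster sizes:    ", np.bincount(GaussianMixture(n_components=2).fit_predict(features)))
```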
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that can produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
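A toy example helps make the firing-rule and scheduling vocabulary concrete. The Python sketch below (illustrative only; it is not RVC-CAL and not the thesis's compiler infrastructure) builds a small actor pipeline in which the only communication is through FIFO queues and an actor fires only when its firing rule is satisfied; the naive scan loop at the end is exactly the dynamic scheduling overhead that quasi-static scheduling tries to replace with pre-computed schedules.

```python
# Minimal sketch (illustrative only; not RVC-CAL or the thesis's tooling): a tiny
# dataflow network where actors communicate solely through FIFO queues and an actor
# may fire only when its firing rule (enough input tokens) is satisfied.
from collections import deque

class Actor:
    def __init__(self, name, needed, action):
        self.name, self.needed, self.action = name, needed, action
        self.inputs, self.outputs = [], []

    def can_fire(self):
        # Firing rule: every input queue holds at least `needed` tokens.
        return all(len(q) >= self.needed for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs for _ in range(self.needed)]
        for q in self.outputs:
            q.append(self.action(tokens))

# source -> scale, connected by FIFOs (the only allowed communication).
q1, q2 = deque(), deque()
source = Actor("source", 0, lambda _: 1)
scale  = Actor("scale",  1, lambda toks: 2 * toks[0])
source.outputs, scale.inputs, scale.outputs = [q1], [q1], [q2]

# Dynamic scheduler: repeatedly scan for fireable actors. A quasi-static scheduler
# would replace this scan with pre-computed firing sequences plus a few run-time checks.
for _ in range(3):
    for actor in (source, scale):
        if actor.can_fire():
            actor.fire()
print("tokens produced:", list(q2))
```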
Abstract:
Scanning optics create different types of phenomena and limitations in the cladding process compared to cladding with static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics change the energy input mechanism of the cladding process. Laser energy is introduced into the process through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was noticed to cause dynamic movement in the melt pool. Due to the different energy input mechanism, scanner optics can make the cladding process unstable if parameter selection is not done carefully. Especially the laser beam intensity and the scanning frequency have a significant role in process stability. The laser beam scanning frequency determines how long the laser beam interacts with a specific place, and thus the local specific energy input. It was determined that if the scanning frequency is too low, under 40 Hz, the scanned beam can start to vaporize material. The intensity, in turn, determines in how large a package this energy is delivered; if the intensity of the laser beam was too high, over 191 kW/cm2, the laser beam started to vaporize material. If vapor formation was noticed in the melt pool, the process started to resemble laser alloying due to deep penetration of the laser beam into the substrate. Scanner optics enable more flexibility in the process than static optics. The numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. In turn, scanner power modulation (where the laser power is adjusted according to where the scanner is pointing) enables modification of the clad bead cross-section geometry, as the laser power can be adjusted locally and thus affects how much material the laser beam melts in each sector. Power modulation is also an important factor in terms of process stability. When a linear scanner is used, oscillation of the scanning mirror causes a dwell time in the scanning amplitude border area, where the scanning mirror changes its direction of movement. This can cause excessive energy input to this area, which in turn can cause vaporization and process instability. This process instability can be avoided by decreasing the energy in this region by power modulation. Powder feeding parameters have a significant role in terms of process stability. It was determined that with certain powder feeding parameter combinations the powder cloud behavior became unstable due to vaporizing powder material in the powder cloud. This was mainly noticed when the scanning frequency and/or the powder feeding gas flow was low, or when a steep powder feeding angle was used. When powder material vaporization occurred, it created a vapor flow which prevented the powder material from reaching the melt pool, and thus dilution increased. Powder material vaporization was also noticed to produce light emission in the visible wavelength range. The intensity of this emission was noticed to correlate with the amount of vaporization in the powder cloud.
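The two stability-critical quantities named above, the spot intensity and the interaction time set by the scanning frequency, can be estimated with simple back-of-the-envelope formulas. In the Python sketch below, only the 40 Hz and 191 kW/cm2 limits come from the abstract; every other value is an assumption chosen for illustration.

```python
# Illustrative back-of-the-envelope sketch (all input values are assumptions, not the
# thesis's parameters) of the two quantities the abstract identifies as critical:
# laser spot intensity and the per-pass interaction time set by the scanning frequency.
import math

laser_power_W  = 4000.0    # assumed laser power
spot_radius_cm = 0.1       # assumed spot radius (2 mm beam diameter)
spot_diam_mm   = 2.0
scan_freq_Hz   = 40.0      # lower stability limit reported in the abstract
scan_amp_mm    = 10.0      # assumed scanning amplitude

intensity_kW_cm2 = laser_power_W / (math.pi * spot_radius_cm**2) / 1000.0
# One back-and-forth sweep covers 2 * amplitude; the spot dwells on a given point
# roughly for the time it takes to travel its own diameter.
sweep_speed_mm_s = 2 * scan_amp_mm * scan_freq_Hz
dwell_ms = spot_diam_mm / sweep_speed_mm_s * 1000.0

print(f"spot intensity ~{intensity_kW_cm2:.0f} kW/cm2 (reported vaporization limit 191 kW/cm2)")
print(f"per-pass interaction time ~{dwell_ms:.2f} ms at {scan_freq_Hz:.0f} Hz")
```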
Abstract:
Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology coupled with continuing advances in design tools and techniques have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances, which lead to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, Finite Element Analysis, and Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, in view of getting a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how the binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure to be used as the structure for the stator of an outer rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines is studied.
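At the core of any such modal model, whether assembled from finite elements or fitted to experimental data, is the generalized eigenvalue problem K x = w^2 M x. The Python sketch below solves it for a tiny lumped-parameter stand-in for a layered element; the matrices, masses and stiffnesses are hypothetical and not taken from the thesis.

```python
# Minimal sketch (illustrative lumped-parameter model, not the thesis's FE model): a
# modal model is obtained from the generalized eigenvalue problem K x = w^2 M x, the
# same step a finite element modal analysis performs on much larger matrices.
import numpy as np
from scipy.linalg import eigh

m = 1.0            # kg, assumed lumped mass per layer
k = 1.0e6          # N/m, assumed interlayer stiffness (depends on the binding method)
M = np.diag([m, m, m])
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)

eigvals, eigvecs = eigh(K, M)                 # w^2 and mass-normalized mode shapes
freqs_hz = np.sqrt(eigvals) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
```

Changing the assumed interlayer stiffness k is a crude way to see how a stiffer or looser binding method shifts the natural frequencies relative to a solid plate.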
Abstract:
Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use, because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model that was developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections were calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and osculation number, influence ultimate bearing performance. Distributed defects, such as the waviness of the inner and outer ring, and localized defects, such as inner and outer ring defects, are taken into consideration in the proposed model. Simulation results were verified against results obtained by applying the formula for spherical roller bearing radial deflection and against commercial bearing analysis software. Following model verification, a numerical simulation was carried out successfully for a full rotor-bearing system to demonstrate the application of this newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured to predicted behaviors for equivalent systems.
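The nonlinear Hertzian contact idea behind such bearing models can be sketched in a few lines: each compressed roller contributes a force proportional to its contact deformation raised to a roller-contact exponent, and the contributions are summed around the circumference. The Python example below is illustrative only; the number of rollers, stiffness coefficient and clearance are assumptions, not the dissertation's parameters.

```python
# Minimal sketch (illustrative, not the dissertation's three-degree-of-freedom model):
# nonlinear Hertzian contact forces summed over the rollers of one bearing row for a
# given shaft displacement. All numerical values are assumed.
import math

n_rollers = 20           # assumed number of rollers in the row
k_contact = 1.0e9        # assumed load-deflection coefficient
clearance = 20e-6        # m, assumed diametral clearance
exponent  = 10.0 / 9.0   # Hertzian line-contact exponent commonly used for rollers

def bearing_force(dx, dy):
    """Resultant bearing force for a shaft centre displacement (dx, dy) in metres."""
    fx = fy = 0.0
    for i in range(n_rollers):
        phi = 2 * math.pi * i / n_rollers                    # roller angular position
        delta = dx * math.cos(phi) + dy * math.sin(phi) - clearance / 2
        if delta > 0:                                        # only compressed rollers carry load
            f = k_contact * delta ** exponent
            fx += f * math.cos(phi)
            fy += f * math.sin(phi)
    return fx, fy

print("force for 30 um vertical displacement:", bearing_force(0.0, 30e-6))
```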
Abstract:
In the literature, CO2 liquefaction is well studied with steady-state modeling. Steady-state modeling gives an overview of the process, but it does not give information about process behavior during transients. In this master’s thesis, three dynamic models of CO2 liquefaction were made and tested. The models were a straight multi-stage compression model and two compression and liquid pumping models, one with and one without cold energy recovery. The models were made with the Apros software and were also used to verify that Apros is capable of modeling phase changes and the supercritical state of CO2. The models were verified against compressor manufacturer data and simulation results presented in the literature. Of the models made in this thesis, the straight compression model was found to be the most energy efficient and the fastest to react to transients. Apros was also found to be a capable tool for dynamic liquefaction modeling.
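For orientation, the ideal specific-work expressions that underlie comparisons between gas compression and liquid pumping routes can be evaluated in a few lines of Python. All numbers below are generic textbook-style assumptions, not results from the thesis's Apros models, and intercooling and condensation duties are deliberately ignored.

```python
# Rough illustrative comparison (all values assumed; not the thesis's Apros results) of
# ideal specific work for intercooled multi-stage gas compression versus liquid pumping
# of CO2 over the same pressure rise.
R, M_CO2 = 8.314, 0.044                 # J/(mol K), kg/mol
T_in, gamma, eta = 300.0, 1.28, 0.8     # assumed inlet temperature, heat capacity ratio, efficiency
p_in, p_out, stages = 1.0e5, 70.0e5, 3  # Pa, assumed pressures and number of stages

# Equal pressure ratio per stage, gas recooled to T_in between stages.
ratio = (p_out / p_in) ** (1 / stages)
w_stage = (gamma / (gamma - 1)) * (R / M_CO2) * T_in * (ratio ** ((gamma - 1) / gamma) - 1)
w_compression = stages * w_stage / eta

# Pumping an (assumed incompressible) liquid over the same pressure rise: w = dp / rho.
rho_liquid = 1000.0                     # kg/m3, assumed liquid CO2 density near the pump
w_pump = (p_out - p_in) / rho_liquid / eta

print(f"compression work ~{w_compression/1000:.0f} kJ/kg, pump work ~{w_pump/1000:.1f} kJ/kg")
```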
Abstract:
Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master’s thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis the direct market is depicted by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is depicted by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen’s cointegration tests and the short-run interrelationships are examined with Granger-causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period. This indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland is not mature enough to be considered a separate market from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
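A minimal Python sketch of the two core tests mentioned above is shown below, using statsmodels on synthetic series that share a stochastic trend. The data and lag choices are hypothetical; the thesis's actual indices and model specifications are not reproduced here.

```python
# Minimal sketch (synthetic data; not the thesis's Finnish market indices) of the two
# core tests used in the abstract: Johansen's cointegration test for long-run
# co-movement and a Granger-causality test for short-run lead-lag relationships.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 300
common_trend = np.cumsum(rng.normal(size=n))                  # shared stochastic trend
stock = common_trend + rng.normal(scale=0.5, size=n)          # "stock index"
reit  = 0.8 * common_trend + rng.normal(scale=0.5, size=n)    # "securitized real estate"
data = np.column_stack([stock, reit])

joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("Johansen trace statistics:", np.round(joh.lr1, 2))
print("95% critical values:      ", np.round(joh.cvt[:, 1], 2))

# Does the stock series Granger-cause the securitized real estate series (in differences)?
diffs = np.diff(data, axis=0)
grangercausalitytests(diffs[:, [1, 0]], maxlag=2)
```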
Abstract:
The purpose of this thesis is to find the optimal heat recovery solution for Wärtsilä’s dynamic district heating power plant, considering the German energy markets, as the German government pays subsidies for CHP plants in order to increase their share of domestic power production to 25% by 2020. Dozens of different heat recovery connections have been simulated to determine the most efficient ones. The purpose is also to study the feasibility of the different heat recovery connections in a dynamic district heating power plant in the German markets, taking into consideration the day-ahead electricity prices, district heating network temperatures and CHP subsidies. The auxiliary cooling, dynamic operation and cost efficiency of the power plant are also investigated.
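The kind of feasibility assessment described above ultimately comes down to an hourly margin calculation against day-ahead electricity prices, heat sales and subsidies. The Python sketch below illustrates that calculation with entirely assumed prices, efficiencies and subsidy levels; none of the figures come from the thesis or from Wärtsilä.

```python
# Illustrative sketch only (all prices and plant parameters are assumptions): an hourly
# CHP dispatch decision against day-ahead electricity prices, a fixed CHP subsidy and
# fuel cost, of the kind a heat recovery feasibility study has to evaluate.
day_ahead_eur_mwh = [32, 28, 25, 30, 45, 60, 72, 65, 55, 48, 40, 35]  # assumed prices
chp_subsidy_eur_mwh = 15.0       # assumed subsidy per MWh of electricity
heat_price_eur_mwh  = 30.0       # assumed district heating price
fuel_cost_eur_mwh   = 40.0       # assumed fuel cost per MWh of fuel
el_eff, heat_eff    = 0.45, 0.40 # assumed electrical and heat recovery efficiencies

profit = 0.0
for price in day_ahead_eur_mwh:
    # Margin per MWh of fuel burned during this hour.
    margin = el_eff * (price + chp_subsidy_eur_mwh) + heat_eff * heat_price_eur_mwh - fuel_cost_eur_mwh
    if margin > 0:                 # run the engine only in profitable hours
        profit += margin * 10.0    # assumed 10 MWh of fuel per hour at full load
print(f"profit over the period: {profit:.0f} EUR")
```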