Abstract:
Planar, large-area, position-sensitive silicon detectors are widely used in high energy physics research and in medical computed tomography (CT). This thesis describes the author's research work on the development of such detector components. The key motivation and objective of the research has been the development of novel position-sensitive detectors that improve the performance of the instruments they are intended for. Silicon strip detectors are the key components of barrel-shaped tracking instruments, which are typically the innermost structures of high energy physics experimental stations. Particle colliders such as the former LEP collider or the present LHC produce particle collisions, and trackers based on silicon strip detectors locate the trajectories of the particles emanating from these collisions. Medical CT has become a regular part of everyday medical care in all developed countries. CT scanning enables X-ray imaging of all parts of the human body with outstanding structural resolution and contrast. Brain, chest and abdomen slice images with a resolution of 0.5 mm are possible, and the latest CT machines are able to image the whole human heart between heartbeats. The two application areas are presented briefly, and the radiation detection properties of planar silicon detectors are discussed. Fabrication methods and preamplifier electronics of the planar detectors are presented. The designs of the developed large-area silicon detectors are presented, and measurement results for the key operating parameters are discussed. The static and dynamic performance of the developed silicon strip detectors is shown to be very satisfactory for experimental physics applications. Results for the developed, novel CT detector chips are found to be very promising for further development, and all key performance goals are met.
Abstract:
The rapid, ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components of such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems: they can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered, distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
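The abstract does not show the simulator itself; as a rough illustration of the kind of transaction-level analysis it refers to (estimating per-link load for congestion studies), a minimal Python sketch of XY routing on a 2D mesh might look like the following. The routing choice and all names are assumptions, not the thesis's SystemC implementation:

```python
# Minimal transaction-level sketch: count how many packets cross each link
# of a 2D mesh NoC under XY routing, so congested links stand out.
from collections import Counter

def xy_route(src, dst):
    """Return the list of node coordinates visited from src to dst (XY routing)."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # route along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

def link_loads(transactions):
    """Accumulate traversal counts per directed link for a set of packets."""
    loads = Counter()
    for src, dst in transactions:
        hops = xy_route(src, dst)
        for a, b in zip(hops, hops[1:]):
            loads[(a, b)] += 1
    return loads

# Example: three packets on a 4x4 mesh
traffic = [((0, 0), (3, 2)), ((1, 0), (3, 1)), ((0, 0), (2, 0))]
for link, load in sorted(link_loads(traffic).items()):
    print(link, load)
```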
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model: the manufacturing and assembly tolerances of every part are made extremely tight so that the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement tightens. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for the error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
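The abstract names the identification methods but not their details. As a hedged sketch of the general idea, identifying kinematic parameter errors by differential evolution can be shown on a toy planar two-link arm; the 2R model, error bounds and SciPy usage below are stand-ins for the thesis's DH/POE error models of the actual hybrid robot:

```python
# Sketch: identify link-length errors of a toy planar 2R arm from "measured"
# end-effector positions by minimizing residuals with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

L_NOM = np.array([0.50, 0.40])          # nominal link lengths [m]
TRUE_ERR = np.array([0.003, -0.002])    # "real" errors to be identified

def fk(lengths, q):
    """Forward kinematics of a planar 2R arm for a batch of joint angles q."""
    x = lengths[0] * np.cos(q[:, 0]) + lengths[1] * np.cos(q[:, 0] + q[:, 1])
    y = lengths[0] * np.sin(q[:, 0]) + lengths[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
q_meas = rng.uniform(-np.pi / 2, np.pi / 2, size=(30, 2))   # measurement poses
p_meas = fk(L_NOM + TRUE_ERR, q_meas)                       # simulated measurements

def cost(err):
    """Sum of squared residuals between the error model and the measurements."""
    return np.sum((fk(L_NOM + err, q_meas) - p_meas) ** 2)

res = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=1, tol=1e-12)
print("identified errors [m]:", res.x)   # should approach TRUE_ERR
```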
Abstract:
This research comprises a survey on values related to entrepreneurship education and a participatory action research study on entrepreneurship education curricula in teacher education. The research problems, arising from practical development work, were addressed with several methods, following the principles of design-based research. Values related to entrepreneurship education were studied among teachers, headmasters, teacher educators, researchers and officers in the field of entrepreneurship education in 16 European Union countries. The fifteen most important values related to entrepreneurship education were listed based on two qualitative surveys (N = 124 and N = 66). Values were also surveyed among Finnish teacher trainees (N = 71). The results show that the values given by the teacher trainees did not differ much from those given by professionals already working in the field. Subsequently, the emergence of these values was studied in the documents that steer education; the values gathered in the surveys did not occur in these documents to a substantial degree. The development of entrepreneurship education curricula in teacher education was conducted by means of participatory action research. The development project gathered 55 teacher educators from 15 teacher education organisations in Finland. The starting point of the phenomenon-based project (see Annala and Mäkinen 2011) was the activity plan created for developing entrepreneurship education curricula. During the project, the learning of the teacher educators proceeded in a balanced way as brightening visions, stronger motivation, increasing understanding and new practices, following Shulman and Shulman's model (2004). The goals of the development project were that each teacher educator acquires basic knowledge of entrepreneurship education, that obligatory courses on entrepreneurship education are organized, and that entrepreneurship education becomes a cross-curricular theme in teacher education. The process increased the understanding and motivation of the teacher educators to develop and teach entrepreneurship education. It also facilitated collaboration as well as the creation of visions on entrepreneurship education. Based on the results, the concept of enterprisingness was defined, and recommendations were given for developing curricula in entrepreneurship education.
Abstract:
Ore sorting after crushing is an effective way to enhance the feed quality of a concentrator. Sorting by hand is the oldest way of concentrating minerals, but it has become outdated because of its low capacity. Older sorting methods have also been difficult to use in large-scale production due to the low capacities of the sorters; data transfer and processing and the speed of the rejection mechanisms have been the bottlenecks for the effective use of sorters. A fictitious chalcopyrite ore body was created for this thesis. The properties of the ore were typical of chalcopyrite ores, and an economic limit was set for the design. The concentrator capacity was determined by the size of the ore body and the planned mine life. Two concentrator scenarios were compared, one with a sorting facility and the other without sorting. The comparison covered the quality and amount of feed, the size of the equipment and the economics. The concentrator with sorting had lower investment and operating costs but also lower income due to the ore loss in sorting. Net cash flow, net present value and internal rate of return were calculated to compare the two scenarios.
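The abstract does not give the cash flow figures; as a minimal sketch of the kind of scenario comparison described, net present value (NPV) and internal rate of return (IRR) can be computed from yearly net cash flows as below. All numbers are made up for illustration:

```python
# Sketch of the two-scenario economic comparison via NPV and IRR.
# Cash flows are hypothetical: year 0 is the investment, later years are net income.
from scipy.optimize import brentq

def npv(rate, cash_flows):
    """Discount yearly cash flows to present value at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows):
    """Rate at which NPV is zero, found by a bracketing root search."""
    return brentq(lambda r: npv(r, cash_flows), -0.99, 10.0)

with_sorting = [-80e6] + [14e6] * 12      # lower investment, lower income
without_sorting = [-100e6] + [16e6] * 12  # higher investment, full feed

for name, cf in [("with sorting", with_sorting), ("without sorting", without_sorting)]:
    print(f"{name}: NPV@8% = {npv(0.08, cf) / 1e6:.1f} M, IRR = {irr(cf):.1%}")
```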
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications, from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires, and compatible selection platforms, to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen-binding fragment (Fab) display on filamentous phages in the third article, and in the fourth study novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (scFv), in order to find the most suitable display platform for the library at hand. As a result of the studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed. The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared to traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen-binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from the comparison of display formats was that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as scFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how of directed evolution and contains useful information to support scientists in the field in their undertakings.
Abstract:
This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting microchannels engraved on one plate, which is covered by another, plain plate. The second case is a tubular microreactor consisting of two subcases: the first is a multi-channel, two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor; the second is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick's law was used to describe the exchange of species through the gas-liquid interface in the microreactor; this model utilized data, namely the values of the interfacial area, obtained from the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented: an auxiliary simulation was carried out to determine the position and orientation of every packing element in the column, and this data was then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations. The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model made it possible to estimate the mass transfer coefficient. The modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube can be excluded from the larger model that describes the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation can successfully generate complex random packings, not only for the column but also for other similar cases.
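The abstract does not give the model equations; a standard space-averaged, Fick's-law-based form of the gas-liquid transfer it refers to would be the two-film expression below, where the interfacial area comes from the CFD model as stated:

```latex
% Space-averaged gas--liquid mass transfer rate per unit volume:
\[
  \dot{n}_i \;=\; k_L \, a \,\bigl(c_i^{*} - c_i\bigr),
\]
% where $k_L$ is the liquid-side mass transfer coefficient, $a$ the specific
% interfacial area (obtained from the CFD model), $c_i^{*}$ the equilibrium
% concentration at the interface, and $c_i$ the bulk concentration of species $i$.
```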
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over a network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD (create, retrieve, update and delete) interface of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about the service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so that they can be part of the semantic web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. For this purpose we use model checking techniques with the UPPAAL model checker. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The pre-conditions constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation, because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
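As a hedged sketch of what such a generated skeleton could look like (the thesis's actual generator and its output format are not shown in the abstract, and the resource, state names and hotel-booking scenario below are illustrative), pre- and post-conditions guarding a stateful REST resource might take this form:

```python
# Sketch of a generated code skeleton for a stateful REST resource:
# method bodies are left to the developer, while pre-conditions guard the
# invocation order and post-conditions check the implemented functionality.
class ContractError(Exception):
    pass

class RoomBooking:
    FREE, RESERVED, PAID = "FREE", "RESERVED", "PAID"

    def __init__(self):
        self.state = self.FREE

    def put_reservation(self):           # PUT /room/{id}/reservation
        if self.state != self.FREE:      # pre-condition: room must be free
            raise ContractError("pre: room not free")
        # --- developer fills in the actual functionality here ---
        self.state = self.RESERVED
        if self.state != self.RESERVED:  # post-condition: reservation recorded
            raise ContractError("post: reservation not recorded")

    def post_payment(self):              # POST /room/{id}/payment
        if self.state != self.RESERVED:  # pre-condition: must be reserved first
            raise ContractError("pre: nothing to pay for")
        # --- developer fills in the actual functionality here ---
        self.state = self.PAID
```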
Abstract:
This thesis studies the mechanical design and analysis of a modular test device supported by active magnetic bearings. The theory of high-speed rotor design is presented, along with several analytical methods for modeling mechanical loads. Since the machine in question is a high-speed electrical machine, rotordynamics and its applicability to the design are presented. The structure and operation of magnetic bearings are reviewed as part of this work, and a literature review of existing test devices, for example for identifying component characteristics and for rotordynamic studies, is presented. The scope of the work is limited to the concept design of a reconfigurable active magnetic bearing (AMB) test device and the documentation of the design process. Reconfigurability was chosen because it enables testing of different component layouts for various magnetic bearing assemblies and rotors. The main focus of this work is on the design and modeling of the rotor of a high-speed induction machine. The structure of modular actuators such as the magnetic bearings and the induction motor is presented, and the usability benefits of the modular structure in test device use are documented. Analytical and finite-element-based methods were used to study the designed high-speed rotor. The results of the design and analysis are presented and compared across the different modeling methods. In addition, conclusions are documented on the complexity and requirements of attaching the electromagnetic parts to the rotor and actuators in order to optimize both the mechanical and the electromagnetic properties.
Virtual Testing of Active Magnetic Bearing Systems based on Design Guidelines given by the Standards
Abstract:
Active magnetic bearings offer many advantages that have brought new applications to industry. However, as with all new technology, active magnetic bearings also have downsides, and one of those is the low level of standardization. This thesis mainly studies the ISO 14839 standard and, more specifically, its system verification methods. These verification methods are applied in a practical test with an existing active magnetic bearing system. The system is simulated in Matlab using a rotor-bearing dynamics toolbox, but this study does not include the exact simulation code or a direct algebraic calculation. Nevertheless, this study demonstrates that standardized simulation methods can be applied to practical problems.
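One typical ISO 14839-style verification step is classifying the peak of the closed-loop sensitivity function into acceptance zones. The sketch below illustrates the idea in Python; the zone limits used are the commonly cited ones for ISO 14839-3 and are an assumption here, to be checked against the standard itself:

```python
# Sketch: classify the peak sensitivity of an AMB system into zones A-D.
# Zone limits below are assumed (commonly cited for ISO 14839-3).
import numpy as np

def sensitivity_zone(sensitivity_mag):
    """Return the peak sensitivity in dB and its zone classification."""
    peak_db = 20 * np.log10(np.max(sensitivity_mag))
    for limit_db, zone in [(9.5, "A"), (12.0, "B"), (14.0, "C")]:
        if peak_db <= limit_db:
            return peak_db, zone
    return peak_db, "D"

# Example with a synthetic sensitivity curve peaking at ~2.8 (about 8.9 dB)
f = np.linspace(1, 1000, 2000)
s = 1 + 1.8 * np.exp(-((f - 120) / 15) ** 2)
print(sensitivity_zone(s))   # -> roughly (8.9, 'A')
```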
Abstract:
The objective of this thesis is to examine distribution network designs and modeling practices and to create a framework for identifying the best possible distribution network structure for the case company. The main research question therefore is: how to optimize the case company's distribution network in terms of customer needs and costs? The theory chapters introduce the basic building blocks of distribution network design and the required calculation methods and models. A framework for distribution network projects was created based on the theory, and the case study was carried out by following the defined framework. The distribution network calculations were based on the company's sales plan for the years 2014–2020. The main conclusions and recommendations were that the new Asian business strategy requires high investments in logistics; the first step is to open a new satellite distribution center (DC) in China as soon as possible to support sales, and a second possible step is to open a regional DC in Asia within 2–4 years.
Abstract:
An electric system based on renewable energy faces challenges concerning the storage and utilization of energy due to the intermittent and seasonal nature of renewable energy sources. Wind and solar photovoltaic power production is variable and difficult to predict, and thus electricity storage will be needed if such sources are to provide basic power production. Hydrogen's energetic potential lies in its ability and versatility to store chemical energy, to serve as an energy carrier and to act as a feedstock for various industries. Hydrogen is also used, e.g., in the production of biofuels. The amount of energy released in hydrogen combustion is higher than that of any other fuel on a mass basis, with a higher heating value of 39.4 kWh/kg. However, even though hydrogen is the most abundant element in the universe, on Earth most hydrogen exists in molecular forms such as water. Therefore, hydrogen must be produced, and there are various methods to do so. Today, the majority of hydrogen comes from fossil fuels, mainly from steam methane reforming, and only about 4% of global hydrogen comes from water electrolysis. The combination of electrolytic hydrogen production from water with a renewable energy supply is attracting growing interest due to the sustainability and increased flexibility of the resulting energy system. The preferred option for intermittent hydrogen storage is pressurization in tanks, since at ambient conditions the volumetric energy density of hydrogen is low, and pressurized tanks are efficient and affordable when the cycling rate is high. Pressurized hydrogen enables energy storage in larger capacities compared to battery technologies, and in addition the energy can be stored for longer periods of time, on a time scale of months. In this thesis, the thermodynamics and electrochemistry associated with water electrolysis are described. The main water electrolysis technologies are presented with state-of-the-art specifications. Finally, a Power-to-Hydrogen infrastructure design for Lappeenranta University of Technology is presented: a laboratory setup for water electrolysis is specified, and the factors affecting its commissioning in Finland are presented.
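The thermodynamic quantities the thesis refers to can be summarized by the standard reversible and thermoneutral cell voltages of water electrolysis; the textbook values below are consistent with the 39.4 kWh/kg higher heating value quoted above:

```latex
% Water electrolysis at standard conditions (25 C, 1 bar), z = 2 electrons:
%   H2O(l) -> H2(g) + 1/2 O2(g)
\[
  U_{\mathrm{rev}} = \frac{\Delta G^{0}}{zF}
    = \frac{237.2\ \mathrm{kJ/mol}}{2 \times 96485\ \mathrm{C/mol}}
    \approx 1.23\ \mathrm{V},
  \qquad
  U_{\mathrm{tn}} = \frac{\Delta H^{0}}{zF}
    = \frac{285.8\ \mathrm{kJ/mol}}{2 \times 96485\ \mathrm{C/mol}}
    \approx 1.48\ \mathrm{V}.
\]
```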
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach for designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to the postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the human–manipulator interaction and ideal position control in the manipulator–task environment interaction. The proposed method has the characteristics of a universal technique, independent of the actual control algorithm, and it can be applied together with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A particle swarm optimization (PSO) algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically, which helps to ensure that the system has a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements on traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster in comparison to traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems; the new approach avoids dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents an experimental validation of the calibration method with one of the force sensors to which the method has been applied.
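The abstract describes the directed random search engine only at a high level. The following minimal Python sketch shows plain PSO scoring candidate parameters against a simulator stand-in; the bacterial-foraging hybridization of the thesis is omitted, and the cost function and all tuning constants are illustrative assumptions:

```python
# Minimal particle swarm optimization (PSO) sketch of the directed random
# search idea: each generation of candidate parameters is scored by a
# (here, dummy) simulation before the best ones are kept.
import numpy as np

def simulate_cost(params):
    """Stand-in for the real-time simulator scoring a parameter set."""
    return np.sum((params - np.array([1.2, 0.3])) ** 2)   # toy target

rng = np.random.default_rng(42)
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))          # particle positions (parameters)
v = np.zeros((n, dim))                    # particle velocities
pbest = x.copy()
pbest_cost = np.array([simulate_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = np.array([simulate_cost(p) for p in x])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best parameters:", gbest)          # approaches [1.2, 0.3]
```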