1000 results for CHSE-214
Abstract:
Deflection compensation of flexible boom structures in robot positioning is usually done using tables that contain the magnitude of the deflection together with inverse kinematics solutions of a rigid structure. The number of table values increases greatly if the working area of the boom is large and the required positioning accuracy is high. The inverse kinematics problem is highly nonlinear, and if the structure is redundant it cannot, in some cases, be solved in closed form. If the structural flexibility of the manipulator arms is taken into account, the problem is almost impossible to solve using analytical methods. Neural networks offer the possibility of approximating any linear or nonlinear function. This study presents four different methods of using neural networks in the static deflection compensation and inverse kinematics solution of a flexible, hydraulically driven manipulator. The training information required for the neural networks is obtained from a simulation model that includes elasticity characteristics. The functionality of the presented methods is tested against simulated and measured positioning accuracy. The simulated positioning accuracy is tested at 25 separate coordinate points; for each point, the positioning is tested with five different mass loads. The mean positioning error of the manipulator decreased from 31.9 mm to 4.1 mm at the test points. This accuracy enables the use of flexible manipulators in the positioning of larger objects. The measured positioning accuracy is tested at 9 separate points using three different mass loads. The mean positioning error decreased from 10.6 mm to 4.7 mm and the maximum error from 27.5 mm to 11.0 mm.
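As an illustration of the general idea (not the networks or manipulator of the study), the sketch below trains a small neural network to return joint angles for a planar two-link arm whose tip sags under load. The arm geometry, the sag model, the network size and the training setup are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 2.0, 1.5        # assumed link lengths [m]
SAG = 0.05               # crude tip sag [m] per unit mass load (assumption)

def forward(t1, t2, load):
    """Rigid forward kinematics plus a simple load-dependent deflection."""
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2) - SAG * load
    return x, y

# Stand-in for the simulation model: sample joint angles and loads, record
# the deflected tip position; the network learns the inverse mapping
# (target position + load -> joint angles), so the deflection is compensated.
n = 2000
t1 = rng.uniform(0.2, 1.2, n)
t2 = rng.uniform(0.3, 1.5, n)
load = rng.uniform(0.0, 5.0, n)
x, y = forward(t1, t2, load)
X = np.column_stack([x, y, load])
X = (X - X.mean(0)) / X.std(0)       # normalize the network inputs
Y = np.column_stack([t1, t2])        # targets: joint angles

# One hidden layer, trained with full-batch gradient descent on the MSE.
W1 = rng.normal(0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 2)); b2 = np.zeros(2)

def mse():
    P = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((P - Y) ** 2))

mse0 = mse()
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = 2 * (P - Y) / n                  # d(MSE)/dP
    dH = (G @ W2.T) * (1 - H ** 2)      # backprop through tanh
    W2 -= 0.05 * (H.T @ G);  b2 -= 0.05 * G.sum(0)
    W1 -= 0.05 * (X.T @ dH); b1 -= 0.05 * dH.sum(0)
mse_final = mse()
```

After training, `mse_final` is far below the initial error, i.e. the network has learned an approximate inverse kinematics that already includes the deflection.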
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving Lappeenranta University of Technology, the Warsaw University and INFN Naples, aims to integrate the different subsystems of the RPC detector and its trigger chain in order to develop a common framework to control and monitor the different parts. During the last three years I have been strongly involved in this project, in the hardware and software development, construction and commissioning, as the main responsible person and coordinator. The CMS Resistive Plate Chamber (RPC) system consists of 912 double-gap chambers at its start-up in the middle of 2008. Continuous control and monitoring of the detector, the trigger and all the ancillary sub-systems (high voltages, low voltages, environment, gas, and cooling) is required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to assure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations, and take protective and automatic actions to minimize consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work developed for this PhD thesis. Different technologies, middleware and solutions have been studied and adopted in the design and development of the different components, and a major challenge consisted in integrating these different parts with each other and into the general CMS control system and data acquisition framework.
Therefore, the RCS installation and commissioning phase, as well as its performance and the first results obtained during the last three years of CMS cosmic runs, will be presented.
Abstract:
Metaheuristic methods have become increasingly popular approaches to solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima sets additional requirements for the design of algorithms, which must be effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementations of the global and local search phases affect their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is not generally thoroughly understood, the research relies heavily on experimental studies to find out the properties of different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists.
As a part of this thesis, such a framework for generating tunable test functions for evaluating different multimodal optimization methods experimentally is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
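The division into global and local phases can be sketched as follows. This is a minimal, generic illustration (not the algorithms proposed in the thesis): a classic DE/rand/1/bin global phase followed by a crude coordinate-descent local phase, applied to a simple multimodal function whose minima lie near |x_i| = 1. The population size, control parameters and local-search scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Multimodal test function: minima at every corner of {-1, 1}^d."""
    return float(np.sum((x**2 - 1.0)**2))

def de(pop_size=20, dim=2, gens=200, F=0.7, CR=0.9, lo=-2.0, hi=2.0):
    """Global phase: Differential Evolution, rand/1/bin scheme."""
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if f(trial) <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, f(trial)
    return pop[np.argmin(fit)]

def local_refine(x, step=0.1, iters=60):
    """Local phase: shrink-step coordinate descent around the DE result."""
    x = x.copy()
    for _ in range(iters):
        improved = False
        for d in range(x.size):
            for s in (step, -step):
                y = x.copy(); y[d] += s
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5                       # no move improved: tighten the step
    return x

best = local_refine(de())
```

The global phase lands near one of the basins, and the local phase polishes that single optimum; a niching mechanism would be needed on top of this to retain several optima at once.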
Abstract:
This study focuses on the phenomenon of customer reference marketing in a business-to-business (B to B) context. Although customer references are generally considered an important marketing and sales tool, the academic literature has paid surprisingly little attention to the phenomenon. The study suggests that customer references can be viewed as important marketing assets for industrial suppliers, and that the ability to build, manage and leverage customer reference portfolios systematically constitutes a relevant marketing capability. The role of customer references is examined in the context of industrial suppliers' shift towards a solution and project orientation and in the light of the ongoing changes in the project business. Suppliers in several industry sectors are undergoing a change from traditional equipment manufacturing towards project- and solution-oriented business. It is argued in this thesis that the high complexity, the project-oriented nature and the intangible service elements that characterise many contemporary B to B offerings further increase the role of customer references. The study proposes three mechanisms of customer reference marketing: status transfer, validation through testimonials, and the demonstration of experience and prior performance. The study was conducted in the context of Finnish B to B process technology and information technology companies. The empirical data comprise 38 interviews with managers of four case companies, 165 customer reference descriptions gathered from six case companies' Web sites, as well as company-internal material. The findings from the case studies show that customer references have various external and internal functions that contribute to the growth and performance of B to B firms.
Externally, customer references bring status-transfer effects from reputable customers, concretise and demonstrate complex solutions, and provide indirect evidence of experience, previous performance, technological functionality and delivered customer value. They can also be leveraged internally to facilitate organisational learning and training, advance offering development, and motivate personnel. Major reference projects create new business opportunities and can be used as a vehicle for strategic change. The findings of the study shed light on the ongoing changes of orientation in the project business environment, increase understanding of the variety of ways in which customer references can be deployed as marketing assets, and provide a framework of the relevant tasks and activities related to building, managing and leveraging a firm's customer reference portfolio. The findings contribute to industrial marketing research, to the literature on marketing assets and capabilities, and to the literature on projects and solutions. The proposed functions and mechanisms of customer reference marketing bring a more thorough and structured understanding of the essence and characteristics of the phenomenon and give a wide-ranging view of the role of customer references as marketing assets for B to B firms. The study suggests several managerial implications to help industrial suppliers systematise their customer reference marketing efforts.
Abstract:
The uncertainty of any analytical determination depends on both analysis and sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy's sampling theory is currently the most complete theory of sampling, and it also takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work Gy's sampling theory was applied to several cases, including the analysis of chromite concentration estimated from SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy's sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties achieved are reliable. Variographic experiments, introduced in Gy's sampling theory, are beneficially applied in analyzing the uncertainty of auto-correlated data sets such as industrial process data and environmental discharges. The periodic behaviour of such processes can be observed by variographic analysis as well as by fast Fourier transformation and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental or process data are analyzed, as it can easily be estimated how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources will be used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments.
Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may in the end have beneficial effects on the economics of chemical analysis.
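The core of a variographic experiment can be sketched in a few lines. The process signal below is a made-up example (drift plus a periodic component plus noise, all assumed values); the variogram V(j) then shows how the expected squared difference between samples grows with the sampling interval, which is exactly the lag-dependence discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(500)
# Synthetic auto-correlated "process data": slow drift + cycle + noise.
signal = 0.01 * t + 2.0 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)

def variogram(h, max_lag):
    """Experimental variogram: V(j) = mean((h[i+j] - h[i])^2) / 2 for lag j."""
    return np.array([np.mean((h[j:] - h[:-j]) ** 2) / 2.0
                     for j in range(1, max_lag + 1)])

v = variogram(signal, 100)
```

For this signal, V(j) is large at half the cycle period (lag 25) and dips near the full period (lag 50), revealing the periodicity; the level of V(j) at the chosen sampling interval is the sampling-variance contribution to the overall uncertainty.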
Abstract:
In this thesis, the equilibrium and dynamic sorption properties of weakly basic chelating adsorbents were studied to explain the removal of copper and nickel from a concentrated zinc sulfate solution in a hydrometallurgical process. Silica-supported chelating composites containing either branched poly(ethyleneimine) (BPEI) or 2-(aminomethyl)pyridine (AMP) as the functional group were used. The adsorbents are commercially available from Purity Systems Inc., USA as WP-1® and CuWRAM®, respectively. The fundamental interactions between the adsorbents, sulfuric acid and metal sulfates were studied in detail, and the results were used to find the best conditions for the removal of copper and nickel from an authentic ZnSO4 process solution. In particular, the effect of acid concentration and temperature on the separation efficiency was considered. Both experimental and modeling aspects were covered in all cases. Metal sorption is considerably affected by the chemical properties of the studied adsorbents and by the separation conditions. In the case of WP-1, the acid affinity is so high that column separation of copper, nickel and zinc has to be done using the adsorbent in base form. On the other hand, the basicity of CuWRAM is significantly lower and the protonated adsorbent can be used. Increasing temperature decreases the basicity and the metal affinity of both adsorbents, but the uptake capacities remain practically unchanged. Moreover, increasing temperature substantially enhances intra-particle mass transport and decreases viscosities, thus allowing significantly higher feed flow rates in fixed-bed separation. The copper selectivity of both adsorbents is very high even in the presence of a 250-fold excess of zinc. However, because of the basicity of WP-1, metal precipitation is a serious problem, and therefore only CuWRAM is suitable for practical industrial application.
The optimum temperature for copper removal appears to be around 60 °C, and an alternative solution purification method is proposed. The Ni/Zn selectivity of both WP-1 and CuWRAM is insufficient for the removal of the very small amounts of nickel present in the concentrated ZnSO4 solution.
Abstract:
This is a study of team social networks, their antecedents and outcomes. In focusing attention on the structural configuration of the team, this research contributes to a new wave of thinking concerning group social capital. The research site was a random sample of Finnish work organisations; the data consisted of 499 employees in 76 teams representing 48 different organisations. A systematic literature review and quantitative methods were used in conducting the research: the former primarily to establish the current theoretical position on the relationships among the variables, and the latter to test these relationships. Social network analysis was the primary method used in identifying the social-network relations among the work-team members. The first and key contribution of this study is that it relates the structural network properties of work teams to behavioural outcomes, attitudinal outcomes and, ultimately, team performance. Moreover, it shows that addressing attitudinal outcomes is also important in terms of team performance; attitudinal outcomes (team identity) mediated the relationship between the team's social network and its performance. The second contribution is that it examines the possible antecedents of the social structure. It is thus one response to Salancik's (1995) call for a network theory in that it explains why certain network characteristics exist. It demonstrates that, irrespective of whether or not a team is heterogeneous in terms of age or gender, educational diversity may protect it from centralisation. However, heterogeneity in terms of gender turned out to have a negative impact on density. Thirdly, given the observation that the benefits of (team) networks are typically theorised and modelled without reference to the nature of the relationships comprising the structure, the study directly tested whether team knowledge mediated the effects of instrumental and expressive network relationships on team performance.
Furthermore, with its focus on expressive networks, which link the workplace to a more informal world and have been rather neglected in previous research, it enhances knowledge of teams and networks. The results indicate that knowledge sharing fully mediates the influence of complementarities between dense and fragmented instrumental network relationships, thus providing empirical validation of the implicit understanding that networks transfer knowledge. Fourthly, the study findings suggest that an optimal configuration of the work-team social-network structure combines both bridging and bonding social relationships.
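The two structural properties discussed above, density and centralisation, have simple operational definitions in social network analysis. The sketch below computes both from a binary, symmetric team adjacency matrix; the five-person "star" network is made-up illustration data, not from the study.

```python
import numpy as np

# Hypothetical 5-person team: member 0 is tied to everyone, nobody else
# is tied to anyone (a perfect "star" network).
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0]])

n = A.shape[0]
# Density: ties present divided by ties possible (directed count).
density = A.sum() / (n * (n - 1))

# Freeman degree centralization: 1.0 for a perfect star, 0.0 for a clique.
deg = A.sum(axis=1)
centralization = np.sum(deg.max() - deg) / ((n - 1) * (n - 2))

# density -> 8/20 = 0.4, centralization -> 1.0 for this star network
```

A dense team network would push `density` towards 1 and `centralization` towards 0, which is the contrast the findings on diversity and centralisation refer to.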
Abstract:
This thesis is devoted to investigations of three typical representatives of the II-V diluted magnetic semiconductors: Zn1-xMnxAs2, (Zn1-xMnx)3As2 and p-CdSb:Ni. When this work started, the family of the II-V semiconductors was represented only by compounds belonging to the subgroup II3-V2, such as (Zn1-xMnx)3As2, whereas the rest of the materials mentioned above had not been investigated at all. Pronounced low-field magnetic irreversibility, accompanied by a ferromagnetic transition, is observed in Zn1-xMnxAs2 and (Zn1-xMnx)3As2 near 300 K. These features give evidence for the presence of nanosize MnAs magnetic clusters responsible for a frustrated magnetic ground state. In addition, (Zn1-xMnx)3As2 demonstrates a large paramagnetic response due to a considerable amount of single Mn ions and small antiferromagnetic clusters; a similar paramagnetic system existing in Zn1-xMnxAs2 is much weaker. Distinct low-field magnetic irreversibility, accompanied by rapid saturation of the magnetization with increasing magnetic field, is observed near room temperature in p-CdSb:Ni as well. Such behavior is connected to a frustrated magnetic state determined by Ni-rich magnetic Ni1-xSbx nanoclusters. Their large non-sphericity and preferred orientations are responsible for the strong anisotropy of the coercivity and saturation magnetization of p-CdSb:Ni. Parameters of the Ni1-xSbx nanoclusters are estimated. The low-temperature resistivity of p-CdSb:Ni is governed by a hopping mechanism of charge transfer. The variable-range hopping conductivity observed in zero magnetic field demonstrates a tendency to transform into nearest-neighbor hopping conductivity in a non-zero magnetic field. The Hall effect in p-CdSb:Ni reveals the presence of a positive normal contribution and a negative anomalous contribution to the Hall resistivity.
The normal Hall coefficient is governed mainly by holes activated into the valence band, whereas the anomalous Hall effect, attributable to the Ni1-xSbx nanoclusters with ferromagnetically ordered internal spins, exhibits a low-temperature power-law resistivity scaling.
Abstract:
The ongoing development of the digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from the spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinctive areas – image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, which include kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of a traditional gray-scale SSIM measure developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence the overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable difference). Both image fidelity measures and the image quality model have proven to be effective in the respective experiments.
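Of the kernel similarity measures named above, the Gaussian RBF variant is straightforward to sketch. The example below compares two made-up reflectance-like spectra; the spectra, the wavelength grid and the `gamma` bandwidth are assumptions for illustration, not the measures' tuning in the study.

```python
import numpy as np

def rbf_similarity(s1, s2, gamma=0.5):
    """Gaussian RBF kernel: k(s1, s2) = exp(-gamma * ||s1 - s2||^2).
    Returns 1.0 for identical spectra and decays towards 0 as they differ."""
    d2 = np.sum((np.asarray(s1) - np.asarray(s2)) ** 2)
    return float(np.exp(-gamma * d2))

wavelengths = np.linspace(400, 700, 31)             # nm, 10 nm steps
spec_a = np.exp(-((wavelengths - 550) / 60.0) ** 2) # reflectance-like curve
spec_b = np.exp(-((wavelengths - 560) / 60.0) ** 2) # slightly shifted copy

sim_self = rbf_similarity(spec_a, spec_a)   # identical spectra: exactly 1.0
sim_ab = rbf_similarity(spec_a, spec_b)     # shifted copy: below 1.0
```

Applied per pixel over a spectral image and pooled, such a kernel gives a fidelity score between a reproduction and the original, which is the role the kernel measures play here.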
Abstract:
In recent times of global turmoil, the need for uncertainty management has become ever more pressing. The need for enhanced foresight especially concerns capital-intensive industries, which need to commit their resources and assets over long planning horizons. Scenario planning has been acknowledged to have many virtues - and limitations - in mapping the future and illustrating alternative development paths. The present study was initiated to address both the need for improved foresight in two capital-intensive industries, the paper and steel industries, and the imperfections in current scenario practice. The research problem has been approached by engendering a problem-solving vehicle that combines, e.g., elements of a generic scenario process, face-to-face group support methods, deductive scenario reasoning and causal mapping into a fully integrated scenario process. The process, called the SAGES scenario framework, has been empirically tested by creating alternative futures for the two capital-intensive industries, i.e. the paper and steel industries. Three scenarios for each industry were engendered, together with the identification of the key megatrends, the most important foreign investment determinants, key future drivers and leading indicators for the materialisation of the scenarios. The empirical results revealed a two-fold outlook for the paper industry, while the steel industry's future was seen as much more positive. The research found support for utilising group support systems in a scenario and strategic planning context, with some limitations. Key perceived benefits include high time-efficiency, productivity and lower resource-intensiveness. Group support also seems to enhance participant satisfaction, encourage innovative thinking and provide the users with personalised qualitative scenarios.
Abstract:
The traditional forest industry is a good example of the changing nature of the competitive environment in many industries. Faced with drastic challenges, forest industry companies are forced to search for new value-creating strategies in order to create competitive advantage. The emerging bioenergy business is now offering promising avenues for value creation for both the forest and energy sectors because of their complementary resources and knowledge with respect to bioenergy production from forest-based biomass. The key objective of this dissertation is to examine the sources of sustainable competitive advantage and the value-creation opportunities that are emerging at the intersection of the forest and energy industries. The research topic is considered from different perspectives in order to provide a comprehensive view of the phenomenon. The study discusses the business opportunities related to producing bioenergy from forest-based biomass, and sheds light on the greatest challenges and threats influencing the success of collaboration between the forest and energy sectors. In addition, it identifies existing and potential bioenergy actors, and considers the resources and capabilities needed in order to prosper in the bioenergy field. The value-creation perspective is founded on strategic management accounting, the theoretical frameworks are adopted from the field of strategic management, and the future aspect is taken into account through the application of futures studies research methodology. This thesis consists of two parts: the first part provides a synthesis of the overall dissertation, and the second part comprises four complementary research papers. The research setting is explorative in nature, and both qualitative and quantitative research methods are used. As a result, the thesis lays the foundation for non-technological studies on bioenergy.
It gives an example of how to study new value-creation opportunities at an industrial intersection, and discusses the main determinants affecting the value-creation process. In order to accomplish these objectives, the phenomenon of value creation at the intersection of the forest and energy industries is theorized and connected with the dynamic resource-based view of the firm.
Abstract:
An oscillating overvoltage has become a common phenomenon at the motor terminal in inverter-fed variable-speed drives. The problem has emerged since modern insulated gate bipolar transistors became the standard choice as the power switch component in low-voltage frequency converter drives. The overvoltage phenomenon is a consequence of the pulse shape of the inverter output voltage and the impedance mismatches between the inverter, motor cable, and motor. The overvoltages are harmful to the electric motor and may cause, for instance, insulation failure in the motor. Several methods have been developed to mitigate the problem. However, most of them are based on filtering with lossy passive components, whose drawbacks are typically their cost and size. In this doctoral dissertation, the application of a new active du/dt filtering method, based on a low-loss LC circuit and active control, to eliminate the motor overvoltages is discussed. The main benefits of the method are the controllability of the output voltage du/dt within certain limits, considerably smaller inductances in the filter circuit resulting in a smaller physical component size, and excellent filtering performance when compared with typical traditional du/dt filtering solutions. Moreover, no additional components are required, since the active control of the filter circuit takes place in the process of the upper-level PWM modulation, using the same power switches as the inverter output stage. Further, the active du/dt method will benefit from the development of semiconductor power switch modules as new technologies and materials emerge, because the method requires additional switching in the output stage of the inverter and the generation of narrow voltage pulses. Since additional switching is required in the output stage, additional losses are generated in the inverter as a result of the application of the method.
Considerations on the application of the active du/dt filtering method in electric drives are presented together with experimental data in order to verify the potential of the method.
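The origin of the overvoltage can be illustrated with the standard transmission-line reflection argument: a steep PWM edge travelling along the motor cable reflects at the motor terminal when the cable and motor impedances are mismatched. The impedance and voltage values below are assumed example figures, not measurements from the dissertation.

```python
# Assumed example values: a low-voltage drive with a long motor cable.
Z_cable = 80.0     # characteristic impedance of the motor cable [ohm]
Z_motor = 2000.0   # high-frequency input impedance of the motor [ohm]
V_dc = 540.0       # DC-link voltage, e.g. rectified 400 V three-phase supply

# Voltage reflection coefficient at the motor terminal:
gamma = (Z_motor - Z_cable) / (Z_motor + Z_cable)

# For a fast edge on a long cable, the first voltage peak at the terminal
# approaches (1 + gamma) times the DC-link voltage:
V_peak = (1 + gamma) * V_dc

# gamma is about 0.92 here, so the first peak approaches ~1.9 * V_dc (~1040 V),
# which is the kind of stress a du/dt filter is meant to suppress.
```

Slowing the voltage edge (reducing du/dt) relative to the cable propagation time is what prevents the peak from building up, which is the goal of both passive and active du/dt filtering.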
Abstract:
Concentrated winding permanent magnet machines and their electromagnetic properties are studied in this doctoral thesis. The thesis covers a number of main tasks related to the application of permanent magnets in concentrated winding open-slot machines. Suitable analytical methods are required for the first design calculations of a new machine. Concentrated winding machines differ from conventional integral slot winding machines in such a way that adapted analytical calculation methods are needed; a simple analytical model for calculating concentrated winding axial flux machines is provided. Three main design tasks are then discussed in more detail in the thesis. The magnetic length of rotor surface magnet machines is studied, and it is shown that the traditional methods have to be modified also in this respect. An important topic in this study has been to evaluate and minimize the rotor permanent magnet Joule losses by using segmented magnets in the calculations and experiments. The determination of the magnetizing and leakage inductances of a concentrated winding machine and the torque production capability of concentrated winding machines with different pole pair numbers are studied, and the results are compared with the corresponding properties of integral slot winding machines. The thesis introduces a new practical permanent magnet motor type for industrial use. The special features of the machine are based on the option of using concentrated winding open-slot constructions of permanent magnet synchronous machines in the normal speed ranges of industrial motors, for instance up to 3000 min-1, without excessive rotor losses. By applying the analytical equations and methods introduced in the thesis, a 37 kW, 2400 min-1, 12-slot, 10-pole axial flux machine with rotor-surface-mounted magnets is designed. The performance of the designed motor is determined by experimental measurements and finite element calculations.
Abstract:
In the paper machine, it is not desirable for the boundary layer flows on the fabric and roll surfaces to travel into the closing nips and create overpressure. In this thesis, the aerodynamic behavior of grooved and smooth rolls is compared in order to understand the nip flow phenomena, which are the main reason why vacuum and grooved roll constructions are designed. A common method of removing the boundary layer flow from the closing nip is to use a vacuum roll construction; the downside of vacuum rolls is their high operational cost due to pressure losses in the vacuum roll shell. The deep-grooved roll has the same goal: to create a pressure difference over the paper web and keep the paper attached to the roll or fabric surface in the drying pocket of the paper machine. A literature review revealed that the aerodynamic functionality of the grooved roll is not very well known. In this thesis, the aerodynamic functionality of the grooved roll in interaction with a permeable or impermeable wall is studied by varying the groove properties. Computational fluid dynamics simulations are utilized as the research tool; the simulations have been performed with commercial fluid dynamics software, ANSYS Fluent. Simulation results obtained with 3- and 2-dimensional fluid dynamics models are compared to laboratory-scale measurements, which were made with a grooved roll simulator designed for this research. The variables in the comparison are the paper or fabric wrap angle, the surface velocities, the groove geometry and the wall permeability. Present-day computational and modeling resources limit grooved roll fluid dynamics simulations at the paper machine scale. Based on the analysis of the aerodynamic functionality of the grooved roll, a grooved roll simulation tool is proposed. The smooth roll simulations show that the closing nip pressure does not depend on the length of boundary layer development.
An increase in surface velocity affects the pressure distribution in the closing and opening nips. The 3D grooved roll model reveals the aerodynamic functionality of the grooved roll: with an optimal groove size it is possible to avoid closing nip overpressure and to keep the web attached to the fabric surface over the wrap angle. The groove flow friction and minor losses play different roles when the wrap angle is changed. The proposed 2D grooved roll simulation tool is able to replicate the grooved roll aerodynamic behavior with reasonable accuracy. For a small wrap angle, the chosen approach for calculating the groove friction losses predicts the pressure distribution correctly; for a large wrap angle, it yields too large pressure gradients, and the way of calculating the air flow friction losses in the groove has to be reconsidered. The aerodynamic functionality of the grooved roll is based on minor and viscous losses in the closing and opening nips as well as in the grooves. The proposed 2D grooved roll model is a simplification intended to reduce the computational and modeling effort; the simulation tool makes it possible to simulate complex constructions at the paper machine scale. In order to use the grooved roll as a replacement for the vacuum roll, the grooved roll properties have to be considered on the basis of the web handling application.
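The friction and minor losses referred to above can be estimated with the standard Darcy-Weisbach form. The groove dimensions, air velocity and loss coefficient below are assumed example values, not data from the thesis; the calculation only illustrates how the two loss terms compare for a single groove.

```python
# Assumed example values for one groove of a grooved roll.
rho = 1.2          # air density [kg/m^3]
mu = 1.8e-5        # dynamic viscosity of air [Pa s]
v = 10.0           # mean air velocity in the groove [m/s]
w, d = 3e-3, 5e-3  # groove width and depth [m]
length = 0.5       # wetted groove length along the wrap angle [m]

# Hydraulic diameter of the rectangular groove (three walls + web surface).
D_h = 4 * (w * d) / (2 * (w + d))
Re = rho * v * D_h / mu                   # Reynolds number of the groove flow

# Laminar (64/Re) or turbulent (Blasius) Darcy friction factor.
f = 64 / Re if Re < 2300 else 0.316 * Re ** -0.25

dp_friction = f * (length / D_h) * 0.5 * rho * v ** 2  # viscous (major) loss
dp_minor = 1.0 * 0.5 * rho * v ** 2                    # one minor loss, K = 1.0
```

With these numbers the viscous loss along the groove dominates the single minor loss; changing the wrap angle changes `length` and therefore shifts this balance, which is the effect noted above.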
Abstract:
It is necessary to use highly specialized robots in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and in the maintenance of the reactor, due to the demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for a structure of this size. The VV consists of nine sectors that are to be welded together, and it has a toroidal chamber structure. The task of the designed robot is to carry the welding apparatus along a path with stringent tolerances during the assembly operation. In addition to the initial vacuum vessel assembly, after a limited running period, sectors need to be replaced for repair. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work: one version is a purely parallel manipulator, and another is a hybrid manipulator in which parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance under high speed and high dynamic loading conditions. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture in which all of the links are connected both to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining and repair of the VV of ITER. The process of assembling and machining the vacuum vessel requires a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built, with six and ten degrees of freedom, driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined and two prototypes were built.
Experiments in machine cutting and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all the necessary machining tools and welding end-effectors accurately and stably in all positions inside the vacuum vessel sector. The kinematic models proved to be complex, especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out to ensure sufficient stiffness during robot motion. The entire design and testing process of the robots was a complex task due to the highly specialized manufacturing technology needed in the ITER reactor, but the results demonstrate the applicability of the proposed solutions well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
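One reason parallel architectures are attractive here is that their inverse kinematics is simple compared with their forward kinematics: each leg length follows directly from the platform pose. The sketch below shows this for a generic six-leg, Stewart-type layout with made-up joint coordinates; it is an illustration of the principle, not the geometry of the ITER robots.

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, pos, rpy):
    """Inverse kinematics of a Stewart-type platform: leg lengths for a
    platform position `pos` [m] and roll-pitch-yaw orientation `rpy` [rad]."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    world = (R @ plat_pts.T).T + pos          # platform joints in world frame
    return np.linalg.norm(world - base_pts, axis=1)

# Made-up symmetric geometry: base joints on a 1 m circle, platform joints
# on a 0.5 m circle, both spaced 60 degrees apart.
ang = np.deg2rad(np.arange(0, 360, 60))
base = np.column_stack([np.cos(ang), np.sin(ang), np.zeros(6)])
plat = np.column_stack([0.5 * np.cos(ang), 0.5 * np.sin(ang), np.zeros(6)])

# Home pose: platform centered 1 m above the base, no rotation.
# Every leg has the same length, sqrt(0.5^2 + 1^2) m, at this pose.
L_home = leg_lengths(base, plat, np.array([0.0, 0.0, 1.0]), (0.0, 0.0, 0.0))
```

Commanding the actuators to these closed-form leg lengths realizes the desired pose, whereas the forward problem (pose from measured leg lengths) generally requires numerical iteration, which is part of why the 10-DOF redundant model in the thesis was the more complex one.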