930 results for High-speed video


Relevance: 80.00%

Abstract:

Passenger comfort in terms of acoustic noise levels is a key driver in train design. The problem is especially relevant for high-speed trains, where aerodynamically induced noise is dominant, but it is also important for medium-speed trains, where the mechanical sources of noise may have more influence. Numerical prediction of the interior noise of the train is a very complex problem involving many different parameters: complex geometries and materials, different noise sources, complex interactions among those sources, a broad range of frequencies where the phenomenon is important, etc. In this paper, the main findings of the work developed at IDR/UPM (Instituto de Microgravedad “Ignacio Da Riva”, Universidad Politécnica de Madrid) are presented, concentrating on the modelling methodologies used for the different frequency ranges of interest, from FEM-BEM models and hybrid FEM-SEA models to pure SEA models. The advantages and disadvantages of the different approaches are summarized. Different modelling techniques have also been evaluated and compared, taking into account the varied and specific geometrical configurations typical of this type of structure, and the material properties used in the models. The critical configuration of the train inside a tunnel is studied in order to evaluate the external loads due to the noise sources of the train. In this work, an SEA model composed of periodic characteristic sections of a high-speed train is analysed inside a tunnel.
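
For orientation, pure SEA models of the kind mentioned rest on a steady-state power balance between coupled subsystems; a generic sketch of that relation (standard SEA, not a formula quoted from the paper) is

$$ P_{\mathrm{in},i} = \omega\,\eta_i E_i + \sum_{j \ne i} \omega\,\eta_{ij}\, n_i \left( \frac{E_i}{n_i} - \frac{E_j}{n_j} \right), $$

where, for band centre frequency $\omega$, $E_i$ is the vibrational energy of subsystem $i$, $n_i$ its modal density, $\eta_i$ its damping loss factor and $\eta_{ij}$ the coupling loss factor to subsystem $j$.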

Relevance: 80.00%

Abstract:

Impact techniques can be used to evaluate fruit firmness. Chen and Ruiz-Altisent developed and used a 50.4 g impactor with a 19 mm diameter spherical tip, dropped from different heights onto the fruit. Another impactor device consists of a hemispherical impacting tip attached to the end of a pivoting arm. In both devices, a small accelerometer is mounted behind the impacting tip. A prototype lateral-impactor on-line sorting system for high-speed firmness sorting of fruit has been developed and tested. Preliminary results show that on-line use is feasible. The latest version of the impact device includes new elements that improve the data resolution, the signal-to-noise ratio and the precision.
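
As an illustration only of the quantities such an impactor yields (the function names and the specific firmness index below are assumptions, not details from the abstract), a minimal sketch could look like this:

```python
import numpy as np

def impact_velocity(drop_height_m: float, g: float = 9.81) -> float:
    """Velocity of the impactor tip at contact, assuming free fall from the drop height."""
    return np.sqrt(2.0 * g * drop_height_m)

def firmness_index(accel_trace: np.ndarray, dt: float) -> float:
    """Illustrative firmness index: peak acceleration divided by time to peak.

    Firmer fruit gives a higher, earlier acceleration peak; the exact index used
    by Chen and Ruiz-Altisent may differ from this placeholder.
    """
    peak = accel_trace.max()
    t_peak = max(accel_trace.argmax() * dt, dt)  # guard against a peak at the first sample
    return peak / t_peak

# Example: a synthetic half-sine impact pulse (~2 ms contact), sampled at 50 kHz
dt = 2e-5
t = np.arange(0.0, 2e-3, dt)
accel = 800.0 * np.sin(np.pi * t / 2e-3)  # m/s^2, placeholder trace
print(impact_velocity(0.04), firmness_index(accel, dt))
```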

Relevance: 80.00%

Abstract:

This paper assesses the main challenges associated with the propagation and channel modeling of broadband radio systems in the complex environment of high-speed and metropolitan railways. These challenges comprise practical simulation, interference modeling, radio planning, test trials and performance evaluation in different railway scenarios, using Long Term Evolution (LTE) as a test case. This approach requires several steps; the first is the use of a radio propagation simulator based on ray-tracing techniques to predict propagation accurately. In addition to the radio propagation simulator, a complete test bed has been built to assess LTE performance, channel propagation conditions and interference with other systems in real-world environments by means of standard-compliant LTE transmissions. These measurement results allowed us to evaluate the propagation and performance of broadband signals and to test the suitability of LTE radio technology for complex railway scenarios.
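
As a reference point for such simulations, the free-space path loss against which ray-tracing predictions are commonly sanity-checked is the standard relation (not a formula from the paper)

$$ \mathrm{FSPL}\;[\mathrm{dB}] = 20\log_{10} d + 20\log_{10} f + 20\log_{10}\!\frac{4\pi}{c}, $$

with $d$ the link distance in metres, $f$ the carrier frequency in Hz and $c$ the speed of light.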

Relevance: 80.00%

Abstract:

Conventional dual-rail precharge logic suffers from the difficulty of implementing a dual-rail structure that achieves strict compensation between the counterpart rails. As a lightweight and high-speed dual-rail style, balanced cell-based dual-rail logic (BCDL) uses synchronised compound gates with a global precharge signal to provide high resistance against differential power or electromagnetic analyses. BCDL can be realised with generic field-programmable gate array (FPGA) design flows and constraints. However, routing remains a concern because of the limited flexibility of routing control, which results in bias between complementary nets in security-sensitive parts. In this article, novel verifications of the routing effect, based on a routing repair technique, are presented. An 8-bit simplified Advanced Encryption Standard (AES) co-processor, constructed on block random access memory (RAM)-based BCDL, is implemented in Xilinx Virtex-5 FPGAs. Since imbalanced routing is the major defect in BCDL, the authors can rule out other influences and fairly quantify the security variation. A series of asymptotic correlation electromagnetic (EM) analyses is launched against a group of circuits with consecutive routing schemes in order to verify the impact of routing on side-channel analyses. After repairing the non-identical routings, mutual information analyses are executed to further validate the concrete security increase obtained from identical routing pairs in BCDL.
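
For readers unfamiliar with the attack model, the core of a correlation EM analysis of the kind launched against these circuits can be sketched as follows; the trace layout, the Hamming-weight leakage model and the first-round S-box target are common assumptions, not details taken from the article.

```python
import numpy as np

def cpa_key_ranking(traces: np.ndarray, plaintexts: np.ndarray, sbox: np.ndarray) -> np.ndarray:
    """Rank the 256 key-byte guesses by peak |Pearson correlation|.

    traces: (N, L) array of EM samples; plaintexts: (N,) uint8 array of the
    targeted plaintext byte; sbox: (256,) AES S-box lookup table (assumed available).
    """
    hw = np.array([bin(x).count("1") for x in range(256)])      # Hamming-weight model
    t = traces - traces.mean(axis=0)                            # centre the traces once
    scores = np.zeros(256)
    for guess in range(256):
        leak = hw[sbox[plaintexts ^ guess]].astype(float)       # hypothetical leakage per trace
        leak -= leak.mean()
        r = (leak @ t) / (np.linalg.norm(leak) * np.linalg.norm(t, axis=0) + 1e-12)
        scores[guess] = np.abs(r).max()                         # peak correlation over time
    return np.argsort(scores)[::-1]                             # best guess first
```

A routing-balanced implementation should push the correct key's peak correlation down towards the noise floor, which is what the consecutive routing schemes in the article are meant to expose.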

Relevance: 80.00%

Abstract:

The aim of the present study was to analyze the visual strategies used prior to a 7-metre throw by elite and amateur handball goalkeepers. To this end, we analyzed the number and order of visual fixations of 10 goalkeepers (29.7±5.4 years; 14.7±8.6 years of experience), 3 elite and 7 amateur, during the life-size projection of 14 different throws made by different players. During each throw, the movement of the eyeballs, the dilation of the pupil (pupillometry) and the subject's blinking were recorded by a system that permitted eye tracking with high-speed cameras and the subsequent presentation of the visual data for each action studied. The elite goalkeepers performed a greater number of visual fixations than the amateur goalkeepers, revealing large and significant differences. Likewise, the priority zones observed differed, with the amateur goalkeepers fixating more on the thrower's face and the elite goalkeepers paying more attention to the arm/ball area. It can therefore be inferred that elite goalkeepers have a greater perceptive capacity and also use different visual strategies from the amateur goalkeepers.

Relevance: 80.00%

Abstract:

The 6-cylinder servo-hydraulic loading system of CEDEX's track box (250 kN, 50 Hz) has recently been supplemented with a new piezoelectric loading system (±20 kN, 300 Hz), allowing low-amplitude, high-frequency dynamic load time histories to be added to the high-amplitude, low-frequency quasi-static load time histories used so far in CEDEX's track box to assess the inelastic long-term behavior of ballast under mixed traffic on conventional and high-speed lines. This presentation will discuss the results obtained in the first long-duration test performed at CEDEX's track box using both loading systems simultaneously, to simulate the pass-by of 6000 freight vehicles (1 million 225 kN axle loads) travelling at a speed of 120 km/h over a line with vertical irregularities corresponding to a medium-quality line level. The superstructure of the track, tested at full scale, consisted of E 60 rails, stiff rail pads (> 450 kN/mm), B90.2 sleepers with USP of 0.10 N/mm and a 0.35 m thick ballast layer of ADIF first class. A shear wave velocity of 250 m/s can be assumed for the different layers of the track sub-base. The long-term ballast settlements will be compared with those obtained in a previous long-duration quasi-static test performed on the same track for the RIVAS (EU co-funded) project, in which no dynamic loads were considered. The results provided by a large-diameter cyclic triaxial cell testing full-size ballast will also be commented on. Finally, the progress made at CEDEX's Geotechnical Laboratory towards reproducing numerically the long-term behavior of ballast will be discussed.
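
Purely as an illustration of how the two actuators are combined (the amplitudes, frequencies and sampling rate below are assumptions within the quoted actuator limits, not test values), the composite load time history can be sketched as:

```python
import numpy as np

# Quasi-static axle-passage component (servo-hydraulic, <= 50 Hz) plus a
# low-amplitude high-frequency component (piezoelectric, up to 300 Hz).
fs = 2000.0                         # sampling rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)   # 2 s of signal

quasi_static = 112.5e3 * (1.0 - np.cos(2.0 * np.pi * 4.0 * t))   # ~225 kN peak pulses at 4 Hz (illustrative)
dynamic = 10.0e3 * np.sin(2.0 * np.pi * 250.0 * t)                # +/-10 kN at 250 Hz (illustrative)
total_load = quasi_static + dynamic                                # combined load applied to the track, N

print(f"peak load: {total_load.max() / 1e3:.1f} kN")
```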

Relevance: 80.00%

Abstract:

In this paper, the main challenges associated with the migration process towards LTE are assessed. These challenges comprise, among others, the following key topics: Reliability, Availability, Maintainability and Safety (RAMS) requirements; end-to-end Quality of Service (QoS) requirements; system performance in high-speed scenarios; communication system deployment strategy; and system backward compatibility, as well as future system features for delivering railway services. The practical evaluation of LTE system capabilities and performance in High Speed Railway (HSR) scenarios requires the development of an LTE demonstrator and an LTE system-level simulator. Within this scope, the authors have developed an RF LTE demonstrator as well as an LTE system-level simulator, which will provide valuable information for assessing LTE performance and suitability in real HSR scenarios. This work is being developed within the framework of a research project to evaluate the feasibility of LTE becoming the new railway communication system. The companies and universities involved in this project are: Technical University of Madrid (UPM), Alcatel Lucent Spain, ADIF (Spanish Railway Infrastructure Manager), Metro de Madrid, AT4 Wireless, the University of A Coruña (UDC) and the University of Málaga (UMA).

Relevance: 80.00%

Abstract:

On-line dynamic MRI, oriented to industrial grading lines, requires high-speed sequences with correction of motion artefacts. In this study, two different types of motion-corrected sequences (FLASH and UFLARE) have been implemented in real time. They are based on T2* and T2 contrast, respectively, and their selection depends on the expected contrast effect of the disorder: while watercore enhances bright areas due to higher fluid mobility, internal breakdown produces low signal due to texture degradation. For the watercore study, five apple cultivars were used (Normanda, 18 fruits; Fuji, 35; Helada, 36; Verde Doncella, 54; Esperiega, 75) over two seasons (2011 and 2012). In total, 218 fruits were measured under both static conditions (20 slices per fruit) and dynamic conditions (3 repetitions without slice selection). For internal breakdown, the Braeburn cultivar was studied (106 fruits in total) under both static (20 slices per fruit) and dynamic conditions (3 replicates with slice selection). Metrological aspects such as the repeatability of the dynamic images and the resulting stability of histogram features are of major interest for further industrial application. The ability to segregate among varying degrees of disorder is also analyzed.
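
As an illustration of the kind of first-order histogram features whose stability matters for on-line grading (the specific feature set is an assumption, not taken from the study):

```python
import numpy as np

def histogram_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Illustrative first-order histogram descriptors of an MR slice within a fruit mask."""
    vals = image[mask].astype(float)
    return {
        "mean": vals.mean(),
        "std": vals.std(),
        "p10": np.percentile(vals, 10),
        "p90": np.percentile(vals, 90),
        # fraction of unusually bright voxels, e.g. watercore-like regions (placeholder threshold)
        "bright_fraction": (vals > vals.mean() + 2.0 * vals.std()).mean(),
    }

# Example with a synthetic image and a trivial mask
img = np.random.default_rng(0).normal(100.0, 20.0, (64, 64))
print(histogram_features(img, img > 0))
```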

Relevance: 80.00%

Abstract:

In the context of the present conference paper, culverts are defined as openings or conduits passing through an embankment, usually for the purpose of conveying water or providing safe pedestrian and animal crossings under rail infrastructure. The clear opening of culverts may reach values of up to 12 m; however, values around 3 m are encountered much more frequently. Depending on the topography, the number of culverts is about 10 times that of bridges. In spite of this, their dynamic behavior has received far less attention than that of bridges. The fundamental frequency of culverts is considerably higher than that of bridges, even in the case of short-span bridges. As the operational speed of modern high-speed passenger rail systems rises, higher frequencies are excited and thus more energy falls in the frequency bands where the fundamental frequency of box culverts is located. Much research effort has been devoted to ballast instability due to bridge resonance since it was first observed when high-speed trains were introduced on the Paris-Lyon rail line. To prevent this phenomenon, design codes establish a limit value for the vertical deck acceleration. A numerical model of some kind is obviously needed to estimate this acceleration level, and at that point things become quite complicated. Not only acceleration but also displacement values are of interest, e.g. to estimate the impact factor. According to design manuals, the structural design should consider the depth of cover, trench width and condition, bedding type, backfill material and compaction. The same applies to the numerical model; however, the question is: what type of model is appropriate for this job? A 3D model including the embankment and a significant part of the soil underneath the culvert is computationally very expensive and hard to justify given the associated costs. Consequently, there is a clear need for simplified models and design rules in order to achieve reasonable costs. This paper describes the results obtained from a 2D finite element model that has been calibrated by means of a 3D model and experimental data obtained at culverts belonging to the high-speed railway line linking Segovia and Valladolid in Spain.
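
For orientation only (not a result of the paper), the loading frequencies excited by a train of regularly spaced axles are usually estimated as

$$ f_n = \frac{n\,v}{d}, \qquad n = 1, 2, 3, \ldots $$

where $v$ is the train speed and $d$ a characteristic axle or bogie spacing; the resonance risk addressed by the acceleration limit arises when one of these frequencies approaches the fundamental frequency of the culvert or bridge.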

Relevance: 80.00%

Abstract:

In the last decade, high-frequency telecommunication systems have evolved enormously. Frequency bands, user bandwidths, modulation techniques and other electrical characteristics of these systems are constantly changing, following the evolution of technology and the emergence of new applications. The architectures of modern transceivers are different from the traditional ones. Many of the functions conventionally performed by analog circuitry have gradually been assigned to digital signal processors; in this way, the boundaries between baseband and RF functionalities become blurred. Modern digital wireless transceivers are capable of supporting high-speed data protocols, so a high integration scale is required for many of the components in the block chain. One of the goals of this research work is to investigate new configurations in the development of RF demonstrators (a receiver and a transmitter) and transponders for communications and military purposes, respectively. An LTE physical-layer demonstrator has been implemented to assess the viability of LTE in the railway environment under the framework of the TECRAIL project. An important activity, related to the CALRADAR project, has been performed for the calibration of Doppler radars with extremely high precision. The contribution is the Doppler unit of the radar target generator developed, which provides very high frequency resolution. In order to assure the accuracy of the radar calibration process, a complete analysis of the uncertainty of the above-mentioned procedure has been carried out. Another important research topic has been the development of photonic devices, which are considered enabling technologies for future RF and microwave devices and subsystems. Some Software Defined Radio demands are addressed by the proposed novel circuit concepts based on photonically tunable elements with dynamically programmable parameters. A small contribution has been made in the field of Hairpin-line bandpass filters. These filters are compact and can easily be integrated into microwave circuits, finding a wide range of applications in communication and military systems; the contributions made here are improvements in the design and simulation of wideband filters. Finally, an important proposal to balance and enhance the transmission of discrete timing signals between the TRMs and other processing units of the state-of-the-art SEOSAR/PAZ satellite has been carried out, obtaining the optimal configuration of the high-speed data transmission buses based on a transceiver network.
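
For reference, the Doppler shift that a radar target generator of this kind must synthesize for a simulated target of radial velocity $v$ is given by the standard monostatic relation (assumed here, not quoted from the thesis)

$$ f_d = \frac{2\,v\,f_0}{c}, $$

where $f_0$ is the radar carrier frequency and $c$ the speed of light, so the fine frequency resolution of the Doppler unit translates directly into fine velocity resolution for the simulated targets.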

Relevance: 80.00%

Abstract:

Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations unveil two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By way of contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Results are calculated for the distributions of mixture fraction and scalar dissipation rate across the mixing layer that reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.
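
For context, the one-step Arrhenius chemistry and the flamelet diagnostics mentioned above typically take the following generic forms (standard definitions, not the paper's specific non-dimensionalization):

$$ \dot{\omega}_F = -B\,\rho^2 Y_F Y_{O_2} \exp\!\left(-\frac{E_a}{R T}\right), \qquad \chi = 2\,D_Z\,\lvert \nabla Z \rvert^2, $$

where $Y_F$ and $Y_{O_2}$ are the fuel and oxygen mass fractions, $E_a$ the activation energy, $Z$ the mixture fraction, $D_Z$ its diffusivity and $\chi$ the scalar dissipation rate.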

Relevance: 80.00%

Abstract:

In recent years, a great number of high-speed railway bridges have been constructed in Spain. Due to the demanding geometrical requirements of high-speed train routes, these bridges frequently reach remarkable lengths, which is the main reason why railway bridges are, overall, longer than roadway bridges. Along the same lines, it is also worth highlighting the importance of high-speed train braking forces compared with those of road vehicles. While road-vehicle braking forces can be handled easily, railway braking forces demand the existence of a fixed point. It is generally located at an abutment, where the no-displacement requirement can be more easily achieved; in some other cases the fixed point is placed at one of the interior columns. As a consequence of these bridges' length and the need for a fixed point, temperature, creep and shrinkage strains lead to fairly significant deck displacements, which grow with the distance to the fixed point. These displacements must be accommodated by the deformation of the piers and bearings. Regular elastomeric bearings cannot allow such displacements and are therefore not suitable for this task. For this reason, the use of sliding PTFE POT bearings has become extensive practice, mainly because they permit sliding with low friction. This is not the only reason for the extensive use of these bearings in high-speed railway bridges. The vertical loads at each bent are significantly higher than in roadway bridges, mainly because the live loads due to train traffic are much greater than those due to road vehicles, and the ballasted track itself represents a far from negligible permanent load. All this together increases the vertical loads to be withstood, and this high vertical load demand rules out conventional bearings because of excessive compression. The more advanced design of PTFE POT bearings allows this level of compression to be accommodated. The bridge configuration explained above leads to a key fact regarding longitudinal horizontal loads (such as braking forces): these loads are transmitted entirely to the fixed point alone. Piers do not receive these longitudinal horizontal loads, since the PTFE POT bearings used are longitudinally free-sliding. This means that longitudinal horizontal actions on top of the piers will not be forces but imposed displacements. This feature requires these piers to be designed in a different manner than when piers are elastically linked to the superstructure, as is the case with elastomeric bearings. In response to the above, the main goal of this Thesis is to present a design method for columns fitted with either longitudinally fixed POT bearings or longitudinally free PTFE POT bearings in bridges with a fixed-point deck configuration, applicable to both railway and road bridges. The method was developed with the intention of accounting for all the major parameters that play a role in the behavior of these columns. The long process that finally led to the method's formulation is rooted in the understanding of that behavior. All the assumptions made to elaborate the formulations contained in the method have been made in favor of conservative results. The singularity of the analysis of columns with this configuration is due to a combination of different aspects; one of the first steps of this work was to study these design aspects and understand the role each plays in the column's response.
Among these aspects, special attention was dedicated to the column's own creep due to permanent actions such as rheological deck displacements, and also to the implications of longitudinally guided PTFE POT bearings for the design of the column. The result of this study is the design method presented in this Thesis, which allows a compliant vertical reinforcement distribution along the column to be worked out. The design of horizontal reinforcement due to shear forces is not addressed in this Thesis. The method's formulations are meant to be applicable to the greatest possible number of cases, leaving many of the parameter values to the engineer's judgement; in this regard, the method is a helpful tool for a wide range of cases. The widespread use of European standards in recent years, in particular the so-called Eurocodes, is one of the reasons why this Thesis has been developed in accordance with them. The same approach has been followed for the bearing design implications, which are covered by the relatively recent European standard EN 1337. One of the most relevant aspects that this work has taken from the Eurocodes is the safety format for non-linear calculations. The simplified biaxial bending approach used in the design method presented in this work also relies on Eurocode recommendations. The columns under analysis are governed by a set of dimensionless parameters that are presented in this work. The identification of these parameters is helpful for design purposes, since two columns with identical dimensionless parameters may be designed together. The first group of parameters relates to the cross-sectional behavior, represented in the bending-curvature diagrams; a second group defines the column response. Thanks to the identification of the governing dimensionless parameters, it has been possible to produce what have been named Dimensionless Design Curves, which allow a preliminary vertical reinforcement distribution for the column to be obtained in a short time. These curves are of little use nowadays, firstly because each family of curves refers to specific values of many different parameters and, secondly, because the use of computers allows extremely quick and accurate calculations.
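
As a rough sketch of the driving action on such piers (a standard estimate, not a formula from the Thesis), the imposed longitudinal displacement at a free-sliding bearing located a distance $L_x$ from the fixed point can be written as

$$ \delta(x) \simeq \left( \alpha\,\Delta T + \varepsilon_{cs} + \varepsilon_{cc} \right) L_x, $$

where $\alpha$ is the coefficient of thermal expansion of the deck, $\Delta T$ the uniform temperature variation and $\varepsilon_{cs}$, $\varepsilon_{cc}$ the shrinkage and creep strains; the pier and bearing must accommodate this displacement rather than a force.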

Relevance: 80.00%

Abstract:

The wind-flow pattern over embankments overexposes the rolling stock travelling on them to wind loads. Windbreaks are a common solution for changing the flow characteristics in order to reduce the unwanted effects induced by crosswind. The shelter effectiveness of a set of windbreaks placed on a railway twin-track embankment is analysed experimentally. A set of two-dimensional wind-tunnel tests is undertaken, and results corresponding to pressure-tap measurements over a section of a typical high-speed train are presented here. The results indicate that even small-height windbreaks provide a sheltering effect to the vehicles. Also, eaves located at the windbreak tips seem to improve their sheltering effect.
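
Pressure-tap results of this kind are normally reported as pressure coefficients; the standard definition assumed here (not quoted from the paper) is

$$ C_p = \frac{p - p_\infty}{\tfrac{1}{2}\,\rho\,U_\infty^2}, $$

with $p$ the pressure measured at the tap, $p_\infty$ and $U_\infty$ the free-stream static pressure and velocity, and $\rho$ the air density.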

Relevance: 80.00%

Abstract:

The continuous improvement of manufacturing processes is critical to achieving optimal levels of productivity, quality and cost in the production of components and products. To this end, it is necessary to have models that accurately relate the variables involved in the cutting process. This research aims to determine the influence of cutting speed and feed on the flank wear of GC1115 and GC2015 coated carbide inserts and on the surface roughness of the machined part in dry high-speed turning of AISI 316L steel. Scientific observation, experimental, measurement, statistical and artificial intelligence methods were used, among others. The GC1115 insert achieves the best result according to the graph of means and the multiple regression equations for flank wear at v = 350 m/min, while at the remaining speeds the GC2015 insert performs best. The best behavior in terms of the surface roughness of the machined part was obtained with the GC1115 insert at speeds of 350 m/min and 400 m/min; at 450 m/min the best result corresponded to the GC2015 insert. Two new criteria were analyzed: the tool-life coefficient relative to the volume of metal removed, and the surface roughness coefficient of the machined part relative to the volume of metal removed. Multiple regression models were determined to calculate the machining time of the inserts before the flank wear criterion limit is reached. The developed models were evaluated for their predictive capability against the experimentally measured values.
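
A minimal sketch of how such a multiple-regression wear model can be fitted (the power-law form and the data below are placeholders, not the models or measurements of the study):

```python
import numpy as np

# Fit flank wear VB against cutting speed v and feed f with a power-law model
#   VB = C * v^a * f^b
# by ordinary least squares in log space. Data values are placeholders only.
v = np.array([350, 350, 400, 400, 450, 450], dtype=float)   # cutting speed, m/min
f = np.array([0.10, 0.20, 0.10, 0.20, 0.10, 0.20])          # feed, mm/rev
VB = np.array([0.08, 0.11, 0.12, 0.16, 0.18, 0.24])         # flank wear, mm (placeholder)

X = np.column_stack([np.ones_like(v), np.log(v), np.log(f)])
coef, *_ = np.linalg.lstsq(X, np.log(VB), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"VB ~ {C:.3g} * v^{a:.2f} * f^{b:.2f}")
```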

Relevance: 80.00%

Abstract:

This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot, configured with 4 links and 4 degrees of freedom and subjected to machining forces at its end. The proposed working conditions are those typical of manufacturing plants producing aluminium-alloy parts for the automotive industry. These components come from a first casting process that produces rough parts. For medium and high volumes, depending on the required mechanical and plastic properties and the production costs, high-pressure die casting (HPDC) and low-pressure casting (LPC) are the most widely used technologies in this first phase. For high-pressure die casting, the most used aluminium alloys are, in symbolic designation according to the EN 1706 standard (numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500) and EN AC AlSi12Cu1(Fe) (EN AC 47100). For low pressure, EN AC AlSi7Mg0,3 (EN AC 42100) is used. For the first three alloys, the allowed Si content can exceed 10%; the fourth alloy has admissible limits under 10% Si.
That means, from the point of view of machining, that components made of alloys with Si content above 10% can be considered as equivalent, and the fourth one must be studied separately. Geometrical and dimensional tolerances directly achievables from casting, gathered in standards such as ISO 8062 or DIN 1688-1, establish a limit for this process. Out from those limits, guarantees to achieve batches with objetive ppms currently accepted by market, force to go to subsequent machining process. Those geometries that functionally require a geometrical and/or dimensional tolerance defined according ISO 1101, not capable with initial moulding process, must be obtained afterwards in a machining phase with machining cells. In this case, tolerances achievables with cutting processes are gathered in standards such as ISO 2768. In general terms, machining cells contain several CNCs that they are interrelated and connected by robots that handle parts in process among them. Those robots have at their end a gripper in order to take/remove parts in machining fixtures, in interchange tables to modify position of part, in measurement and control tooling devices, or in entrance/exit conveyors. Repeatibility for robot is tight, even few hundredths of mm, defined according ISO 9283. Problem is like this; those repeatibilty ranks are only guaranteed when there are no stresses or they are not significant (f.e. due to only movement of parts). Although inertias due to moving parts at a high speed make that intermediate paths have little accuracy, at the beginning and at the end of trajectories (f.e, when picking part or leaving it) movement is made with very slow speeds that make lower the effect of inertias forces and allow to achieve repeatibility before mentioned. It does not happens the same if gripper is removed and it is exchanged by an spindle with a machining tool such as a drilling tool, a pcd boring tool, a face or a tangential milling cutter… Forces due to machining would create such big and variable torques in joints that control from the robot would not be able to react (or it is not prepared in principle) and would produce a deviation in working trajectory, made at a low speed, that would trigger a position error (see ISO 5458 standard) not assumable for requested function. Then it could be possible that tolerance achieved by a more exact expected process would turn out into a worst dimension than the one that could be achieved with casting process, in principle with a larger dimensional variability in process (and hence with a larger tolerance range reachable). As a matter of fact, accuracy is very tight in CNC, (its influence can be ignored in most cases) and it is not the responsible of, for example position tolerance when drilling a hole. Factors as, room and part temperature, manufacturing quality of machining fixtures, stiffness at clamping system, rotating error in 4th axis and part positioning error, if there are previous holes, if machining tool is properly balanced, if shank is suitable for that machining type… have more influence. It is interesting to know that, a non specific element as common, at a manufacturing plant in the enviroment above described, as a robot (not needed to be added, therefore with an additional minimum investment), can improve value chain decreasing manufacturing costs. 
And when it would be possible to combine that the robot dedicated to handling works could support CNCs´ works in its many waiting time while CNCs cut, and could take an spindle and help to cut; it would be double interesting. So according to all this, it would be interesting to be able to know its behaviour and try to explain what would be necessary to make this possible, reason of this work. Selected robot architecture is SCARA type. The search for a robot easy to be modeled and kinematically and dinamically analyzed, without significant limits in the multifunctionality of requested operations, has lead to this choice. Due to that, other very popular architectures in the industry, f.e. 6 DOFs anthropomorphic robots, have been discarded. This robot has 3 joints, 2 of them are revolute joints (1 DOF each one) and the third one is a cylindrical joint (2 DOFs). The first joint, a revolute one, is used to join floor (body 1) with body 2. The second one, a revolute joint too, joins body 2 with body 3. These 2 bodies can move horizontally in X-Y plane. Body 3 is linked to body 4 with a cylindrical joint. Movement that can be made is paralell to Z axis. The robt has 4 degrees of freedom (4 motors). Regarding potential works that this type of robot can make, its versatility covers either typical handling operations or cutting operations. One of the most common machinings is to drill. That is the reason why it has been chosen for the model and analysis. Within drilling, in order to enclose spectrum force, a typical solid drilling with 9 mm diameter. The robot is considered, at the moment, to have a behaviour as rigid body, as biggest expected influence is the one due to torques at joints. In order to modelize robot, it is used multibodies system method. There are under this heading different sorts of formulations (f.e. Denavit-Hartenberg). D-H creates a great amount of equations and unknown quantities. Those unknown quatities are of a difficult understanding and, for each position, one must stop to think about which meaning they have. The choice made is therefore one of formulation in natural coordinates. This system uses points and unit vectors to define position of each different elements, and allow to share, when it is possible and wished, to define kinematic torques and reduce number of variables at the same time. Unknown quantities are intuitive, constrain equations are easy and number of equations and variables are strongly reduced. However, “pure” natural coordinates suffer 2 problems. The first one is that 2 elements with an angle of 0° or 180°, give rise to singular positions that can create problems in constrain equations and therefore they must be avoided. The second problem is that they do not work directly over the definition or the origin of movements. Given that, it is highly recommended to complement this formulation with angles and distances (relative coordinates). This leads to mixed natural coordinates, and they are the final formulation chosen for this MTh. Mixed natural coordinates have not the problem of singular positions. And the most important advantage lies in their usefulness when applying driving forces, torques or evaluating errors. As they influence directly over origin variable (angles or distances), they control motors directly. The algorithm, simulation and obtaining of results has been programmed with Matlab. To design the model in mixed natural coordinates, it is necessary to model the robot to be studied in 2 steps. 
The first model is based on natural coordinates. To validate it, a defined trajectory is prescribed and it is analysed kinematically whether the robot fulfils the requested movement while keeping its integrity as a multibody system. The points that configure the robot (in this case the starting and ending points of each element) are quantified. As the elements are considered rigid bodies, each of them is defined by its starting and ending points (the most interesting ones from the point of view of kinematics and dynamics) and by a unit vector non-collinear with those points. Unit vectors are placed wherever there is a rotation axis or where information about an angle is needed; they are not needed to measure distances, and their number does not have to coincide with the number of DOFs. The length of each arm is defined as a geometric constant. The constraints that define the nature of the robot and the relationships among the different elements and their environment are then set. The path is generated as a cloud of consecutive points defined in independent coordinates. Each group of independent coordinates defines, at a specific instant, a defined position and posture of the robot. To obtain it, the dependent coordinates at that instant must be known, and they are calculated by solving the constraint equations with the Newton-Raphson method as a function of the independent coordinates. The reason for doing it this way is that the dependent coordinates must satisfy the constraints, which is not the case for the independent ones. When the suitability of the model has been checked (first approval), the next step is model 2. Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute pairs (3 angles: ϕ1, ϕ2 and ϕ3) and a distance at the prismatic pair (1 distance: s). These relative coordinates become the new independent coordinates, replacing the Cartesian independent coordinates of model 1, which were natural coordinates. It must be reviewed whether the set of unit vectors from model 1 is sufficient or not; for this specific case it was necessary to add 1 additional unit vector in order to define the angles properly, with their related dot and/or cross product equations. The constraints must be increased by at least 4 equations, one per new variable. The approval of model 2 has two phases. The first, as with model 1, is the kinematic analysis of the behaviour along a defined path. During this analysis velocities and accelerations could also be obtained from model 2, but they are not needed; only the movements and finite displacements are of interest. Once the consistency of the movements has been checked (second approval), the behaviour with interpolated trajectories must be analysed kinematically. The kinematic analysis with interpolated trajectories works with a minimum of 3 master points. In this case 3 points have been chosen: starting point, middle point and ending point. The number of interpolations has been 50 in each stretch (there is one stretch between every 2 master points), giving a total of 100 interpolations. The interpolation method used is cubic splines, with the condition of zero acceleration at both the starting and the ending point. This method creates the independent coordinates of the interpolated points of each stretch; the dependent coordinates are obtained by solving the nonlinear constraint equations with the Newton-Raphson method.
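The sketch below, under the same simplified planar geometry assumed above (and again in Python rather than the thesis' Matlab code), illustrates the two numerical ingredients just described: a cubic spline through three master points, with zero acceleration imposed at both ends, generates the independent coordinates of the interpolated points, and Newton-Raphson then solves the nonlinear constraint equations for the dependent coordinates at each interpolated point. The master values, step counts and tolerances are assumptions made only for illustration.

```python
# Sketch: spline-interpolated independent coordinates + Newton-Raphson solution
# of the dependent (natural) coordinates for the simplified 2-link arm.
import numpy as np
from scipy.interpolate import CubicSpline

L1, L2 = 0.4, 0.3   # same assumed arm lengths as in the previous sketch

def phi_dep(d, p1, p2):
    """Constraint residuals as a function of the dependent coordinates d = [Bx, By, Cx, Cy]."""
    Bx, By, Cx, Cy = d
    return np.array([
        Bx**2 + By**2 - L1**2,
        (Cx - Bx)**2 + (Cy - By)**2 - L2**2,
        Bx - L1 * np.cos(p1),
        Bx * (Cx - Bx) + By * (Cy - By) - L1 * L2 * np.cos(p2),
    ])

def jac_dep(d):
    """Analytic Jacobian of the residuals with respect to the dependent coordinates."""
    Bx, By, Cx, Cy = d
    return np.array([
        [2 * Bx,         2 * By,         0.0,           0.0],
        [-2 * (Cx - Bx), -2 * (Cy - By), 2 * (Cx - Bx), 2 * (Cy - By)],
        [1.0,            0.0,            0.0,           0.0],
        [Cx - 2 * Bx,    Cy - 2 * By,    Bx,            By],
    ])

def solve_dependent(p1, p2, d0, tol=1e-10, max_iter=20):
    """Newton-Raphson: drive the constraint residuals to zero starting from the guess d0."""
    d = d0.copy()
    for _ in range(max_iter):
        r = phi_dep(d, p1, p2)
        if np.linalg.norm(r) < tol:
            break
        d = d - np.linalg.solve(jac_dep(d), r)
    return d

# 3 master points per joint angle (start, middle, end); bc_type='natural' imposes
# zero second derivative (i.e. zero acceleration) at both ends of the spline.
t_master = np.array([0.0, 1.0, 2.0])
p1_spline = CubicSpline(t_master, [0.2, 0.7, 1.2], bc_type='natural')
p2_spline = CubicSpline(t_master, [0.5, 0.9, 0.4], bc_type='natural')
t = np.linspace(0.0, 2.0, 101)            # ~50 interpolated points per stretch (assumed)
p1_t, p2_t = p1_spline(t), p2_spline(t)

# Initial guess: the exact posture at the first master point; each solution seeds the next step.
d = np.array([L1 * np.cos(p1_t[0]),
              L1 * np.sin(p1_t[0]),
              L1 * np.cos(p1_t[0]) + L2 * np.cos(p1_t[0] + p2_t[0]),
              L1 * np.sin(p1_t[0]) + L2 * np.sin(p1_t[0] + p2_t[0])])
trajectory = []
for p1, p2 in zip(p1_t, p2_t):
    d = solve_dependent(p1, p2, d)
    trajectory.append(d.copy())
print(np.round(trajectory[-1], 4))        # dependent coordinates at the end of the path
```

Seeding each Newton-Raphson solve with the previous solution keeps the iteration count very low, since consecutive interpolated postures differ only slightly.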
The cubic spline method is very smooth; therefore, when a trajectory containing at least 2 clearly different movements has to be designed, it must be built in 2 steps and the parts joined afterwards. That would be the case, for instance, when one of the motors stays stopped during the first movement and a different motor stays stopped during the second one (and so on). Once the movement is obtained, the independent velocities and accelerations are calculated with numerical differentiation formulas. This process is analogous to the one explained before, remembering the condition that the acceleration at t = 0 and t = end is 0. The dependent velocities and accelerations are calculated by solving the corresponding derivatives of the constraint equations. In a third approval of the model the consistency of the interpolated movement is checked again. Inverse dynamics calculates, for a defined movement (position, velocity and acceleration known at each instant of time) and for known acting external forces (e.g. weights), which forces or torques must be applied at the motors (where there is control) in order to obtain the requested movement. In inverse dynamics each instant of time is independent of the others; it has a position, a velocity, an acceleration and known forces. In this specific case only the forces due to weight are applied for the moment, though forces of another nature could have been added if preferred. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces taken into account (weight) is calculated. As the final result of the inverse dynamic analysis, the torques that the 4 motors must apply to reproduce the requested movement under the acting forces are obtained. The fourth approval of the model consists of confirming that the movement achieved when applying the torques obtained in the inverse dynamics agrees with the movement from the kinematic analysis (the theoretical movement). For this it is necessary to work with direct dynamics, which calculates the movement of the robot that results from applying torques at the motors and forces on the robot. Therefore, as no condition obtained in the inverse dynamics has changed (motor torques and inertial forces due to the weight of the elements), the resulting real movement must be the same as the theoretical one. When these results are achieved, the robot is considered ready to work. When an external machining force that was not taken into account in the inverse dynamics is introduced, while the motor torques remain those of the inverse dynamics, the real movement obtained is no longer the same as the theoretical movement. Closed-loop control is based on comparing the real movement with the expected movement and introducing the corrections required to minimise or cancel the differences: gains are applied, as position corrections, to remove those differences. The position error is evaluated as the difference, at each point, between the theoretical movement (calculated in the kinematic analysis) and the real movement achieved for each machining force and for a specific gain. Finally, the position errors obtained for each machining force and gain are mapped, giving a chart with the best accuracy the robot can offer for each requested operation and the conditions that must be provided.
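To make the closed-loop idea concrete, the sketch below uses a deliberately simplified single-joint stand-in (constant inertia, no gravity) instead of the full SCARA direct-dynamics model of the thesis: the feedforward torque comes from a toy inverse dynamics of the nominal movement, an unmodelled machining torque acts as a disturbance, a proportional gain on the position error provides the correction, and the worst position error is finally mapped over a grid of machining torques and gains, mirroring the error chart described above. All numerical values are assumptions.

```python
# Sketch: closed-loop correction of an unmodelled machining disturbance and
# mapping of the worst position error over (machining torque, gain) pairs.
import numpy as np

J = 0.05                 # joint inertia [kg m^2], assumed
dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)

# Nominal (theoretical) joint trajectory and its acceleration: a smooth
# rest-to-rest motion standing in for the spline-interpolated movement.
theta_ref = 0.5 * (1 - np.cos(np.pi * t / T))
acc_ref = 0.5 * (np.pi / T) ** 2 * np.cos(np.pi * t / T)
tau_ff = J * acc_ref     # "inverse dynamics" torque for the nominal movement

def max_position_error(tau_mach, K):
    """Simulate the joint with feedforward + proportional correction and
    return the largest deviation from the theoretical movement [rad]."""
    theta, omega, worst = theta_ref[0], 0.0, 0.0
    for k in range(len(t)):
        err = theta_ref[k] - theta
        worst = max(worst, abs(err))
        tau = tau_ff[k] + K * err + tau_mach   # motor torque + correction + disturbance
        omega += (tau / J) * dt                # semi-implicit Euler integration
        theta += omega * dt
    return worst

# Map the error for a grid of machining torques and gains (the "chart" of
# achievable accuracy for each requested operation).
machining_torques = np.array([0.0, 0.2, 0.5, 1.0])   # [N m], assumed
gains = np.array([0.0, 5.0, 20.0, 80.0])             # proportional gains, assumed
error_map = np.array([[max_position_error(tm, K) for K in gains]
                      for tm in machining_torques])
print(np.round(error_map, 4))
```

As expected, the mapped error grows with the machining torque and shrinks as the correction gain increases, which is the kind of trade-off chart the study aims to provide for each requested operation.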