981 results for low speed CCD


Abstract:

In this study we compared the microleakage of conventional glass ionomer cement (GIC) restorations following the use of different methods of root caries removal. In vitro root caries were induced in 75 human root dentin samples that were divided into five groups of 15 each according to the method used for caries removal: in group 1 spherical carbide burs at low speed were used, in group 2 a hand-held excavator was used, and in groups 3 to 5 an Er,Cr:YSGG laser was used at 2.25 W, 40.18 J/cm² (group 3), 2.50 W, 44.64 J/cm² (group 4) and 2.75 W, 49.11 J/cm² (group 5). The air/water cooling during irradiation was set to 55%/65%, respectively. All cavities were filled with GIC. Five samples from each group were evaluated by scanning electron microscopy (SEM) and the other ten samples were thermocycled and submitted to a microleakage test. The data obtained were compared by ANOVA followed by Fisher's test (p ≤ 0.05). Group 4 showed the lowest microleakage index (56.65 ± 6.30; p < 0.05). There were no significant differences among the other groups. In the SEM images, samples of groups 1 and 2 showed a more regular interface than the irradiated samples. Demineralized dentin was observed below the restorations, probably corresponding to affected dentin. Group 4 showed the lowest microleakage values compared to the other experimental groups; thus, under the conditions of the present study, the method that provided the lowest microleakage was the Er,Cr:YSGG laser with a power output of 2.5 W yielding an energy density of 44.64 J/cm².

Abstract:

This thesis describes a generalized approach to the control of three-phase electrical machines. The first part is devoted to the development of a general modelling methodology, i.e. one able to describe, from a mathematical point of view, the behaviour of a generic electrical machine, and therefore to embed in a single model the salient characteristics of every specific machine type. The next step is the implementation of a control algorithm for electrical machines that relies on this generalized theory and uses for its operation the quantities provided by the unified machine model. The control strategy adopted is the one commonly known as field-oriented control (FOC), for which specific measures have been identified to improve its dynamic performance and the control of the delivered torque. Finally, a series of experimental tests is presented, with the aim of highlighting some crucial aspects of controlling electrical machines with a field-oriented algorithm and, above all, of verifying the soundness of the generalized approach to three-phase machines. The experimental results confirm the applicability of the method to different machine types (induction and synchronous) and were verified in the most critical operating conditions: low speed, high speed, light loads, slow dynamics and fast dynamics.
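Field-oriented control of the kind described above is typically built around the Clarke and Park transforms followed by PI current regulators in the rotating d-q frame. The following minimal Python sketch illustrates one sampling step of such a textbook scheme; it is not taken from the thesis, and the gains, sampling time and current values are illustrative placeholders.

```python
import numpy as np

def clarke_park(i_a, i_b, i_c, theta):
    """Transform measured phase currents into the rotating d-q frame."""
    # Clarke (amplitude-invariant) transform: abc -> alpha/beta
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (i_b - i_c)
    # Park transform: alpha/beta -> d-q using the flux angle theta
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

def foc_current_step(i_d, i_q, i_d_ref, i_q_ref, state, kp=5.0, ki=200.0, ts=1e-4):
    """One sampling period of the two PI current regulators in the d-q frame.

    The gains kp, ki and the sampling time ts are illustrative placeholders.
    Returns the d-q voltage references to be fed to the inverse Park
    transform and the PWM modulator.
    """
    e_d, e_q = i_d_ref - i_d, i_q_ref - i_q
    state["int_d"] += ki * e_d * ts
    state["int_q"] += ki * e_q * ts
    v_d_ref = kp * e_d + state["int_d"]
    v_q_ref = kp * e_q + state["int_q"]
    return v_d_ref, v_q_ref

# Example: regulate the torque-producing current i_q with zero flux current i_d
state = {"int_d": 0.0, "int_q": 0.0}
i_d, i_q = clarke_park(1.0, -0.5, -0.5, theta=0.3)
v_d_ref, v_q_ref = foc_current_step(i_d, i_q, 0.0, 2.0, state)
```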

Abstract:

In this thesis, the industrial problem of controlling a Permanent Magnet Synchronous Motor in a sensorless configuration has been addressed, and in particular the task of estimating the unknown quantities necessary for the application of standard motor control algorithms. Several techniques have been proposed in the literature to cope with this task; among them, the approach based on model-based nonlinear observers has been followed. The mechanical dynamics have been neglected in the motor model on the basis of practical and physical considerations, so only the electromagnetic dynamics are used for the observer design. The first observer proposed is based on the stator current and stator flux dynamics described in a generic rotating reference frame. The stator flux dynamics are known apart from their initial conditions, which are estimated, together with the speed (also unknown), through the use of adaptive theory. The second observer proposed is based on the stator current and rotor flux dynamics described in a self-aligning reference frame. The rotor flux dynamics are described in the stationary reference frame using polar coordinates instead of the classical Cartesian coordinates, by estimating the amplitude and speed of the rotor flux. The stability proof is derived in a singular perturbation framework, which allows the current estimation errors to be used as a measure of the rotor flux estimation errors. The stability properties have been derived using a specific theory for systems with time-scale separation, which guarantees semi-global practical stability. For the two observers, both ideal and realistic simulations have been performed to prove their effectiveness; in the realistic simulations the effects of the inverter nonlinearities have been introduced, showing the already known problems of model-based observers in low speed applications.
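The observers above are model-based; a much simpler relative is the textbook open-loop stator-flux estimator, which integrates the stator voltage equation and already shows why purely model-based schemes degrade at low speed (integration drift, sensitivity to the stator resistance). A minimal sketch, assuming placeholder machine parameters and synthetic waveforms rather than the thesis models:

```python
import numpy as np

# Open-loop stator-flux estimator in the stationary (alpha/beta) frame:
# psi_s = integral of (v_s - R_s * i_s) dt, discretized with forward Euler.
# R_s, Ts and the input waveforms below are illustrative placeholders.
R_s = 0.5          # stator resistance [ohm] (assumed)
Ts = 1e-4          # sampling period [s]
t = np.arange(0.0, 0.2, Ts)
v_alpha = 100 * np.cos(2 * np.pi * 50 * t)
v_beta = 100 * np.sin(2 * np.pi * 50 * t)
i_alpha = 10 * np.cos(2 * np.pi * 50 * t - 0.2)
i_beta = 10 * np.sin(2 * np.pi * 50 * t - 0.2)

psi_alpha, psi_beta = np.zeros_like(t), np.zeros_like(t)
for k in range(len(t) - 1):
    psi_alpha[k + 1] = psi_alpha[k] + Ts * (v_alpha[k] - R_s * i_alpha[k])
    psi_beta[k + 1] = psi_beta[k] + Ts * (v_beta[k] - R_s * i_beta[k])

# A sensorless FOC scheme would then use the estimated flux angle:
theta_hat = np.arctan2(psi_beta, psi_alpha)
```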

Abstract:

In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent one solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model predicts turbine performance within an accuracy of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm in which the turbine total-to-static efficiency is the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m³/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests that a more accurate analysis could be obtained by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m³/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and hence the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
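The coupling between the performance model and the optimizer described above can be pictured with a generic genetic-algorithm loop maximizing a total-to-static efficiency function. The sketch below is purely illustrative: the two design variables, their bounds and the quadratic objective stand in for the actual mean-line turbine model and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_to_static_efficiency(x):
    """Placeholder objective standing in for the mean-line turbine model.
    x[0], x[1] could represent, e.g., a loading and a flow coefficient."""
    psi, phi = x
    return 0.9 - (psi - 1.2) ** 2 - 2.0 * (phi - 0.6) ** 2

bounds = np.array([[0.8, 2.0], [0.3, 1.0]])   # illustrative design-variable ranges
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for generation in range(50):
    fitness = np.array([total_to_static_efficiency(x) for x in pop])
    # Tournament selection between random pairs of individuals
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Arithmetic crossover and Gaussian mutation, clipped to the bounds
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(scale=0.02, size=children.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([total_to_static_efficiency(x) for x in pop])]
```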

Abstract:

Background As predicted by theory, traits associated with reproduction often evolve at a comparatively high speed. This is especially the case for courtship behaviour which plays a central role in reproductive isolation. On the other hand, courtship behavioural traits often involve morphological and behavioural adaptations in both sexes; this suggests that their evolution might be under severe constraints, for instance irreversibility of character loss. Here, we use a recently proposed method to retrieve data on a peculiar courtship behavioural trait, i.e. antennal coiling, for 56 species of diplazontine parasitoid wasps. On the basis of a well-resolved phylogeny, we reconstruct the evolutionary history of antennal coiling and associated morphological modifications to study the mode of evolution of this complex character system. Results Our study reveals a large variation in shape, location and ultra-structure of male-specific modifications on the antennae. As for antennal coiling, we find either single-coiling, double-coiling or the absence of coiling; each state is present in multiple genera. Using a model comparison approach, we show that the possession of antennal modifications is highly correlated with antennal coiling behaviour. Ancestral state reconstruction shows that both antennal modifications and antennal coiling are highly congruent with the molecular phylogeny, implying low levels of homoplasy and a comparatively low speed of evolution. Antennal coiling is lost on two independent occasions, and never reacquired. A zero rate of regaining antennal coiling is supported by maximum parsimony, maximum likelihood and Bayesian approaches. Conclusions Our study provides the first comparative evidence for a tight correlation between male-specific antennal modifications and the use of the antennae during courtship. Antennal coiling in Diplazontinae evolved at a comparatively low rate, and was never reacquired in any of the studied taxa. This suggests that the loss of antennal coiling is irreversible on the timescale examined here, and therefore that evolutionary constraints have greatly influenced the evolution of antennal courtship in this group of parasitoid wasps. Further studies are needed to ascertain whether the loss of antennal coiling is irreversible on larger timescales, and whether evolutionary constraints have influenced courtship behavioural traits in a similar way in other groups.

Abstract:

Imaging of biological samples has been performed with a variety of techniques, for example electromagnetic waves, electrons, neutrons, ultrasound and X-rays. Although conventional X-ray imaging represents the basis of medical diagnostic imaging, it remains of limited use in this application because it is based solely on the differential absorption of X-rays by tissues. Coherent and bright photon beams, such as those produced by third-generation synchrotron X-ray sources, provide further information on subtle X-ray phase changes at matter interfaces. This complements conventional X-ray absorption through edge enhancement phenomena. Thus, phase contrast imaging has the potential to improve the detection of structures in images by revealing those structures that are invisible with X-ray absorption imaging. Images of a weakly absorbing nylon fibre were recorded in an in-line holography geometry using a high resolution, low-noise CCD camera at the ESRF in Grenoble. The method was also applied to improve image contrast for images of biological tissues. This paper presents phase contrast microradiographs of vascular tree casts and images of a housefly. These reveal very fine structures that remain invisible with conventional absorption contrast alone.
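Propagation-based (in-line) phase contrast of the kind described above can be illustrated numerically: a pure phase object gives a uniform intensity at its exit plane, but free-space propagation turns the phase gradients at its edges into intensity fringes. The numpy sketch below uses a paraxial Fresnel transfer function; the wavelength, propagation distance, pixel size and object are illustrative assumptions, not the ESRF experimental settings.

```python
import numpy as np

# Illustrative parameters (not the experimental values of the paper)
wavelength = 1.0e-10      # ~12 keV X-rays [m]
distance = 1.0            # sample-to-detector propagation distance [m]
pixel = 1.0e-6            # detector pixel size [m]
n = 512

# Pure phase object: a weakly absorbing "fibre" that only shifts the phase
x = (np.arange(n) - n / 2) * pixel
X, Y = np.meshgrid(x, x)
phase = -0.5 * (np.abs(X) < 50e-6)           # phase shift inside a 100 um wide strip
u0 = np.exp(1j * phase)                      # unit amplitude -> no absorption contrast

# Fresnel propagation with the angular-spectrum / transfer-function method
fx = np.fft.fftfreq(n, d=pixel)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * distance * (FX ** 2 + FY ** 2))
uz = np.fft.ifft2(np.fft.fft2(u0) * H)

contact_image = np.abs(u0) ** 2              # uniform: invisible in absorption
propagated_image = np.abs(uz) ** 2           # edge-enhanced fringes at the fibre borders
```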

Abstract:

A building energy meter network, based on a per-appliance monitoring system, will be an important part of the Advanced Metering Infrastructure. Two key issues exist when designing such networks. One is the network structure to be used. The other is the implementation of that structure on a large number of small, low-power devices, and the maintenance of high quality communication when the devices are electrically connected to the high-voltage AC line. The recent advancement of low-power wireless communication makes it the right candidate for house and building energy networks. Among the available wireless solutions, the low speed but highly reliable 802.15.4 radio has been chosen for this design. While many network-layer solutions have been built on top of 802.15.4, an IPv6-based method is used here; 6LoWPAN is the protocol that adapts IP to low-power personal area network radios. In order to extend the network to the building scale, a specific network-layer routing mechanism, RPL, is also included in the design. The fundamental unit of the building energy monitoring system is a smart wall plug. It consists of an electricity energy meter, an RF communication module and a low-power CPU. The real challenge in designing such a device is its network firmware. In this design, IPv6 is implemented through the Contiki operating system. Custom hardware drivers and the meter application program have been developed on top of the Contiki OS. Experiments have been carried out in order to prove the networking ability of this system.

Abstract:

OBJECTIVE: To analyse risk factors in alpine skiing. DESIGN: A controlled multicentre survey of injured and non-injured alpine skiers. SETTING: One tertiary and two secondary trauma centres in Bern, Switzerland. PATIENTS AND METHODS: All injured skiers admitted from November 2007 to April 2008 were analysed using a completed questionnaire incorporating 15 parameters. The same questionnaire was distributed to non-injured controls. Multiple logistic regression was performed. Patterns of combined risk factors were calculated by inference trees. A total of 782 patients and 496 controls were interviewed. RESULTS: Parameters that were significant for the patients were: high readiness for risk (p = 0.0365, OR 1.84, 95% CI 1.04 to 3.27); low readiness for speed (p = 0.0008, OR 0.29, 95% CI 0.14 to 0.60); no aggressive behaviour on slopes (p < 0.0001, OR 0.19, 95% CI 0.09 to 0.37); new skiing equipment (p = 0.0228, OR 0.59, 95% CI 0.37 to 0.93); warm-up performed (p = 0.0015, OR 1.79, 95% CI 1.25 to 2.57); old snow compared with fresh snow (p = 0.0155, OR 0.31, 95% CI 0.12 to 0.80); old snow compared with artificial snow (p = 0.0037, OR 0.21, 95% CI 0.07 to 0.60); powder snow compared with slushy snow (p = 0.0035, OR 0.25, 95% CI 0.10 to 0.63); drug consumption (p = 0.0044, OR 5.92, 95% CI 1.74 to 20.11); and alcohol abstinence (p < 0.0001, OR 0.14, 95% CI 0.05 to 0.34). Three groups at risk were detected: (1) warm-up 3-12 min, visual analogue scale for speed (VAS speed) >4 and bad weather/visibility; (2) VAS speed 4-7, icy slopes and not wearing a helmet; (3) warm-up >12 min and new skiing equipment. CONCLUSIONS: Low speed, high readiness for risk, new skiing equipment, old and powder snow, and drug consumption are significant risk factors when skiing. Future work should aim to identify more precisely specific groups at risk and to develop recommendations, for example a snow weather index at valley stations.
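For readers unfamiliar with the reporting format, the odds ratios and 95% confidence intervals above come from exponentiating the logistic regression coefficients and their Wald intervals. A minimal sketch, with a coefficient and standard error back-calculated to reproduce the reported "high readiness for risk" estimate (they are illustrative, not the authors' actual regression output):

```python
import math

# OR = exp(beta), 95% CI = exp(beta +/- 1.96 * SE).
# beta and se are chosen so the output matches the reported
# "high readiness for risk" estimate (OR 1.84, 95% CI 1.04 to 3.27).
beta, se = 0.612, 0.292

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# -> OR 1.84, 95% CI 1.04 to 3.27
```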

Abstract:

We appreciate the comments and concerns expressed by Arakawa and colleagues regarding our article, titled "Pulsatile control of rotary blood pumps: Does the modulation waveform matter?"1 Unfortunately, we have to disagree with Arakawa and colleagues. As is obvious from the title of our article, it investigates the effect of different waveforms on the heart–device interaction. In contrast to the authors' claim, this is the first article in the literature that uses basic waveforms (sine, triangle, sawtooth, and rectangular) with different phase shifts to examine their impact on left ventricular unloading. The previous publications2, 3 and 4 merely varied the pump speed during systole and diastole, which was first reported by Bearnson and associates5 in 1996, and studied its effect on aortic pressure, coronary flow, and end-diastolic volume. We should mention that dP/dtmax is a load-sensitive parameter of contractility and is not representative of the degree of unloading. Moreover, none of the aforementioned reports studied mechanical unloading and, in particular, the stroke work of the left ventricle. Our method is unique because we do not simply alternate between high and low speed but have accurate control of the waveform, thanks to the direct drive system of Levitronix Technologies LLC (Waltham, Mass) and a custom-developed pump controller. Without providing a reference, Arakawa and associates state that "several previous studies have already reported the coronary flow diminishes as the left ventricular assist device support increases." It should be noted that all the waveforms used in our study have a 2000 rpm average value with a 1000 rpm amplitude, which is not a speed high enough for the CentriMag rotary pump (Levitronix) to collapse the ventricle and diminish the coronary flow. We agree with Arakawa and coworkers that there is a need for a heart failure model to obtain results more relevant to clinical expectations. However, we have explored many existing models, including species and breeds that have a native proneness to cardiomyopathy, but all of them differ from the genetic presentation in humans. We certainly do not believe that the use of microembolization, in which the coronary circulation is impaired by the injection of microspheres, would form a good model from which to draw conclusions about coronary flow change under different loading conditions. A model would be needed in which either an infarct is created to mimic ischemic heart failure or the coronary circulation remains untouched to simulate, for instance, dilated cardiomyopathy. Furthermore, in the discussion we clearly mention that "lack of heart failure is a major limitation of our study." We also believe that unloading is not the only factor in cardiac functional recovery, and excessive unloading of the left ventricle might lead to cardiac tissue atrophy. Therefore, in our article we mention that control of the level of cardiac unloading by assist devices has been suggested as a mechanical tool to promote recovery, and more studies are required to find better strategies for the speed modulation of rotary pumps and to achieve an optimal heart load control to enhance myocardial recovery. Finally, there are many publications about pulsing rotary blood pumps and it was impossible to include them all. We preferred to reference some of the earlier basic works, such as the original research by Bearnson and coworkers5 and another article published by our group,6 which is more relevant.
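The speed modulation patterns discussed in the letter (a 2000 rpm mean with a 1000 rpm amplitude and an adjustable phase shift) can be generated numerically as follows. This is only an illustration of the waveform definitions, not the controller code used with the CentriMag pump; the modulation frequency and phase below are arbitrary choices.

```python
import numpy as np

mean_rpm, amp_rpm = 2000.0, 1000.0    # values reported in the letter
freq = 1.0                            # modulation frequency [Hz] (illustrative)
phase = np.pi / 2                     # phase shift relative to systole (illustrative)
t = np.linspace(0.0, 2.0, 2000)       # two modulation periods
x = freq * t + phase / (2 * np.pi)    # normalized, phase-shifted time

sine = mean_rpm + amp_rpm * np.sin(2 * np.pi * x)
rectangular = mean_rpm + amp_rpm * np.sign(np.sin(2 * np.pi * x))
sawtooth = mean_rpm + amp_rpm * (2 * (x % 1.0) - 1.0)
triangle = mean_rpm + amp_rpm * (2 * np.abs(2 * (x % 1.0) - 1.0) - 1.0)
```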

Abstract:

Background. The present retrospective study was intended to investigate whether working out and other low-speed sports can provoke cardiovascular, neurological, or traumatic damage. Material and Methods. Patient data from 2007 to 2013 were collected at the university department of emergency medicine and saved in an electronic patient record database. Results. Of the 138 patients included in this study, 83.3% (n = 115) were male and 16.7% (n = 23) were female. Most admissions were due to musculoskeletal accidents (n = 77; 55.8%), followed by neurological incidents (n = 23; 16.7%), cardiovascular incidents (n = 19; 13.8%), soft tissue injuries (n = 3; 2.2%), and others (n = 16; 11.6%). The mean age of all injured patients was 36.7 years. The majority of the patients (n = 113; 81.9%) were treated as outpatients; 24 (17.4%) were inpatients. Discussion. In Switzerland, this is the first study that describes emergency department admissions after workouts and examines trauma as well as neurological and cardiovascular incidents. As specific injuries, such as brain haemorrhages, STEMIs, and epileptic seizures, were relatively frequent, it was hypothesised that working out, with its physiological changes, may be an actual trigger for these injuries, at least in a specific population. Conclusion. Strenuous physical activity may increase the risk of cardiovascular, neurological, or trauma events.

Abstract:

Modern FPGAs with run-time reconfiguration allow the implementation of complex systems offering both the flexibility of software-based solutions and the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, by cost, and by the strong dependence of DPR techniques on the specific features of the underlying device technology. In this work, we tackle the previously reported problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.

Abstract:

Separated transitional boundary layers appear in key aeronautical processes such as the flow around wings or turbomachinery blades. The aim of this thesis is the study of these flows in scenarios representative of technological applications, gaining knowledge about the phenomenology and physical processes that occur there and developing a simple model for scaling them. To achieve this goal, experimental measurements have been carried out in a low speed facility, ensuring flow homogeneity and a disturbance level low enough that unwanted transition mechanisms are avoided. The boundary layers studied have been developed on a flat plate, with a pressure gradient imposed by means of contoured walls. These generate an initial acceleration region followed by a deceleration zone. The initial region is designed so that, at the beginning of the deceleration, the Blasius profile is obtained, characterized by its momentum thickness, together with a boundary layer edge velocity that defines the characteristic velocity of the problem. The deceleration region is designed to obtain a linear evolution of the edge velocity, thereby defining the characteristic length of the problem. Several experimental techniques, both intrusive (hot wire anemometry, total pressure probes) and non-intrusive (PIV and LDV anemometry, high-speed filming), have been used in order to take advantage of each of them and to allow cross-validation of the results. Once the boundary layer at the beginning of the deceleration has been characterized, ensuring the desired integral parameters and disturbance level, the evolution of the laminar boundary layer up to the separation point is studied and compared with integral methods and numerical simulations. In view of the results, a new model for this evolution is proposed. Downstream of the separation, the flow near the wall takes the form of a shear layer that encloses low-momentum recirculating fluid. The region where the shear layer remains laminar tends to position itself so as to compensate the adverse pressure gradient associated with the imposed deceleration. Under these conditions, the momentum thickness remains almost constant. This laminar shear layer region extends up to where transitional phenomena appear, an extent that scales with the momentum thickness at separation. These transitional phenomena are of inviscid type, similar to those found in free shear layers. The analysis of the transitional region begins with a study of the evolution of the disturbances in the linear growth region and the comparison of the experimental results with a numerical model based on linear stability theory for parallel flows and with data from other authors. The collapse of the results, both for the disturbance growth and for the excited frequencies, is demonstrated. In the final stages of transition, the vorticity concentrates into vortex blobs, analogously to what happens in free shear layers. Unlike in those flows, the presence of the wall and the pressure gradient cause the large-scale structures to move towards the wall and, under certain circumstances, to disappear quickly. In these cases the recirculating flow is confined to a closed region, and the bubble is said to be closed or the boundary layer to reattach. From the reattachment point onwards, the flow near the wall shows a configuration traditionally considered as turbulent. It has been observed that existing integral methods for turbulent boundary layers do not fit the experimental results well, since these methods are valid only for fully developed turbulent flow.
Nevertheless, it has been found that downstream of the reattachment point the velocity profiles are self-similar, and a model has been proposed for the evolution of the integral parameters of the boundary layer in this region. Finally, the phenomenon known as bubble burst is analysed. The validity of the models existing in the literature has been checked and a new one is proposed. This phenomenon is attributed to the inability of the large-scale structures formed after transition to overcome the adverse pressure gradient, move towards the wall and close the bubble.
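The momentum thickness used throughout the abstract as the scaling length is defined as θ = ∫ (u/U_e)(1 − u/U_e) dy. A minimal numerical evaluation on an assumed Pohlhausen-type laminar profile (a stand-in for the measured profiles, with arbitrary edge velocity and thickness) looks like this:

```python
import numpy as np

# Momentum thickness: theta = integral of (u/Ue) * (1 - u/Ue) dy.
# The profile below is a rough analytical stand-in for a Blasius-type
# laminar profile, not the experimental data of the thesis.
U_e = 10.0                               # edge velocity [m/s] (assumed)
delta = 5.0e-3                           # boundary layer thickness [m] (assumed)
y = np.linspace(0.0, 2.0 * delta, 400)   # wall-normal coordinate [m]
eta = np.clip(y / delta, 0.0, 1.0)
u = U_e * (2.0 * eta - 2.0 * eta ** 3 + eta ** 4)   # Pohlhausen-type profile

theta = np.trapz((u / U_e) * (1.0 - u / U_e), y)    # momentum thickness [m]
print(f"theta = {theta * 1e3:.3f} mm")
```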

Abstract:

This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot, configured with 4 links and 4 degrees of freedom, and subjected to machining forces at its end effector. The working conditions considered are those typical of plants manufacturing aluminium alloy parts for the automotive industry. This type of component comes from a first casting process that produces rough parts. For medium and high volumes, high pressure die casting (HPDC) and low pressure die casting (LPC) are the most used technologies in this first phase. For high pressure die casting processes, the most used aluminium alloys are, in symbolic designation according to the EN 1706 standard (with the numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500), and EN AC AlSi12Cu1(Fe) (EN AC 47100). For low pressure, EN AC AlSi7Mg0,3 (EN AC 42100). For the first three alloys, the permitted silicon content can exceed 10%; the fourth alloy has admissible limits under 10% Si.
That means, from the point of view of machining, that components made of alloys with a Si content above 10% can be considered equivalent, while the fourth one must be studied separately. The geometrical and dimensional tolerances directly achievable from casting, gathered in standards such as ISO 8062 or DIN 1688-1, set a limit for this process. Beyond those limits, guaranteeing batches with the ppm targets currently accepted by the market forces the parts into a subsequent machining process. Those geometries that functionally require a geometrical and/or dimensional tolerance defined according to ISO 1101, and that cannot be achieved with the initial moulding process, must therefore be obtained afterwards in a machining phase carried out in machining cells. In this case, the tolerances achievable with cutting processes are gathered in standards such as ISO 2768. In general terms, machining cells contain several CNCs that are interrelated and connected by robots that handle the in-process parts among them. Those robots carry a gripper at their end in order to pick and place parts in machining fixtures, in interchange tables used to modify the position of the part, in measurement and inspection devices, or on entrance/exit conveyors. Robot repeatability is tight, down to a few hundredths of a millimetre, defined according to ISO 9283. The problem is that those repeatability ranges are only guaranteed when there are no stresses or when they are negligible (e.g. when only moving parts). Although the inertia of parts moved at high speed means that the intermediate paths have little accuracy, the beginning and end of the trajectories (e.g. when picking or releasing a part) are executed at relatively low speeds, which reduces the effect of the inertial forces and makes it possible to achieve the repeatability mentioned above. The same does not happen if the gripper is removed and exchanged for a spindle with a machining tool such as a drill, a PCD boring tool, or a face or tangential milling cutter. The machining forces would create torques in the joints so large and so variable that the robot controller would not be able to react (or is not prepared to, in principle) and would produce a deviation in the working trajectory, executed at low speed, that would result in a position error (see the ISO 5458 standard) not acceptable for the requested function. It could then happen that the tolerance achieved by a supposedly more exact process turns out to be worse than the one that could be achieved with the casting process, which in principle has a larger dimensional variability (and hence a larger guaranteed tolerance range). As a matter of fact, the accuracy of the CNC itself is very high (its influence can be ignored in most cases) and it is not responsible for, for example, the position tolerance when drilling a hole. Factors such as room and part temperature, the manufacturing quality of the machining fixtures, the stiffness of the clamping system, the rotation error of the 4th axis and the part positioning error, whether there are previous holes, whether the tool is properly balanced and whether the shank is suitable for that type of machining have more influence. It is interesting that such a common, non-specific element in a manufacturing plant of the kind described above as a robot, which does not need to be added because it is already available (and therefore requires a very small investment), can improve the value chain by decreasing manufacturing costs.
And if it were possible to combine this so that the robot dedicated to handling tasks could support the CNCs' work during its many waiting periods while the CNCs cut, picking up a spindle and helping with the machining, it would be doubly interesting. According to all this, it is worth characterizing the robot's behaviour and trying to explain what would be necessary to make this possible, which is the purpose of this work. The robot architecture selected is of the SCARA type. The search for a robot that is easy to model and to analyse kinematically and dynamically, without significant limits on the multifunctionality of the requested operations, has led to this choice; because of that, other architectures that are very popular in industry, e.g. anthropomorphic robots with 6 degrees of freedom, have been discarded. This robot has 3 joints, 2 of which are revolute joints (1 DOF each) and the third a cylindrical joint (2 DOFs). The first joint, a revolute one, joins the floor (body 1) with body 2. The second one, also revolute, joins body 2 with body 3. These 2 arms can move horizontally in the X-Y plane. Body 3 is linked to body 4 by the cylindrical joint, whose movement is parallel to the Z axis. The robot has 4 degrees of freedom (4 motors). Regarding the potential tasks this type of robot can carry out, its versatility covers both typical handling operations and cutting operations. One of the most common machining operations is drilling, which is the reason why it has been chosen for the modelling and analysis. Within drilling, in order to bound the force spectrum, a typical solid drilling operation with a 9 mm diameter drill has been selected. The robot is considered, for the moment, to behave as a rigid body, since the largest expected effect is that of the torques at the joints. To model the robot, the multibody system method is used. Under this heading there are different types of formulations (e.g. Denavit-Hartenberg). D-H generates a very large number of equations and unknowns. Those unknowns are difficult to interpret and, for each position, one must stop to think about their meaning. The formulation chosen is therefore that of natural coordinates. This system uses points and unit vectors to define the position of the different bodies and allows them to be shared, when possible and desired, to define the kinematic pairs and reduce the number of variables at the same time. The unknowns are intuitive, the constraint equations are very simple and the number of equations and unknowns is considerably reduced. However, "pure" natural coordinates suffer from two problems. The first is that two elements at an angle of 0° or 180° give rise to singular positions that can create problems in the constraint equations and must therefore be avoided. The second is that they do not act directly on the definition or the origin of the movements. Given that, it is highly advisable to complement this formulation with angles and distances (relative coordinates). This leads to mixed natural coordinates, which is the final formulation chosen for this thesis. Mixed natural coordinates do not have the problem of singular positions, and their most important advantage lies in their usefulness when applying driving forces and torques or evaluating errors: since they act directly on the origin variables (angles or distances), they control the motors directly. The algorithm, the simulation and the processing of results have been programmed in Matlab. To build the model in mixed natural coordinates, the robot under study must be modelled in 2 steps.
The first model is based on natural coordinates. To validate it, a defined trajectory is prescribed and it is analysed kinematically whether the robot fulfils the requested movement while keeping its integrity as a multibody system. The points (in this case starting and ending points) that configure the robot are identified. As the elements are considered rigid bodies, each of them is defined by its respective starting and ending points (the most interesting ones from the point of view of kinematics and dynamics) and by a unit vector that is not collinear with those points. Unit vectors are placed wherever there is a rotation axis or where information about an angle is needed; they are not needed to measure distances, and the number of degrees of freedom does not have to coincide with the number of unit vectors. The length of each link is defined as a geometric constant. The constraints that define the nature of the robot and the relationships among the different elements and their environment are then set. The path is generated by a continuous cloud of points defined in independent coordinates. Each set of independent coordinates defines, at a specific instant, a given position and posture of the robot. In order to know it, the dependent coordinates at that instant must be known, and they are obtained by solving the constraint equations as a function of the independent coordinates with the Newton-Raphson method. This is done because the dependent coordinates must satisfy the constraints, which is not the case for the independent coordinates. Once the validity of the model has been proven (first validation), the next step is model 2. Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute joints (3 angles: ϕ1, ϕ2 and ϕ3) and distances at the prismatic joints (1 distance: s). These relative coordinates become the new independent coordinates (replacing the Cartesian independent coordinates of the first model, which were natural coordinates). It must then be checked whether the unit vector system of model 1 is sufficient. In this specific case, 1 additional unit vector had to be added so that the angles are fully determined by the corresponding dot and/or cross product equations. The constraints must be increased by at least 4 equations, one per new unknown. The validation of model 2 has two phases. The first, as was done for model 1, is a kinematic analysis of the behaviour along a defined trajectory. In this analysis, velocities and accelerations could be obtained from model 2, but they are not needed; only the movements, or finite displacements, are of interest. Once the consistency of the movements has been checked (second validation), the behaviour with interpolated trajectories is analysed kinematically. The kinematic analysis with interpolated trajectories works with a minimum of 3 master points. In this case, 3 points have been chosen: starting point, middle point and ending point. The number of interpolations used is 50 in each stretch (there is one stretch between every 2 master points), giving a total of 100 interpolations. The interpolation method used is cubic splines with the condition of constant acceleration at both the starting and the ending point; it generates the independent coordinates of the interpolated points of each stretch. The dependent coordinates are obtained by solving the nonlinear constraint equations with the Newton-Raphson method.
The cubic spline method enforces a high degree of continuity, so when a trajectory contains at least 2 clearly different movements it must be built in 2 steps and the pieces joined afterwards. This is the case, for example, when one motor stays stopped during the first movement and a different motor remains stopped during the second movement (and so on). Once the movement is obtained, the independent velocities and accelerations are calculated, also with numerical differentiation formulas. The process is analogous to the one explained before, recalling the condition that the acceleration at t = 0 and t = end is 0. The dependent velocities and accelerations are calculated by solving the corresponding derivatives of the constraint equations. In a third approval of the model, the consistency of the interpolated movement is checked again. Inverse dynamics computes, for a defined movement (position, velocity and acceleration known at each instant of time) and for the known external forces acting (e.g. weights), the forces that must be applied at the motors (where there is control) in order to obtain the requested movement. In inverse dynamics each instant of time is independent of the others, with its own position, velocity, acceleration and known forces. In this specific case only the forces due to weight are applied for the moment, although forces of another nature could have been added if desired. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces considered (weight) is computed. The final result of the inverse dynamic analysis is the set of torques that the 4 motors must apply to reproduce the requested movement under the acting forces. The fourth approval of the model consists of confirming that the movement achieved using the torques obtained from the inverse dynamics agrees with the movement from the kinematic analysis (the theoretical movement). This requires direct dynamics: direct dynamics computes the movement of the robot that results from applying given torques at the motors and given forces on the robot. Since no condition has changed with respect to the inverse dynamics (motor torques and inertial forces due to the weight of the elements), the resulting real movement must coincide with the theoretical one. When these results are achieved, the robot is considered ready to work. When an external machining force is introduced that was not taken into account in the inverse dynamics, while the motor torques remain those of the inverse dynamics, the real movement obtained is no longer the same as the theoretical movement. Closed-loop control is based on comparing the real movement with the expected movement and introducing the corrections required to minimise or cancel the differences; gains are applied as corrections in position and/or tolerance to remove those differences. The position error is evaluated, at each point, as the difference between the theoretical movement (calculated in the kinematic analysis) and the real movement achieved for each machining force and for a specific gain. Finally, the position errors obtained for each machining force and gain are mapped, yielding a chart of the best accuracy the robot can deliver for each requested operation and of the conditions that must be provided.
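The gain-based correction described above can be illustrated with a minimal sketch (assumed function name, gain and numerical values; an illustration of the idea, not the thesis's Matlab implementation): the theoretical trajectory from the kinematic analysis is compared with the trajectory reached under the machining force, and a proportional correction is added on top of the inverse-dynamics torque.

```python
import numpy as np

def corrected_torque(tau_inverse_dynamics, q_theoretical, q_real, gain):
    """Add a proportional position correction on top of the inverse-dynamics torque."""
    position_error = q_theoretical - q_real           # per-joint error at this instant
    tau = tau_inverse_dynamics + gain * position_error
    return tau, position_error

# Illustrative numbers for the 4 actuated coordinates (phi1, phi2, phi3 [rad], s [m])
tau_id = np.array([12.0, 8.0, 1.5, 30.0])     # torques/forces from inverse dynamics
q_th   = np.array([0.50, 1.10, -0.30, 0.10])  # theoretical configuration
q_real = np.array([0.48, 1.12, -0.31, 0.09])  # configuration reached under a machining force
tau, err = corrected_torque(tau_id, q_th, q_real, gain=50.0)
print(err)  # this position error is what gets mapped against machining force and gain
```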

Relevância:

80.00% 80.00%

Publicador:

Resumo:

This doctoral thesis has focused on the study of unsteady aerodynamic loads on blunt, non-streamlined bodies (bluff bodies). With this aim, the following points have been identified and analysed: -Characterisation of the flow measured with different types of Pitot tubes and a hot-wire anemometer under the unsteady flow conditions generated by a gust wind tunnel. -Design and integration of the experimental setups required to measure the internal and external wind loads acting on bluff bodies under gusty wind flow conditions. -Implementation of semi-empirical mathematical models, based on potential flow and the relevant phenomenological theories, to simulate the experimental results. -Identification and analysis, under various gusty flow conditions, of the influence of the parameters obtained from the developed theoretical models. -Empirical estimations are proposed to find suitable values of the influencing parameters by fitting the experimental results and the theoretical predictions. The experiments were carried out in an open-circuit, low-speed wind tunnel with a closed test chamber and a new concept of sinusoidal gust generator mechanism, designed and built at the Instituto de Microgravedad "Ignacio Da Riva" of the Universidad Politécnica de Madrid (IDR/UPM). The main characteristic of this wind tunnel is its ability to generate a flow with a uniform velocity profile and a sinusoidal fluctuation in time. Experimental tests have been carried out to study the effect of unsteady flows on bluff bodies lying on the ground, and two theoretical models have been proposed to determine the external and the internal pressure loads, respectively. In order to meet the need to create sinusoidal wind gusts to check the predictions of the theoretical models, speeds of up to 30 m/s and gust frequencies of up to 10 Hz have been obtained. The test chamber section is 0.39 m x 0.54 m, dimensions suitable for carrying out experiments with test models. It is shown that, in the range of parameters explored, the experimental results are in good agreement with the predictions of the theoretical models. Experimental tests have been performed to study the unsteady flow effects, which can help clarify the phenomenon of the external pressure loads on bluff bodies subjected to wind gusts, and also to determine the internal pressure loads, which depend on the size of the venting holes of the building. Finally, the contribution of the terms arising from the unsteady flow has been analysed, and the pressure jumps due to the unsteady pressure loss through the venting holes have been characterised. ABSTRACT This Doctoral dissertation has focused on the study of the unsteady aerodynamic loads on bluff bodies. To this aim the following points have been identified and analyzed: -Characterization of the flow measured with different types of Pitot tubes and a hot-wire anemometer under the unsteady flow conditions generated by a gust wind tunnel. -Design and integration of the experimental setups required to measure the internal and external wind loads acting on bluff bodies under gusty wind flow conditions.
-Implementation of semi-empirical mathematical models based on potential flow and relevant phenomenological theories to simulate the experimental results. -Identification and analysis, under various gusty flow conditions, of the influence of the parameters obtained from the developed theoretical models. -Empirical estimations are proposed to find suitable values of the influencing parameters by fitting the experimental and theoretically predicted results. The experiments are performed in an open circuit, closed test section, low speed wind tunnel, with a new sinusoidal gust generator mechanism concept, designed and built at the Instituto de Microgravedad “Ignacio Da Riva” of the Universidad Politécnica de Madrid (IDR/UPM). The main characteristic of this wind tunnel is the ability to generate a flow with a uniform velocity profile and a sinusoidal time fluctuation of the speed. Experimental tests have been devoted to studying the effect of unsteady flows on bluff bodies lying on the ground, and two theoretical models have been proposed to determine the external and the internal pressure loads, respectively. In order to meet the need of creating sinusoidal wind gusts to check the theoretical model predictions, the gust wind tunnel maximum flow speed and gust frequency in the test section have been limited to 30 m/s and 10 Hz, respectively. The test section is 0.39 m × 0.54 m, which is suitable for performing experiments with testing models. It is shown that, in the range of parameters explored, the experimental results are in good agreement with the theoretical model predictions. Experimental tests have been performed to study the unsteady flow effects, which can help in clarifying the phenomenon of the external pressure loads on bluff bodies under gusty winds, and also to study the internal pressure loads, which depend on the size of the venting holes of the building. Finally, the contribution of the unsteady flow terms in the theoretical model has been analyzed, and the pressure jumps due to the unsteady pressure losses through the venting holes have been characterized.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

This brief communication concerns the unsteady aerodynamic external pressure loads acting on a semi-circular bluff body lying on a floor under wind gusts, and describes the theoretical model, the experimental setup, and the experimental results obtained. The experimental setup is based on an open circuit, closed test section, low speed wind tunnel, which includes a sinusoidal gust generating mechanism, designed and built at the Instituto de Microgravedad “Ignacio Da Riva” of the Universidad Politécnica de Madrid (IDR/UPM). Based on potential flow theory, a theoretical model has been proposed to analyse the problem, and experimental tests have been performed to study the unsteady aerodynamic loads on a semi-circular bluff body. By fitting the theoretical model predictions to the experimental results, the influencing parameters of the unsteady aerodynamic loads are ascertained. The values of these parameters can help in clarifying the phenomenon of the external pressure loads on a semi-circular bluff body under various gust frequencies. The theoretical model proposed allows the pressure variation to be split into two contributions: a quasi-steady term and an unsteady term, each with a simple physical meaning.
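As a purely illustrative, hedged sketch of such a decomposition (the symbols and the particular form below are assumptions, not the expressions identified in the cited work), a split of this kind can be written as:

```latex
% Illustrative (assumed) form of a quasi-steady + unsteady split of the surface
% pressure under a gust velocity U(t); c_p, k and the reference length L are
% placeholder parameters, not the values obtained in the cited work.
\Delta p(t) \;\approx\;
\underbrace{\tfrac{1}{2}\,\rho\, c_p\, U^{2}(t)}_{\text{quasi-steady term}}
\;+\;
\underbrace{\rho\, k\, L\, \frac{\mathrm{d}U}{\mathrm{d}t}}_{\text{unsteady term}}
```

The quasi-steady term follows the instantaneous dynamic pressure, while the unsteady term scales with the flow acceleration, which is consistent with the simple physical meaning attributed to the two contributions above.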