939 results for non-specific immune functions


Abstract:

Dental caries is a common preventable childhood disease leading to severe physical, mental and economic repercussions for children and their families if left untreated. A needs assessment in Harris County reported that 45.9% of second graders had untreated dental caries. In order to address this growing problem, the School Sealant Program (SSP), a primary preventive initiative, was launched by the Houston Department of Health and Human Services (HDHHS) to provide oral health education and underutilized dental preventive services to second-grade children from participating Local School Districts (LSDs). To determine the effectiveness and efficiency of the SSP, a program evaluation of the Oral Health Education (OHE) component of the SSP was conducted by the HDHHS between September 2007 and June 2008. The objective of the evaluation was to assess short-term changes in the oral health knowledge of the participants and to determine whether any such changes were due to the OHE sessions. An 8-item multiple-choice pre/post-test was developed for this purpose and administered to the participants before and immediately after the OHE sessions. The present project analyzed pre- and post-test data of 1,088 second graders from 22 participating schools. Changes in the overall and topic-specific knowledge of the program participants before and after the OHE sessions were analyzed using the Wilcoxon signed-rank test. Results. The overall knowledge assessment showed a statistically significant (p < 0.001) increase in the dental health knowledge of the participants after the oral health education sessions. Participants in the higher-scoring category (7-8 correct responses) increased from 9.5% at baseline to 60.8% after the education sessions. Overall knowledge increased in all school regions, with the highest knowledge gains seen in the Central and South regions. Males and females had similar knowledge gains. Significant knowledge differences were also found for each of the topic-specific categories (functions of teeth, healthy diet, healthy habits, dental sealants; p < 0.001), indicating an increase in the topic-specific knowledge of the participants after the health education sessions. Conclusions. The OHE sessions were successful in increasing the short-term oral health knowledge of the participants.
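The paired pre/post comparison described above can be sketched with a Wilcoxon signed-rank test on matched scores. A minimal illustration with made-up scores (the HDHHS data are not reproduced here), using SciPy's wilcoxon as the assumed implementation:

    # Minimal sketch (hypothetical scores, not the HDHHS data): paired pre/post
    # comparison of 8-item quiz scores with the Wilcoxon signed-rank test.
    import numpy as np
    from scipy.stats import wilcoxon

    pre  = np.array([3, 4, 5, 2, 6, 4, 3, 5, 4, 2])   # correct answers before OHE
    post = np.array([6, 7, 7, 5, 8, 6, 5, 7, 7, 4])   # correct answers after OHE

    stat, p_value = wilcoxon(pre, post, alternative="two-sided")
    print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")

    # Share of children in the high-scoring band (7-8 correct), before vs after
    print(f"high scorers: {np.mean(pre >= 7):.0%} -> {np.mean(post >= 7):.0%}")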

Abstract:

Purpose. To evaluate the use of the Legionella Urine Antigen Test as a cost-effective method for diagnosing Legionnaires’ disease in five San Antonio hospitals from January 2007 to December 2009. Methods. The data reported by five San Antonio hospitals to the San Antonio Metropolitan Health District during a 3-year retrospective study (January 2007 to December 2009) were evaluated for the frequency of non-specific pneumonia infections, the number of Legionella Urine Antigen Tests performed, and the percentage of positive cases of Legionnaires’ disease diagnosed by the Legionella Urine Antigen Test. Results. A total of 7,087 cases of non-specific pneumonia were reported across the five San Antonio hospitals studied from 2007 to 2009. A total of 5,371 Legionella Urine Antigen Tests were performed from January 2007 to December 2009 across the five hospitals in the study, and 38 positive cases of Legionnaires’ disease were identified by the test over that period. Conclusions. In spite of the limitations of this study in obtaining sufficient data to evaluate the cost effectiveness of the Legionella Urinary Antigen Test in diagnosing Legionnaires’ disease, the test is simple, accurate and fast, since results can be obtained within minutes to hours, and convenient, because it can be performed in the emergency department on any patient who presents with clinical signs or symptoms of pneumonia. Over the long run, it remains to be shown whether this test can decrease mortality, lower total medical costs by decreasing the number of broad-spectrum antibiotics prescribed, shorten patient wait times and hospital stays, decrease the need for unnecessary ancillary testing, and improve overall patient outcomes.
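The reported counts already allow a rough yield summary; the sketch below is simple arithmetic on the figures above, not a cost-effectiveness model:

    # Back-of-the-envelope yield figures from the counts reported in the abstract.
    non_specific_pneumonia = 7087   # reported cases, 2007-2009
    urine_antigen_tests = 5371      # Legionella urine antigen tests performed
    positive_cases = 38             # Legionnaires' disease cases detected

    print(f"pneumonia cases tested:  {urine_antigen_tests / non_specific_pneumonia:.1%}")
    print(f"test positivity:         {positive_cases / urine_antigen_tests:.2%}")
    print(f"tests per detected case: {urine_antigen_tests / positive_cases:.0f}")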

Abstract:

RNA processing and degradation are two important functions that control gene expression and promote RNA fidelity in the cell. A major ribonuclease complex, called the exosome, is involved in both of these processes. The exosome is composed of ten essential proteins with only one catalytically active subunit, called Rrp44. While the same ten essential subunits make up both the nuclear and cytoplasmic exosome, there are nuclear and cytoplasmic exosome cofactors that promote specific exosome functions in each of the cell compartments. To date, it is unclear how the exosome distinguishes between RNA substrates. We hypothesize that compartment-specific cofactors may promote the substrate specificity of the exosome. In this work, I characterize several cofactors of the exosome, both nuclear and cytoplasmic. First, I describe the arch domain, a unique domain present in a nuclear and a cytoplasmic cofactor of the exosome. Specifically, I show that the arch domain of the nuclear exosome cofactor, Mtr4, is required for specific exosome-mediated activities and overlaps functionally with the exosome-associated exonuclease, Rrp6. Further, I show that the arch domain of Ski2 is required for the degradation of normal and aberrant mRNAs. Additionally, this work describes in detail the Mtr4 domains involved in the physical association with other RNA processing proteins. Further, I characterize the minimal Mtr4-binding region in a third exosome cofactor, Trf5. Understanding how exosome cofactors synergistically promote exosome function will provide a better understanding of how the exosome complex precisely regulates its catalytic activities. As described here, cofactors play a major role in determining the substrate specificity of the nuclear and cytoplasmic exosome. Moreover, specific accessory domains, which are not involved in the catalytic function of the cofactor, are required for substrate targeting of the eukaryotic RNA exosome.

Abstract:

The nine membrane-bound isoforms of adenylyl cyclase (AC), via synthesis of the signaling molecule cyclic AMP (cAMP), are involved in many isoform-specific physiological functions. Decreasing AC5 activity has been shown to have potential therapeutic benefit, including reduced stress on the heart, pain relief, and attenuation of morphine dependence and withdrawal behaviors. However, AC structure is well conserved, and there are currently no isoform-selective AC inhibitors in clinical use. P-site inhibitors inhibit AC directly at the catalytic site, but with an uncompetitive or noncompetitive mechanism. Due to this mechanism and their nanomolar potency in cell-free systems, attempts at ligand-based drug design of novel AC inhibitors frequently use P-site inhibitors as a starting template. One small-molecule inhibitor designed through this process, NKY80, is described as an AC5-selective inhibitor with low micromolar potency in vitro. P-site inhibitors reveal important ligand-binding “pockets” in the AC catalytic site, but the specific interactions that give NKY80 its selectivity are unclear. Identifying and characterizing unique interactions between NKY80 and AC isoforms would significantly aid the development of isoform-selective AC inhibitors. I hypothesized that NKY80’s selective inhibition is conferred by AC isoform-specific interactions with the compound within the catalytic site. A structure-based virtual screen of the AC catalytic site was used to identify novel small-molecule AC inhibitors. The identified novel inhibitors are isoform selective, supporting the catalytic site as a region capable of more potent isoform-selective inhibition. Although NKY80 is touted commercially as an AC5-selective inhibitor, its characterization suggests strong inhibition of both AC5 and the closely related AC6. NKY80 was also virtually docked to AC to determine how it binds to the catalytic site. My results show a difference between NKY80 binding and the conformation of classic P-site inhibitors. The selectivity and notable differences in NKY80 binding to the AC catalytic site suggest a catalytic subregion, more flexible in AC5 and AC6, that can be targeted by selective small-molecule inhibitors.

Abstract:

The availability of transplantable, syngeneic murine melanomas made it possible to study the potential effects of UV radiation on the growth and progression of melanomas in an animal model. The purpose of my study was to determine how UV irradiation increases the incidence of melanoma outgrowth when syngeneic melanoma cells are transplanted into a UV-irradiated site. Short-term intermittent UVB exposure produces a transitory change in the mice which allows the increased outgrowth of melanoma cells injected into the UV-irradiated site. One possible mechanism is an immunomodulatory effect of UVR on the host. An alternative mechanism to account for the increased tumor incidence in the UV-irradiated site is the release of inflammatory mediators from UV-irradiated epidermal cells. A third possibility is that UVR could induce the production and/or release of melanoma-specific growth factors resulting in increased melanoma outgrowth. My first step in distinguishing among these possible mechanisms was to characterize further the conditions leading to increased development of melanoma cells in UV-irradiated mouse skin. Next, I attempted to determine which of the 3 proposed mechanisms was most likely. To do this, I first defined the specificity of the effect by examining the growth of additional C3H tumorigenic cell lines in UV-irradiated skin. Second, I determined the immunogenicity of these tumor cell lines: the cell lines exhibiting increased tumor incidence are restricted to those that are immunogenic in normal C3H mice. Third, I determined that the effect of UVR on melanoma development did not occur in immunosuppressed mice. Because the results from these three lines of investigation suggested that the effect was immunologically mediated, I then investigated whether specific immune reactions were affected by local UV irradiation. To accomplish this, I investigated the effect of UVR on cutaneous immune cells and on the induction of contact hypersensitivity (CHS), and I also determined the effect of UVR on the development and expression of systemic immunity against the melanoma cells. There is no clear-cut relationship between the number of Langerhans or Thy-1+ cells and the UV effect on tumor incidence. Furthermore, there was no suppression of CHS in the UV-irradiated mice. While the development of systemic immunity is significantly reduced, it appears to be sufficient to provide in vivo immunity to tumor challenge. However, the elicitation of tumor immunity in immunized mice can be abrogated if tumor challenge occurs at the site of UV irradiation. This investigation provides new information on an effect of UVR on the elicitation of tumor immunity. Furthermore, it indicates that UV radiation can play a role in the development of melanoma beyond the transformation of melanocytes.

Abstract:

Ozone (O3) phytotoxicity has been reported on a wide range of plant species, inducing the appearance of specific foliar injury or increasing leaf senescence. No information regarding the sensitivity of plant species from dehesa Mediterranean grasslands has been available, in spite of their great biological diversity. A screening study was carried out in open-top chambers (OTCs) to assess the O3 sensitivity of 22 representative therophytes of these ecosystems, based on the appearance and extent of foliar injury. A distinction was made between specific O3 injury and non-specific discolorations. Three O3 treatments (charcoal-filtered air, non-filtered air, and non-filtered air supplemented with 40 nl l−1 O3 for 5 days per week) and three OTCs per treatment were used. The Papilionaceae species were more sensitive to O3 than the Poaceae species involved in the experiment, since ambient levels induced foliar symptoms in 67% and 27% of the species of the two families, respectively. An O3-sensitivity ranking of the species involved in the assessment is provided, which could be useful for bioindication programmes in Mediterranean areas. The assessed Trifolium species were particularly sensitive, since foliar symptoms appeared at accumulated O3 exposures well below the current critical level for the prevention of this kind of effect. The exposure indices involving lower cut-off values (i.e. 30 nl l−1) were best related to the extent of O3-induced injury in these species.
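The accumulated-exposure indices mentioned above sum the hourly concentrations in excess of a cut-off. A minimal sketch with made-up hourly values, using the 30 nl l−1 cut-off that the study found best related to injury (the AOT-style formulation and the example data are assumptions for illustration only):

    # Accumulated ozone exposure over a threshold (AOT-style index).
    # The hourly values are invented; only the cut-off follows the abstract.
    import numpy as np

    hourly_o3 = np.array([22, 35, 48, 61, 55, 40, 28, 33])  # nl l-1, daylight hours

    def aot(hourly, cutoff):
        """Sum of hourly exceedances above the cut-off, in nl l-1 h."""
        return np.clip(hourly - cutoff, 0.0, None).sum()

    print(f"AOT30 = {aot(hourly_o3, 30.0):.0f} nl l-1 h")
    print(f"AOT40 = {aot(hourly_o3, 40.0):.0f} nl l-1 h")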

Abstract:

Concentration Photovoltaic Systems (CPV) have been proposed as an alternative to conventional photovoltaic systems. In recent years the CPV industry has boomed, driven by technological progress in all the elements of the system and, above all, by the use of multijunction solar cells based on III-V semiconductors, with efficiencies exceeding 43%. III-V solar cells have been used with highly reliable results in a great number of space missions without concentration. However, there are no previous results regarding their reliability in terrestrial concentration applications, where the working conditions are completely different. This lack of experience, together with the strong industrial interest, has generated the need to evaluate the reliability of the cells, and several research centers around the world are now undertaking this task. The evaluation of the reliability of this type of device by means of accelerated tests is especially problematic when the devices work at medium or high concentration, because it is practically impossible to emulate the real working conditions of the cell inside climatic chambers. In fact, as far as we know, the results that appear in this Thesis are the first to estimate the activation energy of the failure mechanism involved, as well as the warranty of the III-V concentrator solar cells tested here. To evaluate the reliability of III-V very high concentrator solar cells by means of accelerated tests, a variety of activities, described in this Thesis, have been carried out. The first part of the document presents the theoretical background of the Doctoral Thesis. After the Introduction, Chapter 2 presents the state of the art in degradation and reliability of CPV systems and solar cells. Chapter 3 introduces reliability definitions and the statistical functions used to evaluate reliability and its parameters; from these functions, important parameters are calculated that are used later in the experimental part of the Thesis. The second part of the document is experimental. Chapter 4 describes the types of accelerated tests and the main goals pursued when they are carried out on CPV systems and solar cells. In order to evaluate quantitatively the reliability of the III-V concentrator solar cells used in these tests, some modifications have been introduced, and their discussion is tackled here. Based on this analysis, the working plan of the tests carried out in this Doctoral Thesis is presented. Chapter 5 presents a new methodology, as well as the instrumentation necessary to carry out the tests described here; this methodology covers the adaptation, improvement and novel techniques needed to test concentrator solar cells. The core of the document is Chapter 6, which presents the results of the characterization of the cells during the accelerated life tests and the analysis of those results, with the purpose of obtaining quantitative values of reliability under real working conditions. The acceleration factor of the accelerated life tests with respect to nominal working conditions has been calculated from the activation energy obtained, and the validity of the methodology and of the calculations on which the reliability assessment is based has been demonstrated. Finally, quantitative values of degradation, reliability and warranty of the solar cells under nominal field working conditions have been calculated.
With the development of this Doctoral Thesis, the reliability of small-area, very high concentrator GaAs solar cells has been evaluated. It is of great interest to generalize the procedures described up to this point to III-V multijunction solar cells of larger area. Therefore, Chapter 7 develops this generalization and also introduces a useful finite-element thermal model of the test cells' circuits. In the last chapter, the summary of the results and the main contributions of this Thesis are outlined, and future research activities are identified.
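The acceleration factor mentioned in the summary is conventionally obtained from an Arrhenius model once the activation energy of the failure mechanism is known. A minimal sketch with an assumed activation energy and assumed use/stress temperatures, not the values determined in the Thesis:

    # Arrhenius acceleration factor between a stress temperature and the nominal
    # operating temperature: AF = exp[(Ea/k) * (1/T_use - 1/T_stress)].
    # Ea and both temperatures below are illustrative placeholders.
    import math

    K_BOLTZMANN = 8.617e-5   # eV/K

    def acceleration_factor(ea_ev, t_use_c, t_stress_c):
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use - 1.0 / t_stress))

    print(acceleration_factor(ea_ev=1.0, t_use_c=80.0, t_stress_c=150.0))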

Abstract:

Electric propulsion is today a very competitive technology with great projection into the future. Among the various existing plasma thrusters, the Hall effect thruster has acquired considerable maturity and constitutes an ideal means of propulsion for a wide range of missions. In the present Thesis, only Hall thrusters with conventional geometry and dielectric walls are studied. The complex interaction between multiple physical phenomena makes plasma simulation in these engines difficult. Hybrid models represent the best compromise between precision and computational cost: they use a fluid model for the electrons and Particle-In-Cell (PIC) algorithms for the ions and neutrals. The hypothesis of plasma quasineutrality is invoked, which requires solving separately the sheaths formed around the chamber walls. On the basis of an existing hybrid code, called HPHall-2, the aim of this doctoral Thesis has been to develop an advanced hybrid code that better simulates the plasma discharge in a Hall effect thruster. Updates and improvements of the code include both theoretical and numerical issues. The extensive revision of the algorithms has succeeded in reducing the accuracy errors by one order of magnitude, and the consistency and robustness of the code have been notably increased, allowing the simulation of the thruster over a wide range of conditions.
The most relevant achievements in the particle subcode are: the implementation of a new weighting algorithm that determines the plasma flux magnitudes more accurately; the implementation of a new algorithm to control the particle population, ensuring a sufficient number of particles near the chamber walls, where the gradients are strong and the computational conditions are most critical; improvements in the mass and energy balances; and a new algorithm to compute the electric field on a non-uniform mesh. The fulfilment of the Bohm condition at the edge of the sheath deserves special attention: it represents a boundary condition necessary to match the hybrid code solution consistently with the plasma-wall interaction model, and it remained unsatisfactorily solved in the HPHall-2 code. In this Thesis, the kinetic Bohm criterion has been implemented for an ion population with different electric charges and a large dispersion of velocities. In the code, the fulfilment of the kinetic Bohm condition is accomplished by an algorithm that introduces a thin collisionless layer next to the sheath, producing the ion acceleration, and that measures the flux of particles properly in time and space. The improvements made in the electron subcode increase the simulation capabilities of the code, especially in the region downstream of the thruster, where the neutralization of the plasma jet is simulated using a volumetric cathode model. Without addressing a detailed study of plasma turbulence, simple models for a parametric adjustment of the anomalous Bohm diffusion are implemented in the code; they reproduce the experimental values of the plasma potential and the electron temperature, as well as the discharge current of the thruster. Regarding the theoretical issues, special emphasis has been placed on the plasma-wall interaction of the thruster and on the dynamics of free secondary electrons within the plasma, questions that still remain unsolved in the simulation of Hall thrusters. The newly developed models seek results closer to reality, such as the partial thermalization sheath model, which assumes a non-Maxwellian distribution function for the primary electrons and better computes the energy losses at the walls. The confinement of secondary electrons within the chamber is evaluated by a simplified kinetic study, and using a collisionless fluid model, the densities and energies of the free secondary electrons are computed, as well as their effect on plasma ionization. Simulations show that secondary electrons are quickly lost at the walls, with a negligible effect in the bulk of the plasma, but they determine the potential fall at the sheaths. Finally, the numerical simulation and theoretical work is complemented by experimental work carried out at the Princeton Plasma Physics Laboratory, devoted to analyzing the interesting transitional regime experienced by the thruster during the startup process. It is concluded that gas impurities adhered to the thruster walls play a relevant role in this transitional regime and, as a general recommendation, a complete purge of the thruster before starting its normal mode of operation is suggested. The final result of the research conducted in this Thesis shows that the developed code represents a good tool for the simulation of Hall thrusters.
The code properly reproduces the physics of the thruster, providing results similar to the experimental ones, and proves to be a good numerical laboratory in which to study the plasma inside the thruster.
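As a point of reference for the kinetic Bohm condition discussed above, the classical cold-ion, singly charged Bohm velocity at the sheath edge is u_B = sqrt(k_B T_e / m_i). A minimal sketch assuming xenon propellant (typical for Hall thrusters, though not stated in the abstract) and an illustrative electron temperature:

    # Classical Bohm velocity at the sheath edge for a cold, singly charged ion.
    # The xenon mass is standard data; the 20 eV electron temperature is an
    # illustrative assumption, not a value taken from the Thesis.
    import math

    EV_TO_JOULE = 1.602176634e-19
    M_XENON = 2.18e-25            # kg, approximate mass of a xenon atom

    def bohm_velocity(te_ev, ion_mass_kg):
        return math.sqrt(te_ev * EV_TO_JOULE / ion_mass_kg)

    print(f"u_B = {bohm_velocity(20.0, M_XENON):.0f} m/s")   # roughly 3.8 km/s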

Abstract:

This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot with 4 links and 4 degrees of freedom, subjected to machining forces at its end effector. The proposed working conditions are those typical of plants manufacturing aluminium-alloy parts for the automotive industry. This type of component comes from an initial casting process that produces the rough part. For medium and high volumes, depending on the required mechanical and plastic properties and on production costs, high pressure die casting (HPDC) and low pressure die casting (LPC) are the two technologies most used in this first phase. For high pressure die casting, the most used aluminium alloys are, in symbolic designation according to the EN 1706 standard (numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500), and EN AC AlSi12Cu1(Fe) (EN AC 47100). For low pressure, EN AC AlSi7Mg0,3 (EN AC 42100) is used. For the first three alloys, the allowed silicon content can exceed 10%; the fourth alloy has an admissible limit below 10% Si.
This means that, from the point of view of machining, components made of alloys with a Si content above 10% can be considered equivalent, while the fourth alloy must be studied separately. The geometrical and dimensional tolerances directly achievable from casting, gathered in standards such as ISO 8062 or DIN 1688-1, set a limit for this process. Beyond those limits, the need to guarantee production batches within the ppm targets currently accepted by the market forces the parts into a subsequent machining process. Geometries that functionally require geometrical and/or dimensional tolerances defined according to ISO 1101, and that cannot be achieved with the initial die-casting process, must therefore be obtained afterwards in a machining phase carried out in machining cells; the tolerances achievable with cutting processes are gathered in standards such as ISO 2768. In general terms, machining cells contain several CNC machines that are interrelated and connected by robots handling the in-process parts between them. Those robots carry a gripper at their end in order to place and remove parts in machining fixtures, on interchange tables used to reposition the part, in measurement and inspection devices, or on entrance/exit conveyors. Robot repeatability is tight, down to a few hundredths of a millimetre, defined according to ISO 9283. The problem is that these repeatability ranges are only guaranteed when there are no loads, or when the loads are negligible (e.g. when only moving parts). Although the inertia of parts moved at high speed means that intermediate paths have little accuracy, the beginning and end of each trajectory (e.g. when picking or releasing a part) are executed at relatively low speeds, which reduces the effect of inertial forces and allows the repeatability mentioned above to be achieved. The same does not happen if the gripper is removed and exchanged for a motorized spindle with a machining tool such as a drill, a PCD boring tool, or a face or tangential milling cutter… The machining forces would create torques at the joints so large and so variable that the robot controller would not be able to respond (or is not prepared to, in principle), producing a deviation from the working trajectory, executed at low speed, that would result in a position error (see the ISO 5458 standard) not acceptable for the required function. It could even happen that the tolerance achieved by this supposedly more exact process turns out to be worse than the one the casting process would give, despite the latter having, in principle, a larger in-process dimensional variability (and hence a larger guaranteed tolerance interval). As a matter of fact, CNC accuracy is very high (its influence can be ignored in most cases) and is not responsible for, for example, the position tolerance when drilling a hole. Factors such as room and part temperature, the build quality of the machining fixtures and the stiffness of the clamping, rotary-table indexing and part-positioning errors, whether there are pre-existing holes, whether the tool is properly balanced, and whether the tool holder is suitable for that type of machining… have more influence. It is interesting that such a common, non-specific element in an industrial plant of the kind described above as a robot, which would not need to be added because it is already there (and therefore the investment would be very small), can improve the value chain by decreasing manufacturing costs.
And if it could also be arranged that the robot dedicated to handling tasks, during the many waiting periods while the CNC machines are cutting, could pick up a spindle and support that machining, it would be doubly interesting. It is therefore attractive to characterize the robot's behaviour and to try to explain what would be necessary to make this possible, which is the purpose of this work. The selected robot architecture is of the SCARA type. The search for a robot that is easy to model and to analyze kinematically and dynamically, without significant limitations in the multifunctionality of the requested operations, led to this choice; other architectures that are very popular in industry, e.g. 6-DOF anthropomorphic robots, were therefore discarded. This robot has 3 joints, 2 of which are revolute joints (1 DOF each) and the third a cylindrical joint (2 DOFs). The first joint, a revolute one, joins the ground (body 1) with body 2. The second one, also a revolute joint, joins body 2 with body 3. These 2 arms can move horizontally in the X-Y plane. Body 3 is linked to body 4 by the cylindrical joint, whose movement is parallel to the Z axis. The robot has 4 degrees of freedom (4 motors). Regarding the potential tasks this type of robot can perform, its versatility covers both typical handling operations and cutting operations. One of the most common machining operations is drilling, which is why it was chosen for the modeling and analysis; within drilling, and in order to bound the force spectrum, solid drilling with a 9 mm diameter drill was selected. For the moment, the robot is considered to behave as a rigid body, since the largest expected effect is that of the torques at the joints. The robot is modeled using the multibody system method. Under this heading there are different kinds of formulations (e.g. Denavit-Hartenberg). D-H generates a very large number of equations and unknowns; those unknowns are difficult to interpret and, for each position, one must stop and think about what they mean. The formulation chosen here is therefore that of natural coordinates. This system uses points and unit vectors to define the position of the different bodies, and allows them to be shared, when possible and desired, to define the kinematic pairs and reduce the number of variables at the same time. The unknowns are intuitive, the constraint equations are very simple, and the number of equations and unknowns is considerably reduced. However, "pure" natural coordinates suffer from 2 problems. The first is that 2 elements at an angle of 0° or 180° give rise to singular positions that can create problems in the constraint equations and must therefore be avoided. The second is that they do not act directly on the definition or origin of the movements. It is therefore highly advisable to complement this formulation with angles and distances (relative coordinates). This leads to mixed natural coordinates, which is the final formulation chosen for this Master's Thesis. Mixed natural coordinates do not have the problem of singular positions, and their most important advantage lies in their usefulness when applying driving forces or torques, or when evaluating errors: as they act directly on the origin variables (angles or distances), they control the motors directly. The algorithm, the simulation and the processing of results have been programmed in Matlab. To build the model in mixed natural coordinates, the robot under study must be modeled in 2 steps.
The first model is based on natural coordinates. To validate it, a defined trajectory is prescribed and it is analyzed kinematically whether the robot fulfils the requested movement while keeping its integrity as a multibody system. The points (in this case the starting and ending points) that configure the robot are identified. As the elements are considered rigid bodies, each of them is defined by its starting and ending points (the most interesting ones from the point of view of kinematics and dynamics) and by a unit vector that is not collinear with those points. Unit vectors are placed wherever there is a rotation axis or wherever information about an angle is needed; they are not required to measure distances, and the number of unit vectors does not have to coincide with the number of DOFs. The length of each link is defined as a geometric constant. The constraints that define the nature of the robot and the relationships among the different elements and their environment are then set. The path is generated by a continuous cloud of points defined in independent coordinates. Each set of independent coordinates defines, at a specific instant, a particular position and posture of the robot. To obtain it, the corresponding dependent coordinates at that instant must be known, and they are obtained by solving the constraint equations, as a function of the independent coordinates, with the Newton-Raphson method. The reason for proceeding this way is that the dependent coordinates must satisfy the constraints, which is not the case with the independent coordinates. Once the suitability of the model is checked (first validation), the next step is model 2. Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute pairs (3 angles: ϕ1, ϕ2 and ϕ3) and a distance at the prismatic pair (1 distance: s). These relative coordinates become the new independent coordinates (replacing the Cartesian independent coordinates of model 1, which were natural coordinates). It must be checked whether the unit vector system of model 1 is sufficient; for this specific case, it was necessary to add 1 additional unit vector so that the angles are perfectly determined by the corresponding dot- and/or cross-product equations. The constraints must be increased by at least 4 equations, one per new unknown. The validation of model 2 has two phases. The first, as with model 1, is a kinematic analysis of the behaviour along a defined path. Velocities and accelerations could be obtained from model 2 in this analysis, but they are not needed; only the movements and finite displacements are of interest. Once the consistency of the movements has been checked (second validation), the behaviour with interpolated trajectories is analyzed kinematically. The kinematic analysis with interpolated trajectories works with a minimum of 3 master points. In this case, 3 points have been chosen: starting point, middle point and ending point. The number of interpolations is 50 in each stretch (there is a stretch between every 2 master points), giving a total of 100 interpolations. The interpolation method used is cubic splines with the condition of constant acceleration at both the starting and the ending point. This method generates the independent coordinates of the interpolated points of each stretch. The dependent coordinates are obtained by solving the non-linear constraint equations with the Newton-Raphson method, as sketched below.
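A minimal sketch of this position problem, written in Python rather than the Matlab used in the Thesis, for a simplified planar 2R arm: the joint angles act as the independent relative coordinates, the elbow and end points as the dependent natural coordinates, and the constraint equations are solved with Newton-Raphson. The link lengths, angles and initial guess are illustrative assumptions, not data from the Thesis:

    # Position problem in mixed natural coordinates for a planar 2R arm.
    # Dependent coordinates q = (x1, y1, x2, y2); independents are phi1, phi2.
    import numpy as np

    L1, L2 = 0.4, 0.3   # link lengths in metres (assumed values)

    def constraints(q, phi1, phi2):
        x1, y1, x2, y2 = q
        return np.array([
            x1**2 + y1**2 - L1**2,                                   # rigid link 1
            (x2 - x1)**2 + (y2 - y1)**2 - L2**2,                     # rigid link 2
            x1*np.sin(phi1) - y1*np.cos(phi1),                       # link 1 along phi1
            (x2 - x1)*np.sin(phi1 + phi2) - (y2 - y1)*np.cos(phi1 + phi2),
        ])

    def solve_dependent(q0, phi1, phi2, tol=1e-10, max_iter=20):
        """Newton-Raphson on Phi(q) = 0 with a finite-difference Jacobian."""
        q = np.asarray(q0, dtype=float)
        for _ in range(max_iter):
            phi = constraints(q, phi1, phi2)
            if np.linalg.norm(phi) < tol:
                break
            jac = np.zeros((4, 4))
            for j in range(4):
                dq = np.zeros(4)
                dq[j] = 1e-7
                jac[:, j] = (constraints(q + dq, phi1, phi2) - phi) / 1e-7
            q = q - np.linalg.solve(jac, phi)
        return q

    q = solve_dependent([0.3, 0.1, 0.5, 0.3], np.radians(30.0), np.radians(45.0))
    print("elbow point:", q[:2], "end point:", q[2:])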
The cubic-spline method produces very smooth curves, so when the trajectory contains at least two clearly different movements it must be designed in two steps that are joined afterwards. This is the case, for example, when one motor stays still during the first movement and a different motor stays still during the second one, and so on. Once the movement is obtained, the independent velocities and accelerations are computed with numerical differentiation formulas. The process is analogous to the one described above, recalling the condition that the acceleration at t = 0 and t = end is zero. The dependent velocities and accelerations are calculated by solving the corresponding derivatives of the constraint equations. In a third validation of the model, the consistency of the interpolated movement is checked once more. Inverse dynamics calculates, for a defined movement (position, velocity and acceleration known at every instant) and for the known external forces acting on the robot (e.g. weights), which forces or torques must be applied at the motors (where the control acts) to obtain the requested movement. In inverse dynamics each instant of time is independent of the others: it has a position, a velocity, an acceleration and known forces. In this specific case only the forces due to weight are applied for the moment, although forces of another nature could be added if desired. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces considered (weight) is computed. The final result of the inverse dynamic analysis is the set of torques that the four motors must apply to reproduce the requested movement under the acting forces. The fourth validation consists of confirming that the movement obtained by applying the torques computed in the inverse dynamics agrees with the movement from the kinematic analysis (the theoretical movement). This requires direct dynamics, which calculates the movement of the robot that results from applying torques at the motors and forces on the robot. Since none of the conditions used in the inverse dynamics (motor torques and inertial forces due to the weight of the elements) has changed, the resulting real movement must coincide with the theoretical one. When this is achieved, the robot is considered ready to work. When an external machining force is introduced that was not taken into account in the inverse dynamics, while the motor torques remain those of the inverse dynamics, the real movement obtained no longer matches the theoretical one. Closed-loop control compares the real movement with the expected movement and introduces the corrections required to minimise or cancel the differences; gains are applied, in the form of corrections for position and/or tolerance, to remove those differences. The position error is evaluated, at each point, as the difference between the theoretical movement (calculated in the kinematic analysis) and the real movement achieved for each machining force and for a specific gain. Finally, the position errors obtained for the different machining forces and gains are mapped, giving a chart of the best accuracy the robot can deliver for each requested operation and of the conditions that must be provided.
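The end-point condition on the acceleration can be illustrated with a small spline sketch (SciPy is used here purely for illustration, and the master-point values are assumptions of the example; the thesis work was programmed in Matlab).

# Illustrative sketch: interpolating one independent coordinate, e.g. the joint
# angle phi1, through three master points with cubic splines and zero
# acceleration imposed at both ends of the trajectory.
import numpy as np
from scipy.interpolate import CubicSpline

t_master   = np.array([0.0, 1.0, 2.0])    # start, middle and end instants [s] (assumed)
phi_master = np.array([0.0, 0.6, 1.2])    # assumed joint angle at the master points [rad]

# bc_type=((2, 0.0), (2, 0.0)) imposes zero second derivative (zero acceleration)
# at t = 0 and t = end.
spline = CubicSpline(t_master, phi_master, bc_type=((2, 0.0), (2, 0.0)))

t = np.linspace(0.0, 2.0, 101)            # 50 interpolations per stretch -> 101 points
phi, dphi, ddphi = spline(t), spline(t, 1), spline(t, 2)
print(ddphi[0], ddphi[-1])                # both ~0, as required at the trajectory ends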

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This work proposes an optimization of a semi-supervised change detection methodology based on a combination of Change Indices (CI) derived from a multitemporal image data set. For this purpose, SPOT 5 panchromatic images with 2.5 m spatial resolution have been used, from which three change indices have been calculated. Two of them are commonly used indices, while the third has been derived from the Kullback-Leibler divergence. These three indices have then been combined into a multiband image used as input to a Support Vector Machine (SVM) classifier, in which four different discriminant functions have been tested in order to separate the change and no-change categories. The performance of the proposed procedure has been assessed with several quality measures, reaching highly satisfactory values in every case. These results demonstrate that combining basic change indices with more sophisticated ones such as the Kullback-Leibler distance, together with non-parametric discriminant functions such as those employed in the SVM method, allows a change detection problem to be solved efficiently.
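As a rough sketch of the classification step under stated assumptions (synthetic index values and labels; scikit-learn used purely for illustration, since the abstract does not name the software; the four discriminant functions are assumed here to correspond to the four standard SVM kernels), the three change indices can be stacked into a three-band feature matrix and classified per pixel.

# Illustrative sketch (not the authors' code): stack three change-index bands and
# test four SVM discriminant functions (kernels) for change / no-change labelling.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rows, cols = 100, 100
# ci1, ci2, ci3 stand in for the three change-index images (same shape); random here.
ci1, ci2, ci3 = (np.random.rand(rows, cols) for _ in range(3))
X = np.stack([ci1, ci2, ci3], axis=-1).reshape(-1, 3)   # one 3-band sample per pixel

# A small labelled sample (1 = change, 0 = no change) would come from reference data;
# random placeholders are used here only so the sketch runs.
train_idx = np.random.choice(X.shape[0], 200, replace=False)
y_train = np.random.randint(0, 2, train_idx.size)

for kernel in ("linear", "poly", "rbf", "sigmoid"):      # four discriminant functions
    clf = SVC(kernel=kernel, gamma="scale").fit(X[train_idx], y_train)
    change_map = clf.predict(X).reshape(rows, cols)      # per-pixel change map
    print(kernel, cohen_kappa_score(y_train, clf.predict(X[train_idx])))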

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Fatigue is a non-specific symptom frequently found in the population. It is defined as a sensation of deep physical tiredness, loss of energy or even a feeling of exhaustion, and it is important to differentiate it from depression or weakness. Depressive and anxiety disorders are the most frequent psychiatric disorders in the elderly and almost always lead to serious consequences in this age group. This study aims to evaluate the influence of anxiety and depression on the onset of fatigue and on the evolution of health problems and behaviours peculiar to the ageing process. It is a case-control study investigating anxiety, depression and fatigue. Sixty-one individuals aged 60 years or older were evaluated. A control group of 60 young individuals (up to 35 years of age) was selected among students of the Centro Universitário de Santo André; participants answered a General Characteristics Questionnaire, the State-Trait Anxiety Inventory, the Beck Depression Inventory and the Fatigue Severity Scale. The elderly group had a significantly higher score than the control group on the fatigue severity scale, with a mean score of 36.87 ± 14.61 versus 31.47 ± 12.74 in the controls (t = 2.167; df = 119; p = 0.032). The elderly group also had significantly higher scores on the Beck scale (10.54 ± 8.63) than the controls (6.83 ± 7.95; t = 2.455; df = 119; p = 0.016). Considering only the elderly group, a significant correlation was observed between the fatigue severity scale scores and the Beck depression scale (Pearson correlation = 0.332; p = 0.009). Still within the elderly group, a significantly lower fatigue severity score was observed in those individuals who practised regular physical activity, with a mean score of 31.55 ± 13.36 (t = 2.203; df = 58; p = 0.032). From the analysis of these results it can be concluded that the elderly group has a statistically significantly higher score than the control group, presenting more symptoms of fatigue and depression. These fatigue symptoms occurred together with depressive symptoms, suggesting a possible correlation between them, and when only the elderly were considered this correlation was confirmed. Considering only the elderly group, those who practise regular physical activity show fewer fatigue symptoms than those who do not.(AU)
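For reference, the reported elderly-versus-control comparison on the Fatigue Severity Scale can be reproduced from the summary statistics given in the abstract (a quick check with SciPy, not part of the original study).

# Illustrative check of the reported t-test from summary statistics.
from scipy.stats import ttest_ind_from_stats

# Fatigue Severity Scale: elderly (n = 61) vs. young controls (n = 60)
t, p = ttest_ind_from_stats(mean1=36.87, std1=14.61, nobs1=61,
                            mean2=31.47, std2=12.74, nobs2=60)
print(round(t, 3), round(p, 3))   # ~ t = 2.17, p = 0.03 with df = 119, matching the abstract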

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The Pto gene encodes a serine/threonine kinase that confers resistance in tomato to Pseudomonas syringae pv. tomato strains that express the avirulence gene avrPto. Partial characterization of the Pto signal transduction pathway and the availability of transgenic tomato lines (± Pto) make this an ideal system for exploring the molecular basis of disease resistance. In this paper, we test two transgenic tomato cell suspension cultures (±Pto) for production of H2O2 following independent challenge with two strains of P. syringae pv. tomato (±avrPto). Only when Pto and avrPto are present in the corresponding organisms are two distinct phases of the oxidative burst seen, a rapid first burst followed by a slower and more prolonged second burst. In the remaining three plant–pathogen interactions, we observe either no burst or only a first burst, indicating that the second burst is correlated with disease resistance. Further support for this observation comes from the finding that both resistant and susceptible tomato lines produce the critical second oxidative burst when challenged with P. syringae pv. tabaci, a nonhost pathogen that elicits a hypersensitive response on both tomato lines. The Pto kinase is not required, however, for the oxidative burst initiated by non-specific elicitors such as oligogalacturonides or osmotic stress. A model describing a possible role for the Pto kinase in the overall scheme of oxidative burst signaling is proposed.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

As well as inducing a protective immune response against reinfection, acute measles is associated with a marked suppression of immune functions against superinfecting agents and recall antigens, and this association is the major cause of the current high morbidity and mortality rate associated with measles virus (MV) infections. Dendritic cells (DCs) are antigen-presenting cells crucially involved in the initiation of primary and secondary immune responses, so we set out to define the interaction of MV with these cells. We found that both mature and precursor human DCs generated from peripheral blood monocytic cells express the major MV protein receptor CD46 and are highly susceptible to infection with both MV vaccine (ED) and wild-type (WTF) strains, albeit with different kinetics. Except for the down-regulation of CD46, the expression pattern of functionally important surface antigens on mature DCs was not markedly altered after MV infection. However, precursor DCs up-regulated HLA-DR, CD83, and CD86 within 24 h of WTF infection and 72 h after ED infection, indicating their functional maturation. In addition, interleukin 12 synthesis was markedly enhanced after both ED and WTF infection in DCs. On the other hand, MV-infected DCs strongly interfered with mitogen-dependent proliferation of freshly isolated peripheral blood lymphocytes in vitro. These data indicate that the differentiation of effector functions of DCs is not impaired but rather is stimulated by MV infection. Yet, mature, activated DCs expressing MV surface antigens do give a negative signal to inhibit lymphocyte proliferation and thus contribute to MV-induced immunosuppression.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Antigen recognition in the adaptive immune response by Ig and T-cell antigen receptors (TCRs) is effected through patterned differences in the peptide sequence in the V regions. V-region specificity forms through genetically programmed rearrangement of individual, diversified segmental elements in single somatic cells. Other Ig superfamily members, including natural killer receptors that mediate cell-surface recognition, do not undergo segmental reorganization, and contain type-2 C (C2) domains, which are structurally distinct from the C1 domains found in Ig and TCR. Immunoreceptor tyrosine-based inhibitory motifs that transduce negative regulatory signals through the cell membrane are found in certain natural killer and other cell surface inhibitory receptors, but not in Ig and TCR. In this study, we employ a genomic approach by using the pufferfish (Spheroides nephelus) to characterize a nonrearranging novel immune-type receptor gene family. Twenty-six different nonrearranging genes, which each encode highly diversified V as well as a V-like C2 extracellular domain, a transmembrane region, and in most instances, an immunoreceptor tyrosine-based inhibitory motif-containing cytoplasmic tail, are identified in an ≈113 kb P1 artificial chromosome insert. The presence in novel immune-type receptor genes of V regions that are related closely to those found in Ig and TCR as well as regulatory motifs that are characteristic of inhibitory receptors implies a heretofore unrecognized link between known receptors that mediate adaptive and innate immune functions.