947 results for Non-specific stress indicators
Abstract:
Ozone (O3) phytotoxicity has been reported on a wide range of plant species, inducing the appearance of specific foliar injury or increasing leaf senescence. No information was available regarding the sensitivity of plant species from Mediterranean dehesa grasslands, despite their great biological diversity. A screening study was carried out in open-top chambers (OTCs) to assess the O3 sensitivity of 22 representative therophytes of these ecosystems based on the appearance and extent of foliar injury. A distinction was made between specific O3 injury and non-specific discolorations. Three O3 treatments (charcoal-filtered air, non-filtered air and non-filtered air supplemented with 40 nl l−1 O3 for 5 days per week) and three OTCs per treatment were used. The Papilionaceae species were more sensitive to O3 than the Poaceae species involved in the experiment, since ambient levels induced foliar symptoms in 67% and 27% of the species of these two families, respectively. An O3-sensitivity ranking of the species involved in the assessment is provided, which could be useful for bioindication programmes in Mediterranean areas. The assessed Trifolium species were particularly sensitive, since foliar symptoms appeared in association with accumulated O3 exposures well below the current critical level for the prevention of this kind of effect. The exposure indices involving lower cut-off values (i.e. 30 nl l−1) were best related to the extent of O3-induced injury on these species.
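The accumulated-exposure indices with a cut-off mentioned at the end of this abstract can be illustrated with a short Python sketch in the style of AOT indices; the hourly concentrations and the daylight mask below are invented for the example and are not data from the study.

# Illustrative AOT-style accumulated ozone exposure index (hypothetical data).
# AOT-X = sum over daylight hours of max(0, concentration - X), in nl l-1 h.

def accumulated_exposure(hourly_o3, cutoff, daylight):
    """Sum the excess over `cutoff` for the hours flagged as daylight."""
    return sum(max(0.0, c - cutoff) for c, is_day in zip(hourly_o3, daylight) if is_day)

# 24 hypothetical hourly O3 concentrations (nl l-1) and a daylight mask.
hourly_o3 = [22, 20, 18, 17, 16, 18, 25, 34, 45, 55, 62, 68,
             71, 69, 64, 58, 50, 42, 36, 30, 27, 25, 24, 23]
daylight  = [False]*6 + [True]*14 + [False]*4

for cutoff in (30, 40):
    aot = accumulated_exposure(hourly_o3, cutoff, daylight)
    print(f"AOT{cutoff} for this day: {aot:.0f} nl l-1 h")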
Abstract:
The prevalence of allergies has been increasing since the mid-twentieth century, and they are currently estimated to affect around 2-8% of the population, but the underlying causes of this increase remain elusive. Understanding the mechanism by which a harmless protein becomes capable of inducing an allergic response provides the basis for preventing and treating these diseases. Although the characterization of relevant allergens has led to improved clinical management and has helped to clarify the basic mechanisms of allergic reactions, these allergens still need to be dissected at the molecular level to establish the structural basis of their allergenicity and cross-reactivity.
The aim of this thesis was to characterize the molecular basis of allergenicity using model proteins belonging to two panallergen families (lipid transfer proteins, LTPs, and thaumatin-like proteins, TLPs), in order to identify the mechanisms that mediate sensitization and cross-reactivity and to develop new strategies for the management of allergy, in both diagnosis and treatment. With this purpose, two strategies were followed: studies of cross-reactivity among members of panallergen families, and molecular studies of the contribution of cofactors to the induction of the allergic response by these panallergens. Following the first strategy, we studied cross-reactivity among members of two plant panallergen families (LTPs and TLPs) using peach allergy as a model. Similarly, we characterized the sensitization profiles to wheat allergens in the development of baker's asthma, the most relevant occupational allergic disease. These studies were performed using allergen microarrays, and the results were analyzed with graph theory. Regarding the second approach, we analyzed the interaction of plant allergens with immune and epithelial cells, examining the importance of ligands and molecules co-transported with plant allergens in the development of Th2 responses. To this end, Pru p 3, a non-specific lipid transfer protein (nsLTP) and the major peach allergen, was selected as a model to investigate its interaction with cells of the human and murine immune systems and with the intestinal epithelium, and the contribution of its ligand to the induction of an allergic response was studied. Moreover, we analyzed the role of pathogen-associated molecules in the induction of food allergy, selecting the kiwi-Alternaria system as a model and studying the role of Alt a 1, the major allergen of this fungus, in the development of sensitization to Act d 2. In summary, this work presents innovative research that provides useful results for improving diagnosis and supports further research on allergy and the final clarification of the mechanisms that characterize this disease.
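The abstract mentions analysing allergen-microarray results with graph theory but gives no algorithmic detail; as one plausible illustration only, not the authors' actual pipeline, the Python sketch below builds a co-sensitization graph in which allergens are nodes and an edge weight counts how many patients' sera recognize both allergens. The IgE-positivity calls are invented.

# Hypothetical illustration: a co-sensitization graph built from binary
# microarray calls (patient x allergen). Allergens are nodes; an edge's
# weight counts how many patients react to both allergens.
from itertools import combinations
from collections import defaultdict

# Made-up IgE-positivity calls, not data from the thesis.
calls = {
    "patient_1": {"Pru p 3", "Act d 2", "Alt a 1"},
    "patient_2": {"Pru p 3", "Tri a 14"},
    "patient_3": {"Act d 2", "Alt a 1"},
    "patient_4": {"Pru p 3", "Act d 2"},
}

edges = defaultdict(int)
for positives in calls.values():
    for a, b in combinations(sorted(positives), 2):
        edges[(a, b)] += 1

for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: co-sensitized in {weight} patient(s)")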
Abstract:
This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot with 4 links and 4 degrees of freedom, subjected to machining forces at its end effector. The working environment considered is that of plants manufacturing aluminium-alloy parts for the automotive industry. These components start from a casting process that produces the rough part. For medium and high production volumes, high-pressure die casting (HPDC) and low-pressure casting (LPC) are the technologies most used in this first phase. For high-pressure die casting, the most common aluminium alloys are, in symbolic designation according to the EN 1706 standard (numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500) and EN AC AlSi12Cu1(Fe) (EN AC 47100). For low-pressure casting, EN AC AlSi7Mg0,3 (EN AC 42100) is used. For the first three alloys the permitted silicon content can exceed 10%, whereas for the fourth it is below 10%.
From the point of view of machining, this means that components made of alloys with a Si content above 10% can be considered equivalent, while the fourth alloy must be treated separately. The geometrical and dimensional tolerances achievable directly from casting, gathered in standards such as ISO 8062 or DIN 1688-1, set a limit for this process. Beyond those limits, guaranteeing batches with the ppm targets currently accepted by the market forces production into subsequent machining operations. Geometries that functionally require geometrical and/or dimensional tolerances defined according to ISO 1101, and that cannot be achieved by the initial moulding process, must therefore be finished afterwards in machining cells. In this case, the tolerances achievable with cutting processes are gathered in standards such as ISO 2768. In general terms, machining cells contain several CNC machining centres that are interrelated and linked by robots handling the in-process parts between them. These robots carry a gripper at their end to pick and place parts in machining fixtures, on interchange tables that change the part's position, in measurement and testing devices, or on entry and exit conveyors. Robot repeatability is tight, down to a few hundredths of a millimetre, as defined according to ISO 9283. The problem is that these repeatability ranges are only guaranteed when there are no loads or when they are negligible (as when merely moving parts). Although the inertia of parts moved at high speed means that intermediate paths have little accuracy, the beginning and end of each trajectory (e.g. when picking or releasing a part) are executed at relatively low speeds, which reduces the effect of inertial forces and allows the repeatability mentioned above to be achieved. This no longer holds if the gripper is removed and exchanged for a motorized spindle carrying a machining tool such as a drill, a boring bar, a face mill or a tangential milling cutter. The machining forces would create torques in the joints so large and so variable that the robot controller would not be able to respond (or is not prepared to, in principle) and would produce a deviation in the working trajectory, executed at low speed, triggering a position error (see ISO 5458) not acceptable for the required function. It could even happen that the tolerance achieved by a supposedly more exact process turns out worse than the dimension obtainable from the casting process, which in principle has a larger dimensional variability (and hence a larger guaranteed tolerance range). In fact, in CNC machines the positioning accuracy is very high (its influence can be ignored in most cases) and it is not responsible for, say, the position tolerance when drilling a hole. Factors such as room and part temperature, the build quality of the machining fixtures, the stiffness of the clamping system, rotary-table and part-positioning errors, whether there are pre-existing holes, whether the tool is properly balanced and whether the toolholder taper suits the machining type have more influence. It is therefore interesting that such a common, non-specific element in a manufacturing plant of the kind described, a robot that is already available and thus requires a very small additional investment, can improve the value chain by lowering manufacturing costs.
And if that handling robot, during the long idle periods while the CNC machines are cutting, could pick up a spindle and support the machining, it would be doubly interesting. It therefore seems worthwhile to characterize its behaviour and to explain what would be needed to make this possible, which is the purpose of this work. The robot architecture selected is of SCARA type. The search for a robot that is easy to model and to analyse kinematically and dynamically, without significant limitations on the multifunctionality of the requested operations, led to this choice over other architectures such as the 6-DOF anthropomorphic robots that are very popular in industry. This robot has 3 joints: 2 revolute joints (1 DOF each) and one cylindrical joint (2 DOF). The first revolute joint connects the ground (taken as link 1) to link 2; the second connects link 2 to link 3. These two arms can move horizontally in the X-Y plane. Link 3 is connected to link 4 by the cylindrical joint, whose motion is parallel to the Z axis. The robot has 4 degrees of freedom (4 motors). Regarding the possible tasks for this type of robot, its versatility covers both typical handling operations and cutting operations. One of the most common machining operations is drilling, which is therefore chosen for modelling and analysis; to bound the forces, solid drilling with a 9 mm diameter drill is considered. For the time being the robot is treated as a rigid body, since the dominant effect expected is that of the joint torques. The robot is modelled with the multibody system method. Within this method there are several formulations (e.g. Denavit-Hartenberg). D-H generates a very large number of equations and unknowns; those unknowns are difficult to interpret and, for each position, one must stop and think about what they mean. The formulation chosen here is therefore that of natural coordinates. This approach uses points and unit vectors to define the position of the different bodies, and allows them to be shared, when possible and desired, to define the kinematic joints while reducing the number of variables. The unknowns are intuitive, the constraint equations are very simple, and the number of equations and unknowns is considerably reduced. However, "pure" natural coordinates have two problems. First, two elements at an angle of 0° or 180° give rise to singular configurations that can cause trouble in the constraint equations and must be avoided. Second, they do not act directly on the definition or the origin of the motions. It is therefore very convenient to complement the formulation with angles and distances (relative coordinates), leading to mixed natural coordinates, the formulation finally chosen for this thesis. Mixed natural coordinates do not suffer from the singular configurations, and their most important advantage lies in their usefulness when applying driving forces or torques and when evaluating errors: by acting on the driving unknowns (angles or distances), they control the motors directly. The algorithm, the simulation and the post-processing of results were programmed in Matlab. To build the model in mixed natural coordinates, the robot must be modelled in two steps.
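To make the mixed-natural-coordinate idea concrete, the Python sketch below (the thesis itself used Matlab) writes the constraint vector Phi(q) = 0 for a much simplified planar two-link chain rather than the 4-DOF SCARA model: the natural coordinates are the Cartesian points P1 and P2, the relative (mixed) coordinates are the two joint angles, and the link lengths and angle values are illustrative assumptions. The planar toy case needs no unit vectors, unlike the thesis's spatial model.

import numpy as np

L1, L2 = 0.45, 0.35          # link lengths (m), illustrative constants
P0 = np.array([0.0, 0.0])    # fixed base point ("ground" link)

def constraints(q):
    """q = [x1, y1, x2, y2, phi1, phi2]; a valid configuration satisfies Phi(q) = 0."""
    x1, y1, x2, y2, phi1, phi2 = q
    return np.array([
        (x1 - P0[0])**2 + (y1 - P0[1])**2 - L1**2,                      # constant length of link 1
        (x2 - x1)**2 + (y2 - y1)**2 - L2**2,                            # constant length of link 2
        (x1 - P0[0])*np.sin(phi1) - (y1 - P0[1])*np.cos(phi1),          # phi1 ties point P1 to its joint angle
        (x2 - x1)*np.sin(phi1 + phi2) - (y2 - y1)*np.cos(phi1 + phi2),  # phi2 ties point P2 to its joint angle
    ])

# 6 coordinates and 4 constraint equations leave 2 degrees of freedom,
# carried by the relative coordinates phi1 and phi2.
phi1, phi2 = np.deg2rad(30.0), np.deg2rad(45.0)
p1 = P0 + L1*np.array([np.cos(phi1), np.sin(phi1)])
p2 = p1 + L2*np.array([np.cos(phi1 + phi2), np.sin(phi1 + phi2)])
q = np.array([p1[0], p1[1], p2[0], p2[1], phi1, phi2])
print(np.round(constraints(q), 12))   # all four residuals are ~0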
The first model is based on natural coordinates. To validate it, a defined trajectory is proposed and the robot is analysed kinematically to check that it performs the requested motion while keeping its integrity as a multibody system. The points that configure the robot (in this case the start and end points of each link) are identified. As the elements are treated as rigid bodies, each link is defined by its start and end points (the most interesting ones for kinematics and dynamics) and by a unit vector not collinear with them. Unit vectors are placed wherever there is a rotation axis or wherever information about an angle is needed; they are not needed to measure distances, and the number of degrees of freedom need not coincide with the number of unit vectors. The length of each link is defined as a geometric constant. The constraints that define the nature of the robot and the relationships between the different elements and their environment are then established. The trajectory is generated as a continuous cloud of points defined in independent coordinates. Each set of independent coordinates defines, at a specific instant, a particular robot position and posture; to know it, the dependent coordinates at that instant must be found by solving the constraint equations with the Newton-Raphson method as a function of the independent coordinates. This is done because the dependent coordinates must satisfy the constraints, which is not the case for the independent coordinates. Once the validity of the model has been checked (first validation), the second model is built. Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute pairs (3 angles: ϕ1, ϕ2 and ϕ3) and a distance at the prismatic pair (1 distance: s). These relative coordinates become the new independent coordinates, replacing the Cartesian independent coordinates of the first model, which were natural coordinates. It is then necessary to check whether the unit-vector system of model 1 is sufficient. In this particular case, one additional unit vector had to be added so that the angles are perfectly determined by the corresponding dot-product and/or cross-product equations. The constraints must be increased by at least 4 equations, one per new unknown. The validation of model 2 has two phases. The first, as for model 1, is a kinematic analysis of the behaviour along a defined trajectory; velocities and accelerations could be obtained at this stage, but they are not needed, since only the finite motions and displacements are of interest. Once the consistency of the motion has been checked (second validation), the behaviour with interpolated trajectories is analysed kinematically. The kinematic analysis with interpolated trajectories works with a minimum of 3 master points; in this case 3 have been chosen: the start point, an intermediate point and the end point. Fifty interpolations are used in each segment (there is one segment between every 2 master points), giving a total of 100 interpolations. The interpolation method is cubic splines with the condition of constant acceleration at the start and end points, which generates the independent coordinates of the interpolated points of each segment. The dependent coordinates are then obtained by solving the non-linear constraint equations with the Newton-Raphson method.
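The same simplified planar two-link chain can be used to sketch this kinematic step in Python (again, the thesis used Matlab): the independent joint angles are interpolated through 3 master points with a natural cubic spline, approximating the zero end-acceleration condition, and the dependent point coordinates are recovered at each interpolated instant by Newton-Raphson on the constraint equations. All numerical values are illustrative assumptions.

import numpy as np
from scipy.interpolate import CubicSpline

L1, L2 = 0.45, 0.35   # link lengths (m), illustrative values (base fixed at the origin)

def phi_and_jacobian(d, ang1, ang2):
    """Constraint vector and its Jacobian w.r.t. the dependent coordinates d = [x1, y1, x2, y2]."""
    x1, y1, x2, y2 = d
    s1, c1 = np.sin(ang1), np.cos(ang1)
    s12, c12 = np.sin(ang1 + ang2), np.cos(ang1 + ang2)
    phi = np.array([
        x1**2 + y1**2 - L1**2,                    # rigid link 1
        (x2 - x1)**2 + (y2 - y1)**2 - L2**2,      # rigid link 2
        x1*s1 - y1*c1,                            # angle ang1 (cross-product equation)
        (x2 - x1)*s12 - (y2 - y1)*c12,            # angle ang2 (cross-product equation)
    ])
    jac = np.array([
        [2*x1,          2*y1,          0.0,          0.0],
        [-2*(x2 - x1), -2*(y2 - y1),   2*(x2 - x1),  2*(y2 - y1)],
        [s1,           -c1,            0.0,          0.0],
        [-s12,          c12,           s12,         -c12],
    ])
    return phi, jac

def solve_position(ang1, ang2, d0, tol=1e-10, max_iter=25):
    """Newton-Raphson for the dependent coordinates; the guess d0 selects the solution branch."""
    d = d0.copy()
    for _ in range(max_iter):
        phi, jac = phi_and_jacobian(d, ang1, ang2)
        if np.linalg.norm(phi) < tol:
            break
        d = d - np.linalg.solve(jac, phi)
    return d

# Three master points for (phi1, phi2) and a natural cubic spline (zero end acceleration),
# sampled 50 times per segment as described in the abstract.
t_master = np.array([0.0, 1.0, 2.0])
angles_master = np.deg2rad([[10.0, 20.0], [45.0, 60.0], [90.0, 30.0]])
spline = CubicSpline(t_master, angles_master, bc_type='natural')
t = np.concatenate([np.linspace(0.0, 1.0, 51)[:-1], np.linspace(1.0, 2.0, 51)])

d = np.array([L1, 0.0, L1 + L2, 0.0])   # initial guess: arm stretched along x
tip_path = []
for ang1, ang2 in spline(t):
    d = solve_position(ang1, ang2, d)   # each solution seeds the next instant
    tip_path.append(d[2:4])
print(np.round(tip_path[0], 3), np.round(tip_path[-1], 3))   # end point at the first and last samples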
The cubic spline method is very smooth, so if a trajectory with at least 2 clearly different movements is to be modelled, it must be built in 2 segments that are joined afterwards. This would be the case when one motor is deliberately kept stopped during the first movement and a different one during the second (and so on). Once the motion is obtained, the independent velocities and accelerations are computed using numerical differentiation formulas. The process is analogous to the one explained above, recalling the imposed condition that the acceleration at t = 0 and at the final instant is taken as 0. The dependent velocities and accelerations are computed by solving the corresponding derivatives of the constraint equations. The consistency of the interpolated motion is checked again in a third validation of the model. Inverse dynamics computes, for a defined motion (position, velocity and acceleration known at each instant of time) and for known external forces (for example the weights), the forces that must be applied at the motors (where there is control) to obtain that motion. In inverse dynamics each instant of time is independent of the others and has a known position, velocity, acceleration and set of forces. In this particular case only the forces due to weight are applied for the moment, although forces of any other nature could have been incorporated if desired. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces taken into account (the weight) is computed. As the final result of the inverse dynamic analysis, the torques that the 4 motors must exert to reproduce the prescribed motion under the acting forces are obtained. The fourth validation of the model consists of confirming that the motion produced by applying the torques obtained from the inverse dynamics coincides with the motion from the kinematic analysis (the theoretical motion). For this, direct dynamics is required. Direct dynamics computes the motion of the robot that results from applying given torques at the motors and given forces on the robot. Since none of the conditions obtained in the inverse dynamics has changed (motor torques and inertial forces due to the weight of the links), the resulting real motion must equal the theoretical one; when this is the case, the robot is considered ready to work. If an external machining force not contemplated in the inverse dynamics is introduced and the motors are assigned the same torques obtained from the inverse dynamic problem, the real motion obtained no longer matches the theoretical motion. Closed-loop control is based on continuously comparing the real motion with the desired one and introducing the corrections needed to minimize or cancel the differences; gains are applied as corrections in position and/or velocity to remove them. The position error is evaluated as the difference, at each point, between the theoretical motion desired in the kinematic analysis and the real motion obtained for each machining force and a specific gain. Finally, the position error obtained for each machining force and each of the gains considered is mapped, charting the best accuracy the robot can deliver for each operation required of it, and under which conditions.
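As a single-joint illustration of this closed-loop idea (not the thesis's 4-motor Matlab model), the Python sketch below drives one joint along a desired trajectory with the inverse-dynamics feedforward torque plus a proportional-derivative correction, adds a constant disturbance torque standing in for the machining force, and reports the maximum position error for two gain settings; the inertia, gains and disturbance values are invented.

# Hypothetical 1-DOF illustration: feedforward (inverse dynamics) plus PD feedback
# under a constant disturbance torque that stands in for the machining force.
import numpy as np

J = 0.8                      # joint inertia (kg m^2), illustrative
dt, T = 1e-3, 2.0            # time step and duration (s)
t = np.arange(0.0, T, dt)

# Desired joint motion: smooth rest-to-rest quintic from 0 to 90 degrees.
s = t / T
q_des = np.deg2rad(90.0) * (10*s**3 - 15*s**4 + 6*s**5)
qd_des = np.gradient(q_des, dt)
qdd_des = np.gradient(qd_des, dt)

def simulate(kp, kv, tau_disturb):
    """Integrate the joint under feedforward + PD control; return the max |position error| (rad)."""
    q, qd = q_des[0], qd_des[0]
    max_err = 0.0
    for k in range(len(t)):
        tau_ff = J * qdd_des[k]                               # inverse-dynamics torque
        tau_fb = kp*(q_des[k] - q) + kv*(qd_des[k] - qd)      # closed-loop correction
        qdd = (tau_ff + tau_fb + tau_disturb) / J             # direct dynamics of the joint
        qd += qdd * dt
        q += qd * dt
        max_err = max(max_err, abs(q_des[k] - q))
    return max_err

for tau_disturb in (0.0, 5.0):                                # 5 N m as a stand-in machining disturbance
    for kp, kv in ((50.0, 10.0), (400.0, 40.0)):
        err = np.degrees(simulate(kp, kv, tau_disturb))
        print(f"disturbance {tau_disturb:.0f} N m, kp={kp:.0f}, kv={kv:.0f} "
              f"-> max position error {err:.3f} deg")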
Abstract:
Fatigue is a non-specific symptom frequently found in the population. It is defined as a sensation of deep physical tiredness, loss of energy or even exhaustion, and it is important to differentiate it from depression or weakness. Depressive and anxiety disorders are the most frequent psychiatric disorders in the elderly and almost always have serious consequences in this age group. This study aims to evaluate the influence of anxiety and depression on the onset of fatigue and on the evolution of health problems and behaviours peculiar to the ageing process. It is a case-control study investigating anxiety, depression and fatigue. Sixty-one individuals aged 60 years or older were evaluated. A control group of 60 young individuals (up to 35 years of age) was selected among students of the Centro Universitário de Santo André; participants answered a General Characteristics Questionnaire, the State-Trait Anxiety Inventory, the Beck Depression Inventory and the Fatigue Severity Scale. The elderly group had a significantly higher score than the control group on the Fatigue Severity Scale: mean score 36.87 ± 14.61 versus 31.47 ± 12.74 in controls (t = 2.167; df = 119; p = 0.032). The elderly group also had significantly higher scores on the Beck scale (10.54 ± 8.63) than controls (6.83 ± 7.95; t = 2.455; df = 119; p = 0.016). Considering only the elderly group, a significant correlation was observed between the Fatigue Severity Scale and Beck Depression Inventory scores (Pearson correlation = 0.332; p = 0.009). Still within the elderly group, a significant difference in Fatigue Severity Scale scores was observed in relation to regular physical activity (mean score 31.55 ± 13.36 among those who exercised regularly; t = 2.203; df = 58; p = 0.032). From the analysis of these results it can be concluded that the elderly group had statistically significantly higher scores than the control group, presenting more symptoms of fatigue and depression. These fatigue symptoms occurred together with depressive symptoms, suggesting a possible correlation between them, which was confirmed when only the elderly were analysed. Considering only the elderly group, those who practised regular physical activity presented fewer fatigue symptoms than those who did not.
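The group comparison reported above (t = 2.167, df = 119, p = 0.032) can be reproduced from the summary statistics alone; the Python sketch below assumes a pooled (equal-variance) two-sample t-test, which is consistent with the reported degrees of freedom n1 + n2 - 2.

# Reproducing the reported fatigue-score comparison from summary statistics,
# assuming a pooled (equal-variance) two-sample t-test.
from math import sqrt
from scipy import stats

n1, mean1, sd1 = 61, 36.87, 14.61   # elderly group, Fatigue Severity Scale
n2, mean2, sd2 = 60, 31.47, 12.74   # young control group

df = n1 + n2 - 2
pooled_var = ((n1 - 1)*sd1**2 + (n2 - 1)*sd2**2) / df
t_stat = (mean1 - mean2) / sqrt(pooled_var * (1/n1 + 1/n2))
p_value = 2 * stats.t.sf(abs(t_stat), df)
print(f"t = {t_stat:.3f}, df = {df}, p = {p_value:.3f}")   # close to the reported t = 2.167, p = 0.032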
Abstract:
A 14 nt DNA sequence 5′-AGAATGTGGCAAAG-3′ from the zinc finger repeat of the human KRAB zinc finger protein gene ZNF91 bearing the intercalator 2-methoxy,6-chloro,9-amino acridine (Acr) attached to the sugar–phosphate backbone in various positions has been shown to form a specific triple helix (triplex) with a 16 bp hairpin (intramolecular) or a two-stranded (intermolecular) duplex having the identical sequence in the same (parallel) orientation. Intramolecular targets with the identical sequence in the antiparallel orientation and a non-specific target sequence were tested as controls. Apparent binding constants for formation of the triplex were determined by quantitating electrophoretic band shifts. Binding of the single-stranded oligonucleotide probe sequence to the target led to an increase in the fluorescence anisotropy of acridine. The parallel orientation of the two identical sequence segments was confirmed by measurement of fluorescence resonance energy transfer between the acridine on the 5′-end of the probe strand as donor and BODIPY-Texas Red on the 3′-amino group of either strand of the target duplex as acceptor. There was full protection from OsO4-bipyridine modification of thymines in the probe strand of the triplex, in accordance with the presumed triplex formation, which excluded displacement of the homologous duplex strand by the probe–intercalator conjugate. The implications of these results for the existence of protein-independent parallel triplexes are discussed.
Abstract:
Sephadex-binding RNA ligands (aptamers) were obtained through in vitro selection. They could be classified into two groups based on their consensus sequences and the aptamers from both groups showed strong binding to Sephadex G-100. One of the highest affinity aptamers, D8, was chosen for further characterization. Aptamer D8 bound to dextran B512, the soluble base material of Sephadex, but not to isomaltose, isomaltotriose and isomaltotetraose, suggesting that its optimal binding site might consist of more than four glucose residues linked via α-1,6 linkages. The aptamer was very specific to the Sephadex matrix and did not bind appreciably to other supporting matrices, such as Sepharose, Sephacryl, cellulose or pustulan. Using Sephadex G-100, the aptamer could be purified from a complex mixture of cellular RNA, giving an enrichment of at least 60 000-fold, compared with a non-specific control RNA. These RNA aptamers can be used as affinity tags for RNAs or RNA subunits of ribonucleoproteins to allow rapid purification from complex mixtures of RNA using only Sephadex.
Abstract:
Analyses on DNA microarrays depend considerably on spot quality and a low background signal of the glass support. By using betaine as an additive to a spotting solution made of saline sodium citrate, both the binding efficiency of spotted PCR products and the homogeneity of the DNA spots is improved significantly on aminated surfaces such as glass slides coated with the widely used poly-l-lysine or aminosilane. In addition, non-specific background signal is markedly diminished. Concomitantly, during the arraying procedure, the betaine reduces evaporation from the microtitre dish wells, which hold the PCR products. Subsequent blocking of the chip surface with succinic anhydride was improved considerably in the presence of the non-polar, non-aqueous solvent 1,2-dichloroethane and the acylating catalyst N-methylimidazole. This procedure prevents the overall background signal that occurs with the frequently applied aqueous solvent 1-methyl-2-pyrrolidone in borate buffer because of DNA that re-dissolves from spots during the blocking process, only to bind again across the entire glass surface.
Abstract:
The pattern of expression of two genes coding for proteins rich in proline, HyPRP (hybrid proline-rich protein) and HRGP (hydroxyproline-rich glycoprotein), has been studied in maize (Zea mays) embryos by RNA analysis and in situ hybridization. mRNA accumulation is high during the first 20 d after pollination, and disappears in the maturation stages of embryogenesis. The two genes are also expressed during the development of the pistillate spikelet and during the first stages of embryo development in adjacent but different tissues. HyPRP mRNA accumulates mainly in the scutellum and HRGP mRNA mainly in the embryo axis and the suspensor. The two genes appear to be under the control of different regulatory pathways during embryogenesis. We show that HyPRP is repressed by abscisic acid and stress treatments, with the exception of cold treatment. In contrast, HRGP is affected positively by specific stress treatments.
Abstract:
Immune cell-derived opioid peptides can activate opioid receptors on peripheral sensory nerves to inhibit inflammatory pain. The intrinsic mechanisms triggering this neuroimmune interaction are unknown. This study investigates the involvement of endogenous corticotropin-releasing factor (CRF) and interleukin-1beta (IL-1). A specific stress paradigm, cold water swim (CWS), produces potent opioid receptor-specific antinociception in inflamed paws of rats. This effect is dose-dependently attenuated by intraplantar but not by intravenous alpha-helical CRF. IL-1 receptor antagonist is ineffective. Similarly, local injection of antiserum against CRF, but not to IL-1, dose-dependently reverses this effect. Intravenous anti-CRF is only inhibitory at 10^4-fold higher concentrations and intravenous CRF does not produce analgesia. Pretreatment of inflamed paws with an 18-mer 3'-3'-end inverted CRF-antisense oligodeoxynucleotide abolishes CWS-induced antinociception. The same treatment significantly reduces the amount of CRF extracted from inflamed paws and the number of CRF-immunostained cells without affecting gross inflammatory signs. A mismatch oligodeoxynucleotide alters neither the CWS effect nor CRF immunoreactivity. These findings identify locally expressed CRF as the predominant agent triggering opioid release within inflamed tissue. Endogenous IL-1, circulating CRF and anti-inflammatory effects are not involved. Thus, an intact immune system plays an essential role in pain control, which is important for understanding pain in immunosuppressed patients with cancer or AIDS.
Abstract:
Plasmids encoding various external guide sequences (EGSs) were constructed and inserted into Escherichia coli. In strains harboring the appropriate plasmids, the expression of fully induced beta-galactosidase and alkaline phosphatase activity was reduced by more than 50%, while no reduction in such activity was observed in strains with non-specific EGSs. The inhibition of gene expression was virtually abolished at restrictive temperatures in strains that were temperature-sensitive for RNase P (EC 3.1.26.5). Northern blot analysis showed that the steady-state copy number of EGS RNAs was several hundred per cell in vivo. A plasmid that contained a gene for M1 RNA covalently linked to a specific EGS reduced the level of expression of a suppressor tRNA that was encoded by a separate plasmid. Similar methods can be used to regulate gene expression in E. coli and to mimic the properties of cold-sensitive mutants.
Abstract:
Plants can defend themselves from potential pathogenic microorganisms through a complex interplay of signalling pathways: activation of the MAPK cascade, transcription of defence-related genes, production of reactive oxygen species and nitric oxide, and synthesis of other defensive compounds such as phytoalexins. These events are triggered by the recognition of pathogen effectors (effector-triggered immunity) or PAMPs (PAMP-triggered immunity). The Cerato-Platanin Family (CPF) members are Cys-rich secreted proteins localized on fungal cell walls, involved in several aspects of fungal development and pathogen-host interactions. Although more than a hundred CPF genes have been identified and analysed, structural and functional characterization of the expressed proteins has been restricted to only a few members of the family. Interestingly, these proteins have been shown to bind chitin with diverse affinity, and after foliar treatment they elicit defence mechanisms in host and non-host plants. This property makes cerato-platanins interesting candidates for developing new fungal elicitors with applications in sustainable agriculture. This study focuses on cerato-platanin (CP), the core member of the family, and on the orthologous cerato-populin (Pop1). The latter shows 62% identity and 73% overall homology with CP. Both proteins are able to induce MAPK phosphorylation, production of reactive oxygen species and nitric oxide, overexpression of defence-related genes, programmed cell death and synthesis of phytoalexins. CP, however, when compared with Pop1, induces a faster response and, in some cases, a stronger activity on plane tree leaves. The aim of the present research is to verify whether the dissimilarities observed in the defence-elicitation activity of these proteins can be associated with their structural and dynamic features. Taking advantage of the available CP NMR structure, the 3D structure of Pop1 was obtained by homology modelling; experimental residual dipolar couplings and 1H, 15N, 13C resonance assignments were used to validate the model. Previous work on CPF members identified the highly conserved random-coil regions (loops b1-b2 and b2-b3) as necessary and sufficient to induce necrosis in plant leaves, so this region was investigated in both Pop1 and CP. In the two proteins the loops differ, in their primary sequence, by a few mutations and an insertion, with a consequent diversification of the proteins' electrostatic surface. A set of 2D and 3D NMR experiments was performed to characterize both the spatial arrangement and the dynamic features of the loops. NOE data revealed a more extended network of interactions between the loops in Pop1 than in CP. In addition, in Pop1 we identified a Lys25/Asp52 salt bridge and a strong hydrophobic interaction between Phe26 and Trp53. These structural features were expected not only to affect the loops' spatial arrangement, but also to reduce their conformational freedom. Relaxation data and the order parameter S2 indeed highlighted reduced flexibility, in particular for loop b1-b2 of Pop1.
In vitro NMR experiments in which Pop1 and CP were titrated with oligosaccharides supported the hypothesis that the structural and dynamic differences of the loops may be responsible for the different chitin-binding properties of the two proteins: CP selectively binds chitin tetramers in a shallow groove on one side of the barrel defined by loops b1-b2, b2-b3 and b4-b5, whereas Pop1 interacts with oligosaccharides in a non-specific fashion. Because the region involved in chitin binding is also responsible for the defence-elicitation activity, possibly being recognized by plant receptors, it is reasonable to expect that these structural and dynamic differences may also account for the different extent of defence elicitation. To test this hypothesis, the initial steps of a protocol aimed at identifying a receptor for CP in silico are presented.
Abstract:
Toxoplasma gondii is a coccidian parasite with a global distribution. The definitive host is the cat (and other felids), while all warm-blooded animals, including humans, can act as intermediate hosts. Sexual reproduction (gametogony) takes place in the definitive host, and oocysts are released into the environment, where they sporulate and become infective. In intermediate hosts the cycle is extra-intestinal and results in the formation of tachyzoites and bradyzoites. The tachyzoite is the invasive, proliferative stage and, on entering a cell, multiplies asexually by endodyogeny; bradyzoites within tissue cysts are the latent form. T. gondii is a food-borne parasite causing toxoplasmosis, which can occur in both animals and humans. In Europe and North America, infection in humans is asymptomatic in more than 80% of cases; in the remaining cases patients present with fever, cervical lymphadenopathy and other non-specific clinical signs. Nevertheless, toxoplasmosis is life-threatening in immunocompromised subjects. The main organs involved are the brain (toxoplasmic encephalitis), heart (myocarditis), lungs (pulmonary toxoplasmosis), eyes and pancreas, and the parasite can be isolated from these tissues. Another aspect is congenital toxoplasmosis, which may occur in pregnant women; the severity of the consequences depends on the stage of pregnancy at which maternal infection occurs. Acute toxoplasmosis in the developing foetus may result in blindness, deformation, mental retardation or even death. The European Food Safety Authority (EFSA), in recent reports on zoonoses, highlighted that an increasing number of animals in the EU were found to be infected with T. gondii (as reported by the European Member States for pigs, sheep, goats, hunted wild boar and hunted deer in 2011 and 2012). In addition, high prevalence values have been detected in cats, cattle and dogs, as well as in several other animal species, indicating the wide distribution of the parasite among domestic and wild species. The main route of transmission is the consumption of food and water contaminated with sporulated oocysts; however, infection through the ingestion of meat contaminated with tissue cysts is frequent. Finally, although less frequently, other food products contaminated with tachyzoites, such as milk, may also pose a risk. The importance of this parasite as a risk for human health was recently highlighted by EFSA's opinion on the modernization of meat inspection, in which Toxoplasma gondii was identified as a relevant hazard to be addressed in revised meat inspection systems for pigs, sheep, goats, farmed wild boar and farmed deer (Call for proposals GP/EFSA/BIOHAZ/2013/01). The risk of infection is more strongly associated with animals reared outdoors, including free-range and organic farms, where biosecurity measures are less strict than in large-scale industrial farms; in the latter, animals are kept under strict biosecurity measures, including barriers that prevent access by cats, making soil contamination by oocysts nearly impossible. A growing consumer demand for organic products from free-range livestock, concern for animal welfare, and the desire for the best quality of derived products have all led to an increase in free-range farming.
The risk of Toxoplasma gondii infection increases when animals have access to the outdoor environment. The absence of data for Italy, together with the need for an in-depth study of both the prevalence and the genotypes of T. gondii present in the country, were the main reasons for developing this thesis project. A total of 152 animals were analyzed, including 21 free-range pigs (Suino Nero breed), 24 transhumant Cornigliese sheep, 77 free-range chickens and 21 wild animals. Serology (on meat juice) and detection of T. gondii DNA by PCR were performed on all samples, except for the wild animals (no serology). An in vitro test was also applied, with the aim of finding a valid alternative to the bioassay, currently the gold standard: meat samples were digested and seeded onto Vero cells, checked daily, and a RT-PCR protocol was used to detect any increase in the amount of parasite DNA, demonstrating the viability of the parasite. Several samples were also genetically characterized using a PCR-RFLP protocol to define the major genotypes circulating in the study area. Within the context of a project promoted by the Istituto Zooprofilattico of Pavia and Brescia (Italy), experimentally infected pigs were also analyzed; one of the aims was to verify whether the production process of cured “Prosciutto di Parma” is able to kill the parasite. Our contribution included the digestion and seeding of homogenates onto Vero cells and the application of an ELISA test to meat juice. This thesis project has highlighted the widespread diffusion of T. gondii in the study area. Pigs, sheep, chickens and wild animals showed a high prevalence of infection: seroprevalence was 95.2%, 70.8% and 36.4%, respectively, indicating the spread of the parasite among numerous animal species, while for wild animals the average infection rate determined by PCR was 44.8%. Meat-juice serology appears to be a very useful, rapid and sensitive method for screening carcasses at the slaughterhouse and for marketing “Toxo-free” meat. The results obtained on fresh pork (derived from the experimentally infected pigs) before slaughter (on serum) and after slaughter (on meat juice) showed good concordance. Free-range farming was shown to pose a marked risk for meat-producing animals and, as a consequence, for the consumer. Genotyping revealed the diffusion of Type II and, at a lower percentage, Type III strains: in pigs the Type II profile predominates, whereas in wildlife Type III and mixed profiles (mainly Type II/III) are more widespread. The mixed genotypes (Type II/III) could be explained by the presence of mixed infections. Free-range farming and contact with wildlife could facilitate the spread of the parasite and the generation of new and atypical strains, with unknown consequences for human health. The curing process examined in this study appears to produce hams that do not pose a serious concern to human health and that could therefore be marketed and consumed without significant health risk. Little is known about the diffusion and genotypes of T. gondii in wild animals; further studies on how new and mixed genotypes may be introduced into the domestic cycle would be of great interest, particularly using NGS techniques, which are more rapid and sensitive than PCR-RFLP. Furthermore, wildlife can serve as a valuable indicator of environmental contamination with T. gondii oocysts.
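To make the reported prevalence figures easier to interpret, the sketch below shows one common way of attaching 95% confidence intervals (Wilson score method) to proportions such as those above. The positive counts are approximate back-calculations from the reported percentages and sample sizes, not data taken from the thesis, and the wild-animal PCR result is omitted because its denominator cannot be reconstructed from the abstract.

```python
# Minimal sketch: Wilson 95% confidence intervals for the seroprevalence values
# reported above. Positive counts are approximate back-calculations from the
# reported percentages and sample sizes, not original data.
from math import sqrt

Z = 1.96  # ~95% two-sided

def wilson_ci(positives, n, z=Z):
    """Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

samples = {
    "free-range pigs (meat-juice serology)": (20, 21),     # ~95.2%
    "Cornigliese sheep (meat-juice serology)": (17, 24),   # ~70.8%
    "free-range chickens (meat-juice serology)": (28, 77), # ~36.4%
}

for label, (pos, n) in samples.items():
    p, lo, hi = wilson_ci(pos, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%}, n={n})")
```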
Other future perspectives include, for pigs, expanding the number of free-range animals and farms analyzed and, for Cornigliese sheep, evaluating other food products such as raw milk and cheeses. It would also be worthwhile to validate an ELISA test for infection in chickens, using both serum and meat juice on a larger number of animals, and to do the same for wildlife (at present no ELISA tests are available for these species and the MAT is the reference method). The results on Parma ham do not suggest a significant risk for consumers; however, further studies are needed to complete the risk assessment and to analyze other products cured with technological processes different from those investigated here. For example, it would be interesting to analyze products such as salami, which are produced from pork all over Italy with very different recipes, including in domestic and rural contexts, and are characterized by a very short curing period (1 to 6 months). Toxoplasma gondii is one of the most widespread food-borne parasites worldwide. Public health safety, improved animal production and the protection of endangered livestock species are all important goals of research into reliable diagnostic tools for this infection. Future studies on the epidemiology, survival and genotypes of T. gondii in meat-producing animals should remain a research priority.
Resumo:
Rising energy consumption and growing environmental concern over the emission of polluting gases have created worldwide support for research into new, non-polluting energy technologies. Rechargeable lithium-air batteries in non-aqueous solvents have a high theoretical energy density (5200 Wh kg-1), which makes them promising for application in stationary devices and electric vehicles. However, many cathode-related problems must be overcome before this technology can be applied, for example the low reversibility of the reactions, low power, and the instability of the electrode materials and electrolyte solvents. In this work, a kinetic model was therefore applied to experimental electrochemical impedance spectroscopy data to obtain the rate constants of the elementary steps of the oxygen reduction reaction (ORR) mechanism, which made it possible to investigate the influence of parameters such as the type and particle size of the electrocatalyst and the role of the solvent used in the ORR, and to better understand the reactions occurring at the cathode of this battery. The initial investigation used less complex systems, with a platinum foil or a glassy carbon electrode as the working electrode in 1,2-dimethoxyethane (DME)/lithium perchlorate (LiClO4). Subsequently, more complex systems containing carbon nanoparticles favored the adsorption of oxygen molecules and slightly increased (by one order of magnitude) the rate of the lithium superoxide formation step (the rate-determining step) compared with the platinum and glassy carbon electrodes, an effect attributed to side groups mediating electron transfer to the oxygen molecules. However, rapid passivation of the electrocatalytic surface was observed through the formation of thin Li2O2 and Li2CO3 films, increasing the battery overpotential during charging (potential difference between charge and discharge > 1 V). In addition, the incorporation of platinum nanoparticles (Ptnp), instead of the platinum foil, increased the rate constant of the rate-determining step by two orders of magnitude, which can be attributed to a change in the electronic properties of the metallic d-band caused by the nanometric size of the particles; these modifications contributed to better energy efficiency compared with the system without an electrocatalyst. However, the Ptnp proved to be non-specific for the ORR, catalyzing degradation reactions of the electrolyte solvent and rapidly decreasing the energy efficiency of the practical device owing to the accumulation of material on the electrode. The use of an ionic liquid as the electrolyte solvent, instead of DME, promoted greater stabilization of the superoxide intermediate formed in the first electron-transfer step, owing to its interaction with the ionic-liquid cations in solution; this resulted in a rate constant for superoxide formation three orders of magnitude higher than that obtained with the same glassy carbon electrode in DME, in addition to reducing solvent degradation reactions. These factors may contribute to higher power and better cyclability of lithium-air batteries operating with ionic liquids.
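The abstract above describes fitting a kinetic model to electrochemical impedance spectroscopy (EIS) data to extract the rate constants of the elementary ORR steps. As a much simplified illustration of the general fitting approach, and not of the mechanistic model actually used in the thesis, the sketch below fits a basic Randles-type circuit, Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl), to synthetic impedance data with scipy; in a real analysis the fitted charge-transfer resistance would then be related to the rate constant of the rate-determining superoxide-formation step.

```python
# Minimal sketch: least-squares fit of a simple Randles-type circuit to
# impedance data, as a stand-in for the full ORR kinetic model. Synthetic data.
import numpy as np
from scipy.optimize import least_squares

def randles_impedance(params, omega):
    """Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl)."""
    rs, rct, cdl = params
    return rs + rct / (1 + 1j * omega * rct * cdl)

# Synthetic "measured" spectrum: Rs = 20 ohm, Rct = 500 ohm, Cdl = 2e-5 F, plus noise
rng = np.random.default_rng(0)
omega = 2 * np.pi * np.logspace(-1, 4, 60)          # 0.1 Hz to 10 kHz
z_true = randles_impedance((20.0, 500.0, 2e-5), omega)
z_meas = z_true + rng.normal(0, 2, omega.size) + 1j * rng.normal(0, 2, omega.size)

def residuals(params):
    diff = randles_impedance(params, omega) - z_meas
    # Stack real and imaginary parts so the residual vector is real-valued
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=[10.0, 100.0, 1e-6], bounds=(0, np.inf))
rs, rct, cdl = fit.x
print(f"Rs  = {rs:.1f} ohm\nRct = {rct:.1f} ohm\nCdl = {cdl:.2e} F")
```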
Resumo:
Entry into Elementary School (Ensino Fundamental, EF) has been viewed as a moment of transition because of the new demands it places on the child. In this context, children's vulnerability to stress appears to increase, especially among those who have greater difficulty adapting to these demands. The broad objective of this study was to investigate transition-related stress in the context of the nine-year EF, based on a developmental view combined with a perspective of exposure to everyday stressors. Specifically, the study investigated the relationship between competencies and stress symptoms in the 1st year of EF, the developmental course of stress symptoms and stress perceptions over the first two years of EF, their associations with the adaptive tasks of the transition, and the influence of the school on stress indicators. Finally, explanatory models were explored for the stress indicators observed in the 2nd year. Following a prospective methodology, we assessed adjustment indicators and competencies related to the children's academic, social and behavioral performance in the 1st year, stress over the first two years, and school characteristics (location and IDEB score). The study included 157 1st-year EF students, 85 boys and 72 girls, with a mean age of 6 years and 10 months at the start of the research. All had two years of preschool (Educação Infantil) experience and were enrolled in municipal schools in different regions of a city in the interior of the state of São Paulo. Their respective 1st-year teachers, 25 in total, also participated in the study as informants. The children completed the Children's Stress Scale (Escala de Stress Infantil), the School Stressors Inventory (Inventário de Estressores Escolares) and an objective assessment of academic performance (Provinha Brasil). The teachers assessed their students' social skills, externalizing and internalizing behavior problems, and academic competence using the teacher form of the Social Skills Rating System. Data analysis comprised descriptive statistics, comparisons, correlations and regressions. In the results, 57% of the students in the 1st year and 72% in the 2nd year reported stress symptoms at least at the alert phase. Children with stress in the 1st year showed lower levels of adjustment and competence and perceived their schools as more stressful with respect to their role as students and to interpersonal relationships. Moderate correlations between measures of stress indicators taken in the 1st and 2nd years suggest stability. The presence of stress symptoms increased from the 1st to the 2nd year, whereas the perception of school stressors did not change. Children with higher mean stress scores came from schools located in peripheral regions and with lower IDEB ratings. Prediction analyses showed that the social skill of responsibility and cooperation assessed in the 1st year was an important protective factor against stress symptoms in the 2nd year, whereas the child's perception of tension in interpersonal relationships in the 1st year was the main risk factor for future stress symptomatology. In this regard, interventions emphasizing the promotion of children's social skills may be fruitful in preventing stress.
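To illustrate the type of prediction analysis described above (protective and risk factors for 2nd-year stress), the sketch below fits a logistic regression on a small synthetic data set with hypothetical variables standing in for 1st-year responsibility/cooperation social skills and perceived interpersonal tension; it is not the model or the data from the study itself.

```python
# Minimal sketch of the kind of prediction analysis described above:
# logistic regression of 2nd-year stress symptoms (yes/no) on 1st-year
# predictors. Data and variable names are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 157  # same sample size as the study, but values are simulated

social_skills = rng.normal(0, 1, n)          # responsibility/cooperation (z-score)
interpersonal_tension = rng.normal(0, 1, n)  # perceived tension in relationships

# Simulate outcomes so that skills protect and tension increases risk (illustrative)
logit = -0.2 - 0.8 * social_skills + 1.0 * interpersonal_tension
stress_year2 = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([social_skills, interpersonal_tension])
model = LogisticRegression().fit(X, stress_year2)

for name, coef in zip(["social skills", "interpersonal tension"], model.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(coef):.2f}")
```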