966 results for CONTROLLED ENVIRONMENT


Relevance: 60.00%

Abstract:

BACKGROUND: Bulimia nervosa (BN) has been associated with dysregulation of the central catecholaminergic system. An instructive way to investigate the relationship between catecholaminergic function and psychiatric disorder has been to study behavioral responses to experimental catecholamine depletion (CD). The purpose of this study was to examine a possible catecholaminergic dysfunction in the pathogenesis of bulimia nervosa. METHODS: CD was achieved by oral administration of alpha-methyl-para-tyrosine (AMPT) in 18 remitted female subjects with BN (rBN) and 31 healthy female control subjects. The study was a randomized, double-blind, placebo-controlled, crossover, single-site experimental trial. The main outcome measures were bulimic symptoms assessed by the Eating Disorder Examination-Questionnaire. Measures were assessed before and 26, 30, 54, 78, and 102 hours after the first AMPT or placebo administration. RESULTS: In the experimental environment (a controlled environment with a low level of food cues), rBN subjects had a greater increase in eating disorder symptoms during CD than healthy control subjects (condition × diagnosis interaction, p < .05). In the experimental environment, rBN subjects experienced fewer bulimic symptoms than in the natural environment (an environment uncontrolled for food cues) 36 hours after the first AMPT intake (environment × diagnosis interaction, p < .05). Serum prolactin levels increased significantly, and to a comparable degree across groups, after AMPT administration. CONCLUSIONS: This study suggests that rBN is associated with vulnerability to developing eating disorder symptoms in response to reduced catecholamine neurotransmission after CD. The findings support the notion of catecholaminergic dysfunction as a possible trait abnormality in BN.

Relevance: 60.00%

Abstract:

The use of infrared thermography to identify lameness in cattle has increased in recent years, largely because of its non-invasive nature, ease of automation, and continuing cost reductions. Thermography can identify thermal abnormalities in animals by characterizing an increase or decrease in the surface temperature of their skin. In particular, variation in superficial thermal patterns resulting from changes in blood flow can be used to detect inflammation or injury associated with conditions such as foot lesions. Thermography has been used not only as a diagnostic tool but also to evaluate routine farm management. Since 2000, 14 peer-reviewed papers discussing the assessment of thermography to identify and manage lameness in cattle have been published. Thermography performance varied widely across these studies. Nevertheless, thermography was shown to have utility for detecting contralateral temperature differences and the maximum foot temperature in areas of interest. These publications also make clear that a controlled environment is an important consideration before image scanning.
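As a minimal sketch of the contralateral-comparison idea these studies report: a limb is flagged when its maximum foot temperature exceeds that of the opposite limb by more than a threshold. The 0.5 °C threshold below is an assumed value for illustration, not one taken from the reviewed papers.

```python
# Hypothetical sketch of contralateral-difference screening from
# thermographic foot temperatures. The 0.5 degC threshold is an
# assumption for illustration only.

def flag_lameness(left_max_temp, right_max_temp, threshold=0.5):
    """Return which side (if any) is suspiciously warmer than the other."""
    diff = left_max_temp - right_max_temp
    if diff > threshold:
        return "left"
    if diff < -threshold:
        return "right"
    return None  # within normal contralateral variation

print(flag_lameness(34.8, 33.9))  # "left": 0.9 degC warmer
print(flag_lameness(33.0, 33.2))  # None: within threshold
```

In practice the threshold would have to be calibrated against the large between-study variation the review describes, which is one reason a controlled imaging environment matters.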

Relevance: 60.00%

Abstract:

The time-variable gravity field, which reflects variations of the mass distribution in the Earth system, is one of the key parameters for understanding the changing Earth. Mass variations are caused either by redistribution of mass in, on, or above the Earth's surface or by geophysical processes in the Earth's interior. The first set of observations of monthly variations of the Earth's gravity field was provided by the US/German GRACE satellite mission beginning in 2002. This mission is still providing valuable information to the science community. However, as GRACE has outlived its expected lifetime, the geoscience community is currently seeking successor missions in order to maintain the long time series of climate-relevant observations begun by GRACE. Several studies on science requirements and technical feasibility have been conducted in recent years. These studies required a realistic model of the time-variable gravity field in order to perform simulation studies on the sensitivity of satellites and their instrumentation. This was the primary reason for the European Space Agency (ESA) to initiate a study on ''Monitoring and Modelling individual Sources of Mass Distribution and Transport in the Earth System by Means of Satellites''. The goal of this interdisciplinary study was to create simulated time-variable gravity fields, as realistic as possible and based on coupled geophysical models, that could be used in simulation processes in a controlled environment. For this purpose, global atmosphere, ocean, continental hydrology, and ice models were used. The coupling was performed by using consistent forcing throughout the models and by including water flow between the different domains of the Earth system. In addition, gravity field changes due to solid-Earth processes, such as continuous glacial isostatic adjustment (GIA) and a sudden earthquake with co-seismic and post-seismic signals, were modelled.
All individual model results were combined and converted to spherical harmonic series of the gravity field, the quantity commonly used to describe the Earth's global gravity field. The result of this study is a twelve-year time series of 6-hourly time-variable gravity field spherical harmonics up to degree and order 180, corresponding to a global spatial resolution of 1 degree in latitude and longitude. In this paper, we outline the input data sets and the process of combining them into a coherent model of temporal gravity field changes. The resulting time series was used in several follow-on studies and is available to anyone interested.
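The correspondence between the truncation degree of the spherical harmonic series and its spatial resolution can be sketched with two one-line relations (a standard rule of thumb, not a formula taken from this abstract):

```python
# Relationship between spherical-harmonic truncation degree and spatial
# resolution, as used for the degree/order-180 series described above.

def sh_coefficient_count(l_max):
    """Total number of Stokes coefficients up to degree l_max.

    Each degree l contributes (l + 1) C_lm and l nonzero S_lm
    coefficients (S_l0 vanishes), giving (l_max + 1)**2 in total.
    """
    return (l_max + 1) ** 2

def half_wavelength_resolution_deg(l_max):
    """Approximate spatial resolution (half wavelength) in degrees:
    a degree-l harmonic has a wavelength of ~360/l degrees along a
    great circle, i.e. a resolvable feature size of ~180/l degrees."""
    return 180.0 / l_max

print(sh_coefficient_count(180))            # 32761 coefficients
print(half_wavelength_resolution_deg(180))  # 1.0 degree
```

This is why a degree/order-180 expansion corresponds to the 1-degree latitude/longitude resolution quoted in the abstract.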

Relevance: 60.00%

Abstract:

Higher-education students demand fast feedback on their assignments and the opportunity to repeat them if they get them wrong. Here a computer-based trainer for Signals and Systems students is presented. An application has been developed that automatically generates and assesses thousands of numerically different versions of several Signals and Systems problems. This applet guides the students toward the solution and automatically assesses and grades each student's proposed solution. Students can use the application to practice solving several types of basic Signals and Systems problems. After selecting the problem type, the student introduces a seed and the application generates a numerical version of the selected problem. The application then presents a sequence of questions that the student must solve, and it automatically assesses the answers. After solving a given problem, students can repeat the same numerical variation of the problem by introducing the same seed. In this way, they can review their solution with the help of the hints the application gives for wrong solutions. The application can also be used by the instructor as an automatic assessment tool. When the assessment is made in a controlled environment (examination classroom or laboratory), the instructor can use the same seed for all students. Otherwise, different seeds can be assigned to different students, so that each solves a different numerical variation of the proposed problem and cheating becomes an arduous task. Given a problem type, the mathematical or conceptual difficulty of the problem can vary depending on the numerical values of its parameters. The application makes it easy to select groups of seeds that yield numerical variations of similar mathematical or conceptual difficulty. This represents an advantage over randomised task assignment, where students are asked to solve tasks of different difficulty.
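The seed mechanism described above can be sketched as follows. The problem type, parameter ranges, and function names here are hypothetical illustrations, not the application's actual implementation; the point is only that the same seed deterministically reproduces the same numerical variant.

```python
import random

# Hypothetical sketch of seed-based exercise generation: the same seed
# always reproduces the same numerical variant, so a student (or a whole
# examination group) can revisit one problem, while different seeds
# yield different parameter sets for the same problem type.

def generate_first_order_problem(seed):
    """Return parameters for an illustrative first-order LTI exercise
    y'(t) + a*y(t) = b*x(t), drawn deterministically from the seed."""
    rng = random.Random(seed)   # local RNG: no shared global state
    a = rng.randint(1, 9)       # pole-location parameter
    b = rng.randint(1, 9)       # input gain
    return {"a": a, "b": b, "time_constant": 1.0 / a}

# Same seed -> identical variant; different seeds -> (usually) different.
p1 = generate_first_order_problem(42)
p2 = generate_first_order_problem(42)
assert p1 == p2
print(generate_first_order_problem(7) == p1)
```

Grouping seeds by difficulty then amounts to pre-evaluating each seed's parameters and binning the resulting variants, as the abstract describes.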

Relevance: 60.00%

Abstract:

Thermography for scientific research and practical purposes requires a series of procedures to obtain standardized images; one of the most important is the time required for acclimatization in the controlled environment. The objective of this study was therefore to identify the appropriate resting acclimatization time needed for young people's skin to reach thermal balance. Forty-four subjects participated in the study: 18 men (22.3 ± 3.1 years) and 26 women (21.7 ± 2.5 years). Thermographic images were collected using a thermal imager (Fluke®), totaling 44 images over a period of 20 minutes. Skin temperature (TSK) was measured at minutes 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 of the examination. The body regions of interest (ROI) analyzed included the hands, forearms, arms, thighs, legs, chest, and abdomen. We used the Friedman test with Dunn's post hoc test to establish the resting time required to reach TSK balance, and the Mann-Whitney test to compare age, BMI, body fat percentage, and temperature variations between men and women, always at a significance level of p < 0.05. Results showed that women had significantly greater temperature variations than men (p < 0.01) over time. In men, only the abdomen showed significant variation (p < 0.05) over the analyzed period, in both the anterior and posterior regions. In women, the anterior abdomen and thighs and the posterior hands, forearms, and abdomen showed significant differences (p < 0.05). Based on our results, it can be concluded that the resting time required to reach TSK balance in young men and women is variable, but for whole-body analysis at least 10 minutes is recommended for both sexes.

Relevance: 60.00%

Abstract:

The importance of knowing the environment well in an architectural project is that we can adapt it to our physiological needs for thermal comfort. We can therefore say that the building plays a fundamental role as a technique for controlling our environment. The building should give us a controlled environment in which we feel thermally comfortable, considering, moreover, that architecture by itself can achieve that comfort most of the time. When it does not, users tend to install mechanical equipment to generate cooling or heating artificially. It is therefore essential that our buildings interact correctly with the natural resources of the place in order to achieve thermal comfort. But achieving thermal comfort in every building of a city taken individually will not make the whole city thermally comfortable, since the complex interactions involved mean the problem must be faced as something systemic. That is to say, for a city or a complex to achieve the thermal comfort its inhabitants desire, it should have been planned according to urban variables that interact efficiently with the natural environment. Observation of certain old housing complexes in the interior of the Elqui Valley, Chile, and of the relationships between their urban and natural variables, reveals features that lead one to think that these complexes were environmentally planned and thereby achieved bioclimatic characteristics. The evidence, in the first place, of an urbanistic pattern in these old housing complexes suggests that it was a planned rural bioclimatic pattern, which makes the study of these complexes of great interest.
Until now, the few thermal comfort studies that exist in Chile have been oriented to isolated buildings and to the indoor thermal comfort of buildings in the urban sphere, and not at all to bioclimatic patterns of housing complexes in a rural setting such as the one addressed in this research. Moreover, studies of the urban climate differ from those of the rural climate, so further studies are needed to understand the problem better. For this reason, most of the cases mentioned in this study are contextualized in the urban sphere, for lack of other rural studies. It is in this sense that this research acquires real importance: it seeks to establish the relationship between rural morphological variables and the natural resources of the place that generate ideal thermal comfort for its inhabitants, and at the same time it analyzes the existence of a bioclimatic pattern in a village called Algarrobito, located in the basin of the Elqui Valley, Chile. The main purpose of this work is therefore to determine whether a bioclimatic pattern really exists that relates the rural and built morphology of the old villages of the Elqui Valley basin, Chile, to the local microclimate. The methodology is based first on studying the local microclimate by means of bioclimatic charts. To that end, climatological data were obtained from the meteorological stations located in the Elqui Valley basin, mainly those closest to the study site. Through an exhaustive review of the architectural information, field reconnaissance work in the selected village, and application of the local climograph, the different bioclimatic zones of the old village and potential study areas within the complex were identified.
This activity included a preliminary study of the local solar energy, winds, humidity, and temperatures and of their interaction with the complex, allowing a first approach to the problems of the outdoor space and the dwellings. Based on the site conditions, the vernacular architecture, and the materials, this made it possible to discover a pattern in the old complex that provided thermal comfort to its inhabitants, and to realize that the new complex built in the area did not follow that pattern, with the dysfunctions this entailed. This demonstrated, in the first place, the existence of a rural bioclimatic pattern; its benefits; its importance as a cause of the complex's thermal comfort, and hence of better energy efficiency; and also that the new complex does not follow this pattern at all, but that rectification is possible and, of course, that new residential developments in the Elqui Valley can be planned on the basis of the bioclimatic pattern discovered. ABSTRACT Knowing the environment of an architectural project is really important in order to adapt it to our physiological needs for thermal comfort. We can thus say that the building plays a key role as a technique for controlling our environment. The building should give us a controlled environment that makes us feel thermally comfortable, and it can usually reach pleasant temperatures by itself. When it cannot, people cool or heat the space with mechanical equipment. A correct interaction between buildings and natural resources is therefore important to reach thermal comfort. But achieving thermal comfort in all the buildings of a city as separate units will not make the whole city thermally comfortable, because the complex interactions mean the problem needs to be addressed as something systemic.
This means that, for a city or a complex to reach the thermal comfort its inhabitants desire, it should have been planned according to urban variables that interact efficiently with the natural environment. Observing some old housing complexes in the Elqui Valley, Chile, and the relationships between their natural and urban variables, certain features lead one to think that environmental planning produced a complex with bioclimatic features. First, the evidence of an urban pattern in those old housing complexes suggests that it was a planned urban pattern, which generates interest in its study. In Chile, the few existing studies on thermal comfort are oriented toward isolated buildings and indoor thermal comfort; bioclimatic urban patterns have not been studied at all. In this sense, this investigation acquires real importance and aims to establish the relationship between the urban variables and the natural resources of the place that generate good thermal comfort for its inhabitants. At the same time, the existence of a bioclimatic urban pattern in Algarrobito, located in the Elqui Valley basin, Chile, is analyzed. The main purpose of this work is therefore to determine whether a bioclimatic urban pattern really exists that links the urban and built form of the old villages with the local microclimate. The methodology is based on first studying the microclimate of the place through bioclimatic charts. To do this, climatological data were obtained from weather stations located in the Elqui Valley, near the place under study. The different bioclimatic zones of the old town and potential study areas in the complex were identified through an exhaustive review of the architectural information, field reconnaissance work in the selected town, and application of the local climograph.
This activity included a preliminary study of the local solar energy, winds, moisture, and temperatures, and of their interaction with the complex, allowing a first approximation to the problems of the outdoor space and the dwellings. Based on the conditions of the place, the vernacular architecture, and the materials, this allowed the discovery of an urban pattern in the old complex that gave thermal comfort to its inhabitants, and the realization that the new complex in the place did not follow this pattern, with the dysfunctions that entailed. These points demonstrated, first, the existence of a bioclimatic urban pattern; its benefits; its importance as a cause of the complex's thermal comfort, and therefore of better energy efficiency; and also that the new complex does not follow this pattern at all, but that rectification is possible and, of course, that new residential development in the Elqui Valley can be planned on the basis of the bioclimatic pattern discovered.

Relevance: 60.00%

Abstract:

Being able to classify precisely the application or program from which the flows making up the Internet traffic of a network originate gives companies and organizations a useful tool for managing their network resources, as well as the possibility of establishing policies to block or prioritize specific traffic. The proliferation of new applications and new techniques has made it difficult to use the well-known application port values assigned by the IANA (Internet Assigned Numbers Authority) to detect those applications. P2P (peer-to-peer) networks, the use of unknown or random ports, and the masquerading of many applications' traffic as HTTP and HTTPS in order to traverse firewalls and NATs (Network Address Translation), among other factors, create the need for new traffic-detection methods. The aim of this study is to develop a set of practices that accomplish this task through techniques that go beyond the observation of ports and other well-known values. Several methodologies exist. Deep Packet Inspection (DPI) is based on searching for signatures, i.e. patterns built from the contents of the packets, including the payload, that characterize each application. Machine-learning approaches use statistical analysis of flow parameters to determine which application the flows may belong to. Finally, there are more heuristic techniques based on intuition or the analyst's own knowledge of network traffic. Specifically, we propose using some of the above techniques together with data-mining techniques such as Principal Component Analysis (PCA) and clustering of statistics extracted from the flows contained in network traffic capture files.
This will involve configuring various parameters through an iterative trial-and-error process in search of a reliable traffic classification. The ideal result would be one in which each application present in the traffic is identified in a distinct cluster, or in clusters grouping applications of a similar nature. To that end, traffic captures will be created in a controlled environment, identifying each capture with its corresponding application, and the flows will then be extracted from those captures. Next, selected parameters of the packets belonging to those flows will be obtained, such as the arrival date and time or the length in octets of the IP packet. These parameters will be loaded into a MySQL database and used to compute statistics that help, in a following step, to classify the flows by data mining. Specifically, the PCA and clustering techniques will be applied using the RapidMiner software. Finally, the results obtained will be laid out in a confusion matrix that allows them to be properly evaluated. ABSTRACT. Being able to classify the applications that generate the traffic flows in an Internet network allows companies and organizations to implement efficient resource-management policies, such as prohibiting specific applications or prioritizing certain application traffic, in search of an optimal use of the available bandwidth. The proliferation of new applications and new techniques in recent years has made it more difficult to use the well-known values assigned by the IANA (Internet Assigned Numbers Authority), such as UDP and TCP ports, to identify the traffic. P2P networks and data encapsulation over HTTP and HTTPS traffic have also increased the need to improve these traffic-analysis techniques.
The aim of this project is to develop a number of techniques that make it possible to classify traffic with more than simple observation of the well-known ports. Several proposals have been created to cover this need. Deep Packet Inspection (DPI) tries to find signatures in the packets by reading the information contained in them, including the payload, looking for patterns that characterize the applications to which the traffic belongs. Machine-learning procedures work with statistical analysis of the flows, trying to build an automatic process that learns from those statistical parameters and calculates the likelihood of a flow pertaining to a certain application. Heuristic techniques, finally, are based on the intuition or knowledge of the researcher about the traffic being analyzed, which can help to characterize it. Specifically, the use of some of the techniques previously mentioned is proposed, in combination with data-mining techniques such as Principal Component Analysis (PCA) and clustering (grouping) of the flows extracted from network traffic captures. An iterative process of trial and error will be needed to configure these data-mining techniques in search of a reliable traffic classification. The perfect result would be one in which the traffic flows of each application are grouped correctly in their own cluster, or in clusters containing groups of applications of a similar nature. To do this, network traffic captures will be created in a controlled environment in which every capture is classified and known to pertain to a specific application. Then, for each capture, all the flows will be extracted. These flows will be used to extract information such as date and arrival time or the IP length of the packets inside them.
This information will then be loaded into a MySQL database, where all the packets defining a flow will be classified and each flow will be assigned to its specific application. All the information obtained from the packets will be used to generate statistical parameters that describe each flow in the best possible way. After that, the data-mining techniques previously mentioned (PCA and clustering) will be applied to these parameters using the RapidMiner software. Finally, the results obtained from the data mining will be compared with the real classification of the flows obtained from the database. A confusion matrix will be used for the comparison, letting us measure the accuracy of the developed classification process.
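The final evaluation step can be sketched as follows. The application names and cluster assignments below are hypothetical toy data, and the matrix is built in plain Python rather than with RapidMiner; the point is only the counting scheme: for each true application label (known because the captures were made in a controlled environment), tally how its flows were distributed across the discovered clusters.

```python
from collections import Counter, defaultdict

# Sketch of confusion-matrix construction for the clustering results:
# rows are the true application labels from the controlled captures,
# columns are the clusters produced by PCA + clustering.

def confusion_matrix(true_labels, cluster_ids):
    """Return {application: Counter({cluster_id: flow_count})}."""
    matrix = defaultdict(Counter)
    for app, cluster in zip(true_labels, cluster_ids):
        matrix[app][cluster] += 1
    return matrix

# Toy data: six flows, their true applications, and assigned clusters.
apps     = ["http", "http", "p2p", "p2p", "p2p", "dns"]
clusters = [0,      0,      1,     1,     0,     2]

m = confusion_matrix(apps, clusters)
# A perfect clustering would put all flows of one application in a
# single cluster; here one "p2p" flow leaked into the "http" cluster.
print(m["p2p"][1])  # 2
print(m["p2p"][0])  # 1
```

From such a matrix, per-application accuracy follows directly by dividing the largest entry of each row by the row total.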

Relevance: 60.00%

Abstract:

This thesis focuses on the development of technologies for human-robot interaction in nuclear fusion environments. The main problem of the nuclear fusion sector lies in the extreme environmental conditions inside the reactor and in the need for equipment to meet very restrictive requirements in order to withstand those levels of radiation, magnetism, ultra-high vacuum, temperature, and so on. Since it is not feasible for humans to carry out tasks directly, remote-handling devices will have to be used for operation and maintenance processes. In the ITER facilities it is mandatory to have a controlled environment of extreme safety, which requires validated standards. The definition and use of protocols is indispensable to govern its correct operation. Focusing on telemanipulation with a high degree of scaling, the need arises to define protocols for open systems that allow interaction between equipment and devices of diverse kinds. In this context, the definition of a Teleoperation Protocol is proposed that allows interconnection between master and slave devices of different types, enabling them to communicate bilaterally with each other and to use different control algorithms according to the task at hand. This protocol and its interconnectivity have been put to the test on the Teleoperation Open Platform (P.A.T., from its Spanish initials), developed and integrated at the ETSII UPM as a tool to test, validate, and carry out telerobotics experiments. This Teleoperation Protocol has now been submitted through AENOR to the ISO Telerobotics group as a valid solution to the existing problem, and it is under review. The design of this protocol links master and slave; however, with the very high radiation levels present in ITER, the controller electronics cannot enter the tokamak.
It is therefore proposed that, by means of minimal and suitably protected electronics, the control signals running through the umbilical cabling from the controller to the base of the robot be multiplexed. This theoretical exercise demonstrates the usefulness and feasibility of this kind of solution for reducing the volume and weight of the umbilical cabling by approximately 90%; to achieve it, specific RadHard-certified electronics must be developed to withstand ITER's enormous radiation levels. For this generic manipulator, and with the help of the Teleoperation Open Platform, an algorithm has been developed that uses a force/torque sensor and an IMU placed on the robot's wrist, suitably protected against radiation, to calculate the forces and inertias produced by the load. This is necessary in order to transmit scaled forces to the operator so that they feel the load being manipulated, and not other forces that may act on the remote slave, as happens with other force-estimation techniques. Since the shielding of the sensors must not be large or heavy, this type of technology will have to be reserved for the maintenance tasks of ITER's scheduled shutdowns, when radiation levels are at their minimum. In addition, so that the operator feels the load force as faithfully as possible, electronics have been developed that, through current control of the motors, allow force control based on the characterization of the master's motors. Furthermore, to increase the operator's perception, experiments were carried out demonstrating that applying multimodal stimuli (visual, auditory, and haptic) increases immersion and task performance, since they directly influence response capacity.
Finally, regarding the operator's visual feedback: in ITER, cameras are placed at strategic locations, whereas humans manipulating objects use binocular vision, constantly changing their point of view to suit the visual needs of each moment during the task. For this reason, a three-dimensional reconstruction of the task space has been carried out from an RGB-D camera-sensor, which provides a mobile virtual binocular point of view from a camera located at a fixed point; this view can be projected on a 3D display device so that the operator can vary the stereoscopic viewpoint according to their preferences. The correct integration of these human-robot interaction technologies in the P.A.T. has allowed them to be validated through tests and experiments, verifying their usefulness for the practical application of telemanipulation with a high degree of scaling in nuclear fusion environments. Abstract This thesis focuses on developing technologies for human-robot interaction in nuclear fusion environments. The main problem of the nuclear fusion sector resides in the extreme environmental conditions existing in the hot cell, leading to very restrictive requirements for the equipment in order to withstand these high levels of radiation, magnetism, ultra-high vacuum, temperature, and so on. Since it is not feasible to carry out tasks directly by humans, we must use remote-handling devices to accomplish operation and maintenance processes. In the ITER facilities it is mandatory to have a controlled environment of extreme safety and security with validated standards. The definition and use of protocols is essential to govern its operation. Focusing on remote handling with a high degree of scaling, protocols must be defined for open systems to allow interaction among different kinds of equipment and several multifunctional devices.
In this context, a Teleoperation Protocol definition enables interconnection between master and slave devices from different typologies, being able to communicate bilaterally one each other and using different control algorithms depending on the task to perform. This protocol and its interconnectivity have been tested in the Teleoperation Open Platform (T.O.P.) that has been developed and integrated in the ETSII UPM as a tool to test, validate and conduct experiments in Telerobotics. Currently, this protocol has been proposed for Teleoperation through AENOR to the ISO Telerobotics group as a valid solution to the existing problem, and it is under review. Master and slave connection has been achieved with this protocol design, however with such high radiation levels in ITER, the controller electronics cannot enter inside the tokamak. Therefore it is proposed a multiplexed electronic board, that through suitable and RadHard protection processes, to transmit control signals through an umbilical cable from the controller to the robot base. In this theoretical exercise the utility and feasibility of using this type of solution reduce the volume and weight of the umbilical wiring approximate 90% less, although it is necessary to develop specific electronic hardware and validate in RadHard qualifications in order to handle huge levels of ITER radiation. Using generic manipulators does not allow to implement regular sensors for force feedback in ITER conditions. In this line of research, an algorithm to calculate the forces and inertia produced by the load has been developed using a force/torque sensor and IMU, both conveniently protected against radiation and placed on the robot wrist. Scaled forces should be transmitted to the operator, feeling load forces but not other undesirable forces in slave system as those resulting from other force estimation techniques. 
Since the sensor shielding cannot be large and heavy, this type of technology must be reserved for the scheduled maintenance periods of ITER, when radiation is at its lowest levels. Moreover, the operator needs to perceive the load forces as accurately as possible, so current-control electronics were developed to perform force control of the master joint motors, based on a proper motor characterization. In addition, experiments were conducted demonstrating that applying multimodal stimuli (visual, auditory and haptic) increases immersion and task performance, since performance is directly correlated with response time. Finally, regarding the visual feedback to the operator, in ITER it is usual to work with 2D cameras at strategic locations, whereas humans use binocular vision in direct object manipulation, constantly changing the point of view to suit the visual needs of the task. Along this line, a three-dimensional reconstruction of non-structured scenarios has been developed using an RGB-D sensor instead of cameras in the remote environment. A mobile virtual binocular point of view can thus be generated from a camera at a fixed point, projecting stereoscopic images on a 3D display device according to the operator's preferences. The successful integration of these technologies for human-robot interaction in the T.O.P., validated through tests and experiments, verifies their usefulness in the practical application of highly scaled remote handling in nuclear fusion environments.
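The RGB-D virtual-viewpoint idea can be illustrated with a pinhole-camera sketch: back-project the depth image into a point cloud, then re-project it into two virtual eye positions separated by a stereo baseline. All names, intrinsics and the 32 mm half-baseline below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3-D point cloud
    using a simple pinhole model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def project(points, fx, fy, cx, cy, eye_offset):
    """Project points into a virtual pinhole camera shifted by
    eye_offset along x (half the stereo baseline). Returns (u, v)
    pixel coordinates for each point."""
    p = points - np.array([eye_offset, 0.0, 0.0])
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)

# Toy example: a flat wall 2 m away seen by a 4x4 depth sensor.
depth = np.full((4, 4), 2.0)
pts = backproject(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
left = project(pts, 100.0, 100.0, 2.0, 2.0, eye_offset=-0.032)
right = project(pts, 100.0, 100.0, 2.0, 2.0, eye_offset=+0.032)
disparity = left[:, 0] - right[:, 0]   # equals fx * baseline / z
```

Moving the pair of `eye_offset` origins (and, in a full implementation, their orientation) lets the operator steer the virtual stereoscopic viewpoint while the physical RGB-D sensor stays fixed.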

Relevância:

60.00% 60.00%

Publicador:

Resumo:

We used a pale-green maize (Zea mays L.) mutant that fails to accumulate ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) to test the working hypothesis that the regulatory phosphorylation of C4 phosphoenolpyruvate carboxylase (PEPC) by its Ca2+-insensitive protein-serine/threonine kinase (PEPC kinase) in the C4 mesophyll cytosol depends on cross-talk with a functional Calvin cycle in the bundle sheath. Wild-type (W22) and bundle sheath defective2-mutable1 (bsd2-m1) seeds were grown in a controlled environment chamber at 100 to 130 μmol m−2 s−1 photosynthetic photon flux density, and leaf tissue was harvested 11 d after sowing, following exposure to various light intensities. Immunoblot analysis showed no major difference in the amount of polypeptide present for several mesophyll- and bundle-sheath-specific photosynthetic enzymes apart from Rubisco, which was either completely absent or very much reduced in the mutant. Similarly, leaf net CO2-exchange analysis and in vitro radiometric Rubisco assays showed that no appreciable carbon fixation was occurring in the mutant. In contrast, the sensitivity of PEPC to malate inhibition in bsd2-m1 leaves decreased significantly with an increase in light intensity, and there was a concomitant increase in PEPC kinase activity, similar to that seen in wild-type leaf tissue. Thus, although bsd2-m1 mutant plants lack an operative Calvin cycle, light activation of PEPC kinase and its target enzyme are not grossly perturbed.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Carbon dioxide (CO2) has been increasing in atmospheric concentration since the Industrial Revolution. A decreasing number of stomata on leaves of land plants still provides the only morphological evidence that this man-made increase has already affected the biosphere. The current rate of CO2 responsiveness in individual long-lived species cannot be accurately determined from field studies or by controlled-environment experiments. However, the required long-term data sets can be obtained from continuous records of buried leaves from living trees in wetland ecosystems. Fine-resolution analysis of the lifetime leaf record of an individual birch (Betula pendula) indicates a gradual reduction of stomatal frequency as a phenotypic acclimation to CO2 increase. During the past four decades, CO2 increments of 1 part per million by volume resulted in a stomatal density decline of approximately 0.6%. It may be hypothesized that this plastic stomatal frequency response of deciduous tree species has evolved in conjunction with the overall Cenozoic reduction of atmospheric CO2 concentrations.
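As a rough numerical illustration of the reported rate (a stomatal density decline of approximately 0.6% per 1 ppmv rise in CO2), one can project the density change over a given CO2 increase. Treating the per-ppm rate as compounding is an assumption made here for the sketch, and the starting density and CO2 values are likewise illustrative.

```python
def stomatal_density(d0, co2_start, co2_end, decline_per_ppm=0.006):
    """Project stomatal density under the reported ~0.6% decline per
    ppmv of CO2 rise, treated here as compounding per ppm (an
    assumption; the study reports only the approximate per-ppm rate).

    d0        : initial stomatal density (e.g. stomata per mm^2)
    co2_start : initial atmospheric CO2 concentration [ppmv]
    co2_end   : final atmospheric CO2 concentration [ppmv]
    """
    ppm_rise = co2_end - co2_start
    return d0 * (1.0 - decline_per_ppm) ** ppm_rise

# Hypothetical example: a density of 150 stomata/mm^2 at 330 ppmv
# projected to 380 ppmv, i.e. roughly the rise over four decades.
d = stomatal_density(150.0, 330.0, 380.0)
```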

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This work presents the results of the determination, by ICP OES, of toxic and/or potentially toxic elements (Al, Cr, Ni, Cu, Cd, As, Co and Pb) in water-based and organic-solvent-based decorative paints. Sample digestion methods using different acid mixtures in decomposition bombs and in microwave ovens (focused-radiation and cavity systems), as well as a dry-ashing method, were developed and compared. The digestion method using a cavity microwave oven allowed fast and efficient solubilization of all types of paint tested, in less than 35 minutes. The methods showed acceptable values for most elements in analyte spike-recovery tests. The residues remaining after digestion were evaluated by SEM-EDS and did not contain the elements studied, confirming the efficiency of the methodology. Mercury was determined using a Direct Mercury Analyzer (DMA) and showed values between 43.0 ± 4.5 and 188 ± 9 µg kg-1, considered low when compared with the limit of 100 mg kg-1 established in standard NRR 10004 for the disposal of solid waste without special installations. The migration of the elements to the environment after exposure of the paint to "aggressor" agents, such as ultraviolet radiation and humidity, was studied using a Weather-Ometer chamber (accelerated aging). The results were evaluated by SEM-EDS and ICP OES. The SEM micrographs showed a change in the morphology of the polymer subjected to accelerated weathering. The results were not conclusive regarding analyte migration, owing to the low ratio between the degraded and non-degraded masses of the samples. A proposed methodology for the ICP OES evaluation of the paints and of the availability of toxic and potentially toxic elements, based on the leaching of dried samples in a controlled environment, is presented.
Leaching results for Al, Cr, Ni, Cu, Cd, As, Co and Pb with several extractors and different extraction times are shown. The results show that some elements migrate into the solutions studied and that, among the extractors evaluated, acid rain showed the highest leaching potential.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

A conditioning procedure is proposed that allows any selected degree of water saturation, with a homogeneous moisture distribution, to be established in concrete specimens in the least time and with minimum alteration of the specimens. The protocol has the following steps: obtaining basic drying data at 50 °C (water absorption capacity and drying curves); unidirectional drying of the specimens at 50 °C until the target saturation degree is reached; a redistribution phase in closed containers at 50 °C (with measurement of the quasi-equilibrium relative humidities); and storage in controlled environment chambers until and during mass transport tests, if necessary. A water transport model is used to derive transport parameters of the tested materials from the drying data, i.e., relative permeabilities and apparent water diffusion coefficients. The model also allows moisture profiles to be calculated during the isothermal drying and redistribution phases, thus allowing optimization of the redistribution times needed to obtain homogeneous moisture distributions.
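The unidirectional drying step can be sketched with a simple explicit finite-difference solution of one-dimensional moisture diffusion. A constant apparent diffusion coefficient is assumed here for illustration; the paper's model uses moisture-dependent transport parameters, and all numerical values below are hypothetical.

```python
import numpy as np

def drying_profile(w0, D, L, t_end, n=50):
    """Explicit finite-difference sketch of 1-D unidirectional drying.

    Solves dw/dt = D * d2w/dx2 with a fully dried face (w = 0) at
    x = 0 and a sealed, no-flux face at x = L. A constant apparent
    diffusion coefficient D is an illustrative simplification.

    w0    : initial (uniform) water content, e.g. kg water / kg sample
    D     : apparent water diffusion coefficient [m^2/s]
    L     : specimen depth [m];  t_end : drying time [s]
    Returns the water-content profile over depth after t_end.
    """
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / D            # below the explicit stability limit
    w = np.full(n, float(w0))
    for _ in range(int(t_end / dt)):
        w_new = w.copy()
        w_new[0] = 0.0                # drying face held dry
        w_new[1:-1] = w[1:-1] + D * dt / dx**2 * (w[2:] - 2*w[1:-1] + w[:-2])
        w_new[-1] = w_new[-2]         # sealed face: zero moisture gradient
        w = w_new
    return w

# Hypothetical 5 cm specimen, D = 1e-9 m^2/s, dried for one week.
profile = drying_profile(w0=0.08, D=1e-9, L=0.05, t_end=7 * 24 * 3600)
```

Running such a profile forward in time (with the dried face then sealed) is essentially what the redistribution-phase optimization does: it estimates how long the closed-container stage must last before the gradient flattens out.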

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Briarwood, Ann Arbor, Michigan, with its graceful fountains and lovely floral plantings, is a regional retail development offering more than 100 stores in a comfortable controlled environment at I-94 and State Street.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Briarwood, Ann Arbor, Michigan, is a regional retail development which offers more than 100 stores and a friendly, comfortable controlled environment at I-94 and State Street.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Briarwood - Ann Arbor, Michigan, is a regional retail development which offers more than 100 stores & a friendly, comfort-controlled environment at I-94 and State Street.