943 results for Approximate Bayesian computation, Posterior distribution, Quantile distribution, Response time data
Abstract:
There are few in situ studies showing how net community calcification (Gnet) of coral reefs is related to carbonate chemistry, and the studies to date have demonstrated different predicted rates of change. In this study, we measured net community production (Pnet), Gnet, and carbonate chemistry of a reef flat at One Tree Island, Great Barrier Reef. Diurnal pCO2 variability of 289-724 µatm was driven primarily by photosynthesis and respiration. The reef flat was found to be net autotrophic, with daily production of ~35 mmol C/m²/d and net calcification of ~33 mmol C/m²/d. Gnet was strongly related to Pnet, which drove a hysteresis pattern in the relationship between Gnet and aragonite saturation state (Ωar). Although Pnet was the main driver of Gnet, Ωar was still an important factor: 95% of the variance in Gnet could be described by Pnet and Ωar. Based on the observed in situ relationship, Gnet would be expected to reach zero when Ωar is 2.5. It is unknown what proportion of a decline in Gnet would occur through reduced calcification and what proportion through increased dissolution, but the results here support predictions that overall calcium carbonate production will decline in coral reefs as a result of ocean acidification.
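The two-predictor relationship described above (95% of the variance in Gnet explained by Pnet and the aragonite saturation state) can be illustrated with a small ordinary-least-squares sketch. The data and coefficients below are synthetic and purely illustrative, not the study's fitted values:

```python
import numpy as np

# Synthetic illustration: Gnet modeled as a linear function of net
# community production (Pnet) and aragonite saturation state (Omega_ar).
# All coefficients below are invented for demonstration only.
rng = np.random.default_rng(0)
pnet = rng.uniform(-20, 60, 200)        # mmol C/m^2/d
omega = rng.uniform(2.0, 4.5, 200)      # aragonite saturation state
gnet = 0.4 * pnet + 12.0 * (omega - 2.5) + rng.normal(0, 2, 200)

# Ordinary least squares: Gnet ~ a*Pnet + b*Omega_ar + c
X = np.column_stack([pnet, omega, np.ones_like(pnet)])
coef, *_ = np.linalg.lstsq(X, gnet, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((gnet - pred) ** 2) / np.sum((gnet - gnet.mean()) ** 2)

# Omega_ar at which the fitted Gnet crosses zero for the average Pnet
omega_zero = -(coef[0] * pnet.mean() + coef[2]) / coef[1]
print(round(r2, 2), round(omega_zero, 2))
```

The same zero-crossing calculation is how a threshold such as the reported Ωar = 2.5 would be read off an in situ regression.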
Abstract:
Future oceans are predicted to contain less oxygen than at present, because oxygen is less soluble in warmer water and the predicted stratification will reduce mixing. Hypoxia is thus likely to become more widespread in marine environments, and understanding species responses is important for predicting future impacts on biodiversity. This study used a tractable model, the Antarctic clam Laternula elliptica, which can live for 36 years and has a well-characterized ecology and physiology, to understand responses to hypoxia and how the effect varies with age. Younger animals had a higher condition index and higher adenylate energy charge, and transcriptional profiling indicated that they were physically active in their response to hypoxia, whereas older animals were more sedentary, with higher levels of oxidative damage and apoptosis in the gills. These effects could be attributed, in part, to age-related tissue scaling: older animals had proportionally less contractile muscle mass and smaller gills and foot than younger animals, with consequential effects on the whole-animal physiological response. The data here emphasize the importance of including age effects, as large mature individuals appear to be less able to resist hypoxic conditions, and this is the size range that contributes most to future generations. Thus, the increased prevalence of hypoxia in future oceans may have marked effects on benthic organisms' ability to persist, especially for long-lived species.
Abstract:
To identify the properties of taxa sensitive and resistant to ocean acidification (OA), we tested the hypothesis that coral reef calcifiers differ in their sensitivity to OA as predictable outcomes of functional group alliances determined by conspicuous traits. We contrasted functional groups of eight corals and eight calcifying algae defined by morphology in corals and algae, skeletal structure in corals, spatial location of calcification in algae, and growth rate in corals and algae. The responses of calcification to OA were unrelated to morphology and skeletal structure in corals; they were, however, affected by growth rate in corals and algae (fast calcifiers were more sensitive than slow calcifiers), and by the site of calcification and morphology in algae. Species assemblages characterized by fast growth, and for algae, also cell-wall calcification, are likely to be ecological losers in the future ocean. This shift in relative success will affect the relative and absolute species abundances as well as the goods and services provided by coral reefs.
Abstract:
The microbial oxidation of methane controls the emission of the greenhouse gas methane from the ocean floor. However, some seabed structures such as mud volcanoes have leaky microbial methane filters and can be important sources of methane. We investigated the disturbance and recovery of a methanotrophic mud volcano microbiome (Håkon Mosby mud volcano, HMMV; 1250 m water depth) to assess time scales of community succession and function in the natural deep-sea environment. We analyzed 10 surface and 5 subsurface sediment samples across HMMV mud flows, from the most recently discharged subsurface muds towards old consolidated muds, as well as one reference site (REF) located approximately 0.5 km outside the HMMV. Surface samples were obtained in 2003, 2009 and 2010. The surface of the new mud flows at the geographical center was sampled in 2009 and 2010. Around 100 m south of the center, we sampled more consolidated aged muds in 2003 and 2010. Old mud flows were sampled around 300 m southeast and 100 m north of the geographical center in 2003, 2009 and 2010. Surface sediment samples (0-20 cm) were recovered either by TV-guided multicorer or by push cores using the remotely operated vehicle Quest (MARUM, University of Bremen). Subsurface sediments of all zones (>2 m below sea floor) were obtained in 2003 by gravity corer. After recovery, sediments were immediately subsampled in a refrigerated container (0°C) and further processed for biogeochemical analyses or preserved at -20°C for later DNA analyses. Our study shows that freshly erupted muds hosted heterotrophic deep-subsurface communities, which were replaced by surface communities within a few years of exposure. Aerobic methanotrophy was established at the top surface layer within less than a year, followed by anaerobic methanotrophy, sulfate reduction and finally thiotrophy. Our data indicate that it takes decades in cold environments before efficient methanotrophic communities establish to control methane emission.
The observed succession provides insights into the response time of complex deep-sea communities to seafloor disturbances.
Abstract:
Quantitative X-Ray Diffraction (qXRD) analysis of the <2 mm sediment fraction from surface (sea floor) samples, and marine sediment cores that span the last 10-12 cal ka BP, are used to describe spatial and temporal variations in non-clay mineral compositions for an area between Kangerlussuaq Trough and Scoresby Sund (~67°-70°N), East Greenland. Bedrock consists primarily of an early Tertiary alkaline complex with high weight% of pyroxene and plagioclase. Farther inland and to the north, the bedrock is dominantly felsic with a high fraction of quartz and potassium feldspars. Principal Component (PC) analysis of the non-clay sediment compositions indicates the importance of quartz and pyroxene as compositional end members, with an abrupt shift from quartz and k-feldspar dominated sediments north of Scoresby Sund to sediments rich in pyroxene and plagioclase feldspars offshore from the early Tertiary basaltic outcrop. Coarse (<2 mm or <1 mm) ice-rafted sediments are largely absent from the trough sediments between ~8 and 5 cal ka BP, but then increase in the last 4 cal ka BP. Compositional unmixing of the sediments in Grivel Basin and Kangerlussuaq Trough indicate the dominance of local over long distance sediment sources, with pulses of sediment from tidewater glaciers in Kangerlussuaq and Nansen fjords reaching the inner shelf during the Neoglaciation. The change in IRD is more dramatic in the sediment grain-size proxies than in the quartz wt%. Forty to seventy percent of the variance in the quartz records from either side of Denmark Strait is explained by low frequency trends, but the data from the Grivel Basin, East Greenland, are distinctly different, with an approximate 2500 yr periodicity.
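The principal-component step described above can be illustrated with a minimal SVD-based sketch. The mineral weight percents below are synthetic; only the quartz-versus-pyroxene end-member contrast mimics the study design:

```python
import numpy as np

# Illustrative principal-component analysis of non-clay mineral weight
# percents. Samples are synthetic mixtures of two invented end members:
# mix = 1 -> felsic source (quartz/k-feldspar rich),
# mix = 0 -> basaltic source (pyroxene/plagioclase rich).
rng = np.random.default_rng(1)
n = 50
mix = rng.uniform(0, 1, n)
quartz = 40 * mix + rng.normal(0, 2, n)
kspar  = 20 * mix + rng.normal(0, 2, n)
pyrox  = 35 * (1 - mix) + rng.normal(0, 2, n)
plag   = 30 * (1 - mix) + rng.normal(0, 2, n)
X = np.column_stack([quartz, kspar, pyrox, plag])

# Center and run SVD; PC1 should capture the quartz<->pyroxene contrast.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
pc1 = Vt[0]
# Quartz and pyroxene load with opposite signs on PC1 (end members).
print(round(explained[0], 2), np.sign(pc1[0]) != np.sign(pc1[2]))
```

With a single dominant mixing axis, PC1 absorbs most of the compositional variance and its loadings separate the two end members, which is the pattern the abstract describes.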
Abstract:
Ocean acidification, caused by rising concentrations of carbon dioxide (CO2), is widely considered to be a major global threat to marine ecosystems. To investigate the potential effects of ocean acidification on the early life stages of a commercially important fish species, European sea bass (Dicentrarchus labrax), 12 000 larvae were incubated from hatch through metamorphosis under a matrix of two temperatures (17 and 19 °C) and two seawater pCO2 levels (ambient and 1000 µatm) and sampled regularly for 42 days. Calculated daily mortality was significantly affected by both temperature and pCO2, with both increased temperature and elevated pCO2 associated with lower daily mortality, and a significant interaction between these two factors. There was no significant pCO2 effect on larval morphology during this period, but larvae raised at 19 °C possessed significantly larger eyes and lower carbon:nitrogen ratios at the end of the study compared to those raised at 17 °C. Similarly, when the incubation was continued to post-metamorphic (juvenile) animals (day 67-69), fish raised under a combination of 19 °C and 1000 µatm pCO2 were significantly heavier. However, juvenile D. labrax raised under this combination of 19 °C and 1000 µatm pCO2 also exhibited lower aerobic scopes than those incubated at 19 °C and ambient pCO2. Most studies investigating the effects of near-future oceanic conditions on the early life stages of marine fish have used incubations of relatively short duration and suggested that these animals are resilient to ocean acidification. Whilst the increased survival and growth observed in this study supports this view, we conclude that more work is required to investigate whether the differences in juvenile physiology observed here manifest as negative impacts in adult fish.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these networks can transfer data at high speed. The concept of distributed systems emerged as systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety-Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior, and it provides independence from the network protocol by defining a network interface and protocol modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Abstract:
Virtualized infrastructures are a promising way of providing flexible and dynamic computing for resource-consuming tasks. Scientific workflows are one such task, as they need a large amount of computational resources during certain periods of time. To provide the best infrastructure configuration for a workflow, it is necessary to explore as many providers as possible, taking into account different criteria such as quality of service, pricing, response time and network latency. Moreover, each of these new resources must be tuned to provide the tools and dependencies required by each step of the workflow. Working with different infrastructure providers, whether public or private, each using its own concepts and terms, and with a set of heterogeneous applications, requires a framework for integrating all the information about these elements. This work proposes semantic technologies for describing and integrating all the information about the different components of the overall system, together with a set of policies created by the user. Based on this information, a scheduling process generates an infrastructure configuration that defines the set of virtual machines to run and the tools to deploy on them.
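The scheduling step described above can be sketched as a simple policy filter over provider descriptions. The provider names, fields, prices and policies below are hypothetical, not part of the proposed framework:

```python
# Hypothetical sketch of policy-driven scheduling: for a workflow step,
# pick the cheapest provider configuration that satisfies the user's
# tool and latency policies. All values are illustrative only.
providers = [
    {"name": "public-a", "price": 0.12, "latency_ms": 80,
     "tools": {"blast", "samtools"}},
    {"name": "public-b", "price": 0.09, "latency_ms": 160,
     "tools": {"blast"}},
    {"name": "private",  "price": 0.20, "latency_ms": 20,
     "tools": {"blast", "samtools"}},
]

def schedule(step_tools, max_latency_ms):
    """Return the cheapest provider meeting the tool and latency policies."""
    ok = [p for p in providers
          if step_tools <= p["tools"] and p["latency_ms"] <= max_latency_ms]
    return min(ok, key=lambda p: p["price"])["name"] if ok else None

print(schedule({"blast"}, 200))             # relaxed policy: cheapest wins
print(schedule({"blast", "samtools"}, 50))  # tight latency: private cloud
```

In the proposed system this matching would be driven by semantic descriptions rather than hard-coded dictionaries, but the selection logic per step is of this shape.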
Abstract:
We present a fast, highly sensitive, and efficient potentiometric glucose biosensor based on functionalized InN quantum dots (QDs). The InN QDs are grown by molecular beam epitaxy and bio-chemically functionalized through physical adsorption of glucose oxidase (GOD). The GOD-coated InN QD biosensor exhibits an excellent linear electrochemical response to glucose concentration against an Ag/AgCl reference electrode over a wide logarithmic concentration range (1 × 10−5 M to 1 × 10−2 M), with a high sensitivity of 80 mV/decade. It exhibits a fast response time of less than 2 s with good stability and reusability, and shows negligible response to common interferents such as ascorbic acid and uric acid. The fabricated biosensor is an attractive candidate for blood sugar detection in clinical diagnoses.
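The reported 80 mV/decade sensitivity describes a log-linear calibration: the potential changes by 80 mV for each tenfold change in glucose concentration. The sketch below assumes a hypothetical reference offset E0; only the sensitivity and linear range come from the abstract:

```python
import math

# Log-linear potentiometric calibration. The sensitivity is the value
# reported in the abstract; E0 is an arbitrary illustrative offset.
SENSITIVITY_MV_PER_DECADE = 80.0
E0_MV = 100.0  # hypothetical potential at 1 M glucose

def potential_mv(conc_molar):
    """Electrode potential vs Ag/AgCl for a given glucose concentration."""
    return E0_MV + SENSITIVITY_MV_PER_DECADE * math.log10(conc_molar)

def concentration_molar(e_mv):
    """Invert the calibration to recover concentration from potential."""
    return 10 ** ((e_mv - E0_MV) / SENSITIVITY_MV_PER_DECADE)

# The reported linear range (1e-5 M to 1e-2 M) spans 3 decades,
# so the potential swing across it is 3 * 80 = 240 mV.
swing = potential_mv(1e-2) - potential_mv(1e-5)
print(swing)  # 240.0
```

Inverting the calibration, as `concentration_molar` does, is how a measured potential would be converted back to a glucose reading.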
Abstract:
A notable advantage of wireless transmission is a significant reduction and simplification of wiring and harnesses. There are many applications of wireless systems, but on many occasions sensor nodes require a specific housing to protect the electronics from harsh environmental conditions. Information on the dynamic behaviour of WSN and RFID sensors is currently scarce and nonspecific, so the purpose of this study is to evaluate the dynamic behaviour of such sensors. A series of trials was designed and performed covering temperature steps between a cold room (5 °C), room temperature (23 °C) and a heated environment (35 °C). The sensor nodes were: three Crossbow motes; a surface-mounted Nlaza module (with its Sensirion sensor on the motherboard); an aerially mounted Nlaza module (with the Sensirion sensor at the end of a cable); and four RFID tags, Turbo Tag T700 (with and without housing) and 702-B (with and without housing). To assess the dynamic behaviour, a first-order response model is fitted with dedicated optimization tools programmed in Matlab, extracting the time constant (τ) and the corresponding determination coefficient (r²) with respect to the experimental data. The shortest response time (20.9 s) is found for the uncoated T700 tag, whose encapsulated version shows a significantly longer response (107.2 s). High τ values correspond to the Crossbow modules (144.4 s) and the surface-mounted Nlaza module (288.1 s), while the module with the aerially mounted sensor gives a response (42.8 s) close to, though above, that of the uncoated T700. In conclusion, the dynamic response of temperature sensors within wireless and RFID nodes is dramatically influenced by the way they are housed (to protect them from the environment) as well as by the heat released by the node electronics itself; its characterization is essential to allow monitoring of high-rate temperature changes and to certify the cold chain. In addition, the times to rise and to recover differ significantly, the recovery generally being slower than the rise.
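The first-order fitting approach described above can be sketched in a few lines (shown here in Python rather than the Matlab tools the study used). The synthetic trace reuses the reported uncoated-T700 time constant of 20.9 s; the step amplitudes match the cold-room-to-heated-environment trial:

```python
import numpy as np

# First-order step response: a sensor moved from 5 degC to 35 degC follows
#   T(t) = T_end + (T_0 - T_end) * exp(-t / tau).
# We generate a noiseless synthetic trace with tau = 20.9 s and recover
# tau by a log-linear least-squares fit.
T0, T_END, TAU = 5.0, 35.0, 20.9
t = np.linspace(0, 120, 241)
temp = T_END + (T0 - T_END) * np.exp(-t / TAU)

# Linearize: log((T_end - T) / (T_end - T0)) = -t / tau,
# so the slope of a straight-line fit gives -1/tau.
y = np.log((T_END - temp) / (T_END - T0))
slope = np.polyfit(t, y, 1)[0]
tau_est = -1.0 / slope
print(round(tau_est, 1))  # 20.9
```

On real, noisy traces a nonlinear least-squares fit (as the Matlab optimization tools presumably perform) is more robust than this linearization, but the model being fitted is the same.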
Abstract:
In this project, noise analysis techniques are used to study the dynamic response of several temperature sensors, both platinum resistance thermometers (RTDs) and thermocouples. These sensors are essential for the proper functioning of nuclear power plants and therefore need to be monitored to guarantee accurate measurements. Noise analysis techniques are passive: they do not affect plant operation and allow in situ monitoring of the sensors. Since temperature sensors behave as first-order systems, the main parameter to monitor is the response time, which can be obtained for each probe by means of techniques in the frequency domain (spectral analysis) or in the time domain (autoregressive models). Besides response time estimation, the project also includes a statistical characterization of the probes. The goal is to understand the behavior of the sensors and monitor them in order to diagnose faults even at an incipient stage.
Abstract:
Fencing is a combat sport in which all actions are aimed at achieving the objective of the competition, which is to touch the opponent without being touched; for this, fencers make use of all the tools, techniques, tactics and physical conditioning at their disposal. The quality of fencers' actions in competition depends primarily on two of the systems involved in the process: on the one hand, the activation of the nervous system and, on the other, the response of the muscular system. One of the tools recently used to stimulate the neuromuscular system, and which has produced positive results in many research studies, is the vertical vibration platform. We therefore conducted a study with competition fencers using a protocol of acute exposure to vibration, measuring the effect produced by this tool on the neuromuscular system through the changes in simple reaction times, elective response times and movement times, as well as the effects on the efficacy of the fencing touch, analyzed before and after receiving the vibratory stimulus. The study was conducted with national-level competition fencers belonging to the Technification Center of the Fencing Federation of Castilla y León (n = 38), with a mean age of 22 years (SD 9.08). The sample was composed of 12 women and 26 men, by category: cadets (13), juniors (12), and seniors (13). The protocol was performed by each participant on three occasions: an initial one, another after a stimulation load on the vibration platform, and a final phase, after a 10-minute recovery time, to assess the degree of dissipation of the vibratory effect. The stimulus chosen on the vibration platform was a frequency of 50 Hz, for a period of 60 seconds and with an amplitude of 4 mm. The results were analyzed according to the dependent variables: reaction time, elective response time, movement time, accuracy and efficacy. These data were crossed with the independent variables: gender, category, sporting level and years of experience. The purpose of this study was to analyze the effects of an acute mechanical neuromuscular stimulation intervention in a group of competition fencers. The results showed that the mechanical neuromuscular stimulation (MNS) load applied in our study produced a modest effect on the neuromuscular response of the fencers involved in the research. Significant effects of the vibratory stimulus were found in simple reaction time (an 8.1% improvement), elective response time in the standing position (10%), accuracy in the standing position (7%) and efficacy in the standing position (18.5%). Slight differences by gender were also observed, with a greater effect in the female group. It should be emphasized that the specific characteristics of the sample appear to have significantly influenced the results. Finally, the residual effect produced by the stimulation applied in our research in some cases exceeded ten minutes, given that positive effects of the mechanical neuromuscular stimulation were still found in several of the final measurements.
Abstract:
In this work, a methodology is proposed to find the dynamic poles of a capacitive pressure transmitter in order to enhance and extend the online surveillance of this type of sensor based on the response time measurement, by applying noise analysis techniques and the dynamic data system procedure. Several measurements taken from a pressurized water reactor have been analyzed. The methodology proposes an autoregressive fit whose order is determined by the sensor's dynamic poles. Nevertheless, the analyzed signals could not be filtered properly to remove the plant noise, so the plant noise was modeled as an additional pair of complex-conjugate poles. With this methodology we obtained the numerical value of the sensor's second real pole despite its low influence on the sensor's dynamic response. This enables more accurate online sensor surveillance, since previous methods considered only one real pole.
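The link between a fitted autoregressive model and the sensor's dynamic poles can be sketched as follows. A continuous-time pole p sampled every dt seconds appears in the discrete model as a root z = exp(p·dt) of the AR characteristic polynomial, so inverting that map recovers the poles. The two real poles below are hypothetical, not the transmitter's actual values:

```python
import numpy as np

# Hypothetical two-real-pole sensor: a dominant slow pole and a fast
# second pole with low influence on the dynamic response.
dt = 0.1
p_true = np.array([-2.0, -25.0])      # continuous-time poles (1/s)
z = np.exp(p_true * dt)               # corresponding discrete roots

# Ideal AR(2) coefficients for y[k] = a1*y[k-1] + a2*y[k-2] + noise:
# the characteristic polynomial is z^2 - a1*z - a2, with roots z above.
a1, a2 = z[0] + z[1], -z[0] * z[1]

# Recover the poles from the (here ideal) AR coefficients:
roots = np.roots([1.0, -a1, -a2])
p_est = np.sort(np.log(roots.real) / dt)
print(np.round(p_est, 3))
```

In practice the AR coefficients come from a least-squares fit to the sensor noise record, and extra pole pairs (such as the plant-noise pair mentioned above) raise the required model order.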
Abstract:
It is a known fact that noise analysis is a suitable method for sensor performance surveillance. In particular, controlling the response time of a sensor is an efficient way to anticipate failures and to have the opportunity to prevent them. In this work, the response times of several sensors of the Trillo nuclear power plant are estimated by means of noise analysis. The procedure consists of modeling each sensor with autoregressive methods and obtaining the parameter of interest by analyzing the response of the model when a ramp is simulated as the input signal. Core-exit thermocouples and in-core self-powered neutron detectors are the main sensors analyzed, but other plant sensors are studied as well. Since several measurement campaigns have been carried out, it has also been possible to analyze the evolution of the estimated parameters over more than one fuel cycle. Sensitivity studies on the signal sampling frequency and its influence on the estimated response time are also included. Calculations and analysis have been done within a collaboration agreement between the Trillo NPP operator (CNAT) and the School of Mines of Madrid.
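The ramp-simulation step described above can be sketched with a first-order sensor model: for a unit ramp input, the steady-state lag between input and output equals the sensor's response time. The time constant below is a hypothetical thermocouple value, not one of the plant's estimates:

```python
import numpy as np

# First-order sensor discretized as an AR(1)-style recursion. Feeding a
# simulated ramp through the fitted model and reading the steady-state
# input/output lag yields the response time.
dt, tau = 0.01, 4.0                    # hypothetical tau (s)
a = np.exp(-dt / tau)                  # discrete first-order coefficient
t = np.arange(0, 60, dt)
ramp = t.copy()                        # unit-slope ramp input

y = np.zeros_like(t)
for k in range(1, len(t)):
    y[k] = a * y[k - 1] + (1 - a) * ramp[k]

# For a unit ramp, the asymptotic lag equals the response time tau.
lag = ramp[-1] - y[-1]
print(round(lag, 2))
```

With a real fitted autoregressive model the recursion simply has more lagged terms, but the response time is read off the simulated ramp response in the same way.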