973 results for Word error rate
Abstract:
This study tested the hypothesis that the response of corals to temperature and pCO2 is consistent between taxa. Juvenile massive Porites spp. and branches of P. rus from the back reef of Moorea were incubated for 1 month under combinations of temperature (29.3 °C and 25.6 °C) and pCO2 (41.6 Pa and 81.5 Pa) at an irradiance of 599 µmol quanta m⁻² s⁻¹. Using microcosms and CO2 gas-mixing technology, treatments were created in a partly nested design (tanks) with two between-plot factors (temperature and pCO2) and one within-plot factor (taxon), with calcification as the dependent variable. pCO2 and temperature independently affected calcification, but the response differed between taxa. Massive Porites spp. was largely unaffected by the treatments, whereas P. rus grew 50% faster at 29.3 °C than at 25.6 °C, and 28% slower at 81.5 Pa than at 41.6 Pa pCO2. A compilation of studies placed the present results in a broader context and tested the hypothesis that calcification for individual coral genera is independent of pH, [HCO3⁻], and [CO3²⁻]. Unlike recent reviews, this analysis was restricted to studies reporting calcification in units that could be converted to nmol CaCO3 cm⁻² h⁻¹. The compilation revealed a high degree of variation in calcification as a function of pH, [HCO3⁻], and [CO3²⁻], and supported three conclusions: (1) studies of the effects of ocean acidification on corals need to pay closer attention to reducing variance in experimental outcomes to achieve stronger synthetic capacity, (2) coral genera respond in dissimilar ways to pH, [HCO3⁻], and [CO3²⁻], and (3) calcification of massive Porites spp. is relatively resistant to short exposures to increased pCO2, similar to that expected within 100 years.
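To make the unit constraint concrete, here is a minimal worked conversion of the kind needed for inclusion in the compilation; the input value and its reporting units are hypothetical examples, not data from the study.

```python
# Worked unit conversion (illustrative input value, not from the paper):
# calcification reported as mg CaCO3 per cm^2 per day is converted to
# nmol CaCO3 per cm^2 per hour via the molar mass of CaCO3.

M_CACO3 = 100.09          # g/mol, molar mass of CaCO3
mg_per_cm2_per_day = 1.2  # example literature value, assumed

nmol_per_cm2_per_h = (mg_per_cm2_per_day / 1000.0  # mg -> g
                      / M_CACO3                    # g -> mol
                      * 1e9                        # mol -> nmol
                      / 24.0)                      # per day -> per hour
print(f"{nmol_per_cm2_per_h:.1f} nmol CaCO3 cm^-2 h^-1")  # ~499.6
```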
Abstract:
Ocean warming and acidification are serious threats to marine life. While each stressor alone has been studied in detail, their combined effects on the outcome of ecological interactions are poorly understood. We measured predation rates on, and predator selectivity between, two closely related species of damselfish exposed to a predatory dottyback. We found that temperature and CO2 interacted synergistically on overall predation rate, but antagonistically on predator selectivity. Notably, elevated CO2 or temperature alone reversed predator selectivity, but the interaction between the two stressors cancelled selectivity. Routine metabolic rates of the two prey showed strong species differences in tolerance to CO2 but not to temperature, and these differences did not correlate with recorded mortality. This highlights the difficulty of linking species-level physiological tolerance to resulting ecological outcomes. This study is the first to document both synergistic and antagonistic effects of elevated CO2 and temperature on a crucial ecological process like predator-prey dynamics.
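As a minimal sketch of how synergistic versus antagonistic interactions are commonly classified, the snippet below compares a combined-treatment response against the additive expectation built from the single-stressor effects; all numbers are invented for illustration and are not the study's data.

```python
# Toy additive-expectation check (illustrative values, assumed units of
# prey consumed per trial), following the usual definition: an interaction
# is synergistic if the combined effect exceeds the sum of the individual
# effects relative to control, antagonistic if it falls short.

control, co2_only, temp_only, combined = 10.0, 14.0, 15.0, 25.0

additive_expectation = control + (co2_only - control) + (temp_only - control)
if combined > additive_expectation:
    label = "synergistic"
elif combined < additive_expectation:
    label = "antagonistic"
else:
    label = "additive"
print(additive_expectation, label)  # 19.0 -> synergistic in this toy case
```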
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The sharp increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. To counter these harmful effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information provided by a monitoring system that measures run-time thermal information across the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis.

This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered by the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width varies with the temperature dependence of the leakage currents: a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor, and the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The resulting structure is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 µm², and low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works when first published and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
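As a rough numerical illustration of this sensing principle (not the thesis circuit; all parameter values are invented), the sketch below models a leakage current that grows exponentially with temperature and shows how taking the logarithm of the discharge-time count linearizes the output:

```python
import numpy as np

# Illustrative leakage-based sensing model (parameters are assumptions,
# not figures from the thesis). Subthreshold leakage grows roughly
# exponentially with temperature T (deg C): I_leak(T) = I0 * exp(a * T).
# A pre-charged node of capacitance C discharges from V0 to a threshold
# Vth, so the pulse width is w(T) ~ C * (V0 - Vth) / I_leak(T), i.e. an
# exponentially decreasing function of T.

I0, a = 1e-12, 0.08           # A and 1/degC, illustrative
C, V0, Vth = 1e-13, 3.3, 1.0  # F, V, V
f_clk = 1e6                   # counter clock, Hz

def pulse_width(T):
    return C * (V0 - Vth) / (I0 * np.exp(a * T))

def log_counter(width):
    # A logarithmic counter outputs ~log2 of the cycle count, which
    # linearizes the exponential width-vs-temperature characteristic.
    return np.log2(width * f_clk)

for T in (25, 50, 75, 100):
    w = pulse_width(T)
    print(f"T={T:3d} degC  width={w:.3e} s  code={log_counter(w):5.2f}")
# The codes decrease linearly with T, with slope -a/ln(2) ~ -0.115 per degC.
```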
The exacerbated process variations that come with recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-measurement technique is proposed. It also relies on the thermal dependence of leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor, namely the gate voltage, is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature, with a 3σ error of 1.17 °C considering process variations and two-point calibration, amply meeting the requirements of DTM policies. The implementation of the sensing part of this technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed, and a completely new standard-cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together yields a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and a tiny area of 0.0016 mm²; this figure outperforms all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.

Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. Previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. As a novelty, quality metrics beyond the number of sensors are introduced: power consumption, sampling frequency, interconnection costs, and the possibility of choosing among different monitor types are also considered. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimum sampling rate. The algorithm is validated with several case studies for the Alpha 21364 processor under different constraint configurations. Compared with previous works in the literature, the model presented here is the most complete.

Finally, the last contribution targets the networking level: given an allocated set of temperature monitors at known positions, the problem of connecting them efficiently in terms of area and power is addressed. The first proposal in this area is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses. This level applies data selectivity to reduce the amount of information sent to the central controller; the idea is that in this kind of network most of the data are useless, since from the controller's viewpoint only a small amount of data, normally the extreme values, is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed, which significantly reduces both the switching activity on the wire and the power consumption of the network. Because the scheme codes the information in the time domain, the monitor readings arrive at the controller already ordered from maximum to minimum. If this signaling is applied to sensors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, yielding important savings in area and power. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
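The following toy model (an assumed simplification for illustration; the real signaling is an analog circuit, not this event simulation) captures why time-domain coding delivers an ordered stream: each monitor fires on the shared wire after a delay that decreases with its reading, so values arrive sorted from hottest to coldest.

```python
import heapq

# Toy single-wire, time-domain signaling model (illustrative readings).
# After a broadcast start pulse, each monitor signals after a delay
# inversely related to its reading, so hotter readings arrive first and
# the controller receives the values already ordered max-to-min.

readings = {"m0": 61.2, "m1": 78.5, "m2": 70.1, "m3": 66.4}  # degC, invented

T_MAX = 128.0
def delay(value):            # hotter reading -> shorter delay
    return T_MAX - value     # arbitrary time units

events = [(delay(v), mid, v) for mid, v in readings.items()]
heapq.heapify(events)
while events:
    t, mid, v = heapq.heappop(events)
    print(f"t={t:5.1f}: {mid} -> {v} degC")
# Arrival order is m1, m2, m3, m0: a sorted stream with no per-monitor
# addressing traffic. The controller can stop listening after the first
# few (extreme) values, which is the data selectivity of the threshing level.
```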
Abstract:
Effects of considering the comminution rate (kc) and the correction of microbial contamination (using 15N techniques) of particles in the rumen on estimates of ruminally undegraded fractions and their intestinal digestibility were examined by generating composite samples (from rumen-incubated residues) representative of the undegraded feed rumen outflow. The study used sunflower meal (SFM) and Italian ryegrass hay (RGH) and three rumen- and duodenum-cannulated wethers fed a 40:60 RGH-to-concentrate diet (75 g DM/kg BW0.75). Transit studies up to the duodenum with Yb-SFM and Eu-RGH marked samples showed higher kc values (per hour) in SFM than in RGH (0.577 vs. 0.0892, p = 0.034), whereas similar values occurred for the rumen passage rate (kp). Estimates of ruminally undegraded fractions and their intestinal digestibility decreased for all tested fractions when kc was considered, and also when applying the microbial correction. Thus, microbially uncorrected, kp-based proportions of intestinally digested undegraded crude protein overestimated the corrected, kc-kp-based values by 39% in SFM (0.146 vs. 0.105) and 761% in RGH (0.373 vs. 0.0433). Results show that both kc and the microbial contamination correction should be considered to obtain accurate in situ estimates in grasses, whereas in protein concentrates not considering kc is an important source of error.
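The direction of the reported bias can be illustrated with a toy two-compartment first-order model (an assumed simplification, not the paper's estimation procedure): if particles must be comminuted (kc) before they can pass out of the rumen (kp), ignoring kc shortens the effective residence time and inflates the estimated escape from degradation.

```python
# Toy first-order illustration (assumed model, not the paper's method):
# particles are comminuted at rate kc (/h), then pass at rate kp (/h),
# while being degraded throughout at rate kd (/h). For sequential
# exponential stages, the fraction of the degradable pool escaping
# degradation is the product of the stage-wise survival probabilities.

def escape_kp_only(kd, kp):
    return kp / (kp + kd)

def escape_kc_kp(kd, kc, kp):
    return (kc / (kc + kd)) * (kp / (kp + kd))

kd = 0.06   # illustrative degradation rate, /h
kp = 0.05   # illustrative passage rate, /h
for kc in (0.577, 0.0892):  # SFM vs. RGH comminution rates from the abstract
    print(kc, escape_kp_only(kd, kp), escape_kc_kp(kd, kc, kp))
# With the fast SFM kc the two estimates are close; with the slow RGH kc
# the kp-only model substantially overestimates escape, mirroring the
# much larger bias reported for the grass.
```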
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order schemes to high-order spectral methods. While most works in the literature rely on fully time-converged solutions on grids of different spacing to perform the estimation, here a solution on a single grid with different polynomial orders is used. Furthermore, the solution is not required to be fully time-converged, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not negligible. It is shown in this work that some of the fundamental assumptions about the behavior of the error, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev collocation method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, owing to the weak formulation, the multidomain discretization, and the discontinuous formulation.

First, the analysis focuses on scalar conservation laws to examine the accuracy of the truncation error estimate. Then, the validity of the analysis is demonstrated for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence with the polynomial order. It is demonstrated that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
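A minimal sketch of the p-based idea on a 1D model problem, assuming a Chebyshev collocation discretization of u' = f (the test problem and orders are illustrative, not from the thesis): the converged low-order solution is injected into the residual of a higher-order discretization, giving an estimate of the low-order truncation error without a second, finer mesh.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

f = lambda x: np.cos(4 * x)                      # u'(x) = f, u(-1) = 0
u_exact = lambda x: (np.sin(4 * x) - np.sin(-4)) / 4

p, P = 8, 16                                     # low and high orders
Dp, xp = cheb(p)
A, b = Dp.copy(), f(xp)
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0     # impose u(-1) = 0 (x[-1] = -1)
u_p = np.linalg.solve(A, b)                      # converged order-p solution

DP, xP = cheb(P)
u_on_P = np.polyval(np.polyfit(xp, u_p, p), xP)  # interpolate u_p to P grid

tau_est = DP @ u_on_P - f(xP)    # estimate: order-P residual of u_p
tau_true = Dp @ u_exact(xp) - f(xp)  # exact truncation error of order p
print(f"max|tau_est|  = {np.abs(tau_est[:-1]).max():.2e}")
print(f"max|tau_true| = {np.abs(tau_true[:-1]).max():.2e}")
# The estimate matches to leading order because the order-P truncation
# error is spectrally smaller than the order-p one.
```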
Abstract:
This paper presents a new methodology for measuring the instantaneous average exhaust mass flow rate in reciprocating internal combustion engines, to be used in determining real driving emissions of light-duty vehicles as part of a Portable Emission Measurement System (PEMS). First, a flow meter, named the MIVECO flow meter, was designed based on a Pitot tube adapted to exhaust gases, which are characterized by moisture and particle content, rapid changes in flow rate and chemical composition, and pulsating and reverse flow at very low engine speeds. Then, an off-line methodology was developed to calculate the instantaneous average flow, considering the "square root error" phenomenon. The paper includes the theoretical fundamentals, the specifications of the developed flow meter, the calibration tests, a description of the proposed off-line methodology, and the results of the validation tests carried out on a chassis dynamometer, which demonstrate the validity of the mass flow meter and of the methodology developed.
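A short numerical illustration (with an invented pulsating pressure signal, not the paper's measurements) of the "square root error" that the off-line methodology must account for: because mass flow scales with the square root of the Pitot differential pressure, averaging the pressure before taking the square root biases the mean flow high, by Jensen's inequality for a concave function.

```python
import numpy as np

# sqrt(mean(dp)) >= mean(sqrt(dp)) for a pulsating dp, so computing the
# flow from time-averaged pressure overestimates the true average flow.

t = np.linspace(0.0, 1.0, 10_000)
dp = 200.0 + 180.0 * np.sin(2 * np.pi * 25 * t)  # Pa, strongly pulsating, invented
K = 1.0                                           # calibration constant, illustrative

flow_true_avg = np.mean(K * np.sqrt(dp))  # average of the instantaneous flow
flow_naive = K * np.sqrt(np.mean(dp))     # flow computed from averaged dp
print(flow_true_avg, flow_naive)          # the naive value is biased high
print(f"{100 * (flow_naive / flow_true_avg - 1):.1f} % overestimation")
```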
Abstract:
The mutagenic effect of low linear energy transfer ionizing radiation is reduced, for a given dose, as the dose rate (DR) is lowered, a phenomenon known as the direct DR effect. Our reanalysis of published data shows that for both somatic and germ-line mutations there is an opposite, inverse DR effect upon reduction from low to very low DR, the overall dependence of induced mutations on DR being parabolic, with a minimum in the range of 0.1 to 1.0 cGy/min (rule 1). This general pattern can be attributed to an optimal induction of error-free DNA repair in a DR region of minimal mutability (the MMDR region). The diminished activation of repair at very low DRs may reflect a low ratio of induced ("signal") to spontaneous background DNA damage ("noise"). Because two common DNA lesions, 8-oxoguanine and thymine glycol, were already known to activate repair in irradiated mammalian cells, we estimated how their rates of production are altered upon radiation exposure in the MMDR region. For these and other abundant lesions (abasic sites and single-strand breaks), the DNA damage rate increment in the MMDR region is in the range of 10% to 100% (rule 2). These estimates suggest a genetically programmed optimization of the response to radiation in the MMDR region.
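A tiny sketch of rule 1's shape (the coefficients are invented; the reanalysis only establishes the parabolic dependence and the location of the minimum):

```python
import numpy as np

# Toy parabolic dose-rate response in log10(DR), DR in cGy/min.
log_dr = np.linspace(-3, 2, 11)
dr_min = np.sqrt(0.1 * 1.0)          # geometric center of the MMDR range
mut = 1.0 + 0.15 * (log_dr - np.log10(dr_min)) ** 2  # relative induction, assumed

for d, m in zip(log_dr, mut):
    print(f"DR = 10^{d:+.1f} cGy/min -> relative mutation induction {m:.2f}")
# Induction rises both above (direct DR effect) and below (inverse DR effect)
# the minimal-mutability window of ~0.1-1.0 cGy/min.
```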
Abstract:
Heart rate variability (HRV) analysis uses time series of the intervals between successive heartbeats to assess the autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is the use of correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies have assessed the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and nonlinear correction methods on HRV signals with induced artifacts, by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals recorded from rats by telemetry were used to generate real, error-free heart rate variability signals. Missing points (beats) were then simulated in these series in different quantities, in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed through the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), the Lomb periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD), measured on each HRV signal with and without artifacts. The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses only the NPI method yields HRV parameters with low error values and few significant differences compared with the values calculated for the same signals without missing points.
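Below is a small synthetic sketch of two of the simpler corrections compared in the study, deletion (DEL) and linear interpolation (LI), scored with the time-domain indices; the RR data are synthetic (not the rat recordings), and the real study additionally evaluates CI, MAW, NPI and the nonlinear indices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Synthetic rat RR series (ms): a slow oscillation plus beat-to-beat noise.
rr = 180 + 6 * np.sin(2 * np.pi * np.arange(n) / 50) + 3 * rng.standard_normal(n)

def indices(x):
    d = np.diff(x)
    # AVNN, SDNN, RMSSD (root mean square of successive differences)
    return x.mean(), x.std(ddof=1), np.sqrt(np.mean(d ** 2))

lost = rng.choice(n, size=n // 10, replace=False)  # 10% missing beats

rr_del = np.delete(rr, lost)                       # DEL: drop the beats

rr_li = rr.copy()                                  # LI: interpolate the gaps
good = np.setdiff1d(np.arange(n), lost)
rr_li[lost] = np.interp(lost, good, rr[good])

for name, x in (("reference", rr), ("DEL", rr_del), ("LI", rr_li)):
    print(name, *(f"{v:.2f}" for v in indices(x)))
# The corrected indices deviate from the reference, and the deviation grows
# with the loss level, which is why the study benchmarks more elaborate
# (e.g. nonlinear predictive) corrections at higher loss levels.
```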
Abstract:
We studied the global and local ℳ-Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy of the complete optical extent of each galaxy (up to 2-3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations; about 3000 individual H II regions have been detected. The spectra cover the wavelength range between [O II] λ3727 and [S II] λ6731, with a signal-to-noise ratio sufficient to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between stellar mass, oxygen abundance, and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than those already reported in the literature (σ_Δlog(O/H) = 0.07 dex); indeed, this dispersion is only slightly higher than the typical error in our oxygen abundances. However, we found no secondary relation with the star-formation rate beyond the one induced by the primary relation of this quantity with the stellar mass. The analysis of our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed, with similar results, for the specific star-formation rate. Our results agree with a scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as that of gas accretion by inflow and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady state, with a behavior similar to that expected from an instantaneous recycling/closed-box model.
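To make the dispersion statement concrete, here is a toy sketch (mock data and an assumed quadratic shape for the ℳ-Z relation, not the CALIFA pipeline) of how a scatter such as σ_Δlog(O/H) = 0.07 dex is measured: fit the relation, then take the standard deviation of the residuals in dex.

```python
import numpy as np

rng = np.random.default_rng(1)
log_m = rng.uniform(9.0, 11.5, 150)                   # log(M*/Msun), mock sample
oh_true = 8.75 - 0.3 * (log_m - 11.5) ** 2 / 6.25     # saturating toy M-Z shape
oh_obs = oh_true + 0.07 * rng.standard_normal(log_m.size)  # 12 + log(O/H)

coef = np.polyfit(log_m, oh_obs, 2)                   # quadratic fit in log M
resid = oh_obs - np.polyval(coef, log_m)
print(f"sigma_Delta log(O/H) = {resid.std(ddof=3):.3f} dex")
# The recovered scatter matches the injected 0.07 dex, up to sampling noise.
```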