886 results for "Type of error"


Relevance: 100.00%

Publisher:

Abstract:

This study proposed a novel statistical method that jointly models multiple outcomes and the missing-data process using item response theory. The method follows the "intent-to-treat" principle in clinical trials and accounts for the correlation between the outcomes and the missing-data process, which may make it well suited to studies of chronic mental disorders. The simulation study demonstrated that when the true model is the proposed model with moderate or strong correlation, ignoring the within-subject correlation may lead to overestimation of the treatment effect and a type I error rate above the specified level. Even when the within-subject correlation is small, the proposed model performs as well as the naïve response model. Thus, the proposed model is robust across correlation settings when the data are generated by the proposed model.
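The type I error inflation described above can be illustrated with a small simulation. The sketch below is not the paper's item-response model: it simply pools correlated within-subject outcomes into a naive two-sample z-test, with hypothetical Gaussian outcomes and illustrative parameter values:

```python
import numpy as np

def type1_error_rate(rho, n_subjects=100, n_outcomes=2, n_sims=2000, seed=0):
    """Estimate the type I error (null: no treatment effect) of a naive
    analysis that treats correlated within-subject outcomes as independent."""
    rng = np.random.default_rng(seed)
    # Within-subject correlation rho between the outcomes of one subject.
    cov = np.full((n_outcomes, n_outcomes), rho) + (1.0 - rho) * np.eye(n_outcomes)
    rejections = 0
    for _ in range(n_sims):
        a = rng.multivariate_normal(np.zeros(n_outcomes), cov, size=n_subjects).ravel()
        b = rng.multivariate_normal(np.zeros(n_outcomes), cov, size=n_subjects).ravel()
        # Naive two-sample z-test on the pooled outcomes, ignoring correlation.
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        rejections += abs((a.mean() - b.mean()) / se) > 1.96
    return rejections / n_sims
```

With rho around 0.6 the empirical rejection rate climbs well above the nominal 5%, while with rho near zero it stays at the nominal level, mirroring the robustness pattern reported above.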


Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect that individual investigational sites enrolling small numbers of patients may have on the overall result. Can the presence of small centers cause an ineffective treatment to appear effective when treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of the 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170 and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and treatment-by-center interaction; another investigates the treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
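A minimal version of such a simulation, under the null of no treatment effect, can be sketched as below. The center sizes, center-effect variance and plain pooled z-test are illustrative assumptions, not the study's actual "sometimes-pool" design:

```python
import numpy as np

def pooled_type1_error(n_sims=2000, seed=1):
    """Type I error of a pooled treatment-vs-placebo comparison across 20
    centers, 12 of them small (3 patients per arm), with no true effect.
    Random center effects shift both arms of a center equally."""
    rng = np.random.default_rng(seed)
    sizes = [3] * 12 + [12] * 8          # hypothetical patients per arm, per center
    rejections = 0
    for _ in range(n_sims):
        treat, ctrl = [], []
        for n in sizes:
            mu = rng.normal(0.0, 0.5)    # center effect, common to both arms
            treat.append(mu + rng.normal(0.0, 1.0, n))
            ctrl.append(mu + rng.normal(0.0, 1.0, n))
        t, c = np.concatenate(treat), np.concatenate(ctrl)
        se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)
        rejections += abs((t.mean() - c.mean()) / se) > 1.96
    return rejections / n_sims
```

Because the center effects shift both arms identically and cancel in the pooled difference, the test stays at or below the nominal 5% level even with many small centers, in line with the finding above.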


In order to examine whether the paleoceanographic nutrient proxies, δ13C and cadmium/calcium in foraminiferal calcite, are well coupled to nutrients in the region of North Atlantic Deep Water formation, we present data from two transects of the Greenland-Iceland-Norwegian Seas. Along Transect A (74.3°N, 18.3°E to 75.0°N, 12.5°W, 15 stations), we measured phosphate and Cd concentrations of modern surface sea water. Along Transect B (64.5°N, 0.7°W to 70.4°N, 18.2°W, 14 stations) we measured Cd/Ca ratios and δ13C of the planktonic foraminifera Neogloboquadrina pachyderma sinistral in core top sediments. Our results indicate that Cd and phosphate both vary with surface water mass and are well correlated along Transect A. Our planktonic foraminiferal δ13C data indicate similar nutrient variation with water mass along Transect B. Our Cd/Ca data hint at the same type of nutrient variability, but interpretations are hampered by low values close to the detection limit of this technique and therefore relatively large error bars. We also measured Cd and phosphate concentrations in water depth profiles at three sites along Transect A and the δ13C of the benthic foraminifera Cibicidoides wuellerstorfi along Transect B. Modern sea water depth profiles along Transect A have nutrient depletions at the surface and then constant values at depths greater than 100 meters. The δ13C of planktonic and benthic foraminifera from Transect B plotted versus depth also reflect surface nutrient depletion and deep nutrient enrichment as seen at Transect A, with a small difference between intermediate and deep waters. Overall we see no evidence for decoupling of Cd/Ca ratio and δ13C in foraminiferal calcite from water column nutrient concentrations along these transects in a region of North Atlantic Deep Water formation.


Based on the investigation of samples recovered during Cruise 25 of the R/V "Akademik Nikolai Strakhov", the character of magmatism was determined in the flank parts of the rift zone in the region of 74°05'N and 73°50'N, where the direction of the rift valley changes from the north-northwest trend of the Knipovich Ridge to the northeast-trending structures of the Mohns Ridge. It was shown that the tholeiitic magmas of this region show all the geochemical characteristics of TOR-2, which is typical of the Mohns Ridge and most oceanic rift zones worldwide, and differ from the basalts of the Knipovich Ridge, which are assigned to a shallower type of tholeiitic magmatism (Na-TOR). The persistent depletion of the magmas in lithophile element contents and radiogenic isotope ratios of Sr, Nd, and Pb reflects the conditions of their formation during the ascent of depleted oceanic mantle, which has occurred without significant complications since the early stages of the formation of the Mohns Ridge.


Peridotites (diopside-bearing harzburgites) found at 13°N of the Mid-Atlantic Ridge fall into two compositional groups. Peridotites P1 are plagioclase-free rocks with minerals of uniform composition and Ca-pyroxene strongly depleted in highly incompatible elements. Peridotites P2 bear evidence of interaction with basic melt: mafic veinlets; wide variations in mineral composition; enrichment of minerals in highly incompatible elements (Na, Zr, and LREE); enrichment of minerals in moderately incompatible elements (Ti, Y, and HREE) from the P1 level to abundances 4-10 times higher toward the contacts with mafic aggregates; and exotic mineral assemblages Cr-spinel + rutile and Cr-spinel + ilmenite in peridotite and pentlandite + rutile in mafic veinlets. Anomalous incompatible-element enrichment of minerals from peridotites P2 occurred at the spinel-plagioclase facies boundary, which corresponds to a pressure of about 0.8-0.9 GPa. Temperature and oxygen fugacity were estimated from spinel-orthopyroxene-olivine equilibria. Peridotites P1, with uniform mineral composition, record a temperature of last complete recrystallization of 940-1050°C and an oxygen fugacity at the FMQ buffer within the calculation error. In peridotites P2, local assemblages have different compositions of coexisting minerals, which reflects repeated partial recrystallization during heating to magmatic temperatures (above 1200°C) and subsequent re-equilibration at temperatures decreasing to 910°C and oxygen fugacities significantly higher than the FMQ buffer (Δlog fO2 = 1.3-1.9). Mafic veins are considered to be a crystallization product of basic melt enriched in Mg and Ni via interaction with peridotite. The geochemical type of melt reconstructed from the equilibrium with Ca-pyroxene is defined as T-MORB, with (La/Sm)_N ~ 1.6 and (Ce/Yb)_N ~ 2.3, which is consistent with the compositional variations of modern basaltic lavas in this segment of the Mid-Atlantic Ridge, including new data on quenched basaltic glasses.


The Central American Volcanic Arc (CAVA) has been the subject of intensive research over the past few years, leading to a variety of distinct models for the origin of CAVA lavas with various source components. We present a new model for the NW Central American Volcanic Arc based on a comprehensive new geochemical data set (major and trace element and Sr-Nd-Pb-Hf-O isotope ratios) of mafic volcanic front (VF), behind the volcanic front (BVF) and back-arc (BA) lava and tephra samples from NW Nicaragua, Honduras, El Salvador and Guatemala. Additionally, we present data on subducting Cocos Plate sediments (from DSDP Leg 67 Sites 495 and 499) and igneous oceanic crust (from DSDP Leg 67 Site 495), and Guatemalan (Chortis Block) granitic and metamorphic continental basement. We observe systematic variations in trace element and isotopic compositions both along and across the arc. The data require at least three different endmembers for the volcanism in NW Central America. (1) The NW Nicaragua VF lavas require an endmember with very high Ba/(La, Th) and U/Th, relatively radiogenic Sr, Nd and Hf but unradiogenic Pb and low δ18O, reflecting a largely serpentinite-derived fluid/hydrous melt flux from the subducting slab into a depleted N-MORB type of mantle wedge. (2) The Guatemala VF and BVF mafic lavas require an enriched endmember with low Ba/(La, Th), U/Th, high δ18O and radiogenic Sr and Pb but unradiogenic Nd and Hf isotope ratios. Correlations of Hf with both Nd and Pb isotopic compositions are not consistent with this endmember being subducted sediments. Granitic samples from the Chiquimula Plutonic Complex in Guatemala have the appropriate isotopic composition to serve as this endmember, but the large amounts of assimilation required to explain the isotope data are not consistent with the basaltic compositions of the volcanic rocks. In addition, mixing regressions on Nd vs. Hf and the Sr and O isotope plots do not go through the data.
Therefore, we propose that this endmember could represent pyroxenites in the lithosphere (mantle and possibly lower crust), derived from parental magmas for the plutonic rocks. (3) The Honduras and Caribbean BA lavas define an isotopically depleted endmember (with unradiogenic Sr but radiogenic Nd, Hf and Pb isotope ratios), having OIB-like major and trace element compositions (e.g. low Ba/(La, Th) and U/Th, high La/Yb). This endmember is possibly derived from melting of young, recycled oceanic crust in the asthenosphere upwelling in the back-arc. Mixing between these three endmember types of magmas can explain the observed systematic geochemical variations along and across the NW Central American Arc.
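Endmember mixing of the kind invoked here is conventionally modeled with concentration-weighted isotope ratios, which is what makes mixing curves hyperbolic rather than linear on ratio-ratio plots. A generic sketch, with purely illustrative concentrations and Sr isotope ratios (not values from this data set):

```python
def mix_ratio(f, c1, r1, c2, r2):
    """Isotope ratio of a two-component mixture: f is the mass fraction of
    component 1, c1/c2 are element concentrations (e.g. Sr in ppm) and
    r1/r2 the isotope ratios. Ratios mix weighted by concentration."""
    return (f * c1 * r1 + (1.0 - f) * c2 * r2) / (f * c1 + (1.0 - f) * c2)

# Hypothetical endmembers: a low-Sr radiogenic component vs. a high-Sr
# unradiogenic one; a 50:50 mixture falls closer to the Sr-rich endmember.
halfway = mix_ratio(0.5, 100.0, 0.7045, 400.0, 0.7025)
```

Because the concentration weighting drags intermediate mixtures toward the concentration-rich component, fitting three such endmembers simultaneously on several isotope systems is what constrains the mixing proportions along and across the arc.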


The AND-2A drillcore (Antarctic Drilling Program - ANDRILL) was successfully completed in late 2007 on the Antarctic continental margin (Southern McMurdo Sound, Ross Sea) with the aim of tracking environmental fluctuations in ice-proximal to shallow-marine settings and of documenting the 20-Ma evolution of the Erebus Volcanic Province. Lava clasts and tephra layers from the AND-2A drillcore were investigated from a petrographic and stratigraphic point of view and analyzed by the 40Ar-39Ar laser technique in order to constrain the age model of the core and to gain information on the style and nature of sediment deposition in the Victoria Land Basin since the Early Miocene. Ten out of 17 samples yielded statistically robust 40Ar-39Ar ages, indicating that the AND-2A drillcore recovered <230 m of Middle Miocene (~128-358 m below sea floor, ~11.5-16.0 Ma) and >780 m of Early Miocene (~358-1093 m below sea floor, ~16.0-20.1 Ma) deposits. Results also highlight a nearly continuous stratigraphic record from at least 358 m below sea floor down-hole, characterized by a mean sedimentation rate of ~19 cm/ka, possible oscillations of no more than a few hundred ka, and a break within ~17.5-18.1 Ma. Comparison with available data from volcanic deposits on land suggests that volcanic rocks within the AND-2A core were supplied from the south, possibly with source areas closer to the drill site for the upper core levels and, from 358 m below sea floor down-hole, with the 'proto-Mount Morning' as the main source.
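The quoted mean sedimentation rate can be sanity-checked from the Early Miocene depth and age brackets given above:

```python
def sed_rate_cm_per_ka(depth_top_m, depth_bot_m, age_top_ma, age_bot_ma):
    """Mean sedimentation rate between two dated levels, in cm/ka."""
    thickness_cm = (depth_bot_m - depth_top_m) * 100.0
    duration_ka = (age_bot_ma - age_top_ma) * 1000.0
    return thickness_cm / duration_ka

# Early Miocene interval of AND-2A: ~358-1093 m below sea floor, ~16.0-20.1 Ma.
rate = sed_rate_cm_per_ka(358.0, 1093.0, 16.0, 20.1)  # about 18 cm/ka
```

The ~18 cm/ka obtained for the Early Miocene interval alone is close to the ~19 cm/ka mean cited for the record below 358 m.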


Here we present the first radiometric age data and a comprehensive geochemical data set (including major and trace element and Sr-Nd-Pb-Hf isotope ratios) for samples from the Hikurangi Plateau basement and seamounts on and adjacent to the plateau obtained during the R/V Sonne 168 cruise, in addition to age and geochemical data from DSDP Site 317 on the Manihiki Plateau. The 40Ar/39Ar age and geochemical data show that the Hikurangi basement lavas (118-96 Ma) have surprisingly similar major and trace element and isotopic characteristics to the Ontong Java Plateau lavas (ca. 120 and 90 Ma), primarily the Kwaimbaita-type composition, whereas the Manihiki DSDP Site 317 lavas (117 Ma) have compositions similar to the Singgalo lavas on the Ontong Java Plateau. Alkalic, incompatible-element-enriched seamount lavas (99-87 Ma and 67 Ma) on the Hikurangi Plateau and adjacent to it (Kiore Seamount), however, were derived from a distinct high time-integrated U/Pb (HIMU)-type mantle source. The seamount lavas are similar in composition to similar-aged alkalic volcanism on New Zealand, indicating a second widespread event from a distinct source beginning ca. 20 Ma after the plateau-forming event. Tholeiitic lavas from two Osbourn seamounts on the abyssal plain adjacent to the northeast Hikurangi Plateau margin have extremely depleted incompatible element compositions, but incompatible element characteristics similar to the Hikurangi and Ontong Java Plateau lavas and enriched isotopic compositions intermediate between normal mid-ocean-ridge basalt (N-MORB) and the plateau basement. These younger (~52 Ma) seamounts may have formed through remelting of mafic cumulate rocks associated with the plateau formation. The similarity in age and geochemistry of the Hikurangi, Ontong Java and Manihiki Plateaus suggests derivation from a common mantle source. We propose that the Greater Ontong Java Event, during which ~1% of the Earth's surface was covered with volcanism, resulted from a thermo-chemical superplume/dome that stalled at the transition zone, similar to but larger than the structure imaged presently beneath the South Pacific superswell. The later alkalic volcanism on the Hikurangi Plateau and the Zealandia micro-continent may have been part of a second large-scale volcanic event that may have also triggered the final breakup stage of Gondwana, which resulted in the separation of Zealandia fragments from West Antarctica.


Pollen and macrofossil evidence for the nature of the vegetation during glacial and interglacial periods in the regions south of the Wisconsinan ice margin is still very scarce. Modern opinions concerning these problems are therefore predominantly derived from geological evidence only or are extrapolated from pollen studies of late Wisconsinan deposits. Now for the first time pollen and macrofossil analyses are available from south-central Illinois covering the Holocene, the entire Wisconsinan, and most probably also Sangamonian and late Illinoian time. The cores studied came from three lakes, which originated as kettle holes in glacial drift of Illinoian age near Vandalia, Fayette County. The Wisconsinan ice sheet approached the sites from the north to within only about 60 km distance. One of the profiles (Pittsburg Basin) probably reaches back to the late Illinoian (zone 1), which was characterized by forests with much Picea. Zone 2, most likely of Sangamonian age, represents a period of species-rich deciduous forests, which must have been similar to the ones that thrive today south and southeast of the prairie peninsula. During the entire Wisconsinan (14C dates ranging from 38,000 to 21,000 BP) thermophilous deciduous trees like Quercus, Carya, and Ulmus occurred in the region, although temporarily accompanied by tree genera with a more northerly modern distribution, such as Picea, which entered and then left south-central Illinois during the Woodfordian. Thus it is evident that arctic climatic conditions did not prevail in the lowlands of south-central Illinois (about 38°30' lat) during the Wisconsinan, even at the time of the maximum glaciation, the Woodfordian. The Wisconsinan was, however, not a period of continuous forest. The pollen assemblages of zone 3 (Altonian) indicate prairie with stands of trees, and in zone 4 the relatively abundant Artemisia pollen indicates the existence of open vegetation and stands of deciduous trees, Picea, and Pinus. True tundra may have existed north of the sites, but if so its pollen rain apparently is masked by pollen from nearby stands of trees. After the disappearance of Pinus and Picea at about 14,000 BP (estimated!), there developed a mosaic of prairies and stands of Quercus, Carya, and other deciduous tree genera (zone 5). This type of vegetation persisted until it was destroyed by cultivation during the 19th and 20th centuries. Major vegetational changes are not indicated in the pollen diagram for the late Wisconsinan and the Holocene. The dating of zones 1 and 2 is problematical because the sediments are beyond the 14C range and because of the lack of stratigraphic evidence. The zones dated as Illinoian and Sangamonian could also represent just a Wisconsinan stadial and interstadial. This possibility, however, seems to be contradicted by the late glacial and interglacial character of the forest vegetation of that time.


The carbonate chemistry of seawater from the Ria Formosa lagoon was experimentally manipulated, by diffusing pure CO2, to attain two reduced pH levels, by -0.3 and -0.6 pH units, relative to unmanipulated seawater. After 84 days of exposure, no differences were detected in terms of growth (somatic or shell) or mortality of juvenile mussels Mytilus galloprovincialis. The naturally elevated total alkalinity of the seawater (~3550 µmol/kg) prevented under-saturation of CaCO3, even under pCO2 values exceeding 4000 µatm, attenuating the detrimental effects on the carbonate supply-side. Even so, variations in shell weight showed that net calcification was reduced under elevated CO2 and reduced pH, although the magnitude and significance of this effect varied among size-classes. Most of the loss of shell material probably occurred as post-deposition dissolution in the internal aragonitic nacre layer. Our results show that, even when reared under extreme levels of CO2-induced acidification, juvenile M. galloprovincialis can continue to calcify and grow in this coastal lagoon environment. The complex responses of bivalves to ocean acidification suggest a large degree of interspecific and intraspecific variability in their sensitivity to this type of perturbation. Further research is needed to assess the generality of these patterns and to disentangle the relative contributions of acclimation to local variations in seawater chemistry and genetic adaptation.


Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the temperature through the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
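The linearization step can be sketched numerically. The exponential pulse-width model and its constants below are illustrative assumptions, not the sensor's measured characteristics:

```python
import math

def pulse_width_ns(temp_c, w0=1e6, k=0.04):
    """Discharge time of the floating node: leakage grows roughly
    exponentially with temperature, so the pulse narrows as the die heats
    up (w0 and k are made-up fitting constants)."""
    return w0 * math.exp(-k * temp_c)

def log_counter_code(width_ns, lsb_ns=1.0):
    """Logarithmic counter: its count grows with the log of the measured
    pulse width, undoing the exponential and leaving a code that is
    linear in temperature."""
    return math.log2(max(width_ns, lsb_ns) / lsb_ns)

# Equal temperature steps produce equal code steps after the log stage.
codes = [log_counter_code(pulse_width_ns(t)) for t in (20.0, 40.0, 60.0, 80.0)]
```

Under this model the counter's output steps are constant per degree, which is why a single logarithmic stage can serve as both digitizer and linearizer.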
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10,250 µm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of the publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, adequate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C, considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique involves several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. To perform the time-to-digital conversion, we employ the same digitization structure as in the first sensor.
A completely new standard-cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors: we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power.
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and makes it straightforward to obtain a list of values ordered from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital converters (TDCs), digitization resources are shared, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
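A minimal sketch of the time-domain idea (the pulse-delay encoding and the normalized codes are assumptions for illustration, not the prototypes' actual circuit behaviour): each monitor pulses the shared wire after a delay that decreases with its measured value, so a single wire carries one pulse per monitor and the controller receives the values already ordered from maximum to minimum.

```python
def time_domain_readout(codes, t_full=100.0):
    """codes: normalized temperature codes in [0, 1], one per monitor.
    Each monitor fires at delay t_full * (1 - code); sorting the pulse
    arrival times yields the monitors ordered hottest-first."""
    pulses = sorted((t_full * (1.0 - c), i) for i, c in enumerate(codes))
    return [(i, codes[i]) for _, i in pulses]

print(time_domain_readout([0.2, 0.9, 0.5]))  # [(1, 0.9), (2, 0.5), (0, 0.2)]
```

This is also why the scheme pairs well with data selectivity: a controller interested only in extremes can stop listening after the first few pulses.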

Relevância:

100.00% 100.00%

Publicador:

Resumo:

It is well known that the evaluation of the influence matrices in the boundary-element method requires the computation of singular integrals. Quadrature formulae exist which are especially tailored to the specific nature of the singularity, i.e. log(x - x0), 1/(x - x0), etc. Clearly the nodes and weights of these formulae vary with the location x0 of the singular point. A drawback of this approach is that a given problem usually includes different types of singularities, and therefore a general-purpose code would have to include many alternative formulae to cater for all possible cases. Recently, several authors [1-3] have suggested a type-independent alternative technique based on the combination of standard Gaussian rules with non-linear co-ordinate transformations. The transformation approach is particularly appealing in connection with the p-adaptive version, where the location of the collocation points varies at each step of the refinement process. The purpose of this paper is to analyse the technique in Reference 3. We show that this technique is asymptotically correct as the number of Gauss points increases. However, the method possesses a 'hidden' source of error that is analysed and can easily be removed.
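The general transformation idea (not the specific scheme of Reference 3) can be illustrated in a few lines: a polynomial substitution x = u^p with p > 1 clusters the Gauss points near an end-point singularity, and its Jacobian damps the singular behaviour, so a standard rule converges much faster. The cubic substitution and the test integral ∫₀¹ log x dx = -1 below are chosen for illustration.

```python
import numpy as np

def gauss01(n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def integrate_plain(f, n):
    """Standard Gauss-Legendre on [0, 1], no special treatment."""
    x, w = gauss01(n)
    return float(np.sum(w * f(x)))

def integrate_transformed(f, n, p=3):
    """Substitution x = u**p (Jacobian p*u**(p-1)) clusters points at x = 0."""
    u, w = gauss01(n)
    return float(np.sum(w * f(u ** p) * p * u ** (p - 1)))

exact = -1.0  # integral of log(x) over [0, 1]
print(abs(integrate_plain(np.log, 8) - exact))        # noticeable error
print(abs(integrate_transformed(np.log, 8) - exact))  # much smaller error
```

The transformed integrand 3u² log(u³) vanishes at u = 0, which is exactly the smoothing effect that lets one type-independent rule handle different singularity types.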

Relevância:

100.00% 100.00%

Publicador:

Resumo:

ATM, SDH or satellite links were used in the last century as the contribution networks of broadcasters. However, the attractive price of IP networks has been changing the infrastructure of these networks over the last decade. Nowadays, IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to errors on the line. In the case of streaming, correction is attempted at the destination, while in file transfer, retransmissions are conducted and a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed and line errors result in a decrease of quality at the destination, in a file copy the difference between coding speed and line speed, as well as transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases, it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is workflow optimization and image quality maximization. For that reason, a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transmission with those of streaming transmission while putting aside the disadvantages of both models. The method is based on two patents and consists of the safe transfer of the headers and the data considered vital for reproduction. The rest of the data is sent by streaming, which makes it possible to carry out recovery operations and error concealment. Using this model, image quality is maximized according to the time window.
In this paper, we first give a brief overview of the broadcasters' requirements and the solutions offered by IP networks. We then focus on a different solution for video file transfer. We take the example of a broadcast centre with mobile units (unidirectional video link) and regional headends (bidirectional link), and we also present a video file transfer method that satisfies the broadcasters' requirements.
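A toy sketch of the split described above (the function names, chunk size and zero-fill concealment are illustrative assumptions; the actual method is defined by the two patents and JPEG2000 codestream semantics): the bytes considered vital for reproduction travel over a reliable channel, the rest is streamed, and any chunk lost on the way is concealed so that the received file keeps its size and stays decodable.

```python
def split_for_transfer(data: bytes, vital_len: int):
    """Vital part (headers and critical data) goes via the reliable channel."""
    return data[:vital_len], data[vital_len:]

def stream_body(body: bytes, chunk: int = 1024, lost=frozenset()):
    """Simulated streaming reception: lost chunks are concealed with
    zero bytes so size and structure are preserved."""
    out = bytearray()
    for n, start in enumerate(range(0, len(body), chunk)):
        piece = body[start:start + chunk]
        out += bytes(len(piece)) if n in lost else piece
    return bytes(out)

def receive(vital: bytes, body: bytes, chunk: int = 1024, lost=frozenset()):
    """Reassemble: reliable headers first, then the (possibly lossy) stream."""
    return vital + stream_body(body, chunk, lost)

data = bytes(range(256)) * 16          # stand-in for a JPEG2000 file
vital, body = split_for_transfer(data, 64)
rx = receive(vital, body, lost={1})    # second streamed chunk was lost
print(len(rx) == len(data), rx[:64] == data[:64])  # True True
```

The headers always arrive intact, so the trade-off between time window and quality is confined to the streamed portion, as in the model above.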

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Two sheep and two goats, fitted with a ruminal cannula, received two diets composed of 30% concentrate and 70% forage, the forage being either alfalfa hay (AL) or grass hay (GR), in a two-period crossover design. The solid and liquid phases of the rumen were sampled from each animal immediately before feeding and 4 h post-feeding. Pellets containing solid-associated bacteria (SAB) and liquid-associated bacteria (LAB) were isolated from the corresponding ruminal phase and composited by time to obtain 2 pellets per animal (one SAB and one LAB) before DNA extraction. Denaturing gradient gel electrophoresis (DGGE) analysis of 16S ribosomal DNA was used to analyze bacterial diversity. A total of 78 and 77 bands were detected in the DGGE gels from sheep and goat samples, respectively. There were 18 bands found only in pellets from AL-fed sheep and 7 found exclusively in samples from sheep fed the GR diet. In goats, 21 bands were found only in animals fed the AL diet and 17 were found exclusively in GR-fed ones. In all animals, feeding the AL diet tended (P < 0.10) to promote greater NB and SI in LAB and SAB pellets compared with the GR diet. The dendrogram generated by the cluster analysis showed that in both animal species all samples can be included in two major clusters. The four SAB pellets within each animal species clustered together, and the four LAB pellets grouped in a different cluster. Moreover, the SAB and LAB clusters each contained two clear subclusters according to forage type. The results show that in all animals bacterial diversity was more markedly affected by the ruminal phase (solid vs. liquid) than by the type of forage in the diet.
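Band-pattern comparisons of this kind are typically based on a similarity coefficient computed over presence/absence vectors before building the dendrogram. A minimal sketch with invented band patterns (not the study's data, and the Dice coefficient is one common choice, not necessarily the one used here) shows how solid- and liquid-phase lanes separate:

```python
def dice(a, b):
    """Dice similarity between two presence/absence band patterns."""
    shared = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * shared / (sum(a) + sum(b))

# Hypothetical DGGE band patterns (1 = band present in that lane)
sab1 = [1, 1, 1, 0, 0, 1]
sab2 = [1, 1, 1, 0, 1, 1]
lab1 = [0, 0, 1, 1, 1, 0]
lab2 = [0, 1, 1, 1, 1, 0]

# Within-phase similarity exceeds between-phase similarity,
# which is what drives the SAB/LAB split in the dendrogram.
print(round(dice(sab1, sab2), 2), round(dice(lab1, lab2), 2),
      round(dice(sab1, lab1), 2))  # 0.89 0.86 0.29
```

Feeding such a pairwise similarity matrix to a hierarchical clustering routine yields the two-cluster structure (SAB vs. LAB) reported above.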

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In this paper, a fuzzy logic controller (FLC) based variable structure control (VSC) is presented. The main objective is to obtain improved performance for highly non-linear, unstable systems. New functions for chattering reduction and error convergence without sacrificing invariance properties are proposed. The main feature of the proposed method is that the switching function is added as an additional fuzzy variable and is introduced, together with the state variables, in the premise part of the fuzzy rules. In this work, a tuning of the well-known weighting-parameters approach is proposed to optimize the local and global approximation and modelling capability of the Takagi-Sugeno (T-S) fuzzy model, to improve the choice of the performance index and to minimize it. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used in control applications. The approach developed here can be considered a generalized version of the T-S method. An inverted pendulum mounted on a cart is chosen to evaluate the robustness, effectiveness and accuracy of the proposed estimation approach in comparison with the original T-S model. Simulation results indicate the potential, simplicity and generality of the estimation method and the robustness of the chattering-reduction algorithm. We also show that the proposed estimation algorithm converges very fast, making it practical to use. The application of the proposed FLC-VSC shows that both alleviation of chattering and robust performance are achieved.
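One widely used chattering-reduction device, shown here in its generic sliding-mode form rather than the paper's fuzzy formulation, replaces the discontinuous sign of the switching function with a saturation inside a thin boundary layer. The gains, the surface slope and the double-integrator plant below are assumptions for illustration only.

```python
def sat(s, phi):
    """Continuous approximation of sign(s) within a boundary layer of width phi."""
    return max(-1.0, min(1.0, s / phi))

def vsc_control(x, dx, lam=2.0, k=5.0, phi=0.1):
    """Smoothed variable-structure control for a second-order plant."""
    s = dx + lam * x            # switching (sliding) surface s = 0
    return -k * sat(s, phi)     # saturation instead of sign(): no chattering

# Double-integrator plant x'' = u, simulated with forward Euler
x, dx, dt = 1.0, 0.0, 0.001
for _ in range(10_000):
    u = vsc_control(x, dx)
    dx += u * dt
    x += dx * dt
print(abs(x) < 0.01)  # True: the state reaches the surface and slides to zero
```

Inside the boundary layer the control law becomes a continuous high-gain feedback, which is exactly the trade-off the paper's fuzzy rules refine: the switching function enters the rule premises so that the smoothing can vary with the state.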