935 results for low-energy ion implantation


Relevance: 100.00%

Abstract:

The brown alga Ascophyllum nodosum is a dominant rocky intertidal organism throughout much of the North Atlantic Ocean, yet its inability to colonize exposed or denuded shores is well recognized. Our experimental data show that wave action is a major source of mortality for recently settled zygotes. Artificially recruited zygotes consistently exhibited a Type IV survivorship curve in the presence of moving water. As few as 10, and often only 1, relatively low-energy waves removed 85 to 99% of recently settled zygotes. Increasing the setting time allowed for attachment of zygotes (prior to disturbance from water movement) had a positive effect on survival. However, survival was significantly lower at high densities, and decreased at long (24 h) setting times, probably as a result of bacteria on the surface of the zygotes. Spatial refuges provided significant protection from gentle water movement but relatively little protection from waves.
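The per-wave mortality reported above implies a very steep decline under repeated wave impacts. As a hedged illustration (a simple independent-waves model assumed here for arithmetic, not a model from the study), survival after n waves that each remove a fraction m of zygotes is (1 - m)^n:

```python
def survival(m, n):
    """Fraction of settled zygotes remaining after n waves,
    assuming each wave independently removes a fraction m."""
    return (1 - m) ** n

# At the reported lower bound of 85% removal per wave, a single wave
# already leaves only ~15% of zygotes; ten such waves leave essentially none.
one_wave = survival(0.85, 1)    # ~0.15
ten_waves = survival(0.85, 10)  # < 1e-8
```

This makes the abstract's point quantitative: even the gentlest reported waves make persistence of unprotected zygotes vanishingly unlikely after a handful of impacts.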

Relevance: 100.00%

Abstract:

To address concerns expressed about the possible effect of drilling mud discharges on shallow, low-energy estuarine ecosystems, a 12-month study was designed to detect alterations in water quality and sediment geochemistry. Each drilling mud used in the study, and sediments from the study site, were analyzed in the laboratory for chemical and physical characteristics. Potential water quality impacts were simulated by the EPA-COE elutriation test procedure. Mud toxicity was measured by acute and chronic bioassays with Mysidopsis bahia, Mercenaria mercenaria, and Nereis virens. For the field study, a relatively pristine, shallow (1.2 m) estuary (Christmas Bay, TX) without any drilling activity for the previous 30 years was chosen as the study site. After a three-month baseline study, three stations were selected. Station 1 was an external control. At each treatment station (2, 3), mesocosms were constructed to enclose a 3.5 m³ water column. Each treatment station also included an internal control site. Each in situ mesocosm, except the controls, was successively dosed at a mesocosm-specific dose (1:100, 1:1,000, or 1:10,000 v/v) with four field-collected drilling muds (spud, nondispersed, lightly-treated, and heavily-treated lignosulfonate) in sequential order over 1.5 months. Twenty-four hours after each dose, water exchange was allowed until the next treatment. Station 3 was destroyed by a winter storm. After the last treatment, the enclosures were removed and the remaining sites were monitored for 6 months.
One additional site was similarly dosed (1:100 v/v) with clean dredged sediment from Christmas Bay for comparison between dredged sediments and drilling muds. Results of the analysis of the water samples and field measurements showed that water quality was impacted during the discharges, primarily at the highest dose (1:100 v/v), but that elevated levels of C, Cr (T, F), Cr³⁺ (T, F), N, Pb, and Zn returned to ambient levels before the end of the 24-hour exposure period or immediately after water exchange was allowed (Al, Ba (T), Chlorophyll ABC, SS, %T). Barium, from the barite, was used as a geochemical tracer in the sediments to confirm estimated doses by mass-balance calculations. Barium reached a maximum of 166× background levels at the high-dose mesocosm. Barium levels returned to ambient or only slightly elevated levels by the end of the 6-month monitoring period due to sediment deposition, resuspension, and bioturbation. QA/QC results using blind samples consisting of lab standards and spiked samples for both water and sediment matrices were within acceptable coefficients of variation. The study concluded that a minimum dilution of 1:1,000 (v/v), in addition to existing regulatory constraints, would be required to avoid impacts on water quality and sediment geochemistry in a shallow estuarine ecosystem.
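The barium mass-balance check mentioned above can be sketched as follows. All numbers and the helper name `expected_ba` are illustrative assumptions, not values from the study; the idea is simply that the sediment Ba concentration predicted from a known barite load can be compared with the measured value to confirm the nominal v/v dose.

```python
def expected_ba(background_ppm, mud_ba_ppm, mud_mass_g, sediment_mass_g):
    """Predicted Ba concentration (ppm) of surface sediment after a known
    mass of Ba-rich drilling mud is mixed into it (simple two-component
    mass balance; all inputs are hypothetical)."""
    total_ba = background_ppm * sediment_mass_g + mud_ba_ppm * mud_mass_g
    return total_ba / (sediment_mass_g + mud_mass_g)

# Hypothetical numbers: 10 g of mud (100,000 ppm Ba from barite) mixed into
# 990 g of sediment at 100 ppm background predicts ~1,099 ppm, an ~11x
# enrichment; agreement with the measured value confirms the estimated dose.
predicted = expected_ba(100.0, 100000.0, 10.0, 990.0)
```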

Relevance: 100.00%

Abstract:

HANES 1 detailed-sample data were used to operationalize a definition of health in the absence of disease and to describe and compare the characteristics of the normal (healthy) group versus an abnormal (unhealthy) group. Parallel screening gave a 3.8 percent prevalence proportion of physical health, with a female:male ratio of 2:1 and younger ages in the healthy group. Statistically significant Mantel-Haenszel gender-age-adjusted odds ratios (MHOR) were estimated among abnormals for non-migrants (1.53), skilled workers/unemployed (1.76), annual family incomes of less than $10,000 (1.56), having ever smoked (1.58), and having started smoking before 18 years of age (1.58). Significant MHOR were also found for abnormals for health-promoting measures: non-iodized salt use (1.94) and needed dental care (1.91); and for fair to poor perceived health (4.28), perceiving health problems (2.52), and low energy level (1.68). Significant protective effects for much to moderate recreational exercise (MHOR 0.42) and very active to moderate non-recreational activity (MHOR 0.49) were also obtained. Covariance-analysis additive models detected statistically significant higher mean values for abnormals than normals for serum magnesium, hemoglobin, hematocrit, urinary creatinine, and systolic and diastolic blood pressures, and lower values for abnormals than normals for serum iron. No difference was detected for serum cholesterol. Significant non-additive joint effects were found for body mass index. The results suggest positive physical health can be measured with cross-sectional survey data. Gender differentials, and the associations among ecologic, socioeconomic, and hazardous risk factors, health-promoting activities, and physical health, are in general agreement with published findings from morbidity studies. Longitudinal prospective studies are suggested to establish the direction of the associations and to enhance present knowledge of health and its promoting factors.
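The Mantel-Haenszel gender-age-adjusted odds ratios quoted above pool 2×2 tables across strata. A minimal sketch of the standard MH estimator follows; the table values used are invented for illustration, not data from HANES 1:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across strata.
    Each stratum is a 2x2 table (a, b, c, d):
      a = exposed cases,    b = exposed non-cases,
      c = unexposed cases,  d = unexposed non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# With a single stratum the estimator reduces to the ordinary odds ratio:
# (10 * 40) / (20 * 5) = 4.0 for this invented table.
or_single = mantel_haenszel_or([(10, 20, 5, 40)])
```

In the study, stratifying by gender and age in this way adjusts the reported odds ratios for those two confounders.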

Relevance: 100.00%

Abstract:

Based on sedimentological and geochemical data, the Upper Cretaceous and Tertiary sequence at Ocean Drilling Program Site 661 was subdivided into four intervals. Interval I (Campanian) is characterized by sediments deposited below the calcite compensation depth (CCD) inside a high-productivity area under well-oxygenated bottom waters, indicated by the absence of carbonate, the major occurrence of zeolites and opal-CT, and intense bioturbation. Very fine-grained siliciclastic sediments and the lack of any erosional features suggest a low-energy environment. The terrigenous fraction was probably supplied by winds from the nontropical areas of South Africa. Interval II (Maestrichtian) is characterized by high-amplitude variations in carbonate content, indicative of deposition above the CCD superimposed by (climate-controlled) short-term fluctuations of the CCD. The absence of both zeolites and opal-CT implies a position of Site 661 outside high-productivity areas. The first occurrence of higher amounts of kaolinite (especially during the middle Maestrichtian) suggests the onset of terrigenous sediment supply from tropical areas. Interval III (uppermost Cretaceous to early Tertiary) is characterized by the absence of carbonate and zeolites, interpreted as deposition below the CCD and outside an oceanic high-productivity belt. The kaolinite-over-illite dominance suggests terrigenous sediment supply from tropical areas. Interval IV (early Tertiary to Miocene) is characterized by the occurrence of black manganese-rich layers, major nodules/pebbles, and erosional surfaces, indicating phases of extremely reduced sediment accumulation and bottom-current activity. In the lower part of this interval (?Eocene), higher amounts of zeolites occur, which suggests a higher oceanic productivity caused by equatorial upwelling.
The source area of the terrigenous sediment fraction at Site 661 was the tropical region of northwest Africa, as suggested by the kaolinite-over-illite dominance.

Relevance: 100.00%

Abstract:

The sedimentary architecture of polar gravel-beach ridges is presented, and it is shown that ridge-internal geometries reflect past wave-climate conditions. Ground-penetrating radar (GPR) data obtained along the coasts of Potter Peninsula (King George Island) show that beach ridges unconformably overlie the prograding strand plain. Development of individual ridges is seen to result from multiple storms in periods of increased storm-wave impact on the coast. Strand-plain progradation, by contrast, is the result of swash sedimentation at the beach face under persistently calm conditions. The sedimentary architecture of beach ridges in sheltered parts of the coast is characterized by seaward-dipping prograding beds, the result of swash deposition under stormy conditions, or by aggrading beds formed by wave overtopping. By contrast, ridges exposed to high-energy waves are composed of seaward- as well as landward-dipping strata, bundled by numerous erosional unconformities. These erosional unconformities are the result of sediment starvation or partial reworking of ridge material during exceptionally strong storms. The number of individual ridges preserved from a given time interval varies along the coast depending on the morphodynamic setting: sheltered coasts are characterized by numerous small ridges, whereas fewer but larger ridges develop on exposed beaches. The frequency of ridge building ranges from decades in low-energy settings up to 1600 years under high-energy conditions. Beach ridges in the study area cluster at 9.5, 7.5, 5.5, and below 3.5 m above the present-day storm beach. Based on radiocarbon data, this is interpreted to reflect distinct periods of increased storminess and/or shortened annual sea-ice coverage in the area of the South Shetland Islands around c. 4.3, 3.1, and 1.9 ka cal BP, and after 0.65 ka cal BP. The ages further indicate that even ridges at higher elevations can be subject to later reactivation and reworking.
A careful investigation of the stratigraphic architecture is therefore essential prior to sampling for dating purposes.

Relevance: 100.00%

Abstract:

The composition, grain-size distribution, and areal extent of Recent sediments from the Northern Adriatic Sea along the Istrian coast have been studied. Thirty-one stations along four sections perpendicular to the coast were investigated; for comparison, 58 samples from five small bays were also analyzed. Biogenic carbonate sediments are deposited on the shallow North Adriatic shelf off the Istrian coast. Only at a greater distance from the coast are these carbonate sediments mixed with siliceous material brought in by the Alpine rivers Po, Adige, and Brenta. Graphical analysis of grain-size distribution curves shows a sediment composition of normally three, and only in the most seaward area four, major constituents. Constituent 1 represents the washed-in terrestrial material of clay size (Terra Rossa) from the Istrian coastal area. Constituent 2 consists of fine to medium sand. Constituent 3 contains the heterogeneous biogenic material. Crushing by organisms and by sediment eaters reduces the coarse biogenic material into small pieces, generating constituent 2. Between these two constituents there is a dynamic equilibrium. Depending upon where the equilibrium lies between the extremes of production and crushing, the resulting constituent 2 is finer or coarser. Constituent 4 is composed of the fine sandy material from the Alpine rivers. In the most seaward area constituents 2 and 4 are mixed. The total carbonate content of the samples depends on the distance from the coast. In the nearshore area, in high-energy environments, the carbonate content is about 80%. At a distance of 2 to 3 km from the coast there is a carbonate minimum because of the higher rate of sedimentation of clay-sized terrestrial, noncarbonate material in extremely low-energy environments. In an area between 5 and 20 km off the coast, the carbonate content is about 75%.
More than 20 km from the shore, the carbonate content diminishes rapidly to values of about 30% through mixing with siliceous material from the Alpine rivers. The carbonate content of the individual fractions increases with increasing grain size to a maximum of about 90% within the coarse sand fractions. Beyond 20 km from the coast the samples show a carbonate minimum of about 13% within the sand-size classes from 1.5 to 0.7 zeta, through mixing with siliceous material from the Alpine rivers. By means of grain-size distribution and carbonate content, four sediment zones parallel to the coast were distinguished. Genetically they are closely connected with the zonation of the benthic fauna. Two cores show a characteristic vertical distribution of the sediment. The surface zone is inversely graded; that is, the coarse fractions are at the top and the fine fractions are at the bottom. This is the effect of crushing of the biogenic material produced at the surface by predatory organisms and by sediment eaters. It is proposed that at a depth of about 30 cm a chemical solution process begins which leads to reduction of the original sediment from a fine to medium sand to a silt. The carbonate content decreases from about 75% at the surface to 65% at a depth of 100 cm. The increase of the noncarbonate components by 10% corresponds to a decrease in the initial amount of sediment (CaCO3 = 75%) by roughly 30% through solution. With increasing depth the carbonate content of the individual fractions becomes more and more uniform. At the surface the variation is from 30% to 90%; at the bottom it varies only between 50% and 75%. Comparable investigations of small-bay sediments showed a clear dependence of the sediment/faunal zonation on the energy of the environment. The investigations show that the composition and three-dimensional distribution of the Istrian coastal sediments cannot be predicted from only one or a few measurable factors.
Sedimentation and syngenetic changes must be considered as a complex interaction between external factors and the actions of producing and destroying organisms that are in dynamic equilibrium. The results obtained from investigations of these Recent sediments may be applied to the interpretation of fossil sediments only with strong limitations.
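The dissolution arithmetic in the abstract (carbonate content falling from about 75% at the surface to 65% at 100 cm, implying roughly 30% loss of the initial sediment) can be checked with a conserved-noncarbonate mass balance. This sketch only illustrates that back-of-envelope reasoning; it is not code from the study:

```python
def dissolved_fraction(f0, f1):
    """Fraction of the initial sediment mass lost to carbonate dissolution,
    assuming the noncarbonate mass is conserved.
    f0, f1: initial and final carbonate mass fractions (0-1)."""
    # Per unit of initial sediment, the noncarbonate mass (1 - f0) is
    # conserved, so the final total mass is (1 - f0) / (1 - f1).
    final_total = (1 - f0) / (1 - f1)
    return 1 - final_total

# 75% -> 65% carbonate gives 1 - 0.25/0.35 = 2/7, i.e. roughly 30% of the
# initial sediment dissolved, matching the abstract's estimate.
loss = dissolved_fraction(0.75, 0.65)
```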

Relevance: 100.00%

Abstract:

Textural and compositional differences were found between gravity-flow sheets in an open-ocean environment on the northern slope of Little Bahama Bank (Site 628, Pliocene turbidite sequence) and in a closed-basin depositional setting (Site 632, Quaternary turbidite sequence). Mud-supported debris-flow sheets were cored at Site 628. The average mean grain size of the turbidite samples was lower, the mud content was higher, and the sorting was poorer than in comparable samples from Site 632. This reflects the deposition of proximal, low-energy turbidity currents and debris flows on a base-of-slope carbonate apron. No mud-supported debris-flow sheets were deposited in the investigated sediment sequence of Hole 632A. Many larger turbidity currents from around the margins of Exuma Sound may have reached this central-basin setting, depositing sediments that had been transported over longer distances. Planktonic components dominate the grain-size fraction (500-1000 µm) of turbidite samples from Hole 628A, while platform detritus is rare. We interpret this as resulting from the erosion and reworking of a large area of open-ocean slope sediments by gravity flows. In contrast, large amounts of benthic and platform components were found in the turbidite samples of Hole 632A. This may be explained by the fact that the slopes of the enclosed Exuma Sound are steep, and turbidity currents bypassed much of these slopes through pronounced channels, delivering more shallow-water detritus to the deep basin. Erosion of slope sediments, a possible source of planktonic detritus, is assumed to be low. The small slope area relative to the larger surrounding platform areas, and the lower production of planktonic components in the enclosed waters of Exuma Sound, may also explain the observed low number of planktonic components at Hole 632A. Turbidite material from both open-ocean and enclosed-basin environments was deposited at Site 635.

Relevance: 100.00%

Abstract:

In 2008, the City Council of Rivas-Vaciamadrid (Spain) decided to promote the construction of “Rivasecopolis”, a complex of sustainable buildings in which a new prototype of a zero-energy house would become the office of the Energy Agency. Following the City Council's initiative, it was decided to recreate the dwelling prototype “Magic-box”, which entered the 2005 Solar Decathlon competition. The original project was adapted to a new programme of requirements by adding the spaces needed for it to work as an office. A university team designed the adaptation and directed the construction work. The new solar house is conceived as a “testing building”. It will become the space for attending to citizens' questions about energy saving, energy efficiency and sustainable construction, with a small permanent exhibition space in addition to the workplaces serving this informational purpose. At the same time, the building incorporates experimental passive architecture systems and a monitoring and control system. The collected data will be sent to the university to support research on the experimental strategies included in the building. This paper describes and analyzes the experience of transforming a prototype into a real, durable building and the benefits for both the university and citizens of learning about sustainability through the building.

Relevance: 100.00%

Abstract:

This work is a contribution to the research and development of the intermediate band solar cell (IBSC), a high-efficiency photovoltaic concept that combines the advantages of both low- and high-bandgap solar cells. The resemblance to a low-bandgap solar cell comes from the fact that the IBSC hosts an electronic energy band, the intermediate band (IB), within the semiconductor bandgap. This IB allows the collection of sub-bandgap-energy photons by means of two-step photon absorption processes, from the valence band (VB) to the IB and from there to the conduction band (CB). The exploitation of these low-energy photons implies a more efficient use of the solar spectrum. The resemblance of the IBSC to a high-bandgap solar cell is related to the preservation of the voltage: the open-circuit voltage (VOC) of an IBSC is not limited by any of the sub-bandgaps (involving the IB), but only by the fundamental bandgap (defined from the VB to the CB). Nevertheless, the presence of the IB opens new paths for electronic recombination, and the performance of the IBSC is degraded under 1-sun operating conditions. A theoretical argument is presented for the need to use concentrated illumination in order to circumvent the voltage degradation caused by the increased recombination. This theory is supported by experimental verification carried out with our novel characterization technique, consisting of the acquisition of photogenerated current (IL)-VOC pairs under low temperature and concentrated light. Moreover, at this stage of IBSC research, several new IB materials are being engineered, and our novel characterization tool can be very useful for providing feedback on their capability to perform as real IBSCs, verifying or disregarding the fulfillment of the “voltage preservation” principle. An analytical model has also been developed to assess the potential of quantum-dot (QD) IBSCs.
It is based on the calculation of the band alignment of III-V alloyed heterojunctions, the estimation of the confined energy levels in a QD, and the calculation of the detailed-balance efficiency. Several potentially useful QD materials have been identified, such as InAs/AlxGa1-xAs, InAs/GaxIn1-xP, InAs1-yNy/AlAsxSb1-x and InAs1-zNz/Alx[GayIn1-y]1-xP. Finally, a model for the analysis of the series resistance of a concentrator solar cell has also been developed in order to design and fabricate IBSCs adapted to 1,000 suns.
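The "voltage preservation" principle invoked above can be stated compactly. Writing $E_H$ and $E_L$ for the two sub-bandgaps into which the IB divides the fundamental gap (a common labeling convention in the IBSC literature, assumed here), two-step absorption collects photons of both sub-gap energies while the voltage remains bounded only by the full gap:

```latex
E_G = E_H + E_L, \qquad
\underbrace{h\nu_1 \ge E_L}_{\mathrm{VB} \to \mathrm{IB}}, \quad
\underbrace{h\nu_2 \ge E_H}_{\mathrm{IB} \to \mathrm{CB}}, \qquad
e\,V_{OC} \le E_G \;\;(\text{not } E_H \text{ or } E_L)
```

Verifying that measured IL-VOC pairs respect the $e\,V_{OC} \le E_G$ bound, rather than saturating at a sub-gap, is what the characterization technique described above tests.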

Relevance: 100.00%

Abstract:

Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, and power consumption. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s. These figures outperformed all previous works by the time it was first published and still, by the time of the publication of this thesis, they outperform all previous implementations in the same technology node. Concerning the accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature inferring technique is proposed. In this case, we also rely on the thermal dependence of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage). This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature (1.17 °C 3σ error considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor.
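The ratio-based inference can be illustrated with a small numerical sketch. The barrier model, gate voltages, and process factor below are invented for illustration; the point is only that a multiplicative process term cancels when two discharge times of the same transistor are divided:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def discharge_time(temp_c, vg, process=1.0):
    """Hypothetical discharge time through a leakage transistor.
    'process' models die-to-die variation as a multiplicative factor,
    and vg lowers the effective barrier (all constants illustrative)."""
    temp_k = temp_c + 273.15
    return process * math.exp((0.5 - 0.1 * vg) / (K_B * temp_k))

def ratio_measure(temp_c, process):
    """Two measures that differ only in the gate voltage of the
    discharging transistor; the process factor cancels in the ratio,
    leaving a quantity that depends (near-linearly, over a narrow
    range) on temperature alone."""
    t_a = discharge_time(temp_c, vg=0.0, process=process)
    t_b = discharge_time(temp_c, vg=0.3, process=process)
    return t_a / t_b

# The ratio is identical for fast and slow process corners.
r_fast = ratio_measure(50, process=0.8)
r_slow = ratio_measure(50, process=1.2)
```

Under this simple model the ratio reduces to exp(ΔE/kT), so any die-wide multiplicative shift in leakage drops out, which is the robustness property the text describes.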
A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; these figures outperform all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of considering different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives.
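The shape of such a simulated annealing placement loop can be sketched as follows. The objective here is a deliberately simplified stand-in (worst-case Manhattan distance from a hot spot to its nearest monitor on a toy grid), not the thesis model with power, sampling and interconnection terms:

```python
import math
import random

def cost(monitors, hot_spots):
    """Worst-case distance from any hot spot to its nearest monitor."""
    return max(min(abs(mx - hx) + abs(my - hy) for mx, my in monitors)
               for hx, hy in hot_spots)

def anneal(hot_spots, k, grid=16, steps=5000, t0=5.0, seed=0):
    """Place k monitors on a grid x grid die by simulated annealing:
    perturb one monitor at a time, always accept improvements, and
    accept worsenings with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    cur = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(k)]
    cur_cost = cost(cur, hot_spots)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = list(cur)
        cand[rng.randrange(k)] = (rng.randrange(grid), rng.randrange(grid))
        c = cost(cand, hot_spots)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
    return cur, cur_cost

hot_spots = [(2, 2), (13, 3), (8, 12)]
placement, err = anneal(hot_spots, k=3)
```

The real algorithm anneals over monitor type, count, position, and sampling rate simultaneously; this sketch only shows the accept/reject core that all those variants share.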
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint just a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and makes it straightforward to obtain an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resource sharing is achieved, producing important savings in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
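The ordering property of time-domain signaling can be modeled in a few lines. This is an assumed toy encoding, not the thesis protocol: each monitor pulses the shared wire after a delay inversely related to its reading, so the hottest value arrives first and the controller sees an already-sorted stream with one event per monitor:

```python
def time_domain_readout(readings, t_max=100.0):
    """readings: {monitor_id: temperature}. Each monitor fires at
    t = t_max - temperature, so higher readings fire earlier; the
    shared wire delivers one event per monitor, in time order."""
    events = [(t_max - temp, mid, temp) for mid, temp in readings.items()]
    return sorted(events)  # the wire naturally serializes by arrival time

readings = {"core0": 71.5, "core1": 64.0, "cache": 58.2}
events = time_domain_readout(readings)
temps = [t for _, _, t in events]
```

Because each monitor contributes a single wire transition, switching activity is minimal, and the controller gets the maximum (usually the value DTM cares about) without sorting anything, which is the resource-sharing benefit the text refers to.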

Relevância:

100.00% 100.00%

Publicador:

Resumo:

GaInP nucleation on Ge(100) often starts with annealing of the Ge(100) substrates under a supply of phosphorus precursors. However, the influence on the Ge surface is not well understood. Here, we studied vicinal Ge(100) surfaces annealed under tertiarybutylphosphine (TBP) supply in metal-organic vapor phase epitaxy (MOVPE) by in situ reflection anisotropy spectroscopy (RAS), X-ray photoelectron spectroscopy (XPS), and low energy electron diffraction (LEED). While XPS reveals a P termination and the presence of carbon on the Ge surface, LEED patterns indicate a disordered surface, probably due to by-products of the TBP pyrolysis. However, the TBP-annealed Ge(100) surface exhibits a characteristic RA spectrum, which is related to the P termination. RAS allows us to control phosphorus desorption in situ as a function of temperature.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

We investigated the preparation of single-domain Ge(100):As surfaces in a metal-organic vapor phase epitaxy reactor. In situ reflection anisotropy spectra (RAS) of vicinal substrates change when arsenic is supplied either by tertiarybutylarsine or by background As4 during annealing. Low energy electron diffraction shows mutually perpendicular orientations of dimers, scanning tunneling microscopy reveals distinct differences in the step structure, and X-ray photoelectron spectroscopy confirms differences in the As coverage of the Ge(100):As samples. Their RAS signals consist of contributions related to As dimer orientation and to step structure, enabling precise in situ control over the preparation of single-domain Ge(100):As surfaces.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management such as video cameras, computationally demanding tasks such as image encoding or robust encryption, and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, the combination of different techniques (extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options) may yield energy results that compete with and improve on the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.
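The HW-versus-MCU trade-off above is essentially a race-to-sleep energy budget, which a back-of-envelope model makes concrete. All power and timing figures below are illustrative assumptions, not measurements from the paper:

```python
def energy_mj(p_active_mw, t_active_s, p_sleep_mw, period_s):
    """Energy (mJ) for one sensing period: active burst plus sleep
    for the remainder of the period (mW x s = mJ)."""
    return p_active_mw * t_active_s + p_sleep_mw * (period_s - t_active_s)

PERIOD = 60.0  # hypothetical duty cycle: one image per minute

# Hypothetical figures: the FPGA accelerator encodes an image in
# 0.2 s at 150 mW; a low-power MCU needs 30 s at 15 mW; both sleep
# at 0.05 mW for the rest of the period.
e_fpga = energy_mj(150.0, 0.2, 0.05, PERIOD)
e_mcu = energy_mj(15.0, 30.0, 0.05, PERIOD)
```

With these numbers the FPGA finishes so quickly that its higher active power is amortized away, illustrating why compute-heavy tasks can favor the HW platform despite its larger instantaneous draw.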

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In recent decades the world has experienced an exponential increase in the use of technological solutions, which has led to the need to measure situations or states of the objects around us. Often it is not possible to wire certain sensors, so this increase in the use of technological solutions has translated into a growing need for wireless sensing in order to perform correct telemetry. At the social level, the growth of the world's population is closely linked to the growing need for technological services, so it is logical to expect that the more inhabitants there are, the more technology will be consumed. The objective of this Final Degree Project is based on the use of several nodes, also called motes, capable of transferring data wirelessly, thus making it possible to build a real application that solves problems generated by the increase in population density. Specifically, the aim is to build a smart parking system for surface parking lots, thereby assisting vehicular management tasks within the framework of Smart Cities. The system is based on the 802.15.4 (ZigBee) communication protocol, whose fundamental characteristics lie in the low energy consumption of the associated hardware components. First, a State of the Art of Wireless Sensor Networks is presented, addressing the architecture, the ZigBee standard and, finally, the XBee components to be used in this project. Next, the algorithms needed for the correct operation of the smart parking system are developed and, finally, a pilot demonstrator of the correct operation of the technology is built.
ABSTRACT In recent decades the world has experienced an exponential increase in the use of technological solutions, which has resulted in the need to measure situations or states of the objects around us. Often, wired sensors cannot be used, so the increase in the use of technological solutions has translated into an increased need for wireless sensors to make correct telemetries. At the social level, the increase in global demographics is closely linked to the increased need for technological services, so it is logical to expect that the more people there are, the more technology will be consumed. The objective of this Final Project is based on the use of various nodes, or so-called motes, capable of performing data transfer in wireless mode, thereby making it possible to build a real application that solves problems generated by the increase in population density. Specifically, the aim is the realization of a smart outdoor parking system, thus helping with vehicular management tasks within the framework of the Smart Cities. The system is based on the communication protocol 802.15.4 (ZigBee), whose main characteristics lie in the low energy consumption associated with the hardware components. First, a State of the Art of Wireless Sensor Networks is presented, addressing the architecture, the ZigBee standard and, finally, the XBee components to be used in this project. Then the necessary algorithms are developed for the proper operation of the intelligent parking system and, finally, a pilot demonstrator validating the whole system is presented.
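The core of such a parking system, once the radio layer delivers mote reports, is a simple fold of occupancy messages into a live map. This is a minimal sketch with an assumed (mote_id, occupied) message format, not the project's actual XBee frame handling:

```python
def update_map(parking, report):
    """Apply one occupancy report from a parking mote.
    report: (mote_id, occupied) where occupied is a bool; later
    reports from the same mote overwrite earlier ones."""
    mote_id, occupied = report
    parking[mote_id] = occupied
    return parking

def free_spaces(parking):
    """List the spots currently reported as free."""
    return [spot for spot, occupied in parking.items() if not occupied]

# A short stream of reports: A2 is later taken, A3 is later vacated.
parking = {}
for msg in [("A1", True), ("A2", False), ("A3", True),
            ("A2", True), ("A3", False)]:
    update_map(parking, msg)
```

In the real system each report would arrive as a ZigBee payload from a mote's occupancy sensor; the aggregation logic at the coordinator stays this simple.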