837 results for MICROFLUIDIC CHIPS


Relevance:

10.00%

Publisher:

Abstract:

Large organic food falls to the deep sea, such as whale carcasses and wood logs, support the development of reduced, sulfidic niches in an otherwise oxygenated, oligotrophic deep-sea environment. These transient hot-spot ecosystems may aid the dispersal of highly adapted chemosynthetic organisms such as thiotrophic bivalves and siboglinid worms. Here we investigated the biogeochemical and microbiological processes leading to the development of sulfidic niches. Wood colonization experiments were carried out for one year in the vicinity of a cold seep area in the Nile deep-sea fan (Eastern Mediterranean) at a depth of 1690 m. Wood logs were deployed in 2006 during the BIONIL cruise (RV Meteor M70/2 with ROV Quest, Marum, Germany) and sampled in 2007 during the Medeco-2 cruise (RV Pourquoi Pas? with ROV Victor 6000, Ifremer, France). Wood-boring bivalves played a key role in the initial degradation of the wood, the dispersal of wood chips and fecal matter around the wood log, and the provision of colonization surfaces for other organisms. Total oxygen uptake, measured with a ROV-operated benthic chamber module, was higher at the wood (0.5 m away; 25 mmol m-2 d-1) than at a reference site 10 m away (1 mmol m-2 d-1), indicating increased activity of sedimentary communities around the wood falls. Bacterial cell numbers associated with the wood increased substantially from freshly submerged wood to the wood chip/fecal matter layer next to the wood experiments, as determined with Acridine Orange Direct Counts (AODC) and DAPI-stained counts. Microsensor measurements of sulfide, oxygen and pH were conducted ex situ. Sulfide fluxes were higher at the wood experiments than at the reference sites (19 and 32 mmol m-2 d-1 vs. 0 and 16 mmol m-2 d-1, respectively). Sulfate reduction (SR) rates at the wood experiments were determined in ex situ incubations (1.3 and 2.0 mmol m-2 d-1) and fell within the lower range of SR rates previously observed in other chemosynthetic habitats at cold seeps. Wood deposition had no influence on phosphate, silicate and nitrate concentrations, but ammonium concentrations were elevated at the wood chip-sediment boundary layer. Concentrations of dissolved organic carbon were much higher at the wood experiments (wood chip-sediment boundary layer) than at the reference sites, which may indicate that cellulose degradation was highest under anoxic conditions and hence carried out by anaerobic benthic bacteria, e.g. fermenters and sulfate reducers. Our observations demonstrate that, after one year, the presence of wood at the seafloor had led to the creation of sulfidic niches comparable to those observed at whale falls, albeit at lower rates.
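The flux values quoted above come from microsensor profiles. As a rough, hypothetical illustration of how such diffusive fluxes are derived (this is not the authors' code, and the profile values below are invented), Fick's first law can be applied to a near-interface sulfide gradient:

```python
import numpy as np

# Invented microsensor profile (illustrative, not data from the study)
depth_cm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                  # depth in sediment
sulfide_umol_l = np.array([0.0, 400.0, 950.0, 1450.0, 2000.0])  # total sulfide

porosity = 0.8     # assumed sediment porosity
d_sed = 1.0e-5     # assumed sediment diffusion coefficient of HS-, cm^2/s

# Gradient near the interface, in (umol/L) per cm
slope = np.polyfit(depth_cm, sulfide_umol_l, 1)[0]

# 1 umol/L = 1e-9 mol/cm^3, so dC/dz in mol cm^-3 per cm:
dc_dz = slope * 1e-9

# Fick's first law for sediments: J = phi * D * dC/dz
j_mol_cm2_s = porosity * d_sed * dc_dz

# Convert to the units used above: mmol m^-2 d^-1
j_mmol_m2_d = j_mol_cm2_s * 1e3 * 1e4 * 86400
print(f"Diffusive sulfide flux: {j_mmol_m2_d:.1f} mmol m-2 d-1")  # ~7
```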

Relevance:

10.00%

Publisher:

Abstract:

Siliceous sediments and sedimentary rocks occur as chert and silicified chalk, limestone, and claystone in Site 585 lower Miocene to Campanian sediments, with one older occurrence of chert near the Cenomanian/Turonian boundary. The recovered drill breccia in the Miocene to middle Eocene interval is dominated by bright red, orange, yellow, and brown chips and fragments of chert. In early Eocene and older sediments gray silicified limestone and yellowish brown chert fragments predominate. Recovery is poor in cores with chert because chert tends to fracture into smaller pieces that escape the drill and because the hard chert fragments grind away other sediments during rotary drilling. Thin-section and hand-sample studies show complex diagenetic histories of silicification (silica pore infill) and chertification (silica replacement of host rock). Multiple events of silicification can occur in the same rocks, producing chert from silicified limestone. Despite some prior silicification, silicified limestone is porous enough to provide conduits for dissolved silica-charged pore waters. Silicification and chert are more abundant in the coarser parts of the sedimentary section. These factors reflect the importance of porosity and permeability as well as chemical and lithologic controls in the process of silica diagenesis.

Relevance:

10.00%

Publisher:

Abstract:

In many marine biogeographic realms, bioeroding sponges dominate the internal bioerosion of calcareous substrates such as mollusc beds and coral reef framework. They biochemically dissolve part of the carbonate and liberate so-called sponge chips, a process that is expected to be facilitated and accelerated in the more acidic environment inherent to present global change. The bioerosion capacity of the demosponge Cliona celata Grant, 1826 in subfossil oyster shells was assessed with the alkalinity anomaly technique over 4 days of experimental exposure to three levels of carbon dioxide partial pressure (pCO2) at ambient temperature in the cold-temperate waters of Helgoland Island, North Sea. The rate of chemical bioerosion at present-day pCO2 was quantified at 0.08-0.1 kg m-2 yr-1. Chemical bioerosion was positively correlated with increasing pCO2, with rates more than doubling at the carbon dioxide levels predicted for the end of the twenty-first century, clearly confirming that C. celata bioerosion can be expected to be enhanced by progressing ocean acidification (OA). Together with previously published experimental evidence, the present results suggest that OA accelerates sponge bioerosion (1) across latitudes and biogeographic areas, (2) independently of sponge growth form, and (3) for species with and without photosymbionts alike. A general increase in sponge bioerosion with advancing OA can be expected to have a significant impact on global carbonate (re)cycling and may have widespread negative effects, e.g. on the stability of wild and farmed shellfish populations, as well as on calcareous framework builders in tropical and cold-water coral reef ecosystems.
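As a hedged illustration of the alkalinity anomaly technique mentioned above (not the study's actual data or code): dissolving CaCO3 raises total alkalinity (TA) by two equivalents per mole, so the chemically eroded mass follows from the measured TA increase during the incubation. The numbers below are invented, chosen only to land in the reported 0.08-0.1 kg m-2 yr-1 range:

```python
M_CACO3 = 100.09e-3  # kg per mol of CaCO3

def chemical_bioerosion_rate(delta_ta_umol_per_kg, seawater_kg,
                             eroded_area_m2, days):
    """Return the chemical bioerosion rate in kg CaCO3 m^-2 yr^-1."""
    # Each mol of dissolved CaCO3 adds 2 equivalents of alkalinity
    mol_caco3 = delta_ta_umol_per_kg * 1e-6 * seawater_kg / 2.0
    kg_caco3 = mol_caco3 * M_CACO3
    return kg_caco3 / eroded_area_m2 / (days / 365.0)

# Hypothetical 4-day incubation of a sponge-infested oyster shell
rate = chemical_bioerosion_rate(delta_ta_umol_per_kg=30.0,
                                seawater_kg=2.0,
                                eroded_area_m2=0.003,
                                days=4.0)
print(f"{rate:.2f} kg m-2 yr-1")  # -> 0.09
```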

Relevance:

10.00%

Publisher:

Abstract:

In this work we propose a method for cleaving silicon-based photonic chips using a laser micromachining system consisting of an Nd:YVO4 laser emitting at 355 nm in the nanosecond pulse regime and a micropositioning system. The laser makes grooved marks at the desired locations and directions where cleaves are to be initiated; after several processing steps, a crack appears and propagates along the crystallographic planes of the silicon wafer. This allows chips to be cleaved automatically and with high positioning accuracy, and provides polished vertical facets of better quality than those obtained with other cleaving processes, which eases the optical characterization of photonic devices. The method is particularly useful for cleaving small chips, where manual cleaving is hard to perform, and for polymeric waveguides, whose facets are damaged or even destroyed by polishing or manual cleaving. The influence of groove length and processing speed is studied for a variety of silicon chips. An application to cleaving and characterizing sol–gel waveguides is presented; the total amount of light coupled is higher than with any other procedure.

Relevance:

10.00%

Publisher:

Abstract:

This project studies the behavior of a system based on the Texas Instruments CC1110 chip for wireless applications. Devices based on this type of chip are now widespread, given the ever-growing demand for wireless management and control applications. The first part of the project therefore reviews the state of the art in this area, covering embedded operating systems, FPGAs, etc., and gives a brief history of unmanned aerial vehicles (UAVs), the vehicle chosen for the data link. The second part studies the device on a development board, verifying its range with the supplied software. Notably, the board is controlled through low-level programming (C language), which gives great versatility to the applications that can be developed. The third part therefore develops a functional program based on requirements provided by the company collaborating on the project, INDRA. This program is written in Matlab, which is well suited to this type of application because of its versatility and its computational capabilities. Finally, specific tests are carried out for each of the programs, in some cases field tests with vehicles as similar as possible to those of the real operating environment in which the system is expected to be used. The program is accompanied by a highly graphical user manual so that first contact with the system is quick and simple. The project closes with future lines of application for the system, conclusions, a budget, and an annex with the most important source code.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this thesis is to develop a new concept of label-free optical biosensor, based on a combination of vertically interrogated optical techniques and sub-micrometric structures fabricated on silicon chips. The most important feature of the device is its simplicity, both in the optical measurement and in the introduction of the samples into the sensing area, aspects that are often critical in most sensors found in the literature. Each of the four fundamental aspects of biosensor design (photonic design, optical characterization, fabrication, and fluidics/chemical immobilization) is developed in detail in the corresponding chapter.

The first part of the thesis introduces the concept of a biosensor: what it consists of, which types exist, and the most common parameters used to quantify its performance. An analysis of the state of the art follows, focused in particular on label-free optical biosensors, and the biochemical reactions under study (immunoassays) are also introduced.

The second part first describes the optical techniques used in the characterization (reflectometry, ellipsometry, and spectrometry) and the reasons for choosing them. Several designs of the so-called "optofluidic cells", the devices in which the biochemical interaction takes place, are then introduced. Four different devices are presented, together with several methods for the theoretical calculation of their expected optical response. The expected sensitivity of each cell is then calculated, and the fabrication process and fluidic behavior of each are analyzed. Once all critical aspects of the biosensor behavior have been analyzed, the design can be optimized. This is done with a simplified calculation model (the 1.5-D model), which yields parameters such as the sensitivity and the limit of detection for a large number of devices in a relatively short time. Two of the proposed optofluidic cells are chosen for this process.

The final part of the thesis presents the experimental results. First, a cell based on sub-micrometric holes is characterized as a refractive-index sensor using different organic liquids; the experimental results correlate well with the previous theoretical calculations, validating the conceptual model. Finally, an immunoassay is performed on another of the proposed cells (nanometric SU-8 polymer pillars), using bovine serum albumin (BSA) and its antibody (antiBSA). The fabrication of the cell, the functionalization of the surface with the bioreceptors (in this case, BSA), and the biorecognition process are detailed. This immunoassay gives a first estimate of the limit of detection attainable by this type of sensor in a standard immunoassay: 2.3 ng/mL, which is competitive with similar assays found in the literature.

The main conclusion of the thesis is that this type of device can be used as an immunosensor and offers certain advantages over existing ones. These advantages stem, again, from its simplicity, both in the optical measurement and in the introduction of the bioanalytes into the sensing area (simply depositing a drop on the micro/nano-structure). The theoretical calculations performed during optimization further suggest that the performance of the sensor, measured in quantities such as the biological limit of detection, can be substantially improved by packing the pillars more densely, reaching a minimum value of 0.59 ng/mL.
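As a hedged sketch of how such a detection limit is commonly estimated (the thesis does not publish this code; the calibration numbers below are invented and merely tuned to land near the reported 2.3 ng/mL), the usual definition LOD = 3*sigma_blank/sensitivity can be applied to the linear part of a calibration curve:

```python
import numpy as np

# Hypothetical calibration data: antiBSA standards vs. optical signal
conc_ng_ml = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # analyte standards
signal_nm = np.array([0.05, 0.24, 0.50, 2.4, 4.9])      # e.g. spectral shift

# Sensitivity = slope of the calibration curve in its linear range
sensitivity = np.polyfit(conc_ng_ml, signal_nm, 1)[0]   # nm per (ng/mL)

sigma_blank = 0.038  # assumed std. dev. of repeated blank measurements

lod = 3.0 * sigma_blank / sensitivity
print(f"Sensitivity: {sensitivity:.3f} nm/(ng/mL); LOD ~ {lod:.1f} ng/mL")
# -> LOD ~ 2.3 ng/mL
```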

Relevance:

10.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. To fight these effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis, which approaches the matter from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and then left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The resulting structure is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 nm2, and very low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of its first publication and, at the time of publication of this thesis, still outperform all implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, with a 3-sigma error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, together with its tiny area and power overhead, makes it especially suitable for integration into a DTM monitoring system with a collection of on-chip monitors distributed across the chip.

The exacerbated process variations of recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents that discharge a floating node, but now the result is the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations, and its linearity amply fulfills the requirements of DTM policies (a 3-sigma error of 1.17 °C considering process variations and two-point calibration). Implementing the sensing part of this new technique involves several design issues, such as the generation of a process-independent voltage reference, that are analyzed in depth in the thesis. The time-to-digital conversion employs the same digitization structure as the first sensor; for its physical implementation, a completely new standard-cell library targeting low area and power was built from scratch. The complete sensor achieves an ultra-low energy per conversion of 48-640 pJ and a tiny area of 0.0016 mm2, figures that outperform all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is presented.

Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim at maximizing the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics beyond the number of sensors are introduced: power consumption, sampling frequency, interconnection costs, and the possibility of choosing among different monitor types. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, its area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimum sampling rate. The validity of the algorithm is tested on the Alpha 21364 processor under several constraint configurations. Compared with previous works in the literature, the model presented here is the most complete.

Finally, the last contribution targets the network level: given an allocated set of temperature monitors, the problem of connecting them in an area- and power-efficient way is addressed. The first proposal in this area is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed; it significantly reduces both the switching activity on the wire and the power consumption of the network, and the monitors' data arrive at the controller already ordered from maximum to minimum. If this signaling scheme is applied to sensors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, yielding important savings in area and power. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
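A minimal sketch of the ratio-based measurement principle described above, assuming a textbook subthreshold current model (the parameter values are assumptions, not the thesis design values): the discharge time is inversely proportional to the leakage current, I ~ I0*exp((Vgs - Vth)/(n*kT/q)), so the ratio of two discharge times taken at two gate voltages cancels I0, Vth and the node capacitance, leaving only the temperature:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602177e-19   # elementary charge, C

def temperature_from_ratio(t1, t2, vg1, vg2, n=1.3):
    """Infer absolute temperature from two discharge times t1, t2 (s)
    measured with gate voltages vg1 < vg2 (V); n is the assumed
    subthreshold slope factor of the discharging transistor."""
    # t1/t2 = exp((vg2 - vg1) / (n*k*T/q))  =>  solve for T
    return Q_E * (vg2 - vg1) / (n * K_B * math.log(t1 / t2))

# Example: raising the gate by 60 mV shortens the discharge ~5.9x
t_kelvin = temperature_from_ratio(t1=1.18e-3, t2=0.20e-3,
                                  vg1=0.0, vg2=0.060)
print(f"{t_kelvin - 273.15:.1f} degC")  # -> ~28.6 degC
```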

Relevance:

10.00%

Publisher:

Abstract:

Developing a herd localization system able to operate unattended in communication-challenged areas arises from the need to improve current systems in terms of cost, autonomy, or any other feature that a given target group (or users in general) may demand. A network architecture for herd localization is proposed, together with the corresponding hardware and a methodology to assess performance under different operating conditions. The system is designed with its eventual environmental impact in mind, so most nodes are simple, cheap, and kinetically powered by animal movements; neither batteries nor sophisticated processor chips are needed. Other network elements integrating GPS and batteries operate with selectable duty cycles, reducing maintenance duties. The equipment has been tested on Scandinavian reindeer in Lapland, and models of its elements are integrated into a simulator to analyze the applicability of such a localization network to different use cases. Performance indicators (detection frequency, localization accuracy, and delay) are fitted to assess overall performance; relative system costs are also given for a range of deployments.

Relevance:

10.00%

Publisher:

Abstract:

The simulation of complex LoC (Lab-on-a-Chip) devices is a process that requires solving computationally expensive partial differential equations. An interesting alternative uses artificial neural networks to create computationally feasible models based on model order reduction (MOR) techniques. This paper proposes an approach that uses artificial neural networks to design LoC components, treating the artificial neural network topology as an isomorphism of the LoC device topology. The parameters of the trained neural networks are based on the equations for modeling microfluidic circuits, which are analogous to electronic circuits. The neural networks have been trained to behave like AND, OR, and Inverter gates, so the parameters of the trained networks represent the features of LoC devices that behave as those gates. This would mean that LoC devices can compute universally.
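A minimal sketch of the neural-network side of this idea (hypothetical code, not the paper's; the mapping from trained weights to microfluidic parameters is not shown): tiny sigmoid neurons can be trained by gradient descent to reproduce the AND, OR, and NOT truth tables:

```python
import numpy as np

def train_gate(inputs, targets, epochs=2000, lr=0.5):
    """Train a single sigmoid neuron by gradient descent on squared error."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=inputs.shape[1])
    b = 0.0
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(inputs @ w + b)))  # sigmoid output
        grad = (y - targets) * y * (1.0 - y)          # dLoss/dz per sample
        w -= lr * inputs.T @ grad
        b -= lr * grad.sum()
    return w, b

x2 = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for name, t in [("AND", [0, 0, 0, 1]), ("OR", [0, 1, 1, 1])]:
    w, b = train_gate(x2, np.array(t, dtype=float))
    out = (1.0 / (1.0 + np.exp(-(x2 @ w + b))) > 0.5).astype(int)
    print(name, out)  # -> AND [0 0 0 1], OR [0 1 1 1]

x1 = np.array([[0.0], [1.0]])
w, b = train_gate(x1, np.array([1.0, 0.0]))           # Inverter (NOT)
print("NOT", (1.0 / (1.0 + np.exp(-(x1 @ w + b))) > 0.5).astype(int))
```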

Relevance:

10.00%

Publisher:

Abstract:

In SSL (solid-state lighting) general illumination, there is a clear trend toward high-flux packages with higher efficiency and higher CRI, addressed by the use of multiple color chips and phosphors. Such light sources, however, require optics that provide color mixing, both in the near field and the far field. This design problem is especially challenging for collimated luminaires, in which diffusers (which dramatically reduce the brightness) cannot be applied without enlarging the exit aperture too much. In this work we present the first injection-molded prototypes of a novel primary shell-shaped optic with microlenses on both sides that provide Köhler integration. The shell is designed so that, when placed on top of an inhomogeneous multichip Lambertian LED, it creates a highly homogeneous (i.e., spatially and angularly mixed) virtual source, also Lambertian, located in the same position with only a small increase in size (about 10-20%, so the average brightness is similar to that of the source). This shell-mixer device is very versatile and now allows a lens or reflector secondary optic to collimate the light as desired, without color-separation effects. Experimental measurements show an optical efficiency of the shell of 95% and highly homogeneous angular intensity distributions of the collimated beams, in good agreement with ray-tracing simulations.
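A back-of-the-envelope check of the brightness claim (our arithmetic, not the authors'): for a Lambertian source, average luminance scales as flux over emitting area, so a shell of efficiency eta that enlarges the source linearly by a factor s changes the average brightness by eta/s^2:

```python
# Average-brightness estimate for the shell-mixer virtual source,
# assuming Lambertian emission before and after the shell.
eta = 0.95                 # measured optical efficiency of the shell
for s in (1.10, 1.20):     # the quoted 10-20% linear size increase
    print(f"size x{s:.2f}: brightness = {eta / s**2:.2f} of the bare LED")
# -> ~0.79 and ~0.66: the same order as the source, consistent with the
#    abstract's claim that the average brightness remains similar
```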