939 results for Network on chip


Relevance:

90.00%

Publisher:

Abstract:

Urine is a preferred specimen for nucleic acid-based detection of sexually transmitted infections (STIs) but represents a challenge for microfluidic devices due to low analyte concentrations. We present an extraction methodology enabling rapid on-chip nucleic acid purification directly from clinically relevant sample volumes of up to 1 ml, followed by detection via PCR amplification.

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Guidelines for red blood cell (RBC) transfusions exist; however, transfusion practices vary among centers. This study aimed to analyze transfusion practices and the impact of patient and institutional characteristics on the indications for RBC transfusions in preterm infants. STUDY DESIGN AND METHODS: RBC transfusion practices were investigated in a multicenter prospective cohort of preterm infants with a birth weight of less than 1500 g born at eight public university neonatal intensive care units of the Brazilian Network on Neonatal Research. Variables associated with any RBC transfusion were analyzed by logistic regression analysis. RESULTS: Of 952 very-low-birth-weight infants, 532 (55.9%) received at least one RBC transfusion. The percentages of transfused neonates were 48.9, 54.5, 56.0, 61.2, 56.3, 47.8, 75.4, and 44.7%, respectively, for Centers 1 through 8. The number of transfusions during the first 28 days of life was higher in Centers 4 and 7 than in the other centers. After 28 days, the number of transfusions decreased, except for Center 7. Multivariate logistic regression analysis showed a higher likelihood of transfusion in infants with late-onset sepsis (odds ratio [OR], 2.8; 95% confidence interval [CI], 1.8-4.4), intraventricular hemorrhage (OR, 9.4; 95% CI, 3.3-26.8), intubation at birth (OR, 1.7; 95% CI, 1.0-2.8), need for an umbilical catheter (OR, 2.4; 95% CI, 1.3-4.4), days on mechanical ventilation (OR, 1.1; 95% CI, 1.0-1.2), oxygen therapy (OR, 1.1; 95% CI, 1.0-1.1), parenteral nutrition (OR, 1.1; 95% CI, 1.0-1.1), and birth center (p < 0.001). CONCLUSIONS: The need for RBC transfusions in very-low-birth-weight preterm infants was associated with clinical conditions and with the birth center. The distribution of the number of transfusions during the hospital stay may be used as a measure of neonatal care quality.
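To make the statistical approach concrete, the following is a minimal sketch of a multivariate logistic regression of the kind described in the abstract, with odds ratios and 95% confidence intervals derived from the fitted coefficients. The data are synthetic and the variable names (any_transfusion, late_onset_sepsis, ivh, birth_center, etc.) are illustrative assumptions, not the study's actual dataset or model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cohort for demonstration only; one row per infant.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "late_onset_sepsis": rng.integers(0, 2, n),
    "ivh": rng.integers(0, 2, n),
    "days_ventilation": rng.poisson(5, n),
    "birth_center": rng.integers(1, 9, n),
})
# Outcome loosely tied to the predictors, purely to make the example run.
logit_p = -1.0 + 1.0 * df.late_onset_sepsis + 2.0 * df.ivh + 0.1 * df.days_ventilation
df["any_transfusion"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multivariate logistic regression with the birth center as a categorical covariate.
result = smf.logit(
    "any_transfusion ~ late_onset_sepsis + ivh + days_ventilation + C(birth_center)",
    data=df,
).fit(disp=0)

# Odds ratios with 95% confidence intervals, the quantities quoted in the abstract.
or_ci = np.exp(result.conf_int())
or_ci["OR"] = np.exp(result.params)
print(or_ci)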

Relevance:

90.00%

Publisher:

Abstract:

This paper presents specific cutting energy measurements as a function of the cutting speed and tool cutting edge geometry. The experimental work was carried out on a vertical CNC machining center with 7,500 rpm spindle rotation and 7.5 kW power. Hardened ASTM H13 steel (50 HRC) was machined at conventional cutting speed and at high-speed cutting (HSC). TiN-coated carbides with seven different chip breaker geometries were applied in dry tests. A special milling tool holder with only one cutting edge was developed, and the machining forces needed to calculate the specific cutting energy were recorded using a piezoelectric 4-component dynamometer. Workpiece roughness and the chip formation process were also evaluated. The results showed that the specific cutting energy decreased by 15.5% when the cutting speed was increased by up to 700%. An increase of 1° in the tool chip breaker chamfer angle led to a reduction in the specific cutting energy of about 13.7% and 28.6% when machining at HSC and at conventional cutting speed, respectively. Furthermore, the workpiece roughness values measured in all test conditions were very low, close to those of typical grinding operations (∼0.20 μm). Probable adiabatic shear occurred in chip segmentation at HSC. Copyright © 2007 by ABCM.
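For reference, specific cutting energy is conventionally computed as cutting power divided by the material removal rate. The sketch below shows this textbook relation in milling; all numerical values (force, speed, depths of cut, feed) are illustrative assumptions, not the measurements reported in the paper.

# Textbook relation: specific cutting energy u = cutting power / material removal rate.
Fc = 250.0          # mean cutting force from the dynamometer [N] (illustrative)
vc = 300.0          # cutting speed [m/min] (illustrative)
ap, ae = 0.2, 5.0   # axial and radial depth of cut [mm] (illustrative)
vf = 1200.0         # feed velocity [mm/min] (illustrative)

power_w = Fc * (vc / 60.0)           # cutting power [W] = N * (m/s)
mrr_mm3_s = ap * ae * (vf / 60.0)    # material removal rate [mm^3/s]
u = power_w / mrr_mm3_s              # specific cutting energy [J/mm^3]
print(f"specific cutting energy ~ {u:.2f} J/mm^3")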

Relevance:

90.00%

Publisher:

Abstract:

Preservation of rivers and water resources is crucial in most environmental policies, and many efforts are made to assess water quality. Environmental monitoring of large river networks is based on measurement stations. Compared to the total length of the river networks, their number is often limited, and there is a need to extend environmental variables that are measured locally to the whole river network. The objective of this paper is to propose several relevant geostatistical models for river modeling. These models use river distance and are based on two contrasting assumptions about dependency along a river network. Inference using maximum likelihood, model selection criteria and prediction by kriging are then developed. We illustrate our approach on two variables that differ in their distributional and spatial characteristics: summer water temperature and nitrate concentration. The data come from 141 to 187 monitoring stations in a network on a large river located in the Northeast of France, more than 5000 km long, which includes the Meuse and Moselle basins. We first evaluated different spatial models and then produced prediction maps and error variance maps for the whole stream network.
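As an illustration of the modeling idea (not the authors' code), the sketch below builds an exponential covariance on a stream-distance matrix and performs ordinary kriging at an unmonitored site, returning both the prediction and its error variance. All distances, observations and covariance parameters are made up.

import numpy as np

# Pairwise river (stream) distances between 4 monitoring stations [km] (made up).
D = np.array([[ 0., 12., 30., 45.],
              [12.,  0., 18., 33.],
              [30., 18.,  0., 15.],
              [45., 33., 15.,  0.]])
z  = np.array([17.5, 18.1, 19.0, 19.6])   # e.g. summer water temperature [degC]
d0 = np.array([ 6.0,  8.0, 24.0, 39.0])   # river distances to the prediction site [km]

sill, range_km = 1.0, 25.0                     # assumed covariance parameters
cov = lambda h: sill * np.exp(-h / range_km)   # exponential model on river distance

# Ordinary kriging system: [C 1; 1' 0] [w; mu] = [c0; 1]
n = len(z)
A = np.ones((n + 1, n + 1))
A[:n, :n] = cov(D)
A[n, n] = 0.0
b = np.append(cov(d0), 1.0)
sol = np.linalg.solve(A, b)
w, mu = sol[:n], sol[n]

print("kriging prediction:", float(w @ z))
print("kriging variance:  ", float(sill - w @ cov(d0) - mu))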

Relevance:

90.00%

Publisher:

Abstract:

Background: Saliva is a key element of interaction between hematophagous mosquitoes and their vertebrate hosts. In addition to allowing a successful blood meal by neutralizing or delaying hemostatic responses, the salivary cocktail is also able to modulate the effector mechanisms of host immune responses, facilitating, in turn, the transmission of several types of microorganisms. Understanding how the mosquito uses its salivary components to circumvent host immunity might help to clarify the mechanisms of transmission of such pathogens and disease establishment. Methods: Flow cytometry was used to evaluate whether increasing concentrations of A. aegypti salivary gland extract (SGE) affect bone marrow-derived DC differentiation and maturation. Lymphocyte proliferation in the presence of SGE was estimated by a colorimetric assay. Western blot and Annexin V staining assays were used to assess apoptosis in these cells. Naïve and memory cells from mosquito-bite-exposed mice or OVA-immunized mice and their respective controls were analyzed by flow cytometry. Results: Concentration-response curves were employed to evaluate A. aegypti SGE effects on DC and lymphocyte biology. DC differentiation from bone marrow precursors, their maturation and function were not directly affected by A. aegypti SGE (concentrations ranging from 2.5 to 40 μg/mL). On the other hand, lymphocytes were very sensitive to the salivary components and died in the presence of A. aegypti SGE, even at concentrations as low as 0.1 μg/mL. In addition, A. aegypti SGE was shown to induce apoptosis in all lymphocyte populations evaluated (CD4+ and CD8+ T cells, and B cells) through a mechanism involving caspase-3 and caspase-8, but not Bim. By using different approaches to generate memory cells, we were able to verify that these cells are resistant to SGE effects. Conclusion: Our results show that lymphocytes, and not DCs, are the primary target of A. aegypti salivary components. In the presence of A. aegypti SGE, naïve lymphocyte populations die by apoptosis via a caspase-3- and caspase-8-dependent pathway, while memory cells are selectively more resistant to its effects. The present work contributes to elucidating the activities of A. aegypti salivary molecules on the antigen-presenting cell-lymphocyte axis and in the biology of these cells.

Relevance:

90.00%

Publisher:

Abstract:

The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that transistor density on chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep Short-Channel Effects under control without adopting a high doping level in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of the particles travelling from the source into the channel, and therefore the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction-band discontinuities, as well as the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction-band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered.
The scaling of device dimensions and the adoption of innovative architectures have consequences for power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena.
To understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs, featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
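As a rough, first-order illustration of why self-heating matters in thermally insulated SOI devices (my own back-of-the-envelope sketch, not the thesis methodology, which relies on full 3D electro-thermal simulation), the channel temperature rise can be estimated from the dissipated power and an equivalent device thermal resistance:

# First-order self-heating estimate: dT = P * Rth. All numbers are illustrative only.
vdd = 1.0       # supply voltage [V] (assumed)
i_on = 1.2e-3   # ON-current of the device [A] (assumed)
rth = 6.0e4     # equivalent device thermal resistance [K/W] (assumed)

power = vdd * i_on      # dissipated power [W]
delta_t = power * rth   # temperature rise of the channel above ambient [K]
print(f"P = {power * 1e3:.2f} mW, dT = {delta_t:.1f} K")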

Relevance:

90.00%

Publisher:

Abstract:

Modern embedded systems are equipped with hardware resources that allow the execution of very complex applications such as audio and video decoding. The design of such systems must satisfy two opposing requirements: on the one hand, a high computational potential must be provided; on the other, stringent constraints on energy consumption must be respected. One of the most widespread trends for meeting these opposing requirements is to integrate on the same chip a large number of processors characterized by a simplified design and low power consumption. However, to actually exploit the computational potential offered by an array of processors, application development methodologies must be heavily revisited. With the advent of multi-processor systems-on-chip (MPSoC), parallel programming has spread widely in the embedded domain as well. Nevertheless, progress in parallel programming has not kept pace with the ability to integrate parallel hardware on a single chip.
Beyond the introduction of multiple processors, the need to reduce MPSoC power consumption entails other architectural solutions that have the direct effect of complicating application development. The design of the memory subsystem, in particular, is a critical problem. Integrating memory banks on chip allows very short access times and very low power consumption. Unfortunately, the amount of on-chip memory that can be integrated in an MPSoC is very limited. For this reason, off-chip memory banks must be added; these have a much larger capacity, but also higher power consumption and longer access times. Most MPSoCs currently on the market devote part of their area budget to the implementation of cache and/or scratchpad memories. Scratchpad memories (SPMs) are often preferred to caches in embedded MPSoCs because of their greater predictability, smaller area occupation and, above all, lower power consumption. On the other hand, while the use of caches is completely transparent to the programmer, SPMs must be explicitly managed by the application. Exposing the organization of the memory hierarchy to the application makes it possible to exploit its advantages (reduced access times and power consumption) efficiently. In return, to obtain these benefits, applications must be written so that data are suitably partitioned and allocated on the various memories. The burden of this complex task obviously falls on the programmer.
This scenario clearly describes the need for programming models and supporting tools that simplify the development of parallel applications. This thesis presents a framework for embedded MPSoC software development based on OpenMP. OpenMP is a de facto standard for shared-memory multiprocessor programming, characterized by a simple annotation-based approach to parallelization (compiler directives). Its programming interface allows loop-level parallelism, which is very common in embedded signal processing and multimedia applications, to be expressed in a natural and very efficient way. OpenMP is an excellent starting point for defining a programming model for MPSoCs, above all because of its ease of use.
On the other hand, to exploit the computational potential of an MPSoC efficiently, the implementation of OpenMP support must be deeply revisited, both in the compiler and in the runtime environment. All the constructs used to manage parallelism, work sharing and inter-processor synchronization carry an overhead cost that must be minimized so as not to compromise the benefits of parallelization. This can be achieved only through a careful analysis of the hardware characteristics and the identification of potential bottlenecks in the architecture. An implementation of task management, barrier synchronization and data sharing that exploits the hardware resources efficiently allows high performance and scalability to be obtained. Data sharing in the OpenMP model deserves particular attention. In a shared-memory model, the data structures (arrays, matrices) accessed by the program are physically allocated on a single memory resource reachable by all processors. As the number of processors in a system grows, concurrent access to a single memory resource becomes an obvious bottleneck. To relieve the pressure on the memories and on the interconnect, we study and propose data-structure partitioning techniques. These techniques require a single array entity to be treated in the program as a collection of many sub-arrays, each of which can be physically allocated on a different memory resource. From the program's point of view, addressing a partitioned array requires that, at every access, instructions be executed to recompute the physical destination address. This is clearly a lengthy, complex and error-prone task. For this reason, our partitioning techniques have been integrated into the OpenMP programming interface, which has been significantly extended. Specifically, new directives and clauses allow the programmer to annotate the array data to be partitioned and allocated in a distributed fashion over the memory hierarchy. Support tools have also been developed to collect profiling information on array access patterns. This information is exploited by our compiler to allocate the partitions on the various memory resources while respecting an affinity relation between tasks and data. More precisely, the allocation passes in our compiler assign a given partition to the scratchpad memory local to the processor hosting the task that performs the largest number of accesses to it.
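To illustrate the address recomputation that partitioned arrays impose on every access, here is a schematic model in Python of the index-to-(partition, offset) translation; it is only an illustration of the concept, not the compiler-generated C code developed in the thesis, and the tile size and bank count are assumed.

# Schematic model of partitioned-array addressing: a logical array is split into
# tiles, each placed on a different memory bank (e.g. a per-core scratchpad).
TILE = 256                               # elements per partition (assumed)
spm = [[0] * TILE for _ in range(4)]     # four "scratchpad" banks (assumed)

def write(i, value):
    bank, offset = divmod(i, TILE)       # translation recomputed on every access
    spm[bank][offset] = value

def read(i):
    bank, offset = divmod(i, TILE)
    return spm[bank][offset]

# The profiling-driven allocation described above would instead place each tile
# on the scratchpad of the core that accesses it most often.
write(517, 42)       # global index 517 lands in bank 2, offset 5
print(read(517))     # -> 42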

Relevance:

90.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile embedded sensors are deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before it is transmitted due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, that have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging therefore plays an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognizing persons and objects and for recognizing behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics which are outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• small form factor, to reduce node intrusiveness;
• low power consumption, to reduce battery size and to extend node lifetime;
• low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.
Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.
Multimodal Surveillance: In several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network and distributed surveillance communities. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
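A highly simplified sketch of the energy-dependent PIR triggering idea described above is given below. The thresholds, energy costs and battery model are assumptions introduced for illustration; the thesis itself uses a Model Predictive Controller rather than this naive threshold policy.

# Toy policy: service a PIR event (wake camera + radio, classify, transmit) only
# when the stored-energy budget allows it. All numbers are illustrative.
WAKE_THRESHOLD_J = 5.0    # minimum stored energy required to process one event
FRAME_COST_J = 1.2        # energy spent acquiring + classifying + transmitting

def on_pir_event(battery_energy_j, classify_and_send):
    """Return the remaining energy; service the event only if the budget allows it."""
    if battery_energy_j >= WAKE_THRESHOLD_J:
        classify_and_send()                    # power up camera, run classifier, send result
        return battery_energy_j - FRAME_COST_J
    return battery_energy_j                    # stay asleep, preserve the node

remaining = on_pir_event(6.0, lambda: print("frame processed"))
print(f"energy left: {remaining:.1f} J")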

Relevance:

90.00%

Publisher:

Abstract:

Liquid crystalline elastomers (LCEs) are known to perform a reversible change of shape upon the phase transition from the semi-ordered liquid crystalline state to the chaotic isotropic state. This unique behavior of these "artificial muscles" arises from the self-organizing properties of liquid crystals (mesogens) in combination with the entropy elasticity of the slightly crosslinked elastomer network. In this work, micrometer-sized LCE actuators are fabricated in a microfluidic setup. The microtubular shear flow provides a uniform orientation of the mesogens during crosslinking, a prerequisite for obtaining actuating LCE samples. The aim of this work was to design different actuator geometries and to broaden the applicability of the microfluidic device to different types of liquid crystalline mesogens, ranging from side-chain to main-chain systems, as well as monomer and polymer precursors. For example, the thiol-ene "click" mechanism was used for the polymerization and crosslinking of main-chain LCE actuators. The main focus was, however, placed on acrylate monomers and polymers with LC side chains. An LC polymer precursor comprising mesogenic and crosslinkable side chains was synthesized. Used in combination with an LC monomer, the polymeric crosslinker promoted a stable LC phase, which allowed the mixture to be handled isothermally in the microfluidic reactor. If processed without the additional LC components, the polymer precursor yielded actuating fibers. A suitable co-flowing continuous phase facilitates the formation of a liquid jet and lowers the tendency for drop formation. By modification of the microfluidic device, it was further possible to prepare core-shell particles consisting of an LCE shell filled with an isotropic liquid. In analogy to the heart, a hollow muscle, the elastomer shell expels the inner liquid core upon its contraction. The feasibility of the core-shell particles as micropumps was demonstrated. In general, the synthesized LCE microactuators may be utilized as active components in micromechanical and lab-on-chip systems.

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: This report presents data from the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network on care of and morbidity and mortality rates for very low birth weight infants, according to gestational age (GA). METHODS: Perinatal/neonatal data were collected for 9575 infants of extremely low GA (22-28 weeks) and very low birth weight (401-1500 g) who were born at network centers between January 1, 2003, and December 31, 2007. RESULTS: Rates of survival to discharge increased with increasing GA (6% at 22 weeks and 92% at 28 weeks); 1060 infants died at […] CONCLUSION: Although the majority of infants with GAs of ≥24 weeks survive, high rates of morbidity among survivors continue to be observed.

Relevance:

90.00%

Publisher:

Abstract:

This work is a multidisciplinary environmental study that provides new insights into the relationships between sediment organic matter characteristics and polybrominated diphenyl ether (PBDE) concentrations. The aim of the present multivariate study was to correlate the factors influencing PBDE accumulation in sediment by using principal component analysis (PCA). Organic matter characterization by Fourier Transform Infrared spectroscopy and physicochemical analyses (Total Organic Carbon, pH, electrical conductivity) of sediment samples were considered for PCA. Samples were collected from an artificial irrigation network in the Mendoza River irrigation areas. PCA provided a comprehensive analysis of the studied variables, identifying two components that explained 63% of the data variance. Those factors were mainly associated with the degree of organic matter degradation, which represents a new insight into the relationships between organic matter in sediments and the fate of PBDEs. In this sense, it was possible to determine that not only the content but also the type of organic matter (chemical structure) could be relevant when evaluating PBDE accumulation and transport in the environment. Typification of organic matter may be a useful tool to predict the areas where PBDEs are more likely to accumulate, as well as sediment transportation capability.
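The following is a minimal sketch of the PCA workflow described above, run on standardized sediment descriptors. The column meanings (TOC, pH, conductivity, an FT-IR band ratio, PBDE concentration) and all values are hypothetical stand-ins for the study's real FT-IR and physicochemical data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = sediment samples; columns = hypothetical descriptors
# (TOC [%], pH, conductivity [uS/cm], FT-IR band ratio, PBDE [ng/g]).
X = np.array([[1.2, 7.8, 350, 0.42, 3.1],
              [0.8, 8.1, 420, 0.35, 1.9],
              [2.5, 7.4, 300, 0.61, 6.4],
              [1.9, 7.6, 390, 0.55, 4.8],
              [0.6, 8.3, 450, 0.30, 1.2]])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))

print("explained variance ratio:", pca.explained_variance_ratio_)  # cf. ~63% for two PCs
print("loadings (PC1, PC2):\n", pca.components_.T)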

Relevance:

90.00%

Publisher:

Abstract:

This project studies the behavior of a system based on the Texas Instruments CC1110 chip for wireless applications. Devices based on this type of chip are currently in widespread use, given the ever-growing demand for wireless management and control applications. The first part of the project therefore presents the state of the art in this area, covering embedded operating systems, FPGAs, etc. It also gives an introduction to the history of unmanned aerial vehicles, which are the vehicle chosen for the data link. In the second part, the device is studied by means of a development board, verifying and checking its range with the supplied software. It is worth noting at this point that the board must be controlled through low-level programming (C language), which gives great versatility to the applications that can be developed. Accordingly, in the third part a functional program is developed, based on requirements provided by the company collaborating on the project (INDRA). This program is implemented in the Matlab environment, which is very useful for this type of application given its versatility and its powerful handling of variables. Finally, once these programs were completed, specific tests were carried out for each of them, in some cases including field tests with vehicles as similar as possible to those of the real environment in which the system is expected to be used. To complement the program, a highly visual user manual is included so that getting started is quick and simple. To conclude, future lines of application for the system, conclusions, a budget and an annex with the most important program code are presented.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a novel self-timed multi-purpose sensor especially conceived for Field Programmable Gate Arrays (FPGAs). The aim of the sensor is to measure performance variations during the life-cycle of the device, such as process variability, critical path timing and temperature variations. The proposed topology, through the use of both combinational and sequential FPGA elements, amplifies the time of a signal traversing a delay chain to produce a pulse whose width is the sensor’s measurement. The sensor is fully self-timed, avoiding the need for clock distribution networks and eliminating the limitations imposed by the system clock. One single off- or on-chip time-to-digital converter is able to perform digitization of several sensors in a single operation. These features allow for a simplified approach for designers wanting to intertwine a multi-purpose sensor network with their application logic. Employed as a temperature sensor, it has been measured to have an error of ±0.67 °C, over the range of 20–100 °C, employing 20 logic elements with a 2-point calibration.
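As an illustration of the 2-point calibration mentioned above (a generic linear calibration, not the authors' exact procedure), two reference measurements fix a linear pulse-width-to-temperature mapping; the pulse widths and reference temperatures below are assumed values.

# Generic 2-point linear calibration: two (pulse_width, temperature) references
# define the mapping used to convert digitized sensor readings. Numbers are illustrative.
(w1, t1) = (1850.0, 20.0)     # pulse width [ns] measured at 20 degC (assumed)
(w2, t2) = (2410.0, 100.0)    # pulse width [ns] measured at 100 degC (assumed)

gain = (t2 - t1) / (w2 - w1)  # degC per ns
offset = t1 - gain * w1

def to_celsius(width_ns):
    """Convert a digitized pulse width into temperature."""
    return gain * width_ns + offset

print(f"{to_celsius(2130.0):.1f} degC")   # a reading halfway between the two references -> 60.0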