979 results for Non-dispersive infrared sensor (NDIR)
Abstract:
In the last 20-30 years, the transfer of new technologies from research centres to food industry processes has been very fast. Infrared thermography is a tool used in many fields, including agriculture and food science and technology, because of its important qualities: it is non-destructive, fast, accurate, repeatable and economical. Almost all industrial food processors have to use thermal processes to obtain an optimal product that respects quality and safety standards. The control of the temperature of food products during production, transportation, storage and sales is an essential process in the food industry network. This tool can minimize human error during the control of heating operations and reduce personnel costs. In this thesis, the application of infrared thermography (IRT) was studied for different products that require a thermal process during food processing. The background of thermography is presented, together with some of its applications in the food industry and the benefits and limits of its applicability. The measurement of the temperature of the egg shell during heat treatment in natural convection and with hot air was compared with the temperatures calculated by a simplified finite element model developed previously. The complete process showed good agreement between calculated and observed temperatures, so this technique can be useful for controlling heat treatments for the decontamination of eggs using infrared thermography. Another important application of IRT was to determine the evolution of the emissivity of raw potato during the freezing process and to control this process non-destructively. We can conclude that IRT can represent a real option for the control of thermal processes in the food industry, but more research on various products is necessary.
Abstract:
Cytochrome c oxidase (CcO), complex IV of the respiratory chain, is one of the haem-copper oxidases and plays an important role in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner membrane of mitochondria and in the cell membrane of some aerobic bacteria. CcO catalyses the electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear centre (CuB-haem a3). The reduction of O2 to two H2O consumes four protons. In addition, four protons are pumped across the membrane, creating an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO are still poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The aim of this work is to contribute to the understanding of the function of CcO. For this purpose, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalised metal electrode via a His-tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, yielding the so-called protein-tethered bilayer lipid membrane (ptBLM). The optimal surface concentration of the bound proteins had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR) and cyclic voltammetry (CV) were applied to characterise the activity of CcO as a function of packing density. The main part of this work concerns the investigation of direct ET to CcO under anaerobic conditions.
The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, the ET was investigated by fast-scan CV, recording CVs of non-activated as well as activated CcO at different scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyse the CVs. The method makes it possible to distinguish between a sequential and an independent ET mechanism to the four centres CuA, haem a, haem a3 and CuB, and to determine the standard redox potentials and kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. Square-wave potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that reflect changes in vibrational modes of amino acids and peptide groups as a function of the redox state of the centres. Based on assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centres. Band areas plotted against time then reflect the redox kinetics of the centres and were again evaluated with the four-ET model. The results of both studies lead to the conclusion that ET to CcO in a ptBLM most probably follows the sequential mechanism, corresponding to the natural ET from cytochrome c to CcO.
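The four-ET model itself is not reproduced in the abstract. As an illustration of what a sequential-mechanism kinetic model looks like, the following sketch integrates a linear transfer chain electrode → CuA → haem a → haem a3 → CuB; the rate constants are hypothetical placeholders, not the fitted parameters of the study.

```python
import numpy as np

# Toy sequential electron-transfer chain: each centre can only be reduced
# by its upstream neighbour. Rates k (s^-1) are illustrative, not fitted.
k = [500.0, 300.0, 200.0, 100.0]   # electrode->CuA, CuA->a, a->a3, a3->CuB

def simulate(t_end=0.05, dt=1e-5):
    """Euler-integrate the reduced fractions x of the four redox centres."""
    x = np.zeros(4)                 # all centres start oxidised
    for _ in range(int(t_end / dt)):
        flux_in = k[0] * (1 - x[0])                         # electrode -> CuA
        fluxes = [k[i] * x[i - 1] * (1 - x[i]) for i in range(1, 4)]
        x[0] += dt * (flux_in - fluxes[0])
        for i in range(1, 3):
            x[i] += dt * (fluxes[i - 1] - fluxes[i])
        x[3] += dt * fluxes[2]
        x = np.clip(x, 0.0, 1.0)
    return x

print(simulate())
```

In the sequential mechanism the downstream centres can only fill after the upstream ones, which is the kinetic signature the fast-scan CV analysis exploits; the independent mechanism would instead couple every centre directly to the electrode.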
Abstract:
The aim of this thesis is the characterisation of an optical sensor for haematocrit reading and the development of the device's calibration algorithm. In other words, using data obtained from a suitably planned calibration session, the developed algorithm returns the data-interpolation curve that characterises the transducer. The main steps of the thesis work are summarised in the following points: 1) Planning of the calibration session needed for data collection and subsequent construction of a black-box model. Output: the signal from the optical sensor (reading expressed in mV). Input: the haematocrit value expressed in percentage points (this quantity represents the true blood-volume value and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1- acquisition of the data coming from the sensor and of the operating state of the two-phase pump; 2- normalisation of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The data-normalisation step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Existing studies also show a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood flow velocity set by the pump and how this quantity can influence the haematocrit reading.
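The two algorithmic parts described above (normalisation against the sensor's reference value, then regression) can be sketched as follows. All numbers are synthetic, and the second-order polynomial is an assumed model class; the thesis does not specify the form of its interpolation curve.

```python
import numpy as np

# Synthetic calibration data: true haematocrit (%) from centrifugation
# vs. optical sensor readings (mV). Values are illustrative only.
hct = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
raw_mv = np.array([812.0, 779.0, 742.0, 701.0, 663.0, 628.0])
reference_mv = 850.0                       # hypothetical sensor reference value

normalised = raw_mv / reference_mv         # step 2a: normalisation

# Step 2b: regression (polynomial order is an assumption).
calibration_curve = np.poly1d(np.polyfit(hct, normalised, deg=2))

def estimate_hct(reading_mv, lo=15.0, hi=50.0):
    """Invert the calibration curve numerically for a new sensor reading."""
    grid = np.linspace(lo, hi, 2001)
    target = reading_mv / reference_mv
    return grid[np.argmin(np.abs(calibration_curve(grid) - target))]

print(estimate_hct(700.0))
```

Normalising first makes readings from different sessions (or sensors with different reference levels) comparable before a single regression curve is fitted, which is the role the abstract assigns to that step.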
Abstract:
A substantial fraction of the organic carbon present in the atmosphere is found as volatile organic compounds, which are predominantly released by the biosphere. Such biogenic emissions have a large influence on the chemical and physical properties of the atmosphere, as they contribute to the formation of ground-level ozone and secondary organic aerosols. To better understand the formation of ground-level ozone and of secondary organic aerosols, the technical capability to accurately measure the sum of these volatile organic substances is required. Commonly used methods focus only on the detection of specific non-methane hydrocarbon compounds. The sum of these individual compounds, however, may represent only a lower limit of atmospheric organic carbon concentrations, since the available methods are not able to analyse all organic compounds in the atmosphere. A few studies are known that have addressed the total carbon determination of non-methane hydrocarbon compounds in air, but measurements of the total exchange of non-methane organic compounds between vegetation and the atmosphere are lacking. We therefore investigated the total carbon determination of non-methane organic compounds from biogenic sources. The determination of total organic carbon was realised by collecting and enriching these compounds on a solid adsorbent. This first step was necessary to separate the stable gases CO, CO2 and CH4 from the organic carbon fraction. The organic compounds were thermally desorbed and oxidised to CO2. The CO2 produced by the oxidation was collected on a further enrichment unit and analysed by thermal desorption and subsequent detection with an infrared gas analyser.
The main difficulties we identified were (i) the separation of ambient CO2 from the organic carbon fraction during enrichment, (ii) the recovery rates of the various non-methane hydrocarbon compounds from the adsorbent, (iii) the choice of catalyst, and (iv) interferences occurring at the detector of the total carbon analyser. The choice of a Pt-Rd wire as catalyst led to a significant advance with respect to the correct determination of the CO2 background signal. This was necessary because small amounts of CO2 were also collected on the adsorption unit during the enrichment of the volatile organic substances. Catalytic materials with large surface areas proved unsuitable for this application because, despite high temperatures, CO2 uptake and later release by the catalyst material were observed. The method was tested with various individual volatile organic compounds as well as in two plant-chamber experiments with a selection of VOC species emitted by different plants. The plant-chamber measurements were accompanied by GC-MS and PTR-MS measurements. In addition, calibration tests with various individual substances from permeation/diffusion sources were carried out. The total carbon analyser was able to confirm the diurnal course of the plant emissions. However, deviations of up to 50% in the total organic carbon mixing ratios were observed compared with the accompanying standard methods.
Abstract:
Wireless Sensor Networks (WSNs) offer a new solution for distributed monitoring, processing and communication. A first challenge is the stringent energy constraints to which sensing nodes are typically subjected. WSNs are often battery powered and placed where it is not possible to recharge or replace batteries. Energy can be harvested from the external environment, but it is a limited resource that must be used efficiently. Energy efficiency is therefore a key requirement for a credible WSN design. From the power source's perspective, aggressive energy management techniques remain the most effective way to prolong the lifetime of a WSN. A new adaptive algorithm is presented that minimizes the consumption of wireless sensor nodes in sleep mode when the power source has to be regulated using DC-DC converters. Another important aspect addressed is time synchronisation in WSNs. WSNs are used for real-world applications where physical time plays an important role. An innovative low-overhead synchronisation approach is presented, based on a Temperature Compensation Algorithm (TCA). The last aspect addressed relates to self-powered WSNs with Energy Harvesting (EH) solutions. Wireless sensor nodes with EH require some form of energy storage, which enables systems to continue operating during periods of insufficient environmental energy. However, the size of the energy storage strongly restricts the use of WSNs with EH in real-world applications. A new approach is presented that enables computation to be sustained during intermittent power supply. The discussed approaches are applied to real-world WSN applications. The first scenario draws on the experience gathered during a European project (the 3ENCULT Project) on the design and implementation of an innovative network for monitoring heritage buildings.
The second scenario draws on the experience with Telecom Italia regarding the design of smart energy meters for monitoring the usage of household appliances.
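The TCA mentioned above is not detailed in the abstract. As a sketch of the underlying idea, low-cost tuning-fork crystals drift roughly quadratically with temperature around a turnover point, so a node that knows its temperature can pre-correct its sleep intervals; the coefficients below are typical textbook figures, not values from the thesis.

```python
# Temperature-compensated sleep scheduling, in the spirit of a TCA.
# Constants are typical for 32 kHz tuning-fork crystals, not thesis values.
T0 = 25.0          # turnover temperature (deg C)
K = -0.035e-6      # parabolic coefficient, fractional frequency per (deg C)^2

def drift(temp_c):
    """Fractional frequency error of the clock at a given temperature."""
    return K * (temp_c - T0) ** 2

def compensated_interval(nominal_s, temp_c):
    """Stretch/shrink a sleep interval so wake-ups stay synchronised."""
    return nominal_s / (1.0 + drift(temp_c))

# A node sleeping a nominal 60 s at -10 deg C runs slow by ~43 ppm, so the
# compensated interval is slightly longer than 60 s.
print(compensated_interval(60.0, -10.0))
```

Because the correction uses only the local temperature sensor, no extra synchronisation messages are exchanged, which is what makes such an approach low-overhead.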
Antarctic cloud spectral emission from ground-based measurements, a focus on far infrared signatures
Abstract:
The present work belongs to the PRANA project, the first extensive field campaign of observation of atmospheric emission spectra covering the far-infrared (FIR) spectral region for more than two years. The principal deployed instrument is REFIR-PAD, a Fourier transform spectrometer that we use to study Antarctic cloud properties. A dataset covering the whole of 2013 has been analyzed. First, a selection of good-quality spectra is performed, using radiance values in a few chosen spectral regions as thresholds. These spectra are described in a synthetic way by averaging radiances over selected intervals, converting them into brightness temperatures (BTs) and finally considering the differences between each pair of them. A supervised feature-selection algorithm is implemented to select the features that are genuinely informative about the presence, phase and type of cloud. Training and test sets are then collected by means of lidar quick-looks. The supervised classification of the overall monthly datasets is performed using a support vector machine (SVM). On the basis of this classification, and with the help of lidar observations, 29 non-precipitating ice cloud case studies are selected. A single spectrum, or at most an average over two or three spectra, is processed by the retrieval algorithm RT-RET, exploiting the main IR window channels, in order to extract cloud properties. Retrieved effective radii and optical depths are analyzed to compare them with literature studies and to evaluate possible seasonal trends. Finally, the atmospheric profiles output by the retrieval are used as inputs for simulations, assuming two different crystal habits, with the aim of examining our ability to reproduce radiances in the FIR. Substantial mis-estimations are found for FIR micro-windows: a high variability is observed in the spectral pattern of the deviations of the simulations from the measured spectra, and an effort has been made to link these deviations to cloud parameters.
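The feature-construction pipeline (band-averaged radiances → BTs → pairwise differences) can be sketched as below. The band limits are illustrative, not the intervals chosen in the thesis, and the stand-in spectrum is a plain blackbody rather than a measured REFIR-PAD spectrum.

```python
import numpy as np
from itertools import combinations

# Radiation constants in radiance form (radiance in mW/(m^2 sr cm^-1)).
C1 = 1.191042e-5   # mW/(m^2 sr cm^-4)
C2 = 1.4387752     # cm K

def planck(wavenumber, temp):
    """Blackbody radiance at a wavenumber (cm^-1) and temperature (K)."""
    return C1 * wavenumber**3 / np.expm1(C2 * wavenumber / temp)

def brightness_temperature(radiance, wavenumber):
    """Invert Planck's law for one (radiance, wavenumber) pair."""
    return C2 * wavenumber / np.log1p(C1 * wavenumber**3 / radiance)

def features(wn, spectrum, bands):
    """Band-average radiance -> BT -> all pairwise BT differences."""
    bts = []
    for lo, hi in bands:
        mask = (wn >= lo) & (wn <= hi)
        bts.append(brightness_temperature(spectrum[mask].mean(),
                                          0.5 * (lo + hi)))
    return np.array([a - b for a, b in combinations(bts, 2)])

wn = np.linspace(200.0, 1000.0, 801)          # wavenumber grid, cm^-1
spectrum = planck(wn, 250.0)                  # stand-in for a measured spectrum
bands = [(400, 420), (530, 560), (820, 840)]  # illustrative micro-windows
print(features(wn, spectrum, bands))
```

For the blackbody stand-in all BT differences vanish by construction; in real sky spectra, clouds perturb different micro-windows differently, and it is precisely those non-zero BT differences that the feature-selection step and the SVM exploit.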
Abstract:
Biosensors find wide application in clinical diagnostics, bioprocess control and environmental monitoring. They should not only show high specificity and reproducibility but also high sensitivity and signal stability. Therefore, I introduce a novel sensor technology based on plasmonic nanoparticles which meets these requirements. Plasmonic nanoparticles exhibit strong absorption and scattering in the visible and near-infrared spectral range. The plasmon resonance, the collective coherent oscillation mode of the conduction-band electrons against the positively charged ionic lattice, is sensitive to the local environment of the particle. I monitor these changes in the resonance wavelength by a new dark-field spectroscopy technique. Thanks to a strong light source and a highly sensitive detector, a temporal resolution in the microsecond regime is possible in combination with high spectral stability. This opens a window to investigate dynamics on the molecular level and to gain knowledge about fundamental biological processes. First, I investigate adsorption at the non-equilibrium as well as at the equilibrium state. I show the temporal evolution of single adsorption events of fibrinogen on the surface of the sensor on a millisecond timescale. Fibrinogen is a blood plasma protein with a unique shape that plays a central role in blood coagulation and is always involved in cell-biomaterial interactions. Further, I monitor equilibrium coverage fluctuations of sodium dodecyl sulfate and demonstrate a new approach to quantify the characteristic rate constants, one that is independent of mass-transfer interference and long-term drifts of the measured signal. This method had been investigated theoretically by Monte Carlo simulations, but so far no sensor technology offered a sufficient signal-to-noise ratio. Second, I apply plasmonic nanoparticles as sensors for the determination of diffusion coefficients.
Here, the sensing volume of a single immobilized nanorod is used as the detection volume. When a diffusing particle enters the detection volume, a shift in the resonance wavelength is induced. As no labeling of the analyte is necessary, the hydrodynamic radius and thus the diffusion properties are not altered and can be studied in their natural form. In comparison to the conventional Fluorescence Correlation Spectroscopy technique, a volume reduction by a factor of 5000-10000 is achieved.
Abstract:
The objective on which this thesis is based is the integration of Wireless Sensor Network (WSN) technology into the context of the Internet of Things (IoT). To reach this objective, the first step was to study the concept of the Internet of Things in depth, so as to understand whether it could actually be applied to WSNs as well. The architecture of WSNs was then analyzed, followed by a survey of the various operating systems and communication protocols supported by these networks. Finally, several IoT software platforms were studied. The second step was therefore to implement a software stack enabling communication between WSNs and an IoT platform. CoAP was used as the application protocol for communication with the WSNs. The development of this stack made it possible to extend the SensibleThings platform; the programming language used was Java. As a third step, research was carried out to understand to which real application scenario the designed software stack could be applied. Then, in order to test the correct operation of the CoAP stack, a proof-of-concept application was developed that simulates a fire detection system. This scenario featured two WSNs sending the temperature measured by thermal sensors to a third node acting as a control center, whose task was to determine whether the received values were above a certain threshold and, if so, trigger an alarm. Finally, the last step of this thesis was to evaluate the performance of the developed system.
The parameters used for these evaluations were: the duration of CoAP requests, the overhead introduced by the CoAP stack into the SensibleThings platform, and the scalability of a particular component of the stack. The results of these tests showed that the solution developed in this thesis introduces very limited overhead into the pre-existing platform, and that not all requests have the same duration, since the duration depends on the type of request sent to a WSN. However, the performance of the system could be further improved, for example by developing an algorithm that allows concurrent handling of multiple CoAP requests sent by the same node. Moreover, since this thesis did not consider the problem of security, a possible extension of this work would be to implement policies for secure communication between SensibleThings and the WSNs.
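The control-center logic of the fire-detection proof of concept reduces to a threshold check over the latest readings from the sensor nodes. A minimal sketch (node identifiers and the threshold value are illustrative, and the thesis implementation is in Java on SensibleThings, not reproduced here):

```python
# Minimal stand-in for the control-center node: compare temperatures
# reported by WSN nodes against a threshold and list nodes in alarm.
ALARM_THRESHOLD_C = 55.0   # hypothetical fire-alarm threshold

def control_center(readings, threshold=ALARM_THRESHOLD_C):
    """readings: dict mapping node id -> latest temperature (deg C).
    Returns the ids of nodes whose reading exceeds the threshold."""
    return [node for node, temp in readings.items() if temp > threshold]

alarms = control_center({"wsn1-node3": 21.5, "wsn2-node1": 61.0})
print(alarms)
```

In the actual system these readings would arrive as CoAP responses relayed through the stack, and the alarm decision would trigger a notification on the platform rather than a return value.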
Abstract:
Biological homochirality on earth and its tremendous consequences for pharmaceutical science and technology have led to an ever-increasing interest in the selective production, resolution and detection of the enantiomers of a chiral compound. Chiral surfaces and interfaces that can distinguish between enantiomers play a key role in this respect, as enantioselective catalysts as well as for separation purposes. Despite the impressive progress in these areas in the last decade, molecular-level understanding of the interactions that are at the origin of enantiodiscrimination is lagging behind, due to the lack of powerful experimental techniques to probe these interactions selectively with high sensitivity. In this article, techniques based on infrared spectroscopy are highlighted that are able to selectively target the chiral properties of interfaces. In particular, these methods are the combination of Attenuated Total Reflection InfraRed (ATR-IR) spectroscopy with Modulation Excitation Spectroscopy (MES), to probe enantiodiscriminating interactions at chiral solid-liquid interfaces, and Vibrational Circular Dichroism (VCD), which is used to probe the structure of chirally-modified metal nanoparticles. The former technique aims at suppressing signals arising from non-selective interactions, which may completely hide the signals of interest due to enantiodiscriminating interactions. Recently, this method was successfully applied to investigate enantiodiscrimination at self-assembled monolayers of chiral thiols on gold surfaces. The nanometer-size analogues of the latter (gold nanoparticles protected by a monolayer of a chiral thiol) are amenable to VCD spectroscopy. It is shown that this technique yields detailed structural information on the adsorption mode and the conformation of the adsorbed thiol.
This may also turn out to be useful to clarify how chirality can be bestowed onto the metal core itself and the nature of the chirality of the latter, which is manifested in the metal-based circular dichroism activity of these nanoparticles.
Abstract:
BACKGROUND: In patients with coronary artery disease (CAD), a well-grown collateral circulation has been shown to be important. The aim of this prospective study using peripheral blood monocytes was to identify marker genes for an extensively grown coronary collateral circulation. METHODS: The collateral flow index (CFI) was obtained invasively with an angioplasty pressure-sensor guidewire in 160 individuals (110 patients with CAD and 50 individuals without CAD). RNA was extracted from monocytes, followed by microarray-based gene-expression analysis. 76 selected genes were analysed by real-time polymerase chain reaction (PCR). A receiver operating characteristic analysis based on differential gene expression was then performed to separate individuals with poorly developed (CFI < 0.21) and well-developed collaterals (CFI ≥ 0.21). Thereafter, the influence of the chemokine MCP-1 on the expression of six selected genes was tested by PCR. RESULTS: The expression of 203 genes correlated significantly with CFI (p = 0.000002-0.00267) in patients with CAD, and of 56 genes in individuals without CAD (p = 0.0079-0.0430). Biological pathway analysis revealed that 76 of these genes belong to four different pathways: angiogenesis, integrin, platelet-derived growth factor, and transforming growth factor beta signalling. Three genes in each subgroup differentiated with high specificity between individuals with low and high CFI (≥ 0.21). Two of these genes showed pronounced differential expression between the two groups after cell stimulation with MCP-1. CONCLUSIONS: Genetic factors play a role in the formation and preformation of the coronary collateral circulation. Gene-expression analysis in peripheral blood monocytes can be used for non-invasive differentiation between individuals with poorly and with well-grown collaterals. MCP-1 can influence the arteriogenic potential of monocytes.
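The ROC step above asks how well a single gene's expression separates the two CFI groups. A minimal sketch with synthetic expression values (not data from the study), computing the area under the ROC curve via the rank-sum formulation:

```python
import numpy as np

# Synthetic example: expression of one candidate marker gene per individual,
# labelled 1 for well-developed collaterals (CFI >= 0.21), else 0.
expression = np.array([2.1, 3.4, 1.8, 4.0, 3.9, 2.5, 4.4, 1.6])
well_developed = np.array([0, 1, 0, 1, 1, 0, 1, 0])

def roc_auc(score, label):
    """AUC via the rank-sum (Mann-Whitney U) identity; assumes no ties."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n_pos = label.sum()
    n_neg = len(label) - n_pos
    u = ranks[label == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(roc_auc(expression, well_developed))
```

An AUC of 1.0 means some expression threshold separates the groups perfectly; the study's "high specificity" genes are those whose ROC curves come close to this ideal.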
Abstract:
This work presents an innovative integration of sensing and nano-scaled fluidic actuation, combining pH-sensitive optical dye immobilization with electro-osmotic phenomena in polar solvents such as water for flow-through pH measurements. These measurements are performed in a flow-through sensing device (FTSD) configuration designed and fabricated at MTU. A relatively novel and interesting material, through-wafer mesoporous silica substrates with pore diameters of 20-200 nm and pore depths of 500 µm, is fabricated and implemented for electro-osmotic pumping and flow-through fluorescence sensing for the first time. Performance characteristics of macroporous silicon (> 500 µm) implemented for electro-osmotic pumping include a very large flow efficiency of 19.8 µL min⁻¹ V⁻¹ cm⁻² and a maximum pressure efficiency of 86.6 Pa/V, in comparison to mesoporous silica membranes with a flow efficiency of 2.8 µL min⁻¹ V⁻¹ cm⁻² and a pressure efficiency of 92 Pa/V. The electrical current (I) of the EOP system at 60 V applied voltage, utilizing macroporous silicon membranes, is 1.02 x 10⁻⁶ A, with a power consumption of 61.74 x 10⁻⁶ W. Optical measurements on mesoporous silica are performed spectroscopically from 300 nm to 1000 nm using ellipsometry, including angularly resolved transmission and angularly resolved reflection measurements that extend into the infrared regime. Refractive index (n) values for the oxidized and un-oxidized mesoporous silicon samples at 1000 nm are found to be 1.36 and 1.66, respectively. Fluorescence results and characterization confirm successful pH measurement using ratiometric techniques. The sensitivity measured for fluorescein in buffer solution is 0.51 a.u./pH, compared to a sensitivity of ~0.2 a.u./pH for fluorescein in a porous silica template. Porous silica membranes are efficient templates for the immobilization of optical dyes and represent a promising method to increase sensitivity to small variations in chemical properties.
The FTSD represents a device topology suitable for long-term monitoring of lakes and reservoirs. Unique and important contributions of this work include the fabrication of a through-wafer mesoporous silica membrane that has been thoroughly characterized optically using ellipsometry. Mesoporous silica membranes are tested as the porous medium in an electro-osmotic pump for generating high pressure capacities, owing to the nanometer pore sizes of the medium. Further, dye-immobilized mesoporous silica membranes, along with macroporous silicon substrates, are implemented for continuous pH measurements using fluorescence changes in a flow-through sensing device configuration. This novel integration and demonstration is completely silicon-based, is implemented for the first time, and can lead to miniaturized flow-through sensing systems based on MEMS technologies.
Abstract:
As the demand for miniature products and components continues to increase, the need for manufacturing processes to provide these products and components has also increased. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, many challenges have been experienced when directly scaling down macro processes. Initially, frictional effects were believed to be the largest challenge encountered. However, recent studies have found that the greatest challenge is posed by size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and how this thickness directly affects the product dimensions and manufacturability. At the microscale, the thickness becomes critical due to the reduced number of grains. When surface contact between the forming tools and the material blank occurs at the macroscale, there is enough material (hundreds of layers of material grains) across the blank thickness to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With a decreased number of grains across the thickness, the influence of grain size, shape and orientation is significant. Any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing equipment setups have been constructed at the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup to potentially build micro sensor encapsulation housings. The research focus then shifted to micro metal materials testing equipment setups.
These include the construction and testing of the following setups: a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (with the use of optical lithography and chemical etching) and a micro sheet hydroforming bulge test. Recently, the focus has shifted to the study of a micro tube hydroforming process. The intent is to target fuel cell, medical, and sensor encapsulation applications. While the tube hydroforming process is widely understood at the macroscale, the microscale process also offers significant challenges in terms of size effects. Current work is being conducted on applying direct current to enhance micro tube hydroforming formability. Initial experiments adding direct current to various metal forming operations have shown remarkable results. The focus of current research is to determine the validity of this process.
Abstract:
Infrared thermography is a well-recognized non-destructive testing technique for evaluating concrete bridge elements such as bridge decks and piers. However, some obstacles and limitations must be overcome before this invaluable technique can be added to the bridge inspector's tool box. Infrared thermography is based on collecting radiant temperature and presenting the results as a thermal infrared image. The two approaches to conducting an infrared thermography test are passive and active, and the source of heat is the main difference between them. Solar energy and ambient temperature change are the main heat sources in a passive infrared thermography test, while active infrared thermography involves generating a temperature gradient using an external source of heat other than the sun. Passive infrared thermography testing was conducted on three concrete bridge decks in Michigan. Ground-truth information was gathered by coring several locations on each bridge deck to validate the results obtained from the passive infrared thermography test. Challenges associated with data collection and processing using passive infrared thermography are discussed and provide additional evidence that passive infrared thermography is a promising remote sensing tool for bridge inspections. To improve the capabilities of the infrared thermography technique for evaluating the underside of bridge decks and bridge girders, an active infrared thermography technique using the surface heating method was developed in the laboratory on five concrete slabs with simulated delaminations. Results from this study demonstrated that active infrared thermography not only eliminates some limitations associated with passive infrared thermography, but also provides information regarding the depth of the delaminations.
Active infrared thermography was conducted on a segment of an out-of-service prestressed box beam, and cores were extracted from several locations on the beam to validate the results. This study confirms the feasibility of applying active infrared thermography to concrete bridges and of estimating the size and depth of delaminations. From the results gathered in this dissertation, it was established that applying both passive and active thermography can provide transportation agencies with qualitative and quantitative measures for efficient maintenance and repair decision-making.
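The reason active thermography carries depth information can be sketched with the standard one-dimensional heat-diffusion scaling: the thermal contrast of a delamination at depth z emerges on a timescale of roughly t ~ z²/α. The diffusivity below is a typical literature value for concrete, and the proportionality constant is deliberately omitted since it depends on the heating setup; this is an order-of-magnitude illustration, not the dissertation's calibration.

```python
# Order-of-magnitude link between delamination depth and observation time
# under 1-D heat diffusion: t ~ z^2 / alpha (prefactor setup-dependent).
ALPHA_CONCRETE = 8e-7   # m^2/s, typical thermal diffusivity of concrete

def observation_time(depth_m, alpha=ALPHA_CONCRETE):
    """Characteristic time (s) for a defect at depth_m to show contrast."""
    return depth_m ** 2 / alpha

for depth_cm in (2, 5, 8):
    t = observation_time(depth_cm / 100.0)
    print(f"{depth_cm} cm -> ~{t / 60:.0f} min")
```

The quadratic growth of the timescale with depth is why deeper delaminations appear later in the cooling sequence, which is the basis for estimating depth from the thermal image time series.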
Electroweak gauge-boson and Higgs production at Small qT: Infrared safety from the collinear anomaly
Abstract:
We discuss the differential cross sections for electroweak gauge-boson and Higgs production at small and very small transverse momentum q_T. Large logarithms are resummed using soft-collinear effective theory. The collinear anomaly generates a non-perturbative scale q^∗, which protects the processes from receiving large long-distance hadronic contributions. A numerical comparison of our predictions with data on the transverse-momentum distribution in Z-boson production at the Tevatron and LHC is given.
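For orientation (a schematic form recalled here, not quoted from the paper; normalization conventions may differ): the anomaly-generated scale is fixed self-consistently by a relation of the type q^∗ = M exp[−2π/(Γ_0 α_s(q^∗))], where M is the boson mass and Γ_0 is the leading coefficient of the cusp anomalous dimension; for Z-boson production this yields q^∗ of order 2 GeV. Because q^∗ is generated by the resummation itself rather than by external hadronic input, the spectrum at small q_T remains short-distance dominated.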
Abstract:
Various applications for event detection, localization, and monitoring can benefit from wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks: monitoring, (re)configuration, and code updating. On the one hand, status information of both the wireless sensor network and the sensor nodes, such as battery state and node connectivity, has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers reliably, time-efficiently, and energy-efficiently is critical for the execution of the management tasks, especially for code updating. Multicast communication is an efficient way to handle such a traffic pattern in wireless sensor networks. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability, and the limited resources of wireless sensor nodes demand energy-efficient operation. Current data dissemination schemes do not fulfil all of these requirements. To close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient, and energy-efficient dissemination of data from one sender node to multiple receivers.
In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC provides end-to-end reliability using a NACK-based mechanism. The mechanism is simple to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement sent after successful reception of all data fragments by the receiver nodes. For efficient handling of the necessary retransmissions, SNOMC integrates three caching strategies: caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, and PSFQ, as well as to UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well over different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we also developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks. A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows the management tasks to operate efficiently, because the sensor nodes are organised into small sub-networks, each managed by a mesh node.
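The NACK-based recovery idea can be illustrated with a minimal sketch: receivers stay silent while fragments arrive and request only the identifiers of missing fragments, which the sender answers from its cache (here, the sender-node caching strategy). All class and method names are illustrative, not taken from the SNOMC implementation:

```python
# Hedged sketch of NACK-based fragment recovery, in the spirit of SNOMC.

class Receiver:
    def __init__(self, total_fragments):
        self.total = total_fragments
        self.got = set()

    def deliver(self, frag_id):
        self.got.add(frag_id)

    def nack(self):
        """Missing fragment ids to request; empty set means complete."""
        return set(range(self.total)) - self.got


class Sender:
    def __init__(self, fragments):
        self.cache = list(fragments)   # "caching only on the sender node"
        self.retransmissions = 0

    def repair(self, receivers):
        """One NACK round: each receiver reports missing ids and the
        sender retransmits only those from its cache. Returns True when
        every receiver could send its data acknowledgement."""
        for rx in receivers:
            for i in sorted(rx.nack()):
                rx.deliver(i)
                self.retransmissions += 1
        return all(not rx.nack() for rx in receivers)


sender = Sender(["frag%d" % i for i in range(4)])
rx1, rx2 = Receiver(4), Receiver(4)

# Initial multicast, with fragment 2 lost on the path to rx1:
for i in range(4):
    if i != 2:
        rx1.deliver(i)
    rx2.deliver(i)

print(rx1.nack())                  # -> {2}
print(sender.repair([rx1, rx2]))   # -> True
print(sender.retransmissions)      # -> 1
```

A single lost fragment costs a single retransmission, which is why a NACK-based scheme transmits far fewer control packets than per-fragment ACKs when losses are rare.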
Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration, and code updating of sensor nodes. Integrating SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to combine a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports the reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
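The hierarchical split described above can be sketched as a routing table from sensor nodes to their managing mesh nodes, so that a management command travels station -> mesh node -> sensor sub-network instead of being flooded network-wide. Names and the default configuration value are illustrative, not MARWIS's actual interfaces:

```python
# Hedged sketch of hierarchical WSN management via mesh-node backbone.

class MeshNode:
    def __init__(self, name):
        self.name = name
        self.sensors = {}            # sensor id -> configuration dict

    def register(self, sensor_id):
        # Illustrative default; a real node would report its own config.
        self.sensors[sensor_id] = {"sensing_interval_s": 60}

    def configure(self, sensor_id, key, value):
        self.sensors[sensor_id][key] = value


class ManagementStation:
    def __init__(self):
        self.route = {}              # sensor id -> managing mesh node

    def attach(self, mesh, sensor_ids):
        for sid in sensor_ids:
            mesh.register(sid)
            self.route[sid] = mesh

    def reconfigure(self, sensor_id, key, value):
        # Offloaded to the responsible mesh node, not flooded network-wide.
        self.route[sensor_id].configure(sensor_id, key, value)


station = ManagementStation()
m1, m2 = MeshNode("mesh-1"), MeshNode("mesh-2")
station.attach(m1, ["s1", "s2"])
station.attach(m2, ["s3"])
station.reconfigure("s3", "sensing_interval_s", 10)
print(m2.sensors["s3"]["sensing_interval_s"])  # -> 10
```

Keeping the per-sensor state on the mesh nodes is what allows resource-constrained sensor nodes to stay out of most management traffic.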