942 results for "Non-uniform heat intensity"


Relevance: 100.00%

Abstract:

This work presents an experimental investigation of the thermal-hydraulic performance of a nanofluid composed of graphene nanoparticles dispersed in a 70:30 (by volume) mixture of water and ethylene glycol. The tests were carried out under forced convection inside a circular tube with uniform wall heat flux in the laminar-turbulent transition regime. The mass flow rate ranged from 40 to 70 g/s, corresponding to Reynolds numbers between 3000 and 7500. The heat flux was held constant at 11, 16 and 21 kW/m², and the inlet temperature at 15, 20 and 25°C. Three samples were produced with nanofluid volumetric concentrations of 0.05%, 0.10% and 0.15%. The thermophysical properties of all samples were measured experimentally and critically compared with the theoretical models most commonly used in the literature. Initially, experiments with distilled water confirmed the validity of the experimental apparatus for the thermo-hydraulic tests. The nanofluid samples with the highest thermal conductivity, corresponding to volumetric concentrations of 0.15% and 0.10%, were then subjected to the tests. The thermal-hydraulic performance of both samples was unsatisfactory: the convective heat transfer coefficients decreased on average by 21% for the sample with a volumetric concentration of 0.15% and by 26% for the sample with 0.10%, and the pressure drop of both samples was higher than that of the base fluid. Finally, the pressure drop and convective heat transfer coefficient of both samples were also compared with theoretical models. The pressure drop models showed excellent agreement with the experimental results, which is remarkable considering the transitional flow.
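As an aside, the quoted relation between mass flow rate and Reynolds number follows from Re = 4·ṁ/(π·D·μ) for pipe flow. The short Python sketch below illustrates this; the tube diameter and base-fluid viscosity are not given in the abstract, so the values used here are assumptions for illustration only.

from math import pi

def reynolds_from_mass_flow(m_dot, diameter, viscosity):
    """Reynolds number for flow in a circular tube: Re = 4*m_dot / (pi * D * mu)."""
    return 4.0 * m_dot / (pi * diameter * viscosity)

# Assumed values (not stated in the abstract): 10 mm tube, water/EG 70:30 viscosity ~1.7 mPa.s
D = 0.010
mu = 1.7e-3
for m_dot in (0.040, 0.070):  # 40 and 70 g/s expressed in kg/s
    print(f"{m_dot*1e3:.0f} g/s -> Re ~ {reynolds_from_mass_flow(m_dot, D, mu):.0f}")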

Relevance: 100.00%

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
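A minimal sketch of the component-extraction step described above is given here, assuming the measured floor acceleration records are available as NumPy arrays; the mass-weighting, interpolation between floors and force recovery steps are omitted, and the data below are placeholders only.

import numpy as np

def hankel_components(acc, n_rows, n_comp):
    """Stack delayed copies of one measured acceleration record into a Hankel
    matrix and return its leading temporal components via the SVD."""
    n = len(acc)
    H = np.column_stack([acc[i:i + n - n_rows + 1] for i in range(n_rows)]).T
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:n_comp]

# Placeholder records standing in for measured floor accelerations
rng = np.random.default_rng(0)
measured = [rng.standard_normal(2000) for _ in range(3)]

# Assemble the per-floor components row-wise, then take principal components of the stack
stack = np.vstack([hankel_components(a, n_rows=50, n_comp=4) for a in measured])
_, _, Vt = np.linalg.svd(stack, full_matrices=False)
principal_components = Vt[:4]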

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations with the recorded measurements. General explanations and implications, supported by drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
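For orientation, the hysteretic behaviour that a Bouc-Wen bearing element represents can be sketched as below; this is a generic Bouc-Wen formulation with made-up parameters, not the particular bearing element formulation or parameter set used in the OpenSees models described above.

import numpy as np

def bouc_wen_force(x, dt, k=1000.0, alpha=0.1, x_y=0.01, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force of a Bouc-Wen hysteretic element for a displacement
    history x sampled at interval dt (explicit Euler, illustration only)."""
    z = 0.0
    force = np.zeros_like(x)
    for i in range(1, len(x)):
        dx = (x[i] - x[i - 1]) / dt
        dz = (dx / x_y) * (A - abs(z) ** n * (gamma + beta * np.sign(dx * z)))
        z += dz * dt
        force[i] = alpha * k * x[i] + (1.0 - alpha) * k * x_y * z
    return force

# Made-up sinusoidal displacement cycles tracing a hysteresis loop
t = np.linspace(0.0, 10.0, 2001)
x = 0.05 * np.sin(2.0 * np.pi * 0.5 * t)
F = bouc_wen_force(x, dt=t[1] - t[0])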

Relevance: 100.00%

Abstract:

Visualization of infection and the associated host response has been challenging in adult vertebrates. Owing to their transparency, zebrafish larvae have been used to directly observe infection in vivo; however, such larvae have not yet developed a functional adaptive immune system. Cells involved in adaptive immunity mature later and have therefore been difficult to access optically in intact animals. Thus, the study of many aspects of vertebrate infection requires dissection of adult organs or ex vivo isolation of immune cells. Recently, CLARITY and PACT (passive clarity technique) methodologies have enabled clearing and direct visualization of dissected organs. Here, we show that these techniques can be applied to image host-pathogen interactions directly in whole animals. CLARITY and PACT-based clearing of whole adult zebrafish and Mycobacterium tuberculosis-infected mouse lungs enables imaging of mycobacterial granulomas deep within tissue to a depth of more than 1 mm. Using established transgenic lines, we were able to image normal and pathogenic structures and their surrounding host context at high resolution. We identified the three-dimensional organization of granuloma-associated angiogenesis, an important feature of mycobacterial infection, and characterized the induction of the cytokine tumor necrosis factor (TNF) within the granuloma using an established fluorescent reporter line. We observed heterogeneity in TNF induction within granuloma macrophages, consistent with an evolving view of the tuberculous granuloma as a non-uniform, heterogeneous structure. Broad application of this technique will enable new understanding of host-pathogen interactions in situ.

Relevance: 100.00%

Abstract:

Oil spills in marine environments often damage marine and coastal life if not remediated rapidly and efficiently. In spite of the strict enforcement of environmental legislation (i.e., the Oil Pollution Act of 1990) following the Exxon Valdez oil spill (March 1989; the second biggest oil spill in U.S. history), the Macondo well blowout disaster (April 2010) released 18 times more oil. Strikingly, the response methods used to contain and capture spilled oil after both accidents were nearly identical, even though more than two decades separate the Exxon Valdez (1989) and Macondo well (2010) accidents.

The goal of this dissertation was to investigate new advanced materials (mechanically strong aerogel composite blankets: Cabot® Thermal Wrap™ (TW) and Aspen Aerogels® Spaceloft® (SL)) and their application to oil capture and recovery, to overcome the current material limitations of oil spill response methods. First, the uptake of different solvents and oils was studied to answer the following question: do these aerogel composite blankets have competitive oil uptake compared to state-of-the-art oil sorbents (i.e., polyurethane foam, PUF)? In addition to their competitive mechanical strength (766, 380 and 92 kPa for Spaceloft, Thermal Wrap and PUF, respectively), our results showed that aerogel composites have three critical advantages over PUF: rapid (3-5 min) and high (more than twice PUF's uptake) oil uptake, reusability (over 10 cycles), and oil recoverability (up to 60%) via mechanical extraction. Chemical-specific sorption experiments showed that the dominant uptake mechanism of the aerogels is adsorption to the internal surface, with some contribution from absorption into the pore space.

Second, we investigated the potential environmental impacts (energy and chemical burdens) associated with the manufacture, use and disposal of SL aerogel and PUF to remove a given volume of oil (i.e., 1 m3) from a given location (i.e., the Macondo well). Different use (single and multiple use) and end-of-life (landfill, incineration, and waste-to-energy) scenarios were assessed, and our results demonstrated that multiple use and waste-to-energy choices minimize the energy and material use of SL aerogel. Nevertheless, using SL once and disposing of it via landfill still offers environmental and cost-saving benefits relative to PUF, so these benefits are preserved irrespective of the oil-spill-response operator's choices.

To inform future aerogel manufacture, we investigated different laboratory-scale aerogel fabrication technologies (rapid supercritical extraction (RSCE), CO2 supercritical extraction (CSCE), and alcohol supercritical extraction (ASCE)). Our results from an anticipatory LCA of laboratory-scale aerogel fabrication demonstrated that the RSCE method offers lower cumulative energy and ecotoxicity impacts than conventional aerogel fabrication methods (CSCE and ASCE).

The final objective of this study was to investigate different surface coating techniques to enhance oil recovery by modifying the existing aerogel surface chemistry to develop chemically responsive materials (switchable hydrophobicity in response to a CO2 stimulus). Our results showed that the surface coating methods studied (drop casting, dip coating, and physical vapor deposition) were only partially successful in modifying the surface with the CO2-switchable chemical (tributylpentanamidine), likely because of the heterogeneous fiber structure of the aerogel blankets. A possible solution to these non-uniform coatings would be to include the switchable chemical as a precursor during gel preparation so that it is chemically attached to the pores of the aerogel.

Taken as a whole, the implications of this work are that mechanical deployment and recovery of aerogel composite blankets is a viable oil spill response strategy that can be deployed today. This will ultimately enable better oil uptake without the uptake of water, potential reuse of the collected oil, reduced material and energy burdens compared to competitive sorbents (e.g., PUF), and reduced occupational exposure to oiled sorbents. In addition, sorbent blankets and booms could be deployed in coastal and open-ocean settings, respectively, which was previously impossible.

Relevance: 100.00%

Abstract:

The stable carbon isotopic signature of carbon dioxide (δ13CO2) measured in the air occlusions of polar ice provides important constraints on the carbon cycle in past climates. In order to exploit this information for earlier glacial periods, one must use deep, clathrated ice, in which the occluded air is preserved not in bubbles but in the form of air hydrates. Therefore, it must be established whether the original atmospheric δ13CO2 signature can be reconstructed from clathrated ice. We present a comparative study using coeval bubbly ice from Berkner Island and ice from the bubble-clathrate transformation zone (BCTZ) of EPICA Dome C (EDC). In the EDC samples the gas is partitioned between clathrates and remaining bubbles, as shown by erroneously low and scattered CO2 concentration values, presenting a worst-case test for δ13CO2 reconstructions. Even so, the reconstructed atmospheric δ13CO2 values show only slightly larger scatter. The difference relative to data from coeval bubbly ice is statistically significant; however, the 0.16 per mil magnitude of the offset is small for practical purposes, especially in light of the uncertainty from non-uniform corrections for diffusion-related fractionation that could contribute to the discrepancy. Our results are promising for palaeo-atmospheric studies of δ13CO2 using a ball-mill dry extraction technique below the BCTZ of ice cores, where gas is not subject to fractionation into microfractures and between clathrate and bubble reservoirs.

Relevance: 100.00%

Abstract:

Uncertainty in decision-making about patients' risk of re-admission arises from non-uniform data and a lack of knowledge about health-system variables. Knowledge of the impact of risk factors will support better clinical decision-making and help reduce the number of patients re-admitted to hospital. Traditional approaches are not capable of accounting for the uncertain nature of the risk of hospital re-admission, and further problems arise from the large amount of uncertain information. Patients can be at high, medium or low risk of re-admission, and these strata have ill-defined boundaries. We believe that our model, which adapts the fuzzy regression method, offers a novel approach to handling uncertain data and uncertain relationships between health-system variables and the risk of re-admission. Because the risk bands have ill-defined boundaries, this approach allows clinicians to target individuals at the boundaries; targeting such individuals and providing them with proper care may make it possible to move patients from the high-risk to the low-risk band. In developing this algorithm, we aimed to help potential users assess patients against various risk-score thresholds and avoid re-admission of high-risk patients through appropriate interventions. A model for predicting patients at high risk of re-admission will enable interventions to be targeted before costs have been incurred and health status has deteriorated. A risk-score cut-off level would flag patients for intervention and could result in net savings even where per-patient intervention costs are high. Preventing hospital re-admissions is important for patients, and our algorithm may also affect hospital income.
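To make the idea of ill-defined risk-band boundaries concrete, the sketch below assigns overlapping fuzzy memberships to a risk score; the band limits used here are hypothetical and are not the thresholds from the study.

import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises over [a, b], flat over [b, c], falls over [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Hypothetical risk-score bands with overlapping (ill-defined) boundaries
score = 65.0
low = trapezoid(score, -1.0, 0.0, 20.0, 40.0)
medium = trapezoid(score, 20.0, 40.0, 60.0, 80.0)
high = trapezoid(score, 60.0, 80.0, 100.0, 101.0)
print(f"score {score:.0f}: low={low:.2f}, medium={medium:.2f}, high={high:.2f}")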

Relevance: 100.00%

Abstract:

OBJECTIVES: Radiotherapy is planned to achieve the optimal physical dose distribution to the target tumour volume whilst minimising dose to the surrounding normal tissue. Recent in vitro experimental evidence has demonstrated an important role for intercellular communication in radiobiological responses following non-uniform exposures. This study aimed to model the impact of these effects in the context of techniques involving highly modulated radiation fields or spatially fractionated treatments such as GRID therapy.

METHODS: Using the small animal radiotherapy research platform (SARRP) as a key enabling technology to deliver precision image-guided radiotherapy, it is possible to achieve spatially modulated dose distributions that model typical clinical scenarios. In this work, we planned uniform and spatially fractionated dose distributions using multiple isocentres with beam sizes of 0.5-5 mm to obtain 50% volume coverage in a subcutaneous murine tumour model, and applied a model of cellular response that incorporates intercellular communication to assess the potential impact of signalling effects with different ranges.

RESULTS: Models of GRID treatment plans which incorporate intercellular signalling showed increased cell killing within the low dose region. This results in an increase in the Equivalent Uniform Dose (EUD) for GRID exposures compared to standard models, with some GRID exposures being predicted to be more effective than uniform delivery of the same physical dose.

CONCLUSIONS: This study demonstrates the potential impact of radiation induced signalling on tumour cell response for spatially fractionated therapies and identifies key experiments to validate this model and quantify these effects in vivo.

ADVANCES IN KNOWLEDGE: This study highlights the unique opportunities now possible using advanced preclinical techniques to develop a foundation for biophysical optimisation in radiotherapy treatment planning.
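For reference, the Equivalent Uniform Dose mentioned in the results can be computed from a voxel dose distribution under a standard (signalling-free) linear-quadratic model as sketched below; the dose values and radiosensitivity parameters are assumptions, and under such assumptions the EUD of a spatially fractionated plan falls below its mean dose, which is the baseline against which the signalling model is compared.

import numpy as np

def eud_lq(doses, alpha=0.3, beta=0.03):
    """Equivalent Uniform Dose: the uniform dose whose surviving fraction equals
    the mean surviving fraction of the non-uniform plan (LQ model),
    i.e. solve beta*d**2 + alpha*d + ln(mean SF) = 0 for d."""
    mean_sf = np.mean(np.exp(-alpha * doses - beta * doses ** 2))
    return (-alpha + np.sqrt(alpha ** 2 - 4.0 * beta * np.log(mean_sf))) / (2.0 * beta)

# Hypothetical GRID-like plan: half the volume at 10 Gy, half at 2 Gy
doses = np.array([10.0] * 50 + [2.0] * 50)
print(f"mean dose = {doses.mean():.1f} Gy, EUD = {eud_lq(doses):.2f} Gy")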

Relevance: 100.00%

Abstract:

Carbon films were energetically deposited onto copper and nickel foil using a filtered cathodic vacuum arc deposition system. Raman spectroscopy, scanning electron microscopy, transmission electron microscopy and UV-visible spectroscopy showed that graphene films of uniform thickness with up to 10 layers can be deposited onto copper foil at moderate temperatures of 750 °C. The resulting films, which can be prepared at high deposition rates, were comparable to graphene films grown at 1050 °C by chemical vapour deposition (CVD). This difference in growth temperature is attributed to dynamic annealing, which occurs as the film grows from the energetic carbon flux. In the case of nickel substrates, it was found that graphene films can also be prepared at moderate substrate temperatures; however, much higher carbon doses were required, indicating that the growth mode differs between substrates, as is also observed in CVD-grown graphene. The films deposited onto nickel were also highly non-uniform in thickness, indicating that the grain structure of the nickel substrate influenced the growth of the graphene layers.

Relevance: 100.00%

Abstract:

This work investigates optical filter arrays for high-quality spectroscopic applications in the visible (VIS) wavelength range. The optical filters, consisting of Fabry-Pérot (FP) filters for high-resolution miniaturized optical nanospectrometers, are based on two highly reflective dielectric mirrors with an intermediate polymer resonance cavity. Each filter transmits a narrow spectral band (called a filter line in this work) whose position depends on the height of the resonance cavity. The efficiency of such optical filters depends on the precise fabrication of highly selective multispectral fields of FP filters using low-cost, high-throughput methods. The fabrication of the multiple spectral filters over the entire visible range is achieved in a single imprint step on one substrate using 3D nanoimprint technology with very high vertical resolution. The key to this process integration is the fabrication of 3D nanoimprint stamps with the desired fields of filter cavities. The spectral sensitivity of these optical filters depends on the accuracy of the vertically varying cavities, which are fabricated by a large-area "soft" nanoimprint technology, UV substrate-conformal imprint lithography (UV-SCIL). The main problems of UV-based SCIL processes, such as a non-uniform residual layer thickness and polymer shrinkage, limit the potential application of this technology. It is very important that the residual layer be thin and uniform so that the critical dimensions of the functional 3D pattern can be controlled during the plasma etching used to remove the residual layer. In the case of the nanospectrometer, the cavity heights vary between neighbouring FP filters, so the volume of each individual filter changes, which leads to a variation of the residual layer thickness underneath each filter. The volumetric shrinkage caused by the polymerization process affects the size and dimensions of the imprinted polymer cavities. The behaviour of the large-area UV-SCIL process is improved by using a volume-balanced design, and the process conditions are optimized. The volume-balanced stamp design distributes 64 vertically varying filter cavities into units of 4 cavities that share a common average volume. Using the balanced volumes, uniform residual layer thicknesses (110 nm) are obtained across all filter heights. The polymer shrinkage is analysed quantitatively in the lateral and vertical directions of the FP filters. Shrinkage in the vertical direction has the largest influence on the spectral response of the filters and is reduced from 12% to 4% by changing the exposure time. FP filters fabricated with the volume-balanced stamp and the optimized imprint process show a high-quality spectral response with a linear dependence between the cavity heights and the spectral positions of the corresponding filter lines.
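As a rough guide to how the cavity height sets the filter line, the ideal Fabry-Pérot resonance condition m·λ = 2·n·L can be evaluated as below; the effective refractive index and cavity heights are assumed values, and mirror penetration depth and phase shifts are neglected.

def fp_resonances(cavity_height_nm, n_eff=1.5, wl_min=380.0, wl_max=780.0):
    """Resonance wavelengths (nm) of an ideal Fabry-Perot cavity at normal
    incidence: m * wavelength = 2 * n_eff * L, mirror phase shifts neglected."""
    opl = 2.0 * n_eff * cavity_height_nm
    peaks, m = [], 1
    while opl / m >= wl_min:
        if opl / m <= wl_max:
            peaks.append(opl / m)
        m += 1
    return peaks

# Assumed polymer cavity heights in nm; each height passes a different visible filter line
for height in (150.0, 180.0, 210.0):
    print(height, [round(w, 1) for w in fp_resonances(height)])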

Relevance: 100.00%

Abstract:

Food drying is widely used to preserve a product for longer periods. Because of their high water content, fruit and vegetables are easily spoiled by biochemical processes within the product, improper storage and inadequate transport facilities. To avoid such losses, direct drying is used, the oldest method of long-term preservation; however, this method is outdated and cannot meet today's challenges. In the present work, a new batch dryer with a diagonal airflow channel along the length of the drying chamber and without baffles was developed. Despite the undeniable benefit of using baffles, they increase construction costs and also increase the pressure drop, so more energy is consumed in the drying process. To achieve spatially uniform drying without baffles, the food trays were placed diagonally along the length of the dryer. The primary aim of the diagonal channel was to direct the incoming warm air uniformly over the entire product. The airflow was simulated with ANSYS Fluent within the ANSYS Workbench platform. Two different drying-chamber geometries, diagonal and non-diagonal, were modelled, and the diagonal airflow design yielded a uniform air distribution. A series of experiments was carried out to evaluate the design, with potato slices as the drying material. The statistical results show a good correlation coefficient for the airflow distribution (87.09%) between the average predicted and average measured flow velocities. To evaluate the effect of the uniform air distribution on quality changes, the colour of the product was measured contact-free and on-line along the entire length of the drying chamber. For this purpose, an imaging box consisting of a camera and illumination was developed. Spatial differences in this quality parameter were chosen as the criterion for assessing the uniformity of drying quality in the drying chamber. The energy consumption of a food batch dryer is decisive, so thermodynamic analyses of the dryer were performed. The energy efficiency of the system under the chosen drying conditions was calculated as 50.16%. The average electrical energy used to produce 1 kg of dried potatoes was calculated as less than 16.24 MJ/kg, and less than 4.78 MJ per kg of water evaporated, at a drying temperature of 65°C and a slice thickness of 5 mm. The energy and exergy analyses of the diagonal batch dryer were also compared with those of other batch dryers. Drying temperature, drying-air mass flow rate, dryer capacity and heater type are the important parameters for evaluating the energy use of batch dryers. The diagonal batch dryer is a useful and effective way to increase drying homogeneity. The design exposes the entire product in the drying chamber to uniform air conditions instead of routing the air from one tray to the next.
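The two specific-energy figures and the quoted efficiency are roughly consistent, as the short check below suggests; the latent heat of vaporisation at the drying temperature is an assumed textbook value, and the water mass is back-calculated from the quoted figures.

H_FG = 2.35  # latent heat of vaporisation of water near 65 degC, MJ/kg (assumed)

def dryer_metrics(energy_MJ, mass_dried_kg, mass_water_kg):
    """Specific energy per kg of dried product, per kg of water removed, and a
    simple first-law efficiency (latent heat of evaporated water / energy input)."""
    return (energy_MJ / mass_dried_kg,
            energy_MJ / mass_water_kg,
            H_FG * mass_water_kg / energy_MJ)

# Roughly 3.4 kg of water evaporated per kg of dried potato makes the quoted figures agree
per_product, per_water, efficiency = dryer_metrics(16.24, 1.0, 3.4)
print(f"{per_product:.2f} MJ/kg product, {per_water:.2f} MJ/kg water, efficiency {efficiency:.1%}")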

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

A wide variety of panoramic lenses is now available on the market, some of them with remarkable characteristics. Belonging to this latter category, Panomorph lenses are anamorphic panoramic lenses whose distortion profile is strongly non-uniform, which creates zones of increased magnification in the field of view. In a mobile-robotics context, these features can be exploited in stereoscopic systems for 3D reconstruction of objects of interest, providing both good knowledge of the environment and access to finer details thanks to the zones of increased magnification. However, because of their complexity, these lenses are difficult to calibrate and, to our knowledge, no study has really addressed this issue. The main objective of this thesis is the design, development and performance evaluation of Panomorph stereoscopic systems. Calibration was performed using an established technique based on planar targets and a widely used calibration toolbox. In addition, new mathematical techniques aimed at restoring rotational symmetry in the image ("circle") and at making the focal length uniform ("uniform circle") were developed to see whether they could make calibration easier. First, the field of view was divided into zones within which the instantaneous focal length varies little, and calibration was performed for each of them. Then, a general calibration of the systems was also carried out for the whole field of view simultaneously. The results showed that the zone-by-zone calibration technique does not produce a significant gain in the quality of 3D reconstructions of objects of interest compared with the general calibration. However, the study of this new approach made it possible to evaluate the performance of Panomorph stereoscopic systems over the entire field of view and to show that good-quality 3D reconstructions can be obtained in all zones. Moreover, the "circle" mathematical technique produced 3D reconstruction results generally equivalent to those obtained with the original coordinates. Since some calibration tools, unlike the one used in this work, allow only a single degree of freedom for the focal length, this technique could make it possible to calibrate Panomorph lenses with such tools. Finally, some conclusions could be drawn about the decisive factors influencing the quality of 3D reconstruction with Panomorph stereoscopic systems and about the characteristics to favour when choosing the lenses. The difficulty of calibrating Panomorph optics in the laboratory led to the development of a virtual calibration technique using optical design software and a calibration toolbox. This approach made it possible to simulate the impact of operating conditions on the calibration parameters and the effect of calibration conditions on reconstruction quality. Experiments of this kind are practically impossible to carry out in the laboratory but are of real interest to users.
Virtual calibration of a traditional lens also showed that the mean reprojection error, commonly used to assess the quality of a calibration, is not necessarily a reliable indicator of 3D reconstruction quality. Additional data are therefore needed to judge the quality of a calibration adequately.
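A minimal planar-target calibration sketch in the spirit of the established technique mentioned above is shown here, using OpenCV as a stand-in for the widely used toolbox; the checkerboard geometry and file names are placeholders, and a plain pinhole-plus-distortion model such as this is generally too restrictive for the strongly non-uniform Panomorph distortion, which is one reason the mean reprojection error alone can be a misleading quality indicator.

import cv2
import numpy as np

pattern = (9, 6)      # inner-corner grid of a checkerboard target (assumed)
square = 0.025        # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, img_size = [], [], None
for fname in ["view_%02d.png" % i for i in range(20)]:  # placeholder file names
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    img_size = img.shape[::-1]
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if obj_points:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
    print("mean reprojection error (px):", rms)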

Relevance: 100.00%

Abstract:

Single-walled carbon nanotubes (SWNTs) have been studied as a prominent class of high-performance electronic materials for next-generation electronics. Their geometry-dependent electronic structure, ballistic transport and low power dissipation due to quasi-one-dimensional transport, and their capability of carrying high current densities are some of the main reasons for the optimistic expectations on SWNTs. However, device applications of individual SWNTs have been hindered by uncontrolled variations in characteristics and the lack of scalable methods to integrate SWNTs into electronic devices. One relatively new direction in SWNT electronics, which avoids these issues, is using arrays of SWNTs, where the ensemble average may provide uniformity from device to device, and this new breed of electronic material can be integrated into electronic devices in a scalable fashion. This dissertation describes (1) methods for characterization of SWNT arrays, (2) how the electrical transport in these two-dimensional arrays depends on length scales and spatial anisotropy, (3) the interaction of aligned SWNTs with the underlying substrate, and (4) methods for scalable integration of SWNT arrays into electronic devices. The electrical characterization of SWNT arrays has been realized by polymer electrolyte-gated SWNT thin film transistors (TFTs). Polymer electrolyte gating addresses many technical difficulties inherent to electrical characterization by gating through oxide dielectrics. Having shown that polymer electrolyte gating can be successfully applied to SWNT arrays, we have studied the length-scaling dependence of electrical transport in SWNT arrays. Ultrathin films formed by sub-monolayer surface coverage of SWNT arrays are very interesting systems in terms of the physics of two-dimensional electronic transport. We have observed that they behave qualitatively differently from classical conducting films, which obey Ohm's law. The resistance of an ultrathin film of SWNT arrays is indeed a non-linear function of the film length across which the transport occurs. More interestingly, a transition between conducting and insulating states is observed at a critical surface coverage, called the percolation limit. The surface coverage of conducting SWNTs can be manipulated by turning on and off the semiconductors in the SWNT array, leading to the operating principle of SWNT TFTs. The percolation limit also depends on the length and the spatial orientation of the SWNTs. We have also observed that the percolation limit increases abruptly for aligned arrays of SWNTs, which are grown on single-crystal quartz substrates. In this dissertation, we also compare our experimental results with a two-dimensional stick network model, which gives a good qualitative picture of the electrical transport in SWNT arrays in terms of surface coverage, length scaling, and spatial orientation, and briefly discuss the validity of this model. However, the electronic properties of SWNT arrays are not determined by geometrical arguments alone. The contact resistances at the nanotube-nanotube and nanotube-electrode (bulk metal) interfaces, and interactions with local chemical groups and the underlying substrates, are among the other issues related to electronic transport in SWNT arrays. Different aspects of these factors have been studied in detail by many groups. I have also included a brief discussion of electron injection into semiconducting SWNTs by polymer dopants.
On the other hand, we have compared the substrate-SWNT interactions for isotropic (in two dimensions) arrays of SWNTs grown on Si/SiO2 substrates and horizontally (on-substrate) aligned arrays of SWNTs grown on single-crystal quartz substrates. The anisotropic quartz-SWNT interactions associated with the quartz lattice, which allow near-perfect horizontal alignment on the substrate along a particular crystallographic direction, are examined by Raman spectroscopy and shown to lead to uniaxial compressive strain in as-grown SWNTs on single-crystal quartz. This is the first experimental demonstration of the hard-to-achieve uniaxial compression of SWNTs. The temperature dependence of the Raman G-band spectra along the length of individual nanotubes reveals that the compressive strain is non-uniform and can be larger than 1% locally at room temperature. The effects of device fabrication steps on the non-uniform strain are also examined and the implications for electrical performance are discussed. Based on these findings, device performance and design are discussed in this dissertation. The channel-length dependence of device mobility and on/off ratio is reported for SWNT TFTs. The response of polymer electrolyte-gated SWNT TFTs has been measured at ~300 Hz, and a proof-of-concept logic inverter has been fabricated using polymer electrolyte-gated SWNT TFTs for macroelectronic applications. Finally, a chapter is dedicated to scalable device designs based on aligned arrays of SWNTs, including a design for SWNT memory devices.
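A Monte Carlo version of the two-dimensional stick network picture referred to above is sketched here; the stick length, densities and box size are arbitrary choices, and the fraction of percolating samples is expected to rise sharply near a critical density.

import numpy as np

def sticks_percolate(n_sticks, stick_len, rng):
    """Drop randomly placed, randomly oriented sticks in a unit square and test
    whether a connected cluster of intersecting sticks bridges the left and right edges."""
    centres = rng.uniform(0.0, 1.0, (n_sticks, 2))
    theta = rng.uniform(0.0, np.pi, n_sticks)
    half = 0.5 * stick_len * np.column_stack((np.cos(theta), np.sin(theta)))
    p1, p2 = centres - half, centres + half

    parent = list(range(n_sticks + 2))          # two virtual nodes: left and right edges
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    def segments_cross(a1, a2, b1, b2):
        def ccw(p, q, r):
            return (r[1] - p[1]) * (q[0] - p[0]) > (q[1] - p[1]) * (r[0] - p[0])
        return ccw(a1, b1, b2) != ccw(a2, b1, b2) and ccw(a1, a2, b1) != ccw(a1, a2, b2)

    LEFT, RIGHT = n_sticks, n_sticks + 1
    for i in range(n_sticks):
        if min(p1[i, 0], p2[i, 0]) <= 0.0:
            union(i, LEFT)
        if max(p1[i, 0], p2[i, 0]) >= 1.0:
            union(i, RIGHT)
        for j in range(i):
            if segments_cross(p1[i], p2[i], p1[j], p2[j]):
                union(i, j)
    return find(LEFT) == find(RIGHT)

# Sweep the stick density; the percolation probability jumps near a critical coverage
for n in (50, 100, 150, 200):
    hits = sum(sticks_percolate(n, 0.2, np.random.default_rng(seed)) for seed in range(20))
    print(n, hits / 20)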

Relevance: 100.00%

Abstract:

This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
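A compact illustration of a fixed-point (proximal-gradient) iteration of the kind underlying such completion algorithms is given below for the matrix case; it is a sketch only, with arbitrary threshold and step-size parameters, and the tensor algorithms in the thesis apply analogous shrinkage ideas to tensors rather than this simplified matrix form.

import numpy as np

def fpc_complete(m_obs, mask, tau=2.0, mu=1.0, n_iter=500):
    """Fixed-point iteration for low-rank matrix completion: a gradient step on
    the observed entries followed by singular-value soft-thresholding."""
    x = np.zeros_like(m_obs)
    for _ in range(n_iter):
        g = x + mu * mask * (m_obs - x)           # gradient step on observed entries
        u, s, vt = np.linalg.svd(g, full_matrices=False)
        s = np.maximum(s - tau * mu, 0.0)         # shrink singular values
        x = (u * s) @ vt
    return x

# Synthetic test: recover a rank-2 matrix from 40% of its entries
rng = np.random.default_rng(1)
a = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = (rng.uniform(size=a.shape) < 0.4).astype(float)
x_hat = fpc_complete(a * mask, mask)
print("relative error:", np.linalg.norm(x_hat - a) / np.linalg.norm(a))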

Relevance: 100.00%

Abstract:

We acquired coincident marine controlled-source electromagnetic (CSEM), high-resolution seismic reflection and ocean-bottom seismometer (OBS) data over an active pockmark at the crest of the southern part of the Vestnesa Ridge to estimate the fluid composition within an underlying fluid-migration chimney. Synthetic model studies suggest that resistivity obtained from CSEM data can resolve gas or hydrate saturations greater than 5% within the chimney. Acoustic chimneys imaged by the seismic reflection data beneath the pockmark and on the ridge flanks were found to be associated with high-resistivity anomalies (+2-4 Ωm). High-velocity anomalies (+0.3 km/s) within the gas hydrate stability zone (GHSZ) and low-velocity anomalies (-0.2 km/s) underlying the GHSZ were also observed. Joint analysis of the resistivity and velocity anomalies indicates pore saturations of up to 52% hydrate with 28% free gas, or up to 73% hydrate with 4% free gas, within the chimney beneath the pockmark, assuming non-uniform and uniform fluid distributions respectively. Similarly, we estimate up to 30% hydrate with 4% free gas, or 30% hydrate with 2% free gas, within the pore space of the GHSZ outside the central chimney, assuming non-uniform and uniform fluid distributions respectively. High levels of free-gas saturation in the top part of the chimney are consistent with episodic gas venting from the pockmark.
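As a rough illustration of how a resistivity anomaly maps to pore saturation, an Archie's-law estimate is sketched below; this ignores the velocity data used in the joint analysis above, and the porosity, pore-water resistivity and Archie exponents are assumed values, not those of the study.

def archie_saturation(rt, rw=0.3, phi=0.55, a=1.0, m=2.0, n=2.0):
    """Water saturation from formation resistivity via Archie's law,
    Rt = a * Rw / (phi**m * Sw**n); hydrate plus free gas fill the rest of the pore space."""
    sw = (a * rw / (phi ** m * rt)) ** (1.0 / n)
    return min(sw, 1.0)

# Assumed background (~1 ohm-m) versus anomalous chimney resistivities (+2-4 ohm-m)
for rt in (1.0, 3.0, 5.0):
    sw = archie_saturation(rt)
    print(f"Rt = {rt:.1f} ohm-m -> Sw = {sw:.2f}, hydrate + gas = {1.0 - sw:.2f}")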