25 results for "Array techniques"

in CaltechTHESIS


Relevance: 30.00%

Abstract:

Over the past five years, the cost of solar panels has dropped drastically and, in concert, the number of installed modules has risen exponentially. However, solar electricity is still more than twice as expensive as electricity from a natural gas plant. Fortunately, wire array solar cells have emerged as a promising technology for further lowering the cost of solar electricity.

Si wire array solar cells are formed with a unique, low cost growth method and use 100 times less material than conventional Si cells. The wires can be embedded in a transparent, flexible polymer to create a free-standing array that can be rolled up for easy installation in a variety of form factors. Furthermore, by incorporating multijunctions into the wire morphology, higher efficiencies can be achieved while taking advantage of the unique defect relaxation pathways afforded by the 3D wire geometry.

The work in this thesis shepherded Si wires from undoped arrays to flexible, functional large area devices and laid the groundwork for multijunction wire array cells. Fabrication techniques were developed to turn intrinsic Si wires into full p-n junctions and the wires were passivated with a-Si:H and a-SiNx:H. Single wire devices yielded open circuit voltages of 600 mV and efficiencies of 9%. The arrays were then embedded in a polymer and contacted with a transparent, flexible, Ni nanoparticle and Ag nanowire top contact. The contact connected >99% of the wires in parallel and yielded flexible, substrate free solar cells featuring hundreds of thousands of wires.

Building on the success of the Si wire arrays, GaP was epitaxially grown on the material to create heterostructures for photoelectrochemistry. These cells were limited by low absorption in the GaP due to its indirect bandgap, and poor current collection due to a diffusion length of only 80 nm. However, GaAsP on SiGe offers a superior combination of materials, and wire architectures based on these semiconductors were investigated for multijunction arrays. These devices offer potential efficiencies of 34%, as demonstrated through an analytical model and optoelectronic simulations. SiGe and Ge wires were fabricated via chemical-vapor deposition and reactive ion etching. GaAs was then grown on these substrates at the National Renewable Energy Lab and yielded nanosecond-scale lifetime components, as required for achieving high-efficiency devices.

Relevance: 30.00%

Abstract:

With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error based Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which attempts to counter the detrimental effects of these variations, thereby improving both the performance and yield of chips post fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate it back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To effectively demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
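The sense-detect-actuate loop described above can be sketched in a few lines. This is a hypothetical illustration, not the dissertation's on-chip algorithm: `sensed_gain` is a made-up sensor model whose optimum bias code shifts with an assumed drift, and the healer is a simple greedy search over an actuator setting.

```python
def sensed_gain(bias, drift):
    # Hypothetical sensor model: performance peaks at an optimum
    # bias code that shifts with process/environment drift.
    return 10.0 - 0.05 * (bias - (32 + drift)) ** 2

def heal(drift, bias=32, steps=40):
    # Greedy on-chip search: nudge the actuator one code at a time,
    # keeping any move that improves the sensed metric.
    for _ in range(steps):
        best = bias
        for cand in (bias - 1, bias + 1):
            if sensed_gain(cand, drift) > sensed_gain(best, drift):
                best = cand
        if best == bias:
            break
        bias = best
    return bias

# A +7-code drift is detected and healed back to the new optimum.
print(heal(drift=7))  # -> 39
```

In hardware the same loop would read an on-chip sensor instead of a model and step a DAC-controlled actuator.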

We demonstrate a high-power mm-wave segmented power mixer array based transmitter architecture that is capable of generating high-speed and non-constant envelope modulations at higher efficiencies compared to existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully-integrated self-healing in the context of another mm-wave power amplifier, where measurements were performed across several chips, showing significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Relevance: 20.00%

Abstract:

This thesis covers four different problems in the understanding of vortex sheets, and these are presented in four chapters.

In Chapter 1, free streamline theory is used to determine the steady solutions of an array of identical, hollow or stagnant core vortices in an inviscid, incompressible fluid. Assuming the array is symmetric to rotation through π radians about an axis through any vortex centre, there are two solutions or no solutions depending on whether A^(1/2)/L is less than or greater than 0.38 where A is the area of the vortex and L is the separation distance. Stability analysis shows that the more deformed shape is unstable to infinitesimal symmetric disturbances which leave the centres of the vortices undisplaced.

Chapter 2 is concerned with the roll-up of vortex sheets in homogeneous fluid. The flow over conventional and ring wings is used to test the method of Fink and Soh (1974). Despite modifications which improve the accuracy of the method, unphysical results occur. A possible explanation for this is that small scales are important and an alternate method based on "Cloud-in-Cell" techniques is introduced. The results show small scale growth and amalgamation into larger structures.

The motion of a buoyant pair of line vortices of opposite circulation is considered in Chapter 3. The density difference between the fluid carried by the vortices and the fluid outside is considered small, so that the Boussinesq approximation may be used. A macroscopic model is developed which shows the formation of a detrainment filament and this is included as a modification to the model. The results agree well with the numerical solution as developed by Hill (1975b) and show that after an initial slowdown, the vortices begin to accelerate downwards.

Chapter 4 reproduces completely a paper that has already been published (Baker, Barker, Bofah and Saffman (1974)) on the effect of "vortex wandering" on the measurement of velocity profiles of the trailing vortices behind a wing.

Relevance: 20.00%

Abstract:

Light microscopy has been one of the most common tools in biological research because of its high resolution and the non-invasive nature of light. Due to its high sensitivity and specificity, fluorescence is one of the most important readout modes of light microscopy. This thesis presents two new fluorescence microscopic imaging techniques: fluorescence optofluidic microscopy and fluorescent Talbot microscopy. The designs of the two systems are fundamentally different from that of conventional microscopy, which makes compact and portable devices possible. The components of the devices are suitable for mass production, making the microscopic imaging systems more affordable for biological research and clinical diagnostics.

Fluorescence optofluidic microscopy (FOFM) is capable of imaging fluorescent samples in fluid media. The FOFM employs an array of Fresnel zone plates (FZP) to generate an array of focused light spots within a microfluidic channel. As a sample flows through the channel and across the array of focused light spots, a filter-coated CMOS sensor collects the fluorescence emissions. The collected data can then be processed to render a fluorescence microscopic image. The resolution, which is determined by the focused light spot size, is experimentally measured to be 0.65 μm.

Fluorescence Talbot microscopy (FTM) is a fluorescence chip-scale microscopy technique that enables large field-of-view (FOV) and high-resolution imaging. The FTM method utilizes the Talbot effect to project a grid of focused excitation light spots onto the sample. The sample is placed on a filter-coated CMOS sensor chip. The fluorescence emissions associated with each focal spot are collected by the sensor chip and are composed into a sparsely sampled fluorescence image. By raster scanning the Talbot focal spot grid across the sample and collecting a sequence of sparse images, a filled-in high-resolution fluorescence image can be reconstructed. In contrast to a conventional microscope, the collection efficiency, resolution, and FOV are not tied to each other in this technique. The FOV of FTM is directly scalable. Our FTM prototype has demonstrated a resolution of 1.2 μm and a collection efficiency equivalent to a conventional microscope objective with a 0.70 N.A. The FOV is 3.9 mm × 3.5 mm, which is 100 times larger than that of a 20X/0.40 N.A. conventional microscope objective. Due to its large FOV, high collection efficiency, compactness, and its potential for integration with other on-chip devices, FTM is suitable for diverse applications, such as point-of-care diagnostics, large-scale functional screens, and long-term automated imaging.
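The Talbot self-imaging underlying FTM has a simple characteristic distance; the sketch below uses the standard paraxial formula z_T = 2d²/λ with hypothetical numbers (the grid period and wavelength are assumptions, not values from the thesis).

```python
def talbot_length(period_m, wavelength_m):
    # Self-imaging (Talbot) distance of a periodic pattern:
    # z_T = 2 * d^2 / lambda (paraxial approximation).
    return 2.0 * period_m ** 2 / wavelength_m

# Hypothetical example: a 30 um illumination grid at 488 nm.
zt = talbot_length(30e-6, 488e-9)
print(round(zt * 1e3, 3), "mm")  # -> 3.689 mm
```

The quadratic dependence on the grid period is what lets a coarse grid project well-separated focal spots several millimeters from the mask.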

Relevance: 20.00%

Abstract:

The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; the two intersect in gravitational-wave physics.

Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.

The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby making them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.

The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.

In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.

Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.

The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes, a type of object predicted by general relativity whose properties depend strongly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black holes' strong-field region.

The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.

Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting around a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra over certain ranges of (l, m) (the angular quantum numbers) as a/M → 1. With the tools prepared in Chapters 9 and 10, we also obtain in Chapter 11 an analytical approximation to the scalar Green function in Kerr spacetime.
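The eikonal (WKB) result quoted above can be made concrete in the non-spinning limit, where both the light-ring orbital frequency and the Lyapunov exponent of the photon orbit equal 1/(3√3 M). The standard Schwarzschild approximation (quoted as background, not the thesis's full Kerr expressions; G = c = 1) reads:

```latex
\omega_{ln} \;\approx\; \frac{1}{3\sqrt{3}\,M}
\left[\left(l+\tfrac{1}{2}\right) \;-\; i\left(n+\tfrac{1}{2}\right)\right],
\qquad L \equiv l + \tfrac{1}{2},
```

with relative error O(1/L²): the real part is set by the orbital frequency of the circular photon orbit, and the damping by its instability rate.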

Relevance: 20.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for the orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, a MMSE estimator for frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
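The difference co-array idea above can be illustrated directly: the virtual pilots live at the pairwise differences of the physical pilot positions, so M pilots can yield on the order of M² distinct lags. The pilot placement below is a hypothetical example, not the placement used in the thesis.

```python
def difference_coarray(pilot_positions):
    # Virtual pilot lags: all pairwise differences of the physical
    # pilot tone indices (the "difference co-array").
    return sorted({p - q for p in pilot_positions for q in pilot_positions})

# Hypothetical example: 4 pilots on a sparse (minimum-redundancy) grid
# produce a contiguous set of 13 = M^2 - M + 1 lags.
pilots = [0, 1, 4, 6]
lags = difference_coarray(pilots)
print(len(lags), lags)  # 13 lags from only 4 physical pilots
```

Subspace estimators such as MUSIC or ESPRIT then operate on correlations indexed by these lags rather than by the physical pilots themselves.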

Relevance: 20.00%

Abstract:

High-background applications, such as climate monitoring, biology, and security, demand a large dynamic range. Under such conditions ultra-high sensitivity is not required. The resonator bolometer is a novel detector which is well-suited for these conditions. This device takes advantage of the high-density frequency multiplexing capabilities of superconducting microresonators while allowing for the use of high-Tc superconductors in fabrication, which enables a modest (1-4 K) operating temperature and larger dynamic range than is possible with conventional microresonators. The moderate operating temperature and intrinsic multiplexability of this device reduce cost and allow for large pixel counts, making the resonator bolometer especially suitable for the aforementioned applications. A single pixel consists of a superconducting microresonator whose light-absorbing area is placed on a thermally isolated island. Here we present experimental results and theoretical calculations for a prototype resonator bolometer array. Intrinsic device noise and noise equivalent power (NEP) under both dark and illuminated conditions are presented. Under dark conditions the device sensitivity is limited by the thermal fluctuation noise of the bolometer legs. Under the experimental illuminated conditions the device was photon noise limited.
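For orientation, the two sensitivity limits mentioned (thermal fluctuations across the legs under dark conditions, photon noise under illumination) correspond to the standard bolometer expressions below; these are textbook forms quoted as background, not results from the thesis:

```latex
\mathrm{NEP}_{G} = \sqrt{4 k_B T^2 G}, \qquad
\mathrm{NEP}_{\gamma} = \sqrt{2 h \nu P_{\mathrm{opt}}},
```

where G is the thermal conductance of the legs, T the island temperature, ν the optical frequency, and P_opt the absorbed optical power (photon-bunching corrections omitted).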

Relevance: 20.00%

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: What is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with arbitrary number of parities and optimal rebuilding.
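The flavor of fractional-access rebuilding can be shown with a deliberately tiny toy (an illustration of the access-ratio idea only, not the thesis's MDS construction with optimal rebuilding): one parity is a plain row XOR and a second is taken over a permuted copy of a column, so a single erased column is recovered by touching only half of the surviving elements.

```python
# Two data columns a, b of two bits each, a row parity R and a
# "zigzag" parity Z built from a permuted copy of b.
a, b = [1, 0], [1, 1]
R = [a[i] ^ b[i] for i in range(2)]          # row parity
Z = [a[i] ^ b[i ^ 1] for i in range(2)]      # zigzag parity

# Rebuild an erased column a by touching only half of the survivors:
# b[0], R[0] and Z[1] -- 3 of the 6 remaining elements.
rebuilt = [R[0] ^ b[0], Z[1] ^ b[0]]
print(rebuilt == a)  # True
```

The thesis's Part I generalizes this idea: with carefully chosen permutations the 1/2 access ratio is achieved while keeping the code MDS with an arbitrary number of parities.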

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only a subset of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called universal cycle, which is a sequence of integers generating all possible partial permutations.
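The readout step of rank modulation is just a sort: the stored symbol is the permutation induced by the charge levels, so absolute levels need not be quantized. A minimal sketch with hypothetical charge values:

```python
def rank_permutation(charges):
    # Rank modulation readout: information is the relative order of
    # the cell charge levels, so recover the permutation that sorts
    # them (highest charge first).
    return sorted(range(len(charges)), key=lambda i: -charges[i])

# Four cells with analog charge levels; only their ranking matters,
# so uniform drift in the absolute levels leaves the data intact.
print(rank_permutation([0.7, 0.2, 0.9, 0.4]))  # [2, 0, 3, 1]
print(rank_permutation([0.6, 0.1, 0.8, 0.3]))  # same permutation
```

Bounded and partial rank modulation restrict which cells enter this sort, shrinking the decoder at the cost of some capacity.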

Relevance: 20.00%

Abstract:

In this thesis, dry chemical modification methods involving UV/ozone, oxygen plasma, and vacuum annealing treatments are explored to precisely control the wettability of CNT arrays. By varying the exposure time of these treatments the surface concentration of oxygenated groups adsorbed on the CNT arrays can be controlled. CNT arrays with a very low concentration of oxygenated groups exhibit superhydrophobic behavior. In addition to their extremely high static contact angle, they cannot be dispersed in DI water and their impedance in aqueous electrolytes is extremely high. These arrays have an extreme water repellency capability such that a water droplet will bounce off of their surface upon impact and a thin film of air is formed on their surface as they are immersed in a deep pool of water. In contrast, CNT arrays with a very high surface concentration of oxygenated functional groups exhibit extremely hydrophilic behavior. In addition to their extremely low static contact angle, they can be dispersed easily in DI water and their impedance in aqueous electrolytes is tremendously low. Since the bulk structure of the CNT arrays is preserved during the UV/ozone, oxygen plasma, and vacuum annealing treatments, all CNT arrays can be repeatedly switched between superhydrophilic and superhydrophobic, as long as their O/C ratio is kept below 18%.

The effect of oxidation using UV/ozone and oxygen plasma treatments is highly reversible as long as the O/C ratio of the CNT arrays is kept below 18%. At O/C ratios higher than 18%, the effect of oxidation is no longer reversible. This irreversible oxidation is caused by irreversible changes to the CNT atomic structure during the oxidation process. During the oxidation process, CNT arrays undergo three different processes. For CNT arrays with O/C ratios lower than 40%, the oxidation process results in the functionalization of CNT outer walls by oxygenated groups. Although this functionalization process introduces defects, vacancies and micropore openings, the graphitic structure of the CNT is still largely intact. For CNT arrays with O/C ratios between 40% and 45%, the oxidation process results in the etching of CNT outer walls. This etching process introduces large scale defects and holes that can be clearly seen under TEM at high magnification. Most of these holes are found to be several layers deep and, in some cases, a large portion of the CNT side walls are cut open. For CNT arrays with O/C ratios higher than 45%, the oxidation process results in the exfoliation of the CNT walls and amorphization of the remaining CNT structure. This amorphization process can be inferred from the disappearance of the C-C sp2 peak in the XPS spectra associated with the pi-bond network.

The impact behavior of water droplets impinging on superhydrophobic CNT arrays in a low viscosity regime is investigated for the first time. Here, the experimental data are presented in the form of several important impact behavior characteristics including critical Weber number, volume ratio, restitution coefficient, and maximum spreading diameter. As observed experimentally, three different impact regimes are identified while another impact regime is proposed. These regimes are partitioned by three critical Weber numbers, two of which are experimentally observed. The volume ratio between the primary and the secondary droplets is found to decrease with the increase of Weber number in all impact regimes other than the first one. In the first impact regime, the volume ratio is found to be independent of Weber number since the droplet remains intact during and subsequent to the impingement. Experimental data show that the coefficient of restitution decreases with the increase of Weber number in all impact regimes. The rate of decrease of the coefficient of restitution in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Experimental data also show that the maximum spreading factor increases with the increase of Weber number in all impact regimes. The rate of increase of the maximum spreading factor in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Phenomenological approximations and interpretations of the experimental data, as well as brief comparisons to the previously proposed scaling laws, are shown here.
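The regimes above are organized by the Weber number, the ratio of inertial to surface-tension forces. A quick calculator, with droplet parameters that are hypothetical rather than taken from the experiments:

```python
def weber_number(density, velocity, diameter, surface_tension):
    # We = rho * v^2 * D / sigma: inertia vs. surface tension,
    # the parameter that partitions the droplet-impact regimes.
    return density * velocity ** 2 * diameter / surface_tension

# Hypothetical water droplet: 2 mm diameter impacting at 1 m/s.
we = weber_number(998.0, 1.0, 2e-3, 0.0728)
print(round(we, 1))  # -> 27.4
```

Because We scales with v², sweeping the impact velocity is the natural way to traverse the critical Weber numbers that separate the regimes.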

Dry oxidation methods are used for the first time to characterize the influence of oxidation on the capacitive behavior of CNT array EDLCs. The capacitive behavior of CNT array EDLCs can be tailored by varying their oxygen content, represented by the O/C ratio. The specific capacitance of these CNT arrays increases with oxygen content in both KOH and Et4NBF4/PC electrolytes; as a result, their gravimetric energy density increases with oxygen content, while their gravimetric power density decreases. The optimally oxidized CNT arrays withstand more than 35,000 charge/discharge cycles in Et4NBF4/PC at a current density of 5 A/g while losing only 10% of their original capacitance.
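As a rough illustration of how the gravimetric figures of merit above relate, here is a sketch using the standard galvanostatic charge/discharge relations; the electrode mass, voltage window, and discharge time are invented for illustration and are not thesis data:

```python
# Hedged sketch: textbook gravimetric figures of merit for an EDLC from a
# galvanostatic charge/discharge measurement. All numbers are illustrative.

def specific_capacitance_f_g(current_a, discharge_s, mass_g, window_v):
    """C = I * dt / (m * dV), in F/g."""
    return current_a * discharge_s / (mass_g * window_v)

def energy_density_wh_kg(c_f_g, window_v):
    """E = 1/2 * C * V^2 in J/g, converted to Wh/kg (1 Wh = 3600 J)."""
    return 0.5 * c_f_g * window_v**2 * 1000.0 / 3600.0

def power_density_w_kg(e_wh_kg, discharge_s):
    """Average power delivered over the discharge."""
    return e_wh_kg * 3600.0 / discharge_s

# Illustrative: a 1 mg electrode discharged at 5 A/g (5 mA) over 2.5 V in 50 s
c = specific_capacitance_f_g(5e-3, 50.0, 1e-3, 2.5)  # -> 100 F/g
e = energy_density_wh_kg(c, 2.5)
p = power_density_w_kg(e, 50.0)
print(f"C = {c:.0f} F/g, E = {e:.1f} Wh/kg, P = {p:.0f} W/kg")
```

The trade-off quoted in the abstract follows directly from these relations: raising capacitance raises stored energy, but if oxygenated groups also raise series resistance, the usable discharge slows and power density falls.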

Resumo:

Crustal structure in Southern California is investigated using travel times from over 200 stations and thousands of local earthquakes. The data are divided into two sets of first arrivals representing a two-layer crust. The Pg arrivals have paths that refract at depths near 10 km and the Pn arrivals refract along the Moho discontinuity. These data are used to find lateral and azimuthal refractor velocity variations and to determine refractor topography.
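A minimal sketch of the two-layer refraction picture behind the Pg/Pn split follows; the layer velocities and Moho depth are illustrative values, not the thesis results:

```python
import math

# Hedged sketch: head-wave travel times for a simple two-layer crust,
# illustrating why different phases are the first arrival at different
# ranges. Velocities and depths are illustrative, not thesis values.

def head_wave_time(x_km, v_upper, v_lower, depth_km):
    """Refraction time t = x/v_lower + 2h * sqrt(1/v_upper^2 - 1/v_lower^2)."""
    intercept = 2.0 * depth_km * math.sqrt(1.0 / v_upper**2 - 1.0 / v_lower**2)
    return x_km / v_lower + intercept

def direct_time(x_km, v_upper):
    """Travel time of the direct wave in the upper layer."""
    return x_km / v_upper

# Illustrative model: 6.2 km/s crust over an 8.0 km/s mantle, 30 km Moho
for x in (50.0, 150.0, 300.0):
    t_direct = direct_time(x, 6.2)
    t_pn = head_wave_time(x, 6.2, 8.0, 30.0)
    first = "Pn" if t_pn < t_direct else "crustal phase"
    print(f"x = {x:5.0f} km: direct {t_direct:5.1f} s, Pn {t_pn:5.1f} s -> first: {first}")
```

Beyond the crossover distance (about 170 km in this toy model) Pn overtakes the crustal phase, which is why separating the two first-arrival data sets by range is a natural first step.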

In Chapter 2 the Pn raypaths are modeled using linear inverse theory. This enables statistical verification that static delays, lateral slowness variations and anisotropy are all significant parameters. However, because of the inherent size limitations of inverse theory, the full array data set could not be processed and the possible resolution was limited. The tomographic backprojection algorithm developed for Chapters 3 and 4 avoids these size problems. This algorithm allows us to process the data sequentially and to iteratively refine the solution. The variance and resolution for tomography are determined empirically using synthetic structures.
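A toy version of the sequential, iteratively refined backprojection idea might look like the following; the ray-path matrix, step size, and synthetic model are assumptions for illustration, not the thesis algorithm itself:

```python
import numpy as np

# Hedged sketch of iterative tomographic backprojection (SIRT-style):
# travel-time residuals are distributed back along each ray in proportion
# to the path length in each cell. G[i, j] = length of ray i in cell j is
# assumed given; this is a generic illustration, not the thesis code.

def backproject(G, t_obs, n_iter=100, step=0.1):
    """Iteratively refine cell slowness perturbations so G @ s fits t_obs."""
    n_rays, n_cells = G.shape
    s = np.zeros(n_cells)                       # slowness perturbation per cell
    col_len = G.sum(axis=0) + 1e-12             # total ray length sampling each cell
    for _ in range(n_iter):
        residual = t_obs - G @ s                # unexplained travel time per ray
        s += step * (G.T @ residual) / col_len  # spread residuals back along rays
    return s

# Tiny synthetic test: 2 cells, 3 rays, known slowness anomaly
G = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
s_true = np.array([0.01, -0.02])
t = G @ s_true
s_est = backproject(G, t, n_iter=200, step=0.1)
print(np.round(s_est, 4))
```

Because each iteration touches the data one ray at a time, this style of scheme sidesteps the matrix-size limits of a full inverse-theory solution, which is the motivation cited above.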

The Pg results spectacularly image the San Andreas Fault, the Garlock Fault and the San Jacinto Fault. The Mojave has slower velocities near 6.0 km/s while the Peninsular Ranges have higher velocities of over 6.5 km/s. The San Jacinto block has velocities only slightly above the Mojave velocities. It may have overthrust Mojave rocks. Surprisingly, the Transverse Ranges are not apparent at Pg depths. The batholiths in these mountains are possibly only surficial.

Pn velocities are fast in the Mojave, slow in the Southern California Peninsular Ranges and slow north of the Garlock Fault. Pn anisotropy of 2% with a NWW fast direction exists in Southern California. A region of thin crust (22 km) centers around the Colorado River, where the crust has undergone basin and range type extension. Station delays reveal the Ventura and Los Angeles Basins but not the Salton Trough, where high velocity rocks underlie the sediments. The Transverse Ranges have a root in their eastern half but not in their western half. The Southern Coast Ranges also have a thickened crust but the Peninsular Ranges have no major root.

Resumo:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth, and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.
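The thin-lens reasoning can be illustrated with a one-line travel-time calculation; the lens thickness and velocity contrast below are illustrative assumptions, not the thesis values:

```python
# Hedged sketch: the travel-time advance produced by a high-velocity
# "thin lens" embedded in the mantle. Thickness and contrast are
# illustrative assumptions, not the thesis values.

def lens_delay_s(thickness_km, v_ref_km_s, v_lens_km_s):
    """Net delay (negative = early arrival) for a vertical ray through the lens."""
    return thickness_km * (1.0 / v_lens_km_s - 1.0 / v_ref_km_s)

# A 100 km thick lens that is 3% fast relative to an 8.0 km/s mantle
dt = lens_delay_s(100.0, 8.0, 8.0 * 1.03)
print(f"net delay: {dt:.2f} s")  # arrivals through the lens come in early
```

Fitting observed net delays then trades lens thickness against velocity contrast, while the pattern of mislocation vectors across subarrays is what constrains the lens depth.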

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporating relative amplitude information through synthetic seismogram modeling. Applying a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that this model, GCA, is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
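A minimal sketch of the tau transform underlying the inversion, applied to a synthetic travel-time branch rather than the actual Gulf of California data:

```python
import numpy as np

# Hedged sketch of the tau transform central to tau-method travel-time
# inversion: tau(p) = T - p * Delta converts the possibly multi-valued
# T(Delta) curve into the single-valued, monotonic tau(p). The branch
# below is synthetic, not the thesis data set.

def tau_branch(delta_deg, t_s):
    """Along one travel-time branch: p = dT/dDelta, tau = T - p * Delta."""
    p = np.gradient(t_s, delta_deg)  # ray parameter (s/deg) as the local slope
    return p, t_s - p * delta_deg

# Synthetic branch T = a*Delta - b*Delta^2, for which tau = b*Delta^2
delta = np.linspace(10.0, 40.0, 31)
t = 13.0 * delta - 0.05 * delta**2
p, tau = tau_branch(delta, t)
print(p[0], p[-1])      # p decreases along the branch
print(tau[0], tau[-1])  # tau increases monotonically with decreasing p
```

The monotonicity of tau(p) is what makes the inversion for velocity-depth structure well posed, after which amplitudes refine the gradients between discontinuities.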

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in the rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Resumo:

The study of the strength of a material is relevant to a variety of applications including automobile collisions, armor penetration and inertial confinement fusion. Although dynamic behavior of materials at high pressures and strain-rates has been studied extensively using plate impact experiments, the results provide measurements in one direction only. Material behavior that is dependent on strength is unaccounted for. The research in this study proposes two novel configurations to mitigate this problem.

The first configuration introduced is the oblique wedge experiment, which comprises a driver material, an angled target of interest, and a backing material used to measure in-situ velocities. Upon impact, a shock wave is generated in the driver material. As the shock encounters the angled target, it is reflected back into the driver and transmitted into the target. Because of the obliquity of the incident wave, a transverse wave is generated that subjects the target to shear while it is compressed by the initial longitudinal shock, without slip at the interface. Using numerical simulations, this study shows that a variety of oblique wedge configurations can be used to study the shear response of materials, and that this approach can be extended to strength measurement as well. Experiments were performed on an oblique wedge setup with a copper impactor, a polymethylmethacrylate driver, an aluminum 6061-T6 target, and a lithium fluoride window. Particle velocities were measured using laser interferometry and agree well with the simulations.
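As a first-order illustration of wave partitioning at the driver/target interface, a linear (acoustic) impedance-matching estimate can be sketched; handbook-style densities and sound speeds stand in here for the full Hugoniot treatment a real analysis would use:

```python
# Hedged sketch: acoustic impedance matching at a driver/target interface,
# a first-order estimate of the stress transmitted into and reflected from
# the target. Densities and sound speeds are illustrative handbook-style
# values; a full analysis would use the materials' Hugoniots.

def interface_coefficients(rho1, c1, rho2, c2):
    """Stress transmission/reflection coefficients from impedances Z = rho * c."""
    z1, z2 = rho1 * c1, rho2 * c2
    transmit = 2.0 * z2 / (z1 + z2)
    reflect = (z2 - z1) / (z1 + z2)
    return transmit, reflect

# PMMA driver (~1190 kg/m^3, ~2700 m/s) into aluminum (~2700 kg/m^3, ~6400 m/s)
T, R = interface_coefficients(1190.0, 2700.0, 2700.0, 6400.0)
print(f"transmitted stress: {T:.2f} x incident, reflected: {R:+.2f} x incident")
```

Going from the low-impedance PMMA into the higher-impedance aluminum, the transmitted stress exceeds the incident stress and a compressive wave is reflected back into the driver, consistent with the reflection described above.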

The second novel configuration is the y-cut quartz sandwich design, which uses the anisotropic properties of y-cut quartz to generate a shear wave that is transmitted into a thin sample. By using an anvil material to back the thin sample, particle velocities measured at the rear surface of the backing plate can be implemented to calculate the shear stress in the material and subsequently the strength. Numerical simulations were conducted to show that this configuration has the ability to measure the strength for a variety of materials.
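The inference of shear stress from the measured rear-surface velocity can be sketched with the elastic transverse-impedance relation; the anvil properties and velocity below are illustrative assumptions, not measured values:

```python
# Hedged sketch: inferring shear stress in the sample from the transverse
# particle velocity measured at the rear free surface of an elastic
# backing plate, using tau = 1/2 * rho * c_s * u_fs (the factor 1/2
# accounts for free-surface velocity doubling). Material constants are
# illustrative assumptions.

def shear_stress_pa(rho_kg_m3, shear_speed_m_s, free_surface_velocity_m_s):
    """tau ~ 0.5 * rho * c_s * u_fs for an elastic backing plate."""
    return 0.5 * rho_kg_m3 * shear_speed_m_s * free_surface_velocity_m_s

# e.g. a stiff anvil (rho ~ 15,000 kg/m^3, c_s ~ 4,000 m/s), 20 m/s measured
tau = shear_stress_pa(15000.0, 4000.0, 20.0)
print(f"inferred shear stress: {tau / 1e6:.0f} MPa")
```

A high-impedance anvil keeps the backing elastic so this linear relation holds, which is one reason an anvil material is chosen to back the thin sample.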

Resumo:

Optical microscopy is an essential tool in biological science and one of the gold standards for medical examinations. Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective and portable platforms for biomedical research and healthcare. This thesis reports on implementations of bright-field and fluorescence chip-scale microscopes for a variety of biological imaging applications. The term “chip-scale microscopy” refers to lensless imaging techniques realized in the form of mass-producible semiconductor devices, which transforms the fundamental design of optical microscopes.

Our strategy for chip-scale microscopy involves low-cost complementary metal-oxide-semiconductor (CMOS) image sensors, computational image processing, and micro-fabricated structural components. First, the sub-pixel resolving optofluidic microscope (SROFM) is presented, which combines microfluidics and pixel super-resolution image reconstruction to perform high-throughput imaging of fluidic samples, such as blood cells. We discuss the design parameters and construction of the device, as well as the resulting images and the resolution of the device, which reaches 0.66 µm at its highest acuity. The potential application of SROFM to clinical diagnosis of malaria in resource-limited settings is discussed.

Next, the implementations of ePetri, a self-imaging Petri dish platform with microscopy resolution, are presented. Here, we simply place the sample of interest on the surface of the image sensor and capture the direct shadow images under illumination. By taking advantage of the inherent motion of the microorganisms, we achieve high-resolution (~1 µm) imaging and long-term culture of motile microorganisms over an ultra-large field-of-view (5.7 mm × 4.4 mm) in a specialized ePetri platform. We apply pixel super-resolution reconstruction to a set of low-resolution shadow images of the microorganisms as they move across the sensing area of an image sensor chip and render an improved-resolution image. We perform a longitudinal study of Euglena gracilis cultured in an ePetri platform and image-based analysis of the motion and morphology of the cells. The ePetri device for imaging non-motile cells is also demonstrated, using the sweeping illumination of a light-emitting diode (LED) matrix for pixel super-resolution reconstruction of sub-pixel-shifted shadow images. Using this prototype device, we demonstrate the detection of waterborne parasites for the effective diagnosis of enteric parasite infection in resource-limited settings.
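The pixel super-resolution step can be illustrated with a generic shift-and-add sketch (not the reconstruction code used in the thesis), assuming the sub-pixel shifts between frames are already known:

```python
import numpy as np

# Hedged sketch of the shift-and-add flavor of pixel super-resolution:
# low-resolution frames of the same scene, each displaced by a known
# sub-pixel shift, are accumulated onto a grid `factor` times finer.
# This is a generic illustration, not the thesis reconstruction code.

def shift_and_add(frames, shifts, factor):
    """Accumulate sub-pixel-shifted LR frames onto a `factor`x finer grid."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor  # sub-pixel shift -> HR grid offset
        ox = int(round(dx * factor)) % factor
        hi[oy::factor, ox::factor] += frame
        weight[oy::factor, ox::factor] += 1.0
    weight[weight == 0] = 1.0                  # leave unsampled HR pixels at zero
    return hi / weight

# Demo: four half-pixel-shifted decimations of an 8x8 scene recover it exactly
scene = np.arange(64.0).reshape(8, 8)
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [scene[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
recovered = shift_and_add(frames, shifts, factor=2)
print(np.allclose(recovered, scene))  # True
```

In the ePetri setting, the required sub-pixel displacements come either from the microorganisms' own motion or from sweeping the LED illumination, after which the accumulation step is conceptually the same.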

Then, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope, which uses ambient illumination as its light source and does not require a dedicated light source. The method is also based on image reconstruction with the sweeping illumination technique, where a sequence of images is captured while the user manually tilts the device around any ambient light source, such as the sun or a lamp. Image acquisition and reconstruction are performed on the device using a custom-built Android application, creating a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.

Finally, we report on the implementation of a fluorescence chip-scale microscope, based on a silo-filter structure fabricated on the pixel array of a CMOS image sensor. The extruded pixel design with metal walls between neighboring pixels successfully guides fluorescence emission through the thick absorptive filter to the photodiode layer of a pixel. Our silo-filter CMOS image sensor prototype achieves 13-µm resolution for fluorescence imaging over a wide field-of-view (4.8 mm × 4.4 mm). Here, we demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.

Resumo:

This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.

Resumo:

The concept of a carbon nanotube microneedle array is explored in this thesis from multiple perspectives including microneedle fabrication, physical aspects of transdermal delivery, and in vivo transdermal drug delivery experiments. Starting with standard techniques in carbon nanotube (CNT) fabrication, including catalyst patterning and chemical vapor deposition, vertically-aligned carbon nanotubes are utilized as a scaffold to define the shape of the hollow microneedle. Passive, scalable techniques based on capillary action and unique photolithographic methods are utilized to produce a CNT-polymer composite microneedle. Specific examples of CNT-polyimide and CNT-epoxy microneedles are investigated. Further analysis of the transport properties of polymer resins reveals general requirements for applying arbitrary polymers to the fabrication process.

The bottom-up fabrication approach embodied by vertically-aligned carbon nanotubes allows for more direct construction of complex high-aspect ratio features than standard top-down fabrication approaches, making microneedles an ideal application for CNTs. However, current vertically-aligned CNT fabrication techniques only allow for the production of extruded geometries with a constant cross-sectional area, such as cylinders. To rectify this limitation, isotropic oxygen etching is introduced as a novel fabrication technique to create true 3D CNT geometry. Oxygen etching is utilized to create a conical geometry from a cylindrical CNT structure as well as create complex shape transformations in other CNT geometries.

CNT-polymer composite microneedles are anchored onto a common polymer base less than 50 µm thick, which allows the microneedles to be incorporated into multiple drug delivery platforms, including modified hypodermic syringes and silicone skin patches. Cylindrical microneedles are fabricated with a 100 µm outer diameter, a height of 200-250 µm, and a central cavity, or lumen, 30 µm in diameter to facilitate liquid drug flow. In vitro delivery experiments in swine skin demonstrate the ability of the microneedles to successfully penetrate the skin and deliver aqueous solutions.
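A back-of-the-envelope Hagen-Poiseuille estimate suggests the flow a single lumen of these dimensions could pass; the driving pressure and fluid viscosity below are assumptions for illustration:

```python
import math

# Hedged sketch: Hagen-Poiseuille estimate of liquid flow through a single
# microneedle lumen. The lumen radius and length follow the dimensions
# quoted above; the driving pressure and viscosity are assumed values.

def poiseuille_flow_m3_s(radius_m, length_m, pressure_pa, viscosity_pa_s):
    """Q = pi * r^4 * dP / (8 * mu * L) for laminar flow in a tube."""
    return math.pi * radius_m**4 * pressure_pa / (8.0 * viscosity_pa_s * length_m)

# 30 um lumen diameter, ~225 um needle height, water-like drug, 10 kPa drive
q = poiseuille_flow_m3_s(radius_m=15e-6, length_m=225e-6,
                         pressure_pa=1e4, viscosity_pa_s=1e-3)
print(f"per-needle flow: {q * 1e9 * 60:.0f} uL/min")  # ~53 uL/min
```

The r⁴ dependence explains why the lumen diameter dominates delivery rate: halving it to 15 µm would cut the per-needle flow sixteen-fold at the same pressure.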

An in vivo study was performed to assess the ability of the CNT-polymer microneedles to deliver drugs transdermally. CNT-polymer microneedles are attached to a hand-actuated silicone skin patch that holds a liquid drug reservoir. Fentanyl, a potent analgesic, was administered to New Zealand White rabbits through three routes of delivery: topical patch, CNT-polymer microneedles, and subcutaneous hypodermic injection. Results demonstrate that the CNT-polymer microneedles have an onset of action similar to that of the topical patch. CNT-polymer microneedles were also validated as a painless delivery approach compared to hypodermic injection. Comparative analysis with contemporary microneedle designs demonstrates that the delivery achieved through CNT-polymer microneedles is comparable to current hollow microneedle architectures. The inherent advantage of a bottom-up fabrication approach, alongside delivery performance similar to contemporary microneedle designs, demonstrates that the CNT-polymer composite microneedle is a viable architecture in the emerging field of painless transdermal delivery.