10 results for level lifetime
in CaltechTHESIS
Abstract:
Noise measurements from 140°K to 350°K ambient temperature and between 10 kHz and 22 MHz performed on a double injection silicon diode as a function of operating point indicate that the high frequency noise depends linearly on the ambient temperature T and on the differential conductance g measured at the same frequency. The noise is represented quantitatively by ⟨i²⟩ = α·4kTgΔf. A new interpretation demands Nyquist noise with α ≡ 1 in these devices at high frequencies. This is in accord with an equivalent circuit derived for the double injection process. The effects of diode geometry on the static I-V characteristic as well as on the ac properties are illustrated. Investigation of the temperature dependence of double injection yields measurements of the temperature variation of the common high-level lifetime τ (τ ∝ T²), the hole conductivity mobility µ_p (µ_p ∝ T^(-2.18)), and the electron conductivity mobility µ_n (µ_n ∝ T^(-1.75)).
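As a quick numerical illustration, the Nyquist form ⟨i²⟩ = α·4kTgΔf can be evaluated directly; the operating-point values below are chosen for illustration and are not measurements from the thesis.

```python
import math

def mean_square_noise_current(T, g, bandwidth, alpha=1.0):
    """Mean-square noise current <i^2> = alpha * 4kTg*df (Nyquist form, alpha = 1)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return alpha * 4.0 * k * T * g * bandwidth

# Illustrative operating point: T = 300 K, differential conductance
# g = 1 mS, measurement bandwidth 1 MHz
i2 = mean_square_noise_current(300.0, 1e-3, 1e6)
i_rms = math.sqrt(i2)  # RMS noise current, a few nanoamperes here
```

Note the linear dependence on both T and g claimed in the abstract: doubling the temperature doubles ⟨i²⟩.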
Abstract:
Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: What is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
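The rebuilding ratio can be made concrete with a toy two-row array code with two parities — a hand-rolled sketch in the spirit of such constructions, not the thesis's general codes. One erased data column is rebuilt while reading only half of the surviving symbols, because the row-parity and "zigzag"-parity equations share a read:

```python
# Two data columns a, b (two symbols each) plus a row parity r and a
# shifted ("zigzag") parity z. XOR over integers plays the role of
# finite-field addition.

def encode(a, b):
    """a, b: data columns, each a pair of integer symbols."""
    r = (a[0] ^ b[0], a[1] ^ b[1])  # row parity column
    z = (a[0] ^ b[1], a[1] ^ b[0])  # zigzag parity column
    return r, z

def rebuild_a(b, r, z):
    """Rebuild erased column a reading only b[0], r[0], z[1]:
    3 of the 6 surviving symbols, i.e. a rebuilding ratio of 1/2."""
    a0 = b[0] ^ r[0]  # from the row parity equation r[0] = a[0] ^ b[0]
    a1 = b[0] ^ z[1]  # from the zigzag equation  z[1] = a[1] ^ b[0]
    return (a0, a1)

a, b = (5, 9), (3, 12)
r, z = encode(a, b)
assert rebuild_a(b, r, z) == a  # recovered from half the remaining data
```

The naive scheme would read all six survivors; the overlap of b[0] between the two parity equations is what brings the access down to one half.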
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only some of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle problem, which asks for a sequence of integers generating all possible partial permutations.
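A minimal sketch of the rank-modulation representation (the helper names are ours, not from the thesis): the stored symbol is the permutation induced by sorting cells by charge level, and a "push-to-the-top" operation reprograms one cell above the current maximum.

```python
def to_permutation(levels):
    """Induced permutation: cell indices sorted by descending charge level."""
    return tuple(sorted(range(len(levels)), key=lambda i: -levels[i]))

# Only the relative order matters, so uniform drift or leakage that
# preserves the order leaves the stored symbol unchanged:
assert to_permutation([0.9, 0.2, 0.5]) == (0, 2, 1)
assert to_permutation([0.8, 0.1, 0.4]) == (0, 2, 1)  # same symbol after drift

def push_to_top(levels, i):
    """'Push-to-the-top': raise cell i above the current maximum charge.
    No overshoot is possible, since any level above the maximum works."""
    levels = list(levels)
    levels[i] = max(levels) + 1.0
    return levels

lv = push_to_top([0.9, 0.2, 0.5], 1)
assert to_permutation(lv) == (1, 0, 2)
```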
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, starting from patient zero to now an estimated 650,000 to 900,000 Americans infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the early years, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.
Chapter Two describes the datasets that were used for the analyses. I collected the primary database from an outpatient HIV clinic; it includes data from 1984 until the present. The second database was the Multicenter AIDS Cohort Study (MACS) public dataset, which covers the period between 1984 and October 1992. Comparisons are made between the two datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono- or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS-defining illness. A high level of clinical failure, or progression to an endpoint, was found.
Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime costs for treating each HIV-infected patient with HAART are between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
Policymakers need to be aware that although HAART can delay disease progression, it is not a cure, and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. The most commonly used example of this conversion is digital PCR, where, by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
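The Poisson counting step can be sketched as follows (a generic digital PCR estimator with illustrative counts, not values from the dissertation): a compartment stays negative only if it received zero target molecules, so P(negative) = e^(−λ) per compartment and λ = −ln(1 − k/n).

```python
import math

def estimate_copies(positive, total):
    """Poisson (MLE) estimate of total input target molecules from digital counts.

    positive: number of reacted ("fired") compartments
    total:    total number of compartments
    """
    if positive >= total:
        raise ValueError("all compartments positive: concentration off-scale")
    lam = -math.log(1.0 - positive / total)  # mean copies per compartment
    return lam * total                        # estimated total input copies

# Illustrative counts: 300 of 1000 compartments fired
est = estimate_copies(300, 1000)  # noticeably more than 300, since some
                                  # positive compartments held >1 molecule
```

The correction matters most at high occupancy; at low occupancy λ ≈ k/n and the estimate approaches the raw positive count.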
However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy has been examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is the premise for such technology to be practically implemented in diagnosis beyond the laboratory. The two angles are further framed into a “fate” and “rate” determination scheme, where the influence of each parameter is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms have been used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.
This dissertation also contributes towards developing point-of-care (POC) tests for limited-resource settings. On one hand, it adds ease of access to the tests by incorporating mass-producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, as well as HCV genotyping.
Abstract:
This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.
We first explore the computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called Image Signature that detects the locations in the image that attract human eye gazes. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatial-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
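A rough sketch of the Image Signature idea: keep only the sign pattern of the image's DCT, invert, square, and blur. The toy input and parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    signature = np.sign(dctn(img, norm='ortho'))  # the "image signature"
    recon = idctn(signature, norm='ortho')        # energy concentrates on the sparse foreground
    return gaussian_filter(recon ** 2, sigma)     # smoothed squared reconstruction

# Small bright object on an empty background
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
sal = image_signature_saliency(img)  # saliency peaks near the object
```

The appeal of the method is its cost: one forward transform, a sign operation, one inverse transform, and a blur, which is why it runs extremely fast.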
In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By simultaneously presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation dataset. Second, we propose an algorithm for salient object segmentation. Based on our novel discoveries on the connections between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.
In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify the potential pitfalls of algorithm evaluation for the problem of boundary detection. Our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence the benchmarking results. To give further insight into the labeling process, we propose a model to characterize the principles of the human factors at work during labeling.
The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures, while proposing new directions to encourage future research.
Abstract:
Non-classical properties and quantum interference (QI) in two-photon excitation of a three-level atom (|1〉, |2〉, |3〉) in a ladder configuration, illuminated by multiple fields in non-classical (squeezed) and/or classical (coherent) states, are studied. Fundamentally new effects associated with quantum correlations in the squeezed fields and QI due to multiple excitation pathways have been observed. Theoretical studies and extrapolations of these findings have revealed possible applications which are far beyond any current capabilities, including ultrafast nonlinear mixing, ultrafast homodyne detection, and frequency metrology. The atom used throughout the experiments was cesium, which was magneto-optically trapped in a vapor cell to produce a Doppler-free sample. For the first part of the work the |1〉 → |2〉 → |3〉 transition (corresponding to the 6S1/2 F = 4 → 6P3/2 F' = 5 → 6D5/2 F" = 6 transition) was excited using the quantum-correlated signal (Ɛs) and idler (Ɛi) output fields of a subthreshold non-degenerate optical parametric oscillator, which was tuned so that the signal and idler fields were resonant with the |1〉 → |2〉 and |2〉 → |3〉 transitions, respectively. In contrast to excitation with classical fields, for which the excitation rate as a function of intensity always has an exponent greater than or equal to two, excitation with squeezed fields has been theoretically predicted to have an exponent that approaches unity for small enough intensities. This was verified experimentally by probing the exponent down to a slope of 1.3, demonstrating for the first time a purely non-classical effect associated with the interaction of squeezed fields and atoms. In the second part, excitation of the two-photon transition by three phase-coherent fields Ɛ1, Ɛ2 and Ɛ0, resonant with the dipole |1〉 → |2〉 and |2〉 → |3〉 and quadrupole |1〉 → |3〉 transitions, respectively, is studied. QI in the excited state population is observed due to two alternative excitation pathways.
This is equivalent to nonlinear mixing of the three excitation fields by the atom. Realizing that in the experiment the three fields span a frequency range of 25 THz, and extending this scheme to other energy triplets and atoms, leads to the discovery that ranges of up to hundreds of THz can be bridged in a single mixing step. Motivated by these results, a master equation model has been developed for the system and its properties have been extensively studied.
Abstract:
Part I
The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization, and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77° and also at 345°F, and are presented in graphical and tabular form.
Part II
Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data is presented for Reynolds Numbers between 1600 and 15,000 based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-heptane and n-octane are the evaporating fluids used in this investigation.
Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.
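The reported scaling Sh ∝ Re^0.55 fixes the ratio of mass transfer rates at any two Reynolds numbers. A small sketch (the exponent is from the text; the correlation's prefactor is not reported here, so only ratios are meaningful):

```python
# Ratio of Sherwood numbers implied by Sh ~ Re^0.55 at two Reynolds numbers.
def sherwood_ratio(re_high, re_low, exponent=0.55):
    return (re_high / re_low) ** exponent

# Across the Reynolds range studied (1600 to 15,000), mass transfer
# rates should rise by roughly a factor of 3.4 at fixed turbulence level.
ratio = sherwood_ratio(15000, 1600)
```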
Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.
Abstract:
Since the discovery in 1962 of laser action in semiconductor diodes made from GaAs, the study of spontaneous and stimulated light emission from semiconductors has become an exciting new field of semiconductor physics and quantum electronics combined. Included in the limited number of direct-gap semiconductor materials suitable for laser action are the members of the lead salt family, i.e., PbS, PbSe and PbTe. The material used for the experiments described herein is PbTe. The semiconductor PbTe is a narrow band-gap material (Eg = 0.19 electron volt at a temperature of 4.2°K). Therefore, the radiative recombination of electron-hole pairs between the conduction and valence bands produces photons whose wavelength is in the infrared (λ ≈ 6.5 microns in air).
The p-n junction diode is a convenient device in which the spontaneous and stimulated emission of light can be achieved via current flow in the forward-bias direction. Consequently, the experimental devices consist of a group of PbTe p-n junction diodes made from p-type single crystal bulk material. The p-n junctions were formed by an n-type vapor-phase diffusion perpendicular to the (100) plane, with a junction depth of approximately 75 microns. Opposite ends of the diode structure were cleaved to give parallel reflectors, thereby forming the Fabry-Perot cavity needed for a laser oscillator. Since the emission of light originates from the recombination of injected current carriers, the nature of the radiation depends on the injection mechanism.
The total intensity of the light emitted from the PbTe diodes was observed over a current range of three to four orders of magnitude. At the low current levels, the light intensity data were correlated with data obtained on the electrical characteristics of the diodes. In the low current region (region A), the light intensity, current-voltage and capacitance-voltage data are consistent with the model for photon-assisted tunneling. As the current is increased, the light intensity data indicate the occurrence of a change in the current injection mechanism from photon-assisted tunneling (region A) to thermionic emission (region B). With the further increase of the injection level, the photon-field due to light emission in the diode builds up to the point where stimulated emission (oscillation) occurs. The threshold current at which oscillation begins marks the beginning of a region (region C) where the total light intensity increases very rapidly with the increase in current. This rapid increase in intensity is accompanied by an increase in the number of narrow-band oscillating modes. As the photon density in the cavity continues to increase with the injection level, the intensity gradually enters a region of linear dependence on current (region D), i.e. a region of constant (differential) quantum efficiency.
Data obtained from measurements of the stimulated-mode light-intensity profile and the far-field diffraction pattern (both in the direction perpendicular to the junction plane) indicate that the active region of high gain (i.e. the region where a population inversion exists) extends to approximately a diffusion length on both sides of the junction. The data also indicate that the confinement of the oscillating modes within the diode cavity is due to a variation in the real part of the dielectric constant, caused by the gain in the medium. A value of τ ≈ 10⁻⁹ second for the minority-carrier recombination lifetime (at a diode temperature of 20.4°K) is obtained from the above measurements. This value for τ is consistent with other data obtained independently for PbTe crystals.
Data on the threshold current for stimulated emission (for a diode temperature of 20.4°K) as a function of the reciprocal cavity length were obtained. These data yield a value of J′_th = (400 ± 80) amp/cm² for the threshold current in the limit of an infinitely long diode cavity. A value of α = (30 ± 15) cm⁻¹ is obtained for the total (bulk) cavity loss constant, in general agreement with independent measurements of free-carrier absorption in PbTe. In addition, the data provide a value of η_s ≈ 10% for the internal spontaneous quantum efficiency. The above value for η_s yields values of τ_b ≈ τ ≈ 10⁻⁹ second and τ_s ≈ 10⁻⁸ second for the nonradiative and the spontaneous (radiative) lifetimes, respectively.
The external quantum efficiency (η_d) for stimulated emission from diode J-2 (at 20.4°K) was calculated by using the total light intensity vs. diode current data, plus accepted values for the material parameters of the mercury-doped germanium detector used for the measurements. The resulting value is η_d ≈ 10-20% for emission from both ends of the cavity. The corresponding radiative power output (at λ = 6.5 microns) is 120-240 milliwatts for a diode current of 6 amps.
Abstract:
I. PHOSPHORESCENCE AND THE TRUE LIFETIME OF TRIPLET STATES IN FLUID SOLUTIONS
Phosphorescence has been observed in a highly purified fluid solution of naphthalene in 3-methylpentane (3-MP). The phosphorescence lifetime of C10H8 in 3-MP at -45 °C was found to be 0.49 ± 0.07 sec, while that of C10D8 under identical conditions is 0.64 ± 0.07 sec. At this temperature 3-MP has the same viscosity (0.65 centipoise) as that of benzene at room temperature. It is believed that even these long lifetimes are dominated by impurity quenching mechanisms. Therefore it seems that the radiationless decay times of the lowest triplet states of simple aromatic hydrocarbons in liquid solutions are sensibly the same as those in the solid phase. A slight dependence of the phosphorescence lifetime on solvent viscosity was observed in the temperature region, -60° to -18°C. This has been attributed to the diffusion-controlled quenching of the triplet state by residual impurity, perhaps oxygen. Bimolecular depopulation of the triplet state was found to be of major importance over a large part of the triplet decay.
The lifetime of triplet C10H8 at room temperature was also measured in highly purified benzene by means of both phosphorescence and triplet-triplet absorption. The lifetime was estimated to be at least ten times shorter than that in 3-MP. This is believed to be due not only to residual impurities in the solvent but also to small amounts of impurities produced through unavoidable irradiation by the excitation source. In agreement with this idea, lifetime shortening caused by intense flashes of light is readily observed. This latter result suggests that experiments employing flash lamp techniques are not suitable for these kinds of studies.
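Lifetimes of this kind are typically extracted from the measured decay curve. A minimal sketch: for I(t) = I₀·exp(−t/τ), a least-squares fit of ln I(t) against t has slope −1/τ. The data below are synthetic, generated with the τ = 0.49 s value quoted above, not measurements from the thesis.

```python
import math
import random

def fit_lifetime(times, intensities):
    """Log-linear least-squares fit: slope of ln(I) vs t is -1/tau."""
    n = len(times)
    ys = [math.log(v) for v in intensities]
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic single-exponential decay with tau = 0.49 s and 1% noise
random.seed(0)
ts = [0.05 * k for k in range(40)]  # 0 to ~2 s
Is = [math.exp(-t / 0.49) * (1 + random.gauss(0, 0.01)) for t in ts]
tau = fit_lifetime(ts, Is)  # close to the true 0.49 s
```

A real analysis would also need to account for the bimolecular (triplet-triplet) depopulation noted above, which makes the decay non-exponential over much of its range.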
The theory of radiationless transitions, based on Robinson's treatment, is briefly outlined. A simple theoretical model derived from Fano's theory of autoionization gives an identical result.
II. WHY IS CONDENSED OXYGEN BLUE?
The blue color of oxygen is mostly derived from double transitions. This paper presents a theoretical calculation of the intensity of the double transition (a¹Δg)(a¹Δg) ← (X³Σg⁻)(X³Σg⁻), using a model based on a pair of oxygen molecules at a fixed separation of 3.81 Å. The intensity enhancement is assumed to derive from the mixings (a¹Δg)(a¹Δg) ~~~ (X³Σg⁻)(B³Σu⁻) and (a¹Δg)(¹Δu) ~~~ (X³Σg⁻)(X³Σg⁻). Matrix elements for these interactions are calculated using a π-electron approximation for the pair system. Good molecular wavefunctions are used for all but the perturbing (B³Σu⁻) state, which is approximated in terms of ground state orbitals. The largest contribution to the matrix elements arises from large intramolecular terms multiplied by intermolecular overlap integrals. The strength of interaction depends not only on the intermolecular separation of the two oxygen molecules, but also, as expected, on the relative orientation. Matrix elements are calculated for different orientations, and the angular dependence is fit to an analytical expression. The theory therefore predicts an intensity dependence not only on density but also on phase at constant density. Agreement between theory and available experimental results is satisfactory considering the nature of the approximations, and indicates the essential validity of the overall approach to this interesting intensity enhancement problem.
Abstract:
An air-filled ionization chamber has been constructed with a volume of 552 liters and a wall consisting of 12.7 mg/cm² of plastic wrapped over a rigid, lightweight aluminum frame. A calibration in absolute units, independent of previous Caltech ion chamber calibrations, was applied to a sealed Neher electrometer for use in this chamber. The new chamber was flown along with an older, argon-filled, balloon-type chamber in a C-135 aircraft from 1,000 to 40,000 feet altitude, and other measurements of sea level cosmic ray ionization were made, resulting in a value of 2.60 ± 0.03 ion pairs/(cm³ sec atm) at sea level. The calibrations of the two instruments were found to agree within 1 percent, and the airplane data were consistent with previous balloon measurements in the upper atmosphere. Ionization due to radon gas in the atmosphere was investigated. Absolute ionization data in the lower atmosphere have been compared with results of other observers, and discrepancies have been discussed.
Data from a polar orbiting ion chamber on the OGO-II and OGO-IV spacecraft have been analyzed. The problem of radioactivity produced on the spacecraft during passes through high fluxes of trapped protons has been investigated, and some corrections determined. Quiet-time ionization averages over the polar regions have been plotted as a function of altitude, and an analytical fit to the data gives a value of 10.4 ± 2.3 percent for the fractional part of the ionization at the top of the atmosphere due to splash albedo particles, although this result is shown to depend on an assumed angular distribution for the albedo particles. Comparisons with other albedo measurements are made. The data are shown to be consistent with balloon and interplanetary ionization measurements. The position of the cosmic ray knee is found to exhibit an altitude dependence, a North-South effect, and a small local time variation.