919 results for Very long path length


Relevance: 30.00%

Abstract:

It has long been recognized that many direct parallel tridiagonal solvers are efficient only for solving a single tridiagonal system of large size, and they become inefficient when naively used in a three-dimensional ADI solver. In order to improve the parallel efficiency of an ADI solver using a direct parallel solver, we implement the single parallel partition (SPP) algorithm in conjunction with message vectorization, which aggregates several communication messages into one to reduce the communication costs. The measured performances show that the longest allowable message vector length (MVL) is not necessarily the best choice. To understand this observation and optimize the performance, we propose an improved model that takes the cache effect into consideration. The optimal MVL for achieving the best performance is shown to depend on the number of processors and the grid size. A similar dependence of the optimal MVL is also found for the popular block pipelined method.
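To make the message-vectorization idea concrete, here is a minimal sketch in Python with mpi4py (the function and array names are hypothetical, and this illustrates the general technique rather than the SPP implementation itself): instead of exchanging the boundary value of each tridiagonal line separately, the boundary values of MVL lines are packed into one message per neighbor.

```python
# Hypothetical sketch of message vectorization: the right-edge values of
# `mvl` tridiagonal lines are aggregated into one MPI message per neighbor,
# trading message count against message size (and cache working set).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def exchange_halos(lines, mvl):
    """lines: (nlines, nlocal) array, same nlines on every rank;
    exchange right-edge values with neighbors in blocks of mvl."""
    right = rank + 1 if rank + 1 < size else None
    left = rank - 1 if rank >= 1 else None
    for start in range(0, lines.shape[0], mvl):
        block = np.ascontiguousarray(lines[start:start + mvl, -1])
        req = comm.Isend(block, dest=right, tag=start) if right is not None else None
        if left is not None:
            halo = np.empty_like(block)
            comm.Recv(halo, source=left, tag=start)
            lines[start:start + mvl, 0] = halo  # unpack the aggregated halo
        if req is not None:
            req.Wait()
```

Choosing mvl equal to nlines gives the longest allowable MVL (fewest messages); the cache-aware model described above predicts that a shorter MVL, whose blocks fit in cache, can be faster.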

Relevance: 30.00%

Abstract:

The findings are presented of an assessment of the gillnet fishery in Kainji Lake, Nigeria from 1969 to the present, on the basis of data sets from commercial and experimental gillnet fishing, with the purpose of detecting trends in some key fishery monitoring indicators. During this period, there has been an increase in the number of small-meshed nets in the fishery, resulting in a shift in the mode to lower mesh sizes; consequently, the average mesh size declined gradually in the fishery. This trend is found to be directly correlated with the decline in the CPUE and mean weight of the fish species. It is argued that the observed trend in CPUE and mean weight is forcing the fishermen to switch effort to gears such as traps, which have very small meshes and can indiscriminately take all sizes of fish. It is shown that the catch composition by weight of Citharinus citharus, Lates niloticus and tilapias declined in the gillnet fishery in the late 1970s and early 1980s. Recent data, from 1994 to 1996, however, indicate that C. citharus is recovering, but with declining mean weight. This suggests that the exploitation pattern is shifting to the smaller fish through the use of small-meshed nets. In general, however, there have not been drastic changes in species biodiversity in the lake as a result of predatory effects and ecosystem overfishing, as has happened in other great African lakes. The species composition since lake formation has continued to be dominated by fewer than 20 species. The potential yield for the lake has been estimated at 32,166 tonnes (excluding clupeids) and the required optimum fishing effort at 1,814 fishing canoes. In view of the relative stability of the species diversity in the lake and the current fish production level, it is proposed here that this MSY be adopted for all species. This would be achieved with the current effort level in the lake, assuming that the efficiency of the fishermen and their gears does not improve. It should be reviewed after 10 or more years of catch and effort data collection. (PDF contains 65 pages)

Relevance: 30.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10⁵ g. The technique includes a bar-coded projectile, sabot-projectile separation, and detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under ~10⁵ g deceleration. Based on the experimental results, including the penetration depth-time history, damage to recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with initial projectile energy per unit cross-section area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm²) + 16.39.

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm²) + 349.0 × m/(πR²), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity-time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage, in which the projectile velocity change is small due to the very small contact area between the projectile and target materials; (2) a steady penetration stage, in which the projectile velocity continues to decrease smoothly; (3) a penetration stop stage, in which the projectile deceleration jumps up when velocities are close to a critical value of ~35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 × 10⁴, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be very comparable to the shock wave pressure.

7. A similarity of the penetration process is found to be described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted from the model are in good agreement with the experimental data.
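A minimal numerical sketch of this type of model is given below (Python; the material constants are illustrative assumptions rather than the thesis's calibrated values, with the 0.14 GPa Hugoniot elastic limit and the ~35 m/s stop velocity taken from points 9 and 5 of this summary). It treats penetration as a sequence of impact events, estimates the nose pressure from a linear shock Hugoniot, and applies the stop criteria named above:

```python
# Sketch of a rigid-body penetration model: pressure on the projectile nose
# from the target Hugoniot p = rho0*(c0 + s*u)*u, lateral friction ignored,
# penetration stops when p < HEL or the velocity reaches ~35 m/s.
import numpy as np

rho0 = 2100.0        # target density, kg/m^3 (illustrative)
c0, s = 2500.0, 1.5  # linear Us-up Hugoniot parameters (illustrative)
HEL = 0.14e9         # Hugoniot elastic limit, Pa (from point 9)
v_crit = 35.0        # critical stop velocity, m/s (from point 5)

def penetrate(m, R, v0, dt=1e-7):
    """Depth-time history for a projectile of mass m (kg), radius R (m),
    impact velocity v0 (m/s); returns a list of (t, depth, velocity)."""
    A = np.pi * R**2
    t, depth, v = 0.0, 0.0, v0
    history = []
    while v > v_crit:
        p = rho0 * (c0 + s * v) * v   # impact-event pressure from the Hugoniot
        if p < HEL:                   # necessary condition for penetration
            break
        v -= (p * A / m) * dt         # rigid projectile decelerated by p
        depth += v * dt
        t += dt
        history.append((t, depth, v))
    return history

print(penetrate(m=0.5, R=0.02, v0=300.0)[-1])  # (tmax, Pmax, v at stop)
```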

9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σf0 = exp(0.0905 (log(ε̇/ε̇0))^1.14) in the strain rate range of 10⁻⁷/s to 10³/s (σf0 and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, whose wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, the wave velocity dropping below the bulk wave velocity in this stage; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm³. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge⁴⁺ with O²⁻ in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in the two media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the shared structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^(-3.35), close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.

Relevance: 30.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
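As a toy illustration of this heuristic (a sketch only: the access model, in which each node is read independently with probability p, and all numbers are assumptions for the example rather than the thesis's exact setting), the following computes the recovery probability of a symmetric allocation:

```python
# Budget T is spread evenly over m nodes (each stores T/m); a collector
# reads each node independently with probability p and recovers the unit-
# size data object when the accessed amounts sum to at least 1.
from math import ceil, comb

def success_prob(T, m, p):
    need = ceil(m / T)  # accessed nodes needed so that accessed data >= 1
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(need, m + 1))

print(success_prob(T=3, m=10, p=0.6))  # maximal spreading (coding needed)
print(success_prob(T=3, m=3, p=0.6))   # minimal spreading (replication)
```

With these illustrative numbers the maximal spread comes out slightly ahead, consistent with the large-budget side of the heuristic.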

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
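For the i.i.d. erasure model, a simplified sanity check is easy to write down (a sketch assuming an idealized MDS-like code, which is not the time-invariant intrasession construction itself): a message spread over T packets before its deadline decodes when at least k of them survive.

```python
# Decoding probability when a message is carried by T packets, each erased
# independently with probability e, and any k received packets suffice.
from math import comb

def decode_prob(T, k, e):
    return sum(comb(T, j) * (1 - e)**j * e**(T - j) for j in range(k, T + 1))

# Larger messages need a larger k for the same T, so they decode less often:
print(decode_prob(T=8, k=5, e=0.05), decode_prob(T=8, k=7, e=0.05))
```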

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
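A minimal sketch of the backpressure forwarding decision (hypothetical data structures; the policy in the thesis couples this with caching decisions) forwards a virtual interest packet on the link with the largest positive backlog differential:

```python
# Forward a VIP for `obj` to the neighbor with the largest positive
# VIP-count differential; hold it when no differential is positive.
def backpressure_next_hop(local_vips, neighbor_vips, obj):
    best, best_diff = None, 0.0
    for nbr, counts in neighbor_vips.items():
        diff = local_vips.get(obj, 0) - counts.get(obj, 0)
        if diff > best_diff:
            best, best_diff = nbr, diff
    return best  # None means hold the VIP for this slot

# Example: local backlog 7; neighbor A holds 3, neighbor B holds 5 -> "A".
print(backpressure_next_hop({"/video/seg1": 7},
                            {"A": {"/video/seg1": 3}, "B": {"/video/seg1": 5}},
                            "/video/seg1"))
```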

Relevance: 30.00%

Abstract:

We investigate high-order harmonic emission and isolated attosecond pulse (IAP) generation in atoms driven by a two-colour multi-cycle laser field consisting of an 800 nm pulse and an infrared laser pulse at an arbitrary wavelength. With moderate laser intensity, an IAP of ~220 as can be generated in helium atoms by using two-colour laser pulses of 35 fs/800 nm and 46 fs/1150 nm. The discussion, based on the three-step semiclassical model and time-frequency analysis, shows a clear picture of high-order harmonic generation in the waveform-controlled laser field, which benefits the generation of XUV IAPs and attosecond electron pulses. When the propagation effect is included, the duration of the IAP can be shorter than 200 as when the driving laser pulses are focused 1 mm before the gas medium with a length between 1.5 mm and 2 mm.

Relevance: 30.00%

Abstract:

An analytical fluid model for resonance absorption during oblique incidence of femtosecond laser pulses on a plasma with a small density scale length [k0L ∈ (0.1, 10)] is proposed. The physics of resonance absorption is analyzed more clearly as we separate the electric field into an electromagnetic part and an electrostatic part. It is found that the characteristics of the physical quantities (fractional absorption, optimum angle, etc.) in a small-scale-length plasma are quite different from the predictions of classical theory. Absorption processes are generally dependent on the density scale length. For shorter scale lengths or higher laser intensities, vacuum heating tends to be dominant. It is shown that the electrons being pulled out and then returned to the plasma at the interface layer by the wave field can lead to a phenomenon like wave breaking. This can lead to heating of the plasma at the expense of the wave energy. It is found that the optimum angle is independent of the laser intensity while the absorption rate increases with the laser intensity, and the absorption rate can reach as high as 25%. (c) 2006 American Institute of Physics.

Relevance: 30.00%

Abstract:

The rate of electron transport between distant sites was studied. The rate depends crucially on the chemical details of the donor, acceptor, and surrounding medium. These reactions involve electron tunneling through the intervening medium and are, therefore, profoundly influenced by the geometry and energetics of the intervening molecules. The dependence of rate on distance was considered for several rigid donor-acceptor "linkers" of experimental importance. Interpretation of existing experiments and predictions for new experiments were made.

The electronic and nuclear motion in molecules is correlated. A Born-Oppenheimer separation is usually employed in quantum chemistry to separate this motion. Long distance electron transfer rate calculations require the total donor wave function when the electron is very far from its binding nuclei. The Born-Oppenheimer wave functions at large electronic distance are shown to be qualitatively wrong. A model which correctly treats the coupling was proposed. The distance and energy dependence of the electron transfer rate was determined for such a model.

Relevance: 30.00%

Abstract:

Many applications in cosmology and astrophysics at millimeter wavelengths, including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift, in our local universe, and in our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.

Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator whose inductance comprises the geometric inductance and the kinetic inductance of the superconducting inductor. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the resonator a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
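A small numeric sketch of this readout relation (illustrative component values, not a real device design): the resonance sits at f_r = 1/(2π√((L_geo + L_k)C)), so a pair-breaking perturbation of the kinetic inductance appears as a fractional frequency shift.

```python
# LC resonance of a KID and its fractional frequency shift when absorbed
# optical power perturbs the kinetic inductance L_k (values illustrative).
import numpy as np

def resonant_freq(L_geo, L_k, C):
    return 1.0 / (2 * np.pi * np.sqrt((L_geo + L_k) * C))

L_geo, L_k, C = 8e-9, 4e-9, 2e-12           # henries and farads (toy values)
f0 = resonant_freq(L_geo, L_k, C)
f1 = resonant_freq(L_geo, L_k * 1.001, C)   # 0.1% kinetic-inductance change
print((f1 - f0) / f0)  # fractional shift; responsivity divides this by power
```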

However, coupling these types of optical reception elements to KIDs is a challenge because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the inductor of the KID is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design mitigates direct absorption, uses hydrogenated amorphous silicon, and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) that covers six spectral bands from 90 to 400 GHz for SZE studies.

Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, there is still no unified microscopic theory that can quantitatively model the properties of the TLS noise. In this thesis we derive the noise power spectral density due to the coupling of TLS with the phonon bath based on an existing model, and compare the theoretical predictions about power and temperature dependences with experimental data. We discuss the limitations of such a model and propose directions for future study.

Relevance: 30.00%

Abstract:

Part I

Chapter 1.....A physicochemical study of the DNA molecules from the three bacteriophages, N1, N5, and N6, which infect the bacterium M. lysodeikticus, has been made. The molecular weights, as measured by both electron microscopy and sedimentation velocity, are 23 × 10⁶ for N5 DNA and 31 × 10⁶ for N1 and N6 DNA's. All three DNA's are capable of thermally reversible cyclization. N1 and N6 DNA's have identical or very similar base sequences as judged by membrane filter hybridization and by electron microscope heteroduplex studies. They have identical or similar cohesive ends. These results are in accord with the close biological relation between N1 and N6 phages. N5 DNA is not closely related to N1 or N6 DNA. The denaturation Tm of all three DNA's is the same and corresponds to a GC content of 70%. However, the buoyant densities in CsCl of N1 and N6 DNA's are lower than expected, corresponding to predicted GC contents of 64 and 67%. The buoyant densities in Cs2SO4 are also somewhat anomalous. The buoyant density anomalies are probably due to the presence of odd bases. However, direct base composition analysis of N1 DNA by anion exchange chromatography confirms a GC content of 70%, and, in the elution system used, no peaks due to odd bases are present.

Chapter 2.....A covalently closed circular DNA form has been observed as an intracellular form during both productive and abortive infection processes in M. lysodeikticus. This species has been isolated by the method of CsCl-ethidium bromide centrifugation and examined with an electron microscope.

Chapter 3.....A minute circular DNA has been discovered as a homogeneous population in M. lysodeikticus. Its length and molecular weight as determined by electron microscopy are 0.445 μ and 0.88 × 10⁶ daltons, respectively. There is about one minicircle per bacterium.

Chapter 4.....Several strains of E. coli 15 harbor a prophage. Viral growth can be induced by exposing the host to mitomycin C or to uv irradiation. The coliphage 15 particles from E. coli 15 and E. coli 15 T- appear as normal phage with head and tail structure; the particles from E. coli 15 TAU are tailless. The complete particles exert a colicinogenic activity on E. coli 15 and 15 T-; the tailless particles do not. No host for a productive viral infection has been found, and the phage may be defective. The properties of the DNA of the virus have been studied, mainly by electron microscopy. After induction but before lysis, a closed circular DNA with a contour length of about 11.9 μ is found in the bacterium; the mature phage DNA is a linear duplex, 7.5% longer than the intracellular circular form. This suggests the hypothesis that the mature phage DNA is terminally repetitious and circularly permuted. The hypothesis was confirmed by observing that denaturation and renaturation of the mature phage DNA produce circular duplexes with two single-stranded branches corresponding to the terminal repetition. The contour length of the mature phage DNA was measured relative to φX RFII DNA and λ DNA; the calculated molecular weight is 27 × 10⁶. The length of the single-stranded terminal repetition was compared to the length of φX 174 DNA under conditions where single-stranded DNA is seen in an extended form in electron micrographs. The length of the terminal repetition is found to be 7.4% of the length of the nonrepetitious part of the coliphage 15 DNA. The number of base pairs in the terminal repetition is variable in different molecules, with a fractional standard deviation of 0.18 of the average number in the terminal repetition. A new phenomenon, termed "branch migration," has been discovered in renatured circular molecules; it results in forked branches, with two emerging single strands, at the position of the terminal repetition. The distribution of branch separations between the two terminal repetitions in the population of renatured circular molecules was studied. The observed distribution suggests that there is an excluded volume effect in the renaturation of a population of circularly permuted molecules, such that strands with close beginning points preferentially renature with each other. This selective renaturation and the phenomenon of branch migration both affect the distribution of branch separations; the observed distribution does not contradict the hypothesis of a random distribution of beginning points around the chromosome.

Chapter 5.....Some physicochemical studies of the minicircular DNA species in E. coli 15 (0.670 μ, 1.47 × 10⁶ daltons) have been made. Electron microscopic observations showed multimeric forms of the minicircle, which amount to 5% of the total DNA species, and also showed presumably replicating forms of the minicircle. A renaturation kinetic study showed that the minicircle is a unique DNA species in its size and base sequence. A study of minicircle replication has been made under conditions in which host DNA synthesis is synchronized. Despite the experimental uncertainties involved, it seems that minicircle replication is random and that the number of minicircles increases continuously throughout a generation of the host, regardless of host DNA synchronization.

Part II

The flow dichroism of dilute DNA solutions (A260 ≈ 0.1) has been studied in a Couette-type apparatus with the outer cylinder rotating and with the light path parallel to the cylinder axis. Shear gradients in the range of 5-160 sec⁻¹ were studied. The DNA samples were whole, "half," and "quarter" molecules of T4 bacteriophage DNA, and linear and circular λb2b5c DNA. For the linear molecules, the fractional flow dichroism is a linear function of molecular weight. The dichroism for linear λ DNA is about 1.8 times that of the circular molecule. For a given DNA, the dichroism is an approximately linear function of the shear gradient, but with a slight upward curvature at low values of G and some trend toward saturation at larger values of G. The fractional dichroism increases as the supporting electrolyte concentration decreases.

Relevance: 30.00%

Abstract:

Since the discovery in 1962 of laser action in semiconductor diodes made from GaAs, the study of spontaneous and stimulated light emission from semiconductors has become an exciting new field combining semiconductor physics and quantum electronics. Included in the limited number of direct-gap semiconductor materials suitable for laser action are the members of the lead salt family, i.e., PbS, PbSe and PbTe. The material used for the experiments described herein is PbTe. The semiconductor PbTe is a narrow band-gap material (Eg = 0.19 electron volt at a temperature of 4.2°K). Therefore, the radiative recombination of electron-hole pairs between the conduction and valence bands produces photons whose wavelength is in the infrared (λ ≈ 6.5 microns in air).

The p-n junction diode is a convenient device in which the spontaneous and stimulated emission of light can be achieved via current flow in the forward-bias direction. Consequently, the experimental devices consist of a group of PbTe p-n junction diodes made from p-type single crystal bulk material. The p-n junctions were formed by an n-type vapor-phase diffusion perpendicular to the (100) plane, with a junction depth of approximately 75 microns. Opposite ends of the diode structure were cleaved to give parallel reflectors, thereby forming the Fabry-Perot cavity needed for a laser oscillator. Since the emission of light originates from the recombination of injected current carriers, the nature of the radiation depends on the injection mechanism.

The total intensity of the light emitted from the PbTe diodes was observed over a current range of three to four orders of magnitude. At the low current levels, the light intensity data were correlated with data obtained on the electrical characteristics of the diodes. In the low current region (region A), the light intensity, current-voltage and capacitance-voltage data are consistent with the model for photon-assisted tunneling. As the current is increased, the light intensity data indicate a change in the current injection mechanism from photon-assisted tunneling (region A) to thermionic emission (region B). With further increase of the injection level, the photon field due to light emission in the diode builds up to the point where stimulated emission (oscillation) occurs. The threshold current at which oscillation begins marks the beginning of a region (region C) where the total light intensity increases very rapidly with the increase in current. This rapid increase in intensity is accompanied by an increase in the number of narrow-band oscillating modes. As the photon density in the cavity continues to increase with the injection level, the intensity gradually enters a region of linear dependence on current (region D), i.e., a region of constant (differential) quantum efficiency.

Data obtained from measurements of the stimulated-mode light-intensity profile and the far-field diffraction pattern (both in the direction perpendicular to the junction plane) indicate that the active region of high gain (i.e., the region where a population inversion exists) extends to approximately a diffusion length on both sides of the junction. The data also indicate that the confinement of the oscillating modes within the diode cavity is due to a variation in the real part of the dielectric constant, caused by the gain in the medium. A value of τ ≈ 10⁻⁹ second for the minority-carrier recombination lifetime (at a diode temperature of 20.4°K) is obtained from the above measurements. This value for τ is consistent with other data obtained independently for PbTe crystals.

Data on the threshold current for stimulated emission (for a diode temperature of 20.4°K) as a function of the reciprocal cavity length were obtained. These data yield a value of J'th = (400 ± 80) amp/cm² for the threshold current in the limit of an infinitely long diode cavity. A value of α = (30 ± 15) cm⁻¹ is obtained for the total (bulk) cavity loss constant, in general agreement with independent measurements of free-carrier absorption in PbTe. In addition, the data provide a value of ηs ≈ 10% for the internal spontaneous quantum efficiency. The above value for ηs yields values of τb ≈ τ ≈ 10⁻⁹ second and τs ≈ 10⁻⁸ second for the nonradiative and the spontaneous (radiative) lifetimes, respectively.
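The extrapolation amounts to a linear fit of the threshold current density against reciprocal cavity length; the sketch below uses made-up data points chosen only to illustrate the procedure (they are arranged to land near the quoted 400 amp/cm² intercept):

```python
# Fit J_th versus 1/L; the intercept estimates J'_th for an infinitely
# long cavity, and the slope carries the cavity-loss information.
import numpy as np

inv_L = np.array([10.0, 20.0, 40.0])       # reciprocal cavity length, 1/cm
J_th = np.array([700.0, 1000.0, 1600.0])   # measured thresholds, amp/cm^2
slope, intercept = np.polyfit(inv_L, J_th, 1)
print(intercept)  # J'_th estimate in the limit 1/L -> 0
```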

The external quantum efficiency (ηd) for stimulated emission from diode J-2 (at 20.4°K) was calculated by using the total light intensity vs. diode current data, plus accepted values for the material parameters of the mercury-doped germanium detector used for the measurements. The resulting value is ηd ≈ 10%-20% for emission from both ends of the cavity. The corresponding radiative power output (at λ = 6.5 microns) is 120-240 milliwatts for a diode current of 6 amps.

Relevance: 30.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scatterings of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate the object structure at the beginning of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
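To make the photon-splitting idea concrete, here is a deliberately crude one-dimensional sketch (not the thesis's simulator; the geometry, scattering rule, and parameters are toy assumptions): packets that scatter back toward the detector are split into lower-weight copies, so the rare signal-carrying paths are sampled more densely without biasing the estimate.

```python
# Toy 1-D photon random walk with splitting: back-scattered packets are
# split into `split` copies of weight w/split (an unbiased variance-
# reduction step); negligible or too-deep packets are terminated.
import random

def trace_packet(mu_s=10.0, z_max=2.0, split=4, w_min=1e-4):
    stack = [(0.0, 1.0, +1)]              # (depth, weight, direction)
    detected = 0.0
    while stack:
        z, w, d = stack.pop()
        z += d * random.expovariate(mu_s) # free path to the next event
        if z <= 0.0:
            detected += w                 # packet exits toward the detector
        elif z > z_max or w < w_min:
            continue                      # terminate deep/negligible paths
        elif random.random() < 0.5:       # back-scatter: split the packet
            stack.extend([(z, w / split, -1)] * split)
        else:
            stack.append((z, w, +1))      # forward scatter, weight unchanged
    return detected

print(sum(trace_packet() for _ in range(1000)) / 1000)  # reflectance estimate
```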

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, to predict the length of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
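The two-stage prediction can be sketched as follows (scikit-learn, with random toy data standing in for the simulated (image, truth) pairs; the feature layout and model choices are assumptions for illustration, not the thesis's exact architecture):

```python
# Stage 1: classify the structure type of an OCT image.
# Stage 2: a regressor trained only on that structure predicts layer sizes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

X = np.random.rand(1000, 256)             # image features (toy stand-in)
y_struct = np.random.randint(0, 3, 1000)  # structure label per image (toy)
y_layers = np.random.rand(1000, 4)        # per-layer sizes per image (toy)

clf = RandomForestClassifier().fit(X, y_struct)
regs = {c: RandomForestRegressor().fit(X[y_struct == c], y_layers[y_struct == c])
        for c in np.unique(y_struct)}

def reconstruct(features):
    c = clf.predict(features[None, :])[0]          # committee gating step
    return regs[c].predict(features[None, :])[0]   # structure-specific expert

print(reconstruct(np.random.rand(256)))
```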

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt at reconstructing an OCT image at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 30.00%

Abstract:

While photovoltaics hold much promise as a sustainable electricity source, continued cost reduction is necessary to sustain the current growth in deployment. A promising path to further reducing total system cost is increasing device efficiency. This thesis explores several silicon-based photovoltaic technologies with the potential to reach high power conversion efficiencies. Silicon microwire arrays, formed by joining millions of micron-diameter wires together, were developed as a low-cost, low-efficiency solar technology. The feasibility of transitioning this to a high-efficiency technology was explored. In order to achieve high efficiency, high-quality silicon material must be used. Lifetimes and diffusion lengths in these wires were measured and the action of various surface passivation treatments studied. While long lifetimes were not achieved, strong inversion at the silicon/hydrofluoric acid interface was measured, which is important for understanding a common measurement used in solar materials characterization.

Cryogenic deep reactive ion etching was then explored as a method for fabricating high-quality wires, and improved lifetimes were measured. As another way to reach high efficiency, the growth of silicon-germanium alloy wires was explored as a substrate for a III-V on Si tandem device. Patterned arrays of wires with up to 12% germanium incorporation were grown. This alloy is more closely lattice-matched to GaP than silicon and allows for improvements in III-V integration on silicon.

Heterojunctions of silicon are another promising path towards achieving high-efficiency devices. The GaP/Si heterointerface and the properties of GaP grown on silicon were studied. Additionally, a substrate removal process was developed that allows the formation of high-quality free-standing GaP films and has wide applications in the field of optics.

Finally, the effect of defects at the interface of the amorphous silicon heterojunction cell was studied. Excellent voltages, and thus efficiencies, are achievable with this system, but the voltage is very sensitive to growth conditions. We directly measured lateral transport lengths at the heterointerface on the order of tens to hundreds of microns, which allows carriers to travel to any defects that are present and recombine. This measurement adds to the understanding of these types of high-efficiency devices and may aid in future device design.

Relevance: 30.00%

Abstract:

This study evaluated the efficacy of the Reciproc #25 instrument in reaching the foramen of mandibular molar canals without a prior manual glide path. A general sample of 300 mandibular molars was radiographed and pre-selected according to the degree of curvature by Schneider's criterion, divided into classes I and II. After applying the inclusion and exclusion criteria, a total of 502 root canals formed the experimental groups: 253 canals in the class I mandibular molar group and 249 in the class II group. All canals were instrumented directly with the #25 file, without any prior glide path, strictly following the manufacturer's guidelines. The data were described as the frequency distribution of the number of canals (%) in which the apical foramen could be reached without the need for a glide path, as well as the number of fractures in each group. The pooled results of the two experimental groups showed that in 93.4% of all instrumented canals, the R25 instrument was able to reach the apical foramen without the need for a glide path. In 6.4% of the canals, the R25 instrument did not reach the apical foramen, and in only 0.2% of cases did the file fracture (one case in the class I group, while no fracture occurred in the class II group). A chi-square test was performed to verify whether one class of canals is more associated with the need for a glide path when the Reciproc system is used. The class II molar group had more canals (23) in which the R25 instrument could not reach the apical foramen than the class I group (9), a statistically significant difference (chi-square, p = 0.020, X2 = 5.452). Within the experimental conditions of this study, it can be concluded that the R25 file showed high efficacy in instrumenting the full extent of class I and II mandibular molar canals without the need for a prior glide path. Moreover, the proposed instrumentation system proved highly safe with respect to the fracture rate.
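The reported test can be reproduced approximately from the counts in the abstract (a sketch using scipy; the exact statistic depends on details such as how the fractured-file case is tallied and whether a continuity correction is applied):

```python
# Chi-square test: reaching the apical foramen vs. curvature class,
# with 9/253 failures in class I and 23/249 failures in class II.
from scipy.stats import chi2_contingency

table = [[253 - 9, 9],      # class I: reached, not reached
         [249 - 23, 23]]    # class II: reached, not reached
chi2, p, dof, _ = chi2_contingency(table)
print(chi2, p)  # the abstract reports X2 = 5.452, p = 0.020
```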

Relevance: 30.00%

Abstract:

We demonstrate passive Q-switching of short-length double-clad Tm3+-doped silica fiber lasers near 2 μm pumped by a laser diode array (LDA) at 790 nm. Polycrystalline Cr2+:ZnSe microchips with thicknesses from 0.3 to 1 mm are adopted as the Q-switching elements. A pulse duration of 120 ns, a pulse energy over 14 μJ, and a repetition rate of 53 kHz are obtained from a 5-cm long fiber laser. A repetition rate as high as 530 kHz is achieved from a 50-cm long fiber laser at ~10 W pump power. The performance of the Q-switched fiber lasers as a function of fiber length is also analyzed. (c) 2008 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

The focusing characteristics of long-distance flying optics were studied systematically for TEMmn Gaussian beams. The results show that the ABCD law for the parameter q can be extended to Gaussian modes of any order when the waist radius w in the imaginary part of the parameter q is replaced by the Rayleigh range zR of a given resonator in the equation. The difference between the real focal length and the geometric focal length, defined as Δf, was calculated for laser applications. A novel self-adaptive optical system was demonstrated for precisely controlling the focusing characteristics of long-distance flying optics. Theoretical analyses and experimental results were consistent. (c) 2006 Optical Society of America.
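A minimal sketch of the extended ABCD law (illustrative beam and optics; the helper function is an assumption for the example): the complex parameter q = z + i·zR transforms as q′ = (Aq + B)/(Cq + D), and the real focus lies where Re(q) = 0, generally offset by Δf from the geometric focus.

```python
# Propagate the complex beam parameter q through free space and a thin
# lens; the real focus (Re(q) = 0) differs from the geometric focus.
import numpy as np

def propagate_q(q, M):
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

z_R = np.pi * (1e-3) ** 2 / 1.064e-6   # Rayleigh range: w0 = 1 mm, 1064 nm
q = 1j * z_R                           # waist at the launch plane
q = propagate_q(q, [[1.0, 10.0], [0.0, 1.0]])      # 10 m of free flight
q = propagate_q(q, [[1.0, 0.0], [-1 / 0.5, 1.0]])  # thin lens, f = 0.5 m
print(-q.real)  # distance to the real focus; geometric imaging gives ~0.526 m
```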