8 results for Most Productive Scale Size
in CaltechTHESIS
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
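For readers unfamiliar with this machinery, the kind of fitting problem REM addresses can be illustrated with standard expectation maximization on a simple latent variable model. The sketch below fits a one-dimensional Gaussian mixture with vanilla EM; it is not the REM algorithm or any of the thesis's models, just the baseline procedure whose local-maximum problem REM is designed to alleviate.

```python
import numpy as np

def em_gaussian_mixture(x, k, n_iter=100, seed=0):
    """Standard EM for a 1-D Gaussian mixture (illustrative only; not REM)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize mixing weights, means, and variances.
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Example: recover two clusters from synthetic data.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gaussian_mixture(data, k=2))
```

Depending on the random initialization, plain EM of this kind can converge to a poor local maximum of the likelihood, which is the failure mode REM targets.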
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
Abstract:
Light microscopy has been one of the most common tools in biological research because of its high resolution and the non-invasive nature of light. Due to its high sensitivity and specificity, fluorescence is one of the most important readout modes of light microscopy. This thesis presents two new fluorescence microscopic imaging techniques: fluorescence optofluidic microscopy and fluorescence Talbot microscopy. The designs of the two systems are fundamentally different from conventional microscopy, which makes compact and portable devices possible. The components of the devices are suitable for mass production, making the microscopic imaging systems more affordable for biological research and clinical diagnostics.
Fluorescence optofluidic microscopy (FOFM) is capable of imaging fluorescent samples in fluid media. The FOFM employs an array of Fresnel zone plates (FZP) to generate an array of focused light spots within a microfluidic channel. As a sample flows through the channel and across the array of focused light spots, a filter-coated CMOS sensor collects the fluorescence emissions. The collected data can then be processed to render a fluorescence microscopic image. The resolution, which is determined by the focused light spot size, is experimentally measured to be 0.65 μm.
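The dependence of the focal spot size on the zone-plate geometry can be sketched from the standard FZP relations (paraxial zone radii r_n ≈ sqrt(nλf); Rayleigh resolution ≈ 1.22 times the outermost zone width). The wavelength, focal length, and zone count below are illustrative placeholders, not the design values used in the thesis.

```python
import numpy as np

# Illustrative parameters (placeholders, not the thesis's actual design values).
wavelength = 0.5e-6   # excitation wavelength, m
focal_len = 100e-6    # FZP focal length, m
n_zones = 40          # number of zones

# Paraxial zone radii: r_n ≈ sqrt(n * λ * f)
n = np.arange(1, n_zones + 1)
r = np.sqrt(n * wavelength * focal_len)

# The outermost zone width sets the diffraction-limited focal spot size:
# resolution ≈ 1.22 * Δr_N (Rayleigh criterion for a zone plate).
dr_outer = r[-1] - r[-2]
print(f"outermost zone width: {dr_outer * 1e6:.2f} um")
print(f"approximate focal spot size: {1.22 * dr_outer * 1e6:.2f} um")
```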
Fluorescence Talbot microscopy (FTM) is a fluorescence chip-scale microscopy technique that enables large field-of-view (FOV) and high-resolution imaging. The FTM method utilizes the Talbot effect to project a grid of focused excitation light spots onto the sample. The sample is placed on a filter-coated CMOS sensor chip. The fluorescence emissions associated with each focal spot are collected by the sensor chip and are composed into a sparsely sampled fluorescence image. By raster scanning the Talbot focal spot grid across the sample and collecting a sequence of sparse images, a filled-in high-resolution fluorescence image can be reconstructed. In contrast to a conventional microscope, the collection efficiency, resolution, and FOV are not tied to each other in this technique. The FOV of FTM is directly scalable. Our FTM prototype has demonstrated a resolution of 1.2 μm and a collection efficiency equivalent to that of a conventional microscope objective with a 0.70 N.A. The FOV is 3.9 mm × 3.5 mm, which is 100 times larger than that of a 20X/0.40 N.A. conventional microscope objective. Due to its large FOV, high collection efficiency, compactness, and potential for integration with other on-chip devices, FTM is suitable for diverse applications, such as point-of-care diagnostics, large-scale functional screens, and long-term automated imaging.
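The self-imaging distance behind a periodic illumination grid follows the standard paraxial Talbot relation z_T = 2d^2/λ. A minimal sketch, with a grid period and wavelength that are illustrative placeholders rather than the prototype's actual design values:

```python
# Paraxial Talbot length for a periodic illumination grid: z_T = 2 d^2 / λ.
# The grid period and wavelength are illustrative placeholders only.
period = 30e-6        # grid period d, m
wavelength = 0.5e-6   # illumination wavelength, m

talbot_length = 2 * period ** 2 / wavelength
print(f"Talbot length: {talbot_length * 1e3:.2f} mm")   # ~3.6 mm for these values
```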
Abstract:
Secondary-ion mass spectrometry (SIMS), electron probe analysis (EPMA), analytical scanning electron microscopy (SEM) and infrared (IR) spectroscopy were used to determine the chemical composition and the mineralogy of sub-micrometer inclusions in cubic diamonds and in overgrowths (coats) on octahedral diamonds from Zaire, Botswana, and some unknown localities.
The inclusions are sub-micrometer in size. The typical diameter encountered during transmission electron microscope (TEM) examination was 0.1-0.5 µm. The micro-inclusions are sub-rounded and their shape is crystallographically controlled by the diamond. Normally they are not associated with cracks or dislocations and appear to be well isolated within the diamond matrix. The number density of inclusions is highly variable on any scale and may reach 10^11 inclusions/cm^3 in the most densely populated zones. The total concentration of metal oxides in the diamonds varies between 20 and 1270 ppm (by weight).
SIMS analysis yields the average composition of about 100 inclusions contained in the sputtered volume. Comparison of analyses of different volumes of an individual diamond shows roughly uniform composition (typically ±10% relative). The variation among the average compositions of different diamonds is somewhat greater (typically ±30%). Nevertheless, all diamonds exhibit similar characteristics, being rich in water, carbonate, SiO_2, and K_2O, and depleted in MgO. The composition of micro-inclusions in most diamonds varies within the following ranges: SiO_2, 30-53%; K_2O, 12-30%; CaO, 8-19%; FeO, 6-11%; Al_2O_3, 3-6%; MgO, 2-6%; TiO_2, 2-4%; Na_2O, 1-5%; P_2O_5, 1-4%; and Cl, 1-3%. In addition, BaO, 1-4%; SrO, 0.7-1.5%; La_2O_3, 0.1-0.3%; Ce_2O_3, 0.3-0.5%; smaller amounts of other rare-earth elements (REE), as well as Mn, Th, and U were also detected by instrumental neutron activation analysis (INAA). Mg/(Fe+Mg) values of 0.40-0.62 are low compared with other mantle-derived phases; K/Al ratios of 2-7 are very high, and the chondrite-normalized Ce/Eu ratios of 10-21 are also high, indicating extremely fractionated REE patterns.
SEM analyses indicate that individual inclusions within a single diamond are roughly of similar composition. The average composition of individual inclusions as measured with the SEM is similar to that measured by SIMS. Compositional variations revealed by the SEM are larger than those detected by SIMS and indicate a small variability in the composition of individual inclusions. No compositions of individual inclusions were determined that might correspond to mono-mineralic inclusions.
IR spectra of inclusion-bearing zones exhibit characteristic absorption due to: (1) pure diamond; (2) nitrogen and hydrogen in the diamond matrix; and (3) mineral phases in the micro-inclusions. Nitrogen concentrations of 500-1100 ppm, typical of the micro-inclusion-bearing zones, are higher than the average nitrogen content of diamonds. Only type IaA centers were detected by IR. A yellow coloration may indicate a small concentration of type Ib centers.
The absorption due to the micro-inclusions in all diamonds produces similar spectra and indicates the presence of hydrated sheet silicates (most likely, Fe-rich clay minerals), carbonates (most likely calcite), and apatite. Small quantities of molecular CO_2 are also present in most diamonds. Water is probably associated with the silicates but the possibility of its presence as a fluid phase cannot be excluded. Characteristic lines of olivine, pyroxene and garnet were not detected and these phases cannot be significant components of the inclusions. Preliminary quantification of the IR data suggests that water and carbonate account for, on average, 20-40 wt% of the micro-inclusions.
The composition and mineralogy of the micro-inclusions are completely different from those of the more common, larger inclusions of the peridotitic or eclogitic assemblages. Their bulk composition resembles that of potassic magmas, such as kimberlites and lamproites, but is enriched in H_2O, CO_3, K_2O, and incompatible elements, and depleted in MgO.
It is suggested that the composition of the micro-inclusions represents a volatile-rich fluid or a melt trapped by the diamond during its growth. The high content of K, Na, P, and incompatible elements suggests that the trapped material found in the micro-inclusions may represent an effective metasomatizing agent. It may also be possible that fluids of similar composition are responsible for the extreme enrichment of incompatible elements documented in garnet and pyroxene inclusions in diamonds.
The origin of the fluid trapped in the micro-inclusions is still uncertain. It may have been formed by incipient melting of highly metasomatized mantle rocks. More likely, it is the result of fractional crystallization of a potassic parental magma at depth. In either case, the micro-inclusions document the presence of highly potassic fluids or melts at depths corresponding to the diamond stability field in the upper mantle. The phases presently identified in the inclusions are believed to be the result of closed-system reactions at lower pressures.
Abstract:
The O18/O16, C13/C12, and D/H ratios have been determined for rocks and coexisting minerals from several granitic plutons and their contact metamorphic aureoles in northern Nevada, eastern California, central Colorado, and Texas, with emphasis on oxygen isotopes. A consistent order of O18/O16, C13/C12, and D/H enrichment in coexisting minerals, and a correlation between isotopic fractionations among coexisting mineral pairs are in general observed, suggesting that mineral assemblages tend to approach isotopic equilibrium during contact metamorphism. In certain cases, a correlation is observed between oxygen isotopic fractionations of a mineral pair and sample distance from intrusive contacts. Isotopic temperatures generally show good agreement with heat flow considerations. Based on the experimentally determined quartz-muscovite O18/O16 fractionation calibration curve, temperatures are estimated to be 525 to 625°C at the contacts of the granitic stocks studied.
Small-scale oxygen isotope exchange effects between intrusive and country rock are observed over distances of 0.5 to 3 feet on both sides of the contacts; the isotopic gradients are typically 2 to 3 per mil per foot. The degree of oxygen isotopic exchange is essentially identical for different coexisting minerals. This presumably occurred through a diffusion-controlled recrystallization process. The sizes of the oxygen isotope equilibrium systems in the small-scale exchanged zones vary from about 1.5 cm to 30 cm. A xenolith and a re-entrant of country rock projecting into an intrusive have both undergone much more extensive isotopic exchange (to hundreds of feet); they also show abnormally high isotopic temperatures. The marginal portions of most plutons have unusually high O18/O16 ratios compared to "normal" igneous rocks, presumably due to large-scale isotopic exchange with meta-sedimentary country rocks when the igneous rocks were essentially in a molten state. The isotopic data suggest that outward horizontal movement of H2O into the contact metamorphic aureoles is almost negligible, but upward movement of H2O may be important. Also, direct influx and absorption of water from the country rock may be significant in certain intrusive stocks.
Except in the exchanged zones, the O18/O16 ratios of pelitic rocks do not change appreciably during contact metamorphism, even in the cordierite and sillimanite grades; this is in contrast to regional metamorphic rocks which commonly decrease in O18 with increasing grade. Low O18/O16 and C13/C12 ratios of the contact metamorphic marbles generally correlate well with the presence of calc-silicate minerals, indicating that the CO2 liberated during metamorphic decarbonation reactions is enriched in both O18 and C13 relative to the carbonates.
The D/H ratios of biotites in the contact metamorphic rocks and their associated intrusions show a geographic correlation that is similar to that shown by the D/H ratios of meteoric surface waters, perhaps indicating that meteoric waters were present in the rocks during crystallization of the biotites.
Abstract:
On the materials scale, thermoelectric efficiency is defined by the dimensionless figure of merit zT. This value is made up of three material components in the form zT = Tα^2/ρκ, where α is the Seebeck coefficient, ρ is the electrical resistivity, and κ is the total thermal conductivity. Therefore, improving zT requires reducing ρ and κ while increasing α. However, due to the inter-relation of the electrical and thermal properties of materials, typical routes to thermoelectric enhancement come in one of two forms. The first is to isolate the electronic properties and increase α without negatively affecting ρ. Techniques like electron filtering, quantum confinement, and density-of-states distortions have been proposed to enhance the Seebeck coefficient in thermoelectric materials. However, it has been difficult to prove the efficacy of these techniques. More recently, efforts to manipulate the band degeneracy in semiconductors have been explored as a means to enhance α.
The other route to thermoelectric enhancement is through minimizing the thermal conductivity, κ. More specifically, the thermal conductivity can be broken into two parts, an electronic term and a lattice term, κ_e and κ_l respectively. From a functional materials standpoint, reducing the lattice thermal conductivity should have a minimal effect on the electronic properties, so most routes incorporate techniques that focus on reducing the lattice term. The components that make up κ_l (κ_l = (1/3)Cνl) are the heat capacity (C), the phonon group velocity (ν), and the phonon mean free path (l). Since the heat capacity and group velocity are extremely difficult to alter, the phonon mean free path is most often the target of reduction.
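Both relations above (zT = Tα^2/ρκ and κ_l = (1/3)Cνl) are simple to evaluate directly. A minimal sketch with round-number placeholder property values, not measurements from this work:

```python
# Illustrative evaluation of the two relations above; the property values are
# round-number placeholders, not data from this thesis.
T = 300.0          # temperature, K
alpha = 200e-6     # Seebeck coefficient, V/K
rho = 1e-5         # electrical resistivity, ohm*m
kappa_e = 0.6      # electronic thermal conductivity, W/(m*K)

# Lattice term from kinetic theory: kappa_l = (1/3) * C * v * l
C = 1.2e6          # volumetric heat capacity, J/(m^3*K)
v = 2000.0         # phonon group velocity, m/s
l = 2e-9           # phonon mean free path, m
kappa_l = C * v * l / 3.0

kappa = kappa_e + kappa_l
zT = T * alpha ** 2 / (rho * kappa)
print(f"kappa_l = {kappa_l:.2f} W/(m*K), zT = {zT:.2f}")
```

With these placeholder numbers, shortening the phonon mean free path l directly lowers κ_l, and hence raises zT, without touching α or ρ, which is the motivation for the microstructure strategies discussed next.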
Past routes to decreasing the phonon mean free path have been alloying and grain size reduction. These techniques, however, often degrade the electron mobility: in alloying, any perturbation to the periodic potential can cause additional adverse carrier scattering. Grain size reduction has been another successful route to enhancing zT because of the significant difference between electron and phonon mean free paths, but it is erratic in anisotropic materials due to the orientation-dependent transport properties. Microstructure formation in both equilibrium and nonequilibrium processing routines, however, can be used to effectively reduce the phonon mean free path as a route to enhancing the figure of merit.
This work starts with a discussion of several different deliberate microstructure varieties. Control of the morphology and, finally, of structure size and spacing is discussed at length. Since the material example used throughout this thesis is anisotropic, a short primer on zone melting is presented as an effective route to growing homogeneous and oriented polycrystalline material. The resulting microstructure formation and control are presented specifically for In2Te3-Bi2Te3 composites, along with the transport properties pertinent to thermoelectric materials. Finally, the transport properties of iodine-doped Bi2Te3 are presented and discussed as a re-evaluation of the literature data in light of what is known today.
Abstract:
Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly.
We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments.
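The underlying estimator is the standard Widom relation μ_ex = -k_B T ln⟨exp(-ΔU/k_B T)⟩, where ΔU is the energy change of inserting a test particle at a random position. The sketch below implements the naive, uniformly sampled version for a periodic Lennard-Jones fluid; the cavity-biased sampling that constitutes the thesis's improvement is not shown.

```python
import numpy as np

def widom_mu_excess(positions, box, n_insert, kT, eps=1.0, sigma=1.0, rng=None):
    """Naive Widom insertion for a periodic Lennard-Jones fluid.

    Uniform trial insertions only; the thesis's improvement is to bias
    insertions toward cavities, which this sketch does not do.
    """
    rng = np.random.default_rng() if rng is None else rng
    boltz = 0.0
    for _ in range(n_insert):
        trial = rng.uniform(0.0, box, size=3)        # random test-particle position
        d = positions - trial
        d -= box * np.round(d / box)                 # minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        sr6 = (sigma ** 2 / r2) ** 3
        du = 4.0 * eps * (sr6 ** 2 - sr6).sum()      # LJ energy of the insertion
        boltz += np.exp(-du / kT)
    return -kT * np.log(boltz / n_insert)            # excess chemical potential

# Example with a random (ideal-gas-like) configuration, just to run end to end.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 8.0, size=(200, 3))
print(widom_mu_excess(pos, box=8.0, n_insert=2000, kT=1.5, rng=rng))
```

In a dense liquid, almost every uniform insertion overlaps a particle and contributes essentially nothing to the average, which is why biasing the insertions toward cavities, as proposed in this thesis, improves the efficiency so dramatically.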
We propose the small-cell coexistence method, based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
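One plausible reading of the statistical step, sketched below with synthetic placeholder counts rather than simulation results: at each temperature a number of short two-phase simulations are run, each cell ends up fully solid or fully liquid, and the melting temperature is estimated as the temperature at which either outcome is equally likely.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy illustration of the statistical analysis only: at each temperature we
# imagine running several short solid-liquid coexistence MD simulations and
# recording the fraction that ended fully liquid.  The numbers below are
# synthetic placeholders, not results from this thesis.
temps = np.array([2900.0, 2950.0, 3000.0, 3050.0, 3100.0, 3150.0])   # K
frac_liquid = np.array([0.10, 0.20, 0.45, 0.60, 0.80, 0.95])

def logistic(T, Tm, w):
    # Probability of melting as a smooth function of temperature.
    return 1.0 / (1.0 + np.exp(-(T - Tm) / w))

(Tm, w), _ = curve_fit(logistic, temps, frac_liquid, p0=(3000.0, 50.0))
print(f"estimated melting temperature: {Tm:.0f} K")
```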
We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides good indication that the computation methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with a general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered as the highest melting temperature for any solid. Very few measurements of melting point in tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation lets us identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials, which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.
Abstract:
The purpose of this thesis is to characterize the behavior of the smallest turbulent scales in high Karlovitz number (Ka) premixed flames. These scales are particularly important in the two-way coupling between turbulence and chemistry, and a better understanding of them will support future modeling efforts using large eddy simulations (LES). The smallest turbulent scales are studied by considering the vorticity vector, ω, and its transport equation.
Due to the complexity of turbulent combustion introduced by the wide range of length and time scales, the two-dimensional vortex-flame interaction is first studied as a simplified test case. Numerical and analytical techniques are used to discern the dominant transport terms and their effects on vorticity based on the initial size and strength of the vortex. This description of the effects of the flame on a vortex provides a foundation for investigating vorticity in turbulent combustion.
Subsequently, the enstrophy, ω^2 = ω · ω, and its transport equation are investigated in premixed turbulent combustion. For this purpose, a series of direct numerical simulations (DNS) of premixed n-heptane/air flames are performed, the conditions of which span a wide range of unburnt Karlovitz numbers and turbulent Reynolds numbers. Theoretical scaling analysis along with the DNS results supports that, at high Karlovitz number, enstrophy transport is controlled by the viscous dissipation and vortex stretching/production terms. As a result, vorticity scales throughout the flame with the inverse of the Kolmogorov time scale, τ_η, just as in homogeneous isotropic turbulence. As τ_η is only a function of the viscosity and dissipation rate, this supports the validity of Kolmogorov’s first similarity hypothesis for sufficiently high Ka numbers (Ka ≳ 100). These conclusions are in contrast to low Karlovitz number behavior, where dilatation and baroclinic torque have a significant impact on vorticity within the flame. The results are unaffected by the transport model, the chemical model, the turbulent Reynolds number, and the physical configuration.
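The quantities in this paragraph are straightforward to evaluate on a gridded velocity field. A minimal sketch computing vorticity, mean enstrophy, and the Kolmogorov time scale τ_η = sqrt(ν/ε) with spectral derivatives on a synthetic periodic field; this is not the thesis's DNS data or code.

```python
import numpy as np

# Vorticity, enstrophy, and Kolmogorov time scale on a periodic grid using
# spectral derivatives.  The velocity field is a synthetic Taylor-Green-like
# field, used only so the example runs end to end.
n, L, nu = 64, 2 * np.pi, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.cos(X) * np.sin(Y) * np.sin(Z)
v = -np.sin(X) * np.cos(Y) * np.sin(Z)
w = np.zeros_like(u)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")

def deriv(f, K):
    # Spectral derivative of a periodic field along the direction of K.
    return np.real(np.fft.ifftn(1j * K * np.fft.fftn(f)))

# Vorticity components: omega = curl(u)
wx = deriv(w, KY) - deriv(v, KZ)
wy = deriv(u, KZ) - deriv(w, KX)
wz = deriv(v, KX) - deriv(u, KY)

enstrophy = np.mean(wx**2 + wy**2 + wz**2)   # <omega . omega>
epsilon = nu * enstrophy                      # dissipation for homogeneous incompressible flow
tau_eta = np.sqrt(nu / epsilon)               # Kolmogorov time scale
print(f"mean enstrophy = {enstrophy:.3f}, tau_eta = {tau_eta:.3f}")
```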
Next, the isotropy of vorticity is assessed. It is found that, given a sufficiently large value of the Karlovitz number (Ka ≳ 100), the vorticity is isotropic. At lower Karlovitz numbers, anisotropy develops due to the effects of the flame on the vortex stretching/production term. In this case, the local dynamics of vorticity in the eigenframe of the strain-rate tensor, S, are altered by the flame. At sufficiently high Karlovitz numbers, the dynamics of vorticity in this eigenframe resemble those of homogeneous isotropic turbulence.
Combined, the results of this thesis support that both the magnitude and the orientation of vorticity resemble the behavior of homogeneous isotropic turbulence, given a sufficiently high Karlovitz number (Ka ≳ 100). This supports the validity of Kolmogorov’s first similarity hypothesis and the hypothesis of local isotropy under these conditions. However, dramatically different behavior is found at lower Karlovitz numbers. These conclusions suggest directions for modeling high Karlovitz number premixed flames using LES. With more accurate models, the design of aircraft combustors and other combustion-based devices may better mitigate the detrimental effects of combustion, from reducing CO2 and soot production to increasing engine efficiency.
Abstract:
Liquefaction is a devastating instability associated with saturated, loose, and cohesionless soils. It poses a significant risk to distributed infrastructure systems that are vital for the security, economy, safety, health, and welfare of societies. In order to make our cities resilient to the effects of liquefaction, it is important to be able to identify areas that are most susceptible. Some of the prevalent methodologies employed to identify susceptible areas include conventional slope stability analysis and the use of so-called liquefaction charts. However, these methodologies have some limitations, which motivate our research objectives. In this dissertation, we investigate the mechanics of the origin of liquefaction in a laboratory test using grain-scale simulations, which helps (i) understand why certain soils liquefy under certain conditions, and (ii) identify a necessary precursor for the onset of flow liquefaction. Furthermore, we investigate the mechanics of liquefaction charts using a continuum plasticity model; this can help in modeling the surface hazards of liquefaction following an earthquake. Finally, we also investigate the microscopic definition of soil shear wave velocity, a soil property that is used as an index to quantify the liquefaction resistance of soil. We show that anisotropy in fabric, or grain arrangement, can be correlated with anisotropy in shear wave velocity. This has the potential to quantify the effects of sample disturbance when a soil specimen is extracted from the field. In conclusion, by developing a more fundamental understanding of soil liquefaction, this dissertation takes necessary steps toward a more physical assessment of liquefaction susceptibility at the field scale.