993 results for Deep tissue

Relevance: 20.00%

Publisher:

Abstract:

The equations of state (EOS) of several geologically important silicate liquids have been constrained via preheated shock wave techniques. Results on molten Fe2SiO4 (fayalite), Mg2SiO4 (forsterite), CaFeSi2O6 (hedenbergite), an equimolar mixture of CaAl2Si2O8-CaFeSi2O6 (anorthite-hedenbergite), and an equimolar mixture of CaAl2Si2O8-CaFeSi2O6-CaMgSi2O6(anorthite-hedenbergite-diopside) are presented. This work represents the first ever direct EOS measurements of an iron-bearing liquid or of a forsterite liquid at pressures relevant to the deep Earth (> 135 GPa). Additionally, revised EOS for molten CaMgSi2O6 (diopside), CaAl2Si2O8 (anorthite), and MgSiO3 (enstatite), which were previously determined by shock wave methods, are also presented.
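As a rough illustration of how preheated shock-wave measurements are reduced to an equation of state, the Rankine-Hugoniot jump conditions convert a measured shock velocity and particle velocity into the pressure and density behind the shock. The sketch below uses illustrative numbers, not data or fitted values from this study.

```python
# Rankine-Hugoniot jump conditions: reduce one shock-wave measurement
# (shock velocity Us, particle velocity up) to a pressure-density point.
# All numbers below are illustrative, not results from the study.

def hugoniot_state(rho0, us, up):
    """Return (pressure in Pa, density in kg/m^3) behind the shock.

    rho0 : initial (preheated) density, kg/m^3
    us   : shock velocity, m/s
    up   : particle velocity, m/s
    """
    p = rho0 * us * up              # momentum conservation
    rho = rho0 * us / (us - up)     # mass conservation
    return p, rho

# Example: a silicate liquid shocked to deep-mantle pressures
# (hypothetical velocities).
p, rho = hugoniot_state(rho0=2600.0, us=10000.0, up=5000.0)
print(f"P = {p / 1e9:.0f} GPa, rho = {rho:.0f} kg/m^3")
```

Repeating this reduction over a range of shock strengths traces out the Hugoniot, from which the liquid EOS is fitted.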

The liquid EOS are incorporated into a model, which employs linear mixing of volumes to determine the density of compositionally intermediate liquids in the CaO-MgO-Al2O3-SiO2-FeO major element space. Liquid volumes are calculated for temperature and pressure conditions that are currently present at the core-mantle boundary or that may have occurred during differentiation of a fully molten mantle magma ocean.
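The linear-mixing step can be sketched as follows. The molar masses are standard formula weights, but the molar volumes and the chosen composition are placeholders, not the fitted EOS values used in the model.

```python
# Linear mixing of end-member molar volumes to estimate the density of
# a compositionally intermediate silicate liquid. Molar volumes below
# are placeholder values at some assumed P and T.

def mixed_density(x, molar_mass, molar_volume):
    """Density (kg/m^3) from mole fractions x (summing to 1), end-member
    molar masses (kg/mol), and end-member molar volumes (m^3/mol)."""
    assert abs(sum(x) - 1.0) < 1e-9
    m = sum(xi * mi for xi, mi in zip(x, molar_mass))   # mean molar mass
    v = sum(xi * vi for xi, vi in zip(x, molar_volume)) # linear volume mix
    return m / v

# 50:50 anorthite-diopside mixture (illustrative molar volumes):
rho = mixed_density(
    x=[0.5, 0.5],
    molar_mass=[0.2782, 0.2165],     # kg/mol: CaAl2Si2O8, CaMgSi2O6
    molar_volume=[1.06e-4, 8.2e-5],  # m^3/mol, assumed values
)
print(f"{rho:.0f} kg/m^3")
```

The Fe-coordination caveat in the results below means this linear mix is only trusted for liquids with molar XFe ≤ 0.06.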

The most significant implications of our results include: (1) a magma ocean of either chondrite or peridotite composition is less dense than its first crystallizing solid, which is not conducive to the formation of a basal mantle magma ocean, (2) the ambient mantle cannot produce a partial melt and an equilibrium residue sufficiently dense to form an ultralow velocity zone mush, and (3) due to the compositional dependence of Fe2+ coordination, there is a threshold of Fe concentration (molar XFe ≤ 0.06) permitted in a liquid for which its density can still be approximated by linear mixing of end-member volumes.

Abstract:

Biological machines are active devices that are comprised of cells and other biological components. These functional devices are best suited for physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue engineer one such machine by exclusively using biological building blocks and by employing a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart and by using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior in order to better understand the capabilities and limitations of cells when used as the engines of biological machines.

Abstract:

Deep-subwavelength gratings with periodicities of 170, 120, and 70 nm can be observed on highly oriented pyrolytic graphite irradiated by a femtosecond (fs) laser at 800 nm. Such gratings can likewise be produced under picosecond laser irradiation. Interestingly, the 170-nm grating is also observed on single-crystal diamond irradiated by the 800-nm fs laser. In our opinion, the optical properties of the highly excited state of the material surface play a key role in the formation of the deep-subwavelength gratings. Numerical simulations of the graphite deep-subwavelength grating in the normal and highly excited states confirm that the light intensity in the groove can be extraordinarily enhanced via cavity-mode excitation under transverse-magnetic-wave irradiation at near-ablation-threshold fluences. This polarization-sensitive field enhancement in deep-subwavelength apertures acts as an important feedback mechanism for the growth and polarization dependence of the deep-subwavelength gratings. In addition, we suggest that surface plasmons are responsible for the formation of seed deep-subwavelength apertures with a particular periodicity and for the initial polarization dependence. Finally, we propose that nanoscale Coulomb explosions occurring in the groove are responsible for the ultrafast nonthermal ablation mechanism.

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean, sea ice, and ice-shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperature. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
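The Bayesian recovery step can be illustrated with a minimal Metropolis sampler. The forward model below is a toy stand-in (an assumed exponential decay of the bottom-water signal with depth), not the pore fluid diffusion model used in the actual analysis, and the data are synthetic.

```python
import math
import random

# Minimal Metropolis MCMC sketch: recover a single bottom-water
# parameter from noisy synthetic pore-fluid data. The forward model is
# a stand-in (assumed exponential decay with depth).

random.seed(0)

def forward(theta, depths):
    return [theta * math.exp(-z / 50.0) for z in depths]

depths = [5.0 * i for i in range(20)]
true_theta = 1.0
data = [y + random.gauss(0.0, 0.05) for y in forward(true_theta, depths)]

def log_likelihood(theta):
    resid = [d - m for d, m in zip(data, forward(theta, depths))]
    return -0.5 * sum(r * r for r in resid) / 0.05 ** 2

theta, samples = 0.5, []
for step in range(5000):
    prop = theta + random.gauss(0.0, 0.05)        # random-walk proposal
    if math.log(1.0 - random.random()) < log_likelihood(prop) - log_likelihood(theta):
        theta = prop                              # Metropolis accept
    if step >= 1000:                              # discard burn-in
        samples.append(theta)

est = sum(samples) / len(samples)
print(f"posterior mean ≈ {est:.2f} (true value {true_theta})")
```

In the real problem the retained samples characterize the full posterior over bottom-water histories, not just a point estimate.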

Abstract:

Understanding the mechanisms of enzymes is crucial for our understanding of their role in biology and for designing methods to perturb or harness their activities for medical treatments, industrial processes, or biological engineering. One aspect of enzymes that makes them difficult to fully understand is that they are in constant motion, and these motions and the conformations adopted throughout these transitions often play a role in their function.

Traditionally, it has been difficult to isolate a protein in a particular conformation to determine what role each form plays in the reaction or biology of that enzyme. A new technology, computational protein design, makes the isolation of various conformations possible, and therefore is an extremely powerful tool in enabling a fuller understanding of the role a protein conformation plays in various biological processes.

One such protein that undergoes large structural shifts during different activities is human type II transglutaminase (TG2). TG2 is an enzyme that exists in two dramatically different conformational states: (1) an open, extended form, which is adopted upon the binding of calcium, and (2) a closed, compact form, which is adopted upon the binding of GTP or GDP. TG2 possesses two separate active sites, each with a radically different activity. The open, calcium-bound form of TG2 is believed to act as a transglutaminase, catalyzing the formation of an isopeptide bond between the side chain of a peptide-bound glutamine and a primary amine. The closed, GTP-bound conformation is believed to act as a GTPase. TG2 is also implicated in a variety of biological and pathological processes.

To better understand the effects of TG2's conformations on its activities and pathological roles, we set out to design variants of TG2 locked in either the closed or open conformation. We were able to design open-locked and closed-biased TG2 variants, and we used these designs to revise the current understanding of which conformations carry which activities and to explore each conformation's role in celiac disease models. This work also helped explain older, confusing results regarding this enzyme and its activities. The new model for TG2 activity has immense implications for our understanding of its functional capabilities in various environments, and for identifying which conformations need to be inhibited when designing new drugs for diseases in which TG2's activities are believed to elicit pathological effects.

Abstract:

This paper studies the correlation properties of speckles in the deep Fresnel diffraction region produced by scattering from rough self-affine fractal surfaces. The autocorrelation function of the speckle intensities is formulated by combining the Kirchhoff approximation of light scattering theory with the principles of speckle statistics. We propose a method for extracting the three surface parameters, i.e. the roughness w, the lateral correlation length ξ, and the roughness exponent α, from the autocorrelation functions of speckles. This method is verified by simulating the speckle intensities and calculating the speckle autocorrelation function. We also find that for rough surfaces with α = 1, the structure of the speckles resembles that of the surface heights, which results from the peaks and valleys of the surface acting as micro-lenses that converge and diverge the light waves.
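The central quantity here is the intensity autocorrelation function. The sketch below estimates it directly for a synthetic one-dimensional "speckle" trace with a finite grain size; extracting w, ξ, and α would then proceed by fitting model autocorrelation functions to curves like this one.

```python
import random

# Estimate the normalized intensity autocorrelation of a synthetic
# 1-D speckle trace. The trace is a stand-in for a measured
# deep-Fresnel speckle image: smoothed Gaussian noise (finite grain
# size), squared to give an intensity.

random.seed(0)
n, w = 4096, 8                                   # trace length, grain size

noise = [random.gauss(0.0, 1.0) for _ in range(n)]
field = [sum(noise[i:i + w]) / w for i in range(n - w)]  # moving average
intensity = [f * f for f in field]

def autocorr(x, lag):
    """Normalized autocovariance of sequence x at the given lag."""
    mean = sum(x) / len(x)
    d = [xi - mean for xi in x]
    num = sum(d[i] * d[i + lag] for i in range(len(x) - lag))
    den = sum(di * di for di in d)
    return num / den

# Correlation is 1 at zero lag and decays over ~the grain size:
print(autocorr(intensity, 0), autocorr(intensity, 1), autocorr(intensity, 20))
```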

Abstract:

We describe the design, fabrication, and excellent performance of an optimized deep-etched high-density fused-silica transmission grating for use in dense wavelength division multiplexing (DWDM) systems. The fabricated optimized transmission grating exhibits an efficiency of 87.1% at a wavelength of 1550 nm. Inductively coupled plasma etching was used to fabricate the grating. The deep-etched high-density fused-silica transmission grating is suitable for use in a DWDM system because of its high efficiency, low polarization-dependent loss, parallel demultiplexing, and stable optical performance, and such gratings should play an important role in DWDM systems. (c) 2006 Optical Society of America.

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects, or much shallower in terms of depth. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
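The slope of growth in the mass-size plane can be sketched with a toy calculation. The growth factors below are hypothetical, not the measured values; for context, simulations in the literature predict a slope near 1 for major mergers and near 2 or steeper for minor mergers.

```python
import math

# Sketch: slope of growth dlogRe/dlogM from matched progenitor/
# descendant pairs. Each pair is (mass growth factor, size growth
# factor) for one galaxy; the numbers are hypothetical.

pairs = [
    (1.3, 1.8),
    (1.5, 2.3),
    (1.8, 3.2),
    (2.0, 4.1),
]
slopes = [math.log(r) / math.log(m) for m, r in pairs]
slope = sum(slopes) / len(slopes)
print(f"mean dlogRe/dlogM ≈ {slope:.2f}")
```

A fitted slope near 2 would point toward minor mergers as the dominant growth channel, consistent with the conclusion above.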

By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies that have recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.
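The bookkeeping behind "half of the observed growth" can be made explicit with a back-of-the-envelope split in log space; the overall growth factor below is hypothetical, not the measured value.

```python
import math

# Splitting an observed population size-growth factor into a progenitor
# bias contribution and a physical growth contribution, assuming (as a
# toy number) a factor-of-4 growth and an even split in dex.

observed_growth = 4.0                    # hypothetical factor in mean Re
log_total = math.log10(observed_growth)
log_bias = 0.5 * log_total               # progenitor bias: half, in dex
log_physical = log_total - log_bias      # remainder: physical growth

print(f"bias factor {10 ** log_bias:.1f}, physical factor {10 ** log_physical:.1f}")
```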

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.

Abstract:

We describe a highly efficient polarizing beam splitter (PBS) based on a deep-etched binary-phase fused-silica grating, in which TE- and TM-polarized waves are mainly diffracted into the -1st and 0th orders, respectively. To achieve a high extinction ratio and diffraction efficiency, the grating depth and period are optimized using rigorous coupled-wave analysis, which can be well explained by the modal method with effective indices of the modes for TE/TM polarization. Holographic recording and inductively coupled plasma etching are employed to fabricate the fused-silica PBS grating. Experimental diffraction efficiencies approaching 80% for the TE-polarized wave in the -1st order and more than 85% for the TM-polarized wave in the 0th order were obtained at a wavelength of 1550 nm. Because of its compact structure and simple fabrication process, which is suitable for mass production, a deep-etched fused-silica grating PBS should be a useful device for practical applications. (C) 2007 Optical Society of America.
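Which diffraction orders exist at all is fixed by the grating equation, before any coupled-wave or modal optimization of how power divides between them. The sketch below uses an illustrative period, not the optimized value from the paper, and assumes normal incidence in air.

```python
import math

# Grating-equation sketch for a transmission grating at normal
# incidence in air: sin(theta_m) = m * wavelength / period.
# The period below is illustrative, not the paper's optimized design.

def order_angles(wavelength_nm, period_nm):
    """Propagating diffraction orders and their angles in degrees."""
    angles = {}
    for m in range(-3, 4):
        s = m * wavelength_nm / period_nm
        if abs(s) <= 1.0:                # evanescent orders excluded
            angles[m] = math.degrees(math.asin(s))
    return angles

# At 1550 nm with a period just above the wavelength, only the 0th and
# ±1st orders propagate, which is why the design can concentrate TE
# light into the -1st order and TM light into the 0th.
print(order_angles(1550.0, 1600.0))
```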

Abstract:

Both chemical and biological methods are used to assess the water quality of rivers. Many standard physical and chemical methods are now established, but biological procedures of comparable accuracy and versatility are still lacking. This is unfortunate because the biological assessment of water quality has several advantages over physical and chemical analyses. Several groups of organisms have been used to assess water quality in rivers, including Bacteria, Protozoa, Algae, macrophytes, macroinvertebrates, and fish. Hellawell (1978) provides an excellent review of the advantages and disadvantages of these groups, and concludes that macroinvertebrates are the most useful for monitoring water quality. Although macroinvertebrates are relatively easy to sample in shallow water (depth < 1 m), quantitative sampling poses more problems than qualitative sampling because a large number of replicate sampling units is usually required for accurate estimates of numbers or biomass per unit area. Both qualitative and quantitative sampling are difficult in deep water (depth > 1 m). The present paper first considers different types of samplers, with emphasis on immediate samplers, and then discusses some problems in choosing a suitable sampler for benthic macroinvertebrates in deep rivers.
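The replicate-number problem mentioned above is usually handled with a pilot sample. One common rule of thumb for benthic work (e.g. Elliott, 1977) chooses enough replicates that the standard error falls within a chosen fraction of the mean; the counts below are hypothetical.

```python
import math

# Replicate-number sketch for quantitative benthic sampling: how many
# sampler units are needed so that the standard error of the mean count
# is at most a fraction d of the mean. Pilot counts are hypothetical.

def replicates_needed(counts, d=0.2):
    """Replicate units needed for SE <= d * mean, from pilot counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return math.ceil(var / (d * mean) ** 2)

pilot = [12, 3, 25, 7, 18, 2, 30, 9]    # animals per pilot sample unit
print(replicates_needed(pilot))
```

The highly aggregated spatial distributions typical of benthic macroinvertebrates inflate the variance, which is exactly why so many replicate units are needed for quantitative estimates.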

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground-truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
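The core of any such simulator is photon transport through a scattering medium. The toy sketch below walks photons through a one-dimensional slab with absorption and isotropic scattering; it omits the interferometry, importance sampling, and voxel mesh of the real platform, and the optical properties are illustrative, not tissue values.

```python
import math
import random

# Toy 1-D photon random walk through a scattering slab: the kind of
# Monte Carlo photon transport underlying an OCT simulator, heavily
# simplified. Optical coefficients are illustrative, not tissue values.

random.seed(1)
MU_S, MU_A, DEPTH = 10.0, 0.1, 1.0   # scattering, absorption (1/mm), slab (mm)

def backscattered_fraction(n_photons=5000):
    """Weighted fraction of photons exiting back through the surface."""
    returned = 0.0
    for _ in range(n_photons):
        z, direction, weight = 0.0, 1, 1.0
        while True:
            # Free path sampled from the exponential distribution.
            step = -math.log(1.0 - random.random()) / (MU_S + MU_A)
            z += direction * step
            if z < 0.0:
                returned += weight           # back toward the detector
                break
            if z > DEPTH:
                break                        # transmitted through the slab
            weight *= MU_S / (MU_S + MU_A)   # absorption reduces weight
            if weight < 1e-2:
                break                        # terminate negligible photons
            if random.random() < 0.5:        # isotropic 1-D scatter: flip
                direction = -direction
    return returned / n_photons

frac = backscattered_fraction()
print(f"backscattered fraction ≈ {frac:.2f}")
```

Importance sampling and photon splitting, as used in the real simulator, bias walks like these toward detector-reaching paths and reweight them, which is where the large speedups come from.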

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do quite well. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the length of the different layers and thereby reconstructs the ground-truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
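The two-stage committee-of-experts idea can be sketched in miniature: a classifier first picks the structure type, then a structure-specific regressor predicts the layer quantities. The models below are trivial stand-ins (nearest centroid plus hand-written linear maps), not the trained models of the thesis, and all names and numbers are hypothetical.

```python
# Committee-of-experts sketch: stage 1 classifies the structure of an
# input feature vector; stage 2 hands it to a regressor trained for
# that structure. All models, labels, and numbers are toy stand-ins.

def nearest_centroid(x, centroids):
    """Classify feature vector x by its nearest class centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Per-structure "experts" mapping features to layer sizes (stand-ins):
experts = {
    "two_layer": lambda x: [x[0] * 0.5, x[1] * 0.5],
    "three_layer": lambda x: [x[0] * 0.3, x[1] * 0.3, x[2] * 0.3],
}
centroids = {
    "two_layer": (1.0, 1.0, 0.0),
    "three_layer": (1.0, 1.0, 1.0),
}

def predict(x):
    structure = nearest_centroid(x, centroids)   # stage 1: classify
    return structure, experts[structure](x)      # stage 2: regress

print(predict((0.9, 1.1, 0.1)))
```

Training each expert only on data of its own structure type is what lets the second stage specialize, mirroring the specifically designed data sets described above.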

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: the lower half of an OCT image (i.e., greater depth) could previously hardly be seen, but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only successful but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.