8 results for Infrared wavelengths

in CaltechTHESIS


Relevance: 40.00%

Abstract:

The purpose of this thesis is to present new observations of thermal-infrared radiation from asteroids. Stellar photometry was performed to provide standards for comparison with the asteroid data. The details of the photometry and the data reduction are discussed in Part 1. A system of standard stars is derived for wavelengths of 8.5, 10.5 and 11.6 µm and a new calibration is adopted. Sources of error are evaluated and comparisons are made with the data of other observers.

The observations and analysis of the thermal-emission observations of asteroids are presented in Part 2. Thermal-emission lightcurve and phase effect data are considered. Special color diagrams are introduced to display the observational data. These diagrams are free of any model-dependent assumptions and show that asteroids differ in their surface properties.

On the basis of photometric models, (4) Vesta is thought to have a bolometric Bond albedo of about 0.1, an emissivity greater than 0.7, and a true radius close to the model value of 300^(+50)_(-30) km. Model albedos and model radii are given for asteroids 1, 2, 4, 5, 6, 7, 15, 19, 20, 27, 39, 44, 68, 80, 324 and 674. The asteroid (324) Bamberga is extremely dark, with a model (~bolometric Bond) albedo in the 0.01–0.02 range, which is thought to be the lowest albedo yet measured for any solar-system body. The crucial question about such low-albedo asteroids is their number and the distribution of their orbits.
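As a rough illustration of how model albedos and radii of this kind are obtained (a generic radiometric argument, not the specific photometric model of this thesis), the reflected visible flux and the emitted thermal flux both scale with the square of the radius R but depend oppositely on the albedo:

```latex
% Generic radiometric relations (illustrative only, not the thesis' model):
% reflected and thermal fluxes both scale as R^2 but depend oppositely on albedo.
F_{\mathrm{vis}} \propto p\,R^{2}, \qquad
F_{\mathrm{IR}} \propto (1 - A)\,R^{2}, \qquad A = p\,q ,
```

where p is the geometric albedo, q the phase integral, A the bolometric Bond albedo, and R the radius; measuring both fluxes therefore constrains the albedo and the radius separately.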

Relevance: 20.00%

Abstract:

The unique structure and properties of brush polymers have led to increased interest in them within the scientific community. This thesis describes studies on the self-assembly of these brush polymers.

Chapter 2 describes a study on the rapid self-assembly of brush block copolymers into nanostructures with photonic bandgaps spanning the entire visible spectrum, from ultraviolet to near infrared. Linear relationships are observed between the peak wavelengths of reflection and polymer molecular weights. This work enables "bottom-up" fabrication of photonic crystals with application-tailored bandgaps, through synthetic control of the polymer molecular weight and the method of self-assembly.
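For orientation, the first-order reflection peak of a lamellar (one-dimensional) photonic crystal at normal incidence is set by the optical thickness of one period, λ ≈ 2(n_A d_A + n_B d_B), which is why the peak wavelength tracks the domain spacing, and hence the molecular weight, linearly. The sketch below is purely illustrative; the refractive indices and domain sizes are placeholders, not values from the thesis.

```python
# Illustrative sketch: first-order reflection peak of a 1D (lamellar) photonic
# crystal at normal incidence, lambda ~ 2*(n_A*d_A + n_B*d_B).
# The refractive indices and domain sizes are made-up example values,
# not measurements from the thesis.

def peak_wavelength_nm(d_a_nm, d_b_nm, n_a=1.50, n_b=1.55):
    """Approximate first-order Bragg reflection wavelength in nm."""
    return 2.0 * (n_a * d_a_nm + n_b * d_b_nm)

if __name__ == "__main__":
    # Doubling the domain spacing doubles the peak wavelength,
    # consistent with a linear dependence on molecular weight.
    for d in (50, 100, 150):           # nm per block domain
        print(d, peak_wavelength_nm(d, d))
```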

Chapter 3 details the analysis of the self-assembly of symmetrical brush block copolymers in bulk and in thin films. Highly ordered lamellae with domain spacings ranging from 20 to 240 nm are obtained by varying the molecular weight of the backbone. The relationship between the degree of polymerization and the domain spacing is reported, and evidence is provided for how rapidly the brush block copolymers self-assemble and reach thermodynamic equilibrium.
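The reported dependence of domain spacing d on backbone degree of polymerization N is commonly summarized as a power law, d ∝ N^α, which can be extracted from a log-log fit. The sketch below uses synthetic placeholder data, not the measurements reported in the thesis.

```python
# Illustrative sketch: fit a power law d = k * N**alpha to (N, d) data on
# log-log axes. The data points below are synthetic placeholders, not the
# measurements reported in the thesis.
import numpy as np

N = np.array([100, 200, 400, 800])        # backbone degree of polymerization
d = np.array([30.0, 58.0, 118.0, 235.0])  # lamellar domain spacing in nm

alpha, log_k = np.polyfit(np.log(N), np.log(d), 1)
print(f"exponent alpha ~ {alpha:.2f}, prefactor k ~ {np.exp(log_k):.2f} nm")
```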

Chapter 4 describes investigations into where morphology transitions take place as the volume fraction of each block is varied in asymmetrical brush block copolymers. Imaging techniques are used to observe a transition from a lamellar to a cylindrical morphology as the volume fraction of one of the blocks exceeds 70%. It is also shown that the asymmetric brush block copolymers can be kinetically trapped in undulating lamellar structures by drop-casting the samples.

Chapter 5 explores the capability of macromolecules to interdigitate into densely grafted molecular brush copolymers using stereocomplex formation as a driving force. The stereocomplex formation between complementary linear polymers and brush copolymers is demonstrated, while the stereocomplex formation between complementary brush copolymers is shown to be restricted.

Relevance: 20.00%

Abstract:

From studies of protoplanetary disks to extrasolar planets and planetary debris, we aim to understand the full evolution of a planetary system. Observational constraints from ground- and space-based instrumentation allow us to measure the properties of objects near and far, and are central to developing this understanding. We present here three observational campaigns that, when combined with theoretical models, reveal characteristics of different stages and remnants of planet formation. The Kuiper Belt provides evidence of chemical and dynamical activity that reveals clues to its primordial environment and subsequent evolution. Large samples of this population can only be assembled at optical wavelengths, with thermal measurements at infrared and sub-mm wavelengths currently available for only the largest and closest bodies. Here we precisely measure the size and shape of one particular object, in hopes of better understanding its unique dynamical history and layered composition.

Molecular organic chemistry is one of the most fundamental and widespread facets of the universe, and plays a key role in planet formation. A host of carbon-containing molecules vibrationally emit in the near-infrared when excited by warm gas (T ~ 1000 K). The NIRSPEC instrument at the W.M. Keck Observatory is uniquely configured to study large ranges of this wavelength region at high spectral resolution. Using this facility we present studies of warm CO gas in protoplanetary disks, with a new code for precise excitation modeling. A parameterized suite of models demonstrates the capabilities of the code and matches observational constraints such as line strength and shape. We also use the models to probe various disk parameters, and the approach is easily extensible to other species with known disk emission spectra, such as water, carbon dioxide, acetylene, and hydrogen cyanide.
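To give a sense of the kind of calculation involved, under LTE the relative populations of CO rotational levels at T ≈ 1000 K follow a Boltzmann distribution, and these populations (together with line strengths) set the relative intensities of the rovibrational lines. The sketch below uses approximate textbook constants and is not the excitation code described in the abstract.

```python
# Minimal LTE sketch: relative populations of CO rotational levels in the
# ground vibrational state at T ~ 1000 K, which help set the relative
# intensities of the v = 1-0 rovibrational lines.
# Constants are approximate textbook values; this is NOT the excitation code
# described in the abstract.
import numpy as np

B_CM = 1.93          # CO rotational constant, cm^-1 (approx.)
K_CM = 0.6950        # Boltzmann constant in cm^-1 per K
T = 1000.0           # gas temperature in K

J = np.arange(0, 40)
E_rot = B_CM * J * (J + 1)                     # rotational energies, cm^-1
pop = (2 * J + 1) * np.exp(-E_rot / (K_CM * T))
pop /= pop.sum()                               # normalized level populations

print("most populated J:", J[np.argmax(pop)])
```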

Lastly, the existence of molecules in extrasolar planets can also be studied with NIRSPEC and reveals a great deal about the evolution of the protoplanetary gas. The species we observe in protoplanetary disks are also often present in exoplanet atmospheres, and are abundant in Earth's atmosphere as well. Thus, a sophisticated telluric removal code is necessary to analyze these high-dynamic-range, high-resolution spectra. We present observations of a hot Jupiter, revealing water in its atmosphere and demonstrating a new technique for exoplanet mass determination and atmospheric characterization. We will also apply this atmospheric removal code to the aforementioned disk observations, to improve our data analysis and probe less abundant species. Guiding models with observations is the only way to develop an accurate understanding of the timescales and processes involved. The futures of both the modeling and the observations are bright, and the end goal of a unified model of planet formation will require both theory and data, drawn from a diverse collection of sources.
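The high-resolution technique used for such detections is often implemented as a cross-correlation of the telluric-corrected spectra with a model template over trial radial velocities. The sketch below is schematic, with a synthetic spectrum and template rather than the data or pipeline of the thesis.

```python
# Schematic sketch of the cross-correlation step used in high-resolution
# exoplanet spectroscopy: shift a model template over trial radial velocities
# and correlate it with the (telluric-corrected) observed spectrum. The
# spectrum and template here are synthetic placeholders, not the thesis' data.
import numpy as np

C_KMS = 299792.458                        # speed of light, km/s

wave = np.linspace(2.29, 2.35, 4000)      # wavelength grid, microns
template = 1.0 - 0.05 * np.exp(-((wave - 2.32) / 1e-4) ** 2)   # one fake line

true_rv = 30.0                            # km/s, injected planetary shift
observed = np.interp(wave, wave * (1 + true_rv / C_KMS), template)
observed = observed + np.random.normal(0.0, 0.002, wave.size)  # photon noise

rv_grid = np.arange(-100.0, 101.0, 1.0)
ccf = [np.corrcoef(observed,
                   np.interp(wave, wave * (1 + rv / C_KMS), template))[0, 1]
       for rv in rv_grid]
print("CCF peaks near RV =", rv_grid[int(np.argmax(ccf))], "km/s (injected: 30)")
```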

Relevance: 20.00%

Abstract:

Part I

The spectrum of dissolved mercury atoms in simple liquids has been shown to reveal information about local structure in these liquids.

Part II

Infrared intensity perturbations in simple solutions have been shown to involve more detailed interactions than dielectric polarization alone. No correlation has been found between frequency shifts and intensity enhancements.

Part III

Evidence for perturbed rotation of HCl in rare-gas matrices has been found. The magnitude of the barrier to rotation is concluded to be on the order of 30 cm^(-1).

Relevance: 20.00%

Abstract:

A substantial amount of important scientific information is contained in astronomical data at submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, this spectral region remains among the least explored in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable effort has been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes.

The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet requirements on field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors based on kinetic inductance detector (KID) technology.
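KIDs lend themselves to large arrays because they are multiplexed in the frequency domain: each detector is a superconducting microresonator with its own resonant frequency, so thousands of detectors can share a single readout line driven with a comb of probe tones. The toy sketch below illustrates the idea with made-up resonator parameters; it is not the MUSIC readout implementation.

```python
# Toy sketch of frequency-domain multiplexing with kinetic inductance
# detectors: each detector is a superconducting resonator with its own
# resonant frequency, so many detectors share one feedline and are read out
# by monitoring the complex transmission S21 at a comb of probe tones.
# Parameters are illustrative; this is not the MUSIC readout implementation.
import numpy as np

def s21(f, f0, q_r=20e3, q_c=40e3):
    """Idealized notch-type resonator transmission near resonance."""
    x = (f - f0) / f0
    return 1.0 - (q_r / q_c) / (1.0 + 2j * q_r * x)

# A small "array": eight resonators spaced by 2 MHz around 3 GHz.
f_res = 3.0e9 + 2.0e6 * np.arange(8)
tones = f_res.copy()                       # probe each detector at its own tone

dark = np.array([s21(ft, f0) for ft, f0 in zip(tones, f_res)])
# Absorbed submillimeter power shifts each resonance slightly downward,
# which appears as a change in the measured transmission at the fixed tone.
lit = np.array([s21(ft, f0 - 2.0e3) for ft, f0 in zip(tones, f_res)])

print("per-detector |S21| change:", np.round(np.abs(lit) - np.abs(dark), 4))
```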

This dissertation presents the development of a KID-based instrument including a portion of the millimeter-wave bandpass filters and all aspects of the readout electronics, which together enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this dissertation has been implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), a new instrument for the Caltech Submillimeter Observatory (CSO).

Relevance: 20.00%

Abstract:

Spectral data are presented, giving intensities of the Brackett γ (B7) line at six positions in M 42 and of the Brackett ten through fourteen (B10-B14) lines plus the He 4d ^3D - 3p ^3P^0 line at three positions in M 42. Observations of the Brackett γ line are also given for the planetary nebulae NGC 7027 and IC 418. Brackett γ is shown to exhibit an anomalous satellite line in NGC 7027. Broadband data are presented, giving intensities at effective wavelengths of 1.25, 1.65, 2.2, 3.5 and 4.8 μm for three positions in M 42.

Comparisons with visual and radio data, as well as 12 micron and 20 micron data, are used to derive reddening, electron temperatures, and electron densities for M 42 and the two planetaries, as well as a helium abundance for M 42. A representative electron temperature of 8400 ± 1000 K, an electron density of (1.5 ± 0.1) × 10^3 cm^(-3), and a He/H number density ratio of 0.10 ^(+0.10)_(-0.05) are derived for the central region of M 42. The electron temperature is found to increase slightly with distance from the Trapezium.
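As a generic illustration of how such reddening estimates work (with placeholder numbers, not values from the thesis): comparing an observed line ratio with the ratio predicted by recombination theory yields the differential extinction, in magnitudes, between the two wavelengths involved.

```python
# Generic sketch of a reddening estimate from a line ratio: the differential
# extinction (in magnitudes) between two wavelengths follows from the deficit
# of the observed ratio relative to the ratio predicted by recombination
# theory. The numbers below are placeholders, not values from the thesis.
import math

predicted_ratio = 2.8   # intrinsic ratio of two recombination lines (example)
observed_ratio = 2.1    # measured ratio, suppressed by dust along the sightline

delta_a_mag = 2.5 * math.log10(predicted_ratio / observed_ratio)
print(f"differential extinction ~ {delta_a_mag:.2f} mag")
```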

M 42 is shown to emit in excess of the predicted recombination radiation throughout the entire infrared spectrum. The variations in the excess with wavelength and with position are analyzed to determine which of several physical processes may be operating. The longer-wavelength infrared excess is shown to be dominated by dust emission, while the shorter-wavelength infrared excess is caused by dust scattering. The dust grains are shown to be larger than average interstellar particles. A new feature of the Orion red star ORS-1 is found: it appears to be surrounded by a reflection nebula.

Relevance: 20.00%

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects or significantly shallower. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane, d log R_e / d log M, with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshift, yet is broadly consistent with minor mergers being the main process by which individual objects grow in mass and size.
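The dynamical masses referred to here are typically estimated from the measured velocity dispersion σ and effective radius R_e through a virial-style relation, M_dyn ≈ β σ² R_e / G with β of order 5. The sketch below uses illustrative numbers, not measurements from the thesis.

```python
# Minimal sketch of a virial-style dynamical mass estimate,
# M_dyn ~ beta * sigma^2 * R_e / G, as commonly used for quiescent galaxies.
# sigma and R_e below are illustrative, not measurements from the thesis.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # kiloparsec, m

def dynamical_mass_msun(sigma_kms, r_e_kpc, beta=5.0):
    """Virial-style mass estimate in solar masses."""
    sigma = sigma_kms * 1e3          # km/s -> m/s
    r_e = r_e_kpc * KPC              # kpc -> m
    return beta * sigma**2 * r_e / G / M_SUN

# A dispersion of 250 km/s and R_e of 3 kpc give roughly 2e11 solar masses.
print(f"{dynamical_mass_msun(250.0, 3.0):.2e} Msun")
```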

By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, in which larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies that have recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.

Relevance: 20.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at these wavelengths can penetrate only a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground truth of the simulated images, because we specify that structure at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
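For intuition, the sketch below shows a drastically simplified, one-dimensional photon-packet Monte Carlo of the kind that underlies such simulators: exponential free paths, weight reduction at each interaction, and a record of backscattered weight versus depth (an A-scan-like profile). It is not the thesis' simulator and omits importance sampling, photon splitting, the voxel mesh, and coherence effects.

```python
# Drastically simplified one-dimensional photon-packet Monte Carlo, to show
# the kind of transport calculation behind OCT simulators: sample exponential
# free paths, reduce the packet weight at each interaction, and histogram the
# deepest point visited by packets that escape back through the surface (an
# A-scan-like profile). This is NOT the thesis' simulator; it omits importance
# sampling, photon splitting, the voxel mesh, and coherence gating.
import math
import random

MU_T = 10.0            # total interaction coefficient, 1/mm (illustrative)
ALBEDO = 0.9           # single-scattering albedo (scattering / total)
N_PHOTONS = 50_000
BIN_MM = 0.05
N_BINS = 40            # profile out to 2 mm depth
profile = [0.0] * N_BINS

for _ in range(N_PHOTONS):
    z, direction, weight, z_max = 0.0, 1.0, 1.0, 0.0
    while weight > 1e-4:
        z += direction * (-math.log(1.0 - random.random()) / MU_T)  # free path
        if z < 0.0:                                  # escaped back out: record it
            profile[min(int(z_max / BIN_MM), N_BINS - 1)] += weight
            break
        z_max = max(z_max, z)
        weight *= ALBEDO                             # partial absorption
        direction = 1.0 if random.random() < 0.5 else -1.0    # 1D "scattering"

peak = max(profile)
print("relative backscatter in the first 5 depth bins:",
      [round(p / peak, 2) for p in profile[:5]])
```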

Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow us to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do this remarkably well. For simple structures we are able to reconstruct the ground truth of an OCT image with more than 98% accuracy, and for more complicated structures (e.g., a multi-layered brain structure) we reach about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the dimensions of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
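The classify-then-regress pipeline described above can be illustrated schematically: a classifier first predicts the structural class of an image, and a regressor trained only on that class then predicts the layer dimensions. The sketch below uses random placeholder features rather than simulated OCT images, and is not the thesis' model.

```python
# Schematic "committee of experts" sketch: a classifier routes each sample to a
# class-specific regressor, mirroring the classify-then-regress pipeline
# described in the text. Features and targets are random placeholders, not
# simulated OCT images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))                  # stand-in image features
structure = rng.integers(0, 3, size=600)        # structural class (e.g., layer count)
layer_dims = rng.uniform(1, 10, size=(600, 4))  # stand-in per-layer dimensions

router = RandomForestClassifier(n_estimators=50).fit(X, structure)
experts = {c: RandomForestRegressor(n_estimators=50).fit(X[structure == c],
                                                         layer_dims[structure == c])
           for c in np.unique(structure)}

def predict(x_row):
    """Route one sample through the classifier, then the matching regressor."""
    c = router.predict(x_row.reshape(1, -1))[0]
    return c, experts[c].predict(x_row.reshape(1, -1))[0]

print(predict(X[0]))
```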

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., greater depths), which could previously hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, noisy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis represents the first successful attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.