5 results for full-width at half-maximum

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Adaptive optics (AO) corrects distortions created by atmospheric turbulence and delivers diffraction-limited images on ground-based telescopes. The vastly improved spatial resolution and sensitivity have been used to study everything from the magnetic fields of sunspots to the internal dynamics of high-redshift galaxies. This thesis, on AO science with small and large telescopes, is divided into two parts: Robo-AO and magnetar kinematics.

In the first part, I discuss the construction and performance of the world's first fully autonomous visible-light AO system, Robo-AO, at the Palomar 60-inch telescope. Robo-AO operates extremely efficiently, with an overhead of less than 50 s, typically observing about 22 targets every hour. We have performed large AO programs, observing a total of over 7,500 targets since May 2012. In the visible band, the images have a Strehl ratio of about 10% and achieve a contrast of up to 6 magnitudes at a separation of 1″. The full width at half maximum (FWHM) achieved is 110–130 milliarcseconds. I describe how Robo-AO is used to constrain evolutionary models of low-mass pre-main-sequence stars by measuring resolved spectral energy distributions of stellar multiples in the visible band, more than doubling the current sample. I conclude this part with a discussion of possible future improvements to the Robo-AO system.
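As a sanity check on those numbers, the short sketch below compares the reported FWHM to the diffraction limit of the Palomar 60-inch. The observing wavelength (600 nm) and exact aperture diameter are illustrative assumptions, not values quoted in the abstract.

```python
import math

# Illustrative assumptions: mid-visible observing wavelength and the
# 60-inch (1.52 m) aperture; neither number is quoted in the abstract.
wavelength = 600e-9  # m
diameter = 1.52      # m

# Diffraction-limited FWHM of a circular aperture: ~1.03 * lambda / D (rad).
fwhm_rad = 1.03 * wavelength / diameter
fwhm_mas = math.degrees(fwhm_rad) * 3600 * 1e3  # milliarcseconds

print(f"Diffraction-limited FWHM: {fwhm_mas:.0f} mas")  # ~84 mas
# The reported 110-130 mas FWHM is within a factor of ~1.5 of the
# diffraction limit, i.e., genuinely AO-corrected visible-band imaging.
```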

In the second part, I describe a study of magnetar kinematics using high-resolution near-infrared (NIR) AO imaging from the 10-meter Keck II telescope. By measuring the proper motions of five magnetars with a precision as fine as 0.7 milliarcseconds/yr, we have more than tripled the previously known sample of magnetar proper motions and shown that magnetar kinematics are equivalent to those of radio pulsars. We conclusively showed that SGR 1900+14 and SGR 1806-20 were ejected from the stellar clusters with which they were traditionally associated. The inferred kinematic ages of these two magnetars are 6±1.8 kyr and 650±300 yr, respectively, a factor of three to four greater than their characteristic ages. The implied braking index is close to unity, as compared with three for the vacuum dipole model and the 2.5–2.8 measured for young pulsars. I conclude this part by describing a search for NIR counterparts of new magnetars and the promise of future polarimetric investigation of magnetars' NIR emission mechanism.
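The age discrepancy quoted above can be turned into an effective braking index with a standard spin-down argument. The sketch below assumes the usual power-law spin-down with a birth period much shorter than the current period; only the factor of three to four from the abstract is used, not the actual timing data.

```python
# Illustrative numbers only: the factor of 3-4 between kinematic and
# characteristic ages is taken from the abstract; P and P_dot are not.
# Assume power-law spin-down Omega_dot ~ -Omega**n with birth period
# much shorter than the current period, so the true age is
#   t = P / ((n - 1) * P_dot),   while   tau_c = P / (2 * P_dot).
# Eliminating P/P_dot gives n = 1 + 2 * (tau_c / t).

def braking_index(age_ratio):
    """Effective braking index n given age_ratio = t_kin / tau_c."""
    return 1.0 + 2.0 / age_ratio

for ratio in (3.0, 4.0):
    print(f"t_kin/tau_c = {ratio:.0f}  ->  n = {braking_index(ratio):.2f}")
# n ~ 1.5-1.7: much closer to unity than to the vacuum-dipole value of 3
# or the 2.5-2.8 measured for young pulsars.
```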

Relevance: 30.00%

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern- and southern-sourced deep waters was markedly greater than in the modern ocean. Strong density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to what we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a strongly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean, sea ice, and ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperatures. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods, and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
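As a concrete illustration of the kind of Bayesian MCMC inversion described above, here is a minimal Metropolis-Hastings sampler for a toy version of the problem: a sediment column that initially held LGM bottom-water chloride, with the boundary switched to the modern value a time t ago, using the error-function solution of the diffusion equation as the forward model. The diffusivity, depths, concentrations, and noise level are illustrative assumptions, not values from the thesis (which recovers full deglacial histories, not a single step).

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

# Toy setup (all values are illustrative assumptions, not thesis data)
D = 1e-10 * 3.15e7           # chloride diffusivity, m^2/yr (~1e-10 m^2/s)
z = np.linspace(1, 100, 25)  # sample depths below the seafloor, m
C_now = 546.0                # modern bottom-water chloride, mmol/kg

def forward(C_lgm, t):
    """Pore-fluid profile a time t (yr) after the LGM -> modern switch."""
    return C_now + (C_lgm - C_now) * erf(z / (2.0 * np.sqrt(D * t)))

# Synthetic "observations": true parameters plus measurement noise
obs = forward(565.0, 15000.0) + rng.normal(0.0, 0.5, z.size)

def log_post(theta):
    C_lgm, t = theta
    if not (C_now < C_lgm < 600.0 and 5e3 < t < 3e4):  # flat priors
        return -np.inf
    return -0.5 * np.sum(((obs - forward(C_lgm, t)) / 0.5) ** 2)

theta = np.array([555.0, 10000.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.5, 300.0])  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                    # discard burn-in

print("C_lgm:", chain[:, 0].mean(), "+/-", chain[:, 0].std())
print("t    :", chain[:, 1].mean(), "+/-", chain[:, 1].std())
```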

Relevance: 30.00%

Abstract:

The response of linear, viscously damped systems to excitations having time-varying frequency is the subject of exact and approximate analyses, supplemented by an analog computer study of single-degree-of-freedom system response to excitations whose frequencies depend linearly and exponentially on time.

The technique of small perturbations and the methods of stationary phase and saddle-point integration, as well as a novel bounding procedure, are used to derive approximate expressions characterizing the system response envelope, particularly near resonances, for a general time-varying excitation frequency.

Descriptive measurements of system resonant behavior recorded during the analog study (maximum response, the excitation frequency at which maximum response occurs, and the width of the response peak at the half-power level) are investigated to determine their dependence on natural frequency, damping, and the functional form of the excitation frequency.

The laboratory problem of determining the properties of a physical system from records of its response to excitations of this class is considered, and the transient phenomenon known as “ringing” is treated briefly.

It is shown that system resonant behavior, as portrayed by the above measurements and expressions, is relatively insensitive to the specifics of the excitation frequency-time relation and may be described to good order in terms of parameters combining system properties with the time derivative of excitation frequency evaluated at resonance.

One of these parameters is shown to be useful for predicting whether a given excitation having a time-varying frequency will produce strong or only subtle changes in the response envelope of a given system relative to the steady-state response envelope. The parameter is additionally shown to be useful for predicting whether a particular response record will exhibit the "ringing" phenomenon.
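A numerical stand-in for the analog study is easy to set up. The sketch below integrates a single-degree-of-freedom system driven at a linearly swept frequency and reads the three resonance measurements (peak response, excitation frequency at the peak, half-power width) off the response envelope; the natural frequency, damping ratio, and sweep rate are illustrative choices, not thesis values.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import hilbert

# Illustrative parameters (not from the thesis)
wn, zeta = 2 * np.pi * 10.0, 0.02  # natural frequency (rad/s), damping ratio
alpha = 2 * np.pi * 0.5            # sweep rate (rad/s^2): 0.5 Hz per second

def rhs(t, y):
    """x'' + 2*zeta*wn*x' + wn^2*x = cos(phi(t)), with phi'(t) = alpha*t."""
    x, v = y
    return [v, np.cos(0.5 * alpha * t**2) - 2 * zeta * wn * v - wn**2 * x]

t = np.linspace(0.0, 40.0, 40000)  # sweep covers 0-20 Hz
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0], t_eval=t, max_step=1e-3)

env = np.abs(hilbert(sol.y[0]))      # response envelope via the analytic signal
f_exc = alpha * t / (2 * np.pi)      # instantaneous excitation frequency (Hz)
i_pk = env.argmax()
half = env > env[i_pk] / np.sqrt(2)  # half-power (1/sqrt(2) amplitude) level

print("peak response          :", env[i_pk])
print("excitation freq at peak:", f_exc[i_pk], "Hz")  # slightly above 10 Hz
print("half-power width       :", f_exc[half].max() - f_exc[half].min(), "Hz")
# Note: the half-power mask may extend into the post-resonance "ringing",
# itself one of the phenomena the thesis studies.
```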

Relevance: 30.00%

Abstract:

Experimental and theoretical studies have been made of the electrothermal waves occurring in a nonequilibrium MHD plasma. These waves are caused by an instability that arises when a plasma whose conductivity depends on current density is subjected to crossed electric and magnetic fields. Theoretically, the waves were studied by developing and solving the equations of a steady, one-dimensional nonuniformity in electron density. From these nonlinear equations, predictions of the maximum amplitude and half width of steady waves could be obtained. Experimentally, the waves were studied in a nonequilibrium discharge produced in a potassium-seeded argon plasma at 2000 K and 1 atm pressure. The behavior of such a discharge with four different electrode configurations was determined from photographs, photomultiplier measurements, and voltage probes. These four configurations were chosen to produce steady waves, to check the stability of steady waves, and to observe the manifestation of the waves in an MHD generator or accelerator configuration.

Steady, one-dimensional waves were found to exist in a number of situations, and where they existed, their characteristics agreed with the predictions of the steady theory. Some extensions of this theory were necessary, however, to describe the transient phenomena occurring in the inlet region of a discharge transverse to the gas flow. It was also found that, in a discharge away from the stabilizing effect of the electrodes, steady waves became unstable at large Hall parameters. Methods for predicting the effective electrical conductivity and Hall parameter of a plasma with nonuniformities caused by the electrothermal waves were also studied. Using these methods and the amplitudes predicted by the steady theory, the measured decrease in transverse conductivity of an MHD device, 50 per cent at a Hall parameter of 5, could be accounted for in terms of the electrothermal instability.

Relevance: 30.00%

Abstract:

Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it takes either a real medical experiment or days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpreting an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The takeaway message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). OCT uses infrared light for illumination to stay noninvasive, but the downside is that photons at such long wavelengths can penetrate only a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all explained in detail later in the thesis.
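For orientation, here is a deliberately stripped-down Monte Carlo of the signal-formation process just described: photons random-walk through a homogeneous scattering slab and contribute to an A-scan when they exit back through the surface, with depth gated by round-trip path length in analogy to coherence gating. The isotropic phase function and all parameter values are simplifying assumptions, and the importance-sampling/photon-splitting machinery that makes the thesis simulator fast is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative tissue-like optical properties (not thesis values)
mu_s, mu_a = 10.0, 0.1                     # scattering/absorption coeff, 1/mm
n_photons, nbins, zmax = 200_000, 60, 3.0  # photons, A-scan bins, max depth (mm)
ascan = np.zeros(nbins)

for _ in range(n_photons):
    z, cz, w, path = 0.0, 1.0, 1.0, 0.0  # depth, z-direction cosine, weight, path
    while True:
        step = rng.exponential(1.0 / mu_s)  # free path between scatterings
        z += cz * step
        path += step
        w *= np.exp(-mu_a * step)           # absorption as weight decay
        if z <= 0.0:                        # exited the surface: detected
            depth = 0.5 * path              # round-trip path -> gated depth
            if depth < zmax:
                ascan[int(depth / zmax * nbins)] += w
            break
        if z > zmax or w < 1e-4:            # lost deep in the tissue
            break
        cz = rng.uniform(-1.0, 1.0)         # isotropic scattering (a toy stand-in
                                            # for forward-peaked tissue scattering)

print(ascan / ascan.max())  # signal falls off with depth: the depth problem
```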

Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do very well: for simple structures we reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that structure, which predicts the thicknesses of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
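The committee-of-experts hierarchy can be sketched in a few lines, assuming scikit-learn and synthetic (image, truth) pairs such as the simulator provides: a classifier first predicts the structure class, then a regressor trained only on that class predicts the layer thicknesses. The features and models below are placeholders, not the thesis's actual architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(2)

# Placeholder synthetic data standing in for simulator output
n, d, n_classes = 2000, 64, 3
X = rng.normal(size=(n, d))              # stand-in for OCT image features
struct = rng.integers(0, n_classes, n)   # ground-truth structure class
thick = rng.uniform(0.1, 1.0, (n, 2))    # ground-truth layer thicknesses (mm)

# Stage 1: one classifier decides which structure an image contains
clf = RandomForestClassifier(n_estimators=100).fit(X, struct)

# Stage 2: one regression "expert" per structure class, trained on that
# class's examples only (multi-output regression over layer thicknesses)
experts = {c: RandomForestRegressor(n_estimators=100).fit(X[struct == c],
                                                          thick[struct == c])
           for c in range(n_classes)}

def reconstruct(x):
    """Classify the structure, then hand off to that class's expert."""
    c = int(clf.predict(x[None])[0])
    return c, experts[c].predict(x[None])[0]

print(reconstruct(X[0]))
```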

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals making up the lower half of the image are weak, noisy, and uninterpretable to the human eye, they still carry enough information that, when fed into a well-trained machine learning model, they yield precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis represents not only a successful but also the first attempt to reconstruct OCT images at the pixel level. Even attempting such a task requires fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.