9 results for Simulated experiment in CaltechTHESIS


Relevance:

20.00%

Abstract:

Cosmic birefringence (CB)---a rotation of the photon-polarization plane in vacuum---is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.

In this thesis, we develop a minimum-variance-estimator formalism for reconstructing direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between CMB temperature and the rotation for detecting the CB signal, for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.

We then consider other parity-violating physics beyond standard models---in particular, a chiral inflationary-gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.

We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.

In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
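
As a minimal illustration of these statistics, a one-point PDF and a difference PDF can be computed from any simulated brightness-temperature map with a pair of histograms. The map below is a random stand-in, not an actual reionization simulation, and all numerical values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated 21-cm brightness-temperature map (mK); in practice
# this would come from a reionization simulation, not random numbers.
T21 = rng.normal(20.0, 5.0, size=(64, 64))

# One-point PDF: normalized histogram of pixel brightness temperatures.
pdf, edges = np.histogram(T21, bins=40, density=True)

# Difference PDF: distribution of T(x) - T(x + dx) for a fixed pixel lag,
# here a horizontal shift of 4 pixels (periodic boundaries assumed).
lag = 4
diff = T21 - np.roll(T21, lag, axis=1)
diff_pdf, diff_edges = np.histogram(diff, bins=40, density=True)

# Both histograms integrate to ~1 by construction (density=True).
print(np.sum(pdf * np.diff(edges)))
```

Repeating the difference PDF over a range of lags probes how bubble size imprints on the map, which is the sensitivity discussed above.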

Finally, we discuss prospects for related future investigations.

Relevance:

20.00%

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip-type kinetic laws. We observe that, even in this simple setting, with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, examining the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several other factors.
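
The explicit half of a staggered update of this kind can be sketched as a threshold-type kinetic law: the internal variable evolves only when its thermodynamic driving force exceeds a critical value. The sketch below is a schematic of such a stick-slip update with hypothetical parameter values, not the thesis's actual implementation:

```python
import numpy as np

def stick_slip_update(vol_frac, driving_force, f_crit=1.0, mobility=0.1, dt=1e-3):
    """Explicit stick-slip update: the internal variable evolves only when
    the magnitude of the thermodynamic driving force exceeds a critical
    threshold f_crit; otherwise it 'sticks'."""
    excess = np.abs(driving_force) - f_crit
    rate = np.where(excess > 0.0, mobility * np.sign(driving_force) * excess, 0.0)
    return np.clip(vol_frac + dt * rate, 0.0, 1.0)  # volume fraction stays in [0, 1]

lam = np.array([0.2, 0.5, 0.8])   # martensite volume fractions (hypothetical)
f = np.array([0.5, 2.0, -3.0])    # driving forces: below, above, above threshold
print(stick_slip_update(lam, f))
```

The same threshold structure applies to the plastic-slip variables; in the full scheme this explicit step alternates with the implicit FFT equilibrium solve.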

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the $\alpha$--$\gamma$ and $\alpha$--$\varepsilon$ transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the $\alpha$--$\varepsilon$ model.

Relevance:

20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn inform the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
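
The EC2 criterion itself cuts edges between equivalence classes of hypotheses; the toy below illustrates only the generic adaptive loop (greedy test selection followed by a Bayesian posterior update), and it scores candidate binary tests by expected posterior entropy, i.e. the Information Gain criterion that EC2 is shown to outperform. All theories, tests, and likelihoods are hypothetical:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy(prior, lik_col):
    """Expected posterior entropy over theories after one binary test.
    lik_col[h] = P(response = 1 | theory h) for this test."""
    out = 0.0
    for joint in (prior * lik_col, prior * (1.0 - lik_col)):
        pz = joint.sum()
        if pz > 0:
            out += pz * entropy(joint / pz)
    return out

def choose_test(prior, lik):
    """Greedily pick the test minimizing expected posterior entropy.
    lik[h, t] = P(response = 1 | theory h, test t)."""
    scores = [expected_entropy(prior, lik[:, t]) for t in range(lik.shape[1])]
    return int(np.argmin(scores))

def update(prior, lik_col, response):
    post = prior * np.where(response == 1, lik_col, 1.0 - lik_col)
    return post / post.sum()

# Three hypothetical theories, two candidate tests: test 0 separates the
# theories, test 1 is uninformative, so the greedy rule picks test 0.
prior = np.array([1 / 3, 1 / 3, 1 / 3])
lik = np.array([[0.9, 0.5],
                [0.1, 0.5],
                [0.5, 0.5]])
t = choose_test(prior, lik)
post = update(prior, lik[:, t], response=1)
print(t, post)
```

In the actual BROAD procedure this select/observe/update loop runs over thousands of candidate lottery pairs, with EC2 replacing the entropy score.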

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
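
The competing discount functions can be compared directly. A minimal sketch, using the common (β, δ) parameterization for the quasi-hyperbolic model and hypothetical parameter values:

```python
import math

def exponential(delay, r=0.1):
    """Exponential discounting: constant per-period discount rate."""
    return math.exp(-r * delay)

def hyperbolic(delay, k=0.1):
    """Hyperbolic discounting: discount factor 1/(1 + k*delay)."""
    return 1.0 / (1.0 + k * delay)

def quasi_hyperbolic(delay, beta=0.7, delta=0.95):
    """'Present bias': immediate rewards are undiscounted; all future
    rewards carry an extra one-off factor beta < 1."""
    return 1.0 if delay == 0 else beta * delta ** delay

for d in (0, 1, 10):
    print(d, exponential(d), hyperbolic(d), quasi_hyperbolic(d))
```

Each theory predicts a different pattern of indifference between "smaller-sooner" and "larger-later" payoffs, which is what the adaptively chosen choice tests exploit.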

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than what its price elasticity explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
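
A minimal sketch of how reference dependence and loss aversion can enter a discrete (logit) choice model; the functional form and parameter values here are illustrative, not the model estimated in the thesis:

```python
import math

def loss_averse_value(price, reference_price, eta=0.05, lam=2.25):
    """Reference-dependent price term: paying less than the reference feels
    like a gain, paying more like a loss amplified by lam > 1 (the
    loss-aversion coefficient); eta scales price sensitivity. Values are
    hypothetical."""
    gap = reference_price - price
    gain_loss = gap if gap >= 0 else lam * gap
    return eta * gain_loss

def logit_choice_prob(values):
    """Standard multinomial-logit choice probabilities."""
    exps = [math.exp(v) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

# Two hypothetical substitute items sharing reference price 10: item A
# discounted to 8, item B priced at 12. Loss aversion penalizes B's
# overpricing more than it rewards A's equal-sized discount.
vals = [loss_averse_value(8.0, 10.0), loss_averse_value(12.0, 10.0)]
probs = logit_choice_prob(vals)
print(vals, probs)
```

Ending A's discount shifts the reference price and, through the asymmetric gain/loss term, produces the excess substitution toward B described above.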

In future work, BROAD can be applied widely to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

20.00%

Abstract:

The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10^6-10^8 s^-1) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) under these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega’s high-energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability in the form of ripple growth on single-mode ripples. The experimental platform is the “ride-along” laser compression recovery experiment, which provides a way to recover specimens after they have been subjected to high pressures. Six experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy is the amount of laser energy that impinges on the target. For each target, the growth factor is obtained by comparing the ripple profile before and after the experiment. With increasing energy, the growth factor increased.

Engineering simulations are used to interpret and correlate the measurements of growth factor to a measure of strength. In order to validate the engineering constitutive model for tantalum, a series of simulations are performed using the code Eureka, based on the Optimal Transportation Meshfree (OTM) method. Two different configurations are studied in the simulations: RM instabilities in single and multimode ripples. Six different simulations are performed for the single ripple configuration of the RM instability experiment, with drives corresponding to laser energies used in the experiments. Each successive simulation is performed at higher drive energy, and it is observed that with increasing energy, the growth factor increases. Overall, there is favorable agreement between the data from the simulations and the experiments. The peak growth factors from the simulations and the experiments are within 10% agreement. For the multimode simulations, the goal is to assist in the design of the laser driven experiments using the Omega laser. A series of three-mode and four-mode patterns are simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of the simulations, a configuration is selected for the multimode experiments. These simulations also serve as validation for the constitutive model and the material parameters for tantalum that are used in the simulations.
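
A growth factor of this kind reduces to an amplitude ratio between the pre- and post-shot ripple profiles. A minimal sketch with hypothetical sinusoidal profiles (real profiles come from surface metrology of the recovered targets):

```python
import numpy as np

def growth_factor(profile_before, profile_after):
    """Growth factor of a single-mode ripple: ratio of the ripple amplitude
    after loading to the initial amplitude (peak-to-valley based)."""
    amp0 = 0.5 * (profile_before.max() - profile_before.min())
    amp1 = 0.5 * (profile_after.max() - profile_after.min())
    return amp1 / amp0

x = np.linspace(0.0, 2.0 * np.pi, 256)
before = 1.0 * np.sin(x)   # hypothetical initial ripple, amplitude 1 (arb. units)
after = 3.5 * np.sin(x)    # hypothetical recovered profile, amplitude 3.5
print(growth_factor(before, after))
```

For a single-mode ripple the amplitude of the fundamental Fourier component could be used instead of the peak-to-valley measure; for multimode patterns the Fourier route generalizes mode by mode.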

By designing samples with initial perturbations in the form of single-mode and multimode ripples and subjecting these samples to high pressures, the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. By correlating the growth of these ripples to measures of strength, a better understanding of the strength of tantalum at high pressures is achieved.

Relevance:

20.00%

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
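
Before any corrections, the raw antiproton-to-proton ratios implied by these event counts follow directly (the top-of-atmosphere values quoted in the thesis differ because of instrumental and atmospheric background and losses):

```python
# Raw antiproton-to-proton ratios from the quoted at-instrument event counts.
counts = {
    "0.19-0.97 GeV": (3, 124405),
    "0.97-2.58 GeV": (8, 140617),
    "2.58-3.08 GeV": (5, 22524),
}
for band, (n_pbar, n_p) in counts.items():
    print(band, n_pbar / n_p)  # ~2.4e-5, ~5.7e-5, ~2.2e-4
```

With only 3-8 antiproton events per band, the quoted asymmetric uncertainties come from small-number Poisson statistics rather than a Gaussian sqrt(N) approximation.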

When these results are corrected for instrumental and atmospheric background and losses, the antiproton-to-proton ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance:

20.00%

Abstract:

This thesis describes investigations of two classes of laboratory plasmas with rather different properties: partially ionized low pressure radiofrequency (RF) discharges, and fully ionized high density magnetohydrodynamically (MHD)-driven jets. An RF pre-ionization system was developed to enable neutral gas breakdown at lower pressures and create hotter, faster jets in the Caltech MHD-Driven Jet Experiment. The RF plasma source used a custom pulsed 3 kW 13.56 MHz RF power amplifier that was powered by AA batteries, allowing it to safely float at 4-6 kV with the cathode of the jet experiment. The argon RF discharge equilibrium and transport properties were analyzed, and novel jet dynamics were observed.

Although the RF plasma source was conceived as a wave-heated helicon source, scaling measurements and numerical modeling showed that inductive coupling was the dominant energy input mechanism. A one-dimensional time-dependent fluid model was developed to quantitatively explain the expansion of the pre-ionized plasma into the jet experiment chamber. The plasma transitioned from an ionizing phase with depressed neutral emission to a recombining phase with enhanced emission during the course of the experiment, causing fast camera images to be a poor indicator of the density distribution. Under certain conditions, the total visible and infrared brightness and the downstream ion density both increased after the RF power was turned off. The time-dependent emission patterns were used for an indirect measurement of the neutral gas pressure.

The low-mass jets formed with the aid of the pre-ionization system were extremely narrow and collimated near the electrodes, with peak density exceeding that of jets created without pre-ionization. The initial neutral gas distribution prior to plasma breakdown was found to be critical in determining the ultimate jet structure. The visible radius of the dense central jet column was several times narrower than the axial current channel radius, suggesting that the outer portion of the jet must have been force free, with the current parallel to the magnetic field. The studies of non-equilibrium flows and plasma self-organization being carried out at Caltech are relevant to astrophysical jets and fusion energy research.

Relevance:

20.00%

Abstract:

We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.

In part I, the challenge was to find a replacement for the Pt cathode that would improve performance for the Oxygen Reduction Reaction (ORR) while remaining stable under operational conditions and decreasing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect the surface to have significant O and/or OH chemisorbed on it, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir show significant surface segregation when O and OH are chemisorbed on the catalyst surface. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer that would resist acidic electrolyte corrosion in the fuel cell operating environment. We chose to focus on Os because the Pt-Ir phase diagram indicated that Pt-Ir cannot form a homogeneous alloy at lower temperatures. To determine the performance for ORR, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode reactions react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water, dielectric constant 78, pH 7, at 298 K). We found that the rate-determining step (RDS) was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad) in both cases, but that the 0.50 eV barrier for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%.
We collaborated with Pu-Wei Wu's group on experiments, which found that the Pt2Os catalyst treated by a dealloying process showed two-fold higher activity at 25 degrees C than pure Pt and that the alloy had 272% improved stability, validating our theoretical predictions.
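
A rate enhancement from lowering an activation barrier follows from the Arrhenius factor exp(ΔEa/kBT); a quick numerical check for the 0.02 eV barrier reduction quoted above (the factor depends on the temperature assumed):

```python
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def rate_enhancement(delta_barrier_eV, temperature_K):
    """Arrhenius estimate of the rate ratio obtained by lowering an
    activation barrier by delta_barrier_eV at fixed temperature."""
    return math.exp(delta_barrier_eV / (KB * temperature_K))

# Barrier reduction 0.50 eV -> 0.48 eV quoted in the text.
print(rate_enhancement(0.02, 298.15))  # room temperature
print(rate_enhancement(0.02, 353.15))  # 80 degrees C
```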

We also carried out similar QM studies, followed by experimental validation, for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for the Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for the Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on ORR of compressing the alloy slab to the dimensions of pure Pt. This had little effect, however, leaving the RDS barrier at 0.37 eV. This shows that the ligand effect (the electronic-structure modification resulting from the Os substrate) plays a more important role than the strain effect, and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirms the core-shell structure of our catalyst. The electrochemical experiments for Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9 V (vs. NHE) in 0.1 M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir-Hinshelwood model.

In part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs offer wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has a Fe atom distribution of (0%, 50%, 25%, 25%, 0%) starting at the bottom layer. We calculated the binding of C atoms on the Ni4Fe surface to be 142.9 kcal/mol, about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding energy is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance. This result confirms the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X=Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.

In part III, we used QM to examine proton transport in doped perovskite ceramics. Here we used a 2x2x2 supercell of perovskite with composition Ba8X7M1(OH)1O23, where X=Ce or Zr and M=Y, Gd, or Dy; thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing both intra-octahedron and inter-octahedra proton transfer. Without any restriction, we observed only inter-octahedra proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When we imposed the restriction that the Odonor-Oacceptor atoms be kept at a fixed distance, we found that the barrier differences between cerates/zirconates with the various dopants are only 0.02-0.03 eV. To fully address performance one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics for systems with millions of atoms. The QM calculations used here will be used to train the ReaxFF force field.

Relevance:

20.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify the structures at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
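
At the core of such a simulator is a random walk of photon packets with exponentially distributed free paths and strongly forward-peaked scattering. The sketch below is a deliberately crude 1D toy with hypothetical coefficients and a coin-flip stand-in for Henyey-Greenstein scattering, not the platform described in the thesis:

```python
import math
import random

random.seed(0)
mu_s = 10.0   # hypothetical scattering coefficient (1/mm): mean free path 0.1 mm
g = 0.9       # anisotropy factor: forward-peaked scattering
target = 0.3  # depth of interest (mm)

def reaches_depth():
    """Walk one photon packet along depth z: exponential free paths
    (Beer-Lambert), and at each scattering event keep the current direction
    with probability (1+g)/2, a crude 1D stand-in for anisotropic scattering.
    Returns True if the packet ever reaches the target depth."""
    z, direction = 0.0, 1.0
    for _ in range(1000):
        z += direction * (-math.log(random.random()) / mu_s)
        if z >= target:
            return True
        if z < 0.0:          # escaped back out of the tissue surface
            return False
        if random.random() >= (1.0 + g) / 2.0:
            direction = -direction
    return False

n = 20000
frac = sum(reaches_depth() for _ in range(n)) / n
print(frac)  # fraction of launched packets that ever reach the target depth
```

Importance sampling and photon splitting bias such walks toward the detector and the depth of interest while reweighting the packets, which is what makes the deep, rarely sampled signal computable in reasonable time.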

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we achieve 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, to predict the thickness of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
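
The classify-then-regress dispatch can be sketched with stand-in models: nearest-centroid classification and per-type linear least squares on toy data. The thesis's actual committee of experts uses far richer models; everything below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(features, types, thicknesses):
    """Fit one centroid per structure type plus a per-type linear 'expert'."""
    centroids = {t: features[types == t].mean(axis=0) for t in np.unique(types)}
    experts = {}
    for t in np.unique(types):
        X = features[types == t]
        X1 = np.hstack([X, np.ones((len(X), 1))])  # bias column
        W, *_ = np.linalg.lstsq(X1, thicknesses[types == t], rcond=None)
        experts[t] = W
    return centroids, experts

def predict(x, centroids, experts):
    """Classify the structure type first, then hand off to that type's
    regression expert to predict layer thickness."""
    t = min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
    return t, np.append(x, 1.0) @ experts[t]

# Hypothetical data: type 0 has thickness ~ 2*x0, type 1 has ~ 3*x1.
X0 = rng.uniform(0, 1, (50, 2)); y0 = 2 * X0[:, :1]
X1 = rng.uniform(2, 3, (50, 2)); y1 = 3 * X1[:, 1:]
features = np.vstack([X0, X1])
types = np.array([0] * 50 + [1] * 50)
thick = np.vstack([y0, y1])

centroids, experts = train(features, types, thick)
t, pred = predict(np.array([2.5, 2.5]), centroids, experts)
print(t, pred)
```

The design point is that each expert only ever sees inputs of one structure type, so its regression problem is far simpler than a single model covering all structures.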

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, recovers precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only successful but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance:

20.00%

Abstract:

Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, both for the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has some broad consequences for the molecular theory of liquids. Previous studies in this laboratory on a liquid-liquid critical mixture via ultrasonics supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed analytic in intensive variables in the thermodynamics)--at least when the fluid is not too close to critical. A breakdown in classical concepts is evidenced close to critical, in some well-defined ways. We have studied herein a liquid-liquid critical system of complementary nature (possessing a lower critical mixing or consolute temperature) to all previous mixtures, to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (yet these are related to the mixing behavior generating the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid-state theory. The introduction provides a context for our present work, and also points out problems deserving attention. Interest in these problems was stimulated by this work but also by work in Part 3.

Part 2. Among variational theories of electronic structure, the Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, either developed explicitly in orders of perturbation theory or in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. By careful studies of total basis set composition, we obtained good approximations to limiting Hartree-Fock values of polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of best theoretical values in the absence of the former. Our results support the use of unadorned Hartree-Fock theory for static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
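
A standard route from such calculations to a static polarizability is the finite-field method, α = -d²E/dF² at zero field, estimated by a central difference on total energies. The toy energy surface below is an analytic stand-in for SCF energies, with α set near helium's known value of roughly 1.383 a.u.:

```python
# Finite-field estimate of a static polarizability: alpha = -d2E/dF2 at F = 0,
# via a central difference on total energies computed at small field strengths.

def energy(field, e0=-2.9037, mu=0.0, alpha=1.383):
    """Toy energy surface E(F) = E0 - mu*F - 0.5*alpha*F^2 (atomic units),
    a hypothetical stand-in for converged SCF total energies."""
    return e0 - mu * field - 0.5 * alpha * field ** 2

def polarizability(f=0.005):
    """Central-difference second derivative; exact for the quadratic toy."""
    return -(energy(f) - 2.0 * energy(0.0) + energy(-f)) / f ** 2

print(polarizability())
```

With real SCF energies the field strength must balance finite-difference truncation error against numerical noise in the converged energies, which is why perturbation theory or fully coupled (self-consistent) approaches are often preferred.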

Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.
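
The interaction common to all the models is the Lennard-Jones 12-6 potential between nearest atoms of the collision partners; in reduced units:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 potential: 4*eps*((sigma/r)^12 - (sigma/r)^6).
    epsilon is the well depth, sigma the zero-crossing separation
    (reduced units here)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 ** 2 - s6)

r_min = 2.0 ** (1.0 / 6.0)       # separation at the potential minimum
print(lennard_jones(r_min))      # well depth: -epsilon at the minimum
```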