19 results for Resolution of problems


Relevance: 100.00%

Abstract:

The final object of this research was to prepare m-nitrobenzoyl malic acid and to separate it, if possible, into the four stereoisomers predicted by the Huggins' theory of the benzene ring. Inasmuch as the quantity of m-nitrobenzoyl chloride available was limited, it was thought better to first prepare i-benzoyl malic acid and then attempt to resolve it. The resolution of m-nitrobenzoyl malic acid could probably be accomplished by a similar method.

Relevance: 100.00%

Abstract:

The resolution of the so-called thermodynamic paradox is presented in this paper. It is shown, in direct contradiction to the results of several previously published papers, that the cutoff modes (evanescent modes having complex propagation constants) can carry power in a waveguide containing ferrite. The errors in all previous “proofs” which purport to show that the cutoff modes cannot carry power are uncovered. The boundary value problem underlying the paradox is studied in detail; it is shown that, although the solution is somewhat complicated, there is nothing paradoxical about it.

The general problem of electromagnetic wave propagation through rectangular guides filled inhomogeneously in cross-section with transversely magnetized ferrite is also studied. Application of the standard waveguide techniques reduces the TM part to the well-known self-adjoint Sturm-Liouville eigenvalue equation. The TE part, however, leads in general to a non-self-adjoint eigenvalue equation. This equation and the associated expansion problem are studied in detail. Expansion coefficients and actual fields are determined for a particular problem.
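Eigenvalue problems of the self-adjoint Sturm-Liouville type mentioned above can be solved numerically by elementary means. The sketch below is a toy illustration only (a generic shooting method for −u″ = λu with u(0) = u(1) = 0, whose exact eigenvalues are λ_n = (nπ)²), not the ferrite-loaded waveguide problem itself:

```python
import math

def shoot(lam, n=2000):
    """Integrate u'' = -lam*u on [0, 1] with u(0) = 0, u'(0) = 1 (RK4);
    return u(1). A Dirichlet eigenvalue makes u(1) vanish."""
    h = 1.0 / n
    u, v = 0.0, 1.0
    f = lambda uu, vv: (vv, -lam * uu)
    for _ in range(n):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = f(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = f(u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u

def first_eigenvalue(lo=8.0, hi=12.0, tol=1e-9):
    """Bisect on lambda until u(1; lambda) = 0 (sign change bracketed)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam1 = first_eigenvalue()  # exact value is pi**2
```

The same shoot-and-bisect idea extends to any regular Sturm-Liouville operator; only the right-hand side of the ODE changes.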

Relevance: 100.00%

Abstract:

The theory of bifurcation of solutions to two-point boundary value problems is developed for a system of nonlinear first order ordinary differential equations in which the bifurcation parameter is allowed to appear nonlinearly. An iteration method is used to establish necessary and sufficient conditions for bifurcation and to construct a unique bifurcated branch in a neighborhood of a bifurcation point which is a simple eigenvalue of the linearized problem. The problem of bifurcation at a degenerate eigenvalue of the linearized problem is reduced to that of solving a system of algebraic equations. Cases with no bifurcation and with multiple bifurcation at a degenerate eigenvalue are considered.

The iteration method employed is shown to generate approximate solutions which contain those obtained by formal perturbation theory. Thus the formal perturbation solutions are rigorously justified. A theory of continuation of a solution branch out of the neighborhood of its bifurcation point is presented. Several generalizations and extensions of the theory to other types of problems, such as systems of partial differential equations, are described.

The theory is applied to the problem of the axisymmetric buckling of thin spherical shells. Results are obtained which confirm recent numerical computations.

Relevance: 100.00%

Abstract:

Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal, and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to assist in deciphering the correlations at both structural and functional levels at scales that include the intracellular and tissue levels as well as organs and organ systems. Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal.

In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in terms of the technique and the analysis. Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens is the first step towards deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal from the challenging environment of whole-mount imaging, obtained by in situ hybridization chain reaction (HCR), scales linearly with the number of copies of target mRNA to provide quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos. The work then progresses to address aspects of imaging live embryonic development in a number of species.

While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture the protein localization as the somites mature. The requirements on imaging are even more stringent in the case of the embryonic zebrafish heart, which beats at a frequency of ~2-2.5 Hz and therefore requires very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics. In each of these cases, ranging from the level of molecules to organs, an imaging framework is developed, in terms of both technique and analysis, to allow quantitative assessment of the process in vivo. Overall, the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.

Relevance: 90.00%

Abstract:

A method is developed to calculate the settling speed of dilute arrays of spheres for the three cases of: I, a random array of freely moving particles; II, a random array of rigidly held particles; and III, a cubic array of particles. The basic idea of the technique is to give a formal representation for the solution and then manipulate this representation in a straightforward manner to obtain the result. For infinite arrays of spheres, our results agree with the results previously found by other authors, and the analysis here appears to be simpler. This method is able to obtain more terms in the answer than was possible by Saffman's unified treatment for point particles. Some results for arbitrary two-sphere distributions are presented, and an analysis of the wall effect for particles settling in a tube is given. It is expected that the method presented here can be generalized to solve other types of problems.
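For reference, the dilute-limit baseline that such array calculations correct is the Stokes settling speed of a single isolated sphere, U0 = 2a²(ρ_p − ρ_f)g/(9μ). A minimal sketch with assumed illustrative parameters (a 50 μm glass bead in water; the numbers are not from the thesis):

```python
def stokes_settling_speed(radius, rho_particle, rho_fluid, viscosity, g=9.81):
    """Terminal speed (m/s) of an isolated sphere in creeping flow:
    U0 = 2 a^2 (rho_p - rho_f) g / (9 mu)."""
    return 2.0 * radius**2 * (rho_particle - rho_fluid) * g / (9.0 * viscosity)

# Assumed example: 50-micron-radius glass bead (2500 kg/m^3) in water.
u0 = stokes_settling_speed(50e-6, 2500.0, 1000.0, 1.0e-3)  # ~8.2 mm/s
```

Array and wall corrections of the kind derived in the thesis multiply this baseline by factors depending on the volume fraction and geometry.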

Relevance: 90.00%

Abstract:

Galaxies evolve throughout the history of the universe from the first star-forming sources, through gas-rich asymmetric structures with rapid star formation rates, to the massive symmetrical stellar systems observed at the present day. Determining the physical processes which drive galaxy formation and evolution is one of the most important questions in observational astrophysics. This thesis presents four projects aimed at improving our understanding of galaxy evolution from detailed measurements of star forming galaxies at high redshift.

We use resolved spectroscopy of gravitationally lensed z ≃ 2 - 3 star forming galaxies to measure their kinematic and star formation properties. The combination of lensing with adaptive optics yields physical resolution of ≃ 100 pc, sufficient to resolve giant H II regions. We find that ~70% of galaxies in our sample display ordered rotation with high local velocity dispersion indicating turbulent thick disks. The rotating galaxies are gravitationally unstable and are expected to fragment into giant clumps. The size and dynamical mass of giant H II regions are in agreement with predictions for such clumps indicating that gravitational instability drives the rapid star formation. The remainder of our sample is comprised of ongoing major mergers. Merging galaxies display similar star formation rate, morphology, and local velocity dispersion as isolated sources, but their velocity fields are more chaotic with no coherent rotation.

We measure resolved metallicity in four lensed galaxies at z = 2.0 − 2.4 from optical emission line diagnostics. Three rotating galaxies display radial gradients with higher metallicity at smaller radii, while the fourth is undergoing a merger and has an inverted gradient with lower metallicity at the center. Strong gradients in the rotating galaxies indicate that they are growing inside-out with star formation fueled by accretion of metal-poor gas at large radii. By comparing measured gradients with an appropriate comparison sample at z = 0, we demonstrate that metallicity gradients in isolated galaxies must flatten at later times. The amount of size growth inferred by the gradients is in rough agreement with direct measurements of massive galaxies. We develop a chemical evolution model to interpret these data and conclude that metallicity gradients are established by a gradient in the outflow mass loading factor, combined with radial inflow of metal-enriched gas.

We present the first rest-frame optical spectroscopic survey of a large sample of low-luminosity galaxies at high redshift (L < L*, 1.5 < z < 3.5). This population dominates the star formation density of the universe at high redshifts, yet such galaxies are normally too faint to be studied spectroscopically. We take advantage of strong gravitational lensing magnification to compile observations for a sample of 29 galaxies using modest integration times with the Keck and Palomar telescopes. Balmer emission lines confirm that the sample has a median SFR ∼ 10 M_sun yr^−1 and extends to lower SFR than has been probed by other surveys at similar redshift. We derive the metallicity, dust extinction, SFR, ionization parameter, and dynamical mass from the spectroscopic data, providing the first accurate characterization of the star-forming environment in low-luminosity galaxies at high redshift. For the first time, we directly test the proposal that the relation between galaxy stellar mass, star formation rate, and gas phase metallicity does not evolve. We find lower gas phase metallicity in the high redshift galaxies than in local sources with equivalent stellar mass and star formation rate, arguing against a time-invariant relation. While our result is preliminary and may be biased by measurement errors, this represents an important first measurement that will be further constrained by ongoing analysis of the full data set and by future observations.

We present a study of composite rest-frame ultraviolet spectra of Lyman break galaxies at z = 4 and discuss implications for the distribution of neutral outflowing gas in the circumgalactic medium. In general we find similar spectroscopic trends to those found at z = 3 by earlier surveys. In particular, absorption lines which trace neutral gas are weaker in less evolved galaxies with lower stellar masses, smaller radii, lower luminosity, less dust, and stronger Lyα emission. Typical galaxies are thus expected to have stronger Lyα emission and weaker low-ionization absorption at earlier times, and we indeed find somewhat weaker low-ionization absorption at higher redshifts. In conjunction with earlier results, we argue that the reduced low-ionization absorption is likely caused by lower covering fraction and/or velocity range of outflowing neutral gas at earlier epochs. This result has important implications for the hypothesis that early galaxies were responsible for cosmic reionization. We additionally show that fine structure emission lines are sensitive to the spatial extent of neutral gas, and demonstrate that neutral gas is concentrated at smaller galactocentric radii in higher redshift galaxies.

The results of this thesis present a coherent picture of galaxy evolution at high redshifts 2 ≲ z ≲ 4. Roughly 1/3 of massive star forming galaxies at this period are undergoing major mergers, while the rest are growing inside-out with star formation occurring in gravitationally unstable thick disks. Star formation, stellar mass, and metallicity are limited by outflows which create a circumgalactic medium of metal-enriched material. We conclude by describing some remaining open questions and prospects for improving our understanding of galaxy evolution with future observations of gravitationally lensed galaxies.

Relevance: 90.00%

Abstract:

This thesis considers in detail the dynamics of two oscillators with weak nonlinear coupling. There are three classes of such problems: non-resonant, where the Poincaré procedure is valid to the order considered; weakly resonant, where the Poincaré procedure breaks down because small divisors appear (but do not affect the O(1) term) and strongly resonant, where small divisors appear and lead to O(1) corrections. A perturbation method based on Cole's two-timing procedure is introduced. It avoids the small divisor problem in a straightforward manner, gives accurate answers which are valid for long times, and appears capable of handling all three types of problems with no change in the basic approach.

One example of each type is studied with the aid of this procedure: for the nonresonant case the answer is equivalent to the Poincaré result; for the weakly resonant case the analytic form of the answer is found to depend (smoothly) on the difference between the initial energies of the two oscillators; for the strongly resonant case we find that the amplitudes of the two oscillators vary slowly with time as elliptic functions of ϵ t, where ϵ is the (small) coupling parameter.
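The slow amplitude variation in the strongly resonant case can be seen directly by numerical integration. The sketch below is a toy verification, not the two-timing analysis itself: it integrates two identical oscillators with weak linear coupling ε (an assumed value of 0.05) and confirms that the energy initially placed in oscillator 1 migrates to oscillator 2 on the slow time scale t ~ π/ε:

```python
import math

def integrate(eps, t_end, dt=1e-3):
    """RK4 for x1'' = -x1 + eps*x2, x2'' = -x2 + eps*x1,
    starting with oscillator 1 excited and oscillator 2 at rest."""
    def deriv(s):
        x1, v1, x2, v2 = s
        return (v1, -x1 + eps * x2, v2, -x2 + eps * x1)
    s = (1.0, 0.0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        k1 = deriv(s)
        k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s

eps = 0.05
x1, v1, x2, v2 = integrate(eps, math.pi / eps)  # one slow half-period
e1 = 0.5 * (x1**2 + v1**2)  # energy left in oscillator 1 (nearly zero)
e2 = 0.5 * (x2**2 + v2**2)  # energy now resides in oscillator 2
```

The beat follows from the normal modes: with frequencies sqrt(1 ± ε) ≈ 1 ± ε/2, the envelopes vary as sin(εt/2) and cos(εt/2), i.e. on the slow time εt, consistent with the elliptic-function result quoted above.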

Our results suggest that, as one might expect, the dynamical behavior of such systems varies smoothly with changes in the ratio of the fundamental frequencies of the two oscillators. Thus the pathological behavior of Whittaker's adelphic integrals as the frequency ratio is varied appears to be due to the fact that Whittaker ignored the small divisor problem. The energy sharing properties of these systems appear to depend strongly on the initial conditions, so that the systems are not ergodic.

The perturbation procedure appears to be applicable to a wide variety of other problems in addition to those considered here.

Relevance: 90.00%

Abstract:

This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.
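In FMCW ranging with a linear chirp of rate ξ (Hz/s), light reflected from a target at distance d returns delayed by τ = 2d/c and beats against the outgoing chirp at f_b = ξτ, so d = c·f_b/(2ξ). A hedged sketch with assumed example numbers (only the 10^16 Hz/sec chirp rate comes from the text above; the 1 m target is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_beat(distance_m, chirp_rate_hz_per_s):
    """Beat frequency produced by a target at the given distance:
    f_b = xi * (2 d / c)."""
    return 2.0 * distance_m * chirp_rate_hz_per_s / C

def fmcw_distance(beat_hz, chirp_rate_hz_per_s):
    """Invert the relation: d = c * f_b / (2 xi)."""
    return C * beat_hz / (2.0 * chirp_rate_hz_per_s)

# Assumed example: 1 m target with a 1e16 Hz/s chirp -> ~67 MHz beat.
fb = fmcw_beat(1.0, 1e16)
```

The higher the chirp rate, the higher the beat frequency for a given range, which is what makes very fast, precisely linear chirps attractive for ranging.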

We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
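The stitching result can be checked against the standard FMCW axial-resolution relation δz = c/(2B), where B is the total optical bandwidth: four stitched sweeps giving B = 2 THz indeed correspond to roughly 75 microns in free space. A quick sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def axial_resolution(bandwidth_hz):
    """Free-space FMCW axial resolution: delta_z = c / (2 B)."""
    return C / (2.0 * bandwidth_hz)

dz = axial_resolution(2e12)  # ~75 microns for 2 THz of stitched bandwidth
```

This is why stitching multiple sweeps pays off directly: doubling the combined bandwidth halves the axial resolution element.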

We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle, and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform, and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which effectively improves the volume acquisition speed by a factor ten.

We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5×10^14 Hz/sec to increase the amplifier SBS threshold threefold, when compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.

Relevance: 90.00%

Abstract:

The solution behavior of linear polymer chains is well understood, having been the subject of intense study throughout the previous century. As plastics have become ubiquitous in everyday life, polymer science has grown into a major field of study. The conformation of a polymer in solution depends on the molecular architecture and its interactions with the surroundings. Developments in synthetic techniques have led to the creation of precision-tailored polymeric materials with varied topologies and functionalities. In order to design materials with the desired properties, it is imperative to understand the relationships between a polymer's architecture and its conformation and behavior.

To meet that need, this thesis investigates the conformation and self-assembly of three architecturally complex macromolecular systems with rich and varied behaviors driven by the resolution of intramolecular conflicts. First we describe the development of a robust and facile synthetic approach to reproducible bottlebrush polymers (Chapter 2). The method was used to produce homologous series of bottlebrush polymers with polynorbornene backbones, which revealed the effect of side-chain and backbone length on the overall conformation in both good and theta solvent conditions (Chapter 3). The side-chain conformation was obtained from a series of SANS experiments and determined to be indistinguishable from the behavior of free linear polymer chains. Using deuterium-labeled bottlebrushes, we were able for the first time to directly observe the backbone conformation of a bottlebrush polymer, which showed self-avoiding walk behavior. Secondly, a series of SANS experiments was conducted on a homologous series of Side Group Liquid Crystalline Polymers (SGLCPs) in a perdeuterated small molecule liquid crystal (5CB).
Monodomain, aligned, dilute samples of SGLCP-b-PS block copolymers were seen to self-assemble into complex micellar structures with mutually orthogonally oriented anisotropies at different length scales (Chapter 4). Finally, we present the results from the first scattering experiments on a set of fuel-soluble, associating telechelic polymers. We observed the formation of supramolecular aggregates in dilute (≤ 0.5 wt%) solutions of telechelic polymers and determined that the choice of solvent has a significant effect on the strength of association and the size of the supramolecules (Chapter 5). A method was developed for the direct estimation of supramolecular aggregation number from SANS data. The insight into structure-property relationships obtained from this work will enable the more targeted development of these molecular architectures for their respective applications.

Relevance: 90.00%

Abstract:

This thesis has two major parts. The first part of the thesis will describe a high energy cosmic ray detector -- the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large-area (0.25 m² sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.

HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack, and the energy loss (ΔE) in the stack. Since the total energy of an isotope is given by E = γM, the mass M can be determined by M = ΔE/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
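The mass determination reduces to simple arithmetic once Δγ and ΔE are measured: with E = γM (units with c = 1), the energy deposited while the Lorentz factor drops by Δγ is ΔE = M·Δγ. A toy sketch with synthetic numbers (not HEIST calibration data):

```python
def isotope_mass(delta_e_gev, delta_gamma):
    """Mass in GeV from the energy deposited in the stack and the
    measured change in Lorentz factor: M = dE / dgamma."""
    return delta_e_gev / delta_gamma

# Synthetic example: a nucleus of true mass 52.1 GeV slowing from
# gamma = 3.10 to gamma = 2.60 deposits dE = 52.1 * 0.50 GeV.
m = isotope_mass(52.1 * 0.50, 3.10 - 2.60)
```

In practice the quoted 0.2 amu resolution is set by how precisely Δγ (from the Cerenkov counters) and ΔE (from the NaI stack) can each be measured.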

The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set of measurements, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set of measurements, also made at the Lawrence Berkeley Laboratory Bevalac using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~0.25 amu, through the use of the Heavy Isotope Spectrometer Telescope (HIST), which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.

Relevance: 90.00%

Abstract:

Wide field-of-view (FOV) microscopy is of high importance to biological research and clinical diagnosis where a high-throughput screening of samples is needed. This thesis presents the development of several novel wide FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, on the scale of viral plaques to live cells and tissues.

The ePetri Dish is a wide FOV on-chip bright-field microscope. Here we applied an ePetri platform for plaque analysis of murine norovirus 1 (MNV-1). The ePetri offers the ability to dynamically track plaques at the individual cell death event level over a wide FOV of 6 mm × 4 mm at 30 min intervals. A density-based clustering algorithm is used to analyze the spatial-temporal distribution of cell death events to identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer count and dynamically monitoring plaque formation, growth, and the influence of antiviral drugs.
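Density-based clustering of this kind can be sketched in a few lines. The toy implementation below is a generic DBSCAN-style routine, not the ePetri pipeline; the `eps`/`min_pts` thresholds and the example event coordinates are illustrative assumptions. It groups 2-D cell-death-event coordinates into plaques wherever at least `min_pts` events fall within radius `eps` of one another:

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise (naive O(n^2))."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]
    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(i)
        if len(seed) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        labels[i] = cluster
        queue = list(seed)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point reachable from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:    # core point: keep expanding
                queue.extend(nj)
        cluster += 1
    return labels

# Two tight groups of "cell death events" far apart -> two plaques.
events = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
labels = dbscan(events, eps=0.3, min_pts=3)
```

Run over successive time points, the earliest frame at which a cluster first satisfies the density criterion identifies a plaque at its earliest stage.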

We developed another wide FOV imaging technique, the Talbot microscope, for the fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal spot array to scan the fluorescence samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm². We further upgraded the Talbot microscope for the long-term time-lapse fluorescence imaging of live cell cultures, and analyzed the cells’ dynamic response to an anticancer drug.
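The self-imaging behind this approach is governed by the Talbot length, z_T = 2d²/λ in the paraxial approximation, for a periodic pattern of period d illuminated at wavelength λ. A quick sketch with assumed illustrative values (the period and wavelength below are not taken from the thesis):

```python
def talbot_length(period_m, wavelength_m):
    """Paraxial Talbot self-imaging distance: z_T = 2 d^2 / lambda."""
    return 2.0 * period_m**2 / wavelength_m

# Assumed example: 10-micron pattern period, 500 nm illumination.
z_t = talbot_length(10e-6, 500e-9)  # 0.4 mm
```

The strong dependence on the period (quadratic) is what lets the focal-spot array be reimaged at a convenient working distance above the chip.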

We present two wide FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on the contact image sensor (CIS) technology, and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth-of-field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole anal canal imaging in both animal models and real patients. In addition to this, the PanCam is based on a smartphone platform integrated with a panoramic annular lens (PAL), and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100-140 μm. In this work we demonstrate the PanCam’s performance in imaging a stained tissue sample.

Relevance: 90.00%

Abstract:

Stable isotope geochemistry is a valuable toolkit for addressing a broad range of problems in the geosciences. Recent technical advances provide information that was previously unattainable or provide unprecedented precision and accuracy. Two such techniques are site-specific stable isotope mass spectrometry and clumped isotope thermometry. In this thesis, I use site-specific isotope and clumped isotope data to explore natural gas development and carbonate reaction kinetics. In the first chapter, I develop an equilibrium thermodynamics model to calculate equilibrium constants for isotope exchange reactions in small organic molecules. This equilibrium data provides a framework for interpreting the more complex data in the later chapters. In the second chapter, I demonstrate a method for measuring site-specific carbon isotopes in propane using high-resolution gas source mass spectrometry. This method relies on the characteristic fragments created during electron ionization, in which I measure the relative isotopic enrichment of separate parts of the molecule. My technique will be applied to a range of organic compounds in the future. For the third chapter, I use this technique to explore diffusion, mixing, and other natural processes in natural gas basins. As time progresses and the mixture matures, different components like kerogen and oil contribute to the propane in a natural gas sample. Each component imparts a distinct fingerprint on the site-specific isotope distribution within propane that I can observe to understand the source composition and maturation of the basin. Finally, in Chapter Four, I study the reaction kinetics of clumped isotopes in aragonite. Despite its frequent use as a clumped isotope thermometer, the aragonite blocking temperature is not known. Using laboratory heating experiments, I determine that the aragonite clumped isotope thermometer has a blocking temperature of 50-100°C. 
I compare this result to natural samples from the San Juan Islands that exhibit a maximum clumped isotope temperature that matches this blocking temperature. This thesis presents a framework for measuring site-specific carbon isotopes in organic molecules and new constraints on aragonite reaction kinetics. This study represents the foundation of a future generation of geochemical tools for the study of complex geologic systems.

Relevance: 90.00%

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.

Chief among these are: the supply of food, water, and shelter; the development of industry and markets; the prevention and removal of downtown congestion; and the protection of life and property. These, of course, are the problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.

Relevance: 90.00%

Abstract:

The intent of this study is to provide formal apparatus which facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.

A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.

To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.

An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
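A standard way to make such an informativeness measure concrete is the surprisal, −log₂ P(o): observations with lower prior probability carry more information. The sketch below is a toy illustration of that general idea under an assumed uniform prior, not the thesis's specific formalism:

```python
import math

def surprisal_bits(p_observation):
    """Information content (bits) of an observation with prior probability p."""
    return -math.log2(p_observation)

# Assumed toy prior: uniform over 8 admissible configurations of the
# universe; an observation consistent with only 2 of them has prior
# probability 2/8 and so carries 2 bits of information.
info = surprisal_bits(2 / 8)
```

Note how the measure is language-relative in exactly the sense discussed above: the probability assigned to an observation, and hence its informativeness, depends on which configurations the language can distinguish.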

The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed and the need for certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.

A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.

The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.

Relevance: 90.00%

Abstract:

The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
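The exactness claim for affine deformation can be illustrated on a toy chain: under uniform strain every bond of a nearest-neighbor harmonic chain stores the same energy, so a summation rule that samples a single bond with weight equal to the number of bonds reproduces the total Hamiltonian exactly. This is a hedged 1-D sketch of the principle, not the QC formulation itself:

```python
def chain_energy(positions, k=1.0, a=1.0):
    """Exact total energy: sum of nearest-neighbor harmonic bonds of
    stiffness k and rest length a."""
    return sum(0.5 * k * (positions[i + 1] - positions[i] - a) ** 2
               for i in range(len(positions) - 1))

def sampled_energy(positions, sample_bond, k=1.0, a=1.0):
    """One-bond summation rule: weight = total number of bonds."""
    n_bonds = len(positions) - 1
    i = sample_bond
    return n_bonds * 0.5 * k * (positions[i + 1] - positions[i] - a) ** 2

n = 1001
affine = [i * 1.02 for i in range(n)]  # uniform 2% strain (affine map)
e_exact = chain_energy(affine)
e_qc = sampled_energy(affine, sample_bond=500)
# Under affine deformation the sampled rule matches the exact sum.
```

For non-affine deformation fields the bond energies differ from site to site, and the choice and weighting of sampling points then controls the approximation error, which is the trade-off the new summation rules are designed to optimize.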