962 results for Two variable oregonator model
Abstract:
We present Tethered Monte Carlo, a simple, general purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L = 1024.
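The tethered ensemble itself is beyond a short excerpt, but the local Metropolis algorithm the abstract mentions can be sketched for the two-dimensional Ising model it is tested on. This is a minimal sketch of plain canonical Metropolis sampling, not the tethered formalism; lattice size, temperature and sweep count are illustrative.

```python
# Minimal sketch: local Metropolis updates for the 2D Ising model.
# Plain canonical sampling only; the tethered ensemble and Creutz-like
# demons of the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One sweep of single-site Metropolis updates (periodic boundaries)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

L, beta = 32, 0.44                            # beta_c ~ 0.4407 in 2D
spins = rng.choice([-1, 1], size=(L, L))
for sweep in range(1000):
    metropolis_sweep(spins, beta)
print("magnetization per spin:", spins.mean())
```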
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield higher expected optimal marginal income tax rates for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings through time. Second, taxation on savings corrects for the misperceptions of workers and thus savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Far-field stresses are those present in a volume of rock prior to excavations being created. Estimates of the orientation and magnitude of far-field stresses, often used in mine design, are generally obtained by single-point measurements of stress or by large-scale regional trends. Point measurements can be a poor representation of far-field stresses as a result of excavation-induced stresses and geological structures. For these reasons, far-field stress estimates can be associated with high levels of uncertainty. The purpose of this thesis is to investigate the practical feasibility, applications, and limitations of calibrating far-field stress estimates through tunnel deformation measurements captured using LiDAR imaging. A method that estimates the orientation and magnitude of excavation-induced principal stress changes through back-analysis of deformation measurements from LiDAR-imaged tunnels was developed and tested using synthetic data. If excavation-induced stress change orientations and magnitudes can be accurately estimated, they can be used in the calibration of far-field stress input to numerical models. LiDAR point clouds have proven useful in a number of underground applications, so it is desirable to explore their use in numerical model calibration. The back-analysis method is founded on the superposition of stresses and requires a two-dimensional numerical model of the deforming tunnel. Principal stress changes of known orientation and magnitude are applied to the model to create calibration curves. Estimation is then performed by minimizing the squared differences between the measured tunnel deformations and sets of calibration-curve deformations. In addition to the back-analysis estimation method, a procedure consisting of previously existing techniques to measure tunnel deformation using LiDAR imaging was documented. Under ideal conditions, the back-analysis method estimated principal stress change orientations within ±5° and magnitudes within ±2 MPa. Results were comparable for four different tunnel profile shapes. Preliminary testing using plastic deformation, a rough tunnel profile, and profile occlusions suggests that the method can work under more realistic conditions. The results from this thesis set the groundwork for the continued development of a new, inexpensive, and efficient far-field stress estimate calibration method.
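A minimal sketch of the estimation step, minimizing squared differences between measured deformations and precomputed calibration curves, is given below; all function names, curves and noise levels are illustrative stand-ins, not the thesis implementation.

```python
# Hedged sketch of the back-analysis step: choose the principal stress
# change (magnitude, orientation) whose precomputed calibration-curve
# deformation best matches the measured tunnel profile in a
# least-squares sense. All names and data here are illustrative.
import numpy as np

def best_stress_change(measured, calibration):
    """measured: (n_points,) radial deformations from LiDAR differencing.
    calibration: dict mapping (magnitude_MPa, orientation_deg) ->
    (n_points,) model deformations from the 2-D numerical model."""
    best_key, best_sse = None, np.inf
    for key, model_def in calibration.items():
        sse = np.sum((measured - model_def) ** 2)   # squared differences
        if sse < best_sse:
            best_key, best_sse = key, sse
    return best_key, best_sse

# Toy usage with synthetic curves (stand-ins for numerical-model output):
theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
calib = {(m, o): m * 0.1 * np.cos(2 * (theta - np.radians(o)))
         for m in (1, 2, 3) for o in (0, 15, 30, 45)}
measured = calib[(2, 30)] + np.random.default_rng(1).normal(0, 0.01, theta.size)
print(best_stress_change(measured, calib)[0])     # recovers (2, 30)
```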
Abstract:
We have analyzed the Nd isotopic composition of both ancient seawater and detrital material from long sequences of carbonate oozes of the South Indian Ocean at ODP Site 756 (Ninetyeast Ridge (~30°S), 1518 m water depth) and ODP Site 762 (Northwest Australian margin, 1360 m water depth). The measurements indicate that the epsilon-Nd changes in Indian seawater over the last 35 Ma result from changes in the oceanic circulation and from large volcanic and continental weathering Nd inputs, highlighting the diverse nature of these controls and their interconnections within a small area of the ocean. These new records, combined with those previously obtained at the equatorial ODP Sites 757 and 707 in the Indian Ocean (Gourlan et al., 2008, doi:10.1016/j.epsl.2007.11.054), establish that the distribution of intermediate seawater epsilon-Nd was uniform over most of the Indian Ocean from 35 Ma to 10 Ma, within a geographical area extending from 40°S to the equator and from ~60°E to 120°E. However, the epsilon-Nd value of Indian Ocean seawater, which remained almost constant (at about -7 to -8) from 35 to 15 Ma, rose by 3 epsilon-Nd units from 15 to 10 Ma. This sharp increase was caused by a radiogenic Nd enrichment of the water mass originating from the Pacific and flowing through the Indonesian Passage. Using a two end-member model, we calculated that the Nd transported to the Indian Ocean through the Indonesian Pathway was 1.7 times larger at 10 Ma than at 15 Ma. The Nd isotopic compositions of ancient seawater and of the sediment detrital component appear to be strongly correlated for some specific events. The first occurs between 20 and 15 Ma, with two positive spikes recorded in both epsilon-Nd signals that are clearly induced by a volcanic crisis of, most likely, the St. Paul hot spot. The second is the very large epsilon-Nd decrease recorded at ODP Sites 756 and 762 during the past 10 Ma, which had never previously been observed. The synchronism between the epsilon-Nd decrease in seawater from 10 to 5 Ma and evidence of desertification in the western part of the nearby Australian continent suggests enhanced weathering inputs to this ocean from that continent as a result of climatic changes.
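A hedged sketch of the two end-member calculation follows; the end-member epsilon-Nd values and measured values below are illustrative stand-ins, and the paper's full calculation additionally weights each end-member by its Nd concentration.

```python
# Hedged sketch of a two end-member mixing calculation: seawater
# epsilon-Nd is treated as a mixture of a radiogenic Pacific end-member
# and a less radiogenic Indian end-member. All numbers are illustrative,
# not the paper's calibrated inputs.
def pacific_fraction(eps_seawater, eps_pacific, eps_indian):
    """Fraction of Nd carried by the Pacific end-member."""
    return (eps_seawater - eps_indian) / (eps_pacific - eps_indian)

eps_pac, eps_ind = -4.0, -9.0        # assumed end-member epsilon-Nd values
f_15 = pacific_fraction(-7.5, eps_pac, eps_ind)   # ~constant value pre-15 Ma
f_10 = pacific_fraction(-4.5, eps_pac, eps_ind)   # after the 3-unit rise
print(f"Pacific Nd fraction: {f_15:.2f} at 15 Ma, {f_10:.2f} at 10 Ma")
# The paper's factor of 1.7 also folds in Nd concentrations of each
# end-member, which this fraction-only sketch omits.
```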
Abstract:
The strength and geometry of the Atlantic meridional overturning circulation are tightly coupled to climate on glacial-interglacial and millennial timescales, but have proved difficult to reconstruct, particularly for the Last Glacial Maximum. Today, the return flow from the northern North Atlantic to lower latitudes associated with the Atlantic meridional overturning circulation reaches down to approximately 4,000 m. In contrast, during the Last Glacial Maximum this return flow is thought to have occurred primarily at shallower depths. Measurements of sedimentary 231Pa/230Th have been used to reconstruct the strength of circulation in the North Atlantic Ocean, but the effects of biogenic silica on 231Pa/230Th-based estimates remain controversial. Here we use measurements of 231Pa/230Th ratios and biogenic silica in Holocene-aged Atlantic sediments and simulations with a two-dimensional scavenging model to demonstrate that the geometry and strength of the Atlantic meridional overturning circulation are the primary controls of 231Pa/230Th ratios in modern Atlantic sediments. For the glacial maximum, a simulation of Atlantic overturning with a shallow but vigorous circulation and bulk water transport at around 2,000 m depth best matched observed glacial Atlantic 231Pa/230Th values. We estimate that the transport of intermediate water during the Last Glacial Maximum was at least as strong as deep water transport today.
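The two-dimensional scavenging model is not reproduced here; as a hedged illustration of the underlying mechanism, a one-box toy calculation shows why stronger overturning pulls sedimentary 231Pa/230Th below the 0.093 production ratio. The residence times are assumed order-of-magnitude values, not the paper's model parameters.

```python
# Hedged one-box sketch of why sedimentary 231Pa/230Th records
# overturning strength. Both nuclides are produced from uranium decay at
# a fixed activity ratio (0.093); 230Th is scavenged onto particles much
# faster than 231Pa, so lateral export by the overturning preferentially
# removes Pa. Residence times are illustrative order-of-magnitude values.
PRODUCTION_RATIO = 0.093          # 231Pa/230Th production activity ratio

def sediment_pa_th(tau_water, tau_pa=150.0, tau_th=30.0):
    """Steady-state sediment Pa/Th for a box flushed on tau_water years.
    Fraction buried locally = tau_water / (tau_water + tau_scavenging)."""
    buried_pa = tau_water / (tau_water + tau_pa)
    buried_th = tau_water / (tau_water + tau_th)
    return PRODUCTION_RATIO * buried_pa / buried_th

for tau_w, label in [(100.0, "vigorous overturning"),
                     (1000.0, "sluggish overturning")]:
    print(f"{label}: sediment Pa/Th ~ {sediment_pa_th(tau_w):.3f}")
# Vigorous flushing exports more Pa, lowering sediment Pa/Th below 0.093.
```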
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients who were currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified by body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the optimal-design-derived model was superior to the empirical-design model in terms of precision, and was similar to the model developed from the full dataset. This study suggests that optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviated from the nominal design.
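A minimal sketch of the design model named above, a one-compartment model with lognormal between-subject variability and proportional residual error, might look as follows; all parameter values and the IV-bolus simplification are assumptions, not the study's estimates.

```python
# Hedged sketch of the design model: one-compartment kinetics with
# lognormal between-subject variability (BSV) and proportional residual
# error. Parameter values are illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(42)

def simulate_subject(dose, times, cl_pop=0.8, v_pop=5.0,
                     omega_cl=0.3, omega_v=0.2, sigma_prop=0.15):
    """Concentrations after an IV bolus for one simulated subject."""
    cl = cl_pop * np.exp(rng.normal(0, omega_cl))   # lognormal BSV on CL
    v = v_pop * np.exp(rng.normal(0, omega_v))      # lognormal BSV on V
    conc = (dose / v) * np.exp(-(cl / v) * times)   # one-compartment decay
    return conc * (1 + rng.normal(0, sigma_prop, times.size))  # prop. error

# Sampling times loosely mirroring the optimal design windows (hr post-dose):
times = np.array([0.25, 3.0, 11.5])
print(simulate_subject(dose=40.0, times=times))
```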
Highly organized structure in the non-coding region of the psbA minicircle from clade C Symbiodinium
Abstract:
The chloroplast genes of dinoflagellates are distributed among small, circular dsDNA molecules termed minicircles. In this paper, we describe the structure of the non-coding region of the psbA minicircle from Symbiodinium. DNA sequence was obtained from five Symbiodinium strains isolated from four different coral host species (Goniopora tenuidens, Heliofungia actiniformis, Leptastrea purpurea and Pocillopora damicornis), which had previously been determined to be closely related using LSU rDNA region D1/D2 sequence analysis. Eight distinct sequence blocks, consisting of four conserved cores interspersed with two metastable regions and flanked by two variable regions, occurred at similar positions in all strains. Inverted repeats (IRs) occurred in tandem or 'twin' formation within two of the four cores. The metastable regions also consisted of twin IRs and showed modular behaviour, being either fully present or completely absent in the different strains. These twin IRs are similar in sequence to double-hairpin elements (DHEs) found in the mitochondrial genomes of some fungi, and may be mobile elements or may serve a functional role in recombination or replication. Within the central unit (consisting of the cores plus the metastable regions), all IRs contained perfect sequence inverses, implying they are highly evolved. IRs were also present outside the central unit, but these were imperfect and possessed by individual strains only. A central adenine-rich sequence most closely resembled one in the centre of the non-coding part of Amphidinium operculatum minicircles, and is a potential origin of replication. Sequence polymorphism was extremely high in the variable regions, suggesting that these regions may be useful for distinguishing strains that cannot be differentiated using molecular markers currently available for Symbiodinium.
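A minimal sketch of how perfect inverted repeats of the kind described can be located follows; the arm length, loop window, and demo sequence are arbitrary choices for illustration.

```python
# Hedged sketch of locating inverted repeats (IRs): substrings whose
# reverse complement occurs downstream, so the two arms could fold into
# a hairpin. Purely illustrative scanning code.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def find_inverted_repeats(seq, arm_len=6, max_loop=20):
    """Yield (start1, start2, arm) for perfect IR arms of arm_len bases."""
    for i in range(len(seq) - 2 * arm_len):
        arm = seq[i:i + arm_len]
        window = seq[i + arm_len:i + arm_len + max_loop + arm_len]
        j = window.find(revcomp(arm))
        if j != -1:
            yield i, i + arm_len + j, arm

demo = "ATGCCGTTAAGGCCAATTTTGGCCTTAACGGCAT"
for hit in find_inverted_repeats(demo):
    print(hit)
```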
Abstract:
The authors report the results of two studies that model the antecedents of goal congruence in retail-service settings. They draw the antecedents from extant research and propose that goal congruence is related to employees' perceptions of morale, leadership support, fairness in reward allocation, and empowerment. They hypothesize and test direct and indirect relationships between these constructs and goal congruence. Results of structural equation modeling suggest an important mediating role for morale and interesting areas of variation across retail and service settings.
Abstract:
Aims: (1) to quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (≥20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL, 4.7 L h⁻¹ 70 kg⁻¹; intercompartmental clearance (CLic), 1 L h⁻¹ 70 kg⁻¹; volume of the central compartment (V1), 19.5 L 70 kg⁻¹; volume of the peripheral compartment (V2), 11.2 L 70 kg⁻¹. Conclusions: Using a fixed dose of aminoglycoside will achieve a dose within 80-125% of that required in 35% of typical patients. Covariate-guided predictions increase this to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
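A hedged sketch of the covariate structure described (nonrenal plus renal clearance, with the renal part linear in predicted creatinine clearance, scaled to 70 kg) follows; the Cockcroft-Gault formula is standard, but the coefficients and allometric exponent here are illustrative, not the fitted NONMEM estimates.

```python
# Hedged sketch of the covariate model structure: aminoglycoside
# clearance as a nonrenal component plus a renal component linear in
# predicted creatinine clearance (Cockcroft-Gault), with allometric
# weight scaling. Coefficients are illustrative only.
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Predicted creatinine clearance in mL/min."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def aminoglycoside_cl(age, weight_kg, scr_mg_dl, female,
                      cl_nonrenal=0.5, renal_slope=0.05):
    """Total clearance (L/h), allometrically scaled to 70 kg."""
    crcl = cockcroft_gault(age, weight_kg, scr_mg_dl, female)
    size = (weight_kg / 70.0) ** 0.75          # allometric scaling
    return (cl_nonrenal + renal_slope * crcl) * size

print(f"{aminoglycoside_cl(45, 80, 0.9, female=False):.1f} L/h")
```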
Abstract:
We introduce a new class of quantum Monte Carlo methods, based on a Gaussian quantum operator representation of fermionic states. The methods enable first-principles dynamical or equilibrium calculations in many-body Fermi systems, and, combined with the existing Gaussian representation for bosons, provide a unified method of simulating Bose-Fermi systems. As an application relevant to the Fermi sign problem, we calculate finite-temperature properties of the two-dimensional Hubbard model and the dynamics in a simple model of coherent molecular dissociation.
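The Gaussian operator representation itself is beyond a short sketch; as a hedged benchmark illustration, the following exact finite-temperature diagonalization of a two-site Hubbard model (a minimal relative of the 2D lattice studied) computes the kind of thermal quantity such simulations target. The parameters t, U and beta are illustrative.

```python
# Exact finite-temperature diagonalization of a two-site Hubbard model
# via a Jordan-Wigner encoding of the four fermionic modes.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilator: |1> -> |0>

def mode_op(op, k, n_modes=4):
    """Jordan-Wigner embed a single-mode operator at position k."""
    mats = [Z] * k + [op] + [I2] * (n_modes - k - 1)
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Mode order: (site0,up), (site0,dn), (site1,up), (site1,dn)
c = [mode_op(sm, k) for k in range(4)]
n = [ci.conj().T @ ci for ci in c]         # number operators

t, U = 1.0, 4.0
H = U * (n[0] @ n[1] + n[2] @ n[3])        # on-site repulsion
for a, b in [(0, 2), (1, 3)]:              # hopping for each spin
    H += -t * (c[a].conj().T @ c[b] + c[b].conj().T @ c[a])

beta = 2.0
evals, evecs = np.linalg.eigh(H)
w = np.exp(-beta * evals)
D = n[0] @ n[1] + n[2] @ n[3]              # total double occupancy
D_diag = np.diag(evecs.conj().T @ D @ evecs).real
print("thermal double occupancy:", (w * D_diag).sum() / w.sum())
```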
Abstract:
A novel class of nonlinear, visco-elastic rheologies has recently been developed by Mühlhaus et al. (2002a, b). The theory was originally developed for the simulation of large-deformation processes, including folding and kinking in multi-layered visco-elastic rock. The orientation of the layer surfaces, or of the slip planes in the context of crystallographic slip, is determined by their normal vector, the so-called director of these surfaces. Here the model (Mühlhaus et al., 2002a, b) is generalized to include thermal effects; it is shown that in 2-D steady states the director is given by the gradient of the flow potential. The model is applied to anisotropic simple shear where the directors are initially parallel to the shear direction. The relative effects of textural hardening and thermal softening are demonstrated. We then turn to natural convection and compare the time evolution and approximately steady states of isotropic and anisotropic convection for a Rayleigh number Ra = 5.64×10⁵ and aspect ratios of the experimental domain of 1 and 2, respectively. The isotropic case has a simple steady-state solution, whereas in the orthotropic convection model patterns evolve continuously in the core of the convection cell, making only a near-steady condition possible. This near-steady state shows well-aligned boundary layers, and the number of convection cells that develop appears to be reduced in the orthotropic case. At the moderate Rayleigh numbers explored here, we found only minor effects of changing the aspect ratio of the model domain from one to two.
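As a small numerical illustration of the stated 2-D steady-state result (the director parallel to the gradient of the flow potential), the sketch below normalizes the gradient of a toy single-cell stream function; the potential chosen is illustrative only.

```python
# Hedged numerical sketch: director field n as the normalized gradient
# of a flow potential, here a toy single-convection-roll stream function.
import numpy as np

nx = ny = 64
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

psi = np.sin(np.pi * X) * np.sin(np.pi * Y)   # toy flow potential

# Gradient of the potential, normalized to unit directors.
dpsi_dx, dpsi_dy = np.gradient(psi, x, y)
norm = np.hypot(dpsi_dx, dpsi_dy)
norm[norm == 0] = 1.0                          # avoid division by zero
n_x, n_y = dpsi_dx / norm, dpsi_dy / norm

i, j = nx // 4, ny // 2                        # a sample interior point
print("director at sample point:", n_x[i, j], n_y[i, j])
```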