899 results for two-Gaussian mixture model


Relevance:

100.00%

Publisher:

Abstract:

The world contains boundaries (e.g., continental edge for terrestrial taxa) that impose geometric constraints on the distribution of species ranges. Thus, contrary to traditional thinking, the expected species richness pattern in the absence of ecological or physiographical factors is unlikely to be uniform. Species richness has been shown to peak in the middle of a bounded one-dimensional domain, even in the absence of ecological or physiographical factors. Because species ranges are not linear, an extension of the approach to two dimensions is necessary. Here we present a two-dimensional null model accounting for effects of geometric constraints. We use the model to examine the effects of continental edge on the distribution of terrestrial animals in Africa and compare the predictions with the observed pattern of species richness in birds endemic to the continent. Latitudinal, longitudinal, and two-dimensional patterns of species richness are predicted well from the modeled null effects alone. As expected, null effects are of high significance for wide-ranging species only. Our results highlight the conceptual significance of a constraint that arises from continental shape alone and has until recently been neglected, and they support a more cautious analysis of species richness patterns at this scale.
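As a rough illustration of the geometric-constraint idea (a hedged sketch, not the authors' implementation: grid size, number of simulated species and the rectangular range shape are all assumptions), randomly placed ranges that must lie entirely inside a bounded two-dimensional domain already produce a mid-domain richness peak:

import numpy as np

rng = np.random.default_rng(0)

NX, NY = 100, 100          # grid cells of the bounded domain (illustrative resolution)
N_SPECIES = 2000           # number of simulated species ranges
richness = np.zeros((NY, NX), dtype=int)

for _ in range(N_SPECIES):
    # Draw a range size first, then a placement that keeps the range inside the domain
    w = rng.integers(1, NX + 1)          # range extent in x (cells)
    h = rng.integers(1, NY + 1)          # range extent in y (cells)
    x0 = rng.integers(0, NX - w + 1)     # admissible lower-left corner
    y0 = rng.integers(0, NY - h + 1)
    richness[y0:y0 + h, x0:x0 + w] += 1  # every covered cell gains one species

# Geometry alone predicts richness peaking toward the centre of the domain
print("centre cell richness:", richness[NY // 2, NX // 2])
print("corner cell richness:", richness[0, 0])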

Relevance:

100.00%

Publisher:

Abstract:

We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults.
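A hedged sketch of the first kind of model described above: a one-dimensional chain of blocks driven by a slowly moving plate, with a velocity-weakening stick-slip friction law, in the Burridge-Knopoff / Carlson-Langer spirit. The specific friction law, parameter values and time-stepping below are illustrative assumptions, not the paper's models.

import numpy as np

rng = np.random.default_rng(3)

N = 64                   # blocks along the fault
kc, kp = 1.0, 0.1        # inter-block and loader-plate spring stiffnesses
v_plate = 1e-3           # slow tectonic driving velocity
F0, vc = 1.0, 0.5        # static friction threshold and weakening velocity scale
dt, n_steps = 1e-2, 200_000

u = 1e-3 * rng.normal(size=N)   # small heterogeneity in initial displacements
v = np.zeros(N)                 # block velocities
slip_per_step = []

def kinetic_friction(vel):
    # Velocity-weakening law: sliding resistance decays as |vel| grows
    return np.sign(vel) * F0 / (1.0 + np.abs(vel) / vc)

for step in range(n_steps):
    # The loader plate starts right at the static threshold so events begin early
    plate = F0 / kp + v_plate * step * dt
    lap = np.zeros(N)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
    force = kc * lap + kp * (plate - u)

    slipping = (v != 0.0) | (np.abs(force) > F0)
    friction = np.where(v != 0.0, kinetic_friction(v), np.clip(force, -F0, F0))
    a = np.where(slipping, force - friction, 0.0)

    v_new = v + dt * a
    # Blocks whose velocity would reverse sign re-stick instead of oscillating
    v_new = np.where((v != 0.0) & (v * v_new <= 0.0), 0.0, v_new)
    v = v_new
    u += dt * v
    slip_per_step.append(float(np.abs(v).sum() * dt))

print("mean slip per step:", np.mean(slip_per_step))
print("largest single-step slip (event burst):", np.max(slip_per_step))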

Relevance:

100.00%

Publisher:

Abstract:

We present Tethered Monte Carlo, a simple, general purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L = 1024.
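For context, a minimal local-Metropolis sketch of the benchmark system, the two-dimensional Ising model. This is not the tethered ensemble itself: the Creutz-like demons and the constraint on the smoothed magnetization are not reproduced, only the kind of local update on which the implementation is built.

import numpy as np

rng = np.random.default_rng(1)
L = 32                        # linear lattice size (the paper reaches L = 1024)
beta = 0.4406868              # close to the exact critical coupling of the 2D Ising model
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(s):
    for i in range(L):
        for j in range(L):
            nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2.0 * s[i, j] * nn           # energy change if spin (i, j) is flipped
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]

for sweep in range(200):
    metropolis_sweep(spins)

print("magnetization per spin:", spins.mean())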

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxation of savings corrects for the misperceptions of workers and thus their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.

Relevance:

100.00%

Publisher:

Abstract:

Far-field stresses are those present in a volume of rock prior to excavations being created. Estimates of the orientation and magnitude of far-field stresses, often used in mine design, are generally obtained by single-point measurements of stress, or large-scale, regional trends. Point measurements can be a poor representation of far-field stresses as a result of excavation-induced stresses and geological structures. For these reasons, far-field stress estimates can be associated with high levels of uncertainty. The purpose of this thesis is to investigate the practical feasibility, applications, and limitations of calibrating far-field stress estimates through tunnel deformation measurements captured using LiDAR imaging. A method that estimates the orientation and magnitude of excavation-induced principal stress changes through back-analysis of deformation measurements from LiDAR-imaged tunnels was developed and tested using synthetic data. If excavation-induced stress change orientations and magnitudes can be accurately estimated, they can be used in the calibration of far-field stress input to numerical models. LiDAR point clouds have proven useful in a number of underground applications, which motivates exploring their use in numerical model calibration. The back-analysis method is founded on the superposition of stresses and requires a two-dimensional numerical model of the deforming tunnel. Principal stress changes of known orientation and magnitude are applied to the model to create calibration curves. Estimation can then be performed by minimizing squared differences between the measured tunnel deformations and sets of calibration curve deformations. In addition to the back-analysis estimation method, a procedure consisting of previously existing techniques to measure tunnel deformation using LiDAR imaging was documented. Under ideal conditions, the back-analysis method estimated principal stress change orientations within ±5° and magnitudes within ±2 MPa. Results were comparable for four different tunnel profile shapes. Preliminary testing using plastic deformation, a rough tunnel profile, and profile occlusions suggests that the method can work under more realistic conditions. The results from this thesis set the groundwork for the continued development of a new, inexpensive, and efficient far-field stress estimate calibration method.
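A hedged, deliberately simplified sketch of the back-analysis step described above: modelled "calibration curve" displacements for candidate stress changes are compared against measured wall displacements, and the orientation/magnitude pair minimizing the sum of squared differences is selected. The forward model, noise level and search grids below are hypothetical, not the thesis implementation.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward model: radial wall displacement at profile angles `theta` caused
# by a principal stress change of magnitude `dp` (MPa) oriented at `alpha` (radians).
def modelled_displacement(theta, dp, alpha, compliance=0.5):
    return compliance * dp * np.cos(2.0 * (theta - alpha))

theta = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)   # measurement points around the profile
true_dp, true_alpha = 4.0, np.deg2rad(35.0)
measured = modelled_displacement(theta, true_dp, true_alpha) + 0.05 * rng.normal(size=theta.size)

# Sweep the "calibration curves" and keep the best least-squares match
best_sse, best_dp, best_alpha = np.inf, None, None
for dp in np.arange(0.0, 10.0, 0.25):
    for alpha_deg in np.arange(0.0, 180.0, 1.0):
        resid = measured - modelled_displacement(theta, dp, np.deg2rad(alpha_deg))
        sse = float(np.sum(resid ** 2))
        if sse < best_sse:
            best_sse, best_dp, best_alpha = sse, dp, alpha_deg

print("estimated stress change: %.2f MPa at %.0f deg (true: 4.00 MPa at 35 deg)" % (best_dp, best_alpha))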

Relevance:

100.00%

Publisher:

Abstract:

We have analyzed the Nd isotopic composition of both ancient seawater and detrital material from long sequences of carbonate oozes of the South Indian Ocean, ODP Site 756 (Ninety East Ridge, ~30°S, 1518 m water depth) and ODP Site 762 (Northwest Australian margin, 1360 m water depth). The measurements indicate that the epsilon-Nd changes in Indian seawater over the last 35 Ma result from changes in oceanic circulation and from large volcanic and continental-weathering Nd inputs. This highlights the diverse nature of those controls and their interconnections in a small area of the ocean. These new records, combined with those previously obtained at the equatorial ODP Sites 757 and 707 in the Indian Ocean (Gourlan et al., 2008, doi:10.1016/j.epsl.2007.11.054), establish that the distribution of intermediate seawater epsilon-Nd was uniform over most of the Indian Ocean from 35 Ma to 10 Ma within a geographical area extending from 40°S to the equator and from ~60°E to 120°E. However, the epsilon-Nd value of Indian Ocean seawater, which kept an almost constant value (about -7 to -8) from 35 to 15 Ma, rose by 3 epsilon-Nd units from 15 to 10 Ma. This sharp increase was caused by a radiogenic Nd enrichment of the water mass originating from the Pacific and flowing through the Indonesian Passage. Using a two-end-member model we calculated that the Nd transported to the Indian Ocean through the Indonesian Pathway was 1.7 times larger at 10 Ma than at 15 Ma. The Nd isotopic composition of ancient seawater and that of the sediment detrital component appear to be strongly correlated for some specific events. A first line of evidence occurs between 20 and 15 Ma, with two positive spikes recorded in both epsilon-Nd signals that are clearly induced by a volcanic crisis of, most likely, the St. Paul hotspot. A second is the very large epsilon-Nd decrease recorded at ODP Sites 756 and 762 during the past 10 Ma, which has not previously been observed. The synchronism between the epsilon-Nd decrease in seawater from 10 to 5 Ma and evidence of desertification in the western part of the nearby Australian continent suggests enhanced weathering inputs to this ocean from that continent as a result of climatic changes.
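The two-end-member calculation mentioned above follows the standard isotope mixing relation; as a hedged sketch (generic symbols only; the endmember Nd concentrations and epsilon values actually used in the study are not reproduced here):

\[
\varepsilon_{\mathrm{Nd}}^{\mathrm{mix}}
= \frac{f\,[\mathrm{Nd}]_{P}\,\varepsilon_{P} + (1 - f)\,[\mathrm{Nd}]_{I}\,\varepsilon_{I}}
       {f\,[\mathrm{Nd}]_{P} + (1 - f)\,[\mathrm{Nd}]_{I}},
\]

where f is the fraction of Pacific-derived water, [Nd]_P and [Nd]_I the endmember Nd concentrations, and epsilon_P and epsilon_I their isotopic compositions. Solving for f (or for the Pacific Nd flux, proportional to f[Nd]_P) at 15 Ma and at 10 Ma is how a relative change in Nd delivered through the Indonesian Pathway can be estimated.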

Relevance:

100.00%

Publisher:

Abstract:

The strength and geometry of the Atlantic meridional overturning circulation is tightly coupled to climate on glacial-interglacial and millennial timescales, but has proved difficult to reconstruct, particularly for the Last Glacial Maximum. Today, the return flow from the northern North Atlantic to lower latitudes associated with the Atlantic meridional overturning circulation reaches down to approximately 4,000 m. In contrast, during the Last Glacial Maximum this return flow is thought to have occurred primarily at shallower depths. Measurements of sedimentary 231Pa/230Th have been used to reconstruct the strength of circulation in the North Atlantic Ocean, but the effects of biogenic silica on 231Pa/230Th-based estimates remain controversial. Here we use measurements of 231Pa/230Th ratios and biogenic silica in Holocene-aged Atlantic sediments and simulations with a two-dimensional scavenging model to demonstrate that the geometry and strength of the Atlantic meridional overturning circulation are the primary controls of 231Pa/230Th ratios in modern Atlantic sediments. For the glacial maximum, a simulation of Atlantic overturning with a shallow, but vigorous circulation and bulk water transport at around 2,000 m depth best matched observed glacial Atlantic 231Pa/230Th values. We estimate that the transport of intermediate water during the Last Glacial Maximum was at least as strong as deep water transport today.
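For orientation, a hedged sketch of the steady-state basin budget that underlies such 231Pa/230Th interpretations (generic symbols, not the study's scavenging-model variables):

\[
\beta_{\mathrm{Pa}} = F^{\mathrm{sed}}_{\mathrm{Pa}} + E_{\mathrm{Pa}},
\qquad
\beta_{\mathrm{Th}} = F^{\mathrm{sed}}_{\mathrm{Th}} + E_{\mathrm{Th}},
\qquad
\frac{\beta_{\mathrm{Pa}}}{\beta_{\mathrm{Th}}} \approx 0.093,
\]

where beta denotes the uranium-supported production integrated over the basin, F^sed the flux scavenged to the sediments, and E the net export by the overturning circulation. Because 230Th is scavenged far more efficiently than 231Pa, stronger or deeper-reaching overturning exports more 231Pa and pulls sedimentary 231Pa/230Th below the production ratio of 0.093.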

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients who were currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified for body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision and was similar to the model developed from the full data set. This study suggests that optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models and when the implementation of the optimally designed study deviated from the nominal design.
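As a hedged sketch of the D-optimality idea referred to above (this is not PFIM and not the full population Fisher information; it is a first-order, fixed-effects-only individual information matrix for a one-compartment model with first-order absorption and proportional residual error, with purely illustrative parameter values and candidate designs):

import numpy as np

theta = np.array([1.5, 0.7, 8.0])     # ka (1/h), CL (l/h), V (l) -- hypothetical values
sigma_prop = 0.2                      # proportional residual error
dose = 40.0                           # mg, assuming complete bioavailability

def conc(t, p):
    ka, cl, v = p
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, p, h=1e-5):
    t = np.asarray(times, dtype=float)
    y = conc(t, p)
    # Numerical sensitivities of the prediction with respect to each parameter
    J = np.column_stack([(conc(t, p + h * np.eye(3)[i]) - y) / h for i in range(3)])
    w = 1.0 / (sigma_prop * y) ** 2   # weights implied by the proportional error model
    return J.T @ (J * w[:, None])

design_a = [0.25, 3.0, 11.5]          # roughly mirrors the three optimal sampling windows
design_b = [1.0, 2.0, 3.0]            # a clustered, less informative alternative

for name, d in [("spread design", design_a), ("clustered design", design_b)]:
    sign, logdet = np.linalg.slogdet(fim(d, theta))
    print(name, "log det(FIM) =", round(logdet, 2))

A larger log-determinant indicates a more informative (more nearly D-optimal) set of sampling times.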

Relevance:

100.00%

Publisher:

Abstract:

The authors report the results of two studies that model the antecedents of goal congruence in retail-service settings. They draw the antecedents from extant research and propose that goal congruence is related to employees' perceptions of morale, leadership support, fairness in reward allocation, and empowerment. They hypothesize and test direct and indirect relationships between these constructs and goal congruence. Results of structural equation modeling suggest an important mediating role for morale and interesting areas of variation across retail and service settings.

Relevance:

100.00%

Publisher:

Abstract:

Aims: (1) To quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>=20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL 4.7 l h⁻¹ 70 kg⁻¹; intercompartmental clearance (CLic) 1 l h⁻¹ 70 kg⁻¹; volume of the central compartment (V1) 19.5 l 70 kg⁻¹; volume of the peripheral compartment (V2) 11.2 l 70 kg⁻¹. Conclusions: Using a fixed dose of aminoglycoside will achieve 35% of typical patients within 80-125% of a required dose. Covariate-guided predictions increase this to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
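A hedged sketch of the kind of covariate model implied above, in which clearance has a renal component linear in predicted creatinine clearance plus a nonrenal component, with allometric weight scaling. The functional forms and the numeric values of cl_renal_std and cl_nonrenal_std are illustrative assumptions, not the fitted NONMEM estimates; the Cockcroft-Gault predictor is the standard SI-unit form.

def cockcroft_gault_crcl(age_yr, weight_kg, scr_umol_l, female):
    # Predicted creatinine clearance (ml/min), Cockcroft-Gault, serum creatinine in umol/l
    crcl = (140.0 - age_yr) * weight_kg / (0.815 * scr_umol_l)
    return crcl * (0.85 if female else 1.0)

def aminoglycoside_cl(age_yr, weight_kg, scr_umol_l, female,
                      cl_renal_std=4.0, cl_nonrenal_std=0.7):
    # Typical-patient clearance (l/h), standardized to 70 kg; constants are illustrative
    crcl = cockcroft_gault_crcl(age_yr, weight_kg, scr_umol_l, female)   # ml/min
    size = (weight_kg / 70.0) ** 0.75        # allometric scaling of clearance
    renal = cl_renal_std * (crcl / 100.0)    # linear in predicted CrCL, 100 ml/min reference
    return (renal + cl_nonrenal_std) * size

print(round(aminoglycoside_cl(45, 70, 80, female=False), 2), "l/h for a 45-year-old, 70 kg male")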

Relevance:

100.00%

Publisher:

Abstract:

A novel class of nonlinear, visco-elastic rheologies has recently been developed by MUHLHAUS et al. (2002a, b). The theory was originally developed for the simulation of large deformation processes including folding and kinking in multi-layered visco-elastic rock. The orientation of the layer surfaces, or slip planes in the context of crystallographic slip, is determined by the normal vector, the so-called director, of these surfaces. Here the model (MUHLHAUS et al., 2002a, b) is generalized to include thermal effects; it is shown that in 2-D steady states the director is given by the gradient of the flow potential. The model is applied to anisotropic simple shear where the directors are initially parallel to the shear direction. The relative effects of textural hardening and thermal softening are demonstrated. We then turn to natural convection and compare the time evolution and approximately steady states of isotropic and anisotropic convection for a Rayleigh number Ra = 5.64×10⁵ for aspect ratios of the experimental domain of 1 and 2, respectively. The isotropic case has a simple steady-state solution, whereas in the orthotropic convection model patterns evolve continuously in the core of the convection cell, which makes only a near-steady condition possible. This near-steady-state condition shows well-aligned boundary layers, and the number of convection cells that develop appears to be reduced in the orthotropic case. At the moderate Rayleigh numbers explored here we found only minor influences in the change from aspect ratio one to two in the model domain.
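Two relations implicit in the abstract, written out as a hedged sketch (conventional symbol definitions, not taken from the paper): since the layer surfaces align with streamlines in a 2-D steady state, the director, being their unit normal, points along the gradient of the flow potential (stream function) psi, and the quoted Rayleigh number follows the usual definition for a basally heated layer:

\[
\mathbf{n} = \frac{\nabla \psi}{\lVert \nabla \psi \rVert},
\qquad
Ra = \frac{\rho\, g\, \alpha\, \Delta T\, H^{3}}{\kappa\, \eta} = 5.64 \times 10^{5},
\]

where rho is density, g gravity, alpha thermal expansivity, Delta T the temperature drop across the layer, H the layer depth, kappa the thermal diffusivity and eta a reference viscosity.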