Abstract:
Transverse galloping is a type of aeroelastic instability characterized by oscillations perpendicular to the wind direction, large amplitude and low frequency, which appears in some elastic two-dimensional bluff bodies when they are subjected to an incident flow, provided that the flow velocity exceeds a threshold critical value. Understanding the galloping phenomenon of different cross-sectional geometries is important in a number of engineering applications: for energy harvesting the interest lies in strongly unstable configurations, but in other cases the purpose is to avoid this type of aeroelastic phenomenon. In this paper the aim is to analyze the transverse galloping behavior of rhombic bodies to understand, on the one hand, the dependence of the instability on a geometrical parameter such as the relative thickness and, on the other hand, why this cross-section shape, which is generally unstable, shows a small range of relative thickness values where it is stable. In particular, the behavior of the non-galloping rhombus-shaped prism is revisited through wind tunnel experiments. The bodies are allowed to move freely perpendicular to the incoming flow, and the amplitude of movement and the pressure distributions on the surfaces are measured.
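The standard quasi-steady screen for this kind of instability is the Glauert-Den Hartog criterion; the sketch below is a minimal illustration of that well-known criterion, not the analysis of the paper, and the coefficient values used are made up.

```python
# Glauert-Den Hartog quasi-steady criterion for transverse galloping:
# a section is prone to galloping when H = dCL/dalpha + CD < 0, i.e. the
# aerodynamic damping of the plunge motion becomes negative.

def den_hartog_coefficient(dcl_dalpha, cd):
    """H = dCL/dalpha + CD, evaluated about zero angle of attack."""
    return dcl_dalpha + cd

def galloping_prone(dcl_dalpha, cd):
    """True when the quasi-steady criterion predicts instability."""
    return den_hartog_coefficient(dcl_dalpha, cd) < 0.0

# Illustrative (hypothetical) force coefficients for a bluff section:
print(galloping_prone(-2.7, 1.2))  # steep negative lift slope -> True
print(galloping_prone(0.5, 1.0))   # stable section -> False
```

The sign convention assumed here is the usual one: a lift-curve slope negative enough to overwhelm the drag makes the net aerodynamic damping negative.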
Abstract:
The reason that the indefinite exponential increase in the number of one's ancestors does not take place is found in the law of sibling interference, which can be expressed by the following simple equation: (N_n / ASZ) × 2 = N_{n+1}, where N_n is the number of ancestors in the nth generation, ASZ is the average sibling size of these ancestors, and N_{n+1} is the number of ancestors in the next older generation (n + 1). Accordingly, the exponential increase in the number of one's ancestors is an initial anomaly that occurs while ASZ remains at 1. Once ASZ begins to exceed 1, the rate of increase in the number of ancestors is progressively curtailed, falling further and further behind the exponential increase rate. Eventually, ASZ reaches 2, and at that point, the number of ancestors stops increasing for two generations. These two generations, named AN SA and AN SA + 1, are the most critical in the ancestry, for one's ancestors at that point come to represent all the progeny-produced adults of the entire ancestral population. Thereafter, the fate of one's ancestors becomes the fate of the entire population. If the population to which one belongs is a successful, slowly expanding one, the number of ancestors would slowly decline as one moves toward the remote past. This is because ASZ would exceed 2. Only when ASZ is less than 2 would the number of ancestors increase beyond the AN SA and AN SA + 1 generations.
Since the above is an indication of a failing population on the way to extinction, there had to be the previous AN SA involving a far greater number of individuals for such a population. Simulations indicated that for a member of a continuously successful population, the AN SA ancestors might have numbered as many as 5.2 million, the AN SA generation being the 28th generation in the past. However, because of the law of increasingly irrelevant remote ancestors, only a very small fraction of the AN SA ancestors would have left genetic traces in the genome of each descendant of today.
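The recursion stated in the abstract is easy to make concrete. The sketch below simply iterates N_{n+1} = (N_n / ASZ_n) × 2 for an arbitrary ASZ schedule; the schedules passed in are illustrative, not data from the study.

```python
# Sibling-interference recursion from the abstract:
#   N_{n+1} = (N_n / ASZ_n) * 2
# Ancestors double only while the average sibling size ASZ stays at 1;
# growth stalls when ASZ reaches 2 and reverses when ASZ exceeds 2.

def ancestor_counts(generations, asz_of):
    """Return [N_0, ..., N_generations], with N_0 = 1 (oneself)."""
    counts = [1.0]
    for n in range(generations):
        counts.append(counts[-1] / asz_of(n) * 2.0)
    return counts

print(ancestor_counts(5, lambda n: 1.0))  # exponential phase: 1, 2, 4, 8, 16, 32
print(ancestor_counts(3, lambda n: 2.0))  # ASZ = 2: the count stops increasing
```

With ASZ held above 2 the same loop produces the slow decline toward the remote past that the abstract describes for a successful, slowly expanding population.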
Abstract:
Geological observations, using "free-diving" techniques (Figure I), were made in September 1960 and March 1961 along two continuous profiles in the outer Kiel Harbor, Germany, and at several other spot locations in the Western Baltic Sea. A distinct terrace, cut in Pleistocene glacial till, was found that was covered with varying amounts and types of recent deposits. Hand samples were taken of the sea-floor sediments, and grain-size distribution was determined both for the sediment as a whole and for its heavy mineral fraction. From the laboratory and field observations it was possible to recognize two distinct types of sand: Type I, sand resulting from transportation over a long period of time and distance, and Type II, sand resulting from little transportation and found today near to where it was formed. Several criteria related to the agent of movement could be used to classify the nature of the sediment: (1) undisturbed (the sediment cover of the Pleistocene terrace is essentially undisturbed), (2) mixed by organisms, (3) transported by water movements (sediment found with ripple marks, etc.), and (4) "scoured" (the movement of individual particles of sediment from around larger boulders causes a slow downward movement or "creeping" due to both the force of gravity and bottom currents). These observations and laboratory studies are discussed concerning their relationship to the formation of residual sediments, the direction of sand transportation, and the intensive erosion on the outer edge of the wave-cut platform found in this part of the Baltic Sea.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The role of the collective antisymmetric state in entanglement creation by spontaneous emission in a system of two non-overlapping two-level atoms has been investigated. Populations of the collective atomic states and the Wootters entanglement measure (concurrence) for two sets of initial atomic conditions are calculated and illustrated graphically. Calculations include the dipole-dipole interaction and a spatial separation between the atoms, such that the antisymmetric state of the system is included throughout, even for small interatomic separations. It is shown that spontaneous emission can lead to a transient entanglement between the atoms even if the atoms were prepared initially in an unentangled state. It is found that the ability of spontaneous emission to create transient entanglement relies on the absence of population in the collective symmetric state of the system. For the initial state of only one atom excited, entanglement builds up rapidly in time and reaches a maximum for parameter values corresponding roughly to zero population in the symmetric state. On the other hand, for the initial condition of both atoms excited, the atoms remain unentangled until the symmetric state is depopulated. A simple physical interpretation of these results is given in terms of the diagonal states of the density matrix of the system. We also study entanglement creation in a system of two non-identical atoms of different transition frequencies. It is found that the entanglement between the atoms can be enhanced compared to that for identical atoms, and can decay with two different time scales resulting from the coherent transfer of the population from the symmetric to the antisymmetric state. In addition, it was found that a decaying initial entanglement between the atoms can display a revival behaviour.
Abstract:
We analyse the relation between the entanglement and spin-squeezing parameter in the two-atom Dicke model and identify the source of the discrepancy recently reported by Banerjee (2001 Preprint quant-ph/0110032) and Zhou et al (2002 J. Opt. B: Quantum Semiclass. Opt. 4 425), namely that one can observe entanglement without spin squeezing. Our calculations demonstrate that there are two criteria for entanglement, one associated with the two-photon coherences that create two-photon entangled states, and the other associated with populations of the collective states. We find that the spin-squeezing parameter correctly predicts entanglement in the two-atom Dicke system only if it is associated with two-photon entangled states, but fails to predict entanglement when it is associated with the entangled symmetric state. This explicitly identifies the source of the discrepancy and explains why the system can be entangled without spin squeezing. We illustrate these findings with three examples of the interaction of the system with thermal, classical squeezed vacuum, and quantum squeezed vacuum fields.
Abstract:
We show that the two definitions of spin squeezing extensively used in the literature [M. Kitagawa and M. Ueda, Phys. Rev. A 47, 5138 (1993) and D. J. Wineland et al., Phys. Rev. A 50, 67 (1994)] give different predictions of entanglement in the two-atom Dicke system. We analyze differences between the definitions and show that the spin squeezing parameter of Kitagawa and Ueda is a better measure of entanglement than the commonly used spectroscopic spin squeezing parameter. We illustrate this relation by examining different examples of a driven two-atom Dicke system in which spin squeezing and entanglement arise dynamically. We give an explanation of the source of the difference using the negativity criterion for entanglement.
Abstract:
Aims: (1) to quantify the random and predictable components of variability in aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>= 20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL, 4.7 l h^-1 70 kg^-1; intercompartmental clearance (CLic), 1 l h^-1 70 kg^-1; volume of central compartment (V1), 19.5 l 70 kg^-1; volume of peripheral compartment (V2), 11.2 l 70 kg^-1. Conclusions: Using a fixed dose of aminoglycoside will achieve 35% of typical patients within 80-125% of a required dose. Covariate-guided predictions increase this up to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
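The model structure described above (nonrenal clearance plus a term linear in predicted creatinine clearance) can be sketched as follows. Cockcroft-Gault is a standard creatinine-clearance predictor, assumed here only for illustration; the intercept and slope in `aminoglycoside_cl` are hypothetical, not the study's estimates.

```python
# Sketch of CL = CL_nonrenal + slope * CLcr, the renal/nonrenal split
# described in the abstract.  Coefficients are illustrative placeholders.

def cockcroft_gault_clcr(age_yr, weight_kg, scr_mg_dl, female=False):
    """Predicted creatinine clearance (ml/min) by Cockcroft-Gault."""
    clcr = (140.0 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return 0.85 * clcr if female else clcr

def aminoglycoside_cl(clcr_ml_min, cl_nonrenal=0.3, renal_slope=0.05):
    """Total clearance (l/h): nonrenal component + linear renal component.
    Both coefficients are hypothetical, for illustration only."""
    return cl_nonrenal + renal_slope * clcr_ml_min

clcr = cockcroft_gault_clcr(age_yr=40, weight_kg=72, scr_mg_dl=1.0)
print(clcr)                     # 100.0 ml/min
print(aminoglycoside_cl(clcr))  # ~5.3 l/h with the illustrative coefficients
```

A fixed dose scaled by such a covariate-predicted clearance is exactly the kind of covariate-guided prediction whose performance the abstract quantifies.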
Abstract:
Adsorption of ethylene and ethane on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers is studied in detail to investigate the packing efficiency, the two-dimensional critical temperature, and the variation of the isosteric heat of adsorption with loading and temperature. Here we used a Monte Carlo simulation method with a grand canonical ensemble. A number of two-center Lennard-Jones (2C-LJ) potential models are investigated to study the impact of the choice of potential model on the description of adsorption behavior. We chose two 2C-LJ potential models for our investigation: (i) the UA-TraPPE-LJ model of Martin and Siepmann (J. Phys. Chem. B 1998, 102, 2569-2577) for ethane and of Wick et al. (J. Phys. Chem. B 2000, 104, 8008-8016) for ethylene, and (ii) the AUA4-LJ model of Ungerer et al. (J. Chem. Phys. 2000, 112, 5499-5510) for ethane and of Bourasseau et al. (J. Chem. Phys. 2003, 118, 3020-3034) for ethylene. These models are used to study the adsorption of ethane and ethylene on graphitized thermal carbon black. It is found that the solid-fluid binary interaction parameter is a function of adsorbate and temperature, and the adsorption isotherms and heat of adsorption are well described by both the UA-TraPPE and AUA models, although the UA-TraPPE model performs slightly better. However, the local distributions predicted by these two models are slightly different. The two models are also used to explore two-dimensional condensation on graphitized thermal carbon black; the resulting two-dimensional critical temperatures are 110 K for ethylene and 120 K for ethane.
Abstract:
The prediction of watertable fluctuations in a coastal aquifer is important for coastal management. However, most previous approaches have been based on the one-dimensional Boussinesq equation, neglecting variations in the coastline and beach slope. In this paper, a closed-form analytical solution for a two-dimensional unconfined coastal aquifer bounded by a rhythmic coastline is derived. In the new model, the effect of beach slope is also included, a feature that has not been considered in previous two-dimensional approximations. Three small parameters, the shallow-water parameter (epsilon), the amplitude parameter (a) and the coastline parameter (beta), are used in the perturbation approximation. The numerical results demonstrate the significant influence of both the coastline shape and the beach slope on tide-driven coastal groundwater fluctuations.
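For context, the classical one-dimensional baseline that such two-dimensional solutions generalize is the well-known exponentially damped, phase-lagged tidal response (the Ferris/Jacob solution of the linearized equation). The sketch below evaluates that 1-D solution; the amplitude, period and diffusivity values are illustrative, not from the paper.

```python
import math

# Classical 1-D solution for tide-driven head fluctuations inland of the shore:
#   h(x, t) = A * exp(-k x) * cos(omega t - k x),  k = sqrt(omega / (2 D)),
# where D is the aquifer diffusivity.  The fluctuation decays exponentially
# and lags in phase with distance from the coastline.

def tidal_head(x_m, t_s, amplitude_m, omega_rad_s, diffusivity_m2_s):
    k = math.sqrt(omega_rad_s / (2.0 * diffusivity_m2_s))
    return amplitude_m * math.exp(-k * x_m) * math.cos(omega_rad_s * t_s - k * x_m)

omega = 2.0 * math.pi / 44700.0   # semidiurnal tide, ~12.4 h period
print(tidal_head(0.0, 0.0, 1.0, omega, 0.5))    # 1.0: full amplitude at the shore
print(tidal_head(200.0, 0.0, 1.0, omega, 0.5))  # attenuated and phase-lagged inland
```

The paper's contribution is precisely what this baseline lacks: dependence on a rhythmic (alongshore-varying) coastline and on beach slope.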
Abstract:
Optimal sampling times are found for a study in which one of the primary purposes is to develop a model of the pharmacokinetics of itraconazole in patients with cystic fibrosis for both capsule and solution doses. The optimal design is expected to produce reliable estimates of population parameters for two different structural PK models. Data collected at these sampling times are also expected to provide the researchers with sufficient information to reasonably discriminate between the two competing structural models.
Abstract:
The aim of this study was to implement a process model to simulate the dynamic behaviour of a pilot-scale process for anaerobic two-stage digestion of sewage sludge. The model was implemented to support experimental investigations of the anaerobic two-stage digestion process. The model concept, implemented in the simulation software package MATLAB(TM)/Simulink(R), is a derivative of the IWA Anaerobic Digestion Model No. 1 (ADM1), developed by the IWA task group for mathematical modelling of anaerobic processes. In the present study the original model concept has been adapted and applied to replicate a two-stage digestion process. Testing procedures, including balance checks and 'benchmarking' tests, were carried out to verify the accuracy of the implementation. These combined measures ensured a model implementation free of numerical inconsistencies. Parameters for both the thermophilic and the mesophilic process stages have been estimated successfully using data from lab-scale experiments described in the literature. Due to the high number of parameters in the structured model, it was necessary to develop a customised procedure that limited the range of parameters to be estimated. The accuracy of the optimised parameter sets has been assessed against experimental data from pilot-scale experiments. Under these conditions, the model predicted the dynamic behaviour of a two-stage digestion process at pilot scale reasonably well.
Abstract:
The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope with back-scattered electron analysis or from an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, we require an experimental method which can accurately measure both sectional and three-dimensional properties. Micro cone-beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements into three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone-beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
Abstract:
We compare the Q parameter obtained from the semi-analytical model with scalar and vector models for two realistic transmission systems: first, a linear system with a compensated dispersion map, and second, a soliton transmission system.
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
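Case (i) above, the non-parametric moment estimate of the extremum position, amounts to taking the intensity-weighted mean of the sampled frequencies; a Lorentzian model function of the kind fitted in case (ii) is also shown. This is a minimal sketch of those two estimators, with illustrative frequencies and a noise-free curve, not the authors' error analysis.

```python
# Case (i): moment-based estimate of the resonance position -- the
# intensity-weighted mean frequency over the fixed measurement grid.

def extremum_by_moments(freqs_ghz, intensities):
    """First moment of the sampled resonance curve."""
    total = sum(intensities)
    return sum(f * i for f, i in zip(freqs_ghz, intensities)) / total

def lorentzian(f, f0, fwhm, peak):
    """Lorentzian lineshape of the kind fitted by least squares in case (ii)."""
    return peak / (1.0 + ((f - f0) / (fwhm / 2.0)) ** 2)

# Noise-free samples symmetric about 10.85 GHz recover the centre exactly:
freqs = [10.75, 10.80, 10.85, 10.90, 10.95]
ints = [lorentzian(f, 10.85, 0.05, 1.0) for f in freqs]
print(extremum_by_moments(freqs, ints))  # ~10.85 GHz
```

On asymmetric grids or with noisy intensities the moment estimate is biased, which is exactly the kind of error propagation the paper quantifies analytically.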