991 results for "Two parameter"
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
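The two-step logic described above (downscale the geophysical parameter, then relate it to hydraulic conductivity) can be illustrated with a deliberately simplified sketch. This is not the authors' Bayesian sequential simulation: the 1-D grid, the exponential residual covariance, the linear petrophysical relation and all variable names are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D profile standing in for the regional-scale domain.
n = 200
x = np.arange(n)
true_log_sigma = np.sin(x / 15.0) + 0.1 * rng.standard_normal(n)    # "true" log electrical conductivity
coarse = np.repeat(true_log_sigma.reshape(-1, 10).mean(axis=1), 10)  # low-resolution ERT-like estimate
wells = np.arange(5, n, 40)                                           # sparse borehole locations
log_K_obs = 2.0 * true_log_sigma[wells] - 5.0 + 0.05 * rng.standard_normal(wells.size)

# Step 1: stochastic downscaling -- add spatially correlated residuals to the coarse field.
def downscale(coarse_field, corr_len=8.0, sill=0.05):
    d = np.abs(np.subtract.outer(x, x))
    cov = sill * np.exp(-d / corr_len)            # exponential covariance (assumed)
    return coarse_field + rng.multivariate_normal(np.zeros(n), cov)

# Step 2: petrophysical mapping calibrated at the collocated borehole samples.
slope, intercept = np.polyfit(true_log_sigma[wells], log_K_obs, 1)

def realization():
    fine_sigma = downscale(coarse)
    scatter = 0.05 * rng.standard_normal(n)       # residual petrophysical scatter (assumed)
    return slope * fine_sigma + intercept + scatter

ensemble = np.array([realization() for _ in range(50)])
print("ensemble-mean log K at the wells:", ensemble.mean(axis=0)[wells].round(2))
```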
Abstract:
BACKGROUND Measurement of HbA1c is the most important parameter to assess glycemic control in diabetic patients. Different point-of-care devices for HbA1c are available. The aim of this study was to evaluate two point-of-care testing (POCT) analyzers (DCA Vantage from Siemens and Afinion from Axis-Shield). We studied the bias and precision as well as interference from carbamylated hemoglobin. METHODS Bias of the POCT analyzers was obtained by measuring 53 blood samples from diabetic patients with a wide range of HbA1c, 4%-14% (20-130 mmol/mol), and comparing the results with those obtained by the laboratory method: HPLC HA 8160 Menarini. Precision was assessed by 20 successive determinations of two samples with low (4.2%; 22 mmol/mol) and high (9.5%; 80 mmol/mol) HbA1c values. The possible interference from carbamylated hemoglobin was studied using 25 samples from patients with chronic renal failure. RESULTS The means of the differences between measurements performed by each POCT analyzer and the laboratory method (95% confidence interval) were: 0.28% (p<0.005) (0.10-0.44) for DCA and 0.27% (p<0.001) (0.19-0.35) for Afinion. Correlation coefficients were r=0.973 for DCA and r=0.991 for Afinion. The mean biases observed using samples from chronic renal failure patients were 0.2 (range -0.4 to 0.4) for DCA and 0.2 (-0.2 to 0.5) for Afinion. Imprecision results were: CV=3.1% (high HbA1c) and 2.97% (low HbA1c) for DCA, and CV=1.95% (high HbA1c) and 2.66% (low HbA1c) for Afinion. CONCLUSIONS Both POCT analyzers for HbA1c show good correlation with the laboratory method and acceptable precision.
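The comparison statistics reported above (bias with a 95% confidence interval, correlation against the laboratory method, and CV from replicate runs) can be reproduced with a short sketch. The data below are invented for illustration, not the study measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lab = rng.uniform(4.0, 14.0, 53)                 # laboratory HbA1c (%), hypothetical
poct = lab + 0.28 + rng.normal(0.0, 0.6, 53)     # POCT results with a small positive bias

diff = poct - lab
bias = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(diff.size)
t_crit = stats.t.ppf(0.975, diff.size - 1)
ci = (bias - t_crit * sem, bias + t_crit * sem)  # 95% CI of the mean difference
r, p = stats.pearsonr(lab, poct)

replicates = rng.normal(9.5, 0.29, 20)           # 20 successive runs of a high control
cv = 100 * replicates.std(ddof=1) / replicates.mean()

print(f"bias = {bias:.2f}% (95% CI {ci[0]:.2f} to {ci[1]:.2f}), r = {r:.3f}, CV = {cv:.2f}%")
```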
Abstract:
Domain growth in a system with nonconserved order parameter is studied. We simulate the usual Ising model for binary alloys with concentration 0.5 on a two-dimensional square lattice by Monte Carlo techniques. Measurements of the energy, jump-acceptance ratio, and order parameters are performed. Dynamics based on the diffusion of a single vacancy in the system gives a growth law faster than the usual Allen-Cahn law. Allowing vacancy jumps to next-nearest-neighbor sites is essential to prevent vacancy trapping in the ordered regions. By measuring local order parameters we show that the vacancy prefers to be in the disordered regions (domain boundaries). This naturally concentrates the atomic jumps in the domain boundaries, accelerating the growth compared with the usual exchange mechanism that causes jumps to be homogeneously distributed on the lattice.
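A minimal sketch of the vacancy-driven Metropolis dynamics described above is given below. The lattice size, temperature, coupling and the use of checkerboard (antiferromagnetic-like) order as the nonconserved order parameter are illustrative assumptions, not the paper's exact setup; atoms are coded +1/-1 and the single vacancy as 0, and next-nearest-neighbour jumps are allowed to avoid trapping.

```python
import numpy as np

rng = np.random.default_rng(2)
L, J, T = 32, 1.0, 0.5
lattice = rng.choice([-1, 1], size=(L, L))
vac = (0, 0)
lattice[vac] = 0                                   # introduce the single vacancy

# nearest- and next-nearest-neighbour jump vectors
moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def local_energy(grid, i, j):
    """Bond energy of site (i, j) with its four nearest neighbours (periodic boundaries)."""
    s = grid[i, j]
    nn = grid[(i + 1) % L, j] + grid[(i - 1) % L, j] + grid[i, (j + 1) % L] + grid[i, (j - 1) % L]
    return J * s * nn                              # J > 0 favours checkerboard order

def step(grid, vac):
    """One attempted vacancy-atom exchange with Metropolis acceptance."""
    di, dj = moves[rng.integers(len(moves))]
    new = ((vac[0] + di) % L, (vac[1] + dj) % L)   # atom chosen to jump into the vacancy
    atom = grid[new]
    e_before = local_energy(grid, *new)
    grid[new], grid[vac] = 0, atom                 # trial exchange: vacancy <-> atom
    e_after = local_energy(grid, *vac)
    dE = e_after - e_before
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return new                                 # accepted: vacancy moved
    grid[vac], grid[new] = 0, atom                 # rejected: undo the exchange
    return vac

parity = (-1) ** np.add.outer(np.arange(L), np.arange(L))
for _ in range(20000):
    vac = step(lattice, vac)
order = abs((parity * lattice).sum()) / (L * L - 1)
print("staggered order parameter after 20000 vacancy moves:", round(order, 3))
```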
Abstract:
In this paper, we study dynamical aspects of the two-dimensional (2D) gonihedric spin model using both numerical and analytical methods. This spin model has vanishing microscopic surface tension and it actually describes an ensemble of loops living on a 2D surface. The self-avoidance of loops is parametrized by a parameter κ. The κ=0 model can be mapped to one of the six-vertex models discussed by Baxter, and it does not have critical behavior. We have found that allowing for κ≠0 does not lead to critical behavior either. Finite-size effects are rather severe, and in order to understand these effects, a finite-volume calculation for non-self-avoiding loops is presented. This model, like its 3D counterpart, exhibits very slow dynamics, but a careful analysis of dynamical observables reveals nonglassy evolution (unlike its 3D counterpart). We find, also in this κ=0 case, the law that governs the long-time, low-temperature evolution of the system, through a dual description in terms of defects. A power, rather than logarithmic, law for the approach to equilibrium has been found.
Abstract:
The energy and structure of a dilute hard-disk Bose gas are studied in the framework of a variational many-body approach based on a Jastrow correlated ground-state wave function. The asymptotic behaviors of the radial distribution function and the one-body density matrix are analyzed after solving the Euler equation obtained by a free minimization of the hypernetted chain energy functional. Our results show important deviations from those of the available low-density expansions, already at gas parameter values x ~ 0.001. The condensate fraction in 2D is also computed and found to be generally lower than the 3D one at the same x.
Abstract:
The least limiting water range (LLWR) has been used as an indicator of soil physical quality as it represents, in a single parameter, the soil physical properties directly linked to plant growth, with the exception of temperature. The usual procedure for obtaining the LLWR involves determination of the water retention curve (WRC) and the soil resistance to penetration curve (SRC) in soil samples with undisturbed structure in the laboratory. Determination of the WRC and SRC using field measurements (in situ) is preferable, but requires appropriate instrumentation. The objective of this study was to determine the LLWR from the data collected for determination of WRC and SRC in situ using portable electronic instruments, and to compare those determinations with the ones made in the laboratory. Samples were taken from the 0.0-0.1 m layer of a Latossolo Vermelho distrófico (Oxisol). Two methods were used for quantification of the LLWR: the traditional, with measurements made in soil samples with undisturbed structure; and in situ, with measurements of water content (θ), soil water potential (Ψ), and soil resistance to penetration (SR) through the use of sensors. The in situ measurements of θ, Ψ and SR were taken over a period of four days of soil drying. At the same time, samples with undisturbed structure were collected for determination of bulk density (BD). Due to the limitations of measurement of Ψ by tensiometer, additional determinations of θ were made with a psychrometer (in the laboratory) at the Ψ of -1500 kPa. The results show that it is possible to determine the LLWR by the θ, Ψ and SR measurements using the suggested approach and instrumentation. The quality of fit of the SRC was similar in both strategies. In contrast, the θ and Ψ in situ measurements, associated with those measured with a psychrometer, produced a better WRC description. The estimates of the LLWR were similar in both methodological strategies. The quantification of LLWR in situ can be achieved in 10% of the time required for the traditional method.
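Once a WRC and an SRC have been fitted, assembling the LLWR is a matter of intersecting the four conventional limits. The sketch below uses simplified power-law forms of the kind commonly fitted in LLWR studies and the conventional limits (field capacity at 10 kPa, permanent wilting point at 1500 kPa, air-filled porosity of 0.10 m3 m-3, and a 2 MPa penetration-resistance threshold); every coefficient is a hypothetical illustration, not a result of this study.

```python
import numpy as np
from scipy.optimize import brentq

a, b = 0.50, -0.10         # hypothetical WRC coefficients: theta = a * psi**b (psi in kPa, positive)
c, d, e = 0.03, -2.5, 4.0  # hypothetical SRC coefficients: SR = c * theta**d * BD**e (SR in MPa)
BD = 1.25                  # bulk density, Mg m-3
porosity = 1 - BD / 2.65   # total porosity assuming a particle density of 2.65 Mg m-3

theta_fc  = a * 10.0 ** b                # water content at field capacity (10 kPa)
theta_wp  = a * 1500.0 ** b              # water content at the permanent wilting point (1500 kPa)
theta_afp = porosity - 0.10              # air-filled porosity limit of 0.10 m3 m-3
theta_sr  = brentq(lambda t: c * t ** d * BD ** e - 2.0, 0.05, 0.6)  # SR = 2 MPa limit

upper = min(theta_fc, theta_afp)
lower = max(theta_wp, theta_sr)
print(f"LLWR = {max(upper - lower, 0.0):.3f} m3 m-3 (upper {upper:.3f}, lower {lower:.3f})")
```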
Abstract:
We consider a Potts model diluted by fully frustrated Ising spins. The model corresponds to a fully frustrated Potts model with variables having an integer absolute value and a sign. This model presents precursor phenomena of a glass transition in the high-temperature region. We show that the onset of these phenomena can be related to a thermodynamic transition. Furthermore, this transition can be mapped onto a percolation transition. We numerically study the phase diagram in two dimensions (2D) for this model with frustration and without disorder and we compare it to the phase diagram of (i) the model with frustration and disorder and (ii) the ferromagnetic model. Introducing a parameter that connects the three models, we generalize the exact expression of the ferromagnetic Potts transition temperature in 2D to the other cases. Finally, we estimate the dynamic critical exponents related to the Potts order parameter and to the energy.
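The exact square-lattice Potts transition temperature referred to above follows from the self-duality condition exp(J/(k_B T_c)) - 1 = sqrt(q); a quick numerical check for a few q values (temperatures in units of J/k_B):

```python
import math

for q in (2, 3, 4, 10):
    tc = 1.0 / math.log(1.0 + math.sqrt(q))   # self-dual point of the q-state Potts model
    print(f"q = {q:2d}  T_c = {tc:.4f}")
```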
Abstract:
We deal with a classical predictive mechanical system of two spinless charges in which radiation is taken into account and there are no external fields. The terms (2,2)p_a of the expansion of the Hamilton-Jacobi momenta in powers of the charges are calculated. Using these, together with known previous results, we can obtain the p_a up to fourth order. We then calculate the radiated energy and 3-momentum in a scattering process as functions of the impact parameter and of the incident energy and incident 3-momentum, respectively. Scattering cross-sections are also calculated. Good agreement with well-known results, including those of quantum electrodynamics, has been found.
Abstract:
We present a combined shape and mechanical anisotropy evolution model for a two-phase inclusion-bearing rock subject to large deformation. A single elliptical inclusion embedded in a homogeneous but anisotropic matrix is used to represent a simplified shape evolution enforced on all inclusions. The mechanical anisotropy develops due to the alignment of elongated inclusions. The effective anisotropy is quantified using the differential effective medium (DEM) approach. The model can be run for any deformation path and an arbitrary viscosity ratio between the inclusion and host phase. We focus on the case of simple shear and weak inclusions. The shape evolution of the representative inclusion is largely insensitive to the anisotropy development and to parameter variations in the studied range. An initial hardening stage is observed up to a shear strain of gamma = 1 irrespective of the inclusion fraction. The hardening is followed by a softening stage related to the developing anisotropy and its progressive rotation toward the shear direction. The traction needed to maintain a constant shear rate exhibits a fivefold drop at gamma = 5 in the limiting case of an inviscid inclusion. Numerical simulations show that our analytical model provides a good approximation to the actual evolution of a two-phase inclusion-host composite. However, the inclusions develop complex sigmoidal shapes resulting in the formation of an S-C fabric. We attribute the observed drop in the effective normal viscosity to this structural development. We study the localization potential in a rock column bearing a varying fraction of inclusions. In the inviscid inclusion case, a strain jump from gamma = 3 to gamma = 100 is observed for a change of the inclusion fraction from 20% to 33%.
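The shape bookkeeping in such models can be illustrated with the simplest possible case, a passive (matrix-viscosity) elliptical marker in simple shear. The paper's inclusions are weak viscous inclusions, so their actual evolution differs; this sketch only shows how aspect ratio and long-axis orientation are extracted from a simple-shear deformation gradient.

```python
import numpy as np

for gamma in (0.5, 1.0, 2.0, 5.0):
    F = np.array([[1.0, gamma], [0.0, 1.0]])       # simple-shear deformation gradient
    B = F @ F.T                                    # left Cauchy-Green tensor of an initially circular marker
    evals, evecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    aspect = np.sqrt(evals[1] / evals[0])          # long axis / short axis
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1])) % 180.0  # long-axis angle from the shear plane
    print(f"gamma = {gamma:3.1f}  aspect ratio = {aspect:6.2f}  orientation = {angle:5.1f} deg")
```

As gamma grows, the marker elongates and its long axis rotates toward the shear direction, the same qualitative trend invoked above to explain the softening stage.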
Abstract:
Recent ink dating methods have focused mainly on changes in solvent amounts occurring over time. A promising method was developed at the Landeskriminalamt of Munich using thermal desorption (TD) followed by gas chromatography/mass spectrometry (GC/MS) analysis. Sequential extractions of the phenoxyethanol present in ballpoint pen ink entries were carried out at two different temperatures. This method is applied in forensic practice and is currently implemented in several laboratories participating in the InCID group (International Collaboration on Ink Dating). However, harmonization of the method between the laboratories proved to be a particularly sensitive and time-consuming task. The main aim of this work was therefore to implement the TD-GC/MS method at the Bundeskriminalamt (Wiesbaden, Germany) in order to evaluate whether results were comparable to those obtained in Munich. First, validation criteria such as limits of reliable measurement, linearity and repeatability were determined. Samples were prepared in three different laboratories using the same inks and analyzed using two TDS-GC/MS instruments (one in Munich and one in Wiesbaden). The inter- and intra-laboratory variability of the ageing parameter was determined and ageing curves were compared. While inks stored in similar conditions yielded comparable ageing curves, it was observed that significantly different storage conditions had an influence on the resulting ageing curves. Finally, interpretation models, such as thresholds and trend tests, were evaluated and discussed in view of the obtained results. Trend tests were considered more suitable than threshold models. As both approaches showed limitations, an alternative model, based on the slopes of the ageing curves, was also proposed.
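A heavily hedged sketch of the kind of quantities involved: an ageing parameter defined here as the ratio of the phenoxyethanol released in the weak (low-temperature) desorption step to the total released in both steps, followed by a simple slope estimate over the sampling dates. Both this definition and the numbers are illustrative assumptions, not the validated TD-GC/MS procedure itself.

```python
import numpy as np

days = np.array([0, 7, 30, 90, 180])                 # hypothetical ages of the ink entries
pe_low = np.array([820., 760., 640., 500., 430.])    # peak areas, low-temperature desorption step
pe_high = np.array([300., 330., 380., 420., 440.])   # peak areas, high-temperature desorption step

ageing = pe_low / (pe_low + pe_high)                  # assumed ageing parameter
slope = np.polyfit(days, ageing, 1)[0]                # slope-based trend over the ageing curve
print("ageing parameter:", ageing.round(3))
print(f"slope: {slope:.2e} per day ({'decreasing trend' if slope < 0 else 'no decreasing trend'})")
```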
Abstract:
Signal transduction systems mediate the response and adaptation of organisms to environmental changes. In prokaryotes, this signal transduction is often carried out through Two Component Systems (TCS). These TCS are phosphotransfer protein cascades, and in their prototypical form they are composed of a sensor kinase (SK) that senses the environmental signals and a response regulator (RR) that regulates the cellular response. This basic motif can be modified by the addition of a third protein that interacts either with the SK or the RR in a way that could change the dynamic response of the TCS module. In this work we aim at understanding the effect of such an additional protein (which we call "third component") on the functional properties of a prototypical TCS. To do so we build mathematical models of TCS with alternative designs for their interaction with that third component. These mathematical models are analyzed in order to identify the differences in dynamic behavior inherent to each design, with respect to functionally relevant properties such as sensitivity to changes in either the parameter values or the molecular concentrations, temporal responsiveness, possibility of multiple steady states, or stochastic fluctuations in the system. The differences are then correlated with the physiological requirements that impinge on the functioning of the TCS. This analysis sheds light on both the dynamic behavior of synthetically designed TCS and the conditions under which natural selection might favor each of the designs. We find that a third component that modulates SK activity increases the parameter space where a bistable response of the TCS module to signals is possible, if SK is monofunctional, but decreases it when the SK is bifunctional. The presence of a third component that modulates RR activity decreases the parameter space where a bistable response of the TCS module to signals is possible.
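A minimal ODE sketch of a prototypical TCS (phosphorylated sensor kinase SKp and phosphorylated response regulator RRp), with the third component represented simply as a factor scaling SK autophosphorylation. The rate constants, concentrations and the specific form of the modulation are illustrative assumptions, not the models analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

SK_tot, RR_tot = 1.0, 5.0               # total protein concentrations (arbitrary units)
k_auto, k_transfer, k_dephos = 1.0, 5.0, 0.5

def tcs(t, y, signal, third_component):
    SKp, RRp = y
    auto = k_auto * signal * third_component * (SK_tot - SKp)   # SK autophosphorylation
    transfer = k_transfer * SKp * (RR_tot - RRp)                # phosphotransfer SKp -> RR
    dephos = k_dephos * RRp                                     # RRp dephosphorylation
    return [auto - transfer, transfer - dephos]

for third in (0.2, 1.0, 5.0):
    sol = solve_ivp(tcs, (0.0, 50.0), [0.0, 0.0], args=(1.0, third))
    print(f"third-component factor {third:3.1f}: late-time RRp ~ {sol.y[1, -1]:.2f}")
```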
Abstract:
A one-parameter class of simple models of two-dimensional dilaton gravity, which can be exactly solved including back-reaction effects, is investigated at both classical and quantum levels. This family contains the RST model as a special case, and it continuously interpolates between models having a flat (Rindler) geometry and a constant curvature metric with a nontrivial dilaton field. The processes of formation of black hole singularities from collapsing matter and Hawking evaporation are considered in detail. Various physical aspects of these geometries are discussed, including the cosmological interpretation.
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
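A toy sketch of the role of model-structure constraints in pixel-based MCMC: a 1-D model sampled with a random-walk Metropolis algorithm, a Gaussian data likelihood and a first-difference smoothness term whose weight plays the role of the regularization. The linear forward operator, all dimensions and all hyper-parameters are illustrative assumptions, not the plane-wave EM problem of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
true_model = np.where((np.arange(n) > 15) & (np.arange(n) < 25), 1.0, 0.0)

# Simple linear "forward model": a Gaussian smoother standing in for the physical response.
G = np.array([[np.exp(-((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
G /= G.sum(axis=1, keepdims=True)
data = G @ true_model + 0.02 * rng.standard_normal(n)

def log_post(m, sigma=0.02, reg_weight=50.0):
    misfit = -0.5 * np.sum(((G @ m - data) / sigma) ** 2)   # Gaussian likelihood
    smooth = -0.5 * reg_weight * np.sum(np.diff(m) ** 2)    # model-structure (smoothness) constraint
    return misfit + smooth

m = np.zeros(n)
lp = log_post(m)
samples = []
for it in range(20000):
    prop = m.copy()
    prop[rng.integers(n)] += 0.1 * rng.standard_normal()    # perturb one pixel at a time
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:                 # Metropolis acceptance
        m, lp = prop, lp_prop
    if it > 10000 and it % 100 == 0:
        samples.append(m.copy())

post_mean = np.mean(samples, axis=0)
print("posterior mean of the centre pixels:", post_mean[18:22].round(2))
```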
Abstract:
To obtain the desirable accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematic model. In other words, the manufacturing and assembling tolerances of every part would be extremely tight so that all of the various parameters would match the “design” or “nominal” values as closely as possible. This method can satisfy most of the accuracy requirements, but the cost would increase dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembling tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge workspace and the 6-DOF hexapod manipulator is used to provide high load capabilities and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponential (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential-evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate the real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
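The identification step can be illustrated on a toy problem: a planar two-link arm whose link-length errors are recovered from simulated end-effector measurements with SciPy's differential-evolution optimizer. The real work uses full DH and POE error models of a 10-DOF hybrid robot; the toy geometry, noise level and bounds here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
nominal = np.array([0.50, 0.40])              # nominal link lengths (m)
true_err = np.array([0.004, -0.003])          # "manufacturing" errors to be identified

def forward(lengths, q):
    """End-effector position of a planar 2R arm for joint angles q = (q1, q2)."""
    x = lengths[0] * np.cos(q[:, 0]) + lengths[1] * np.cos(q[:, 0] + q[:, 1])
    y = lengths[0] * np.sin(q[:, 0]) + lengths[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

q_meas = rng.uniform(-np.pi, np.pi, size=(30, 2))                # measurement configurations
measured = forward(nominal + true_err, q_meas) + 1e-5 * rng.standard_normal((30, 2))

def cost(err):
    resid = forward(nominal + err, q_meas) - measured
    return np.sum(resid ** 2)                                     # sum of squared pose residuals

result = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=0, tol=1e-12)
print("identified errors (mm):", (1000 * result.x).round(2), " true (mm):", 1000 * true_err)
```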
Abstract:
This study aimed to evaluate the interference of the tuberculin test on the gamma-interferon (INFg) assay, to estimate the sensitivity and specificity of the INFg assay under Brazilian conditions, and to simulate multiple testing using the comparative tuberculin test and the INFg assay. Three hundred and fifty cattle from two TB-free and two TB-infected herds were subjected to the comparative tuberculin test and the INFg assay. The comparative tuberculin test was performed using avian and bovine PPD. The INFg assay was performed with the Bovigam™ kit (CSL Veterinary, Australia), according to the manufacturer's specifications. Sensitivity and specificity of the INFg assay were assessed by a Bayesian latent class model. These diagnostic parameters were also estimated for multiple testing. The results of the INFg assay on D0 and D3 after the comparative tuberculin test were compared by McNemar's test and kappa statistics. Results of mean optical density from the INFg assay on both days were similar. Sensitivity and specificity of the INFg assay ranged (95% confidence intervals) from 72 to 100% and from 74 to 100%, respectively. Sensitivity of parallel testing was over 97.5%, while specificity of serial testing was over 99.7%. The INFg assay proved to be a very useful diagnostic method.
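The parallel and serial figures quoted above follow from the standard combination rules for two conditionally independent tests; a quick check with illustrative single-test values in the reported ranges (the study's actual estimates came from the Bayesian latent class model):

```python
# Hypothetical single-test values chosen for illustration only.
se_tst, sp_tst = 0.85, 0.95          # comparative tuberculin test
se_ifn, sp_ifn = 0.85, 0.90          # INFg assay

se_parallel = 1 - (1 - se_tst) * (1 - se_ifn)   # positive if either test is positive
sp_parallel = sp_tst * sp_ifn
se_serial = se_tst * se_ifn                      # positive only if both tests are positive
sp_serial = 1 - (1 - sp_tst) * (1 - sp_ifn)

print(f"parallel: Se = {se_parallel:.3f}, Sp = {sp_parallel:.3f}")
print(f"serial:   Se = {se_serial:.3f}, Sp = {sp_serial:.3f}")
```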