965 results for Exact computation
Abstract:
Human N-acetyltransferase 1 (NAT1) is a widely distributed enzyme that catalyses the acetylation of arylamine and hydrazine drugs as well as several known carcinogens, and so its levels in the body may have toxicological importance with regard to drug toxicity and cancer risk. Recently, we showed that p-aminobenzoic acid (PABA) was able to down-regulate human NAT1 in cultured cells, but the exact mechanism by which PABA acts remains unclear. In the present study, we investigated the possibility that PABA-induced down-regulation involves its metabolism to N-OH-PABA, since N-OH-AAF functions as an irreversible inhibitor of hamster and rat NAT1. We show here that N-OH-PABA irreversibly inactivates human NAT1 both in cultured cells and cell cytosols in a time- and concentration-dependent manner. Maximal inactivation in cultured cells occurred within 4 hr of treatment, with a concentration of 30 µM reducing activity by 60 +/- 7%. Dialysis studies showed that inactivation was irreversible, and cofactor (acetyl coenzyme A) but not substrate (PABA) completely protected against inactivation, indicating involvement of the cofactor-binding site. In agreement with these data, kinetic studies revealed a 4-fold increase in cofactor Km, but no change in substrate Km for N-OH-PABA-treated cytosols compared to control. We conclude that N-OH-PABA decreases NAT1 activity by a direct interaction with the enzyme, apparently as a result of covalent modification at the cofactor-binding site. This is in contrast to our findings for PABA, which appears to reduce NAT1 activity by down-regulating the enzyme, leading to a decrease in NAT1 protein content. BIOCHEM PHARMACOL 60;12: 1829-1836, 2000. (C) 2000 Elsevier Science Inc.
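The reported Km effect can be illustrated with simple Michaelis-Menten arithmetic. This is a minimal sketch: the Vmax and control-Km values below are illustrative placeholders, not the paper's measurements; only the 4-fold Km ratio comes from the abstract.

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Illustrative values (NOT from the paper): Vmax = 100, control Km = 50 uM;
# only the 4-fold Km increase is taken from the abstract.
vmax, km_control = 100.0, 50.0
km_treated = 4.0 * km_control   # cofactor Km after N-OH-PABA treatment

s = 50.0  # a subsaturating cofactor concentration, same units as Km
v_control = mm_rate(s, vmax, km_control)
v_treated = mm_rate(s, vmax, km_treated)
```

With these illustrative numbers the rate falls from 50 to 20 at subsaturating cofactor purely from the Km shift, with Vmax unchanged, which is how covalent modification at the cofactor-binding site shows up kinetically.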
Abstract:
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks is usually based on damped oscillation around fixed points in state space and requires that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics with similar constraints hold for a^n b^n c^n, a context-sensitive language. The additional difficulty with a^n b^n c^n, compared with the context-free language a^n b^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and the use of backpropagation through time. Found solutions generalize well beyond training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computation resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
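The "count up while counting down" idea can be sketched with a hand-coded two-counter dynamical recognizer. This is an illustrative toy in the spirit of the described solution (two state dimensions, one contracting while the other expands), not the trained sequential cascaded network itself.

```python
def accepts(s, k=0.5):
    """Hand-coded two-dimensional counting dynamics for a^n b^n c^n:
    x1 counts the a's against the b's, x2 counts the b's against the c's.
    Not a trained RNN; an illustration of the two-counter mechanism."""
    x1 = x2 = 1.0
    phase = 0  # enforce a* b* c* ordering
    for ch in s:
        if ch == 'a' and phase == 0:
            x1 *= k          # 'count up' the a's by contracting x1
        elif ch == 'b' and phase <= 1:
            phase = 1
            x1 /= k          # 'count down' against the a's...
            x2 *= k          # ...while simultaneously counting up the b's
        elif ch == 'c' and phase >= 1:
            phase = 2
            x2 /= k          # count down against the b's
        else:
            return False     # symbol out of order
    # accepted iff both counters returned to their starting point
    return abs(x1 - 1.0) < 1e-9 and abs(x2 - 1.0) < 1e-9
```

Accepting requires both dimensions to return exactly to their fixed point, which is why the simultaneous up/down counting is the hard part relative to a^n b^n.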
Abstract:
In order to use the finite element method for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins effectively and efficiently, we have presented, in this paper, new concepts and numerical algorithms to deal with the fundamental issues associated with the fluid-rock interaction problems. These fundamental issues are often overlooked by some purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock masses, it is necessary to develop the new concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely, the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously in computation. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, there is a definite need to appropriately consider the dependence of the dissolution rate of a dissolving mineral on its generalized concentration in the numerical analysis. (3) Considering the direct consequence of the porosity evolution with time in the transient analysis of fluid-rock interaction problems, we have proposed the term splitting algorithm and the concept of the equivalent source/sink terms in mass transport equations, so that the problem of variable mesh Peclet number and Courant number has been successfully converted into the problem of constant mesh Peclet and Courant numbers.
The numerical results from an application example have demonstrated the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins. (C) 2001 Elsevier Science B.V. All rights reserved.
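The mesh Peclet and Courant numbers at issue can be sketched as follows. All material values (Darcy flux, element size, dispersion coefficient, time step) are illustrative assumptions, used only to show why evolving porosity makes both numbers time-dependent unless the transport equations are recast as proposed.

```python
def mesh_peclet(v, h, D):
    """Mesh Peclet number Pe = v*h/D for pore velocity v, element size h,
    and dispersion/diffusion coefficient D."""
    return v * h / D

def courant_number(v, dt, h):
    """Courant number C = v*dt/h for time step dt."""
    return v * dt / h

# Illustrative values: fixed Darcy flux q, element size h, time step dt.
q, h, D, dt = 1.0e-6, 0.5, 1.0e-7, 1.0e4
history = []
for phi in (0.1, 0.2, 0.4):        # porosity evolving as minerals dissolve
    v = q / phi                     # pore velocity changes with porosity...
    history.append((phi, mesh_peclet(v, h, D), courant_number(v, dt, h)))
    # ...so both mesh numbers vary in time on a fixed mesh and time step.
```

On a fixed mesh, any porosity change therefore shifts Pe and C, which is the stability/accuracy problem the equivalent source/sink reformulation removes.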
Abstract:
This work addresses the question of whether it is possible to define simple pairwise interaction terms to approximate free energies of proteins or polymers. Rather than ask how reliable a potential of mean force is, one can ask how reliable it could possibly be. In a two-dimensional, infinite lattice model system one can calculate exact free energies by exhaustive enumeration. A series of approximations were fitted to exact results to assess the feasibility and utility of pairwise free energy terms. Approximating the true free energy with pairwise interactions gives a poor fit with little transferability between systems of different size. Adding extra artificial terms to the approximation yields better fits, but does not improve the ability to generalize from one system size to another. Furthermore, one cannot distinguish folding from nonfolding sequences via the approximated free energies. Most usefully, the methodology shows how one can assess the utility of various terms in lattice protein/polymer models. (C) 2001 American Institute of Physics.
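Exhaustive enumeration on a small two-dimensional lattice can be sketched as below. This is a minimal homopolymer contact model with an assumed contact energy, not the paper's exact system, but it shows how exact free energies follow from complete enumeration of conformations.

```python
from math import exp, log

def enumerate_walks(n):
    """All self-avoiding n-step walks on the square lattice, from the origin."""
    walks = []
    def grow(path):
        if len(path) == n + 1:
            walks.append(list(path))
            return
        x, y = path[-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in path:          # self-avoidance
                path.append(nxt)
                grow(path)
                path.pop()
    grow([(0, 0)])
    return walks

def contacts(walk):
    """Non-bonded nearest-neighbour contacts, each pair counted once."""
    c = 0
    occupied = set(walk)
    for i, (x, y) in enumerate(walk):
        for dx, dy in ((1, 0), (0, 1)):  # scan +x/+y only: one count per pair
            j = (x + dx, y + dy)
            if j in occupied and abs(walk.index(j) - i) > 1:
                c += 1
    return c

def free_energy(n, eps=-1.0, T=1.0):
    """Exact F = -T ln Z by complete enumeration; eps is the (assumed)
    energy per contact."""
    Z = sum(exp(-eps * contacts(w) / T) for w in enumerate_walks(n))
    return -T * log(Z)
```

Against exact values like these, one can fit pairwise approximations and test their transferability across chain lengths, which is the methodology the abstract describes.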
Abstract:
Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body together with a specific relationship for describing the cross-sectional geometry leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
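Given the reported sensitivity of the surge velocity to the hydraulic radius, computing it for an irregular, surveyed cross section is worth sketching. The function below is an assumed, simplified treatment (a bank segment counts fully toward the wetted perimeter if either endpoint is wet), not the paper's cross-sectional relationship.

```python
from math import hypot

def hydraulic_radius(ys, zs, stage):
    """Hydraulic radius R = wetted area / wetted perimeter for a surveyed
    cross section: ys are lateral offsets, zs bed elevations, `stage` the
    water-surface elevation. Simplification: a segment contributes its full
    length to the wetted perimeter if either endpoint is below the stage."""
    depths = [max(stage - z, 0.0) for z in zs]
    # wetted area by the trapezoidal rule over clipped depths
    area = sum((depths[i] + depths[i + 1]) / 2.0 * (ys[i + 1] - ys[i])
               for i in range(len(ys) - 1))
    # wetted perimeter over segments touching the water
    perimeter = sum(hypot(ys[i + 1] - ys[i], zs[i + 1] - zs[i])
                    for i in range(len(ys) - 1)
                    if depths[i] > 0.0 or depths[i + 1] > 0.0)
    return area / perimeter
```

For a 10 m wide rectangular section with 1 m walls filled to the brim, this gives R = 10/12 m; refining the survey point density is what controls how faithfully an irregular section, and hence the surge celerity, is represented.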
Abstract:
It has recently been stated that the parametrization of the time variables in the one-dimensional (1-D) mixing-frequency electron spin-echo envelope modulation (MIF-ESEEM) experiment is incorrect and hence the wrong frequencies for correlated nuclear transitions are predicted. This paper is a direct response to such a claim, its purpose being to show that the parametrization in 1- and 2-D MIF-ESEEM experiments possesses the same form as that used in other 4-pulse incrementation schemes and predicts the same correlation frequencies. We show that the parametrization represents a shearing transformation of the 2-D time domain and relate the resulting frequency-domain spectrum to the HYSCORE spectrum in terms of a skew projection. It is emphasized that the parametrization of the time-domain variables may be chosen arbitrarily and affects neither the computation of the correct nuclear frequencies nor the resulting resolution. The usefulness or otherwise of the MIF parameters |gamma| > 1 is addressed, together with the validity of the original claims of the authors with respect to resolution enhancement in cases of purely homogeneous and inhomogeneous broadening. Numerical simulations are provided to illustrate the main points.
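The shear/skew-projection relationship can be sketched in a few lines of linear algebra; the shear parameter and frequencies below are arbitrary illustrative values, not those of any actual pulse sequence. A phase exp(i ω·t) sampled on a sheared time grid t = A t' acquires apparent frequency A^T ω, and the inverse-transpose skew projection maps it back; since det A = 1, the transformation costs no resolution.

```python
import numpy as np

gamma = 2.0
A = np.array([[1.0, gamma],
              [0.0, 1.0]])               # shear of the 2-D time grid: t = A @ t'

omega = np.array([3.0, 5.0])             # a pair of correlated nuclear frequencies
apparent = A.T @ omega                   # peak position under the sheared sampling
recovered = np.linalg.inv(A).T @ apparent  # skew projection back to HYSCORE coords
```

The round trip returns the original frequencies exactly, illustrating why the choice of time-variable parametrization cannot change the computed correlation frequencies.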
Abstract:
In order to investigate the effect of material anisotropy on convective instability of three-dimensional fluid-saturated faults, an exact analytical solution for the critical Rayleigh number of three-dimensional convective flow has been obtained. Using this critical Rayleigh number, effects of different permeability ratios and thermal conductivity ratios on convective instability of a vertically oriented three-dimensional fault have been examined in detail. It has been recognized that (1) if the fault material is isotropic in the horizontal direction, the horizontal to vertical permeability ratio has a significant effect on the critical Rayleigh number of the three-dimensional fault system, but the horizontal to vertical thermal conductivity ratio has little influence on the convective instability of the system, and (2) if the fault material is isotropic in the fault plane, the thermal conductivity ratio of the fault normal to plane has a considerable effect on the critical Rayleigh number of the three-dimensional fault system, but the effect of the permeability ratio of the fault normal to plane on the critical Rayleigh number of three-dimensional convective flow is negligible.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber, Phys. Rev. Lett. 86, 4402 (2001)].
Abstract:
The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large deformation processes including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector - the director - of these surfaces. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point Method (Moresi et al. [8]) and an Eulerian formulation, which we implemented into the finite element based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was firstly to study the influence of an anisotropic power-law rheology, which currently is not implemented in the Lagrangian Integration Point scheme [8], and secondly to study the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that, whereas in the Lagrangian method the Nusselt number versus time plot reached only a quasi-steady state in which the Nusselt number oscillates around a steady-state value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest in terms of modulus contrast in the up- and down-welling plumes. Mechanisms for anisotropic material behavior in the mantle dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they still may be oriented by the convective flow.
Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.
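The reported degree of alignment (director locally orthogonal to the velocity) suggests a simple diagnostic. The metric below is an assumed illustration of such a measure, not the one used in the paper.

```python
import numpy as np

def alignment(directors, velocities):
    """Mean |n_hat . v_hat| over sample points: 0 means the director is
    locally orthogonal to the flow everywhere (the aligned state reported
    for the Eulerian scheme); 1 means directors parallel to the flow.
    Inputs: arrays of shape (N, dim), one director/velocity per point."""
    n = directors / np.linalg.norm(directors, axis=1, keepdims=True)
    v = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return float(np.mean(np.abs(np.sum(n * v, axis=1))))
```

Evaluated over a computational domain, a value near zero would quantify the "high degree of alignment" the Eulerian runs exhibit.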
Abstract:
We are currently in the midst of a second quantum revolution. The first quantum revolution gave us new rules that govern physical reality. The second quantum revolution will take these rules and use them to develop new technologies. In this review we discuss the principles upon which quantum technology is based and the tools required to develop it. We discuss a number of examples of research programs that could deliver quantum technologies in coming decades including: quantum information technology, quantum electromechanical systems, coherent quantum electronics, quantum optics and coherent matter technology.
Abstract:
We conduct a theoretical analysis to investigate the convective instability of 3-D fluid-saturated geological fault zones when they are heated uniformly from below. In particular, we have derived exact analytical solutions for the critical Rayleigh numbers of different convective flow structures. Using these critical Rayleigh numbers, three interesting convective flow structures have been identified in a geological fault zone system. It has been recognized that the critical Rayleigh numbers of the system have a minimum value only for the fault zone of infinite length, in which the corresponding convective flow structure is a 2-D slender-circle flow. However, if the length of the fault zone is finite, the convective flow in the system must be 3-D. Even if the length of the fault zone is infinite, since the minimum critical Rayleigh number for the 2-D slender-circle flow structure is so close to that for the 3-D convective flow structure, the system may have almost the same chance to pick up the 3-D convective flow structures. Also, because the convection modes are so close for the 3-D convective flow structures, the convective flow may evolve into the 3-D finger-like structures, especially for the case of the fault thickness to height ratio approaching zero. This understanding demonstrates the beautiful aspects of the present analytical solution for the convective instability of 3-D geological fault zones, because the present analytical solution is valid for any value of the ratio of the fault height to thickness. Using the present analytical solution, the conditions, under which different convective flow structures may take place, can be easily determined.
Abstract:
Exact analytical solutions of the critical Rayleigh numbers have been obtained for a hydrothermal system consisting of a horizontal porous layer with temperature-dependent viscosity. The boundary conditions considered are constant temperature and zero vertical Darcy velocity at both the top and bottom of the layer. Not only can the derived analytical solutions be readily used to examine the effect of the temperature-dependent viscosity on the temperature-gradient driven convective flow, but they can also be used to validate numerical methods such as the finite-element method and finite-difference method for dealing with the same kind of problem. The related analytical and numerical results demonstrated that the temperature-dependent viscosity destabilizes the temperature-gradient driven convective flow and, therefore, may affect the ore body formation and mineralization in the upper crust of the Earth. Copyright (C) 2003 John Wiley & Sons, Ltd.
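For orientation, the constant-viscosity benchmark can be sketched as follows. The formula is the standard porous-medium Rayleigh number and 4π² is the classical Lapwood threshold for exactly these boundary conditions (isothermal, impermeable top and bottom); all material values are illustrative assumptions, and the paper's temperature-dependent-viscosity thresholds differ (they lie lower, i.e. the flow is destabilized).

```python
import math

def rayleigh_porous(rho, g, beta, dT, k, H, mu, kappa):
    """Porous-medium Rayleigh number Ra = rho*g*beta*dT*k*H / (mu*kappa):
    fluid density, gravity, thermal expansivity, temperature difference,
    permeability, layer thickness, viscosity, thermal diffusivity."""
    return rho * g * beta * dT * k * H / (mu * kappa)

RA_CRIT = 4.0 * math.pi ** 2   # ~39.48, classical constant-viscosity threshold

# Illustrative upper-crust values (assumptions, not from the paper):
Ra = rayleigh_porous(rho=1000.0, g=9.8, beta=2e-4, dT=100.0,
                     k=1e-14, H=1000.0, mu=1e-3, kappa=1e-6)
convects = Ra > RA_CRIT
```

Comparing Ra against the analytically derived critical value is precisely how such solutions are used to validate finite-element and finite-difference results.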
Abstract:
We describe a method by which the decoherence time of a solid-state qubit may be measured. The qubit is coded in the orbital degree of freedom of a single electron bound to a pair of donor impurities in a semiconductor host. The qubit is manipulated by adiabatically varying an external electric field. We show that by measuring the total probability of a successful qubit rotation as a function of the control field parameters, the decoherence rate may be determined. We estimate various system parameters, including the decoherence rates due to electromagnetic fluctuations and acoustic phonons. We find that, for reasonable physical parameters, the experiment is possible with existing technology. In particular, the use of adiabatic control fields implies that the experiment can be performed with control electronics with a time resolution of tens of nanoseconds.
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. This study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range ]-1, 1[. The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the lower mean squared error, depends on the stationary/non-stationary and persistency/anti-persistency conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
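A minimal version of one such Monte Carlo cell can be sketched: simulate an ARFIMA(0, d, 0) series and recover d with a log-periodogram (Geweke-Porter-Hudak) regression, one of the semiparametric estimators in this literature. The truncation, burn-in, and bandwidth choices below are illustrative assumptions.

```python
import numpy as np

def simulate_arfima(n, d, burn=500, seed=0):
    """ARFIMA(0, d, 0) sample path via a truncated MA(inf) expansion:
    psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    rng = np.random.default_rng(seed)
    m = n + burn
    psi = np.empty(m)
    psi[0] = 1.0
    for j in range(1, m):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    e = rng.standard_normal(m)
    return np.convolve(e, psi)[:m][burn:]   # drop the burn-in segment

def gph_estimate(x, m=None):
    """GPH estimator: regress the log periodogram at the first m Fourier
    frequencies on log(4 sin^2(lambda/2)); the slope estimates -d."""
    n = len(x)
    m = m or int(n ** 0.5)                  # conventional bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope
```

Repeating this over many seeds and values of d, and tabulating mean squared errors per estimator and per region of d, is the shape of the experiment the abstract summarizes.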
Abstract:
We demonstrate complete characterization of a two-qubit entangling process-a linear optics controlled-NOT gate operating with coincident detection-by quantum process tomography. We use a maximum-likelihood estimation to convert the experimental data into a physical process matrix. The process matrix allows an accurate prediction of the operation of the gate for arbitrary input states and a calculation of gate performance measures such as the average gate fidelity, average purity, and entangling capability of our gate, which are 0.90, 0.83, and 0.73, respectively.
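The average gate fidelity quoted above relates to the process fidelity through the standard identity F_avg = (d·F_pro + 1)/(d + 1). The sketch below applies it to the simplest, unitary-only case (comparing a candidate unitary to the ideal CNOT); the experiment's reconstructed process matrix also captures non-unitary noise, which this sketch does not model.

```python
import numpy as np

def avg_gate_fidelity(U, V):
    """Average gate fidelity between two unitaries on a d-dimensional space,
    via the process fidelity F_pro = |Tr(U^dag V)|^2 / d^2 and the identity
    F_avg = (d * F_pro + 1) / (d + 1)."""
    d = U.shape[0]
    f_pro = abs(np.trace(U.conj().T @ V)) ** 2 / d ** 2
    return (d * f_pro + 1.0) / (d + 1.0)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
I4 = np.eye(4, dtype=complex)
```

A perfect gate gives F_avg = 1, while doing nothing at all (identity in place of CNOT, d = 4) gives F_avg = 0.4, which sets the scale against which the measured 0.90 should be read.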