968 results for Computation laboratories


Relevance: 10.00%

Abstract:

This Microreview seeks to highlight the molecular diversity present in marine organisms and to illustrate by example some of the challenges encountered in exploring this resource. Marine natural products exhibit an impressive array of structural motifs, many of which are derived from biosynthetic pathways that are uniquely marine. Most importantly, some marine metabolites possess noteworthy biological activities with potential application outside marine ecosystems, for example as antibiotics, antiparasitics and anticancer agents. The isolation, spectroscopic characterisation and assignment of stereostructures to these unusual metabolites are both challenging and rewarding. The examples featured in this Microreview follow a common theme: all are recent accounts of the isolation of natural products from Australian marine sponges, carried out in the laboratories of the author. In addition to brief comments on specific structure elucidation strategies, an effort is made to emphasize techniques for solving stereochemical issues and to speculate on the biosynthetic origins of some of these exotic marine natural products.

Relevance: 10.00%

Abstract:

In order to use the finite element method effectively and efficiently for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins, we present in this paper new concepts and numerical algorithms that deal with the fundamental issues associated with such problems, issues that are often overlooked by purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock mass, it is necessary to develop the new concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously in computation. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, the dependence of the dissolution rate of a dissolving mineral on its generalized concentration must be considered appropriately in the numerical analysis. (3) Since porosity evolves with time in the transient analysis of fluid-rock interaction problems, we propose a term splitting algorithm and the concept of equivalent source/sink terms in the mass transport equations, which converts the problem of variable mesh Peclet and Courant numbers into one of constant mesh Peclet and Courant numbers.
The numerical results from an application example have demonstrated the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins. (C) 2001 Elsevier Science B.V. All rights reserved.
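A minimal numeric sketch of the quantities behind point (3), using the textbook definitions of the mesh (grid) Peclet and Courant numbers; the symbols u, D, dx, dt and the sample values are illustrative assumptions, not taken from the paper:

```python
# Sketch: mesh Peclet and Courant numbers for an advective-diffusive
# transport step. All symbols and values are illustrative placeholders.

def mesh_peclet(u, dx, D):
    """Mesh Peclet number: advective vs. diffusive transport over one
    element of size dx (textbook definition u*dx/D)."""
    return u * dx / D

def mesh_courant(u, dt, dx):
    """Mesh Courant number: fraction of an element an advected particle
    crosses in one time step (textbook definition u*dt/dx)."""
    return u * dt / dx

# As porosity evolves, the pore-fluid velocity u changes; with fixed dx and
# dt both numbers would then vary between time steps. Rescaling dt with u
# keeps the Courant number constant, which is the practical effect of
# recasting the variable-coefficient problem as a constant-coefficient one.
u, D, dx = 2.0, 0.5, 0.1
dt = 0.4 * dx / u          # choose dt so the Courant number stays at 0.4
print(mesh_peclet(u, dx, D), mesh_courant(u, dt, dx))
```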

Relevance: 10.00%

Abstract:

Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, which underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution for generating initial values and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.

Relevance: 10.00%

Abstract:

It has recently been stated that the parametrization of the time variables in the one-dimensional (1-D) mixing-frequency electron spin-echo envelope modulation (MIF-ESEEM) experiment is incorrect and hence predicts the wrong frequencies for correlated nuclear transitions. This paper is a direct response to that claim; its purpose is to show that the parametrization in 1- and 2-D MIF-ESEEM experiments possesses the same form as that used in other 4-pulse incrementation schemes and predicts the same correlation frequencies. We show that the parametrization represents a shearing transformation of the 2-D time domain and relate the resulting frequency-domain spectrum to the HYSCORE spectrum in terms of a skew projection. It is emphasized that the parametrization of the time-domain variables may be chosen arbitrarily and affects neither the computation of the correct nuclear frequencies nor the resulting resolution. The usefulness or otherwise of MIF parameters |gamma| > 1 is addressed, together with the validity of the original authors' claims with respect to resolution enhancement in cases of purely homogeneous and inhomogeneous broadening. Numerical simulations are provided to illustrate the main points.
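The shearing argument can be illustrated with a minimal sketch, assuming a generic linear phase with placeholder symbols f1, f2 and shear parameter k (none of which are the paper's notation): re-parametrizing the time variables by a shear leaves the underlying frequencies intact and merely skews where a peak appears.

```python
# Illustrative sketch: a shear of the time variables, t1 = tau1 and
# t2 = tau2 + k*tau1, rewrites the phase f1*t1 + f2*t2 as
# (f1 + k*f2)*tau1 + f2*tau2, so a peak at (f1, f2) appears at
# (f1 + k*f2, f2): a skew of the frequency axes, not new frequencies.
# All symbols here are generic placeholders.

def phase(f1, f2, t1, t2):
    """Linear phase of a 2-D oscillation (factor of 2*pi omitted)."""
    return f1 * t1 + f2 * t2

def sheared_peak(f1, f2, k):
    """Frequency-domain image of a peak (f1, f2) under t2 -> t2 + k*t1."""
    return (f1 + k * f2, f2)

# Verify the identity
#   phase(f1, f2, tau1, tau2 + k*tau1) == phase(f1 + k*f2, f2, tau1, tau2)
# at a few sample points.
f1, f2, k = 3.0, 5.0, 2.0
g1, g2 = sheared_peak(f1, f2, k)
for tau1, tau2 in [(0.1, 0.2), (1.0, -0.5), (2.5, 0.0)]:
    assert abs(phase(f1, f2, tau1, tau2 + k * tau1) - phase(g1, g2, tau1, tau2)) < 1e-9
```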

Relevance: 10.00%

Abstract:

We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected spontaneous-emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].

Relevance: 10.00%

Abstract:

We are currently in the midst of a second quantum revolution. The first quantum revolution gave us new rules that govern physical reality. The second quantum revolution will take these rules and use them to develop new technologies. In this review we discuss the principles upon which quantum technology is based and the tools required to develop it. We discuss a number of examples of research programs that could deliver quantum technologies in the coming decades, including quantum information technology, quantum electromechanical systems, coherent quantum electronics, quantum optics, and coherent matter technology.

Relevance: 10.00%

Abstract:

We describe a method by which the decoherence time of a solid-state qubit may be measured. The qubit is coded in the orbital degree of freedom of a single electron bound to a pair of donor impurities in a semiconductor host. The qubit is manipulated by adiabatically varying an external electric field. We show that by measuring the total probability of a successful qubit rotation as a function of the control field parameters, the decoherence rate may be determined. We estimate various system parameters, including the decoherence rates due to electromagnetic fluctuations and acoustic phonons. We find that, for reasonable physical parameters, the experiment is possible with existing technology. In particular, the use of adiabatic control fields implies that the experiment can be performed with control electronics with a time resolution of tens of nanoseconds.

Relevance: 10.00%

Abstract:

This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range ]-1, 1[. The evaluated estimation methods were classified into two groups: heuristics and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the lower mean squared error, depends on the stationary/non-stationary and persistency/anti-persistency conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.

Relevance: 10.00%

Abstract:

We demonstrate complete characterization of a two-qubit entangling process, a linear optics controlled-NOT gate operating with coincident detection, by quantum process tomography. We use maximum-likelihood estimation to convert the experimental data into a physical process matrix. The process matrix allows an accurate prediction of the operation of the gate for arbitrary input states and a calculation of gate performance measures such as the average gate fidelity, average purity, and entangling capability of our gate, which are 0.90, 0.83, and 0.73, respectively.
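As a hedged aside, not a computation reported by the authors: the widely used relation F_avg = (d*F_pro + 1)/(d + 1) between average gate fidelity and process fidelity for a d-dimensional system lets one convert the quoted average fidelity into a process fidelity.

```python
# Back-of-envelope conversion using the standard Nielsen/Horodecki relation
# F_avg = (d*F_pro + 1)/(d + 1). This is a textbook identity, not a
# calculation taken from the paper.

def process_fidelity_from_average(f_avg, d):
    """Invert F_avg = (d*F_pro + 1)/(d + 1) for F_pro."""
    return (f_avg * (d + 1) - 1) / d

# Two qubits: d = 4. An average gate fidelity of 0.90 corresponds to a
# process fidelity of (0.90*5 - 1)/4 = 0.875 under this relation.
print(process_fidelity_from_average(0.90, 4))
```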

Relevance: 10.00%

Abstract:

We study the existence of asymptotically almost periodic classical solutions for a class of abstract neutral integro-differential equations with unbounded delay. A concrete application to partial neutral integro-differential equations arising in the study of heat conduction in fading-memory materials is considered. (C) 2011 Elsevier Inc. All rights reserved.

Relevance: 10.00%

Abstract:

We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost the probability of success.

Relevance: 10.00%

Abstract:

We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.

Relevance: 10.00%

Abstract:

We evaluated the associations between glycemic therapies and prevalence of diabetic peripheral neuropathy (DPN) at baseline among participants in the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial on medical and revascularization therapies for coronary artery disease (CAD) and on insulin-sensitizing vs. insulin-providing treatments for diabetes. A total of 2,368 patients with type 2 diabetes and CAD were evaluated. DPN was defined as a clinical examination score > 2 using the Michigan Neuropathy Screening Instrument (MNSI). DPN odds ratios across different groups of glycemic therapy were evaluated by multiple logistic regression adjusted for multiple covariates including age, sex, hemoglobin A1c (HbA1c), and diabetes duration. Fifty-one percent of BARI 2D subjects with valid baseline characteristics and MNSI scores had DPN. After adjusting for all variables, use of insulin was significantly associated with DPN (OR = 1.57, 95% CI: 1.15-2.13). Patients on sulfonylurea (SU) or a combination of SU/metformin (Met)/thiazolidinediones (TZD) had marginally higher rates of DPN than the Met/TZD group. This cross-sectional study in a cohort of patients with type 2 diabetes and CAD showed an association of insulin use with higher DPN prevalence, independent of disease duration, glycemic control, and other characteristics. The causality between a glycemic control strategy and DPN cannot be evaluated in this cross-sectional study, but continued assessment of DPN and randomized therapies in the BARI 2D trial may provide further explanations of the development of DPN.
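For orientation only, a sketch of how an unadjusted odds ratio and its 95% confidence interval are computed from a 2x2 table; the counts below are hypothetical, and the trial's OR of 1.57 came from a multivariable logistic model, not from a raw table like this.

```python
import math

# Hypothetical 2x2 exposure/outcome table (NOT trial data):
#   a = exposed cases, b = exposed non-cases,
#   c = unexposed cases, d = unexposed non-cases.

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio with a Wald 95% CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

print(odds_ratio_ci(300, 250, 400, 523))
```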

Relevance: 10.00%

Abstract:

Background: The accuracy of multidetector computed tomographic (CT) angiography involving 64 detectors has not been well established. Methods: We conducted a multicenter study to examine the accuracy of 64-row, 0.5-mm multidetector CT angiography as compared with conventional coronary angiography in patients with suspected coronary artery disease. Nine centers enrolled patients who underwent calcium scoring and multidetector CT angiography before conventional coronary angiography. In 291 patients with calcium scores of 600 or less, segments 1.5 mm or more in diameter were analyzed by means of CT and conventional angiography at independent core laboratories. Stenoses of 50% or more were considered obstructive. The area under the receiver-operating-characteristic curve (AUC) was used to evaluate diagnostic accuracy relative to that of conventional angiography and subsequent revascularization status, whereas disease severity was assessed with the use of the modified Duke Coronary Artery Disease Index. Results: A total of 56% of patients had obstructive coronary artery disease. The patient-based diagnostic accuracy of quantitative CT angiography for detecting or ruling out stenoses of 50% or more according to conventional angiography revealed an AUC of 0.93 (95% confidence interval [CI], 0.90 to 0.96), with a sensitivity of 85% (95% CI, 79 to 90), a specificity of 90% (95% CI, 83 to 94), a positive predictive value of 91% (95% CI, 86 to 95), and a negative predictive value of 83% (95% CI, 75 to 89). CT angiography was similar to conventional angiography in its ability to identify patients who subsequently underwent revascularization: the AUC was 0.84 (95% CI, 0.79 to 0.88) for multidetector CT angiography and 0.82 (95% CI, 0.77 to 0.86) for conventional angiography. A per-vessel analysis of 866 vessels yielded an AUC of 0.91 (95% CI, 0.88 to 0.93). Disease severity ascertained by CT and conventional angiography was well correlated (r=0.81; 95% CI, 0.76 to 0.84). 
Two patients had important reactions to contrast medium after CT angiography. Conclusions: Multidetector CT angiography accurately identifies the presence and severity of obstructive coronary artery disease and subsequent revascularization in symptomatic patients. The negative and positive predictive values indicate that multidetector CT angiography cannot replace conventional coronary angiography at present. (ClinicalTrials.gov number, NCT00738218.).
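As a reading aid, the reported patient-based accuracy figures follow the standard confusion-matrix definitions; this sketch uses hypothetical counts chosen only to land near the quoted rates, not the actual trial data.

```python
# Standard diagnostic-accuracy metrics from a 2x2 confusion matrix.
# The counts below are made up to roughly reproduce the reported
# sensitivity/specificity/PPV/NPV; they are not the trial's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=139, fp=13, fn=24, tn=115)
print({k: round(v, 2) for k, v in m.items()})
```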

Relevance: 10.00%

Abstract:

Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, which have object-level variables, and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
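For readers unfamiliar with the Prolog baseline the paper generalizes, here is a minimal first-order unification sketch (Robinson-style, with the occurs check omitted, as in ordinary Prolog). Qu-Prolog's actual algorithm additionally handles object-level variables, binders and substitution evaluation, none of which appear in this toy.

```python
# Toy first-order unification. Conventions (purely illustrative):
# variables are strings starting with an uppercase letter; constants are
# lowercase strings; compound terms are (functor, arg, ...) tuples.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Return a substitution unifying t1 and t2, or None on failure."""
    s = {} if s is None else s
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# f(X, g(a)) unified with f(b, g(Y)) gives {X: b, Y: a}.
print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
```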