962 results for "Two variable oregonator model"
Abstract:
High levels of inheritable resistance to phosphine in Rhyzopertha dominica have recently been detected in Australia and, in an effort to isolate the genes responsible for resistance, we have used random amplified DNA fingerprinting (RAF) to produce a genetic linkage map of R. dominica. The map consists of 94 dominant DNA markers with an average distance between markers of 4.6 cM and defines nine linkage groups with a total recombination distance of 390.1 cM. We have identified two loci that are responsible for high-level resistance. One provides approximately 50x resistance to phosphine while the other provides 12.5x resistance and, in combination, the two genes act synergistically to provide a resistance level 250x greater than that of fully susceptible beetles. The haploid genome size has been determined to be 4.76 x 10^8 bp, resulting in an average physical distance of 1.2 Mbp per map unit. No recombination has been observed between either of the two resistance loci and their adjacent DNA markers in a population of 44 fully resistant F5 individuals, which indicates that the genes are likely to reside within 0.91 cM (1.1 Mbp) of the DNA markers.
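The physical-distance figure quoted above follows directly from the genome size and the total map length; a quick arithmetic check of the abstract's numbers:

```python
# Sanity check of the physical distance per map unit quoted in the abstract.
genome_bp = 4.76e8        # haploid genome size, bp
map_length_cM = 390.1     # total recombination distance, cM

mbp_per_cM = genome_bp / map_length_cM / 1e6
print(round(mbp_per_cM, 1))   # 1.2 Mbp per map unit, as stated
```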
Abstract:
In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small sample study for one of the test statistics through a simulated example. (C) 2002 Elsevier Science (USA).
Abstract:
The performance of the Oxford University Gun Tunnel has been estimated using a quasi-one-dimensional simulation of the facility gas dynamics. The modelling of the actual facility area variations so as to adequately simulate both shock reflection and flow discharge processes has been considered in some detail. Test gas stagnation pressure and temperature histories are compared with measurements at two different operating conditions - one with nitrogen and the other with carbon dioxide as the test gas. It is demonstrated that both the simulated pressures and temperatures are typically within 3% of the experimental measurements.
Abstract:
The conventional convection-dispersion model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. The extension of this model to include nonlinear kinetics and zonal heterogeneity of the liver is not straightforward and requires numerical solution of partial differential equations, which is not available in standard nonlinear regression analysis software. In this paper, we describe an alternative compartmental model representation of hepatic disposition (including elimination). The model allows the use of standard software for data analysis and accurately describes the outflow concentration-time profile for a vascular marker after bolus injection into the liver. In an evaluation of a number of different compartmental models, the most accurate model required eight vascular compartments, two of them with back mixing. In addition, the model includes two adjacent secondary vascular compartments to describe the tail section of the concentration-time profile for a reference marker. The model has the added flexibility of being easy to modify to model various enzyme distributions and nonlinear elimination. Model predictions of F, MTT, CV^2, and concentration-time profile as well as parameter estimates for experimental data of an eliminated solute (palmitate) are comparable to those for the extended convection-dispersion model.
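The compartments-in-series idea can be sketched numerically. The following is an illustrative forward-Euler simulation of a unit bolus transiting a plain chain of well-mixed compartments; it is not the authors' eight-compartment model with back mixing and secondary compartments, and the flow and volume values are assumed for illustration only:

```python
# Illustrative sketch: a series chain of well-mixed vascular compartments
# (no back mixing), integrated with forward Euler. Parameters are assumed.
N = 8              # compartments in series
Q = 1.0            # flow, mL/s (assumed)
V = 0.5            # volume per compartment, mL (assumed)
k = Q / V          # inter-compartment transfer rate constant, 1/s
dt = 0.001         # Euler time step, s

amounts = [1.0] + [0.0] * (N - 1)   # unit bolus into the first compartment
outflow = []                        # outflow rate history (conc. is rate / Q)
for _ in range(20000):              # simulate 20 s
    flux = [k * a for a in amounts]
    for i in range(N):
        inflow = flux[i - 1] if i > 0 else 0.0
        amounts[i] += dt * (inflow - flux[i])
    outflow.append(flux[-1])

mean_transit = N * V / Q            # analytical mean transit time for the chain
```

The outflow trace shows the characteristic skewed bolus-response curve; mass is conserved, since whatever has not yet left the chain remains in the compartments.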
Abstract:
We investigate the difference between classical and quantum dynamics of coupled magnetic dipoles. We prove that in general the dynamics of the classical interaction Hamiltonian differs from the corresponding quantum model, regardless of the initial state. The difference appears as nonpositive-definite diffusion terms in the quantum evolution equation of an appropriate positive phase-space probability density. Thus, it is not possible to express the dynamics in terms of a convolution of a positive transition probability function and the initial condition as can be done in the classical case. It is this feature that enables the quantum system to evolve to an entangled state. We conclude that the dynamics are a quantum element of nuclear magnetic resonance quantum-information processing. There are two limits where our quantum evolution coincides with the classical one: the short-time limit before spin-spin interaction sets in and the long-time limit when phase diffusion is incorporated.
Abstract:
We model the behavior of an ion trap with all ions driven simultaneously and coupled collectively to a heat bath. The equations for this system are similar to the irreversible dynamics of a collective angular momentum system known as the Dicke model. We show how the steady state of the ion trap as a dissipative many-body system driven far from equilibrium can exhibit quantum entanglement. We calculate the entanglement of this steady state for two ions in the trap and in the case of more than two ions we calculate the entanglement between two ions by tracing over all the other ions. The entanglement in the steady state is a maximum for the parameter values corresponding roughly to a bifurcation of a fixed point in the corresponding semiclassical dynamics. We conjecture that this is a general mechanism for entanglement creation in driven dissipative quantum systems.
Abstract:
A repetitive DNA motif was used as a marker to identify novel genes in the mucosal pathogen Moraxella catarrhalis. There is a high prevalence of such repetitive motifs in virulence genes that display phase-variable expression. Two repeat-containing loci were identified using a digoxigenin-labelled 5'-(CAAC)6-3' oligonucleotide probe. The repeats are located in the methylase components of two distinct type III restriction-modification (R-M) systems. We suggest that the phase-variable nature of these R-M systems indicates that they have an important role in the biology of M. catarrhalis. (C) 2002 Published by Elsevier Science B.V. on behalf of the Federation of European Microbiological Societies.
Abstract:
Background: Thalamotomy has been reported to be successful in ameliorating the motor symptoms of tremor and/or rigidity in people with Parkinson's disease (PD), emphasising the bona fide contribution of this subcortical nucleus to the neural circuitry subserving motor function. Despite evidence of parallel yet segregated associative and motor cortico-subcortical-cortical circuits, comparatively few studies have investigated the effects of this procedure on cognitive functions. In particular, research pertaining to the impact of thalamotomy on linguistic processes is fundamentally lacking. Aims: The purpose of this research was to investigate the effects of thalamotomy in the language-dominant and non-dominant hemispheres on linguistic functioning, relative to operative theoretical models of subcortical participation in language. This paper compares the linguistic profiles of two males with PD, aged 75 years (10 years of formal education) and 62 years (22 years of formal education), subsequent to unilateral thalamotomy procedures within the language-dominant and non-dominant hemispheres, respectively. Methods & Procedures: Comprehensive linguistic profiles comprising general and high-level linguistic abilities in addition to on-line semantic processing skills were compiled up to 1 month prior to surgery and 3 months post-operatively, within perceived "on" periods (i.e., when optimally medicated). Pre- and post-operative language performances were compared within-subjects to a group of 16 non-surgical Parkinson's controls (NSPD) and a group of 16 non-neurologically impaired adults (NC). Outcomes & Results: The findings of this research suggest a laterality effect with regard to the contribution of the thalamus to high-level linguistic abilities and, potentially, the temporal processing of semantic information.
This outcome supports the application of high-level linguistic assessments and measures of semantic processing proficiency to the clinical management of individuals with dominant thalamic lesions. Conclusions: The results reported lend support to contemporary theories of dominant thalamic participation in language, serving to further elucidate our current understanding of the role of subcortical structures in mediating linguistic processes, relevant to cortical hemispheric dominance.
Abstract:
We present whole-rock and zircon rare earth element (REE) data from two early Archaean gneisses (3.81 Ga and 3.64 Ga) from the Itsaq gneiss complex, south-west Greenland. Both gneisses represent extremely rare examples of unaltered, fresh and relatively undeformed igneous rocks of such antiquity. Cathodoluminescence imaging of their zircons indicates a single crystallisation episode with no evidence for either later metamorphic and/or anatectic reworking or inheritance of earlier grains. Uniform, single-population U/Pb age data confirm the structural simplicity of these zircons. One sample, a 3.64 Ga granodioritic gneiss from the Godthåbsfjord, yields a chondrite-normalised REE pattern with a positive slope from La to Lu as well as substantial positive Ce and slight negative Eu anomalies, features generally considered to be typical of igneous zircon. In contrast, the second sample, a 3.81 Ga tonalite from south of the Isua Greenstone Belt, has variable but generally much higher light REE abundances, with similar middle to heavy REE. Calculation of zircon/melt distribution coefficients (D_REE(zircon/melt)) from each sample yields markedly different values for the trivalent REE (i.e. Ce and Eu omitted) and simple application of one set of D_REE(zircon/melt) to model the melt composition for the other sample yields concentrations that are in error by up to two orders of magnitude for the light REE (La-Nd). The observed light REE overabundance in the 3.81 Ga tonalite is a commonly observed feature in terrestrial zircons for which a number of explanations ranging from lattice strain to disequilibrium crystallisation have been proposed and are further investigated herein. Regardless of the cause of light REE overabundance, our study shows that simple application of zircon/melt distribution coefficients is not an unambiguous method for ascertaining original melt composition.
In this context, recent studies that use REE data to claim that > 4.3 Ga Hadean detrital zircons originally crystallised from an evolved magma, in turn suggesting the operation of geological processes in the early Earth analogous to those of the present day (e.g. subduction and melting of hydrated oceanic crust), must be regarded with caution. Indeed, comparison of terrestrial Hadean and > 3.9 Ga lunar highland zircons shows remarkable similarities in the light REE, even though subduction processes that have been used to explain the terrestrial zircons have never operated on the Moon. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
A central problem in visual perception concerns how humans perceive stable and uniform object colors despite variable lighting conditions (i.e. color constancy). One solution is to 'discount' variations in lighting across object surfaces by encoding color contrasts, and utilize this information to 'fill in' properties of the entire object surface. Implicit in this solution is the caveat that the color contrasts defining object boundaries must be distinguished from the spurious color fringes that occur naturally along luminance-defined edges in the retinal image (i.e. optical chromatic aberration). In the present paper, we propose that the neural machinery underlying color constancy is complemented by an 'error-correction' procedure which compensates for chromatic aberration, and suggest that error-correction may be linked functionally to the experimentally induced illusory colored aftereffects known as McCollough effects (MEs). To test these proposals, we develop a neural network model which incorporates many of the receptive-field (RF) profiles of neurons in primate color vision. The model is composed of two parallel processing streams which encode complementary sets of stimulus features: one stream encodes color contrasts to facilitate filling-in and color constancy; the other stream selectively encodes (spurious) color fringes at luminance boundaries, and learns to inhibit the filling-in of these colors within the first stream. Computer simulations of the model illustrate how complementary color-spatial interactions between error-correction and filling-in operations (a) facilitate color constancy, (b) reveal functional links between color constancy and the ME, and (c) reconcile previously reported anomalies in the local (edge) and global (spreading) properties of the ME. We discuss the broader implications of these findings by considering the complementary functional roles performed by RFs mediating color-spatial interactions in the primate visual system. 
(C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of errors for statistical modelling and coped better with complex covariate data sets.
Conclusion: Results from parametric modelling programs can vary due to different algorithms employed to estimate parameters, alternative methods of covariate analysis and variations and limitations in the software itself.
Abstract:
Fixed-point roundoff noise in digital implementation of linear systems arises due to overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors due to the complicated input-dependence of errors. The complete uniform white-noise model is shown to be valid in the sense of weak convergence of probability measures as the lattice step tends to zero if the matrices of realization of the system in the state space satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
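The uniform white-noise assumption for input quantization errors can be illustrated empirically. In this sketch (with an assumed smooth input distribution, not the paper's construction), rounding to a lattice of step q produces errors confined to [-q/2, q/2] whose variance matches the uniform-model prediction of q^2/12:

```python
import math
import random

q = 1e-3            # lattice step (assumed)
random.seed(0)

# Input samples with an absolutely continuous, smooth distribution
# (an assumed example: the sine of a uniform random phase).
xs = [math.sin(2 * math.pi * random.random()) for _ in range(200_000)]

# Quantization error: difference between each sample and its nearest
# lattice point.
errors = [x - q * round(x / q) for x in xs]

var = sum(e * e for e in errors) / len(errors)
print(var / (q * q / 12))   # close to 1.0: variance matches q^2/12
```

As the quantization theorem suggests, shrinking q drives the empirical error distribution toward the uniform law on [-q/2, q/2].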
Abstract:
This paper employs a two-dimensional variable density flow and transport model to investigate the transport of a dense contaminant plume in an unconfined coastal aquifer. Experimental results are also presented to show the contaminant plume in a freshwater-seawater flow system. Both the numerical and experimental results suggest that the neglect of the seawater interface does not noticeably affect the horizontal migration rate of the plume before it reaches the interface. However, the contaminant will travel further seaward and part of the solute mass will exit under the sea if the higher seawater density is not included. If the seawater density is included, the contaminant will travel upwards towards the beach along the freshwater-saltwater interface as shown experimentally. Neglect of seawater density, therefore, will result in an underestimate of solute mass rate exiting around the coastline. (C) 2002 IMACS. Published by Elsevier Science B.V. All rights reserved.
Abstract:
We report the first steps of a collaborative project between the University of Queensland, Polyflow, Michelin, SK Chemicals, and RMIT University on the simulation, validation and application of a recently introduced constitutive model designed to describe branched polymers. Whereas much progress has been made on predicting the complex flow behaviour of many - in particular linear - polymers, it sometimes appears difficult to predict simultaneously shear thinning and extensional strain hardening behaviour using traditional constitutive models. Recently, a new viscoelastic model based on molecular topology was proposed by McLeish and Larson (1998). We explore the predictive power of a differential multi-mode version of the pom-pom model for the flow behaviour of two commercial polymer melts: a (long-chain branched) low-density polyethylene (LDPE) and a (linear) high-density polyethylene (HDPE). The model responses are compared to elongational recovery experiments published by Langouche and Debbaut (1999), and start-up of simple shear flow, stress relaxation after simple and reverse step strain experiments carried out in our laboratory.
Abstract:
Cloninger's psychobiological model of personality as applied to substance misuse has received mixed support. Contrary to the model, recent data suggest that a combination of high novelty seeking (NS) and high harm avoidance (HA) represents a significant risk for the development of severe substance misuse. A genetic polymorphism previously implicated in severe substance dependence, the A1 allele of the D2 dopamine receptor (DRD2) gene, was examined in relation to NS and HA amongst 203 adolescent boys. Specifically, we hypothesized that subjects with the A1+ allele (A1/A1 and A1/A2 genotypes) would report stronger NS and would exhibit a more positive relationship between NS and HA than those with the A1- allele (A2/A2 genotypes). These predictions were supported. The correlation between NS and HA in 81 A1+ allelic boys (r = 0.27, P = 0.02), and that in the 122 A1- allelic boys (r = -0.15, P = 0.09), indicated that this relationship differed according to allelic status (F = 8.52, P < 0.004). Among those with the A1- allele, the present results are consistent with the traditional view that novelty seeking provides positive reinforcement, or the fulfillment of appetitive drives. In contrast, novelty seeking in those with the A1+ allele appears to include a negative reinforcement or self-medicating function. (C) 2002 Elsevier Science Ltd. All rights reserved.
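The reported group difference in the NS-HA correlation can be cross-checked with Fisher's r-to-z transformation, a standard test for comparing two independent correlations (this reproduces the abstract's significance level from its reported r and n values; it is not necessarily the test the authors used):

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """Two-tailed p-value for H0: the two population correlations are equal."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the z difference
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))           # two-tailed normal p-value
    return z, p

# Values from the abstract: A1+ boys (r = 0.27, n = 81) vs
# A1- boys (r = -0.15, n = 122).
z, p = fisher_z_diff(0.27, 81, -0.15, 122)
print(round(z, 2), round(p, 4))   # z ≈ 2.94, p ≈ 0.0033, consistent with P < 0.004
```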