990 results for Random Processes
Abstract:
Y-Ba-Cu-O samples with additions of Y2O3 and CeO2 were quenched during seeded isothermal melt processing and examined by optical microscopy and scanning electron microscopy. Large YBa2Cu3O7-y (Y123) particles in the starting powder were found to form a distinct type of melt during heating, which was unaffected by the Y2O3 or CeO2 additives. This type of melt later formed regions with a low concentration of Y2BaCuO5 (Y211) particles in the Y123 matrix. The maximum growth rate of Y123 that could be sustained in the sample was found to be lower in the melt formed from large Y123 particles, and this may lead to growth accidents and subgrains in some samples.
Abstract:
The development of the new TOGA (titration and off-gas analysis) sensor for the detailed study of biological processes in wastewater treatment systems is outlined. The main innovation of the sensor is the amalgamation of titrimetric and off-gas measurement techniques. The resulting measured signals are: hydrogen ion production rate (HPR), oxygen transfer rate (OTR), nitrogen transfer rate (NTR), and carbon dioxide transfer rate (CTR). While OTR and NTR are applicable to aerobic and anoxic conditions, respectively, HPR and CTR are useful signals under all of the conditions found in biological wastewater treatment systems, namely, aerobic, anoxic and anaerobic. The sensor is therefore a powerful tool for studying the key biological processes under all these conditions. A major benefit from the integration of the titrimetric and off-gas analysis methods is that the acid/base buffering systems, in particular the bicarbonate system, are properly accounted for. Experimental data resulting from the TOGA sensor in aerobic, anoxic, and anaerobic conditions demonstrate the strength of the new sensor. In the aerobic environment, carbon oxidation (using acetate as an example carbon source) and nitrification are studied. Both the carbon and ammonia removal rates measured by the sensor compare very well with those obtained from off-line chemical analysis. Further, the aerobic acetate removal process is examined at a fundamental level using the metabolic pathway and stoichiometry established in the literature, whereby the rate of formation of storage products is identified. Under anoxic conditions, the denitrification process is monitored and, again, the measured rate of nitrogen gas transfer (NTR) matches well with the removal of the oxidised nitrogen compounds (measured chemically). In the anaerobic environment, the enhanced biological phosphorus removal process was investigated.
In this case, the measured sensor signals (HPR and CTR) resulting from acetate uptake were used to determine the ratio of the rates of carbon dioxide production by competing groups of microorganisms, which consequently is a measure of the activity of these organisms. The sensor involves the use of expensive equipment such as a mass spectrometer and requires special gases to operate, thus incurring significant capital and operational costs. This makes the sensor more an advanced laboratory tool than an on-line sensor. (C) 2003 Wiley Periodicals, Inc.
Abstract:
Recently, several groups have investigated quantum analogues of random walk algorithms, both on a line and on a circle. It has been found that the quantum versions have markedly different features from the classical versions: the variance on the line and the mixing time on the circle increase quadratically faster in the quantum versions than in the classical versions. Here, we propose a scheme to implement the quantum random walk on a line and on a circle in an ion trap quantum computer. With current ion trap technology, the number of steps that could be experimentally implemented will be relatively small. However, we show how the enhanced features of these walks could be observed experimentally. In the limit of strong decoherence, the quantum random walk tends to the classical random walk. By measuring the degree to which the walk remains "quantum," this algorithm could serve as an important benchmarking protocol for ion trap quantum computers.
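The quadratic enhancement described above is easy to reproduce numerically. Below is a minimal sketch (a direct matrix simulation in Python, not the proposed ion-trap scheme) of a Hadamard-coin walk on the line, compared with the classical binomial walk; the quantum standard deviation grows linearly in the number of steps, while the classical one grows only as its square root. The symmetric initial coin state is an illustrative choice.

```python
import numpy as np
from math import comb

def quantum_walk_line(steps):
    """Hadamard-coin discrete-time quantum walk on the integer line.

    psi[p, c] is the amplitude at position index p with coin state c
    (c = 0 steps left, c = 1 steps right).  Returns the position
    probability distribution after `steps` steps.
    """
    n = 2 * steps + 1                             # positions -steps .. steps
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps] = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin operator
    for _ in range(steps):
        psi = psi @ H                             # coin toss (H is symmetric)
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]              # coin 0: move one site left
        shifted[1:, 1] = psi[:-1, 1]              # coin 1: move one site right
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)

def classical_walk_line(steps):
    """Unbiased classical random walk: binomial position distribution."""
    prob = np.zeros(2 * steps + 1)
    for k in range(steps + 1):                    # k right-steps -> position -steps + 2k
        prob[2 * k] = comb(steps, k) / 2 ** steps
    return prob

if __name__ == "__main__":
    steps = 50
    x = np.arange(-steps, steps + 1)
    for name, p in [("quantum", quantum_walk_line(steps)),
                    ("classical", classical_walk_line(steps))]:
        std = np.sqrt((p * x ** 2).sum() - (p * x).sum() ** 2)
        print(f"{name:9s} std after {steps} steps: {std:.2f}")
```

Even for the modest step counts an ion trap could realize, the ballistic (linear-in-time) spread of the quantum walk is already visible against the diffusive classical spread.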
Abstract:
In computer simulations of smooth dynamical systems, the original phase space is replaced by machine arithmetic, which is a finite set. The resulting spatially discretized dynamical systems do not inherit all functional properties of the original systems, such as surjectivity and existence of absolutely continuous invariant measures. This can lead to computational collapse to fixed points or short cycles. The paper studies loss of such properties in spatial discretizations of dynamical systems induced by unimodal mappings of the unit interval. The problem reduces to studying set-valued negative semitrajectories of the discretized system. As the grid is refined, the asymptotic behavior of the cardinality structure of the semitrajectories follows probabilistic laws corresponding to a branching process. The transition probabilities of this process are explicitly calculated. These results are illustrated by the example of the discretized logistic mapping.
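The collapse mechanism described above can be illustrated in a few lines: discretize the logistic map onto a uniform grid and follow every grid point to the cycle it eventually falls into. The grid sizes and rounding rule below are illustrative assumptions, not the paper's exact construction.

```python
def discretize(f, n):
    """Spatial discretization of f : [0,1] -> [0,1] on the uniform grid
    {0, 1/n, ..., 1}: grid index i is mapped to round(n * f(i/n))."""
    return [round(n * f(i / n)) for i in range(n + 1)]

def terminal_cycle(g, i):
    """Follow the (deterministic) discretized map from grid point i until a
    state repeats; return the cycle the trajectory is trapped in."""
    seen = {}
    path = []
    while i not in seen:
        seen[i] = len(path)
        path.append(i)
        i = g[i]
    return frozenset(path[seen[i]:])

if __name__ == "__main__":
    f = lambda x: 4 * x * (1 - x)   # logistic map, chaotic on [0, 1]
    for n in (10 ** 3, 10 ** 4, 10 ** 5):
        g = discretize(f, n)
        cycles = {terminal_cycle(g, i) for i in range(n + 1)}
        print(f"n = {n:6d}: {len(cycles)} distinct terminal cycles, "
              f"lengths {sorted(len(c) for c in cycles)}")
```

Although the continuous map has an absolutely continuous invariant measure, every trajectory of the finite-grid system ends on one of a handful of short cycles (the grid point 0 is always a fixed point), and how the cycle structure varies as the grid is refined is exactly what the branching-process laws in the paper describe.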
Abstract:
In recent years, studies on environmental samples with unusual polychlorinated dibenzo-p-dioxin (PCDD) congener profiles have been reported from a range of countries. These profiles, characterized by a dominance of octachlorinated dibenzodioxin (OCDD) and relatively low polychlorinated dibenzofuran (PCDF) concentrations, could not be attributed to known sources or formation processes. In the present study, the processes that result in these unusual profiles were assessed using the concentrations and isomer signatures of PCDDs from dated estuarine sediment cores in Queensland, Australia. Increases in the relative concentrations of lower chlorinated PCDDs and a relative decrease of OCDD were correlated with the time of sediment deposition. Preferential lateral, anaerobic dechlorination of OCDD represents a likely pathway for these changes. In Queensland sediments, these transformations result in a distinct dominance of isomers fully chlorinated in the 1,4,6,9-positions (1,4-patterns), and similar 1,4-patterns were observed in sediments from elsewhere. Consequently, these environmental samples may not reflect the signatures of the original source, and a reevaluation of source inputs was undertaken. Natural formation of PCDDs, which has previously been suggested, is discussed; however, based on the present results and literature comparisons, we propose an alternative scenario. This scenario hypothesizes that an anthropogenic PCDD precursor input (e.g. pentachlorophenol) results in the contamination. These results and this hypothesis imply that further investigations are warranted into possible anthropogenic sources in areas where natural PCDD formation has been suggested.
Abstract:
This paper addresses robust model-order reduction of a high dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed parameter model of the same process, which was validated against experimental data from an existing pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented, and based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
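For readers unfamiliar with the family of techniques compared in the paper, here is a hedged sketch of plain square-root balanced truncation, a close relative of the singular perturbation approximation variant the paper favours, applied to a small stable linear system. The random system, the seed, and the reduced order are invented purely for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system
    dx/dt = A x + B u,  y = C x, down to r states.

    Returns (Ar, Br, Cr, hsv) where hsv are the Hankel singular values;
    the H-infinity error is bounded by 2 * hsv[r:].sum().
    """
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    P = (P + P.T) / 2                             # symmetrize numerically
    Q = (Q + Q.T) / 2
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)                   # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                         # n x r balancing projection
    Ti = S @ U[:, :r].T @ Lo.T                    # r x n balancing projection
    return Ti @ A @ T, Ti @ B, C @ T, hsv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, r = 10, 4
    A = rng.standard_normal((n, n))
    A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # shift to stability
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)
    print("Hankel singular values:", np.round(hsv, 4))
```

States with small Hankel singular values contribute little input-output energy and can be discarded with a known error bound, which is what makes such reductions attractive for controller design on a 154-state model.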
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of a matrix of updated conditional covariances of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
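A minimal sketch of the successive-residuals idea (with an assumed exponential covariance and group size, not the paper's exact partitioning): each group of variables receives its kriging mean given everything simulated so far, plus a residual drawn via a small Cholesky (LU-type) factor of the conditional covariance. Driven by the same standard normal deviates, this reproduces a full-matrix LU simulation exactly, which is the equivalence the abstract describes.

```python
import numpy as np

def cov_exp(x, a=10.0):
    """Exponential covariance C(h) = exp(-|h|/a) between the points of x."""
    h = np.abs(x[:, None] - x[None, :])
    return np.exp(-h / a)

def simulate_by_residuals(C, e, group=16):
    """Simulate a zero-mean Gaussian vector with covariance C group by group.

    Each new group gets its kriging (conditional) mean given everything
    already simulated, plus a residual drawn with a small Cholesky factor
    of the conditional covariance.  `e` holds the i.i.d. standard normal
    deviates driving the simulation.
    """
    n = C.shape[0]
    z = np.empty(n)
    for start in range(0, n, group):
        stop = min(start + group, n)
        g = slice(start, stop)                       # current group
        p = slice(0, start)                          # previously simulated
        if start == 0:
            mean = np.zeros(stop)
            S = C[g, g]
        else:
            W = np.linalg.solve(C[p, p], C[g, p].T)  # kriging weights
            mean = W.T @ z[p]                        # conditional mean
            S = C[g, g] - C[g, p] @ W                # conditional covariance
        z[g] = mean + np.linalg.cholesky(S) @ e[start:stop]
    return z
```

Only small (group-sized) Cholesky factorizations and linear solves are ever formed, so the scheme scales to fields far beyond what a single full-matrix decomposition allows, and newly acquired data can be absorbed by conditioning further groups rather than restarting the simulation.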
Abstract:
For Markov processes on the positive integers with the origin as an absorbing state, Ferrari, Kesten, Martinez and Picco studied the existence of quasi-stationary and limiting conditional distributions by characterizing quasi-stationary distributions as fixed points of a transformation Phi on the space of probability distributions on {1, 2, ...}. In the case of a birth-death process, the components of Phi(nu) can be written down explicitly for any given distribution nu. Using this explicit representation, we will show that Phi preserves likelihood ratio ordering between distributions. A conjecture of Kryscio and Lefevre concerning the quasi-stationary distribution of the SIS logistic epidemic follows as a corollary.
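As a concrete companion to the corollary, the quasi-stationary distribution of the SIS logistic epidemic can be computed numerically as the normalized dominant left eigenvector of the generator restricted to the transient states. The rates and population size below are illustrative assumptions, and this eigenvector route is a standard numerical alternative to iterating the transformation Phi.

```python
import numpy as np

def sis_qsd(N=50, beta=2.0, mu=1.0):
    """Quasi-stationary distribution of the SIS logistic epidemic.

    Birth (infection) rate from state i: beta * i * (N - i) / N;
    death (recovery) rate: mu * i; state 0 is absorbing.  The QSD is the
    normalized dominant left eigenvector of the generator restricted to
    the transient states {1, ..., N}.
    """
    Q = np.zeros((N, N))                    # state i -> index i - 1
    for i in range(1, N + 1):
        up = beta * i * (N - i) / N
        down = mu * i
        k = i - 1
        if i < N:
            Q[k, k + 1] = up
        if i > 1:
            Q[k, k - 1] = down              # from state 1, mass leaks to 0
        Q[k, k] = -(up + down)
    vals, vecs = np.linalg.eig(Q.T)         # left eigenvectors of Q
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()
```

For infection pressure beta above mu, the resulting distribution concentrates near the endemic equilibrium N * (1 - mu / beta), which is the regime the Kryscio-Lefevre conjecture concerns.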
Abstract:
Why does species richness vary so greatly across lineages? Traditionally, variation in species richness has been attributed to deterministic processes, although it is equally plausible that it may result from purely stochastic processes. We show that, based on the best available phylogenetic hypothesis, the pattern of cladogenesis among agamid lizards is not consistent with a random model, with some lineages having more species, and others fewer species, than expected by chance. We then use phylogenetic comparative methods to test six types of deterministic explanation for variation in species richness: body size, life history, sexual selection, ecological generalism, range size and latitude. Of eight variables we tested, only sexual size dimorphism and sexual dichromatism predicted species richness. Increases in species richness are associated with increases in sexual dichromatism but reductions in sexual size dimorphism. Consistent with recent comparative studies, we find no evidence that species richness is associated with small body size or high fecundity. Equally, we find no evidence that species richness covaries with ecological generalism, latitude or range size.
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
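For orientation, a sketch of the central definitions as they commonly appear in the anisotropy-based control literature (the notation here is an assumption and may differ from the paper): for an absolutely continuous random m-vector w with finite second moment and differential entropy h(w),

```latex
% Anisotropy of a finite-power random m-vector w: the minimal
% Kullback-Leibler divergence from Gaussian laws with zero mean and
% scalar covariance matrices (notation assumed for illustration).
\mathbf{A}(w)
  = \min_{\lambda > 0} D\bigl(\mathcal{L}(w) \,\|\, \mathcal{N}(0, \lambda I_m)\bigr)
  = \frac{m}{2}\,\ln\!\Bigl(\frac{2\pi e}{m}\,\mathbf{E}|w|^2\Bigr) - h(w),

% and the a-anisotropic norm of a matrix F: its worst-case root mean
% square gain over inputs of anisotropy at most a >= 0.
\|F\|_a
  = \sup\Bigl\{ \sqrt{\mathbf{E}|Fw|^2 / \mathbf{E}|w|^2}
      \;:\; \mathbf{A}(w) \le a \Bigr\}.
```

Here \mathcal{L}(w) denotes the law of w; at a = 0 the norm reduces to a scaled Frobenius-type average gain, and as a grows it approaches the maximum singular value, which is the interpolation property that makes the norm useful in stochastic worst-case design.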
Abstract:
Using a subtractive hybridisation approach, we enriched for genes likely to play a role in embryonic development of the mammalian face and other structures. This was achieved by subtracting cDNA derived from adult mouse liver from that derived from 10.5 dpc mouse embryonic branchial arches 1 and 2. Random sequencing of clones from the resultant library revealed that a high percentage correspond to genes with a previously established role in embryonic development and disease, while 15% represent novel or uncharacterised genes. Whole mount in situ hybridisation analysis of novel genes revealed that approximately 50% have restricted expression during embryonic development. In addition to expression in branchial arches, these genes showed a range of expression domains commonly including neural tube and somites. Notably, all genes analysed were found to be expressed not only in the branchial arches but also in the developing limb buds, providing support for the hypothesis that development of the limbs and face is likely to involve analogous molecular processes. (C) 2003 Wiley-Liss, Inc.
Abstract:
This paper considers the question of which is better: the batch or the continuous activated sludge process? It is an important question because dissension still exists in the wastewater industry as to the relative merits of each process. A review of perceived differences between the processes from the point of view of two related disciplines, process engineering and biotechnology, is presented, together with the results of previous comparative studies. These reviews highlight possible areas where more understanding is required. This is provided in the paper by application of the flexibility index to two case studies. The flexibility index is a useful process design tool that measures the ability of a process to cope with long-term changes in operation.
Abstract:
A research program on atmospheric boundary layer processes and local wind regimes in complex terrain was conducted in the vicinity of Lake Tekapo in the southern Alps of New Zealand, during two 1-month field campaigns in 1997 and 1999. The effects of the interaction of thermal and dynamic forcing were of specific interest, with a particular focus on the interaction of thermal forcing of differing scales. The rationale and objectives of the field and modeling program are described, along with the methodology used to achieve them. Specific research aims include improved knowledge of the role of surface forcing associated with varying energy balances across heterogeneous terrain, thermal influences on boundary layer and local wind development, and dynamic influences of the terrain through channeling effects. Data were collected using a network of surface meteorological and energy balance stations, radiosonde and pilot balloon soundings, tethered balloon and kite-based systems, sodar, and an instrumented light aircraft. These data are being used to investigate the energetics of surface heat fluxes, the effects of localized heating/cooling and advective processes on atmospheric boundary layer development, and dynamic channeling. A complementary program of numerical modeling includes application of the Regional Atmospheric Modeling System (RAMS) to case studies characterizing typical boundary layer structures and airflow patterns observed around Lake Tekapo. Some initial results derived from the special observation periods are used to illustrate progress made to date. In spite of the difficulties involved in obtaining good data and undertaking modeling experiments in such complex terrain, initial results show that surface thermal heterogeneity has a significant influence on local atmospheric structure and wind fields in the vicinity of the lake. This influence occurs particularly in the morning. 
However, dynamic channeling effects and the larger-scale thermal effect of the mountain region frequently override these more local features later in the day.