Abstract:
Context. Emission lines formed in decretion disks of Be stars often undergo long-term cyclic variations, especially in the violet-to-red (V/R) ratio of their primary components. The underlying structural and dynamical variations of the disks are only partly understood. From observations of the bright Be-shell star ζ Tau, possibly the broadest and longest data set illustrating the prototype of this behaviour was compiled from our own and archival observations. It comprises optical and infrared spectra, broad-band polarimetry, and interferometric observations. Aims. The dense, long-term monitoring permits a better separation of repetitive and ephemeral variations. The broad wavelength coverage includes lines formed under different physical conditions, i.e. at different locations in the disk, so that the dynamics can be probed throughout much of the disk. Polarimetry and interferometry constrain the spatial structure. All together, the objective is to better understand the dynamics and life cycle of decretion disks. Methods. Standard methods of data acquisition, reduction, and analysis were applied. Results. From 3 V/R cycles between 1997 and 2008, a mean cycle length in Hα of 1400-1430 days was derived. After each minimum in V/R, the shell absorption weakens and splits into two components, leading to 3 emission peaks. This phase may make the strongest contribution to the variability in cycle length. There is no obvious connection between the V/R cycle and the 133-day orbital period of the not otherwise detected companion. V/R curves of different lines are shifted in phase: lines formed on average closer to the central star are ahead of the others. The shell absorption lines fall into 2 categories differing in line width, ionization/excitation potential, and variability of the equivalent width. They seem to form in separate regions of the disk, probably crossing the line of sight at different times.
The interferometry has resolved the continuum and the line emission in Brγ and He I 2.06 μm. The phasing of the Brγ emission shows that the photocenter of the line-emitting region lies within the plane of the disk but is offset from the continuum source. The plane of the disk is constant throughout the observed V/R cycles. The observations lay the foundation for the fully self-consistent, one-armed disk-oscillation model developed in Paper II.
Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density
Abstract:
We investigate the sensitivity of the composite cellular automaton of H. Fukś [Phys. Rev. E 55, R2081 (1997)] to noise and assess the density classification performance of the resulting probabilistic cellular automaton (PCA) numerically. We conclude that the composite PCA performs the density classification task reliably only up to very small levels of noise. In particular, it cannot outperform the noisy Gács-Kurdyumov-Levin automaton, itself an imperfect classifier, at any level of noise. While the original composite CA is nonergodic, analyses of relaxation times indicate that its noisy version is an ergodic automaton, with the relaxation times decaying algebraically over an extended range of parameters with an exponent very close (possibly equal) to the mean-field value.
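The composite scheme can be sketched in a few lines. A minimal sketch in Python, assuming the published recipe (elementary rule 184 for ⌊(n−2)/2⌋ steps followed by rule 232 for ⌊(n−1)/2⌋ steps on a ring of n cells) and modeling noise as an independent flip of each updated cell with probability p; the function names are ours:

```python
import random

# Lookup tables for elementary CA rules 184 (traffic) and 232 (majority)
RULE_184 = [(184 >> i) & 1 for i in range(8)]
RULE_232 = [(232 >> i) & 1 for i in range(8)]

def step(config, rule, p=0.0):
    """One synchronous update on a ring; each output bit is flipped with probability p."""
    n = len(config)
    out = []
    for i in range(n):
        idx = 4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n]
        bit = rule[idx]
        if p > 0 and random.random() < p:
            bit ^= 1  # noise: flip the updated cell
        out.append(bit)
    return out

def classify(config, p=0.0):
    """Fuks-style composite classifier: rule 184 phase, then rule 232 phase."""
    n = len(config)
    for _ in range((n - 2) // 2):
        config = step(config, RULE_184, p)
    for _ in range((n - 1) // 2):
        config = step(config, RULE_232, p)
    return config
```

With p = 0 the composite CA settles on the all-1 (all-0) fixed point exactly when the initial density exceeds (falls below) 1/2; raising p lets misclassifications through, which is the sensitivity probed in the abstract.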
Abstract:
We present a scheme for quasiperfect transfer of polariton states from a sender to a spatially separated receiver, both composed of high-quality cavities filled by atomic samples. The sender and the receiver are connected by a nonideal transmission channel (the data bus) modelled by a network of lossy empty cavities. In particular, we analyze the influence of a large class of data-bus topologies on the fidelity and transfer time of the polariton state. Moreover, we also assume dispersive couplings between the polariton fields and the data-bus normal modes in order to achieve a tunneling-like state transfer. Such a tunneling-transfer mechanism, by which the excitation energy of the polariton effectively does not populate the data-bus cavities, is capable of attenuating appreciably the dissipative effects of the data-bus cavities. After deriving a Hamiltonian for the effective coupling between the sender and the receiver, we show that the decay rate of the fidelity is proportional to a cooperativity parameter that weighs the cost of the dissipation rate against the benefit of the effective coupling strength. The increase of the fidelity of the transfer process can be achieved at the expense of longer transfer times. The dependence of both the fidelity and the transfer time on the network topology is analyzed in detail for distinct regimes of parameters. It follows that the data-bus topology can be exploited to control the time of the state-transfer process.
Abstract:
We present a derivation of the Redfield formalism for treating the dissipative dynamics of a time-dependent quantum system coupled to a classical environment. We compare such a formalism with the master equation approach where the environments are treated quantum mechanically. Focusing on a time-dependent spin-1/2 system we demonstrate the equivalence between both approaches by showing that they lead to the same Bloch equations and, as a consequence, to the same characteristic times T(1) and T(2) (associated with the longitudinal and transverse relaxations, respectively). These characteristic times are shown to be related to the operator-sum representation and the equivalent phenomenological-operator approach. Finally, we present a protocol to circumvent the decoherence processes due to the loss of energy (and thus, associated with T(1)). To this end, we simply associate the time dependence of the quantum system to an easily achieved modulated frequency. A possible implementation of the protocol is also proposed in the context of nuclear magnetic resonance.
Abstract:
We propose an alternative fidelity measure (namely, a measure of the degree of similarity) between quantum states and benchmark it against a number of properties of the standard Uhlmann-Jozsa fidelity. This measure is a simple function of the linear entropy and the Hilbert-Schmidt inner product between the given states and is thus, in comparison, not as computationally demanding. It also features several remarkable properties such as being jointly concave and satisfying all of Jozsa's axioms. The trade-off, however, is that it is supermultiplicative and does not behave monotonically under quantum operations. In addition, metrics for the space of density matrices are identified and the joint concavity of the Uhlmann-Jozsa fidelity for qubit states is established.
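In terms of the quantities named in the abstract, the proposed measure can be written (up to the paper's notation) as F(ρ, σ) = Tr(ρσ) + √(1 − Tr ρ²) · √(1 − Tr σ²), i.e. the Hilbert-Schmidt overlap plus a term built from the linear entropies. A minimal numerical sketch for small density matrices represented as nested Python lists; the helper names are ours:

```python
import math

def tr_prod(A, B):
    """Hilbert-Schmidt inner product Tr(A B); real for Hermitian matrices."""
    n = len(A)
    return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n)).real

def fidelity(rho, sigma):
    """Alternative fidelity: Tr(rho sigma) + sqrt(1 - Tr rho^2) * sqrt(1 - Tr sigma^2)."""
    overlap = tr_prod(rho, sigma)
    mix = math.sqrt(max(0.0, 1.0 - tr_prod(rho, rho))) \
        * math.sqrt(max(0.0, 1.0 - tr_prod(sigma, sigma)))
    return overlap + mix

# Example: a pure qubit state against the maximally mixed state
pure = [[1.0, 0.0], [0.0, 0.0]]
mixed = [[0.5, 0.0], [0.0, 0.5]]
# fidelity(pure, mixed) -> 0.5 (the mixedness term vanishes for a pure state)
```

Note that, unlike the Uhlmann-Jozsa fidelity, no matrix square roots are needed, which is the computational advantage claimed in the abstract.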
Abstract:
In this paper, we present an analog of Bell's inequalities violation test for N qubits to be performed in a nuclear magnetic resonance (NMR) quantum computer. This can be used to simulate or predict the results for different Bell's inequality tests, with distinct configurations and a larger number of qubits. To demonstrate our scheme, we implemented a simulation of the violation of the Clauser, Horne, Shimony and Holt (CHSH) inequality using a two-qubit NMR system and compared the results to those of a photon experiment. The experimental results are well described both by quantum mechanics and by a local realistic hidden-variables model (LRHVM) that was specifically developed for NMR. That is why we refer to this experiment as a simulation of Bell's inequality violation. Our result shows explicitly how the two theories can be compatible with each other due to the detection loophole. In the last part of this work, we discuss the possibility of testing some fundamental features of quantum mechanics using NMR with highly polarized spins, where a strong discrepancy between quantum mechanics and hidden variables models can be expected.
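For reference, the quantum-mechanical CHSH value that such an experiment simulates follows from the singlet-state correlation E(a, b) = −cos(a − b); at the standard optimal angles it reaches Tsirelson's bound 2√2, against the local-realistic bound of 2. A short sketch (the angles and names are the textbook choices, not taken from the paper):

```python
import math

def correlation(theta_a, theta_b):
    """Singlet-state correlation for spin measurements along angles (radians)."""
    return -math.cos(theta_a - theta_b)

def chsh(a, ap, b, bp):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    return abs(correlation(a, b) - correlation(a, bp)
               + correlation(ap, b) + correlation(ap, bp))

# Optimal measurement angles: a = 0, a' = 90 deg, b = 45 deg, b' = 135 deg
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# S = 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```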
Abstract:
The existence of quantum correlation (as revealed by quantum discord), other than entanglement, and its role in quantum-information processing (QIP) is a current subject for discussion. In particular, it has been suggested that this nonclassical correlation may provide computational speedup for some quantum algorithms. In this regard, bulk nuclear magnetic resonance (NMR) has been successfully used as a test bench for many QIP implementations, although it has also been continuously criticized for not presenting entanglement in most of the systems used so far. In this paper, we report a theoretical and experimental study on the dynamics of quantum and classical correlations in an NMR quadrupolar system. We present a method for computing the correlations from experimental NMR deviation-density matrices and show that, given the action of the nuclear-spin environment, the relaxation produces a monotonic time decay in the correlations. Although the experimental realizations were performed in a specific quadrupolar system, the main results presented here can be applied to any system described by a deviation-density-matrix formalism.
Abstract:
The thermal dependence of the zero-bias conductance for the single-electron transistor is the target of two independent renormalization-group approaches, both based on the spin-degenerate Anderson impurity model. The first approach, an analytical derivation, maps the Kondo-regime conductance onto the universal conductance function for the particle-hole symmetric model. The mapping is linear and is parametrized by the Kondo temperature and the charge in the Kondo cloud. The second approach, a numerical renormalization-group computation of the conductance as a function of the temperature and applied gate voltages, offers a comprehensive view of zero-bias charge transport through the device. The first approach is exact in the Kondo regime; the second is essentially exact throughout the parametric space of the model. For illustrative purposes, conductance curves resulting from the two approaches are compared.
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test if two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum over the space of trees of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
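The reduction itself is specific to the context-tree statistic, but the resulting max-flow problem can be solved by any augmenting-path method. A minimal Edmonds-Karp variant of Ford-Fulkerson (shortest augmenting paths via BFS, capacities as an adjacency matrix); the example network is ours, chosen only to exercise the code:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along a shortest residual path."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:   # no augmenting path left: flow is maximal
            return total
        # Find the bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Push the bottleneck flow (negative reverse flow encodes residual edges)
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy network: source 0, sink 3
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]
# max_flow(cap, 0, 3) -> 5
```

Edmonds-Karp runs in O(V·E²), which is what makes the polynomial-time claim in the abstract effective once the supremum has been recast as a flow problem.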
Abstract:
In this paper an alternative approach to the one in Henze (1986) is proposed for deriving the odd moments of the skew-normal distribution considered in Azzalini (1985). The approach is based on a Pascal-type triangle, which seems to greatly simplify moment computation. Moreover, it is shown that the likelihood equation for estimating the asymmetry parameter in such a model is generated as orthogonal functions to the sample vector. As a consequence, conditions for a unique solution of the likelihood equation are established, which seem to hold in a more general setting.
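The paper's triangle recursion is not reproduced here, but the odd moments it targets are easy to cross-check numerically: the Azzalini skew-normal has density 2φ(x)Φ(αx), and its first moment is the standard result E[X] = √(2/π) · δ with δ = α/√(1 + α²). A sketch comparing simple quadrature with that closed form (grid limits and step are arbitrary choices of ours):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini skew-normal density 2*phi(x)*Phi(alpha*x)."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))  # standard normal cdf at alpha*x
    return 2 * phi * Phi

def moment(k, alpha, lo=-12.0, hi=12.0, n=200_000):
    """k-th raw moment by the trapezoidal rule on [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (lo ** k * skew_normal_pdf(lo, alpha)
               + hi ** k * skew_normal_pdf(hi, alpha))
    for i in range(1, n):
        x = lo + i * h
        s += x ** k * skew_normal_pdf(x, alpha)
    return s * h

alpha = 2.0
delta = alpha / math.sqrt(1 + alpha * alpha)
closed_form = math.sqrt(2 / math.pi) * delta   # known E[X] for the skew-normal
```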
Abstract:
The flowpaths by which water moves from watersheds to streams have important consequences for the runoff dynamics and biogeochemistry of surface waters in the Amazon Basin. The clearing of Amazon forest to cattle pasture has the potential to change runoff sources to streams by shifting runoff to more surficial flow pathways. We applied end-member mixing analysis (EMMA) to 10 small watersheds throughout the Amazon in which the solute composition of streamwater and groundwater, overland flow, soil solution, throughfall and rainwater were measured, largely as part of the Large-Scale Biosphere-Atmosphere Experiment in Amazonia. We found a range in the extent to which streamwater samples fell within the mixing space determined by potential flowpath end-members, suggesting that some water sources to streams were not sampled. The contribution of overland flow as a source of stream flow was greater in pasture watersheds than in forest watersheds of comparable size. Increases in overland flow contribution to pasture streams ranged in some cases from 0% in forest to 27-28% in pasture and were broadly consistent with results from hydrometric sampling of Amazon forest and pasture watersheds that indicate a 17- to 18-fold increase in the overland flow contribution to stream flow in pastures. In forest, overland flow was an important contribution to stream flow (45-57%) in ephemeral streams where flows were dominated by stormflow. Overland flow contribution to stream flow decreased in importance with increasing watershed area, from 21 to 57% in forest and 60-89% in pasture watersheds of less than 10 ha to 0% in forest and 27-28% in pastures in watersheds greater than 100 ha. Soil solution contributions to stream flow were similar across watershed area and groundwater inputs generally increased in proportion to decreases in overland flow.
Application of EMMA across multiple watersheds indicated patterns across gradients of stream size and land cover that were consistent with patterns determined by detailed hydrometric sampling.
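At its core, EMMA treats each stream sample as a conservative mixture of end-member solute signatures and solves for the mixing fractions. A minimal sketch with two tracers and three end-members, solving the mass-balance system by Gaussian elimination; every concentration below is invented purely for illustration and does not come from the study:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical end-member tracer signatures (arbitrary units)
# columns: overland flow, soil solution, groundwater
end_members = [
    [120.0, 40.0, 10.0],  # tracer 1 concentration in each end-member
    [5.0, 20.0, 60.0],    # tracer 2 concentration in each end-member
    [1.0, 1.0, 1.0],      # mass balance: fractions sum to 1
]

def mixing_fractions(c1, c2):
    """Fractions of each end-member in a stream sample with tracer values c1, c2."""
    return solve(end_members, [c1, c2, 1.0])
```

A stream sample falling outside the simplex spanned by the end-members yields negative fractions, which is exactly the diagnostic the abstract uses to infer unsampled water sources.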
Abstract:
Oropharyngeal dysphagia is characterized by any alteration in swallowing dynamics which may lead to malnutrition and aspiration pneumonia. Early diagnosis is crucial for the prognosis of patients with dysphagia, and the best method for swallowing dynamics assessment is swallowing videofluoroscopy, an exam performed with X-rays. Because it exposes patients to radiation, videofluoroscopy should not be performed frequently nor should it be prolonged. This study presents a non-invasive method for the pre-diagnosis of dysphagia based on the analysis of the swallowing acoustics, where the discrete wavelet transform plays an important role to increase sensitivity and specificity in the identification of dysphagic patients. (C) 2008 Elsevier Inc. All rights reserved.
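The wavelet family and classifier used in the study are not detailed in the abstract, but a single level of the discrete wavelet transform with the Haar wavelet illustrates the decomposition into approximation (low-frequency) and detail (high-frequency) coefficients on which this kind of acoustic analysis rests; the function names are ours:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficient lists."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level; reconstructs the original samples."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out
```

Because the transform is orthonormal, signal energy splits exactly between the two bands, which is what makes band-wise features (e.g. detail-coefficient energy during a swallow) usable for discriminating dysphagic from normal swallowing sounds.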
Abstract:
This work deals with neural network (NN)-based gait-pattern adaptation algorithms for an active lower-limb orthosis. Stable trajectories with different walking speeds are generated during an optimization process considering the zero-moment point (ZMP) criterion and the inverse dynamics of the orthosis-patient model. Additionally, a set of NNs is used to decrease the time-consuming analytical computation of the model and ZMP. The first NN approximates the inverse dynamics including the ZMP computation, while the second NN works in the optimization procedure, giving an adapted desired trajectory according to the orthosis-patient interaction. This trajectory adaptation is added directly to the trajectory generator, also reproduced by a set of NNs. With this strategy, it is possible to adapt the trajectory during the walking cycle in an on-line procedure, instead of changing the trajectory parameters after each step. The dynamic model of the actual exoskeleton, with interaction forces included, is used to generate simulation results. Also, an experimental test is performed with an active ankle-foot orthosis, where the dynamic variables of this joint are replaced in the simulator by actual values provided by the device. It is shown that the final adapted trajectory follows the patient's intention of increasing the walking speed, thus changing the gait pattern. (C) Koninklijke Brill NV, Leiden, 2011
Abstract:
This work presents an analysis of the wavelet-Galerkin method for one-dimensional elastoplastic-damage problems. A time-stepping algorithm for non-linear dynamics is presented. Numerical treatment of the constitutive models is developed by the use of a return-mapping algorithm. For spatial discretization, the wavelet-Galerkin method can be used instead of the standard finite element method; this approach allows singularities to be located. The discrete formulation developed can be applied to the simulation of one-dimensional problems for elastic-plastic-damage models. (C) 2007 Elsevier Inc. All rights reserved.
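The return-mapping step for the one-dimensional elastoplastic case (linear isotropic hardening, with the damage variable omitted for brevity) can be sketched as follows; the material parameters are illustrative, not taken from the paper:

```python
def return_mapping(eps, eps_p, alpha, E=100.0, H=10.0, sigma_y=1.0):
    """1D return mapping: elastic trial state, then plastic correction if yielding.

    eps: total strain; eps_p: plastic strain; alpha: hardening variable.
    E: Young's modulus; H: hardening modulus; sigma_y: initial yield stress.
    Returns the updated (stress, plastic strain, hardening variable).
    """
    sigma_trial = E * (eps - eps_p)                 # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)    # trial yield function
    if f <= 0:
        return sigma_trial, eps_p, alpha            # elastic step: trial state is final
    dgamma = f / (E + H)                            # consistency: plastic multiplier
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - dgamma * E * sign         # return to the updated yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma
```

After a plastic step the returned stress sits exactly on the updated yield surface, |σ| = σ_y + Hα, which is the defining property of the algorithm.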
Abstract:
This paper presents results of research related to multicriteria decision making under information uncertainty. The Bellman-Zadeh approach to decision making in a fuzzy environment is utilized for analyzing multicriteria optimization models (<X, M> models) under deterministic information. Its application conforms to the principle of guaranteed result and provides constructive lines in obtaining harmonious solutions on the basis of analyzing associated maxmin problems. This circumstance permits one to generalize the classic approach to considering the uncertainty of quantitative information (based on constructing and analyzing payoff matrices reflecting effects which can be obtained for different combinations of solution alternatives and the so-called states of nature) from monocriteria decision making to multicriteria problems. Considering that the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always permit one to obtain unique solutions. Taking this into account, a proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of the so-called <X, R> models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The results are of a universal character and are illustrated by a simple example. (c) 2007 Elsevier Inc. All rights reserved.
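In the Bellman-Zadeh setting, a fuzzy-optimal alternative maximizes the minimum membership across all criteria (the guaranteed-result, maxmin principle mentioned above). A minimal discrete sketch; the alternatives, criterion values, and linear normalization below are invented for illustration:

```python
def normalize(values, maximize=True):
    """Linear membership in [0, 1]: 1 at the best value, 0 at the worst."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if maximize:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def maxmin_choice(criteria):
    """criteria: one membership list per criterion; returns the index of the
    alternative maximizing the minimum membership (maxmin aggregation)."""
    n = len(criteria[0])
    aggregated = [min(mu[i] for mu in criteria) for i in range(n)]
    return max(range(n), key=lambda i: aggregated[i])

# Three alternatives, two criteria: cost (to minimize) and quality (to maximize)
cost = normalize([100.0, 60.0, 80.0], maximize=False)   # -> [0.0, 1.0, 0.5]
quality = normalize([9.0, 5.0, 8.0], maximize=True)     # -> [1.0, 0.0, 0.75]
best = maxmin_choice([cost, quality])                   # index 2: min(0.5, 0.75) = 0.5
```

Ties or near-ties in the aggregated membership are precisely the "decision uncertainty regions" the abstract refers to, which the <X, R> models are then used to contract.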