42 results for Discrete-time systems
Abstract:
Particle-in-cell (PIC) simulations of relativistic shocks are in principle capable of predicting the spectra of photons that are radiated incoherently by the accelerated particles. The most direct method evaluates the spectrum using the fields given by the Liénard-Wiechert potentials. However, for relativistic particles this procedure is computationally expensive. Here we present an alternative method that uses the concept of the photon formation length. The algorithm is suitable for evaluating spectra either from particles moving in a specific realization of a turbulent electromagnetic field or from trajectories given as a finite, discrete time series by a PIC simulation. The main advantage of the method is that it identifies the intrinsic spectral features and filters out those that are artifacts of the limited time resolution and finite duration of the input trajectories.
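The direct Liénard-Wiechert evaluation that the abstract calls computationally expensive can be sketched for a discrete trajectory. The fragment below (function and variable names are our own, units are set so that c = 1, and constant geometric prefactors are dropped) sums the standard far-field Fourier integral over a uniformly sampled time series; it is the "expensive" reference method, not the formation-length algorithm the paper proposes:

```python
import numpy as np

def radiation_spectrum(t, r, beta, n_hat, omegas):
    """Far-field radiation spectrum from a discrete particle trajectory.

    Evaluates, up to constant prefactors and with c = 1,
      dI/domega  ~  | integral  n x (n x beta) e^{i w (t - n.r)} dt |^2
    by direct summation over the uniformly sampled time series.
    t: (N,) times; r, beta: (N, 3) positions and velocities;
    n_hat: (3,) observation direction; omegas: frequencies to evaluate.
    """
    dt = t[1] - t[0]                        # assumes uniform sampling
    phase_t = t - r @ n_hat                 # retarded phase t - n.r
    v_perp = np.cross(n_hat, np.cross(n_hat, beta))
    out = np.empty(len(omegas))
    for k, w in enumerate(omegas):
        amp = (v_perp * np.exp(1j * w * phase_t)[:, None]).sum(axis=0) * dt
        out[k] = np.linalg.norm(amp) ** 2
    return out
```

The cost is O(N) per frequency per observation direction, which for relativistic particles (where the relevant frequencies extend to very high harmonics) is what motivates the formation-length approach.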
Abstract:
We study the behaviour of the glued trees algorithm described by Childs et al. in [1] under decoherence. We consider a discrete time reformulation of the continuous time quantum walk protocol and apply a phase damping channel to the coin state, investigating the effect of such a mechanism on the probability of the walker appearing on the target vertex of the graph. We pay particular attention to any potential advantage coming from the use of weak decoherence for the spreading of the walk across the glued trees graph. © 2013 Elsevier B.V.
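A minimal illustration of a discrete-time coin walk subject to coin dephasing can be written as a Monte Carlo unravelling of a phase-damping channel: with some probability per step the coin is projectively measured, which destroys coin coherence exactly as full phase damping does on average. This sketch runs on a line rather than the glued-trees graph of the paper, and all names and parameter values are illustrative:

```python
import numpy as np

def walk_distribution(steps=50, p=0.1, trials=200, seed=0):
    """Discrete-time Hadamard-coin walk on a line with coin dephasing.

    With probability p per step the coin is measured in the computational
    basis, a trajectory unravelling of a phase-damping channel on the coin.
    Returns the trial-averaged position distribution after `steps` steps.
    """
    rng = np.random.default_rng(seed)
    n = 2 * steps + 1                      # positions -steps .. +steps
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    acc = np.zeros(n)
    for _ in range(trials):
        psi = np.zeros((n, 2), dtype=complex)
        psi[steps, 0] = 1.0                # walker at origin, coin |0>
        for _ in range(steps):
            psi = psi @ H.T                # coin toss
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]   # coin |0> steps right
            shifted[:-1, 1] = psi[1:, 1]   # coin |1> steps left
            psi = shifted
            if rng.random() < p:           # dephasing event: measure coin
                p0 = (np.abs(psi[:, 0]) ** 2).sum()
                c = 0 if rng.random() < p0 else 1
                psi[:, 1 - c] = 0.0
                psi /= np.linalg.norm(psi)
        acc += (np.abs(psi) ** 2).sum(axis=1)
    return acc / trials
```

With p = 0 the distribution shows the familiar ballistic two-peak quantum profile; increasing p interpolates toward the Gaussian of a classical random walk, the regime in which the paper asks whether weak decoherence can still help spreading.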
Abstract:
We introduce a general scheme for sequential one-way quantum computation where static systems with long-living quantum coherence (memories) interact with moving systems that may possess very short coherence times. Both the generation of the cluster state needed for the computation and its consumption by measurements are carried out simultaneously. As a consequence, effective clusters of one spatial dimension fewer than in the standard approach are sufficient for computation. In particular, universal computation requires only a one-dimensional array of memories. The scheme applies to discrete-variable systems of any dimension as well as to continuous-variable ones, and both are treated equivalently in the light of local complementation of graphs. In this way our formalism introduces a general framework that encompasses and generalizes in a unified manner some previous system-dependent proposals. The procedure is intrinsically well suited for implementations with atom-photon interfaces.
Abstract:
We examined a remnant host plant (Primula veris L.) habitat network that was last inhabited by the rare butterfly Hamearis lucina L. in north Wales in 1943, to assess the relative contribution of several spatial parameters to its regional extinction. We first examined relationships between P. veris characteristics and H. lucina eggs in surviving H. lucina populations, and used these to predict the suitability and potential carrying capacity of the habitat network in north Wales. This resulted in an estimate of roughly 4500 eggs (ca 227 adults). We developed a discrete space, discrete time metapopulation model to evaluate the relative contribution of dispersal distance, habitat and environmental stochasticity as possible causes of extinction. We simulated the potential persistence of the butterfly in the current network as well as in three artificial (historical and present) habitat networks that differed in the quantity (current and 3×) and fragmentation of the habitat (current and aggregated). We identified that reduced habitat quantity and increased isolation would have increased the probability of regional extinction, in conjunction with environmental stochasticity and H. lucina's dispersal distance. This general trend did not change in a qualitative manner when we modified the ability of dispersing females to find, and stay in, suitable habitats (by changing the size of the grid cells used in the model). Contrary to most metapopulation model predictions, system persistence declined with increasing migration rate, suggesting that the mortality of migrating individuals in fragmented landscapes may pose significant risks to system-wide persistence. Based on model predictions for the present landscape we argue that a major programme of habitat restoration would be required for a re-established metapopulation to persist for > 100 years.
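The general shape of a discrete space, discrete time stochastic patch-occupancy model of this kind can be sketched as follows. This is not the authors' parameterisation: patch coordinates, the exponential dispersal kernel, and all rate constants below are illustrative placeholders.

```python
import numpy as np

def simulate_metapopulation(coords, t_max=100, e0=0.1, alpha=1.0,
                            c0=0.5, env_sd=0.05, seed=0):
    """Discrete-time stochastic patch-occupancy metapopulation model.

    Each year every occupied patch goes extinct with baseline probability
    e0 perturbed by a regionally correlated environmental shock; empty
    patches are colonised with a probability that saturates with their
    connectivity to occupied patches (exponential dispersal kernel).
    Returns the number of occupied patches through time.
    """
    rng = np.random.default_rng(seed)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kernel = np.exp(-alpha * d)             # dispersal kernel exp(-alpha d)
    np.fill_diagonal(kernel, 0.0)
    occupied = np.ones(n, dtype=bool)       # start fully occupied
    occupancy = []
    for _ in range(t_max):
        shock = rng.normal(0.0, env_sd)     # environmental stochasticity
        e = np.clip(e0 + shock, 0.0, 1.0)
        occupied &= rng.random(n) >= e      # local extinctions
        conn = kernel @ occupied.astype(float)
        p_col = 1.0 - np.exp(-c0 * conn)    # colonisation probability
        occupied |= (~occupied) & (rng.random(n) < p_col)
        occupancy.append(occupied.sum())
    return np.array(occupancy)
```

Running such a model across networks that differ in habitat quantity and aggregation, and recording the fraction of replicates that retain any occupied patch, is the kind of experiment the abstract describes.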
Abstract:
The majority, if not all, species have a limited geographic range bounded by a distribution edge. Violent ecotones such as sea coasts clearly produce edges for many species; however, such ecotones, while sufficient for the formation of an edge, are not always necessary. We demonstrate this by simulation in discrete time of a spatially structured finite size metapopulation subjected to a spatial gradient in per-unit-time population extinction probability together with spatially structured dispersal and recolonisation. We find that relatively sharp edges separating a homeland or main geographical range from an outland or zone of relatively sparse and ephemeral colonisation can form in gradual environmental gradients. The form and placing of the edge is an emergent property of the metapopulation dynamics. The sharpness of the edge declines with increasing dispersal distance, and is dependent on the relative scales of dispersal distance and gradient length. The space over which the edge develops is short relative to the potential species range. The edge is robust against changes in the shape of the environmental gradient and, to a lesser extent, against alterations in the kind of dispersal operating. Persistence times in the absence of environmental gradients are virtually independent of the shape of the dispersal function describing migration. The common finding of bell shaped population density distributions across geographic ranges may occur without the strict necessity of a niche mediated response to a spatially autocorrelated environment.
Abstract:
The measurement of fast changing temperature fluctuations is a challenging problem due to the inherent limited bandwidth of temperature sensors. This results in a measured signal that is a lagged and attenuated version of the input. Compensation can be performed provided an accurate, parameterised sensor model is available. However, to account for the influence of the measurement environment and changing conditions such as gas velocity, the model must be estimated in-situ. The cross-relation method of blind deconvolution is one approach for in-situ characterisation of sensors. However, a drawback with the method is that it becomes positively biased and unstable at high noise levels. In this paper, the cross-relation method is cast in the discrete-time domain and a bias compensation approach is developed. It is shown that the proposed compensation scheme is robust and yields unbiased estimates with lower estimation variance than the uncompensated version. All results are verified using Monte Carlo simulations.
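The basic cross-relation idea can be sketched in discrete time for two FIR sensor models h1 and h2 driven by the same unknown input: since y1 * h2 = y2 * h1, the stacked coefficient vector [h1; h2] lies in the null space of a data matrix built from the two outputs. This sketch recovers it (up to the unavoidable scale ambiguity) via an SVD; the bias-compensation scheme that is the paper's contribution is not included, and all names are our own:

```python
import numpy as np

def cross_relation_estimate(y1, y2, order):
    """Blind FIR identification via the cross-relation y2*h1 - y1*h2 = 0.

    Builds the convolution data matrix A = [C(y2), -C(y1)] and returns the
    right singular vector of the smallest singular value, split into the
    two channel estimates (defined only up to a common scale factor).
    """
    N = len(y1)
    def conv_matrix(y):
        # row for time n computes (y * h)[n] = sum_k h[k] y[n-k]
        return np.array([[y[n - k] if 0 <= n - k < N else 0.0
                          for k in range(order)]
                         for n in range(order - 1, N)])
    C1, C2 = conv_matrix(y1), conv_matrix(y2)
    A = np.hstack([C2, -C1])          # A @ [h1; h2] = y2*h1 - y1*h2 = 0
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    h = vt[-1]                        # null-space direction
    return h[:order], h[order:]
```

In the noiseless case the null space is exact; the positive bias the abstract describes arises because, with noisy outputs, the smallest singular value of A no longer corresponds to the true channel pair.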
Abstract:
Impactive contact between a vibrating string and a barrier is a strongly nonlinear phenomenon that presents several challenges in the design of numerical models for simulation and sound synthesis of musical string instruments. These are addressed here by applying Hamiltonian methods to incorporate distributed contact forces into a modal framework for discrete-time simulation of the dynamics of a stiff, damped string. The resulting algorithms have spectral accuracy, are unconditionally stable, and require solving a multivariate nonlinear equation that is guaranteed to have a unique solution. Illustrative results are presented and discussed in terms of accuracy, convergence, and spurious high-frequency oscillations.
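To fix ideas, a single damped string mode obeys a'' + 2*sigma*a' + omega^2*a = f, and a modal discrete-time simulation advances many such equations in parallel. The sketch below uses a plain explicit centred-difference update with the contact force treated as a known input; it is only conditionally stable, unlike the paper's implicit Hamiltonian scheme, and the function name and interface are our own:

```python
import numpy as np

def string_modal_step(a, a_prev, freqs, sigmas, dt, force_modes):
    """One centred-difference step for an array of damped string modes.

    Discretises a'' + 2*sigma*a' + omega^2 * a = f per mode with
    second-order centred differences in time (omega = 2*pi*freqs).
    a, a_prev: modal amplitudes at steps n and n-1; returns step n+1.
    Stable only while omega * dt < 2 for every mode.
    """
    w2 = (2.0 * np.pi * freqs) ** 2
    a_next = (2.0 * a
              - (1.0 - sigmas * dt) * a_prev
              - dt ** 2 * (w2 * a - force_modes)) / (1.0 + sigmas * dt)
    return a_next
```

The distributed barrier force couples the modes nonlinearly; making that coupling implicit, as in the paper, is what removes the stability limit at the cost of solving a multivariate nonlinear equation per step.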
Abstract:
We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system consisting of a secondary base-station (BS) and a group of secondary users (SUs) is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and the interference power constraints at the primary users. The problem of interest differs from the ones studied previously that often assumed a sum power constraint and/or single antenna employed at either both the primary and secondary receivers or the primary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly, compared to the conventional method. Simulation results are provided to demonstrate fast convergence and effectiveness of the proposed algorithm.
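For reference, a discrete-time Sylvester (Stein) equation has the form A X B - X + C = 0, and a small instance can be solved directly through the Kronecker-product identity vec(A X B) = (B^T ⊗ A) vec(X). This dense sketch is only illustrative; its O((nm)^3) cost is exactly what structured solvers, like the one exploited in the paper's Newton step, are meant to avoid:

```python
import numpy as np

def solve_stein(A, B, C):
    """Solve the discrete-time Sylvester (Stein) equation A X B - X + C = 0.

    Uses vec(A X B) = (B^T kron A) vec(X) with column-major (Fortran)
    vectorisation, so (B^T kron A - I) vec(X) = -vec(C). Solvable whenever
    no product of an eigenvalue of A and an eigenvalue of B equals 1.
    """
    n, m = C.shape
    K = np.kron(B.T, A) - np.eye(n * m)
    x = np.linalg.solve(K, -C.reshape(-1, order="F"))  # column-stacked vec
    return x.reshape((n, m), order="F")
```

Structured methods (e.g. Bartels-Stewart-type algorithms) solve the same equation in O(n^3 + m^3), which is the source of the complexity reduction claimed in the abstract.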
Abstract:
Despite the substantial organisational benefits of integrated IT, the implementation of such systems – and particularly Enterprise Resource Planning (ERP) systems – has tended to be problematic, stimulating an extensive body of research into ERP implementation. This research has remained largely separate from the main IT implementation literature. At the same time, studies of IT implementation have generally adopted either a factor or process approach; both have major limitations. To address these limitations, factor and process perspectives are combined here in a unique model of IT implementation. We argue that:
• the organisational factors which determine successful implementation differ for integrated and traditional, discrete IT
• failure to manage these differences is a major source of integrated IT failure.
The factor/process model is used as a framework for proposing differences between discrete and integrated IT.
Abstract:
Discrete Conditional Phase-type (DC-Ph) models are a family of models which represent skewed survival data conditioned on specific inter-related discrete variables. The survival data is modeled using a Coxian phase-type distribution which is associated with the inter-related variables using a range of possible data mining approaches such as Bayesian networks (BNs), the Naïve Bayes Classification method and classification regression trees. This paper utilizes the DC-Ph model to explore the modeling of patient waiting times in an Accident and Emergency Department of a UK hospital. The resulting DC-Ph model takes on the form of the Coxian phase-type distribution conditioned on the outcome of a logistic regression model.
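A Coxian phase-type distribution models a duration as the time to absorption in a chain of transient phases: phase i is held for an exponential time, after which the process either absorbs or moves on to phase i + 1. A minimal sampler makes the structure concrete; the parameter values below are illustrative and unrelated to the hospital data in the paper:

```python
import numpy as np

def sample_coxian(lam, mu, size=1000, seed=0):
    """Sample survival times from a Coxian phase-type distribution.

    lam[i]: rate of moving from phase i to phase i+1 (last entry unused);
    mu[i]:  rate of absorption (service completion) from phase i.
    Phase i is held for an Exp(lam[i] + mu[i]) time, then the process
    absorbs with probability mu[i] / (lam[i] + mu[i]); the final phase
    always absorbs.
    """
    lam = np.asarray(lam, float)
    mu = np.asarray(mu, float)
    rng = np.random.default_rng(seed)
    m = len(mu)
    out = np.empty(size)
    for s in range(size):
        t, i = 0.0, 0
        while True:
            rate = mu[i] + (lam[i] if i < m - 1 else 0.0)
            t += rng.exponential(1.0 / rate)
            if i == m - 1 or rng.random() < mu[i] / rate:
                break
            i += 1
        out[s] = t
    return out
```

Because early absorption is allowed from every phase, mixtures of short and long stays arise naturally, which is what makes the Coxian form well suited to skewed waiting-time data.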
Abstract:
The fundamental controls on the initiation and development of gravel-dominated deposits (beaches and barriers) on paraglacial coasts are particle size and shape, sediment supply, storm wave activity (primarily runup), relative sea-level (RSL) change, and terrestrial basement structure (primarily as it affects accommodation space). This paper examines the stochastic basis for barrier organisation as shown by variation in gravel barrier architecture. We recognise punctuated self-organisation of barrier development that is disrupted by short phases of barrier instability. The latter results from positive feedback causing barrier breakdown when sediment supply is exhausted. We examine published typologies for gravel barriers and advocate a consolidated perspective using rate of RSL change and sediment supply. We also consider the temporal variation in controls on barrier development. These are examined in terms of a simple behavioural model (BARCH) for prograding gravel barrier architecture and its sensitivity to such controls. The nature of macroscale (10²–10³ years) gravel barrier development, including inherited characteristics that influence barrier genesis, as well as forcing from changing RSL, sediment supply, headland control and barrier inertia, is examined in the context of long-surviving barriers along the southern England coastline.
Abstract:
Architectures and methods for the rapid design of silicon cores for implementing discrete wavelet transforms over a wide range of specifications are described. These architectures are efficient, modular, scalable, and cover orthonormal and biorthogonal wavelet transform families. They offer efficient hardware utilization by exploiting a number of core wavelet filter properties and allow the creation of silicon designs that are highly parameterized, including in terms of wavelet type and wordlengths. Control circuitry is embedded within these systems allowing them to be cascaded for any desired level of decomposition without any interface glue logic. The time to produce chip designs for a specific wavelet application is typically less than a day and these are comparable in area and performance to handcrafted designs. They are also portable across a wide range of silicon foundries and suitable for field programmable gate array and programmable logic device implementation. The approach described has also been extended to wavelet packet transforms.
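The filter-bank operation such cores implement in silicon can be stated compactly in software. As a reference point only, one analysis level of the simplest orthonormal case (the Haar wavelet) splits an even-length signal into approximation and detail coefficients:

```python
import numpy as np

def haar_dwt_level(x):
    """One analysis level of the orthonormal Haar discrete wavelet transform.

    Pairs of samples are combined by the two-tap analysis filters
    (1/sqrt(2)) * [1, 1] and (1/sqrt(2)) * [1, -1] with downsampling by 2,
    giving approximation (low-pass) and detail (high-pass) coefficients.
    Assumes an even-length input.
    """
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail
```

Feeding the approximation band back into the same operation yields the next decomposition level, which is exactly the cascading the abstract says is handled by embedded control circuitry without glue logic.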
Abstract:
We introduce and characterise time operators for unilateral shifts and exact endomorphisms. The associated shift representation of evolution is related to the spectral representation by a generalized Fourier transform. We illustrate the results for a simple exact system, namely the Rényi map.
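The Rényi map in its simplest form is the doubling map x -> 2x mod 1 on [0, 1). Its exactness can be illustrated numerically through the Frobenius-Perron operator, (P f)(x) = 0.5 * (f(x/2) + f((x+1)/2)), which evolves densities toward the uniform invariant density. This grid-based sketch (discretisation and names are our own) applies one such step:

```python
import numpy as np

def fp_step(f):
    """One Frobenius-Perron step for the Renyi (doubling) map x -> 2x mod 1.

    On an n-cell uniform grid (n even), the two preimages of cell j are
    cells j // 2 and j // 2 + n // 2, so the density update is
    (P f)[j] = 0.5 * (f[j // 2] + f[j // 2 + n // 2]).
    """
    n = len(f)
    j = np.arange(n)
    return 0.5 * (f[j // 2] + f[j // 2 + n // 2])
```

Iterating `fp_step` flattens any initial density toward the constant function 1, the discrete signature of the map's exactness; for a density that is uniform on [0, 1/2), a single step already produces the uniform density.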