907 results for fixed point method
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This results in a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layers, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
Abstract:
We study an optoelectronic time-delay oscillator that displays high-speed chaotic behavior with a flat, broad power spectrum. The chaotic state coexists with a linearly stable fixed point, which, when subjected to a finite-amplitude perturbation, loses stability initially via a periodic train of ultrafast pulses. We derive approximate mappings that do an excellent job of capturing the observed instability. The oscillator provides a simple device for fundamental studies of time-delay dynamical systems and can be used as a building block for ultrawide-band sensor networks.
Abstract:
This work demonstrates the importance of an adequate method for sub-sampling model results when comparing them with in situ measurements. A test of model skill was performed by employing a point-to-point method to compare a multi-decadal hindcast against a sparse, unevenly distributed historic in situ dataset. The point-to-point method masked out all hindcast cells that did not have a corresponding in situ measurement in order to match each in situ measurement against its most similar cell from the model. The application of the point-to-point method showed that the model was successful at reproducing the inter-annual variability of the in situ datasets. Furthermore, this success was not immediately apparent when the measurements were instead aggregated into regional averages. Time series, data density and target diagrams were employed to illustrate the impact of switching from the regional average method to the point-to-point method. The comparison based on regional averages gave significantly different and sometimes contradictory results that could lead to erroneous conclusions about the model performance. Furthermore, the point-to-point technique is a more appropriate method for exploiting sparse, unevenly distributed in situ data while compensating for the variability of their sampling. We therefore recommend that researchers take into account the limitations of the in situ datasets and process the model to resemble the data as much as possible.
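As an illustration of the point-to-point idea, the sketch below pairs each sparse in situ record with its nearest hindcast cell and masks everything else; the grid layout, the nearest-cell matching rule and all names are assumptions for illustration, not the study's actual processing chain.

```python
# Minimal sketch of the point-to-point matching idea described above.
# Assumes a regularly gridded hindcast and a sparse list of in situ records;
# all names and the nearest-cell rule are illustrative, not the authors' code.
import numpy as np

def nearest_index(grid, value):
    """Index of the grid coordinate closest to `value`."""
    return int(np.abs(np.asarray(grid) - value).argmin())

def point_to_point(model, times, lats, lons, observations):
    """Pair each in situ record with its most similar (nearest) model cell.

    model           : array of shape (time, lat, lon)
    times/lats/lons : 1-D coordinate arrays of the model grid
    observations    : iterable of (time, lat, lon, value) tuples
    Returns matched (model_values, obs_values) arrays; all other model
    cells are effectively masked out because they are never sampled.
    """
    matched_model, matched_obs = [], []
    for t, la, lo, value in observations:
        it = nearest_index(times, t)
        ia = nearest_index(lats, la)
        io = nearest_index(lons, lo)
        matched_model.append(model[it, ia, io])
        matched_obs.append(value)
    return np.array(matched_model), np.array(matched_obs)

# Skill metrics are then computed on the matched pairs only, e.g.:
# m, o = point_to_point(model, times, lats, lons, obs)
# bias, rmse = (m - o).mean(), np.sqrt(((m - o) ** 2).mean())
```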
Abstract:
Recently, Ziman et al. [Phys. Rev. A 65, 042105 (2002)] introduced the concept of a universal quantum homogenizer, a quantum machine that takes as input a given (system) qubit initially in an arbitrary state ρ and a set of N reservoir qubits initially prepared in the state ξ. The homogenizer realizes, in the limiting sense, a transformation such that at the output each qubit is in an arbitrarily small neighborhood of the state ξ, irrespective of the initial states of the system and the reservoir qubits. In this paper we generalize the concept of quantum homogenization to qudits, that is, to d-dimensional quantum systems. We prove that the partial-swap operation induces a contractive map whose fixed point is the original state of the reservoir. We propose an optical realization of quantum homogenization for Gaussian states. We prove that an incoming state of a photon field is homogenized in an array of beam splitters. Using Simon's criterion, we study entanglement between outgoing beams from the beam splitters. We derive an inseparability condition for a pair of output beams as a function of the degree of squeezing in the input beams.
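To make the contraction concrete, here is a minimal numerical sketch for the qubit case (d = 2): repeatedly colliding the system with fresh reservoir qubits through the partial swap U = cos η · I + i sin η · SWAP drives the system state toward ξ, the fixed point. The parameter values, the chosen states and the collision loop are illustrative assumptions, not code from the paper.

```python
# Hedged numerical sketch of the qubit (d = 2) partial-swap homogenizer:
# each collision replaces rho by Tr_reservoir[U (rho ⊗ xi) U†], and iterating
# against fresh reservoir qubits drives rho toward xi.
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def partial_swap(eta):
    return np.cos(eta) * np.eye(4) + 1j * np.sin(eta) * SWAP

def homogenize_step(rho, xi, U):
    """One collision with a fresh reservoir qubit; return the new system state."""
    joint = U @ np.kron(rho, xi) @ U.conj().T
    return np.einsum('ijkj->ik', joint.reshape(2, 2, 2, 2))  # trace out reservoir

rho = np.array([[1, 0], [0, 0]], dtype=complex)        # system starts in |0><0| (illustrative)
xi = np.array([[0.25, 0], [0, 0.75]], dtype=complex)   # reservoir state (illustrative)
U = partial_swap(eta=0.3)

for _ in range(30):
    rho = homogenize_step(rho, xi, U)
print(np.linalg.norm(rho - xi))   # distance shrinks: xi is the fixed point
```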
Abstract:
The aim of this research is to compare the adsorption capacity of different types of activated carbons produced by steam activation in small laboratory-scale and large industrial-scale processes. The equilibrium behaviour of the activated carbons was investigated by performing batch adsorption experiments using the bottle-point method. Basic dyes (methylene blue (MB), basic red (BR) and basic yellow (BY)) were used as adsorbates and the maximum adsorptive capacity was determined. Adsorption isotherm models (Langmuir, Freundlich and Redlich-Peterson) were used to simulate the equilibrium data at different experimental parameters (pH and adsorbent particle size). It was found that PAC2 (activated carbon produced from New Zealand coal using steam activation) has the highest adsorptive capacity towards MB dye (588 mg/g), followed by F400 (476 mg/g) and PAC1 (380 mg/g). BR and BY showed higher adsorptive affinity towards PAC2 and F400 than MB. Under comparable conditions, the adsorption capacity of the basic dyes MB, BR and BY onto PAC1, PAC2 and F400 increased in the order MB < BR < BY. The Redlich-Peterson model was found to describe the experimental data over the entire range of concentration under investigation. All the systems show favourable adsorption of the basic dyes, with 0 < R_L < 1.
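A minimal sketch of how such equilibrium data can be fitted to the three isotherm models, and how the Langmuir separation factor R_L is obtained, is given below; the Ce/qe values, initial guesses and variable names are placeholders, not the measured MB/BR/BY data.

```python
# Hedged sketch of fitting the Langmuir, Freundlich and Redlich-Peterson
# isotherms to batch equilibrium data; all numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10, 25, 50, 100, 200])       # equilibrium conc. (mg/L), illustrative
qe = np.array([120, 210, 350, 450, 520, 560])    # uptake (mg/g), illustrative

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

def redlich_peterson(Ce, KR, aR, g):
    return KR * Ce / (1 + aR * Ce ** g)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(600, 0.05))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(100, 2))
(KR, aR, g), _ = curve_fit(redlich_peterson, Ce, qe, p0=(30, 0.05, 1.0))

# Langmuir separation factor: 0 < R_L < 1 indicates favourable adsorption.
C0 = 200.0                      # initial dye concentration (mg/L), illustrative
R_L = 1.0 / (1.0 + KL * C0)
print(qm, KL, R_L)
```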
Abstract:
We present an efficient and accurate method to study electron detachment from negative ions by a few-cycle linearly polarized laser pulse. The adiabatic saddle-point method of Gribakin and Kuchiev [Phys. Rev. A 55, 3760 (1997)] is adapted to calculate the transition amplitude for a short laser pulse. Its application to a pulse with N optical cycles produces 2(N + 1) saddle points in complex time, which form a characteristic "smile." Numerical calculations are performed for H^- in a 5-cycle pulse with frequency 0.0043 a.u. and intensities of 10^10, 5×10^10, and 10^11 W/cm^2, and for various carrier-envelope phases. We determine the spectrum of the photoelectrons as a function of both energy and emission angle, as well as the angle-integrated energy spectra and total detachment probabilities. Our calculations show that the dominant contribution to the transition amplitude is given by 5-6 central saddle points, which correspond to the strongest part of the pulse. We examine the dependence of the photoelectron angular distributions on the carrier-envelope phase and show that measuring such distributions can provide a way of determining this phase.
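For orientation, the sketch below searches for the complex-time saddle points in the adiabatic picture, where they satisfy (p + A(t_s))^2/2 + E_b = 0 in atomic units; the sin^2 pulse envelope, the field parameters and the Newton-type root search are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of locating the complex-time saddle points that dominate
# few-cycle detachment amplitudes: (p + A(t_s))^2 / 2 + E_b = 0 (a.u.).
import numpy as np

Eb = 0.0277          # H^- binding energy (a.u.)
omega = 0.0043       # laser frequency (a.u.)
E0 = 5.338e-4        # peak field (a.u.), roughly 10^10 W/cm^2
N = 5                # optical cycles
phi = 0.0            # carrier-envelope phase
p = 0.05             # photoelectron momentum along the polarization (a.u.), illustrative

def A(t):
    """Vector potential of an N-cycle sin^2-envelope pulse (assumed form)."""
    return -(E0 / omega) * np.sin(omega * t / (2 * N)) ** 2 * np.sin(omega * t + phi)

def f(t):
    return 0.5 * (p + A(t)) ** 2 + Eb

def newton(t0, steps=60, h=1e-6):
    t = complex(t0)
    for _ in range(steps):
        df = (f(t + h) - f(t - h)) / (2 * h)   # central-difference derivative
        t -= f(t) / df
    return t

T = 2 * np.pi * N / omega                       # pulse duration
seeds = [x + 200j for x in np.linspace(0.02 * T, 0.98 * T, 60)]
roots = []
for s in seeds:
    r = newton(s)
    if 0 < r.real < T and r.imag > 0 and abs(f(r)) < 1e-10:
        if all(abs(r - q) > 1.0 for q in roots):
            roots.append(r)
print(len(roots))    # the paper reports 2(N + 1) saddle points for an N-cycle pulse
```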
Abstract:
A new front-end image-processing chip is presented for real-time small-object detection. It has been implemented in a 0.6 µm, 3.3 V CMOS technology and operates on 10-bit input data at 54 megasamples per second. It occupies an area of 12.9 mm × 13.6 mm (including pads), dissipates 1.5 W, has 92 I/O pins and is to be housed in a 160-pin ceramic quad flat-pack. It performs both one- and two-dimensional FIR filtering and a multilayer perceptron (MLP) neural network function using a reconfigurable array of 21 multiplication-accumulation cells, which corresponds to a window size of 7×3. The chip can cope with images of 2047 pixels per line and can be cascaded to cope with larger window sizes. The chip performs two billion fixed-point multiplications and additions per second.
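A bit-level sketch of the kind of 7×3 windowed multiply-accumulate the 21 MAC cells perform is shown below; the Q-format, rounding rule and coefficient values are assumptions for illustration, not the chip's actual specification.

```python
# Hedged sketch of a 7 (horizontal) x 3 (vertical) fixed-point FIR window:
# 10-bit input samples, integer coefficients in an assumed Q8 format,
# one wide accumulator per output pixel, rounding on the final shift.
import numpy as np

COEF_FRAC_BITS = 8                       # Q-format of coefficients (assumed)

def fir_7x3_fixed(image, coeffs):
    """2-D FIR over a 7x3 window; image holds 10-bit samples (0..1023)."""
    rows, cols = image.shape
    out = np.zeros((rows - 2, cols - 6), dtype=np.int64)
    for r in range(rows - 2):
        for c in range(cols - 6):
            acc = 0                                   # wide accumulator
            for i in range(3):
                for j in range(7):
                    acc += int(coeffs[i, j]) * int(image[r + i, c + j])
            out[r, c] = (acc + (1 << (COEF_FRAC_BITS - 1))) >> COEF_FRAC_BITS
    return out

# Example: 3x7 box filter (all coefficients ~ 1/21 in Q8) on a random frame.
img = np.random.randint(0, 1024, size=(16, 64))
coeffs = np.full((3, 7), round(256 / 21), dtype=np.int64)
print(fir_7x3_fixed(img, coeffs).shape)
```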
Abstract:
The recent adiabatic saddle-point method of Shearer et al. [Phys. Rev. A 84, 033409 (2011)] is applied to study strong-field photodetachment of H^- by few-cycle linearly polarized laser pulses with frequencies near the two-photon detachment threshold. The behavior of the saddle points in the complex-time plane is explored for a range of laser parameters. A detailed analysis of the influence of laser intensities (2×10^11 to 6.5×10^11 W/cm^2), midinfrared laser wavelengths (1800–2700 nm), and various values of the carrier-envelope phase (CEP) on (i) three-dimensional detachment probability distributions, (ii) photoelectron angular distributions (PADs), (iii) energy spectra, and (iv) momentum distributions is presented. Examination of the probability distributions and PADs reveals main lobes and jetlike structures. Bifurcation phenomena in the probability distributions and PADs are also observed as the wavelength and intensity increase. Our simulations show that the (i) probability distributions, (ii) PADs, and (iii) energy spectra are extremely sensitive to the CEP, and thus measuring such distributions provides a useful tool for determining this phase. The symmetrical properties of the electron momentum distributions are also found to be strongly correlated with the CEP, which provides an additional robust method for measuring the CEP of a laser pulse. Our calculations further show that, for a three-cycle pulse, inclusion of all eight saddle points is required in the evaluation of the transition amplitude to yield an accurate description of the photodetachment process. This is in contrast to recent results for a five-cycle pulse.
Abstract:
An adaptation of bungee jumping, 'bungee running', involves participants attempting to run as far as they can whilst connected to an elastic rope which is anchored to a fixed point. Usually considered a safe recreational activity, we report a potentially life-threatening head injury following a bungee running accident.
Abstract:
We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performance of some providers may degrade while the performance of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying the names of these providers. Risk profiles offer a means of analysing the behaviour of broker agents which allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider – thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel-daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
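The fixed-point reading of a Nash equilibrium can be illustrated with a small best-response loop: users keep switching to the cheapest provider given everyone else's choice until nobody can improve. The load-dependent cost model below is a deliberate simplification of the angel-daemon setting (only the number of degraded providers is fixed, as in a risk profile); it is not the game defined in the paper.

```python
# Minimal best-response sketch: iterate user choices until a fixed point
# (no user can improve by switching provider) is reached.
import random

def best_response_allocation(n_users, base_cost, degraded, n_unreliable, seed=0):
    """base_cost[p]: per-user cost of provider p; `degraded` multiplies the
    cost of the providers drawn as unreliable (only their number is fixed
    in advance, mimicking a risk profile)."""
    rng = random.Random(seed)
    unreliable = set(rng.sample(range(len(base_cost)), n_unreliable))
    cost_per_user = [base_cost[p] * (degraded if p in unreliable else 1.0)
                     for p in range(len(base_cost))]
    choice = [rng.randrange(len(base_cost)) for _ in range(n_users)]

    def user_cost(p, load):
        return cost_per_user[p] * load           # congestion grows with load

    changed = True
    while changed:                               # iterate until a fixed point
        changed = False
        for u in range(n_users):
            load = [choice.count(p) for p in range(len(base_cost))]
            best = min(range(len(base_cost)),
                       key=lambda p: user_cost(p, load[p] + (0 if p == choice[u] else 1)))
            if best != choice[u]:
                choice[u] = best
                changed = True
    return choice, unreliable

alloc, bad = best_response_allocation(n_users=12, base_cost=[1.0, 1.2, 0.8],
                                      degraded=3.0, n_unreliable=1)
print(alloc, bad)
```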
Abstract:
Active network scanning injects traffic into a network and observes responses to draw conclusions about the network. Passive network analysis works by looking at network metadata or by analyzing traffic as it traverses a fixed point on the network. It may be infeasible or inappropriate to scan critical infrastructure networks. However, techniques exist to uniquely map assets without resorting to active scanning. In many cases, it is possible to characterize and identify network nodes by passively analyzing traffic flows. These techniques are considered in particular with respect to their application to power-industry critical infrastructure.
Abstract:
Polyphase IIR structures have recently proven very attractive for high-performance filters that can be designed with very few coefficients. This, combined with their low sensitivity to coefficient quantization in comparison to standard FIR and IIR structures, makes them well suited to very fast filtering when implemented in fixed-point arithmetic. However, although the mathematical description is very simple, there are a number of ways to implement such filters. In this paper, we take four of these implementation structures, analyze the rounding noise originating from the limited arithmetic wordlength of the mathematical operators, and check the internal data growth within the structure. These analyses are needed to ensure that the performance of the implementation matches the performance of the theoretical design. The theoretical approach we present has been confirmed by fixed-point simulation in Simulink and verified against an equivalent bit-true implementation in VHDL.
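As a rough illustration of the rounding-noise question, the sketch below runs one common two-path polyphase allpass (halfband) structure in double precision and in simulated fixed point and compares the outputs; the coefficients, word length and rounding point are illustrative choices and do not correspond to any of the four structures analysed in the paper.

```python
# Hedged sketch: two-path polyphase allpass halfband filter, run with and
# without simulated fixed-point rounding, to expose the rounding noise.
import numpy as np

def allpass2(x, a, quant=None):
    """First-order allpass section in z^2: y[n] = a*(x[n] - y[n-2]) + x[n-2]."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        x2 = x[n - 2] if n >= 2 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        v = a * (x[n] - y2) + x2
        y[n] = quant(v) if quant else v          # round after the multiply-add
    return y

def halfband(x, a0, a1, quant=None):
    p0 = allpass2(x, a0, quant)
    p1 = allpass2(x, a1, quant)
    delayed = np.concatenate(([0.0], p1[:-1]))   # z^-1 on the second path
    return 0.5 * (p0 + delayed)

def q(bits):
    """Round to `bits` fractional bits (simulated fixed-point arithmetic)."""
    scale = 2.0 ** bits
    return lambda v: np.round(v * scale) / scale

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4096)
a0, a1 = 0.1413, 0.5899                          # illustrative allpass coefficients

ref = halfband(x, a0, a1)                        # double-precision reference
fix = halfband(x, a0, a1, quant=q(12))           # 12 fractional bits
noise = fix - ref
print(10 * np.log10(np.mean(noise ** 2)))        # rounding-noise power in dB
```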
Abstract:
Project work for obtaining the Master's degree in Electronics and Telecommunications Engineering
Abstract:
The control of a crane carrying its payload by an elastic string corresponds to a task in which precise, indirect control of a subsystem dynamically coupled to a directly controllable subsystem is needed. This task is interesting because the coupled degree of freedom has little damping and therefore tends to keep swinging. Traditional approaches apply input-shaping techniques to assist the human operator responsible for the manipulation task. In the present paper, a novel adaptive approach based on fixed-point-transformation iterations with a local basin of attraction is proposed to tackle simultaneously the problems originating from the imprecise dynamic model available for the controlled system and the swinging problem. The most important phenomenological properties of this approach are also discussed. The control considers the fourth time-derivative of the payload trajectory. The operation of the proposed control is illustrated via simulation results.
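The core idea of the fixed-point-transformation iteration can be shown on a scalar toy problem: the desired response is made the fixed point of a contractive map, so iterating a deformed control input converges to it even though only an imprecise model is available. The plant, the deformation and the gain below are illustrative assumptions, not the crane dynamics or the specific transformation used in the paper.

```python
# Simplified scalar sketch of a fixed-point-transformation iteration:
# the desired response is the fixed point of a contractive map, so the
# deformed input converges to the correct value despite model error.

def true_plant(u):
    """Unknown "real" system response to the control input u (illustrative)."""
    return 1.3 * u + 0.2 * u ** 3        # differs from the approximate model

def approximate_model_inverse(d):
    """Rough model used only for the initial guess: assumes response ~ u."""
    return d

def deformed_iteration(desired, lam=0.4, steps=25):
    """u_{k+1} = u_k + lam * (desired - f(u_k)); contractive for this plant
    and gain near the solution, which gives the local basin of attraction."""
    u = approximate_model_inverse(desired)
    for _ in range(steps):
        u = u + lam * (desired - true_plant(u))
    return u

d = 2.0
u = deformed_iteration(d)
print(u, true_plant(u))                  # response converges to the desired value
```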
Abstract:
We prove a one-to-one correspondence between (i) C^{1+} conjugacy classes of C^{1+H} Cantor exchange systems that are C^{1+H} fixed points of renormalization and (ii) C^{1+} conjugacy classes of C^{1+H} diffeomorphisms f with a codimension-1 hyperbolic attractor Λ that admit an invariant measure absolutely continuous with respect to the Hausdorff measure on Λ. However, we prove that there is no C^{1+α} Cantor exchange system, with bounded geometry, that is a C^{1+α} fixed point of renormalization with regularity α greater than the Hausdorff dimension of its invariant Cantor set.