48 results for fixed point method

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

90.00%

Publisher:

Abstract:

Power has become a key constraint in nanoscale integrated circuit design due to the increasing demands of mobile computing and a low-carbon economy. As an emerging technique, inexact circuit design offers a promising approach to significantly reduce both dynamic and static power dissipation in error-tolerant applications. Although fixed-point arithmetic circuits have been studied in terms of inexact computing, floating-point arithmetic circuits have not been fully considered, even though they require more power. In this paper, the first inexact floating-point adder is designed and applied to high dynamic range (HDR) image processing. Inexact floating-point adders are proposed by approximately designing the exponent subtractor and mantissa adder. Related logic, including the normalization and rounding modules, is also considered in terms of inexact computing. Two HDR images are processed using the proposed inexact floating-point adders to show the validity of the inexact design, and HDR-VDP is used as a metric to measure the subjective results of the image addition. Significant improvements have been achieved in terms of area, delay and power consumption. Comparison results show that the proposed inexact floating-point adders improve power consumption and the power-delay product by 29.98% and 39.60%, respectively.
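The abstract does not give the adders' internal structure, so the sketch below is not the paper's design: it illustrates the general idea of inexact addition with a simple "lower-part-OR" approximate adder, a common scheme in which the low k bits are combined without a carry chain.

```python
def approx_add(a: int, b: int, k: int = 4) -> int:
    """Approximate integer adder (illustrative, not the paper's circuit):
    exact addition on the upper bits, carry-free bitwise OR on the lower
    k bits, trading a bounded error for a much shorter carry chain."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)   # cheap low part: no carry propagation
    high = (a >> k) + (b >> k)      # exact high part
    return (high << k) | low
```

Since x + y = (x | y) + (x & y), the error of this scheme is exactly (a & b & mask): always non-negative and below 2^k. That bounded, one-sided inexactness is the kind of error that tolerant applications such as HDR image addition can absorb.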

Relevance:

90.00%

Publisher:

Abstract:

Two different mesoporous films of TiO2 were coated onto a QCM disc and fired at 450 °C for 30 min. The first film was derived from a sol-gel paste that was popular in the early days of dye-sensitised solar cell (DSSC) research, a TiO2(sg) film. The other was a commercial colloidal paste used to make examples of the current DSSC cell, a TiO2(ds) film. A QCM was used to determine the mass of the TiO2 film deposited on each disc and the increase in the mass of the film when immersed in water/glycerol solutions spanning the range 0-70 wt%. The results of this work reveal that with both TiO2 mesoporous films the solution fills the film's pores and acts as a rigid mass, thereby allowing the porosity of each film to be calculated: 59.1% and 71.6% for the TiO2(sg) and TiO2(ds) films, respectively. These results, coupled with surface-area data, allowed the pore radii of the two films to be calculated as 9.6 and 17.8 nm, respectively. The method is then simplified further, to just a few frequency measurements in water and in air, and yields the same porosity values. The value of this latter 'one-point' method for making porosity measurements is discussed briefly.
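The porosity calculation implied by the abstract can be sketched in a few lines: the QCM mass gain on soaking gives the pore volume (the liquid fills the pores and rides as a rigid mass), while the dry film mass and the skeleton density give the solid volume. The skeleton density used below (~3.9 g/cm³, typical of anatase TiO2) is an assumption for illustration, not a value taken from the abstract.

```python
def film_porosity(m_dry, m_soaked, rho_solid=3.9, rho_liquid=1.0):
    """Porosity = pore volume / total film volume.
    m_dry, m_soaked: film mass before and after soaking (any consistent
    unit; units cancel in the ratio). rho_solid is an ASSUMED skeleton
    density (anatase ~3.9 g/cm^3); rho_liquid defaults to water."""
    v_pore = (m_soaked - m_dry) / rho_liquid   # liquid fills the pores
    v_solid = m_dry / rho_solid
    return v_pore / (v_pore + v_solid)
```

For example, a film of 10 mass units that gains 3.7 units of water on soaking comes out at a porosity of about 0.59, the order of the values reported for the sol-gel film.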

Relevance:

80.00%

Publisher:

Abstract:

Recently Ziman et al. [Phys. Rev. A 65, 042105 (2002)] introduced the concept of a universal quantum homogenizer: a quantum machine that takes as input a given (system) qubit initially in an arbitrary state rho and a set of N reservoir qubits initially prepared in the state xi. The homogenizer realizes, in the limit, the transformation such that at the output each qubit is in an arbitrarily small neighborhood of the state xi, irrespective of the initial states of the system and the reservoir qubits. In this paper we generalize the concept of quantum homogenization to qudits, that is, d-dimensional quantum systems. We prove that the partial-swap operation induces a contractive map whose fixed point is the original state of the reservoir. We propose an optical realization of quantum homogenization for Gaussian states and prove that an incoming state of a photon field is homogenized in an array of beam splitters. Using Simon's criterion, we study entanglement between the outgoing beams from the beam splitters and derive an inseparability condition for a pair of output beams as a function of the degree of squeezing in the input beams.
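The contractive fixed-point behaviour is easy to reproduce numerically for qubits. The sketch below (a minimal simulation, with an arbitrarily chosen interaction angle and states) applies the partial-swap unitary U = cos(η) I + i sin(η) SWAP between the system and a fresh reservoir qubit, traces the reservoir out, and repeats: the system state converges to the reservoir state xi.

```python
import numpy as np

def partial_swap(eta):
    """Two-qubit partial swap: U = cos(eta) * I + i sin(eta) * SWAP."""
    swap = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)
    return np.cos(eta) * np.eye(4) + 1j * np.sin(eta) * swap

def collide(rho, xi, U):
    """One collision with a fresh reservoir qubit, then trace it out."""
    joint = U @ np.kron(rho, xi) @ U.conj().T
    return np.einsum('ikjk->ij', joint.reshape(2, 2, 2, 2))

# system starts far from the reservoir state (values are illustrative)
rho = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
xi = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)  # reservoir state
U = partial_swap(0.5)
for _ in range(200):
    rho = collide(rho, xi, U)
```

After enough collisions rho is indistinguishable from xi, illustrating that xi is the fixed point of the contractive collision map.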

Relevance:

80.00%

Publisher:

Abstract:

The aim of this research is to compare the adsorption capacity of different types of activated carbons produced by steam activation in small laboratory-scale and large industrial-scale processes. The equilibrium behaviour of the activated carbons was investigated by performing batch adsorption experiments using the bottle-point method. Basic dyes (methylene blue (MB), basic red (BR) and basic yellow (BY)) were used as adsorbates and the maximum adsorptive capacity was determined. Adsorption isotherm models (Langmuir, Freundlich and Redlich-Peterson) were used to simulate the equilibrium data at different experimental parameters (pH and adsorbent particle size). It was found that PAC2 (an activated carbon produced from New Zealand coal using steam activation) has the highest adsorptive capacity towards MB dye (588 mg/g), followed by F400 (476 mg/g) and PAC1 (380 mg/g). BR and BY showed higher adsorptive affinity towards PAC2 and F400 than MB. Under comparable conditions, the adsorption capacity of the basic dyes onto PAC1, PAC2 and F400 increased in the order MB < BR < BY. The Redlich-Peterson model was found to describe the experimental data over the entire range of concentration under investigation. All the systems show favourable adsorption of the basic dyes, with 0 < RL < 1.
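The three isotherms are one-line formulas, and the Redlich-Peterson form explains why it can track the data over the whole concentration range: with exponent g = 1 it reduces exactly to Langmuir, while for large a·C^g it approaches a Freundlich-like power law. A minimal sketch (the numeric parameters below are illustrative, not the paper's fitted values):

```python
def langmuir(C, qm, b):
    """Langmuir isotherm: q = qm * b * C / (1 + b * C)."""
    return qm * b * C / (1 + b * C)

def freundlich(C, Kf, n):
    """Freundlich isotherm: q = Kf * C**(1/n)."""
    return Kf * C ** (1.0 / n)

def redlich_peterson(C, K, a, g):
    """Redlich-Peterson isotherm: q = K * C / (1 + a * C**g).
    With g = 1 (and K = qm*b, a = b) this is exactly Langmuir."""
    return K * C / (1 + a * C ** g)
```

The reduction to Langmuir at g = 1 is exact, which the test below checks for illustrative parameters qm = 588 mg/g, b = 0.05.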

Relevance:

80.00%

Publisher:

Abstract:

We present an efficient and accurate method to study electron detachment from negative ions by a few-cycle linearly polarized laser pulse. The adiabatic saddle-point method of Gribakin and Kuchiev [Phys. Rev. A 55, 3760 (1997)] is adapted to calculate the transition amplitude for a short laser pulse. Its application to a pulse with N optical cycles produces 2(N + 1) saddle points in complex time, which form a characteristic "smile." Numerical calculations are performed for H- in a 5-cycle pulse with frequency 0.0043 a.u. and intensities of 10^10, 5 x 10^10, and 10^11 W/cm^2, and for various carrier-envelope phases. We determine the spectrum of the photoelectrons as a function of both energy and emission angle, as well as the angle-integrated energy spectra and total detachment probabilities. Our calculations show that the dominant contribution to the transition amplitude is given by the 5-6 central saddle points, which correspond to the strongest part of the pulse. We examine the dependence of the photoelectron angular distributions on the carrier-envelope phase and show that measuring such distributions can provide a way of determining this phase.

Relevance:

80.00%

Publisher:

Abstract:

A new front-end image-processing chip is presented for real-time small-object detection. It has been implemented in a 0.6 µm, 3.3 V CMOS technology and operates on 10-bit input data at 54 megasamples per second. It occupies an area of 12.9 mm × 13.6 mm (including pads), dissipates 1.5 W, has 92 I/O pins and is housed in a 160-pin ceramic quad flat-pack. It performs both one- and two-dimensional FIR filtering and a multilayer perceptron (MLP) neural network function using a reconfigurable array of 21 multiplication-accumulation cells, which corresponds to a window size of 7 × 3. The chip can cope with images of 2047 pixels per line and can be cascaded to cope with larger window sizes. It performs two billion fixed-point multiplications and additions per second.
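The computation performed by the MAC array can be sketched as a direct-form 2-D FIR filter: one multiply-accumulate per coefficient per output sample, which for a 3-line x 7-pixel window is exactly 21 MACs. This is a behavioural model only, not the chip's architecture.

```python
import numpy as np

def fir2d(image, coeffs):
    """Direct-form 2-D FIR filter over the valid region (no padding).
    Each (i, j) term is one multiply-accumulate, mirroring one cell of
    the chip's reconfigurable 21-cell MAC array for a 3 x 7 window."""
    kh, kw = coeffs.shape
    H, W = image.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += coeffs[i, j] * image[i:i + oh, j:j + ow]
    return out
```

At 54 Msample/s, 21 cells each performing one multiply and one add per sample give roughly 1.1 billion multiplications plus as many additions per second, consistent with the quoted two-billion-operations figure.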

Relevance:

80.00%

Publisher:

Abstract:

The recent adiabatic saddle-point method of Shearer et al. [Phys. Rev. A 84, 033409 (2011)] is applied to study strong-field photodetachment of H- by few-cycle linearly polarized laser pulses of frequencies near the two-photon detachment threshold. The behavior of the saddle points in the complex-time plane is explored for a range of laser parameters. A detailed analysis of the influence of laser intensities ((2 x 10^11)-(6.5 x 10^11) W/cm^2), midinfrared laser wavelengths (1800-2700 nm), and various values of the carrier-envelope phase (CEP) on (i) three-dimensional detachment probability distributions, (ii) photoelectron angular distributions (PADs), (iii) energy spectra, and (iv) momentum distributions is presented. Examination of the probability distributions and PADs reveals main lobes and jetlike structures, and bifurcation phenomena are observed in both as the wavelength and intensity increase. Our simulations show that the probability distributions, PADs, and energy spectra are extremely sensitive to the CEP, so measuring such distributions provides a useful tool for determining this phase. The symmetry properties of the electron momentum distributions are also found to be strongly correlated with the CEP, providing an additional robust method for measuring the CEP of a laser pulse. Our calculations further show that for a three-cycle pulse all eight saddle points must be included in the evaluation of the transition amplitude to yield an accurate description of the photodetachment process, in contrast to recent results for a five-cycle pulse.

Relevance:

80.00%

Publisher:

Abstract:

An adaptation of bungee jumping, 'bungee running', involves participants attempting to run as far as they can whilst connected to an elastic rope anchored to a fixed point. Although it is usually considered a safe recreational activity, we report a potentially life-threatening head injury sustained in a bungee running accident.

Relevance:

80.00%

Publisher:

Abstract:

We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performance of some providers may degrade while the performance of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying which providers they are. Risk profiles offer a means of analysing the behaviour of broker agents that allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider; thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel-daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
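The idea of a Nash equilibrium as a fixed point can be made concrete with best-response dynamics in a toy congestion game (this is an illustration only, not the paper's angel-daemon model): each user's cost is the load on its chosen provider, and the dynamics stop exactly when no user can lower its cost by switching.

```python
def best_response_dynamics(choices, n_providers, rounds=100):
    """Iterate best responses in a toy congestion game until no user can
    lower its cost; that stopping state is a Nash equilibrium, i.e. a
    fixed point of the best-response map. choices[u] is user u's provider;
    a provider's cost to its users is simply its load."""
    loads = [choices.count(p) for p in range(n_providers)]
    for _ in range(rounds):
        moved = False
        for u, p in enumerate(choices):
            # load user u would see on q (it adds itself if it switches)
            best = min(range(n_providers),
                       key=lambda q: loads[q] + (0 if q == p else 1))
            if best != p:
                loads[p] -= 1
                loads[best] += 1
                choices[u] = best
                moved = True
        if not moved:        # fixed point: no profitable deviation
            break
    return choices, loads
```

Starting ten users on a single provider of two, the dynamics settle at the balanced 5/5 allocation, where any unilateral switch would raise the mover's cost from 5 to 6.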

Relevance:

80.00%

Publisher:

Abstract:

Active network scanning injects traffic into a network and observes the responses to draw conclusions about the network. Passive network analysis works by examining network metadata or by analyzing traffic as it traverses a fixed point on the network. Since it may be infeasible or inappropriate to scan critical infrastructure networks, techniques exist to uniquely map assets without resorting to active scanning. In many cases, it is possible to characterize and identify network nodes by passively analyzing traffic flows. These techniques are considered in particular with respect to their application to power-industry critical infrastructure.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. Using the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy-rate maximization of the secondary system, subject to the interference power constraints at the primary users and the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. We prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem and then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve it, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.

Relevance:

80.00%

Publisher:

Abstract:

An optimal day-ahead scheduling method (ODSM) for the integrated urban energy system (IUES) is introduced which considers the reconfigurable capability of an electric distribution network. The hourly topology of the distribution network, a natural gas network, and the energy centers, including combined heat and power (CHP) units, different energy conversion devices and demand-responsive loads (DRLs), are optimized to minimize the day-ahead operation cost of the IUES. The hourly reconfigurable capability of the electric distribution network using remotely controlled switches (RCSs) is explored and discussed. Operational constraints from the unbalanced three-phase electric distribution network, the natural gas network, and the energy centers are considered. The electric distribution network and the natural gas network interact through the conversion of energy among different energy vectors in the energy centers. An energy conversion analysis model for the energy center was developed based on the energy hub model. A hybrid optimization method based on a genetic algorithm (GA) and a nonlinear interior point method (IPM) is utilized to solve the ODSM model. Numerical studies demonstrate that the proposed ODSM provides the IUES with an effective and economical day-ahead scheduling scheme and reduces the operational cost of the IUES.

Relevance:

80.00%

Publisher:

Abstract:

We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate the multiuser interference, we study the sum-rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which reduces the complexity significantly compared to the conventional method. Simulation results demonstrate the fast convergence and effectiveness of the proposed algorithm.
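The paper's Newton-step system is specific to its dual formulation, but the underlying building block, solving a Sylvester-type matrix equation, can be sketched generically. The version below handles the continuous form A X + X B = C via the Kronecker/vec identity; this dense approach is only for illustration at small sizes (specialised solvers such as Bartels-Stewart are what make the matrix-equation route cheap in practice).

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve the Sylvester equation A X + X B = C through the identity
    (I (x) A + B^T (x) I) vec(X) = vec(C), where vec() stacks columns.
    O(n^3 m^3) dense solve: fine for a small illustration only."""
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order='F'))   # column-major vec
    return x.reshape((n, m), order='F')

# self-check against a known solution (sizes are illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shift spectra so that
B = rng.standard_normal((2, 2)) + 3 * np.eye(2)   # A and -B share no eigenvalue
X_true = rng.standard_normal((3, 2))
X = solve_sylvester(A, B, A @ X_true + X_true @ B)
```

The eigenvalue shift in the self-check guarantees a unique solution, since A X + X B = C is solvable for all C exactly when A and -B have disjoint spectra.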

Relevance:

50.00%

Publisher:

Abstract:

The increasing penetration of wind generation on the island of Ireland has been accompanied by close investigation of the low-frequency periodic pulsations contained within the active power flow from different wind farms. A primary concern is excitation of low-frequency oscillation modes already present on the system, particularly the 0.75 Hz mode arising from the interconnection of the Northern and Southern power system networks. Recently, grid code requirements on the Northern Ireland power system have been updated to stipulate that wind farms connected after 2005 must be able to control the magnitude of oscillations in the range 0.25-1.75 Hz to within 1% of the wind farm's registered output. To determine whether wind-farm low-frequency oscillations have a negative effect (exciting other modes) or possibly a positive impact (damping existing modes) on the power system, the oscillations at the point of connection must be measured and characterised. Using time-frequency methods, the research presented in this paper extracts signal features from measured low-frequency active power pulsations produced by wind farms to determine the composition of possible oscillatory modes which may have a detrimental effect on system dynamic stability. The paper proposes a combined wavelet-Prony method to extract modal components and determine damping factors. The method is exemplified using real data obtained from wind farm measurements.
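The Prony stage of such a method fits a sum of damped exponentials and reads the damping factor and frequency off the resulting poles. A minimal sketch of classical least-squares Prony analysis, applied here to a synthetic 0.75 Hz mode (synthetic data only, not the paper's wind-farm measurements, and without the wavelet pre-filtering step):

```python
import numpy as np

def prony_poles(y, p, dt):
    """Least-squares Prony analysis: fit y[n] as a sum of p damped
    exponentials and return the continuous-time poles s = log(z)/dt.
    Re(s) is the damping factor; Im(s)/(2*pi) is the frequency in Hz."""
    N = len(y)
    # linear-prediction system: y[n] = sum_k a_k * y[n-k], n = p..N-1
    A = np.column_stack([y[p - 1 - k: N - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], -coef)))   # discrete-time poles
    return np.log(z) / dt

# synthetic 0.75 Hz mode with damping factor -0.2 s^-1, sampled at 10 Hz
dt = 0.1
t = np.arange(100) * dt
y = np.exp(-0.2 * t) * np.cos(2 * np.pi * 0.75 * t)
poles = prony_poles(y, 2, dt)
```

On noiseless data a model order of p = 2 recovers the mode exactly; real measurements need a higher order, noise handling, and the band isolation that the wavelet stage provides.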

Relevance:

40.00%

Publisher:

Abstract:

A flexible, mass-conservative numerical technique for solving the advection-dispersion equation for miscible contaminant transport is presented. The method combines features of puff transport models from air pollution studies with features of the random walk particle method used in water resources studies, providing a deterministic time-marching algorithm which is independent of the grid Peclet number and scales simply from one to higher dimensions. The concentration field is discretised into a number of particles, each of which is treated as a point release that advects and disperses over the time interval. The dispersed puff is itself discretised into a spatial distribution of particles whose masses can be pre-calculated. Concentration within the simulation domain is then calculated from the mass distribution as an average over some small volume. Comparisons with analytical solutions for a one-dimensional fixed-duration concentration pulse and for two-dimensional transport in an axisymmetric flow field indicate that the algorithm performs well. For a given level of accuracy, the new method has lower computation times than the random walk particle method.
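The core puff idea can be sketched in one dimension: each point release of mass m advects with the velocity and disperses into a Gaussian of variance 2Dt, and the concentration field is the superposition of the puffs. (The paper goes further, discretising each puff into particles with pre-calculated masses; the sketch below keeps the analytic Gaussians for brevity.)

```python
import numpy as np

def puff_concentration(x, puffs, v, D, t):
    """1-D advection-dispersion by superposed Gaussian puffs.
    Each puff (m, x0) advects to x0 + v*t and disperses with variance
    2*D*t; the field is the sum of the resulting Gaussians, so total
    mass is conserved by construction."""
    sigma2 = 2.0 * D * t
    c = np.zeros_like(x, dtype=float)
    for m, x0 in puffs:
        c += (m / np.sqrt(2.0 * np.pi * sigma2)
              * np.exp(-(x - x0 - v * t) ** 2 / (2.0 * sigma2)))
    return c
```

For a single unit-mass release this reproduces the analytical point-source solution of the advection-dispersion equation: the peak sits at x = v*t and the integrated mass stays at one, independent of any grid Peclet number.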