973 results for Homography constraint
Abstract:
The book's main contribution is the bringing together of varied discourses concerning the social policy impact of ageing within the context of fiscal austerity. As the editors rightly state, the economic recession has sharpened the focus of governments on the implications of demographic ageing. It is vital, therefore, that the social policy implications of societal ageing are studied and understood within a wider political economy of austerity. Of course, the fiscal crisis of the 1970s and the ensuing first wave of neo-liberalism in the Anglo-Saxon countries in the 1980s gave us a foretaste of the various ways in which the public burden thesis has been applied with great force to the older population. This recession is different, certainly in Ireland, but a combination of neo-liberal ideology and neo-classical economics is enforcing severe budgetary constraint on a range of countries (within and outside of the Eurozone) in the name of funding deficits. Policy makers appear to be uninterested in both the origins of the 2008 financial crisis and the distributional consequences of their austerity policies. In the absence of official concern, social science research has a key role to play.
Abstract:
We consider transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. In an effort to assess the performance, the probability density function and the cumulative distribution function of the end-to-end SNR are derived using the moment generating function, from which new exact closed-form expressions for the outage probability and the symbol error rate are derived. We then derive a new closed-form expression for the ergodic capacity. More importantly, by deriving the asymptotic expressions for the outage probability and the symbol error rate, as well as the high SNR approximations of the ergodic capacity, we establish new design insights under two distinct constraint scenarios: 1) a proportional interference power constraint, and 2) a fixed interference power constraint. Several pivotal conclusions are reached. For the first scenario, the full diversity order of the outage probability and the symbol error rate is achieved, and the high SNR slope of the ergodic capacity is 1/2. For the second scenario, the diversity order of the outage probability and the symbol error rate is zero with error floors, and the high SNR slope of the ergodic capacity is zero with a capacity ceiling.
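For readers unfamiliar with the asymptotic measures used above, the following are the standard textbook definitions (not taken from this paper); P_out is the outage probability, C the ergodic capacity, and ρ the transmit SNR:

    % Standard asymptotic performance measures (textbook definitions)
    d = -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(\rho)}{\log \rho}
        \quad \text{(diversity order)}
    S_{\infty} = \lim_{\rho \to \infty} \frac{C(\rho)}{\log_{2} \rho}
        \quad \text{(high-SNR slope)}

Under these definitions, an error floor corresponds to d = 0 and a capacity ceiling to S_∞ = 0, which is exactly the behaviour reported for the fixed interference power constraint.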
Abstract:
A novel approach for the multi-objective design optimisation of aerofoil profiles is presented. The proposed method aims to exploit the relative strengths of global and local optimisation algorithms, whilst using surrogate models to limit the number of computationally expensive CFD simulations required. The local search stage utilises a re-parameterisation scheme that increases the flexibility of the geometry description by iteratively increasing the number of design variables, enabling superior designs to be generated with minimal user intervention. Capability of the algorithm is demonstrated via the conceptual design of aerofoil sections for use on a lightweight laminar flow business jet. The design case is formulated to account for take-off performance while reducing sensitivity to leading edge contamination. The algorithm successfully manipulates boundary layer transition location to provide a potential set of aerofoils that represent the trade-offs between drag at cruise and climb conditions in the presence of a challenging constraint set. Variations in the underlying flow physics between Pareto-optimal aerofoils are examined to aid understanding of the mechanisms that drive the trade-offs in objective functions.
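As a hedged illustration of the surrogate-assisted loop this abstract describes, the Python sketch below samples an expensive objective, fits a cheap quadratic surrogate, optimises the surrogate, and verifies the candidate against the true objective. The function expensive_cfd() is a stand-in for a CFD evaluation, and the quadratic surrogate and parameters are illustrative assumptions, not the authors' implementation:

    # Minimal surrogate-assisted optimisation sketch: sample the expensive
    # objective, fit a cheap quadratic surrogate, optimise the surrogate,
    # then verify the candidate with the true objective.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_cfd(x):
        # Placeholder for a costly simulation; here a smooth test function.
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum()

    def fit_quadratic(X, y):
        # Least-squares fit of y ~ c0 + c1.x + c2.x^2 (separable quadratic).
        A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def surrogate(x, coef, d):
        return coef[0] + coef[1:1 + d] @ x + coef[1 + d:] @ (x ** 2)

    d = 2                                   # number of design variables
    X = rng.uniform(-1, 1, size=(12, d))    # initial design of experiments
    y = np.array([expensive_cfd(x) for x in X])

    for it in range(10):
        coef = fit_quadratic(X, y)
        # Optimise the surrogate by dense random search (cheap to evaluate).
        cand = rng.uniform(-1, 1, size=(2000, d))
        best = cand[np.argmin([surrogate(c, coef, d) for c in cand])]
        X = np.vstack([X, best])            # verify with the true objective
        y = np.append(y, expensive_cfd(best))

    print("best design:", X[np.argmin(y)], "objective:", y.min())

The paper's re-parameterisation stage would correspond to growing d between iterations; that refinement is omitted here for brevity.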
Abstract:
Power has become a key constraint in current nanoscale integrated circuit design due to the increasing demands for mobile computing and a low-carbon economy. As an emerging technology, inexact circuit design offers a promising approach to significantly reduce both dynamic and static power dissipation for error-tolerant applications. Although fixed-point arithmetic circuits have been studied in terms of inexact computing, floating-point arithmetic circuits have not been fully considered, even though they require more power. In this paper, the first inexact floating-point adder is designed and applied to high dynamic range (HDR) image processing. Inexact floating-point adders are proposed by approximately designing an exponent subtractor and a mantissa adder. Related logic operations, including the normalization and rounding modules, are also considered in terms of inexact computing. Two HDR images are processed using the proposed inexact floating-point adders to show the validity of the inexact design. HDR-VDP is used as a metric to measure the subjective results of the image addition. Significant improvements have been achieved in terms of area, delay and power consumption. Comparison results show that the proposed inexact floating-point adders can improve power consumption and the power-delay product by 29.98% and 39.60%, respectively.
Abstract:
Power has become a key constraint in nanoscale integrated circuit design due to the increasing demands for mobile computing and higher integration density. As an emerging computational paradigm, an inexact circuit offers a promising approach to significantly reduce both dynamic and static power dissipation for error-tolerant applications. In this paper, an inexact floating-point adder is proposed by approximately designing an exponent subtractor and mantissa adder. Related operations such as normalization and rounding are also dealt with in terms of inexact computing. An upper bound error analysis for the average case is presented to guide the inexact design; it shows that the inexact floating-point adder design is dependent on the application data range. High dynamic range images are then processed using the proposed inexact floating-point adders to show the validity of the inexact design; comparison results show that the proposed inexact floating-point adders can improve the power consumption and power-delay product by 29.98% and 39.60%, respectively.
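The circuit itself cannot be reconstructed from the abstract, but lower-part truncation of the mantissa is a common way to illustrate the idea of approximate floating-point addition in software. The Python sketch below, including truncate_mantissa and the choice of 12 kept bits, is an illustrative assumption, not the adder from the paper:

    # Software illustration of an "inexact" float add via mantissa
    # truncation: the low-order mantissa bits of each operand are zeroed
    # before a normal IEEE-754 float32 add. This mimics the spirit of an
    # approximate mantissa adder; it is NOT the circuit from the paper.
    import struct

    def truncate_mantissa(x: float, kept_bits: int) -> float:
        """Zero all but the top `kept_bits` of the 23-bit float32 mantissa."""
        word = struct.unpack(">I", struct.pack(">f", x))[0]
        mask = (0xFFFFFFFF << (23 - kept_bits)) & 0xFFFFFFFF
        return struct.unpack(">f", struct.pack(">I", word & mask))[0]

    def inexact_add(a: float, b: float, kept_bits: int = 12) -> float:
        return truncate_mantissa(a, kept_bits) + truncate_mantissa(b, kept_bits)

    exact = 3.14159 + 2.71828
    approx = inexact_add(3.14159, 2.71828)
    print(f"exact={exact:.6f} approx={approx:.6f} "
          f"rel_err={(approx - exact) / exact:.2e}")

Fewer kept bits trade larger relative error for (in hardware) a narrower, lower-power adder; error-tolerant workloads such as HDR image addition can absorb that error.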
Abstract:
We investigate a collision-sensitive secondary network that intends to opportunistically aggregate and utilize the spectrum of a primary network to achieve higher data rates. In opportunistic spectrum access with imperfect sensing of idle primary spectrum, secondary transmissions can collide with primary transmissions. When the secondary network aggregates more channels in the presence of imperfect sensing, collisions can occur more often, limiting the performance gained by spectrum aggregation. In this context, we aim to address a fundamental question: how much spectrum aggregation is worthwhile under imperfect sensing? We focus on two different types of collision: one caused by asynchronous transmission, and the other by imperfect spectrum sensing. The collision probability is derived in closed form as a function of various secondary network parameters: primary traffic load, secondary user transmission parameters, spectrum sensing errors, and the number of aggregated sub-channels. In addition, the impact of spectrum aggregation on data rate is analysed under a collision probability constraint. We then solve an optimal spectrum aggregation problem and propose a dynamic spectrum aggregation approach that increases the data rate subject to practical collision constraints. Our simulation results show clearly that the proposed approach outperforms a benchmark that passively aggregates sub-channels without collision awareness.
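The paper's closed-form expression is not given in the abstract. As a hedged illustration of why aggregating more sub-channels raises the collision risk, the toy model below assumes each aggregated sub-channel is independently busy with probability p and that sensing misses a busy sub-channel with probability p_md, in which case the secondary transmission collides; under those assumptions P_c = 1 - (1 - p * p_md)^N, which the Monte Carlo check confirms:

    # Toy model (assumptions, not the paper's system model): each of N
    # aggregated sub-channels is busy w.p. p_busy; sensing misses a busy
    # sub-channel w.p. p_md; transmitting on any missed-busy sub-channel
    # causes a collision. Closed form: P_c = 1 - (1 - p_busy * p_md) ** N.
    import random

    def collision_prob_mc(n_subch, p_busy, p_md, trials=100_000):
        hits = 0
        for _ in range(trials):
            collided = any(
                random.random() < p_busy and random.random() < p_md
                for _ in range(n_subch)
            )
            hits += collided
        return hits / trials

    for n in (1, 2, 4, 8):
        analytic = 1 - (1 - 0.3 * 0.1) ** n
        print(n, round(collision_prob_mc(n, 0.3, 0.1), 4), round(analytic, 4))

Even in this simplified setting the collision probability grows monotonically with N, which is the trade-off the paper's constrained aggregation problem balances.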
Abstract:
We consider an application scenario where points of interest (PoIs) each have a web presence and where a web user wants to identify a region that contains PoIs relevant to a set of keywords, e.g., in preparation for deciding where to go to conveniently explore the PoIs. Motivated by this, we propose the length-constrained maximum-sum region (LCMSR) query that returns a spatial-network region that is located within a general region of interest, that does not exceed a given size constraint, and that best matches the query keywords. Such a query maximizes the total weight of the PoIs in it w.r.t. the query keywords. We show that it is NP-hard to answer this query. We develop an approximation algorithm with a (5 + ε) approximation ratio utilizing a technique that scales node weights into integers. We also propose a more efficient heuristic algorithm and a greedy algorithm. Empirical studies on real data offer detailed insight into the accuracy of the proposed algorithms and show that the proposed algorithms are capable of computing results efficiently and effectively.
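As a hedged, simplified abstraction of the greedy algorithm mentioned above (the real LCMSR query operates on spatial networks with keyword weights), the sketch below grows a connected region from a seed node, repeatedly absorbing the frontier node with the best weight-per-added-length ratio until a length budget is exhausted; the graph, weights, and budget are illustrative assumptions:

    # Simplified greedy region-growing heuristic in the spirit of a
    # budgeted maximum-sum region query: grow a connected subgraph,
    # absorbing the neighbour with the best weight/cost ratio while the
    # total edge length stays within the budget.
    graph = {  # node -> {neighbour: edge_length}
        "a": {"b": 2.0, "c": 1.0},
        "b": {"a": 2.0, "d": 1.5},
        "c": {"a": 1.0, "d": 2.5},
        "d": {"b": 1.5, "c": 2.5},
    }
    weight = {"a": 1.0, "b": 3.0, "c": 0.5, "d": 4.0}  # keyword relevance

    def greedy_region(seed, budget):
        region, used = {seed}, 0.0
        while True:
            frontier = [  # feasible expansion edges: (cost, gain, node)
                (cost, weight[v], v)
                for u in region for v, cost in graph[u].items()
                if v not in region and used + cost <= budget
            ]
            if not frontier:
                return region, used
            cost, gain, v = max(frontier, key=lambda t: t[1] / t[0])
            region.add(v)
            used += cost

    region, length = greedy_region("a", budget=3.0)
    print(region, length)  # absorbs "b" then "c", using the full budget

A greedy heuristic like this is fast but carries no approximation guarantee; the paper's (5 + ε)-approximation comes from the separate weight-scaling algorithm.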
Abstract:
We present griz_P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the Hubble Space Telescope Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light-curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only supernovae (SNe) and assuming a constant dark energy equation of state and flatness, yields w = -1.120 +0.360/-0.206 (Stat) +0.269/-0.291 (Sys). When combined with BAO+CMB(Planck)+H0, the analysis yields Ω_M = 0.280 +0.013/-0.012 and w = -1.166 +0.072/-0.069 including all identified systematics. The value of w is inconsistent with the cosmological constant value of -1 at the 2.3σ level. Tension endures after removing either the baryon acoustic oscillation (BAO) or the H0 constraint, though it is strongest when including the H0 constraint. If we include WMAP9 cosmic microwave background (CMB) constraints instead of those from Planck, we find w = -1.124 +0.083/-0.065, which diminishes the discord to <2σ. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 SN sample, with approximately three times as many SNe, should provide more conclusive results.
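For context, a fit of this kind rests on the standard flat wCDM distance-redshift relations (textbook formulas, not specific to this paper), in which the SN Ia distance moduli constrain Ω_M and the constant equation-of-state parameter w:

    % Flat wCDM relations underlying an SN Ia Hubble-diagram fit
    % (standard cosmology formulas, not taken from the paper).
    E(z) = \sqrt{\Omega_M (1+z)^{3} + (1-\Omega_M)(1+z)^{3(1+w)}}
    d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^z \frac{dz'}{E(z')}
    \mu = 5 \log_{10}\!\frac{d_L}{10\,\mathrm{pc}}

A cosmological constant corresponds to w = -1, which is why the fitted w values above are compared against -1.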
Abstract:
We present nebular-phase optical and near-infrared spectroscopy of the Type IIP supernova SN 2012aw combined with non-local thermodynamic equilibrium radiative transfer calculations applied to ejecta from stellar evolution/explosion models. Our spectral synthesis models generally show good agreement with the ejecta from a M_ZAMS = 15 M⊙ progenitor star. The emission lines of oxygen, sodium, and magnesium are all consistent with the nucleosynthesis of a progenitor in the 14-18 M⊙ range. We also demonstrate how the evolution of the oxygen cooling lines of [O I] λ5577, [O I] λ6300, and [O I] λ6364 can be used to constrain the mass of oxygen in the non-molecularly cooled ashes to < 1 M⊙, independent of the mixing in the ejecta. This constraint implies that any progenitor model of initial mass greater than 20 M⊙ would be difficult to reconcile with the observed line strengths. A stellar progenitor of around M_ZAMS = 15 M⊙ can consistently explain the directly measured luminosity of the progenitor star, the observed nebular spectra, and the inferred pre-supernova mass-loss rate. We conclude that there is still no convincing example of a Type IIP supernova showing the nucleosynthesis products expected from an M_ZAMS > 20 M⊙ progenitor.
Abstract:
A simple yet efficient harmony search (HS) method with a new pitch adjustment rule (NPAHS) is proposed for the dynamic economic dispatch (DED) of electrical power systems, a large-scale non-linear real-time optimization problem subject to a number of complex constraints. The new pitch adjustment rule is based on perturbation information and the mean value of the harmony memory; it is simple to implement and helps to enhance solution quality and convergence speed. A new constraint handling technique is also developed to effectively handle the various constraints in the DED problem, and the violation of ramp-rate limits between the first and last scheduling intervals, which is often ignored by existing approaches to DED problems, is effectively eliminated. To validate its effectiveness, NPAHS is first tested on 10 popular benchmark functions with 100 dimensions, in comparison with four HS variants and five state-of-the-art evolutionary algorithms. Then, NPAHS is used to solve three 24-h DED systems with 5, 15 and 54 units, which consider valve-point effects, transmission loss, emission and prohibited operating zones. Simulation results on all these systems show the scalability and superiority of the proposed NPAHS on various large-scale problems.
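The abstract does not spell out the new rule, so the sketch below is a generic harmony search on a test function in which pitch adjustment nudges a memory-drawn value toward the harmony-memory mean, loosely echoing the "mean value of the harmony memory" idea; the rule, parameters, and objective are illustrative assumptions rather than the NPAHS rule itself:

    # Generic harmony search sketch (illustrative; the actual NPAHS
    # pitch adjustment rule is not reproduced here).
    import random

    DIM, HMS, HMCR, PAR, ITERS = 10, 20, 0.9, 0.3, 5000
    LO, HI = -5.0, 5.0

    def sphere(x):                      # benchmark objective to minimise
        return sum(v * v for v in x)

    memory = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(HMS)]
    scores = [sphere(h) for h in memory]

    for _ in range(ITERS):
        new = []
        for j in range(DIM):
            if random.random() < HMCR:        # memory consideration
                v = random.choice(memory)[j]
                if random.random() < PAR:     # pitch adjustment (assumed rule)
                    mean_j = sum(h[j] for h in memory) / HMS
                    v += random.uniform(0, 1) * (mean_j - v)
            else:                             # random re-initialisation
                v = random.uniform(LO, HI)
            new.append(min(HI, max(LO, v)))
        worst = max(range(HMS), key=scores.__getitem__)
        if (s := sphere(new)) < scores[worst]:
            memory[worst], scores[worst] = new, s

    print("best objective:", min(scores))

A real DED implementation would additionally enforce generation limits, ramp-rate limits, and the power-balance constraint via the paper's constraint handling technique.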
Abstract:
OBJECTIVE: To evaluate the effect of altering a single component of a rehabilitation programme (e.g. adding bilateral practice alone) on functional recovery after stroke, defined using a measure of activity.
DATA SOURCES: A search was conducted of Medline/Pubmed, CINAHL and Web of Science.
REVIEW METHODS: Two reviewers independently assessed eligibility. Randomized controlled trials were included if all participants received the same base intervention, and the experimental group experienced alteration of a single component of the training programme. This could be manipulation of an intrinsic component of training (e.g. intensity) or the addition of a discretionary component (e.g. augmented feedback). One reviewer extracted the data and another independently checked a subsample (20%). Quality was appraised according to the PEDro scale.
RESULTS: Thirty-six studies (n = 1724 participants) were included. These evaluated nine training components: mechanical degrees of freedom, intensity of practice, load, practice schedule, augmented feedback, bilateral movements, constraint of the unimpaired limb, mental practice and mirrored-visual feedback. Manipulation of the mechanical degrees of freedom of the trunk during reaching and the addition of mental practice during upper limb training were the only single components found to independently enhance recovery of function after stroke.
CONCLUSION: This review provides limited evidence to support the supposition that altering a single component of a rehabilitation programme realises greater functional recovery for stroke survivors. Further investigations are required to determine the most effective single components of rehabilitation programmes, and the combinations that may enhance functional recovery.
Abstract:
In this paper, we propose cyclic prefix single carrier (CP-SC) full-duplex transmission in cooperative spectrum sharing to achieve multipath diversity gain and full-duplex spectral efficiency. Integrating full-duplex transmission into cooperative spectrum sharing systems gives rise to two intrinsic problems: 1) the peak interference power constraint at the primary users (PUs) is concurrently imposed on the transmit power at the secondary source (SS) and the secondary relays (SRs); and 2) residual loop interference occurs between the transmit and receive antennas at the SRs. Examining the effects of residual loop interference under the peak interference power constraint at the PUs and the maximum transmit power constraints at the SS and the SRs is therefore a particularly challenging problem in frequency-selective fading channels. To do so, we derive and quantitatively evaluate the exact and asymptotic outage probability for several relay selection policies in frequency-selective fading channels. Our results show that zero diversity gain is obtained with full-duplex transmission.
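The residual loop interference enters such an analysis through the relay's SINR; a commonly used full-duplex relay model (a standard formulation in the literature, not necessarily this paper's exact system model) is:

    % Common full-duplex relay SINR model. P_S and P_R are the source and
    % relay transmit powers, h_SR the source-relay channel, h_LI the
    % residual loop-interference channel at the relay, N_0 the noise power.
    \gamma_R = \frac{P_S\,|h_{SR}|^2}{P_R\,|h_{LI}|^2 + N_0}

Because the interference term P_R |h_LI|^2 scales with the transmit power, the SINR saturates at high SNR, which is the intuition behind the zero diversity gain reported above.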
Abstract:
The preferences of users are important in route search and planning. For example, when a user plans a trip within a city, their preferences can be expressed as the keywords "shopping mall", "restaurant", and "museum", with weights 0.5, 0.4, and 0.1, respectively. The resulting route should best satisfy their weighted preferences. In this paper, we take weighted user preferences into account in route search and present a keyword coverage problem, which finds an optimal route from a source location to a target location such that the keyword coverage is optimized and the budget score satisfies a specified constraint. We prove that this problem is NP-hard. To solve this complex problem, we propose an optimal route search based on an A* variant for which we have defined an admissible heuristic function. The experiments conducted on real-world datasets demonstrate both the efficiency and accuracy of our proposed algorithms.
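For reference, the sketch below shows classic A* search with an admissible straight-line-distance heuristic, the algorithmic skeleton such a variant builds on; the keyword-coverage scoring itself is not reproduced, and the graph and coordinates are illustrative assumptions:

    # Generic A* route search with an admissible Euclidean heuristic.
    # Admissibility holds because every edge cost is >= the straight-line
    # distance between its endpoints, so A* returns an optimal route.
    import heapq, math

    coords = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "t": (3, 1)}
    edges = {  # node -> [(neighbour, cost)]
        "s": [("a", 1.5), ("b", 2.1)],
        "a": [("t", 2.0)],
        "b": [("a", 1.5), ("t", 1.6)],
        "t": [],
    }

    def h(u, goal):
        (x1, y1), (x2, y2) = coords[u], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    def astar(src, goal):
        frontier = [(h(src, goal), 0.0, src, [src])]
        best_g = {src: 0.0}
        while frontier:
            f, g, u, path = heapq.heappop(frontier)
            if u == goal:
                return path, g
            for v, c in edges[u]:
                if g + c < best_g.get(v, float("inf")):
                    best_g[v] = g + c
                    heapq.heappush(
                        frontier, (g + c + h(v, goal), g + c, v, path + [v]))
        return None, float("inf")

    print(astar("s", "t"))  # -> (['s', 'a', 't'], 3.5)

In the paper's setting the cost model would combine keyword coverage with the budget score; designing a heuristic that never overestimates that combined objective is precisely what makes the A* variant admissible.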
Abstract:
Peak power consumption is a first-order design constraint for data centers. Although peak power consumption is rarely, if ever, observed, the entire data center facility must provision for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility far below its theoretical peak value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, and an inappropriate choice can lead to unnecessarily poor performance. This paper fills this gap by comparing, for the first time, five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond what has been previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism in terms of energy efficiency, overhead, and predictable behavior. We show how these mechanisms can be combined to implement an optimal power capping mechanism that reduces the slowdown, compared to the most widely used mechanism, by up to 88%. Our results provide interesting insights regarding the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
Abstract:
We show that the X-ray flux of the Mn Kα line at 5.9 keV from the decay of 55Fe is a promising diagnostic to distinguish between Type Ia supernova (SN Ia) explosion models. Using radiation transport calculations, we compute the line flux for two three-dimensional explosion models: a near-Chandrasekhar-mass delayed detonation and a violent merger of two (1.1 and 0.9 M⊙) white dwarfs. Both models are based on solar-metallicity zero-age main-sequence progenitors. Due to explosive nuclear burning at higher density, the delayed-detonation model synthesizes ~3.5 times more radioactive 55Fe than the merger model. As a result, we find that the peak Mn Kα line flux of the delayed-detonation model exceeds that of the merger model by a factor of ~4.5. Since in both models the 5.9-keV X-ray flux peaks five to six years after the explosion, a single measurement of the X-ray line emission at this time can place a constraint on the explosion physics that is complementary to those derived from earlier-phase optical spectra or light curves. We perform detector simulations of current and future X-ray telescopes to investigate the possibilities of detecting the X-ray line at 5.9 keV. Of the currently existing telescopes, XMM-Newton/pn is the best instrument for close (≲1-2 Mpc), non-background-limited SNe Ia because of its large effective area. Due to its low instrumental background, Chandra/ACIS is currently the best choice for SNe Ia at distances above ~2 Mpc. For the delayed-detonation scenario, a line detection is feasible with Chandra up to ~3 Mpc for an exposure time of 10^6 s. We find that it should be possible with currently existing X-ray instruments (with exposure times ≲5 × 10^5 s) to detect both of our models at sufficiently high S/N to distinguish between them for hypothetical events within the Local Group. The prospects for detection will be better with future missions. For example, the proposed Athena/X-IFU instrument could detect our delayed-detonation model out to a distance of ~5 Mpc. This would make it possible to study future events occurring during its operational life at distances comparable to those of the recent supernovae SN 2011fe (~6.4 Mpc) and SN 2014J (~3.5 Mpc).