105 results for Constraint


Relevance:

10.00%

Publisher:

Abstract:

We consider an application scenario where points of interest (PoIs) each have a web presence and where a web user wants to identify a region that contains PoIs relevant to a set of keywords, e.g., in preparation for deciding where to go to conveniently explore the PoIs. Motivated by this, we propose the length-constrained maximum-sum region (LCMSR) query that returns a spatial-network region that is located within a general region of interest, that does not exceed a given size constraint, and that best matches the query keywords. Such a query maximizes the total weight of the PoIs in it w.r.t. the query keywords. We show that it is NP-hard to answer this query. We develop an approximation algorithm with a (5 + ε) approximation ratio, utilizing a technique that scales node weights into integers. We also propose a more efficient heuristic algorithm and a greedy algorithm. Empirical studies on real data offer detailed insight into the accuracy of the proposed algorithms and show that they are capable of computing results efficiently and effectively.
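
For illustration only, a minimal sketch of a greedy strategy in the spirit of the problem setting: grow a connected spatial-network region under an edge-length budget while maximizing the summed keyword weight of its PoIs. This is not the paper's algorithm; the function name, the "length" edge attribute and the weights mapping are hypothetical.

# Hypothetical sketch: greedily grow a connected spatial-network region under a
# total edge-length budget, maximizing the summed PoI weight of its nodes.
import networkx as nx

def greedy_region(graph, weights, budget):
    """graph: nx.Graph with a 'length' attribute on edges; weights: node -> keyword weight."""
    start = max(graph.nodes, key=lambda n: weights.get(n, 0.0))
    region, used = {start}, 0.0
    while True:
        best, best_ratio = None, -1.0
        for u in region:
            for v in graph.neighbors(u):
                if v in region:
                    continue
                length = graph[u][v]["length"]
                if length <= 0 or used + length > budget:
                    continue
                ratio = weights.get(v, 0.0) / length   # weight gained per unit length
                if ratio > best_ratio:
                    best, best_ratio = (v, length), ratio
        if best is None:
            break
        node, length = best
        region.add(node)
        used += length
    return region, sum(weights.get(n, 0.0) for n in region)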

Relevance:

10.00%

Publisher:

Abstract:

We present griz_P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the Hubble Space Telescope Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light-curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only supernovae (SNe) and assuming a constant dark energy equation of state and flatness, yields w = -1.120 +0.360/-0.206 (stat) +0.269/-0.291 (sys). When combined with BAO+CMB(Planck)+H0, the analysis yields Ω_M = 0.280 +0.013/-0.012 and w = -1.166 +0.072/-0.069 including all identified systematics. The value of w is inconsistent with the cosmological constant value of -1 at the 2.3σ level. Tension endures after removing either the baryon acoustic oscillation (BAO) or the H0 constraint, though it is strongest when including the H0 constraint. If we include WMAP9 cosmic microwave background (CMB) constraints instead of those from Planck, we find w = -1.124 +0.083/-0.065, which diminishes the discord to <2σ. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 SN sample, with roughly three times as many SNe, should provide more conclusive results.
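
For background to the quoted fit, a minimal sketch of the standard flat, constant-w cosmology that such an analysis assumes (generic notation, not taken from the paper): the SN Ia distance modulus constrains w through

\[
\mu(z) = 5\log_{10}\!\frac{d_L(z)}{10\,\mathrm{pc}}, \qquad
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{\sqrt{\Omega_M(1+z')^{3}+(1-\Omega_M)(1+z')^{3(1+w)}}},
\]

with w = -1 recovering a cosmological constant.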

Relevance:

10.00%

Publisher:

Abstract:

We present nebular-phase optical and near-infrared spectroscopy of the Type IIP supernova SN 2012aw combined with non-local thermodynamic equilibrium radiative transfer calculations applied to ejecta from stellar evolution/explosion models. Our spectral synthesis models generally show good agreement with the ejecta from a M_ZAMS = 15 M⊙ progenitor star. The emission lines of oxygen, sodium, and magnesium are all consistent with the nucleosynthesis in a progenitor in the 14-18 M⊙ range. We also demonstrate how the evolution of the oxygen cooling lines of [O I] λ5577, [O I] λ6300, and [O I] λ6364 can be used to constrain the mass of oxygen in the non-molecularly cooled ashes to < 1 M⊙, independent of the mixing in the ejecta. This constraint implies that any progenitor model of initial mass greater than 20 M⊙ would be difficult to reconcile with the observed line strengths. A stellar progenitor of around M_ZAMS = 15 M⊙ can consistently explain the directly measured luminosity of the progenitor star, the observed nebular spectra, and the inferred pre-supernova mass-loss rate. We conclude that there is still no convincing example of a Type IIP supernova showing the nucleosynthesis products expected from an M_ZAMS > 20 M⊙ progenitor. © 2014 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Relevance:

10.00%

Publisher:

Abstract:

A simple yet efficient harmony search (HS) method with a new pitch adjustment rule (NPAHS) is proposed for dynamic economic dispatch (DED) of electrical power systems, a large-scale non-linear real-time optimization problem subject to a number of complex constraints. The new pitch adjustment rule is based on the perturbation information and the mean value of the harmony memory; it is simple to implement and helps to enhance solution quality and convergence speed. A new constraint handling technique is also developed to effectively handle the various constraints in the DED problem, and the violation of ramp rate limits between the first and last scheduling intervals, which is often ignored by existing approaches for DED problems, is effectively eliminated. To validate its effectiveness, NPAHS is first tested on 10 popular benchmark functions with 100 dimensions, in comparison with four HS variants and five state-of-the-art evolutionary algorithms. Then, NPAHS is used to solve three 24-h DED systems with 5, 15 and 54 units, which consider valve-point effects, transmission loss, emission and prohibited operating zones. Simulation results on all these systems show the scalability and superiority of the proposed NPAHS on various large-scale problems.
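
The abstract only states that the new rule uses the harmony-memory mean and perturbation information, so the following is a hypothetical sketch of a mean-based pitch adjustment in that spirit, not the published NPAHS rule; the function name and the choice of spread as the "perturbation information" are assumptions.

# Hypothetical sketch of a mean-based pitch adjustment for harmony search.
import random

def adjust_pitch(harmony_memory, j, lower, upper):
    """harmony_memory: list of solution vectors; j: index of the component to adjust."""
    column = [h[j] for h in harmony_memory]
    mean_j = sum(column) / len(column)          # mean value of the harmony memory
    spread = max(column) - min(column)          # assumed "perturbation information"
    new_value = mean_j + random.uniform(-1.0, 1.0) * spread
    return min(max(new_value, lower), upper)    # keep the variable inside its bounds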

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the effect of altering a single component of a rehabilitation programme (e.g. adding bilateral practice alone) on functional recovery after stroke, defined using a measure of activity.

DATA SOURCES: A search was conducted of Medline/Pubmed, CINAHL and Web of Science.

REVIEW METHODS: Two reviewers independently assessed eligibility. Randomized controlled trials were included if all participants received the same base intervention, and the experimental group experienced alteration of a single component of the training programme. This could be manipulation of an intrinsic component of training (e.g. intensity) or the addition of a discretionary component (e.g. augmented feedback). One reviewer extracted the data and another independently checked a subsample (20%). Quality was appraised according to the PEDro scale.

RESULTS: Thirty-six studies (n = 1724 participants) were included. These evaluated nine training components: mechanical degrees of freedom, intensity of practice, load, practice schedule, augmented feedback, bilateral movements, constraint of the unimpaired limb, mental practice and mirrored-visual feedback. Manipulation of the mechanical degrees of freedom of the trunk during reaching and the addition of mental practice during upper limb training were the only single components found to independently enhance recovery of function after stroke.

CONCLUSION: This review provides limited evidence to support the supposition that altering a single component of a rehabilitation programme realises greater functional recovery for stroke survivors. Further investigations are required to determine the most effective single components of rehabilitation programmes, and the combinations that may enhance functional recovery.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose cyclic prefix single carrier (CP-SC) full-duplex transmission in cooperative spectrum sharing to achieve multipath diversity gain and full-duplex spectral efficiency. Integrating full-duplex transmission into cooperative spectrum sharing systems results in two intrinsic problems: 1) the peak interference power constraint at the primary users (PUs) is concurrently inflicted on the transmit power at the secondary source (SS) and the secondary relays (SRs); and 2) residual loop interference occurs between the transmit and receive antennas at the secondary relays. Thus, examining the effects of residual loop interference under the peak interference power constraint at the PUs and the maximum transmit power constraints at the SS and the SRs is a particularly challenging problem in frequency selective fading channels. To do so, we derive and quantitatively evaluate the exact and asymptotic outage probability for several relay selection policies in frequency selective fading channels. Our results show that zero diversity gain is obtained with full-duplex transmission.
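
As a point of reference, the usual underlay spectrum-sharing power constraint takes the following form (generic notation, assumed rather than quoted from the paper): the secondary transmit power is capped both by its own maximum power and by the peak interference power tolerated at the PU,

\[
P_{\mathrm{S}} = \min\!\left(P_{\max},\ \frac{I_{\mathrm{p}}}{|h_{\mathrm{SP}}|^{2}}\right),
\]

where I_p is the peak interference power allowed at the PU and h_SP is the channel from the secondary transmitter to the PU; an analogous cap applies at each relay through its own channel to the PU.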

Relevance:

10.00%

Publisher:

Abstract:

The preferences of users are important in route search and planning. For example, when a user plans a trip within a city, their preferences can be expressed as the keywords "shopping mall", "restaurant", and "museum", with weights 0.5, 0.4, and 0.1, respectively. The resulting route should best satisfy these weighted preferences. In this paper, we take the weighted user preferences into account in route search, and present a keyword coverage problem, which finds an optimal route from a source location to a target location such that the keyword coverage is optimized and the budget score satisfies a specified constraint. We prove that this problem is NP-hard. To solve this complex problem, we propose an optimal route search based on an A* variant for which we have defined an admissible heuristic function. The experiments conducted on real-world datasets demonstrate both the efficiency and accuracy of our proposed algorithms.
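
For illustration, a minimal A*-style sketch for a maximum-coverage route under a budget, assuming an admissible heuristic() that never overestimates the keyword coverage still attainable; the graph structure, coverage() and heuristic() are hypothetical and this is not the paper's algorithm.

# Hypothetical sketch: best-first search over simple paths within a budget,
# ordered by coverage-so-far plus an optimistic (admissible) estimate.
import heapq
import itertools

def a_star_route(graph, source, target, budget, coverage, heuristic):
    """graph[u] -> iterable of (v, cost). Returns (best route, its coverage)."""
    counter = itertools.count()                      # tie-breaker for the heap
    start_cov = coverage(source)
    frontier = [(-(start_cov + heuristic(source, target)), next(counter),
                 start_cov, 0.0, source, [source])]
    best_route, best_cov = None, float("-inf")
    while frontier:
        _, _, cov, cost, node, path = heapq.heappop(frontier)
        if node == target:
            if cov > best_cov:
                best_route, best_cov = path, cov
            continue                                 # do not extend past the target
        for nxt, step in graph.get(node, ()):
            new_cost = cost + step
            if new_cost > budget or nxt in path:     # budget constraint, simple paths
                continue
            new_cov = cov + coverage(nxt)
            priority = -(new_cov + heuristic(nxt, target))
            heapq.heappush(frontier, (priority, next(counter),
                                      new_cov, new_cost, nxt, path + [nxt]))
    return best_route, best_cov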

Relevance:

10.00%

Publisher:

Abstract:

Peak power consumption is the first-order design constraint of data centers. Though peak power consumption is rarely, if ever, observed, the entire data center facility must prepare for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility far below its theoretical peak value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, which can lead to unnecessarily poor performance if an inappropriate scheme is used. This paper fills this gap by comparing, for the first time, five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond what has been previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism, in terms of energy efficiency, overhead, and predictable behavior. We show how these mechanisms can be combined in order to implement an optimal power capping mechanism which reduces the slowdown, compared to the most widely used mechanism, by up to 88%. Our results provide interesting insights regarding the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
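
To make the idea of a power capping mechanism concrete, here is a generic feedback-loop sketch (not one of the five mechanisms evaluated in the paper): measure power, then tighten or relax a normalized actuator, such as a frequency or concurrency limit. read_power() and apply_limit() are illustrative stubs, not real APIs.

# Hypothetical sketch of a software power cap as a simple proportional controller.
import time

def power_cap_loop(cap_watts, read_power, apply_limit, limit=1.0,
                   step=0.05, period_s=1.0, iterations=60):
    """'limit' is a normalized actuator setting kept within [0.1, 1.0]."""
    for _ in range(iterations):
        power = read_power()
        if power > cap_watts:
            limit = max(0.1, limit - step)   # throttle when over the cap
        elif power < 0.9 * cap_watts:
            limit = min(1.0, limit + step)   # relax when comfortably under the cap
        apply_limit(limit)
        time.sleep(period_s)
    return limit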

Relevance:

10.00%

Publisher:

Abstract:

We show that the X-ray line flux of the Mn Kα line at 5.9 keV from the decay of 55Fe is a promising diagnostic to distinguish between Type Ia supernova (SN Ia) explosion models. Using radiation transport calculations, we compute the line flux for two three-dimensional explosion models: a near-Chandrasekhar mass delayed detonation and a violent merger of two (1.1 and 0.9 M⊙) white dwarfs. Both models are based on solar metallicity zero-age main-sequence progenitors. Due to explosive nuclear burning at higher density, the delayed-detonation model synthesizes ~3.5 times more radioactive 55Fe than the merger model. As a result, we find that the peak Mn Kα line flux of the delayed-detonation model exceeds that of the merger model by a factor of ~4.5. Since in both models the 5.9-keV X-ray flux peaks five to six years after the explosion, a single measurement of the X-ray line emission at this time can place a constraint on the explosion physics that is complementary to those derived from earlier phase optical spectra or light curves. We perform detector simulations of current and future X-ray telescopes to investigate the possibilities of detecting the X-ray line at 5.9 keV. Of the currently existing telescopes, XMM-Newton/pn is the best instrument for close (≲1-2 Mpc), non-background limited SNe Ia because of its large effective area. Due to its low instrumental background, Chandra/ACIS is currently the best choice for SNe Ia at distances above ~2 Mpc. For the delayed-detonation scenario, a line detection is feasible with Chandra up to ~3 Mpc for an exposure time of 10^6 s. We find that it should be possible with currently existing X-ray instruments (with exposure times ≲5 × 10^5 s) to detect both of our models at sufficiently high S/N to distinguish between them for hypothetical events within the Local Group. The prospects for detection will be better with future missions. For example, the proposed Athena/X-IFU instrument could detect our delayed-detonation model out to a distance of ~5 Mpc. This would make it possible to study future events occurring during its operational life at distances comparable to those of the recent supernovae SN 2011fe (~6.4 Mpc) and SN 2014J (~3.5 Mpc).
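
For orientation, a minimal sketch of why the line traces the synthesized 55Fe mass (standard nuclear data, not figures from the paper): once the ejecta are optically thin, the Kα photon luminosity follows the exponential decay of the remaining 55Fe,

\[
L_{\mathrm{K}\alpha}(t) \;\propto\; \frac{N_{55}(0)}{\tau_{55}}\,e^{-t/\tau_{55}},
\qquad \tau_{55} = \frac{t_{1/2}}{\ln 2} \approx \frac{2.7\,\mathrm{yr}}{0.693} \approx 3.9\,\mathrm{yr},
\]

so models differing by a few times in 55Fe mass differ by a comparable factor in late-time line flux; the peak at five to six years reflects the transition to optically thin ejecta competing with this decay.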

Relevance:

10.00%

Publisher:

Abstract:

The Arc-Length Method is a solution procedure that enables a generic non-linear problem to pass limit points. Some examples are provided of solutions to mode-jumping problems using a commercial finite element package, and other investigations are carried out on a simple structure for which the numerical solution can be compared with an analytical one. It is shown that the Arc-Length Method is not reliable when bifurcations are present in the primary equilibrium path; also, the presence of very sharp snap-backs or special boundary conditions may cause convergence difficulty at limit points. An improvement to the predictor used in the incremental procedure is suggested, together with a reliable criterion for selecting either solution of the quadratic arc-length constraint. The gap that is sometimes observed between the experimental load level of mode-jumping and its arc-length prediction is explained through an example.
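
For reference, the quadratic constraint referred to above can be written in the standard Crisfield-type form (generic notation, assumed rather than taken from the paper): for a displacement increment Δu and load-factor increment Δλ within a step,

\[
\Delta\mathbf{u}^{\mathsf T}\Delta\mathbf{u} \;+\; \psi^{2}\,\Delta\lambda^{2}\,\mathbf{q}^{\mathsf T}\mathbf{q} \;=\; \Delta l^{2},
\]

where q is the reference load vector, ψ a scaling parameter and Δl the prescribed arc length. Substituting the Newton corrector for Δu turns this into a scalar quadratic a δλ² + b δλ + c = 0; its two roots are the "either solution" mentioned above, and a common criterion retains the root whose increment does not double back along the equilibrium path (e.g., the one with the larger dot product with the previous increment).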

Relevance:

10.00%

Publisher:

Abstract:

Single component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant sum problem (closure), and to the inherently multivariate, relative information conveyed by compositional data. Well known is, for instance, the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g., the quartz dilution effect), or the contrary effect, apparent enrichment in many elements due to removal of potassium during weathering. The validity of classical single component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g. dilution with SiO2 or volatiles). This is similar to the classical approach of normalising to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method that can answer the required geological or geochemical question whilst maintaining the integrity of the compositional nature of the data. The required log-ratio transformations should be applied, followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
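
As a compact reminder of the transformations involved (standard compositional-data definitions, not specific to this paper): a knowledge-driven map variable is a simple pairwise log-ratio, e.g. ln(Ni/SiO2) to filter quartz dilution, while the centred log-ratio of a D-part composition x is

\[
\operatorname{clr}(\mathbf{x}) = \left(\ln\frac{x_1}{g(\mathbf{x})},\,\ldots,\,\ln\frac{x_D}{g(\mathbf{x})}\right),
\qquad g(\mathbf{x}) = \Big(\prod_{i=1}^{D} x_i\Big)^{1/D}.
\]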

Relevance:

10.00%

Publisher:

Abstract:

Statistics are regularly used to make some form of comparison between trace evidence or to deploy the exclusionary principle (Morgan and Bull, 2007) in forensic investigations. Trace evidence routinely comprises the results of particle size, chemical or modal analyses and as such constitutes compositional data. The issue is that compositional data, including percentages, parts per million, etc., only carry relative information. This may be problematic where a comparison of percentages and other constrained/closed data is deemed a statistically valid and appropriate way to present trace evidence in a court of law. Notwithstanding an awareness of the existence of the constant sum problem since the seminal works of Pearson (1896) and Chayes (1960), and the introduction of the application of log-ratio techniques (Aitchison, 1986; Pawlowsky-Glahn and Egozcue, 2001; Pawlowsky-Glahn and Buccianti, 2011; Tolosana-Delgado and van den Boogaart, 2013), the problem that a constant sum destroys the potential independence of variances and covariances required for correlation and regression analysis and for empirical multivariate methods (principal component analysis, cluster analysis, discriminant analysis, canonical correlation) is all too often not acknowledged in the statistical treatment of trace evidence. Yet the need for a robust treatment of forensic trace evidence analyses is obvious. This research examines the issues and potential pitfalls for forensic investigators if the constant sum constraint is ignored in the analysis and presentation of forensic trace evidence. Forensic case studies involving particle size and mineral analyses as trace evidence are used to demonstrate the use of a compositional data approach employing a centred log-ratio (clr) transformation and multivariate statistical analyses.
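
A minimal sketch of the kind of workflow described above, assuming strictly positive component data: a centred log-ratio (clr) transform followed by PCA. The column values and sample sizes are illustrative only, not forensic data from the paper.

# Minimal sketch: clr transform of a (samples x parts) array, then PCA via SVD.
import numpy as np

def clr(composition):
    """Centred log-ratio transform of positive compositional data."""
    log_x = np.log(composition)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Example: three samples, four mineral/size fractions (made-up numbers).
parts = np.array([[55.0, 25.0, 15.0, 5.0],
                  [60.0, 20.0, 12.0, 8.0],
                  [40.0, 30.0, 20.0, 10.0]])
z = clr(parts)

# PCA on the clr-transformed data.
z_centred = z - z.mean(axis=0)
_, singular_values, components = np.linalg.svd(z_centred, full_matrices=False)
print(components[:2])   # first two principal directions in clr space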

Relevance:

10.00%

Publisher:

Abstract:

This paper discusses compact-stencil finite difference time domain (FDTD) schemes for approximating the 2D wave equation in the context of digital audio. Stability, accuracy, and efficiency are investigated and new ways of viewing and interpreting the results are discussed. It is shown that if a tight accuracy constraint is applied, implicit schemes outperform explicit schemes. The paper also discusses the relevance to digital waveguide mesh modelling, and highlights the optimally efficient explicit scheme.
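
As a point of reference for the schemes discussed (the standard five-point explicit update, not one of the paper's compact implicit schemes; notation assumed): with grid spacing h, time step k and Courant number λ = ck/h, the 2D wave equation u_tt = c²(u_xx + u_yy) is approximated by

\[
u^{n+1}_{i,j} = 2u^{n}_{i,j} - u^{n-1}_{i,j}
+ \lambda^{2}\left(u^{n}_{i+1,j} + u^{n}_{i-1,j} + u^{n}_{i,j+1} + u^{n}_{i,j-1} - 4u^{n}_{i,j}\right),
\]

which is stable for λ ≤ 1/√2; the accuracy and numerical dispersion of such updates at a given λ are what the tight accuracy constraint in the paper bears on.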

Relevance:

10.00%

Publisher:

Abstract:

We analyze the performance of amplify-and-forward dual-hop relaying systems in the presence of in-phase and quadrature-phase imbalance (IQI) at the relay node. In particular, an exact analytical expression for, and tight lower bounds on, the outage probability are derived over independent, non-identically distributed Nakagami-m fading channels. Moreover, tractable upper and lower bounds on the ergodic capacity are presented at arbitrary signal-to-noise ratios (SNRs). Some special cases of practical interest (e.g., Rayleigh and Nakagami-0.5 fading) are also studied. An asymptotic analysis is performed in the high SNR regime, where we observe that IQI results in a ceiling effect on the signal-to-interference-plus-noise ratio (SINR), which depends only on the level of I/Q impairments, i.e., the joint image rejection ratio. Finally, the optimal I/Q amplitude and phase mismatch parameters are provided for maximizing the SINR ceiling, thus improving the system performance. An interesting observation is that, under a fixed total phase mismatch constraint, it is optimal to have the same level of transmitter (TX) and receiver (RX) phase mismatch at the relay node, while the optimal values for the TX and RX amplitude mismatch should be inversely proportional to each other.
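
For context, a commonly used baseband IQI model (one standard convention; signs and conjugation vary across papers, and this is not quoted from the one above): a signal x subject to amplitude mismatch g and phase mismatch φ is observed as

\[
y = K_1 x + K_2 x^{*},\qquad
K_1 = \tfrac{1}{2}\left(1 + g\,e^{-j\phi}\right),\quad
K_2 = \tfrac{1}{2}\left(1 - g\,e^{j\phi}\right),
\]

and the image rejection ratio IRR = |K_1|²/|K_2|² quantifies the impairment; perfect matching (g = 1, φ = 0) gives K_2 = 0, while a finite IRR is what produces the SINR ceiling noted in the abstract.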

Relevance:

10.00%

Publisher:

Abstract:

When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem, which is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces the workflow execution time and provides a speed-up of 1.3x-2.5x over centralised approaches.
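
To illustrate the deployment problem being solved, here is a hypothetical brute-force sketch: pick a region for each workflow task so that the estimated inter-region data-transfer time over the DAG edges is minimal. The region names, transfer times, data volumes and task names are all made up; the paper itself formulates this as a constraint model generated by an automated constraint modelling system rather than enumerating assignments.

# Hypothetical sketch: exhaustive region assignment minimizing DAG transfer time.
from itertools import product

regions = ["us-east-1", "eu-west-1", "ap-southeast-2"]
# transfer_time[(r1, r2)]: seconds to move 1 GB from r1 to r2 (made-up numbers).
transfer_time = {(a, b): (0.0 if a == b else 8.0) for a in regions for b in regions}
transfer_time[("us-east-1", "eu-west-1")] = 5.0
transfer_time[("eu-west-1", "us-east-1")] = 5.0

# DAG edges: (producer task, consumer task, data volume in GB).
edges = [("fetch", "transform", 2.0), ("transform", "analyse", 1.0),
         ("fetch", "analyse", 0.5)]
tasks = sorted({t for e in edges for t in e[:2]})

def cost(assignment):
    return sum(vol * transfer_time[(assignment[src], assignment[dst])]
               for src, dst, vol in edges)

best = min((dict(zip(tasks, combo)) for combo in product(regions, repeat=len(tasks))),
           key=cost)
print(best, cost(best))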