24 results for Time constraints


Relevance: 30.00%

Abstract:

This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces network-linked swarm size, an increased diversity function and a learning method based on the personal best, Pbest, results of the swarm. These additional developments to the PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
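The abstract does not give CTA-PSO's update rules, but the core idea — a particle swarm searching over per-device channel-time allocations to maximize a sum of utilities under a fixed superframe length — can be sketched as follows. Everything here (the log utilities, the inertia and learning coefficients, the projection step) is an illustrative assumption, not the paper's actual formulation.

```python
import math
import random

def project(alloc, total):
    # keep allocations positive and renormalize so they fill the superframe
    a = [max(x, 1e-9) for x in alloc]
    s = sum(a)
    return [x * total / s for x in a]

def utility(alloc, weights):
    # hypothetical concave (log) per-device QoS utility; the paper's NUM
    # formulation also admits non-convex utilities
    return sum(w * math.log(t) for w, t in zip(weights, alloc))

def cta_pso(weights, total=1.0, n_particles=20, iters=200, seed=0):
    """Plain PSO over channel-time allocations (illustrative parameters)."""
    rng = random.Random(seed)
    n = len(weights)
    pos = [project([rng.random() for _ in range(n)], total)
           for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [utility(p, weights) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = project(pos[i], total)  # enforce the channel-time constraint
            v = utility(pos[i], weights)
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

For weighted log utilities the optimum is the proportional split, so a device with weight 3 should end up with roughly three times the channel time of a device with weight 1.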

Relevance: 30.00%

Abstract:

In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
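As an illustration of the difference the injection method makes, here is a minimal 1-D finite difference scheme with a "soft" source: the source function (a differentiated-Gaussian driving pulse, chosen because it has no zero-frequency content) is added to the updated pressure at the source node rather than overwriting it, so the node remains transparent to incoming waves. The grid size, frequency and pulse choice are assumptions made for this sketch, not values from the paper.

```python
import math

def gaussian_pulse(n, dt, f0):
    # driving pulse: a differentiated Gaussian, which avoids the DC
    # (zero-frequency) offset that a plain Gaussian would inject
    t0 = 4.0 / f0
    t = n * dt - t0
    return -2.0 * t * (f0 ** 2) * math.exp(-(f0 * t) ** 2)

def fdtd_1d(steps=300, nx=200, c=343.0, dx=0.05, src=50):
    dt = dx / c  # Courant number 1: the 1-D scheme is then exact
    p_prev = [0.0] * nx
    p = [0.0] * nx
    for n in range(steps):
        p_next = [0.0] * nx
        for i in range(1, nx - 1):
            p_next[i] = (2 * p[i] - p_prev[i]
                         + (c * dt / dx) ** 2 * (p[i + 1] - 2 * p[i] + p[i - 1]))
        # "soft" source: add the source function to the update instead of
        # overwriting the node, so incoming waves are not scattered
        p_next[src] += gaussian_pulse(n, dt, f0=1000.0)
        p_prev, p = p, p_next
    return p
```

A "hard" source would instead assign `p_next[src] = gaussian_pulse(...)`, which reflects any wave arriving at that node.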

Relevance: 30.00%

Abstract:

We develop a continuous-time asset price model to capture the time series momentum documented recently. The underlying stochastic delay differential system facilitates the analysis of the effects of the different time horizons used by momentum trading. By studying an optimal asset allocation problem, we find that the performance of the time series momentum strategy can be significantly improved by combining it with market fundamentals and by timing opportunities with respect to market trend and volatility. Furthermore, the results also hold for different time horizons, in out-of-sample tests, and under short-sale constraints. The outperformance of the optimal strategy is immune to market states, investor sentiment and market volatility.
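The central ingredients of the abstract — a trend signal over a chosen time horizon combined with volatility timing and an optional short-sale constraint — can be illustrated with a deliberately simplified discrete-time sketch. The lookback lengths, volatility target and position cap below are assumptions; the paper's model is a continuous-time stochastic delay differential system, not this.

```python
import math

def timeseries_momentum(prices, lookback=20, vol_window=20,
                        target_vol=0.10, allow_short=True):
    # position = sign of the trailing return over `lookback`, scaled by
    # inverse realized volatility and capped at full investment; setting
    # allow_short=False mimics a short-sale constraint
    rets = [prices[i] / prices[i - 1] - 1.0 for i in range(1, len(prices))]
    positions = []
    for t in range(len(rets)):
        if t < max(lookback, vol_window):
            positions.append(0.0)  # warm-up period: no position
            continue
        mom = prices[t] / prices[t - lookback] - 1.0
        window = rets[t - vol_window:t]
        mu = sum(window) / vol_window
        vol = math.sqrt(sum((r - mu) ** 2 for r in window) / vol_window)
        vol *= math.sqrt(252)  # annualize the daily realized volatility
        sign = 1.0 if mom > 0 else -1.0
        if not allow_short:
            sign = max(sign, 0.0)
        positions.append(sign * min(target_vol / max(vol, 1e-9), 1.0))
    return positions
```

On a steadily trending price series the strategy sits fully invested after the warm-up window, which is the behaviour a trend signal should show.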

Relevance: 30.00%

Abstract:

Considering the development of aerospace composite components, designing for reduced manufacturing layup cost and structural complexity is increasingly important. While the advantage of composite materials is the ability to tailor designs to various structural loads for minimum mass, the challenge is obtaining a design that is manufacturable and minimizes local ply incompatibility. The focus of the presented research is understanding how the relationships between mass, manufacturability and design complexity, under realistic loads and design requirements, can be affected by enforcing ply continuity in the design process. Presented are a series of sizing case studies on an upper wing cover, designed using conventional analyses and the tabular laminate design process. Introducing skin ply continuity constraints can generate skin designs with minimal ply discontinuities, fewer ply drops and larger ply areas than designs not constrained for continuity. However, the reduced design freedom associated with the addition of these constraints results in a weight penalty over the total wing cover. Perhaps more interestingly, when considering manual hand layup, the reduced design complexity does not translate into a reduced recurring manufacturing cost. In contrast, heavier wing cover designs appear to take more time to lay up regardless of the laminate design complexity. © 2012 AIAA.
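A toy illustration of one quantity discussed above — counting ply drops between adjacent panels of a wing cover, one of the discontinuity measures that continuity constraints aim to minimize. The panel representation (a stacking sequence as a list of ply angles) and the counting rule are assumptions made for the sketch, not the tabular laminate process itself.

```python
def ply_drops(panels):
    # panels: list of stacking sequences (lists of ply angles in degrees),
    # ordered along the span; counts plies that are not continued from one
    # panel into the next, a simple proxy for laminate design complexity
    drops = 0
    for a, b in zip(panels, panels[1:]):
        thick, thin = (a, b) if len(a) >= len(b) else (b, a)
        remaining = list(thin)
        for ply in thick:
            if ply in remaining:
                remaining.remove(ply)   # ply continues into the next panel
            else:
                drops += 1              # ply is dropped at the panel boundary
    return drops
```

Two identical panels give zero drops; a panel that loses its -45° ply at the next station gives one.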

Relevance: 30.00%

Abstract:

For some time, the satisfiability formulae that have been the most difficult to solve for their size have been crafted to be unsatisfiable by the use of cardinality constraints. Recent solvers have introduced explicit checking of such constraints, rendering previously difficult formulae trivial to solve. A family of unsatisfiable formulae is described that is derived from the sgen4 family but cannot be solved using cardinality constraints detection and reasoning alone. These formulae were found to be the most difficult during the SAT2014 competition by a significant margin and include the shortest unsolved benchmark in the competition, sgen6-1200-5-1.cnf.
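For readers unfamiliar with cardinality-based constructions, here is a much simpler member of the same species than the sgen families: the pigeonhole formulae, which are unsatisfiable precisely because of an implicit at-most-one cardinality constraint per hole. This is an illustrative stand-in, not the sgen4/sgen6 generator.

```python
from itertools import combinations

def at_most_one(vars_):
    # pairwise encoding of the cardinality constraint sum(x) <= 1:
    # for every pair of variables, at least one must be false
    return [[-a, -b] for a, b in combinations(vars_, 2)]

def unsat_pigeonhole(holes):
    # holes+1 pigeons, each placed in some hole, at most one pigeon per
    # hole: unsatisfiable by counting, yet hard for plain resolution
    pigeons = holes + 1
    var = lambda p, h: p * holes + h + 1  # DIMACS-style positive literals
    clauses = [[var(p, h) for h in range(holes)] for p in range(pigeons)]
    for h in range(holes):
        clauses += at_most_one([var(p, h) for p in range(pigeons)])
    return clauses
```

Solvers that detect and reason about such cardinality constraints dispatch these formulae instantly, which is exactly why the family described in the abstract is built to evade that detection.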

Relevance: 30.00%

Abstract:

This paper addresses the problem of learning Bayesian network structures from data based on score functions that are decomposable. It describes properties that strongly reduce the time and memory costs of many known methods without losing global optimality guarantees. These properties are derived for different score criteria such as Minimum Description Length (or Bayesian Information Criterion), Akaike Information Criterion and Bayesian Dirichlet Criterion. Then a branch-and-bound algorithm is presented that integrates structural constraints with data in a way that guarantees global optimality. As an example, structural constraints are used to map the problem of structure learning in Dynamic Bayesian networks into a corresponding augmented Bayesian network. Finally, we show empirically the benefits of using the properties with state-of-the-art methods and with the new algorithm, which is able to handle larger data sets than before.
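A sketch of the decomposability such methods exploit: with BIC, the network score is a sum of per-node terms, each depending only on that node and its parent set, so under a fixed variable ordering each parent set can be optimized independently without creating cycles. The ordering-based search below is an illustrative simplification; the paper's branch-and-bound explores structures, not a single fixed ordering.

```python
import math
from itertools import combinations

def bic_local(data, child, parents, arity):
    # decomposable BIC contribution of one node given a candidate parent
    # set: log-likelihood minus a 0.5*log(n) penalty per free parameter
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0] * arity[child])
        counts[key][row[child]] += 1
    ll = sum(k * math.log(k / sum(c))
             for c in counts.values() for k in c if k)
    q = 1
    for p in parents:
        q *= arity[p]  # number of parent configurations
    return ll - 0.5 * math.log(n) * (arity[child] - 1) * q

def best_parents(data, order, arity, max_parents=2):
    # under a fixed topological ordering, decomposability lets each node's
    # parent set be chosen independently among its predecessors
    result = {}
    for i, child in enumerate(order):
        candidates = order[:i]
        best, best_score = (), bic_local(data, child, (), arity)
        for k in range(1, min(max_parents, len(candidates)) + 1):
            for ps in combinations(candidates, k):
                s = bic_local(data, child, ps, arity)
                if s > best_score:
                    best, best_score = ps, s
        result[child] = best
    return result
```

With data in which variable 1 is a deterministic copy of variable 0, the likelihood gain from the parent link outweighs the BIC penalty, so the arc 0 → 1 is recovered.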

Relevance: 30.00%

Abstract:

Over the last 15 years, the supernova community has endeavoured to directly identify progenitor stars for core-collapse supernovae discovered in nearby galaxies. These precursors are often visible as resolved stars in high-resolution images from space- and ground-based telescopes. The discovery rate of progenitor stars is limited by the local supernova rate and the availability and depth of archive images of galaxies, with 18 detections of precursor objects and 27 upper limits. This review compiles these results (from 1999 to 2013) in a distance-limited sample and discusses the implications of the findings. The vast majority of the detections of progenitor stars are of type II-P, II-L, or IIb, with one type Ib progenitor system detected and many more upper limits for progenitors of Ibc supernovae (14 in all). The data for these 45 supernova progenitors illustrate a remarkable deficit of high-luminosity stars above an apparent limit of log L/L⊙ ≃ 5.1 dex. For a typical Salpeter initial mass function, one would expect to have found 13 high-luminosity and high-mass progenitors by now. There is, possibly, only one object in this time- and volume-limited sample that is unambiguously high-mass (the progenitor of SN2009ip), although the nature of that supernova is still debated. The possible biases due to the influence of circumstellar dust, the luminosity analysis, and sample selection methods are reviewed. It does not appear likely that these can explain the missing high-mass progenitor stars. This review concludes that the community's work to date shows that the observed populations of supernovae in the local Universe are not, on the whole, produced by high-mass (M ≳ 18 M⊙) stars. Theoretical explosions of model stars also predict that black hole formation and failed supernovae tend to occur above an initial mass of M ≃ 18 M⊙.
The models also suggest there is no simple single mass division for neutron star or black hole formation and that there are islands of explodability for stars in the 8–120 M⊙ range. The observational constraints are quite consistent with the bulk of stars above M ≃ 18 M⊙ collapsing to form black holes with no visible supernovae.
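The quoted expectation of roughly 13 high-mass progenitors follows from integrating the initial mass function over the sample. A quick check, assuming a pure Salpeter slope (alpha = 2.35) and core-collapse progenitors between 8 and 120 solar masses (the exact limits used in the review may differ slightly):

```python
def salpeter_fraction(m_lo, m_hi, m_min=8.0, m_max=120.0, alpha=2.35):
    # fraction of core-collapse progenitors (dN/dM ~ M^-alpha between
    # m_min and m_max) whose initial mass falls in [m_lo, m_hi]
    integral = lambda a, b: (a ** (1 - alpha) - b ** (1 - alpha)) / (alpha - 1)
    return integral(m_lo, m_hi) / integral(m_min, m_max)

frac = salpeter_fraction(18.0, 120.0)  # share of progenitors above 18 solar masses
expected = 45 * frac                   # expected count in the 45-object sample
```

This gives a fraction near 0.32, i.e. about 14 of the 45 objects, consistent with the ~13 quoted once the review's exact mass limits are used.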

Relevance: 30.00%

Abstract:

Measurements of explosive nucleosynthesis yields in core-collapse supernovae provide tests for explosion models. We investigate constraints on explosive conditions derivable from measured amounts of nickel and iron after radioactive decays using nucleosynthesis networks with parameterized thermodynamic trajectories. The Ni/Fe ratio is for most regimes dominated by the production ratio of Ni-58/(Fe-54 + Ni-56), which tends to grow with higher neutron excess and with higher entropy. For SN 2012ec, a supernova (SN) that produced a Ni/Fe ratio of 3.4 ± 1.2 times solar, we find that burning of a fuel with neutron excess η ≈ 6 × 10⁻³ is required. Unless the progenitor metallicity is over five times solar, the only layer in the progenitor with such a neutron excess is the silicon shell. SNe producing large amounts of stable nickel thus suggest that this deep-lying layer can be, at least partially, ejected in the explosion. We find that common spherically symmetric models of M_ZAMS ≲ 13 M⊙ stars exploding with a delay time of less than one second (M_cut < 1.5 M⊙) are able to achieve such silicon-shell ejection. SNe that produce solar or subsolar Ni/Fe ratios, such as SN 1987A, must instead have burnt and ejected only oxygen-shell material, which allows a lower limit to the mass cut to be set. Finally, we find that the extreme Ni/Fe value of 60–75 times solar derived for the Crab cannot be reproduced by any realistic entropy burning outside the iron core, and neutrino-neutronization obtained in electron capture models remains the only viable explanation.
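The bookkeeping behind the quoted ratio can be made explicit: after the decay chain Ni-56 → Co-56 → Fe-56, the late-time stable nickel is essentially Ni-58 while the ejected Ni-56 mass ends up as iron, so the measured Ni/Fe tracks Ni-58/(Fe-54 + Ni-56). A minimal sketch; the solar normalization must be supplied by the caller from a solar abundance table, and no particular value is assumed here.

```python
def ni_fe_ratio(m_ni56, m_ni58, m_fe54, solar_ni_fe):
    # ejecta masses (solar masses) of the three dominant isotopes;
    # after radioactive decays, the former Ni-56 mass counts as iron
    # while the surviving stable nickel is Ni-58
    ni = m_ni58
    fe = m_fe54 + m_ni56
    return (ni / fe) / solar_ni_fe   # Ni/Fe in units of the solar ratio
```

For example, ejecta with ten times more Ni-56 than Ni-58 and negligible Fe-54, measured against an assumed solar Ni/Fe of 0.05, give twice the solar ratio.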

Relevance: 30.00%

Abstract:

There is a lack of consistent evidence as to how well Parkinson's disease (PD) patients are able to accurately time their movements across space with an external acoustic signal. For years, research based on the finger-tapping paradigm, the most popular paradigm for exploring the brain's ability to time movement, has provided strong evidence that patients are not able to accurately reproduce an isochronous interval [i.e., Ref. (1)]. This was undermined by Spencer and Ivry (2), who suggested a specific deficit in temporal control linked to emergent, rhythmical movement rather than event-based actions, which primarily involve the cerebellum. In this study, we investigated the motor timing of seven idiopathic PD participants in an event-based sensorimotor synchronization task. Participants were asked to move their finger horizontally between two predefined target zones to synchronize with the occurrence of two sound events at two time intervals (1.5 and 2.5 s). The width of the targets and the distance between them were manipulated to investigate the impact of accuracy demands and movement amplitude on timing performance. The results showed that participants with PD demonstrated specific difficulties when trying to accurately synchronize their movements to a beat. The extent to which their ability to synchronize movement was compromised was found to be related to the severity of PD, but independent of the spatial constraints of the task.
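The standard way to quantify performance in such a synchronization task is the tap-to-beat asynchrony. A minimal sketch of the two usual measures (the sign convention, where negative values indicate anticipation of the beat, is the common one in this literature):

```python
import math

def asynchrony_stats(tap_times, beat_times):
    # mean asynchrony (tap minus beat; negative = tapping ahead of the
    # beat) and its variability, the two standard synchronization measures
    asyncs = [t - b for t, b in zip(tap_times, beat_times)]
    mean = sum(asyncs) / len(asyncs)
    sd = math.sqrt(sum((a - mean) ** 2 for a in asyncs) / len(asyncs))
    return mean, sd
```

A participant who taps a consistent 50 ms ahead of a 1.5 s beat shows a mean asynchrony of -0.05 s with near-zero variability; severity-related deficits would show up as larger variability.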