947 results for J2 - Time Allocation


Relevance:

30.00%

Publisher:

Abstract:

In this contribution, a novel iterative bit- and power-allocation (IBPA) approach is developed for transmitting at a given data rate (in bit/s/Hz) over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. To this end, the available bits are iteratively allocated to the active MIMO layers presenting the minimum transmit-power requirement per time slot.
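The greedy principle described here lends itself to a short sketch. The following is a minimal illustration, assuming the standard M-QAM approximation in which the transmit power needed for b bits on a layer with gain g scales as (2^b - 1)/g; the function name, gain values and 2-bit step are ours, not the paper's exact algorithm.

```python
import numpy as np

def iterative_bit_power_allocation(layer_gains, total_bits, step=2):
    """Greedy IBPA sketch: each iteration grants `step` bits to the
    layer whose incremental transmit power is smallest, assuming the
    power for b bits on a layer with gain g scales as (2**b - 1) / g
    (a common M-QAM approximation, not the paper's exact expression)."""
    def power(b, g):
        return (2.0 ** b - 1.0) / g
    bits = np.zeros(len(layer_gains), dtype=int)
    for _ in range(0, total_bits, step):
        # incremental power of adding `step` bits to each layer
        delta = [power(b + step, g) - power(b, g)
                 for b, g in zip(bits, layer_gains)]
        bits[np.argmin(delta)] += step
    return bits, [power(b, g) for b, g in zip(bits, layer_gains)]

# Example: 4 layers of a correlated 4 x 4 MIMO channel, 8 bit/s/Hz total
bits, powers = iterative_bit_power_allocation([2.1, 1.2, 0.5, 0.1], 8)
print(bits, powers)
```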

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a theoretical model for the analysis of farm household labour allocation decisions. The agricultural household model is selected as the most appropriate theoretical framework: a model based on the assumption that households behave so as to maximise utility, which is a function of consumption and leisure, subject to time and budget constraints. The model can be used to describe the role of government subsidies in farm household labour allocation decisions; in particular, the impact of decoupled subsidies on labour allocation can be examined. Decoupled subsidies are a labour-free payment and as such represent an increase in labour-free income or wealth. An increase in wealth allows farm households to work less while maintaining consumption. On the other hand, decoupled subsidies represent a decline in the return to farm labour and may lead to a substitution effect: farmers may choose to substitute non-farm work for farm work. The theoretical framework proposed in this paper allows these two conflicting effects to be examined.
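The household's problem can be written compactly; the formulation below is a conventional textbook version under our own notation, not the paper's.

```latex
\max_{C,\; l,\; t_f,\; t_o} \; U(C, l)
\quad \text{subject to} \quad
p\,C \;\le\; \pi(t_f) + w\,t_o + S,
\qquad
t_f + t_o + l \;=\; T,
```

where C is consumption, l leisure, t_f and t_o farm and off-farm labour, π(t_f) farm profit, w the off-farm wage, S the decoupled subsidy and T the time endowment. A larger S relaxes the budget constraint without raising the marginal return to any hour of work (the wealth effect), while decoupling lowers π'(t_f) relative to w (the substitution effect toward off-farm work).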

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND The application of therapeutic hypothermia (TH) for 12 to 24 hours following out-of-hospital cardiac arrest (OHCA) has been associated with decreased mortality and improved neurological function. However, the optimal duration of cooling is not known. We aimed to investigate whether targeted temperature management (TTM) at 33 ± 1 °C for 48 hours, compared to 24 hours, results in a better long-term neurological outcome. METHODS The TTH48 trial is an investigator-initiated pragmatic international trial in which patients resuscitated from OHCA are randomised to TTM at 33 ± 1 °C for either 24 or 48 hours. Inclusion criteria are: age over 17 and under 80 years; presumed cardiac origin of arrest; and Glasgow Coma Scale (GCS) score below 8 on admission. The primary outcome is neurological outcome at 6 months using the Cerebral Performance Category score (CPC), assessed by an assessor blinded to treatment allocation and dichotomised to good (CPC 1-2) or poor (CPC 3-5) outcome. Secondary outcomes are: 6-month mortality; incidence of infection, bleeding and organ failure; and CPC at hospital discharge, at day 28 and at day 90 following OHCA. Assuming that 50% of the patients treated for 24 hours will have a poor outcome at 6 months, a study including 350 patients (175/arm) will have 80% power (at a significance level of 5%) to detect an absolute 15% difference in the primary outcome between treatment groups. A safety interim analysis was performed after the inclusion of 175 patients. DISCUSSION This is the first randomised trial to investigate the effect of the duration of TTM at 33 ± 1 °C in adult OHCA patients. We anticipate that the results of this trial will add significant knowledge regarding the management of cooling procedures in OHCA patients. TRIAL REGISTRATION NCT01689077.
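The stated sample size can be reproduced, approximately, with a standard two-proportion power calculation; the sketch below uses statsmodels and may differ in detail from the trial's own computation.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Cohen's h for 50% vs 35% poor outcome (an absolute 15% difference)
h = proportion_effectsize(0.50, 0.35)
n_per_arm = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, alternative="two-sided")
print(round(n_per_arm))  # ~170 per arm, in line with the trial's 175/arm
```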

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we develop a new generative model of social networks belonging to the family of Time-Varying Networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks, it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality, time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks has prompted a paradigm shift from a static representation of graphs to a time-varying one. In this work we exploit the Activity-Driven paradigm (a modelling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping a social network's topology and temporal structure: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distribution of inter-event times in social networks. We empirically measure the properties of these two mechanisms in seven real-world datasets and develop a data-driven model, which we solve analytically. We then check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's theory of the adjacent possible and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
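A minimal sketch of an activity-driven network with a tie-reinforcement (social capital) mechanism in the spirit of the model described; the reinforcement probability k/(k+c) and all parameters are illustrative, and burstiness (heavy-tailed inter-event times) is omitted for brevity.

```python
import random

def activity_driven_with_memory(n_nodes, activities, steps, c=1.0):
    """Minimal activity-driven temporal network with a tie-reinforcement
    (social capital) mechanism: an active node re-contacts a known
    neighbour with probability k/(k+c), where k is its degree so far,
    and otherwise explores a new random node. Parameters and functional
    forms are illustrative, not the thesis's calibrated ones."""
    contacts = {i: set() for i in range(n_nodes)}  # ego networks so far
    events = []                                    # (t, i, j) activations
    for t in range(steps):
        for i in range(n_nodes):
            if random.random() < activities[i]:    # node i becomes active
                k = len(contacts[i])
                if k and random.random() < k / (k + c):
                    j = random.choice(tuple(contacts[i]))   # reinforce
                else:
                    j = random.choice([v for v in range(n_nodes) if v != i])
                contacts[i].add(j)
                contacts[j].add(i)
                events.append((t, i, j))
    return events

acts = [random.uniform(0.01, 0.1) for _ in range(100)]
print(len(activity_driven_with_memory(100, acts, steps=200)), "activations")
```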

Relevance:

30.00%

Publisher:

Abstract:

Aims: To date, there is no convincing evidence that non-insulin treated patients who undertake self-blood glucose monitoring (SBGM) have better glycaemic control than those who test their urine. This has led to a recommendation that non-insulin dependent patients undertake urine testing, which is the cheaper option. This recommendation does not take account of patients' experiences and views. This study explores the respective merits of urine testing and SBGM from the perspectives of newly diagnosed patients with Type 2 diabetes. Methods: Qualitative study using repeat in-depth interviews with 40 patients. Patients were interviewed three times at 6-monthly intervals over 1 year. Patients were recruited from hospital clinics and general practices in Lothian, Scotland. The study was informed by grounded theory, which involves concurrent data collection and analysis. Results: Patients reported strongly negative views of urine testing, particularly when they compared it with SBGM. Patients perceived urine testing as less convenient, less hygienic and less accurate than SBGM. Most patients assumed that blood glucose meters were given to those with a more advanced or serious form of diabetes. This could have implications for how they thought about their own disease. Patients often interpreted negative urine results as indicating that they could not have diabetes. Conclusions: Professionals should be aware of the meanings and understandings patients attach to the receipt and use of different types of self-monitoring equipment. Guidelines that promote the use of consistent criteria for equipment allocation are required. The manner in which negative urine results are conveyed needs to be reconsidered.

Relevance:

30.00%

Publisher:

Abstract:

Are persistent marketing effects most likely to appear right after the introduction of a product? The authors give an affirmative answer to this question by developing a model that explicitly describes how persistent and transient marketing effects evolve over time. The proposed model provides managers with a valuable tool to evaluate their allocation of marketing expenditures over time. An application of the model to a range of pharmaceutical products, estimated through (exact initial) Kalman filtering, indicates that both persistent and transient effects occur predominantly immediately after a brand's introduction; subsequently, the size of the effects declines. The authors theoretically and empirically compare their methodology with methodology based on unit-root testing and demonstrate that the need for unit-root tests creates difficulties in applying conventional persistence modeling. The authors recommend that marketing models either accommodate persistent effects that change over time or be applied only to mature brands or limited time windows.
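The core idea, a marketing effect that is itself a latent state evolving over time, can be illustrated with a scalar time-varying-coefficient Kalman filter; this generic filter is a sketch of the class of model involved, not the authors' (exact initial) specification.

```python
import numpy as np

def tv_coefficient_filter(y, x, q=0.01, r=1.0, beta0=0.0, p0=10.0):
    """Kalman filter for y_t = beta_t * x_t + noise, where the effect
    beta_t follows a random walk, so persistent effects may change over
    time. q, r and the initial state are illustrative choices."""
    beta, p, path = beta0, p0, []
    for yt, xt in zip(y, x):
        p += q                              # predict the random-walk state
        k = p * xt / (xt * p * xt + r)      # Kalman gain
        beta += k * (yt - xt * beta)        # update with the observation
        p *= 1.0 - k * xt
        path.append(beta)
    return np.array(path)

# Example: an effect that is strong at launch and decays afterwards
t = np.arange(100)
x = np.random.randn(100)
y = 2.0 * np.exp(-t / 20) * x + 0.5 * np.random.randn(100)
print(tv_coefficient_filter(y, x)[[0, 20, 99]])  # estimated effect path
```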

Relevance:

30.00%

Publisher:

Abstract:

We report for the first time on the limitations in the operational power range of few-mode-fiber-based transmission systems employing 28 Gbaud quadrature phase shift keying transponders over 1,600 km. It is demonstrated that if an additional mode is used on a pre-existing few-mode transmission link and allowed to optimize its performance, it will have a significant impact on the pre-existing mode. In particular, we show that for low mode coupling strengths (the weak coupling regime), the newly added variable-power mode does not considerably impact the fixed-power existing mode, with performance penalties of less than 2 dB (in Q-factor). On the other hand, as the mode coupling strength is increased (the strong coupling regime), individual launch power optimization significantly degrades system performance, with penalties of up to ~6 dB. Our results further suggest that mutual power optimization of both the fixed-power and variable-power modes reduces power-allocation-related penalties to less than 3 dB for any given coupling strength, for both high and low differential mode delays. © 2013 Optical Society of America.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel market-based method, inspired by retail markets, for resource allocation in fully decentralised systems where agents are self-interested. Our market mechanism requires no coordinating node and no complex negotiation. The stability of outcome allocations (those at equilibrium) is analysed and compared for three buyer behaviour models. In order to capture the interaction between self-interested agents, we propose the use of competitive coevolution. Our approach is highly scalable and may be tuned to achieve specified outcome resource allocations. We demonstrate the behaviour of our approach in simulation, where evolutionary market agents act on behalf of service-providing nodes to adaptively price their resources over time in response to market conditions. We show that this leads the system to the predicted outcome resource allocation. Furthermore, the system remains stable in the presence of small changes in price when buyers' decision functions degrade gracefully. © 2009 The Author(s).

Relevance:

30.00%

Publisher:

Abstract:

We introduce self-interested evolutionary market agents, which act on behalf of service providers in a large decentralised system to adaptively price their resources over time. Our agents competitively co-evolve in the live market, driving it towards the Bertrand equilibrium: the non-cooperative Nash equilibrium at which all sellers charge their reserve price and share the market equally. We demonstrate that this outcome results in even load-balancing between the service providers. Our contribution in this paper is twofold: the use of online competitive co-evolution of self-interested service providers to drive a decentralised market towards equilibrium, and a demonstration that load-balancing behaviour emerges under the assumptions we describe. Unlike previous studies on this topic, all our agents are entirely self-interested; no cooperation is assumed. This makes our problem a non-trivial and more realistic one.
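A toy simulation of the pricing dynamics in these two abstracts; the mutation-and-selection rule below is a hypothetical simplification meant only to illustrate the drive towards the Bertrand outcome, not the papers' agent design.

```python
import random

def coevolve_prices(n_sellers, reserve, demand=1.0, rounds=2000, sigma=0.05):
    """Each round one seller mutates its price and keeps the mutation
    only if it strictly raises its own profit; buyers always pick the
    cheapest seller. Undercutting drives prices towards the reserve."""
    prices = [reserve + random.uniform(0.5, 2.0) for _ in range(n_sellers)]

    def profit(i, ps):
        cheapest = min(ps)
        winners = [j for j, p in enumerate(ps) if p == cheapest]
        # winners split the demand; the margin is price minus reserve
        return (ps[i] - reserve) * demand / len(winners) if i in winners else 0.0

    for _ in range(rounds):
        i = random.randrange(n_sellers)
        trial = prices[:]
        trial[i] = max(reserve, prices[i] + random.gauss(0.0, sigma))
        if profit(i, trial) > profit(i, prices):
            prices = trial
    return prices

print(coevolve_prices(5, reserve=1.0))  # prices driven down towards 1.0
```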

Relevance:

30.00%

Publisher:

Abstract:

A real-time adaptive resource allocation algorithm that considers the end user's Quality of Experience (QoE) in the context of video streaming services is presented in this work. An objective no-reference quality metric, Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment has been introduced to adaptively set the scheduler's parameter and maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e. video code rates) demanded by users and the data rates allocated by the scheduler is taken into account as well. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since the user's capability varies as environmental conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined, and the results are compared with the most common existing scheduling methods.
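A sketch of the PI-driven idea: each user's share of capacity grows with its Pause Intensity and is capped at its demanded video code rate. Channel status and the full scheduling policy are omitted, and all names and parameters here are illustrative rather than the paper's.

```python
def pi_weighted_allocation(capacity, demands, pause_intensity, alpha=1.0):
    """Allocate `capacity` among users in proportion to PI**alpha, so
    users whose playback pauses most are prioritised; `alpha` trades
    fairness against efficiency. Rates are capped at the demanded code
    rate and one pass redistributes the leftover capacity."""
    weights = [pi ** alpha for pi in pause_intensity]
    total = sum(weights) or 1.0
    rates = [capacity * w / total for w in weights]
    leftover = sum(max(0.0, r - d) for r, d in zip(rates, demands))
    rates = [min(r, d) for r, d in zip(rates, demands)]
    needy = [i for i, (r, d) in enumerate(zip(rates, demands)) if r < d]
    for i in needy:
        grant = min(leftover / len(needy), demands[i] - rates[i])
        rates[i] += grant
    return rates

print(pi_weighted_allocation(10.0, demands=[4, 4, 4],
                             pause_intensity=[0.8, 0.3, 0.1]))
```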

Relevance:

30.00%

Publisher:

Abstract:

In this article, the results achieved by applying an electromagnetism-inspired (EM) metaheuristic to the uncapacitated multiple allocation hub location problem (UMAHLP) are discussed. An appropriate objective function that natively conforms to the problem, a 1-swap local search, and a scaling technique contribute to good overall performance. Computational tests demonstrate the reliability of this method, since the EM-inspired metaheuristic reaches all but one of the optimal/best-known solutions for UMAHLP in reasonable time.
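A sketch of the 1-swap neighbourhood on UMAHLP under standard cost assumptions (unit flows, inter-hub discount alpha); this shows the move structure only, not the EM-inspired metaheuristic that drives it.

```python
from itertools import product

def umahlp_cost(hubs, c, fixed, alpha=0.75):
    """Cost of a hub set for uncapacitated multiple allocation: each
    unit flow i -> j takes its cheapest hub pair (collection, then
    discounted inter-hub transfer, then distribution), plus fixed
    hub-opening costs. Unit flows are assumed for brevity."""
    n = len(c)
    route = sum(min(c[i][k] + alpha * c[k][l] + c[l][j]
                    for k, l in product(hubs, repeat=2))
                for i, j in product(range(n), repeat=2))
    return route + sum(fixed[k] for k in hubs)

def one_swap_local_search(hubs, c, fixed, alpha=0.75):
    """First-improvement 1-swap descent: close one hub and open one
    non-hub whenever that lowers the total cost (add/drop moves are
    analogous and omitted here)."""
    hubs = set(hubs)
    best = umahlp_cost(hubs, c, fixed, alpha)
    improved = True
    while improved:
        improved = False
        for h in list(hubs):
            for v in set(range(len(c))) - hubs:
                cand = (hubs - {h}) | {v}
                cost = umahlp_cost(cand, c, fixed, alpha)
                if cost < best:
                    hubs, best, improved = cand, cost, True
                    break
            if improved:
                break
    return hubs, best

c = [[0, 3, 6], [3, 0, 2], [6, 2, 0]]                  # toy distances
print(one_swap_local_search({0}, c, fixed=[5, 4, 6]))  # -> ({1}, 34.0)
```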

Relevance:

30.00%

Publisher:

Abstract:

In this paper a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. CSAHLP consists of two subproblems: choosing a set of hubs from all nodes in a network, and finding the optimal allocation of non-hubs to hubs once the set of hubs is known. The VNS algorithm was used for the first subproblem, while the CPLEX solver was used for the second. Computational results demonstrate that the proposed algorithm reaches optimal solutions on all 20 test instances for which optimal solutions are known, and does so in short computational time.
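A reduced-VNS-style skeleton of the scheme described: shaking swaps k hubs for non-hubs, and the allocation subproblem is delegated to a pluggable cost_of callback (the paper solves it exactly with CPLEX). The interface and the toy cost below are hypothetical.

```python
import random

def vns_hub_location(nodes, n_hubs, cost_of, k_max=3, iters=200):
    """Shake in neighbourhood k (swap k random hubs for k random
    non-hubs); move and reset to k = 1 on improvement, otherwise widen
    the neighbourhood. Assumes k_max <= n_hubs."""
    hubs = set(random.sample(nodes, n_hubs))
    best = cost_of(hubs)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            out = random.sample(sorted(hubs), k)
            inn = random.sample(sorted(set(nodes) - hubs), k)
            cand = (hubs - set(out)) | set(inn)
            cost = cost_of(cand)
            if cost < best:
                hubs, best, k = cand, cost, 1
            else:
                k += 1
    return hubs, best

# Toy 1-D allocation cost standing in for the CPLEX-solved subproblem
nodes = list(range(10))
toy_cost = lambda H: sum(min(abs(i - h) for h in H) for i in nodes)
print(vns_hub_location(nodes, n_hubs=2, cost_of=toy_cost))
```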

Relevance:

30.00%

Publisher:

Abstract:

An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, emergency managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem, in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but without incorporating the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes and models them as Discrete Markov Chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. For the meta-model, however, the demand is an input determined using Hazards United States (HAZUS), software developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic has been developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA), in which the initial solution and the cooling rate were determined via a Design of Experiments. The experiments showed that the initial temperature (T0) is irrelevant, but that the temperature reduction (δ) must be very gradual. Assessment of the meta-model indicates that the Markov Chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions in an efficient manner. Finally, an illustrative example shows that the meta-model is practical.
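The intensity-as-a-Markov-chain idea can be sketched in a few lines; the categories and transition probabilities below are invented for illustration (the dissertation estimates its own from hurricane data).

```python
import random

# Hypothetical 6-hourly transition matrix over intensity categories
CATEGORIES = ["TS", "Cat1", "Cat2", "Cat3"]
P = [
    [0.70, 0.25, 0.05, 0.00],  # from TS
    [0.20, 0.55, 0.20, 0.05],  # from Cat1
    [0.05, 0.25, 0.50, 0.20],  # from Cat2
    [0.00, 0.10, 0.30, 0.60],  # from Cat3
]

def simulate_intensity(start, steps):
    """Sample one intensity path of the Discrete Markov Chain."""
    state = CATEGORIES.index(start)
    path = [start]
    for _ in range(steps):
        state = random.choices(range(len(P)), weights=P[state])[0]
        path.append(CATEGORIES[state])
    return path

print(simulate_intensity("Cat1", steps=8))  # one sampled 48-hour path
```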

Relevance:

30.00%

Publisher:

Abstract:

This dissertation examined the response of a forest ecosystem exposed to long-term elevated atmospheric CO2 to the termination of CO2 enrichment, investigating the responses, and underlying mechanisms, of two important components of the ecosystem's carbon cycle: stomatal conductance and soil respiration. Because the contribution of understory vegetation to the whole ecosystem grew with time, we first investigated the effect of elevated CO2 on understory vegetation. The potential growth-enhancing effects of elevated CO2 were not observed, and light appeared to be a limiting factor. Secondly, we examined the importance of aerodynamic conductance in determining canopy conductance and found that its effect is negligible. The responses of stomatal conductance and soil respiration were assessed using a Bayesian state space model. Within two years of the termination of CO2 enrichment, stomatal conductance in the formerly elevated CO2 treatment returned to the ambient level, whereas soil respiration fell below the ambient level and did not recover within those two years.

Relevance:

30.00%

Publisher:

Abstract:

A diverse range of concentrate allocation strategies are adopted on dairy farms. The objectives of this study were to examine the effects on cow performance [dry matter (DM) intake (DMI), milk yield and composition, body tissue changes, and fertility] of adopting 2 contrasting concentrate allocation strategies over the first 140 d of lactation. Seventy-seven Holstein-Friesian dairy cows were allocated to 1 of 2 concentrate allocation strategies at calving, namely group or individual cow. Cows on the group strategy were offered a mixed ration comprising grass silage and concentrates in a 50:50 ratio on a DM basis. Cows on the individual cow strategy were offered a basal mixed ration comprising grass silage and concentrates (the latter included in the mix to achieve a mean intake of 6 kg/cow per day), which was formulated to meet the cow's energy requirements for maintenance plus 24 kg of milk/cow per day. Additional concentrates were offered via an out-of-parlor feeding system, with the amount offered adjusted weekly based on each individual cow's milk yield during the previous week. In addition, all cows received a small quantity of straw in the mixed ration part of the diet (approximately 0.3 kg/cow per day), plus 0.5 kg of concentrate twice daily in the milking parlor. Mean concentrate intakes over the study period were similar with each of the 2 allocation strategies (11.5 and 11.7 kg of DM/cow per day for group and individual cow, respectively), although the pattern of intake with each treatment differed over time. Concentrate allocation strategy had no effect on milk yield (39.3 and 38.0 kg/d for group and individual cow, respectively), milk composition, or milk constituent yield. The milk yield response curves with each treatment were largely aligned with the concentrate DMI curves. Cows on the individual cow treatment had a greater range of concentrate DMI and milk yields than those on the group treatment. With the exception of a tendency for cows on the individual cow treatment to lose more body weight to nadir than cows on the group treatment, concentrate allocation strategy had little effect on either body weight or body condition score over the experimental period. Cows on the individual cow treatment had a higher pregnancy rate to first and second service and tended to have a higher 100-d in-calf rate than cows on the group treatment. This study demonstrates that concentrate allocation strategy had little effect on overall production performance.