977 results for Production scheduling.
Abstract:
Contemporary cellular standards, such as Long Term Evolution (LTE) and LTE-Advanced, employ orthogonal frequency-division multiplexing (OFDM) and use frequency-domain scheduling and rate adaptation. In conjunction with feedback reduction schemes, high downlink spectral efficiencies are achieved while limiting the uplink feedback overhead. One such important scheme that has been adopted by these standards is best-m feedback, in which every user feeds back its m largest subchannel (SC) power gains and their corresponding indices. We analyze the single-cell average throughput of an OFDM system with uniformly correlated SC gains that employs best-m feedback and discrete rate adaptation. Our model incorporates feedback delay and three schedulers that cover a wide range of the throughput-versus-fairness tradeoff. We show that, for small m, correlation significantly reduces the average throughput with best-m feedback. This result is pertinent because correlation is high even in typical dispersive channels. We observe that the schedulers exhibit varied sensitivities to correlation and feedback delay. The analysis also leads to insightful expressions for the average throughput in the asymptotic regime of a large number of users.
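A minimal Python sketch of the best-m feedback idea described above: each user reports only its m strongest subchannels, and the scheduler picks, per subchannel, the reporting user with the highest gain. The greedy max-gain scheduler and the example numbers are illustrative assumptions, not the paper's analytical model.

```python
# Hedged sketch of best-m feedback: users report their m largest subchannel
# (SC) power gains plus indices; the scheduler only sees those reports.
import numpy as np

def best_m_report(sc_gains, m):
    """Return {sc_index: gain} for the m strongest subchannels of one user."""
    idx = np.argsort(sc_gains)[-m:]          # indices of the m largest gains
    return {int(i): float(sc_gains[i]) for i in idx}

def schedule_per_subchannel(reports, num_sc):
    """Greedy max-gain scheduler: per SC, pick the user reporting the highest gain."""
    assignment = {}
    for sc in range(num_sc):
        candidates = {u: r[sc] for u, r in reports.items() if sc in r}
        if candidates:                        # an SC may be unreported by every user
            assignment[sc] = max(candidates, key=candidates.get)
    return assignment

# Example: 4 users, 8 subchannels, m = 2 (gains drawn at random for illustration)
rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=(4, 8))
reports = {u: best_m_report(gains[u], m=2) for u in range(4)}
print(schedule_per_subchannel(reports, num_sc=8))
```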
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing implementations of schedulers in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks on all the cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which, in turn, affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose, in this paper, an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks in these systems. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which concurrently executes real-time and non-real-time tasks on all the cores of the Linux OS. The experimental results show that the execution overhead of real-time tasks in this implementation of SchedISA is considerably lower than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
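For readers unfamiliar with P-EDF, the following user-space Python sketch shows its two ingredients: tasks are first partitioned onto cores (here by first-fit on utilization), and each core then runs EDF locally. This is only an illustrative model under assumed task parameters, not the SchedISA or SCHED_DEADLINE kernel implementation.

```python
# Toy model of partitioned EDF (P-EDF): first-fit partitioning by utilization,
# then per-core earliest-deadline-first selection. Task names are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float        # worst-case execution time
    period: float      # period == relative deadline (implicit deadlines assumed)

    @property
    def utilization(self):
        return self.wcet / self.period

def partition_first_fit(tasks, num_cores):
    """Assign each task to the first core whose EDF utilization bound (1.0) still holds."""
    cores = [[] for _ in range(num_cores)]
    loads = [0.0] * num_cores
    for t in sorted(tasks, key=lambda t: t.utilization, reverse=True):
        for c in range(num_cores):
            if loads[c] + t.utilization <= 1.0:
                cores[c].append(t)
                loads[c] += t.utilization
                break
        else:
            raise ValueError(f"task {t.name} does not fit on any core")
    return cores

def edf_pick(ready_jobs):
    """On one core, EDF runs the ready job with the earliest absolute deadline."""
    return min(ready_jobs, key=lambda job: job["abs_deadline"]) if ready_jobs else None

tasks = [Task("t1", 2, 10), Task("t2", 3, 15), Task("t3", 5, 20), Task("t4", 1, 5)]
print([[t.name for t in core] for core in partition_first_fit(tasks, num_cores=2)])
print(edf_pick([{"abs_deadline": 12}, {"abs_deadline": 7}]))
```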
Abstract:
Measurement of the self-coupling of the 125 GeV Higgs boson is one of the most crucial tasks for the high-luminosity run of the LHC, and it can only be measured in the di-Higgs final state. In the minimal supersymmetric standard model, the heavy CP-even Higgs boson (H) can decay into a pair of the lighter 125 GeV Higgs bosons (h) and can therefore influence the rate of di-Higgs production. We investigate the role of single H production in the context of measuring the self-coupling of h. We find that the H -> hh decay can substantially change the extracted value of the Higgs (h) self-coupling in the low tan beta regime, where the mass of the heavy Higgs boson lies between 250 and 600 GeV, and, depending on the parameter space, this may appear as an enhancement of the self-coupling of the 125 GeV Higgs boson.
Abstract:
We analyse the hVV (V = W, Z) vertex in a model-independent way using Vh production. To that end, we consider possible corrections to the Standard Model Higgs Lagrangian in the form of higher-dimensional operators which parametrise the effects of new physics. In our analysis, we pay special attention to linear observables that can be used to probe CP violation in this vertex. Considering the associated production of a Higgs boson with a vector boson (W or Z), we use jet substructure methods to define angular observables which are sensitive to new physics effects, including an asymmetry which is linearly sensitive to the presence of CP-odd effects. We demonstrate how to use these observables to place bounds on the presence of higher-dimensional operators, and quantify these statements using a log-likelihood analysis. Our approach allows one to separately probe the hZZ and hWW vertices, involving arbitrary combinations of BSM operators, at the Large Hadron Collider.
Abstract:
A comparative study of two bacterial strains, namely Bacillus licheniformis and Bacillus firmus, in the production of bioflocculants was carried out. The highest bioflocculant yield of 16.55 g/L was obtained from B. licheniformis (L) and 10 g/L from B. firmus (F). The bioflocculants obtained from the bacterial species were water soluble and insoluble in organic solvents. FTIR spectral analysis revealed the presence of hydroxyl, carboxyl and sugar derivatives in the bioflocculants. Thermal characterization by differential scanning calorimetry (DSC) showed the crystalline transition and the melting point (T-m) at 90-100 degrees C. The effects of bioflocculant dosage and pH on the flocculation of clay fines were evaluated. The highest bioflocculation efficiency on kaolin clay suspensions was observed at an optimum bioflocculant dosage of 5 g/L. The optimum pH range for maximum bioflocculation was 7-9. The bioflocculants exhibited high efficiency in dye decolorization. The maximum Cr(VI) removal was found to be 85% for L (bioflocculant dosage of 2 g/L). This study demonstrates that microbial bioflocculants have potential applications in mineral processing, such as selective flocculation of mineral fines, decolorization of dye solutions and remediation of toxic metal solutions.
Abstract:
Prediction of the queue waiting times of jobs submitted to production parallel batch systems is important for providing overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times for jobs by employing multi-class classification of similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs a multi-class classification using Support Vector Machines (SVMs) among all the classes of jobs. The probabilities given by the SVM for the class predicted using kNN and its neighboring classes are used to provide a set of ranges of predicted wait times with probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize the wait times of the jobs. Experiments with different production supercomputer job traces show that our prediction strategies give correct predictions for about 77-87% of the jobs and also result in about 12% better accuracy than the next best existing method. Experiments with our meta-scheduling strategy using different production and synthetic job traces for various system sizes, partitioning schemes and workloads show that it performs much better than existing scheduling policies, reducing the overall average queue waiting time of the jobs by about 47%.
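A rough sketch of the two-stage idea described above, assuming scikit-learn: a kNN regressor gives a point wait-time estimate, and an SVM classifier over wait-time ranges supplies probabilities for the predicted class and its neighbours. The feature names, toy history and bin edges are illustrative assumptions, not the paper's configuration.

```python
# Stage 1: kNN point prediction of wait time; Stage 2: SVM class probabilities
# over wait-time ranges around the class containing the point prediction.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

# Toy history: [requested_cores, requested_walltime_s, queue_length] -> wait time (s)
X = np.array([[8, 900, 1], [8, 1800, 2], [16, 1800, 3],
              [16, 3600, 5], [32, 3600, 7], [32, 7200, 6],
              [16, 7200, 9], [64, 7200, 12], [64, 14400, 10],
              [128, 14400, 20], [256, 14400, 25], [256, 28800, 30]])
y = np.array([60, 120, 200, 600, 1500, 2000, 4000, 5400, 7000, 20000, 30000, 50000])

bins = [0, 300, 3600, 14400, np.inf]          # wait-time ranges (classes)
classes = np.digitize(y, bins) - 1

knn = KNeighborsRegressor(n_neighbors=3).fit(X, y)
svm = SVC(probability=True).fit(X, classes)

job = np.array([[32, 7200, 10]])
point = knn.predict(job)[0]                   # stage 1: point wait-time estimate
probs = svm.predict_proba(job)[0]             # stage 2: probability per range
k = int(np.digitize(point, bins) - 1)
for c in (k - 1, k, k + 1):                   # predicted class and its neighbours
    if 0 <= c < len(probs):
        print(f"range {c}: probability {probs[c]:.2f}")
```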
Abstract:
In this work, a methodology to achieve ordinary-, medium-, and high-strength self-consolidating concrete (SCC) with and without mineral additions is proposed. The inclusion of Class F fly ash increases the density of SCC but retards the hydration rate, resulting in substantial strength gain only after 28 days. This delayed strength gain due to the use of fly ash has been considered in the mixture design model. The accuracy of the proposed mixture design model is validated with the present test data and mixture and strength data obtained from diverse sources reported in the literature.
Abstract:
Bacteria can utilize multiple sources of carbon for growth, and for pathogenic bacteria like Mycobacterium tuberculosis, this ability is crucial for survival within the host. In addition, phenotypic changes are seen in mycobacteria grown under different carbon sources. In this study, we use Raman spectroscopy to analyze the biochemical components present in M. smegmatis cells when grown in three differently metabolized carbon sources. Our results show that carotenoid biosynthesis is enhanced when M. smegmatis is grown in glucose compared to glycerol and acetate. We demonstrate that this difference is most likely due to transcriptional upregulation of the carotenoid biosynthesis operon (crt) mediated by higher levels of the stress-responsive sigma factor SigF. Moreover, we find that increased SigF and carotenoid levels correlate with greater resistance of glucose-grown cells to oxidative stress. Thus, we demonstrate the use of Raman spectroscopy in unraveling unknown aspects of mycobacterial physiology and describe a novel effect of carbon source variation on mycobacteria.
Abstract:
The clever designs of natural transducers are a great source of inspiration for man-made systems. At small length scales, there are many transducers in nature that we are now beginning to understand and learn from. Here, we present an example of such a transducer that is used by field crickets to produce their characteristic song. This transducer uses two distinct components: a file of discrete teeth and a plectrum that engages intermittently to produce a series of impulses forming the loading, and an approximately triangular membrane, called the harp, that acts as a resonator and vibrates in response to the impulse-train loading. The file and plectrum act as a frequency multiplier, taking the low wing-beat frequency as the input and converting it into an impulse train of sufficiently high frequency, close to the resonant frequency of the harp. The forced vibration response results in beats, producing the characteristic sound of the cricket song. With careful measurements of the harp geometry and experimental measurements of its mechanical properties (Young's modulus determined from nanoindentation tests), we construct a finite element (FE) model of the harp and carry out modal analysis to determine its natural frequency. We fine-tune the model with appropriate elastic boundary conditions to match the natural frequency of the harp of a particular species, Gryllus bimaculatus. We model impulsive loading based on a loading scheme reported in the literature and predict the transient response of the harp. We show that the harp indeed produces beats and that its frequency content closely matches that of the recorded song. Subsequently, we use our FE model to show that the natural design is quite robust to perturbations in the file: the characteristic song frequency produced is unaffected by variations in the spacing of the file teeth and even by larger gaps. Based on the understanding of how this natural transducer works, one can design and fabricate efficient microscale acoustic devices such as microelectromechanical systems (MEMS) loudspeakers.
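The beating mechanism can be illustrated with a toy simulation: a lightly damped resonator (standing in for the harp) driven by an impulse train whose rate is detuned from the resonance responds with a beating envelope. All numbers below are assumed for illustration and are not the measured properties or FE model of the Gryllus bimaculatus harp.

```python
# Toy single-degree-of-freedom model: x'' + 2*zeta*w_n*x' + w_n^2*x = f(t),
# driven by a periodic impulse train detuned from the resonant frequency.
import numpy as np

f_res, zeta = 4700.0, 0.01          # assumed resonant frequency (Hz) and damping ratio
f_drive = 5000.0                    # assumed tooth-impact (impulse-train) rate (Hz)
fs, T = 200_000, 0.05               # sample rate (Hz) and simulated duration (s)

t = np.arange(0, T, 1 / fs)
w_n = 2 * np.pi * f_res
force = np.zeros_like(t)
force[(np.arange(len(t)) % round(fs / f_drive)) == 0] = 1.0   # one impulse per tooth impact

# Semi-implicit Euler integration of the oscillator
x, v, dt = 0.0, 0.0, 1 / fs
resp = np.empty_like(t)
for i, f in enumerate(force):
    a = f - 2 * zeta * w_n * v - w_n ** 2 * x
    v += a * dt
    x += v * dt
    resp[i] = x

# The envelope of `resp` oscillates at roughly |f_res - f_drive| = 300 Hz (beats)
print("peak-to-peak of response:", resp.max() - resp.min())
```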
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe the instantaneous service rates of only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in that subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue among all scheduling algorithms that base their decision on the current (or any finite past history of the) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case where the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue, in the absence of any channel state information, is large-deviations optimal.
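The two-step structure can be sketched as follows: first pick one observable subset using only current queue lengths, then apply one common form of the Exponential (Exp) rule within it. The subset-selection criterion (largest total backlog) and the Exp-rule constants below are illustrative assumptions, not the paper's exact Max-Exp specification.

```python
# Hedged sketch: queue-length-based subset sampling followed by the Exp rule
# applied to the flows whose instantaneous rates were revealed.
import math
import random

def pick_subset(queues, observable_subsets):
    """Choose the observable subset whose flows have the largest total backlog (assumption)."""
    return max(observable_subsets, key=lambda s: sum(queues[i] for i in s))

def exp_rule(queues, rates, subset, a=1.0, beta=1.0, eta=0.5):
    """Exp rule: serve the flow maximizing rate * exp((a*q - avg)/(beta + avg**eta))."""
    avg = sum(a * queues[i] for i in subset) / len(subset)
    score = lambda i: rates[i] * math.exp((a * queues[i] - avg) / (beta + avg ** eta))
    return max(subset, key=score)

queues = [7, 2, 9, 4]                        # current queue lengths (always known)
observable_subsets = [(0, 1), (2, 3)]        # disjoint observable subsets
chosen = pick_subset(queues, observable_subsets)
rates = {i: random.choice([1, 2, 3]) for i in chosen}   # rates revealed only for the chosen subset
print("serve flow", exp_rule(queues, rates, chosen))
```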
Abstract:
An energy approach within the framework of thermodynamics is used to model the fatigue process in plain concrete. Fatigue crack growth is an irreversible process associated with an irreversible entropy gain. A closed-form expression for the entropy generated during fatigue, in terms of the energy dissipated, is derived using principles of dimensional analysis and self-similarity. An increase in compliance is considered a measure of the damage accumulated during fatigue. The entropy at final fatigue failure is shown to be independent of loading and geometry and is proposed as a material property. A relationship between the energy dissipated and the number of cycles of fatigue loading is obtained.
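A generic form of the relation the abstract refers to can be written as below; the notation is an assumption for illustration (standard second-law bookkeeping), not the paper's exact closed-form expression.

```latex
% Entropy generated up to cycle N as temperature-weighted dissipated energy,
% with failure declared when it reaches a material constant S_f (assumed notation).
\begin{align}
  \Delta S(N) &= \int_{0}^{N} \frac{1}{T}\,\frac{\mathrm{d}W_{d}}{\mathrm{d}n}\,\mathrm{d}n, \\
  \text{failure:}\quad \Delta S(N_f) &= S_f \quad (\text{independent of loading and geometry}).
\end{align}
```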
Abstract:
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit weighted scheduling (DBWS) scheme uses a dynamic weight computation loosely based on the weighted round robin (WRR) policy. It predicts the weight required by the expedited forwarding (EF) service for the current time slot (t) based on two criteria: (i) the weight previously allocated to it at time (t-1), and (ii) the average increase in the queue length of the EF buffer. This prediction provides smooth bandwidth allocation to all the services by avoiding overbooking of resources for the EF service while still providing guaranteed service for it. The performance is analyzed for various scenarios under high, medium and low traffic conditions. The results show that packet loss and end-to-end delay are minimized and jitter is reduced, thereby meeting the quality of service (QoS) requirements of the network.
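A small Python sketch of the weight-prediction idea described above: the EF weight for slot t is derived from the weight at t-1 plus a term proportional to the average growth of the EF queue, and the remaining WRR share goes to the other classes. The exact update rule, bounds and normalization are assumptions for illustration, not the DBWS specification.

```python
# Hedged sketch of DBWS-style weight prediction and WRR slot allocation.
def predict_ef_weight(prev_weight, ef_queue_history, gain=0.5, w_min=0.1, w_max=0.8):
    """Predict the EF weight for the next slot from its previous weight and queue growth."""
    deltas = [b - a for a, b in zip(ef_queue_history, ef_queue_history[1:])]
    avg_increase = sum(deltas) / len(deltas) if deltas else 0.0
    w = prev_weight + gain * avg_increase / (1 + abs(avg_increase))   # bounded adjustment
    return min(max(w, w_min), w_max)          # avoid starving the other service classes

def allocate_slots(ef_weight, total_slots=100, other_classes=("AF", "BE")):
    """Split a WRR frame: EF gets its predicted share, the rest is divided evenly."""
    ef_slots = round(ef_weight * total_slots)
    rest = (total_slots - ef_slots) // len(other_classes)
    return {"EF": ef_slots, **{c: rest for c in other_classes}}

w = 0.3
ef_queue = [10, 14, 19, 23]                   # EF queue length over recent slots
w = predict_ef_weight(w, ef_queue)
print(w, allocate_slots(w))
```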
Abstract:
We present estimates of the single spin asymmetry (SSA) in the electroproduction of charmonium, taking into account the transverse momentum dependent (TMD) evolution of the gluon Sivers function and using the Color Evaporation Model of charmonium production. We estimate the SSA for JLab, HERMES, COMPASS and eRHIC energies using recent parameters for the quark Sivers functions, which are fitted using an evolution kernel in which the perturbative part is resummed up to next-to-leading-logarithmic accuracy. We find that these SSAs are much smaller than our first estimates obtained using DGLAP evolution, but are comparable to our estimates obtained using TMD evolution in which we had used an approximate analytical solution of the TMD evolution equation.
Abstract:
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high-velocity data in near real time. Unlike in batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper, we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based lookahead approach. It addresses variations not only in the input data rates but also in the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from the Amazon AWS IaaS public cloud. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
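An illustrative sketch (not the PLAStiCC implementation) of a prediction-based lookahead decision: estimate the input rate over the next window, then pick the resource allocation that maximizes predicted profit (throughput value minus VM cost). The trend-extrapolation predictor, profit model and all constants are assumptions.

```python
# Hedged sketch of lookahead-based elastic provisioning for a continuous dataflow.
def predict_rate(recent_rates, lookahead=5):
    """Naive lookahead: extrapolate the recent linear trend of the input rate."""
    if len(recent_rates) < 2:
        return recent_rates[-1]
    trend = (recent_rates[-1] - recent_rates[0]) / (len(recent_rates) - 1)
    return max(0.0, recent_rates[-1] + trend * lookahead)

def plan_vms(recent_rates, per_vm_capacity, vm_cost, value_per_msg, current_vms):
    """Pick the VM count that maximizes predicted profit over the lookahead window."""
    rate = predict_rate(recent_rates)
    best_vms, best_profit = current_vms, float("-inf")
    for vms in range(1, 2 * current_vms + 2):
        throughput = min(rate, vms * per_vm_capacity)
        profit = throughput * value_per_msg - vms * vm_cost
        if profit > best_profit:
            best_vms, best_profit = vms, profit
    return best_vms

rates = [800, 900, 1050, 1200]                # messages/s observed recently
print(plan_vms(rates, per_vm_capacity=400, vm_cost=10, value_per_msg=0.05, current_vms=3))
```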
Abstract:
Time division multiple access (TDMA) based channel access mechanisms perform better than contention-based channel access mechanisms in terms of channel utilization, reliability and power consumption, especially for high data rate applications in wireless sensor networks (WSNs). Most of the existing distributed TDMA scheduling techniques can be classified as either static or dynamic. The primary purpose of static TDMA scheduling algorithms is to improve channel utilization by generating a schedule of smaller length, but they usually take a longer time to compute the schedule and hence are not suitable for WSNs in which the network topology changes dynamically. On the other hand, dynamic TDMA scheduling algorithms generate a schedule quickly, but they are not efficient in terms of the generated schedule length. In this paper, we propose a novel scheme for TDMA scheduling in WSNs which can generate a compact schedule similar to that of static scheduling algorithms, while its runtime performance matches that of dynamic scheduling algorithms. Furthermore, the proposed distributed TDMA scheduling algorithm has the capability to trade off schedule length against the time required to generate the schedule. This allows WSN developers to tune the performance according to the requirements of the prevalent WSN applications and the need to perform re-scheduling. Finally, the proposed TDMA scheduling is fault-tolerant to packet loss caused by an erroneous wireless channel. The algorithm has been simulated using the Castalia simulator to compare its performance with that of other algorithms in terms of the generated schedule length and the time required to generate the TDMA schedule. Simulation results show that the proposed algorithm generates a compact schedule in very little time.
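The core sub-problem behind TDMA scheduling is assigning slots so that no two conflicting (interfering) nodes transmit in the same slot; the compact schedule length corresponds to using few slots. The centralized greedy-colouring toy below is only illustrative; the paper's algorithm is distributed and more elaborate, and the conflict graph here is a made-up example.

```python
# Hedged sketch: slot assignment as greedy colouring of a conflict graph.
def tdma_slots(conflicts):
    """conflicts: dict node -> set of nodes it must not share a slot with."""
    slot_of = {}
    # Schedule most-constrained nodes first to keep the schedule compact
    for node in sorted(conflicts, key=lambda n: len(conflicts[n]), reverse=True):
        used = {slot_of[nbr] for nbr in conflicts[node] if nbr in slot_of}
        slot = 0
        while slot in used:                   # smallest slot not used by any conflicting node
            slot += 1
        slot_of[node] = slot
    return slot_of

# Example conflict graph (e.g., one- and two-hop neighbours in a WSN)
conflicts = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "D"}, "D": {"B", "C", "E"}, "E": {"D"},
}
schedule = tdma_slots(conflicts)
print(schedule, "schedule length:", max(schedule.values()) + 1)
```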