903 results for shift scheduling
Abstract:
ACM Computing Classification System (1998): I.2.8, G.1.6.
Abstract:
This paper presents a new approach to resource allocation and scheduling that reflects the user's Quality of Experience (QoE). The proposed scheduling algorithm is examined in the context of the 3GPP Long Term Evolution (LTE) system. Pause Intensity (PI), an objective, no-reference quality assessment metric, is employed to represent user satisfaction in the eNodeB scheduler; PI is, in effect, a measure of discontinuity in the service. The performance of the proposed scheduling method is compared with two extreme cases, the maxCI and Round Robin scheduling schemes, which correspond to efficiency-oriented and fairness-oriented mechanisms, respectively. Our work reveals that the proposed method can operate between the fairness and efficiency requirements, in favor of higher user satisfaction up to the desired level. © VDE VERLAG GMBH.
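The abstract does not give the scheduler's exact weighting rule; as a rough illustration of a QoE-aware scheduler positioned between maxCI and Round Robin, the sketch below picks, in each transmission interval, the user maximizing a utility that blends normalized channel quality with the user's measured pause intensity. The utility form, the CQI normalization, and all parameter names are assumptions for illustration, not the paper's formulation.

```python
import random

def schedule_tti(users, alpha=0.5):
    """Pick one user for the next transmission interval. The utility blends a
    normalized channel-quality term (pure maxCI when alpha=1) with the user's
    measured pause intensity, so users suffering more service discontinuity
    get priority. This weighting rule is an illustrative assumption only."""
    def utility(u):
        efficiency = u["cqi"] / 15.0                      # CQI normalized to [0, 1]
        dissatisfaction = min(u["pause_intensity"], 1.0)  # higher PI = worse QoE
        return alpha * efficiency + (1.0 - alpha) * dissatisfaction
    return max(users, key=utility)

# Toy example: three users with random channel quality and pause intensity.
users = [{"id": i, "cqi": random.randint(1, 15), "pause_intensity": random.random()}
         for i in range(3)]
print(schedule_tti(users, alpha=0.5)["id"])
```

Setting alpha close to 1 recovers maxCI-like behavior, while alpha close to 0 pushes resources toward users with the worst QoE, which is the efficiency–fairness trade-off the abstract describes.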
Abstract:
2000 Mathematics Subject Classification: 35P20, 35J10, 35Q40.
Abstract:
We propose a similarity matching method (SMM) to obtain the change in Brillouin frequency shift (BFS), in which the BFS change is determined from the frequency difference between the detected spectrum and a selected reference spectrum by comparing their similarity. We also compared three similarity measures in simulation, which showed that the correlation coefficient determines the BFS change more accurately. Compared with other methods of determining the BFS change, the SMM is better suited to complex Brillouin spectrum profiles. More precise results and much faster processing were verified in our simulations and experiments. The experimental results show that the measurement uncertainty of the BFS is improved to 0.72 MHz using the SMM, almost one third of that obtained with the curve fitting method, and that the SMM derives the BFS change 120 times faster than the curve fitting method.
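A minimal sketch of the similarity-matching idea, assuming the reference spectrum is slid over a grid of candidate frequency shifts and the shift maximizing the correlation coefficient is taken as the BFS change; the Lorentzian gain profile, linewidth, and frequency grid below are illustrative, not the paper's experimental parameters.

```python
import numpy as np

def lorentzian(f, f0, width=30.0):
    """Toy Brillouin gain spectrum centred at f0 (MHz)."""
    return 1.0 / (1.0 + ((f - f0) / (width / 2)) ** 2)

def bfs_change_by_smm(freq, measured, reference, shifts):
    """Slide the reference spectrum over candidate frequency shifts and return
    the shift whose correlation coefficient with the measured spectrum is largest."""
    best_shift, best_corr = 0.0, -np.inf
    for s in shifts:
        shifted_ref = np.interp(freq, freq + s, reference)   # reference moved by s MHz
        corr = np.corrcoef(measured, shifted_ref)[0, 1]
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift

freq = np.linspace(10600, 10900, 601)                        # MHz grid
reference = lorentzian(freq, 10700)
measured = lorentzian(freq, 10723) + 0.02 * np.random.randn(freq.size)
shifts = np.arange(-50, 50.5, 0.5)
print(bfs_change_by_smm(freq, measured, reference, shifts))  # expected ≈ 23 MHz
```

Because only a correlation per candidate shift is computed, no curve fitting is needed, which is consistent with the speed advantage the abstract reports.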
Abstract:
For intelligent DC distributed power systems, data communication plays a vital role in system control and device monitoring. To achieve communication in a cost-effective way, power/signal dual modulation (PSDM), a method that integrates data transmission with power conversion, can be utilized. In this paper, an improved PSDM method using a phase-shift full-bridge (PSFB) converter is proposed. The method introduces an additional phase-control degree of freedom into the conventional PSFB control loop to realize communication over the same power conversion circuit. In this way, data modulation and power conversion are decoupled without extra wiring or coupling units, and the system structure is thus simplified. More importantly, the signal intensity can be regulated through the proposed perturbation depth, so the method can adapt to different operating conditions. Application of the proposed method to a DC distributed power system composed of several PSFB converters is discussed. A 2 kW prototype system with an embedded 5 kbps communication link has been implemented, and the effectiveness of the method is verified by experimental results.
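The abstract describes superimposing a small, data-dependent perturbation on the PSFB phase-shift command so that data ride on the power conversion, but it does not specify the modulation format. A purely illustrative sketch, with an assumed nominal phase-shift duty and perturbation depth, of encoding bits as a small offset on the command and decoding by thresholding:

```python
# Illustrative only: encode bits as a small perturbation (the "perturbation depth")
# added to the nominal PSFB phase-shift command, and decode by thresholding.
NOMINAL_PHASE = 0.40      # nominal phase-shift duty (assumed value)
DEPTH = 0.02              # perturbation depth; controls the embedded signal intensity

def modulate(bits, nominal=NOMINAL_PHASE, depth=DEPTH):
    """One phase-shift command per bit: +depth for '1', -depth for '0'."""
    return [nominal + (depth if b else -depth) for b in bits]

def demodulate(phases, nominal=NOMINAL_PHASE):
    """Recover bits by comparing each command against the nominal value."""
    return [1 if p > nominal else 0 for p in phases]

bits = [1, 0, 1, 1, 0]
assert demodulate(modulate(bits)) == bits
```

In the actual converter the perturbation would be filtered by the power stage and recovered from the measured waveform; the sketch only shows how a larger perturbation depth trades signal intensity against disturbance of the power loop.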
Abstract:
New media platforms have changed the media landscape forever, altering our perceptions of the limits of communication and the reception of information. Platforms such as Facebook, Twitter and WhatsApp enable individuals to circumvent the traditional mass media, converging audience and producer to create millions of ‘citizen journalists’. This new breed of journalist uses these platforms not only to receive news but also to instantaneously, and often spontaneously, express opinions and vent and share emotions, thoughts and feelings. They are liberated from cultural and physical restraints, such as time, space and location, and they are not constrained by factors that affect the traditional media, such as editorial control, owner or political bias, or the pressures of generating commercial revenue. A consequence of the way in which these platforms have become ingrained within our social culture is that habits, conventions and social norms that were once informal and transitory manifestations of social life are now infused within their use. What were casual and ephemeral actions or acts of expression, such as conversing with friends or colleagues, swapping or displaying pictures, or exchanging thoughts that were once kept private or shared with a select few, have now become formalised and potentially permanent, on view for the world to see. Incidentally, ‘traditional’ journalists and media outlets are also utilising new media, as it allows them to react, and to disseminate news, instantaneously within a hyper-competitive marketplace. However, in a world where we are saturated not only by citizen journalists but also by traditional media outlets, offering access to news and opinion twenty-four hours a day via multiple new media platforms, there is increased pressure to ‘break’ news fast and first. This paper will argue that new media, and the culture and environment it has created for citizen journalists, traditional journalists and the media generally, has dramatically altered our perceptions of the limits and boundaries of freedom of expression, and that the corollary to this seismic shift is its impact on the notion of privacy and private life. Consequently, this paper will examine what a reasonable expectation of privacy may now mean in a new media world.
Abstract:
In this study the author presents a new harmony search metaheuristic that maximizes the net present value of a project over the set of makespan-minimal resource-constrained schedules. Theoretically, finding the optimal schedule amounts to solving two integer (zero-one) programming problems: in the first step the makespan of the resource-constrained schedules is minimized, and in the second step, treating the optimal makespan as a constraint, the net-present-value maximization problem is solved over the set of makespan-minimal resource-constrained schedules. Because of the NP-hard nature of the problem, an exact solution in acceptable time is feasible only for small projects. The metaheuristic presented here is a further development of the harmony search metaheuristic developed by Csébfalvi (2007) for determining the makespan of resource-constrained schedules and scheduling the activities accordingly, which resolves resource-usage conflicts by inserting precedence relations. To illustrate the efficiency and viability of the proposed metaheuristic, we present computational results obtained on the J30 subset of the well-known and popular PSPLIB test library. A state-of-the-art MILP solver (CPLEX) was used to generate the exact solutions. _______________ This paper presents a harmony search metaheuristic for the resource-constrained project scheduling problem with discounted cash flows. In the proposed approach, a resource-constrained project is characterized by its "best" schedule, where best means a makespan-minimal resource-constrained schedule for which the net present value (NPV) measure is maximal. Theoretically, the optimal schedule searching process is formulated as a two-phase mixed integer linear programming (MILP) problem, which can be solved for small-scale projects in reasonable time. The applied metaheuristic is based on the "conflict repairing" version of the "Sounds of Silence" harmony search metaheuristic developed by Csébfalvi (2007) for the resource-constrained project scheduling problem (RCPSP). In order to illustrate the essence and viability of the proposed harmony search metaheuristic, we present computational results for the J30 subset of the well-known and popular PSPLIB. To generate the exact solutions, a state-of-the-art MILP solver (CPLEX) was used.
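A highly simplified sketch of the generic harmony search loop that such a metaheuristic builds on; the RCPSP-specific decoding, conflict-repair, and NPV evaluation from Csébfalvi (2007) are omitted, and the encoding (a vector of activity priorities), the toy fitness function, and all parameters are assumptions for illustration.

```python
import random

def harmony_search(evaluate, dim, iters=1000, hms=10, hmcr=0.9, par=0.3):
    """Generic harmony search: evolve a memory of candidate priority vectors.
    `evaluate` maps a vector (e.g. activity priorities) to a fitness such as the
    NPV of the schedule decoded from it; decoding/repair is problem-specific
    and not shown here."""
    memory = [[random.random() for _ in range(dim)] for _ in range(hms)]
    scores = [evaluate(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                val = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    val = min(1.0, max(0.0, val + random.uniform(-0.05, 0.05)))
            else:                                      # random consideration
                val = random.random()
            new.append(val)
        worst = min(range(hms), key=lambda i: scores[i])
        s = evaluate(new)
        if s > scores[worst]:                          # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy fitness standing in for "NPV of the decoded, makespan-minimal schedule".
best, score = harmony_search(lambda x: -sum((v - 0.5) ** 2 for v in x), dim=30)
print(round(score, 4))
```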
Abstract:
It is important for landscape architects to become acquainted with the results of regional climate models so that they can adapt to the warmer and more arid future climate. Modelling the potential distribution area of certain plants, which was the theme of our former research, is a convenient way to visualize the effects of climate change. A similar but slightly better method is modelling the Moesz-line, which gives information on the distribution and usability of numerous plants simultaneously. Our aim is to display the results on maps and to compare the different modelling methods (Line modelling, Distribution modelling, Isotherm modelling). The results are spectacular and meet our expectations: according to two of the three tested methods, the Moesz-line will shift from southern Slovakia to central Poland within the next 60 years.
Abstract:
In the year 2001, the Commission on Dietetic Registration (CDR) will begin a new process of recertifying Registered Dietitians (RDs) using a self-directed, lifelong-learning portfolio model. The model, entitled Professional Development 2001 (PD 2001), is designed to increase competency through targeted learning. The portfolio consists of five steps: reflection, learning needs assessment, formulation of a learning plan, maintenance of a learning log, and evaluation of the learning plan. By targeting learning, PD 2001 is predicted to foster more up-to-date practitioners than the current method, which requires only a quantity of continuing education hours. This is the first major change in the credentialing system since 1975, and the success or failure of the new system will affect the future of approximately 60,000 practitioners. The purpose of this study was to determine the readiness of RDs to change to the new system. Since the model depends on setting goals and developing learning plans, this study examined the methods dietitians use to determine their five-year goals and direction in practice. It also determined RDs' attitudes toward PD 2001 and identified some of the factors that influenced their beliefs. A dual methodological design using focus groups and questionnaires was utilized. Sixteen focus groups were held during state dietetic association meetings. Demographic data were collected with a self-administered questionnaire from the 132 registered dietitians who participated in the focus groups. The audiotaped sessions were transcribed into 643 pages of text and analyzed using Non-numerical Unstructured Data Indexing, Searching and Theorizing software (NUD*IST version 4). Thirty-four of the 132 participants (26%) had formal five-year goals, and fifty-four participants (41%) performed annual self-assessments. In general, dietitians did not have professional goals, did not conduct self-assessments, and reported lacking the skills or confidence to perform these tasks. Major barriers to successful implementation of PD 2001 are uncertainty, misinterpretation, and misinformation about the process and its purpose, which in turn contribute to negative impressions. Renewed vigor in providing a positive, accurate message, along with presenting goal-setting strategies, will be necessary for better acceptance of this professional development process.
Abstract:
The contributions of this dissertation are the development of two new, interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; and (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant nature of the wavelet transform's decimation step.

The enormous volume of data generated by digital video calls for efficient compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the inter-pixel redundancies within and between video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation with coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability properties.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance introduced by the decimation step of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
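The coarse-to-fine idea behind level-refined motion estimation can be illustrated with a minimal multiresolution block-matching search. The sketch below uses 2x2 averaging in place of the wavelet LL subband, and the block size, search radius, and number of levels are assumptions for illustration, not the dissertation's exact LRSC procedure.

```python
import numpy as np

def downsample(frame):
    """Halve resolution by 2x2 averaging (stands in for the wavelet LL subband)."""
    return frame.reshape(frame.shape[0] // 2, 2, frame.shape[1] // 2, 2).mean(axis=(1, 3))

def block_match(cur, ref, top, left, size, center, radius):
    """Search around `center` for the displacement minimizing the SAD of one block."""
    block = cur[top:top + size, left:left + size]
    best, best_sad = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - size and 0 <= x <= ref.shape[1] - size:
                sad = np.abs(block - ref[y:y + size, x:x + size]).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
    return best

def level_refined_mv(cur, ref, top, left, size=16, levels=2, radius=2):
    """Estimate the motion vector at the coarsest level first, then refine it at each finer level."""
    pyramid = [(cur, ref)]
    for _ in range(levels):
        pyramid.append((downsample(pyramid[-1][0]), downsample(pyramid[-1][1])))
    mv = (0, 0)
    for lvl in range(levels, -1, -1):
        scale = 2 ** lvl
        c, r = pyramid[lvl]
        mv = block_match(c, r, top // scale, left // scale, size // scale, mv, radius)
        if lvl:                      # carry the coarse vector down to the next finer level
            mv = (mv[0] * 2, mv[1] * 2)
    return mv
```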
Abstract:
The purpose of this study was to evaluate the effectiveness of an alternate day block schedule design (n = 419) versus a traditional six-period schedule design (n = 623) on the academic achievement of the graduating classes in the two schools in which the respective designs were used. Academic achievement was measured by (a) two standardized tests: the Florida Comprehensive Assessment Test Sunshine State Standards (FCAT-SSS) in mathematics and reading for 9th and 10th grade and the Scholastic Reading Inventory Test (SRI) for 9th, 10th, and 11th grade; and (b) three school grades: the mathematics final course grades for 9th, 10th, and 11th grade, the English final course grades for 9th, 10th, 11th, and 12th grade, and the graduating GPA. A total of five repeated-measures analyses of variance (ANOVAs) were conducted to analyze the difference between the two schools (representing the two designs) with respect to five achievement indicators (FCAT-SSS mathematics scores, FCAT-SSS reading scores, SRI scores, mathematics final course grades, and English final course grades). The between-subject factor for the five ANOVAs was the schedule design and the within-subject factor was the time the tests were taken or the time the course grades were issued. T-tests were performed on all eighth-grade achievement indicators to ensure there were no significant differences in achievement between the two cohorts prior to entering high school. An independent-samples t-test was conducted to analyze the difference between the two schedule designs with respect to graduating GPA. Achievement in the alternate day block schedule design was significantly higher than in the traditional six-period schedule design for some of the locally assigned school grades. The difference between the two types of schedule designs was not significant for the standardized measures (the FCAT-SSS in reading and mathematics and the SRI). This study concludes that the use of an alternate day block schedule design can be considered an educational tool that can help improve the academic achievement of students as measured by local indicators of achievement, but the design does not appear to be an important factor in achievement as measured by standardized assessments such as the FCAT-SSS or the SRI.
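The core analysis (a mixed between x within ANOVA with schedule design as the between-subject factor and administration time as the within-subject factor, plus an independent-samples t-test on graduating GPA) could be set up roughly as below. The data frame, column names, sample sizes, and score distributions are hypothetical, and `pingouin.mixed_anova` is used here only as a stand-in for whatever software the study actually employed.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)
n_per_school = 40                                   # toy sample, not the study's n
rows = []
for design in ("block", "six_period"):
    for student in range(n_per_school):
        sid = f"{design}_{student}"
        for time in ("grade9", "grade10"):           # two test administrations
            rows.append({"student": sid, "design": design, "time": time,
                         "fcat_math": rng.normal(300, 25)})
df = pd.DataFrame(rows)

# Mixed ANOVA: schedule design between subjects, administration time within subjects.
print(pg.mixed_anova(data=df, dv="fcat_math", within="time",
                     subject="student", between="design"))

# Independent-samples t-test on graduating GPA between the two designs (toy data).
gpa_block = rng.normal(2.9, 0.5, n_per_school)
gpa_six = rng.normal(2.8, 0.5, n_per_school)
print(stats.ttest_ind(gpa_block, gpa_six))
```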
Abstract:
Access to healthcare is a major problem: many patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration of general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear, discrete, stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized by probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with a neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and a parameter analysis of the input parameters was performed. The trade-off curve is a characteristic curve and was observed to be concave, which implies that there exists an access level at which the clinic can be operated to realize optimal profit. The results also show that in many scenarios, by switching from an existing scheduling policy to the Advanced Access policy, clinics can improve access without any decrease in profit. Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected demand, the variation in demand, and the ratio of demand for same-day and advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling-policy trade-offs that healthcare clinic managers must make.
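The "greedy construction plus neighborhood improvement" idea can be illustrated on a toy version of the problem: choose how many of a clinic's daily slots to leave open for same-day patients, trading simulated expected profit against a minimum-access constraint. The no-show rate, revenues, demand distribution, and access constraint below are assumptions for illustration, not the dissertation's stochastic model.

```python
import random

SLOTS = 20
MIN_SAME_DAY = 6          # minimum open slots required for same-day access (assumed constraint)

def expected_profit(open_slots, trials=2000, seed=1):
    """Monte Carlo estimate of daily profit for a given number of open (same-day) slots.
    Pre-booked slots suffer no-shows; same-day demand is random. All rates are illustrative."""
    rng = random.Random(seed)
    booked = SLOTS - open_slots
    total = 0.0
    for _ in range(trials):
        shows = sum(rng.random() > 0.15 for _ in range(booked))   # 15% no-show on pre-booked slots
        same_day = min(open_slots, rng.randint(4, 14))            # random same-day demand
        total += 100 * shows + 110 * same_day                     # same-day slightly more valuable
    return total / trials

def greedy_with_improvement():
    # Greedy construction: start at the minimum access level and open more slots while profit rises.
    best = MIN_SAME_DAY
    while best < SLOTS and expected_profit(best + 1) > expected_profit(best):
        best += 1
    # Neighborhood improvement: try +/-1 moves that keep the access constraint feasible.
    improved = True
    while improved:
        improved = False
        for cand in (best - 1, best + 1):
            if MIN_SAME_DAY <= cand <= SLOTS and expected_profit(cand) > expected_profit(best):
                best, improved = cand, True
    return best, expected_profit(best)

print(greedy_with_improvement())
```

Sweeping `MIN_SAME_DAY` and re-running traces out a profit-versus-access curve, which is the kind of trade-off analysis the abstract describes.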
Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not exceed the chamber capacity. PCBs from different production lines arrive dynamically at a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and a bottleneck; consequently, the makespan has to be minimized.

A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum.

The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time.

The decomposition approach improved the lower bounds (i.e., the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as the run time is very short.
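A minimal sketch of one plausible greedy batching rule for this setting (jobs with sizes, processing times, and ready times grouped into capacity-feasible batches on identical chambers to reduce makespan). It is not one of the five heuristics from this research, only an illustration of the problem structure and the batch processing-time and ready-time rules stated above.

```python
from dataclasses import dataclass

@dataclass
class Job:
    size: int
    proc_time: float
    ready: float

def greedy_batching(jobs, capacity, n_machines):
    """Sort jobs by ready time, fill each batch up to the chamber capacity, and start
    it on the machine that becomes free first (no earlier than the batch ready time)."""
    jobs = sorted(jobs, key=lambda j: j.ready)
    machine_free = [0.0] * n_machines
    i = 0
    while i < len(jobs):
        batch, used = [], 0
        while i < len(jobs) and used + jobs[i].size <= capacity:
            batch.append(jobs[i]); used += jobs[i].size; i += 1
        proc = max(j.proc_time for j in batch)     # a batch runs as long as its longest job
        ready = max(j.ready for j in batch)        # and cannot start before its last arrival
        m = min(range(n_machines), key=lambda k: machine_free[k])
        machine_free[m] = max(machine_free[m], ready) + proc
    return max(machine_free)                       # makespan of the resulting schedule

jobs = [Job(3, 4.0, 0.0), Job(5, 6.0, 1.0), Job(2, 3.0, 2.5), Job(4, 5.0, 3.0)]
print(greedy_batching(jobs, capacity=8, n_machines=2))   # 8.0 for this toy instance
```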
Abstract:
Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with a dedicated buffer at each crosspoint of the crossbar. They offer unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be arbitrarily small. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process, and thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight, constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch to be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
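One way to picture a queue-length-proportional allocation: give each input-output pair a share of its input's and output's line rate proportional to its queue length, then keep the smaller of the two shares so that no row or column of the rate matrix exceeds 1 (a feasible, doubly substochastic allocation). The normalization below is a simple illustration of that idea, not the dissertation's exact QLP construction.

```python
import numpy as np

def qlp_allocation(queue_lengths):
    """queue_lengths[i][j] = backlog from input i to output j.
    Return a rate matrix proportional to queue lengths whose row sums and
    column sums are each at most 1, i.e. a feasible bandwidth allocation."""
    q = np.asarray(queue_lengths, dtype=float)
    rates = np.zeros_like(q)
    row_tot = q.sum(axis=1, keepdims=True)
    col_tot = q.sum(axis=0, keepdims=True)
    nz = q > 0
    # Each pair gets the smaller of its proportional share of the input and of the output.
    rates[nz] = np.minimum(q / np.where(row_tot == 0, 1, row_tot),
                           q / np.where(col_tot == 0, 1, col_tot))[nz]
    return rates

q = [[4, 0, 2],
     [0, 3, 3],
     [1, 1, 0]]
alloc = qlp_allocation(q)
print(alloc)
print(alloc.sum(axis=1), alloc.sum(axis=0))   # every row/column sum stays <= 1
```

Recomputing the matrix as queue lengths change is what lets an allocation of this kind track the offered load and keep the switch work-conserving.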