916 results for real-time scheduling algorithm
Abstract:
The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes on-chip memories for the Light Detection And Ranging (LiDAR) terrain data and the EA population, as well as the EA search and evaluation algorithms used in the optimising stage of path planning. A synthesisable Very-high-speed integrated circuit Hardware Description Language (VHDL) implementation of the design was developed for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited to UAV real-time path planning applications.
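The abstract targets a VHDL/FPGA realisation; as a rough illustration of what the C++ software baseline computes, a minimal evolutionary path planner over an occupancy grid might look like the sketch below. The grid layout, fitness weights, and genetic operators are illustrative assumptions, not the authors' design.

```python
import random

# Illustrative terrain: 0 = free, 1 = obstacle (assumed, not from the paper)
GRID = [[0] * 20 for _ in range(20)]
for x, y in [(5, y) for y in range(3, 15)] + [(12, y) for y in range(8, 20)]:
    GRID[y][x] = 1

START, GOAL, WAYPOINTS = (0, 0), (19, 19), 6

def random_path():
    # A candidate is a list of intermediate waypoints between START and GOAL.
    return [(random.randrange(20), random.randrange(20)) for _ in range(WAYPOINTS)]

def fitness(path):
    pts = [START] + path + [GOAL]
    length = sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                 for a, b in zip(pts, pts[1:]))
    # Toy penalty: waypoints on obstacles (segments crossing them are ignored)
    penalty = 100 * sum(GRID[y][x] for x, y in path)
    return length + penalty  # lower is better

def evolve(pop_size=50, generations=200):
    pop = [random_path() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, WAYPOINTS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                     # point mutation
                child[random.randrange(WAYPOINTS)] = (
                    random.randrange(20), random.randrange(20))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```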
Abstract:
This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting and therefore produce a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig, in turn reducing costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve accuracy above 95%.
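The final step reduces to integrating a height grid over a known cell area. A minimal sketch, assuming a square grid with heights measured above the bucket reference plane (the names and units here are ours, not the paper's):

```python
import numpy as np

def payload_volume(height_grid, cell_size_m):
    """Estimate payload volume by summing column volumes of a height grid.

    height_grid: 2-D array of payload heights (m) above the bucket reference
    cell_size_m: edge length of one square grid cell (m)
    """
    heights = np.clip(np.asarray(height_grid, dtype=float), 0.0, None)
    return float(heights.sum() * cell_size_m ** 2)

# Example: a 3 x 3 grid of 0.5 m cells, each column 1 m high -> 2.25 m^3
print(payload_volume([[1.0] * 3] * 3, 0.5))
```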
Abstract:
The elastic task model, a significant development in the scheduling of real-time control tasks, provides a mechanism for flexible workload management in uncertain environments. It specifies how to adjust control periods to satisfy workload constraints. However, it is not directly linked to quality-of-control (QoC) management, the ultimate goal of a control system, and consequently does not indicate how to make the best use of system resources to maximize QoC improvement. To fill this gap, a new feedback scheduling framework, which we refer to as QoC elastic scheduling, is developed in this paper for real-time process control systems. It addresses the QoC directly by embedding both QoC management and workload adaptation into a constrained optimization problem. The resulting solution for period adjustment takes a closed form expressed in QoC measurements, enabling closed-loop feedback of the QoC to the task scheduler. Whenever the QoC elastic scheduler is activated, it maximizes the QoC improvement while still meeting the system constraints. Examples are given to demonstrate the effectiveness of QoC elastic scheduling.
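The paper's closed-form QoC period law is not given in the abstract. As background, the classical elastic compression rule that such schedulers build on can be sketched as follows; this version ignores the per-task minimum-period constraints a full implementation must iterate over:

```python
def elastic_compress(costs, nominal_periods, elasticities, u_desired):
    """Classical elastic task compression (Buttazzo et al.).

    Shrinks each task's utilization from its nominal value in proportion
    to its elasticity so that the total utilization equals u_desired.
    Task i's utilization is costs[i] / period[i].
    """
    u_nominal = [c / t for c, t in zip(costs, nominal_periods)]
    u_total = sum(u_nominal)
    if u_total <= u_desired:
        return list(nominal_periods)          # no compression needed
    e_total = sum(elasticities)
    periods = []
    for c, u0, e in zip(costs, u_nominal, elasticities):
        u = u0 - (u_total - u_desired) * e / e_total  # elastic reduction
        periods.append(c / u)
    return periods

# Three tasks at 120% total utilization compressed down to 100%
print(elastic_compress([10, 20, 30], [25, 50, 75], [1, 1, 2], 1.0))
```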
Abstract:
Network Real-Time Kinematic (NRTK) is a technology that provides centimeter-level positioning services in real time, enabled by a network of Continuously Operating Reference Stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK but also the quality of the positioning services it provides. This paper presents a Memetic Algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of local optimization. Experimental results show that the MA outperforms existing approaches. We also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator.
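In outline, a memetic algorithm wraps a local-search step inside each GA generation. The generic sketch below conveys that structure; the encoding, operators, and objective are placeholders, not the paper's CORS-specific formulation:

```python
import random

def memetic_search(init, fitness, crossover, mutate, local_search,
                   pop_size=40, generations=100):
    """Generic memetic loop: GA exploration + local-search exploitation."""
    pop = [local_search(init()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower fitness = better
        elite = pop[:pop_size // 2]
        offspring = []
        while len(elite) + len(offspring) < pop_size:
            a, b = random.sample(elite, 2)
            child = mutate(crossover(a, b))
            offspring.append(local_search(child))  # refine every child
        pop = elite + offspring
    return min(pop, key=fitness)

# Toy usage: pick a 10-bit string minimizing the number of zeros.
n = 10
best = memetic_search(
    init=lambda: [random.randint(0, 1) for _ in range(n)],
    fitness=lambda s: s.count(0),
    crossover=lambda a, b: a[:n // 2] + b[n // 2:],
    mutate=lambda s: [b ^ (random.random() < 0.1) for b in s],
    local_search=lambda s: s,   # identity; a real MA would hill-climb here
)
print(best)
```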
Abstract:
With the recent development of advanced metering infrastructure, real-time pricing (RTP) schemes are anticipated in future retail electricity markets. This paper proposes an algorithm for a home energy management scheduler (HEMS) to reduce the cost of energy consumption under RTP. The proposed algorithm works in three successive phases: real-time monitoring (RTM), stochastic scheduling (STS), and real-time control (RTC). In the RTM phase, the characteristics of available controllable appliances are monitored in real time and stored in the HEMS. In the STS phase, the HEMS computes an optimal policy using stochastic dynamic programming (SDP) to select a set of appliances to be controlled, with the objective of minimizing the total cost of energy consumption in a house. Finally, in the RTC phase, the HEMS initiates control of the selected appliances. The proposed HEMS is unique in that it intrinsically considers uncertainties in RTP and in the power consumption patterns of various appliances. In the RTM phase, appliances are categorized according to their characteristics to ease the control process, thereby minimizing the number of control commands issued by the HEMS. Simulation results validate the proposed method.
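The abstract does not give the SDP formulation; a bare-bones finite-horizon dynamic program for one deferrable appliance under a two-scenario price model conveys the idea. The horizon, price distribution, and appliance model below are invented for illustration:

```python
from functools import lru_cache

# Hypothetical setup: one deferrable appliance must run for NEED of T slots.
# Each slot's price is low or high with known probabilities (assumed model).
T, NEED, POWER_KW = 8, 3, 2.0
PRICES = [(0.10, 0.6), (0.30, 0.4)]   # ($/kWh, probability) scenarios

@lru_cache(maxsize=None)
def expected_cost(t, remaining):
    """Minimum expected cost to finish `remaining` run-slots from slot t."""
    if remaining == 0:
        return 0.0
    if T - t == remaining:             # must run in every remaining slot
        exp_price = sum(p * pr for p, pr in PRICES)
        return remaining * POWER_KW * exp_price
    cost = 0.0
    for price, prob in PRICES:         # observe price, then decide (RTM/RTC)
        run = price * POWER_KW + expected_cost(t + 1, remaining - 1)
        wait = expected_cost(t + 1, remaining)
        cost += prob * min(run, wait)  # optimal decision per scenario
    return cost

print(f"expected cost: ${expected_cost(0, NEED):.2f}")
```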
Abstract:
Flexible objects such as a rope or a snake move in such a way that their axial length remains almost constant. One strategy for simulating the motion of such an object is to discretise it into a large number of small rigid links connected by joints. However, the resulting discretised system is highly redundant, and the joint rotations for a desired Cartesian motion of any point on the object cannot be solved uniquely. In this paper, we revisit an algorithm, based on the classical tractrix curve, to resolve the redundancy in such hyper-redundant systems. For a desired motion of the 'head' of a link, the 'tail' is moved along a tractrix, and recursively all links of the discretised object are moved along different tractrix curves. The algorithm is illustrated by simulations of a moving snake, the tying of knots with a rope, and a solution of the inverse kinematics of a planar hyper-redundant manipulator. The simulations show that the tractrix-based algorithm leads to a more 'natural' motion, since the motion is distributed uniformly along the entire object with displacements diminishing from the 'head' to the 'tail'.
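A discrete "head leads, tail follows" update conveys the idea: each tail is pulled toward its (already moved) head while the link length is preserved, which approximates motion along a tractrix for small steps. This is a simplification of the paper's exact tractrix solution:

```python
import math

def drag_chain(joints, new_head, link_len):
    """Move the head of a discretised chain and let each tail follow.

    joints: list of (x, y) points, head first
    new_head: target position for joints[0]
    link_len: fixed length of every link

    Each tail is placed on the segment toward its moved head at distance
    link_len -- a small-step approximation of the tractrix motion.
    """
    moved = [new_head]
    for tail in joints[1:]:
        head = moved[-1]
        dx, dy = tail[0] - head[0], tail[1] - head[1]
        d = math.hypot(dx, dy) or 1e-12       # avoid division by zero
        moved.append((head[0] + dx * link_len / d,
                      head[1] + dy * link_len / d))
    return moved

# Five-link horizontal chain; drag the head up by 0.5
chain = [(float(i), 0.0) for i in range(6)]
print(drag_chain(chain, (0.0, 0.5), 1.0))
```

Note how the displacement decays from head to tail, which is the "natural" motion property the simulations report.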
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology, and becomes all the more attractive when progressive transmission in a teleradiology system is considered. The transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels, together with the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The method adapts itself to the preference of the medical expert; a single parameter balances the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as fine-grained structure in an image, and the wavelet transform allows the restoration strategy to adapt to the directional features of edges. The approach shows promising results in terms of error reduction when compared with the unrestored case. It can also adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The proposed approach has implications for the restoration of medical images in teleradiology systems, and the scheme is computationally efficient.
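The bit-plane restoration scheme itself is specific to the paper; a generic wavelet soft-thresholding denoiser, the kind of baseline such a scheme refines, can be sketched with PyWavelets. The wavelet choice, decomposition level, and universal threshold below are common defaults, not the authors' parameters:

```python
import numpy as np
import pywt  # PyWavelets (pip install PyWavelets)

def wavelet_denoise(image, wavelet="db2", level=3):
    """Soft-threshold the detail sub-bands; keep the approximation intact."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Universal threshold from the finest diagonal sub-band's noise estimate
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(128, 128) + np.random.normal(0, 0.1, (128, 128))
print(wavelet_denoise(noisy).shape)
```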
Abstract:
A numerically stable sequential Primal–Dual LP algorithm for reactive power optimisation (RPO) is presented in this article. The algorithm minimises the voltage stability index C2 [1] of all the load buses to improve the system's static voltage stability. Real-time requirements, such as numerical stability and identification of the most effective subset of controllers (to curtail the number of controllers and their movement), are handled effectively by the proposed algorithm. The algorithm has a natural characteristic of selecting the most effective subset of controllers, and hence curtailing insignificant ones, for improving the objective. Comparison with a transmission-loss-minimisation objective indicates that the most effective subset of controllers, and the solution identified by the static voltage stability improvement objective, is not the same as that of the transmission-loss-minimisation objective. The proposed algorithm is suitable for real-time application to the improvement of system static voltage stability.
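Each iteration of a sequential LP method solves a linearised subproblem. As a generic illustration (not the paper's C2-index formulation), one such reactive-dispatch-style step can be posed with scipy.optimize.linprog; all sensitivities and limits below are invented numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Linearised subproblem: choose controller adjustments dq to minimise a
# linearised stability index subject to bus-voltage shift limits.
c = np.array([0.8, 0.5, 0.9])             # d(index)/d(q_i) sensitivities
S = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.3, 0.7]])            # bus-voltage sensitivities (made up)
A_ub = np.vstack([S, -S])                  # keep |S @ dq| within 0.05 p.u.
b_ub = np.full(6, 0.05)
bounds = [(-0.1, 0.1)] * 3                 # controller movement limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)   # small negative adjustments reduce the linearised index
```

Controllers whose optimal adjustment comes out near zero are the "insignificant" ones such an algorithm would curtail.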
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
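The incremental aspect can be pictured generically: when new orders arrive, the GA is re-seeded from the previous best sequence plus the new orders instead of restarting from scratch. The sketch below uses a toy single-resource objective and assumed operators, not RPI's production model:

```python
import random

def schedule_cost(seq, durations):
    """Toy objective: total completion time on a single resource."""
    t, total = 0, 0
    for order in seq:
        t += durations[order]
        total += t                   # each order's completion time accumulates
    return total

def incremental_ga(prev_best, new_orders, durations, pop=30, gens=50):
    """Seed the population from the previous best sequence plus new orders."""
    base = prev_best + new_orders
    population = [base[:]] + [random.sample(base, len(base))
                              for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=lambda s: schedule_cost(s, durations))
        keep = population[:pop // 2]
        while len(keep) < pop:
            child = random.choice(keep)[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation
            keep.append(child)
        population = keep
    return min(population, key=lambda s: schedule_cost(s, durations))

durations = {"o1": 4, "o2": 1, "o3": 3, "o4": 2, "o5": 5}
print(incremental_ga(["o1", "o2", "o3"], ["o4", "o5"], durations))
```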
We next discuss the analysis and prediction of different attributes involved in the hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimate of the predicted status. An order generally consists of multiple processes in series and in parallel. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
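The decompose-predict-aggregate strategy can be sketched generically: split a series into trend, seasonal, and residual parts, forecast each, and sum the forecasts. The naive component models below are placeholders for the thesis's univariate and multivariate models:

```python
import numpy as np

def decompose(series, period):
    """Linear trend fit + periodic-mean seasonal + residual (toy model)."""
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    trend = slope * t + intercept
    detrended = series - trend
    seasonal = np.array([detrended[i % period::period].mean()
                         for i in range(len(series))])
    return trend, seasonal, series - trend - seasonal

def forecast(series, period, horizon):
    trend, seasonal, resid = decompose(series, period)
    slope = trend[1] - trend[0]                        # fitted linear slope
    t_hat = trend[-1] + slope * np.arange(1, horizon + 1)
    s_hat = np.array([seasonal[(len(series) + h) % period]
                      for h in range(horizon)])
    return t_hat + s_hat + resid.mean()                # aggregate components

# Synthetic weekly-seasonal demand series, forecast one week ahead
demand = 100 + 0.5 * np.arange(60) + 10 * np.sin(2 * np.pi * np.arange(60) / 7)
print(forecast(demand, period=7, horizon=7))
```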
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. These tools are expected to help enterprises increase reconfigurability, automate more of their procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
OBJECTIVE - To evaluate an algorithm guiding responses of continuous subcutaneous insulin infusion (CSII)-treated type 1 diabetic patients using real-time continuous glucose monitoring (RT-CGM). RESEARCH DESIGN AND METHODS - Sixty CSII-treated type 1 diabetic participants (aged 13-70 years, including adult and adolescent subgroups, with A1C ≤9.5%) were randomized in age-, sex-, and A1C-matched pairs. Phase 1 was an open 16-week multicenter randomized controlled trial. Group A was treated with CSII/RT-CGM with the algorithm, and group B was treated with CSII/RT-CGM without the algorithm. The primary outcome was the difference in time in target (4-10 mmol/l) glucose range on 6-day masked CGM. Secondary outcomes were differences in A1C, low (≤3.9 mmol/l) glucose CGM time, and glycemic variability. Phase 2 was the week 16-32 follow-up. Group A was returned to usual care, and group B was provided with the algorithm. Glycemia parameters were as above. Comparisons were made between baseline, 16 weeks, and 32 weeks. RESULTS - In phase 1, after withdrawals, 29 of 30 subjects remained in group A and 28 of 30 in group B. The change in target glucose time did not differ between groups. A1C fell in group A (mean 7.9% [95% CI 7.7-8.2] to 7.6% [7.2-8.0]; P < 0.03) but not in group B (7.8% [7.5-8.1] to 7.7% [7.3-8.0]; NS), with no difference between groups. More subjects in group A achieved A1C ≤7% than in group B (2 of 29 to 14 of 29 vs. 4 of 28 to 7 of 28; P = 0.015). In phase 2, one participant was lost from each group. In group A, A1C returned to baseline with RT-CGM discontinuation but did not change in group B, who continued RT-CGM with the addition of the algorithm. CONCLUSIONS - Early but not late algorithm provision to type 1 diabetic patients using CSII/RT-CGM did not increase target glucose time but did increase achievement of A1C ≤7%. Upon RT-CGM cessation, A1C returned to baseline. © 2010 by the American Diabetes Association.
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that scales easily with problem size, leaving the scheduling of tasks onto processors to be performed at runtime. If the underlying system cannot efficiently execute those applications on the available cores, however, the benefits are lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load-balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
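The combination can be pictured as per-worker run queues ordered by absolute deadline, with an idle worker stealing the earliest-deadline task among its victims. The following is a schematic sketch, not the authors' Linux scheduling class:

```python
import heapq
import itertools
import threading

_seq = itertools.count()  # tie-breaker so equal deadlines never compare tasks

class Worker:
    """Per-core run queue ordered by absolute deadline (EDF)."""
    def __init__(self):
        self.heap = []                         # (deadline, seq, task_fn)
        self.lock = threading.Lock()

    def push(self, deadline, task):
        with self.lock:
            heapq.heappush(self.heap, (deadline, next(_seq), task))

    def pop_local(self):
        with self.lock:
            return heapq.heappop(self.heap) if self.heap else None

def steal(workers, thief_idx):
    """Priority-aware steal: scan victims, take the earliest-deadline task.

    Schematic only: between the scan and the pop another thread could win
    the race; a real scheduler would retry or hold the victim's lock.
    """
    victim, best = None, None
    for i, w in enumerate(workers):
        if i != thief_idx:
            with w.lock:
                if w.heap and (best is None or w.heap[0][0] < best):
                    victim, best = w, w.heap[0][0]
    return victim.pop_local() if victim else None

# Idle worker 0 steals the most urgent task from workers 1..2
workers = [Worker() for _ in range(3)]
workers[1].push(10.0, lambda: "render")
workers[2].push(5.0, lambda: "sense")
print(steal(workers, 0))   # takes the deadline-5.0 task from worker 2
```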