911 results for Train scheduling
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models captures at the same time the pipeline behavior between tasks (which can therefore overlap in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reduction in reconfiguration overhead with respect to other state-of-the-art techniques for reconfigurable platforms.
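The abstract does not give the concrete structure of a TCDFD; the minimal sketch below only illustrates the kind of per-task information the model is said to capture (pipeline overlap, communication volumes, data dependencies). All field names are hypothetical, not the paper's.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TCDFDTask:
    """Hypothetical TCDFD-style node: a task annotated with the information the
    abstract says the model captures (names are illustrative, not the paper's)."""
    name: str
    exec_time: float                                         # estimated execution time on its accelerator
    predecessors: List[str] = field(default_factory=list)    # data dependencies
    bytes_in: int = 0                                         # communication volume consumed per firing
    bytes_out: int = 0                                        # communication volume produced per firing
    pipelined_with: List[str] = field(default_factory=list)  # tasks it may overlap with in time

# Example: two tasks that can run in pipeline while 'decode' feeds 'filter'
decode = TCDFDTask("decode", exec_time=1.2, bytes_out=4096, pipelined_with=["filter"])
filter_ = TCDFDTask("filter", exec_time=0.8, predecessors=["decode"], bytes_in=4096)
```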
Abstract:
The quest for robust heuristics that are able to solve more than one problem is ongoing. In this paper, we present, discuss and analyse a technique called Evolutionary Squeaky Wheel Optimisation and apply it to two different personnel scheduling problems. Evolutionary Squeaky Wheel Optimisation improves the original Squeaky Wheel Optimisation’s effectiveness and execution speed by incorporating two additional steps (Selection and Mutation) for added evolution. In the Evolutionary Squeaky Wheel Optimisation, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The aim of the Analysis step is to identify below-average solution components by calculating a fitness value for all components. The Selection step then chooses amongst these underperformers and discards some probabilistically based on fitness. The Mutation step further discards a few components at random. Solutions can become incomplete and thus repairs may be required. The repair is carried out by using the Prioritization step to first produce priorities that determine an order by which the following Construction step then schedules the remaining components. Therefore, improvements in the Evolutionary Squeaky Wheel Optimisation are achieved by selective solution disruption mixed with iterative improvement and constructive repair. Strong experimental results are reported on two different domains of personnel scheduling: bus and rail driver scheduling and hospital nurse scheduling.
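As a rough illustration of the cycle just described (a sketch, not the authors' implementation; the fitness, prioritization, and construction callables are problem-specific placeholders):

```python
import random

def evolutionary_squeaky_wheel(initial_solution, evaluate_component, prioritize,
                               construct, max_iters=1000, mutation_rate=0.05):
    """Sketch of the Analysis-Selection-Mutation-Prioritization-Construction cycle.
    All helper callables are assumed to be supplied by the concrete problem."""
    solution = initial_solution
    for _ in range(max_iters):
        # Analysis: score every component of the current solution
        fitness = {c: evaluate_component(solution, c) for c in solution}
        avg = sum(fitness.values()) / len(fitness)

        # Selection: below-average components are discarded probabilistically
        survivors = []
        for c in solution:
            if fitness[c] >= avg or random.random() < fitness[c] / max(avg, 1e-9):
                survivors.append(c)

        # Mutation: discard a few further components at random
        survivors = [c for c in survivors if random.random() > mutation_rate]

        # Prioritization + Construction: repair the now-incomplete solution
        order = prioritize(survivors, solution)
        solution = construct(survivors, order)
    return solution
```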
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse’s assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
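The abstract describes generating each variable of a new rule string from the Bayesian network's conditional probabilities; a minimal sketch of that sampling step (the data structures below are assumed, not taken from the paper, and each conditional distribution is assumed non-empty) could be:

```python
import random

def sample_rule_string(nodes, parents, cond_prob):
    """Sketch of drawing one new rule string from a learned Bayesian network.
    'nodes' is a topologically ordered list of variables (one per assignment),
    'parents[v]' lists the parents of v, and 'cond_prob[v][parent_values]' maps
    each candidate rule to its conditional probability (placeholder structures)."""
    assignment = {}
    for v in nodes:
        parent_values = tuple(assignment[p] for p in parents[v])
        dist = cond_prob[v][parent_values]            # {rule: probability}
        r, cum = random.random(), 0.0
        for rule, p in dist.items():
            cum += p
            if r <= cum:
                assignment[v] = rule
                break
        else:
            assignment[v] = rule                      # numerical safety: keep the last rule
    return [assignment[v] for v in nodes]
```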
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses such suitable scheduling rules from a set for each nurse’s assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
Abstract:
The paper presents a simple method of irrigation scheduling using the ICSWAB model for dryland crops. The main inputs to this approach are daily precipitation or irrigation amounts and open pan evaporation (US Class 'A' pan, mesh covered). The fixed cumulative evapotranspiration procedure is better than the fixed-days or fixed-percentage soil moisture procedures of irrigation scheduling. Fixed-days procedures could be reasonably applied during the non-rainy season.
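A minimal illustration of the fixed-cumulative-evapotranspiration trigger (not the ICSWAB model itself; the threshold value is a placeholder):

```python
def fixed_cumulative_et_schedule(daily_et, rainfall, et_threshold_mm=50.0):
    """Illustrative fixed-cumulative-ET trigger: irrigate whenever cumulative
    evapotranspiration minus rain since the last irrigation exceeds a threshold."""
    irrigation_days = []
    deficit = 0.0
    for day, (et, rain) in enumerate(zip(daily_et, rainfall)):
        deficit = max(deficit + et - rain, 0.0)   # rain cannot create a surplus credit
        if deficit >= et_threshold_mm:
            irrigation_days.append(day)
            deficit = 0.0                          # deficit reset by the irrigation event
    return irrigation_days
```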
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility by constructing a simulation model step by step from its elementary building blocks, so that the operation of the real system is reproduced as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the ‘best’ batch schedule, is the one that minimizes the operational costs in the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal lengths based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the openings and closures of loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system’s operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of different tanker ship delay magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of the product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
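The event-based time representation described above can be pictured with a generic discrete-event skeleton (event names and handler behaviour are placeholders, not the thesis model):

```python
import heapq
import itertools

def run_event_driven_simulation(initial_events, handlers, horizon):
    """Generic discrete-event skeleton: the clock jumps from one event to the
    next instead of advancing in fixed steps."""
    counter = itertools.count()                    # tie-breaker for simultaneous events
    queue = [(t, next(counter), kind, data) for t, kind, data in initial_events]
    heapq.heapify(queue)
    state = {}
    while queue:
        time, _, kind, data = heapq.heappop(queue)
        if time > horizon:
            break
        # e.g. kind in {'tanker_arrival', 'valve_open', 'train_departure', ...}
        for t, new_kind, new_data in handlers[kind](state, time, data):
            heapq.heappush(queue, (t, next(counter), new_kind, new_data))
    return state
```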
Abstract:
Catering to society’s demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, the power consumption/density is growing exponentially. The increasing power consumption directly translates to high chip temperature, which not only raises the packaging/cooling costs, but also degrades the performance/reliability and life span of the computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of the transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, “M-Oscillations”, to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from the single-core platform to the multi-core platform. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with elaborated discussions of future extensions of our research.
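The dissertation's own energy-estimation method is not detailed in the abstract; the sketch below only illustrates the kind of calculation involved, using a generic dynamic-plus-leakage power model with an assumed linear leakage/temperature dependency (all coefficients are placeholders):

```python
def schedule_energy(intervals, c_eff=1.0, t_amb=45.0, k_leak=0.1, i_leak0=0.5):
    """Generic (not the dissertation's) energy estimate for a voltage schedule:
    dynamic power ~ C_eff * V^2 * f plus a leakage term that grows with temperature."""
    energy = 0.0
    for v, f, temp, duration in intervals:          # per-interval (volts, GHz, deg C, seconds)
        p_dyn = c_eff * v ** 2 * f
        p_leak = v * i_leak0 * (1.0 + k_leak * (temp - t_amb))  # leakage/temperature dependency
        energy += (p_dyn + p_leak) * duration
    return energy
```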
Abstract:
Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive atmosphere by shortening the manufacturing cycle time and maintaining production flexibility, thereby achieving several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a proper plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach to integrate manufacturing functions in a distributed manner; its fundamental framework and functions are presented. Moreover, an ontology has been constructed using the Protégé software, which possesses the flexibility to convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example with varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with the evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
Abstract:
In aircraft component maintenance shops, components are distributed amongst repair groups and their respective technicians based on the type of repair, on the technicians’ skills and workload, and on the customer required dates. This distribution planning is typically done in an empirical manner based on the group leader’s past experience. Such a procedure does not provide any performance guarantees, frequently leading to undesirable delays in the delivery of the aircraft components. Among others, a fundamental challenge faced by the group leaders is to decide how to distribute the components that arrive without customer required dates. This paper addresses the problems of prioritizing the randomly arriving aircraft components (with or without pre-assigned customer required dates) and of optimally distributing them amongst the technicians of the repair groups. We propose a formula for prioritizing the list of repairs, pointing out the importance of selecting good estimators for the interarrival times between repair requests, the turn-around times and the man-hours for repair. In addition, a model for the assignment and scheduling problem is designed, and a preliminary algorithm along with a numerical illustration is presented.
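The abstract does not reveal the proposed formula itself; the following is a purely hypothetical illustration of a priority rule built from the same ingredients (estimated turn-around time, man-hours, and a synthetic due date for components arriving without one):

```python
def repair_priority(now, required_date, est_turn_around, est_man_hours,
                    default_slack_factor=1.5):
    """Hypothetical priority rule (not the paper's formula): smaller values are
    more urgent.  Components without a customer required date get a synthetic
    one derived from the estimated turn-around time."""
    if required_date is None:
        required_date = now + default_slack_factor * est_turn_around
    slack = required_date - now - est_turn_around    # time to spare before the repair is late
    return slack / max(est_man_hours, 1e-9)          # normalize by the work content
```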
Abstract:
This paper presents a stochastic mixed-integer linear programming approach for solving the self-scheduling problem of a price-taker thermal and wind power producer taking part in a pool-based electricity market. Uncertainty in electricity price and wind power is considered through a set of scenarios. Thermal units are modeled by variable costs, start-up costs and technical operating constraints, such as ramp up/down limits and minimum up/down time limits. An efficient mixed-integer linear program is presented to develop the offering strategies of the coordinated production of thermal and wind energy generation, aiming to maximize the expected profit. A case study with data from the Iberian Electricity Market is presented and results are discussed to show the effectiveness of the proposed approach.
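A generic scenario-based expected-profit objective of the kind described here (the notation is illustrative, not the paper's exact formulation) is:

```latex
\max \; \sum_{s=1}^{S} \pi_s \sum_{t=1}^{T}
\Big[ \lambda_{t,s}\,\big(p^{\mathrm{th}}_{t,s} + p^{\mathrm{w}}_{t,s}\big)
      - C^{\mathrm{var}}\!\big(p^{\mathrm{th}}_{t,s}\big) - C^{\mathrm{su}}_{t,s} \Big]
```

where $\pi_s$ is the probability of scenario $s$, $\lambda_{t,s}$ the electricity price, $p^{\mathrm{th}}_{t,s}$ and $p^{\mathrm{w}}_{t,s}$ the thermal and wind productions, and $C^{\mathrm{var}}$, $C^{\mathrm{su}}$ the variable and start-up costs; ramp and minimum up/down time limits enter as linear constraints on the thermal production and the unit-commitment binaries.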
Abstract:
In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, also because of their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, differently from other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
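For reference, the Tensor-Train format expresses a $d$-way tensor entrywise as a product of small matrix-valued cores:

```latex
\mathcal{A}(i_1, i_2, \ldots, i_d) \approx G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),
\qquad G_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}, \quad r_0 = r_d = 1,
```

so that, for bounded TT ranks $r_k$, storage grows linearly in $d$ instead of exponentially, which is the property exploited for compression and truncation.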
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community’s focus has recently shifted towards the interference management and exploitation paradigm to meet the increasing data traffic demands. In the Downlink (DL) and Feedspace (FS), LEO satellites with an on-board antenna array can offer service to numerous User Terminals (UTs) (VSAT or handheld) on the ground in FFR schemes by using cutting-edge digital beamforming techniques. Considering this setup, the adoption of an effective user scheduling approach is a critical aspect, given the unusually high density of user terminals on the ground compared to the number of antennas available on board the satellite. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are simultaneously served by the satellite via Space Division Multiplexing (SDM), and these different user groups are then served in different time slots via Time Division Multiplexing (TDM). This thesis addresses this problem by defining user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, focusing on the FS and user service link (i.e., DL) of a single MB-LEO satellite operating below 6 GHz, the user scheduling problem in Frequency Division Duplex (FDD) mode is addressed. The proposed state-of-the-art scheduling approaches are based on graph theory. The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR, and spectral efficiency.
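As an illustration of graph-based user grouping of the kind mentioned above (a sketch, not the thesis algorithm; the correlation threshold and group size are assumed parameters):

```python
import numpy as np

def greedy_graph_grouping(channels, corr_threshold=0.3, group_size=8):
    """Illustrative graph-based grouping for MU-MIMO scheduling: connect two
    users when their channel vectors are sufficiently decorrelated, then
    greedily grow groups of mutually compatible users."""
    n = len(channels)
    h = np.array([c / np.linalg.norm(c) for c in channels])
    compatible = np.abs(h @ h.conj().T) < corr_threshold    # adjacency matrix of the graph
    np.fill_diagonal(compatible, False)

    ungrouped, groups = set(range(n)), []
    while ungrouped:
        seed = ungrouped.pop()
        group = [seed]
        for u in list(ungrouped):
            if len(group) >= group_size:
                break
            if all(compatible[u, v] for v in group):
                group.append(u)
                ungrouped.remove(u)
        groups.append(group)        # each group is served in its own time slot (TDM)
    return groups
```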
Abstract:
Fruit crops are an important resource for food security since, besides being nutritious, they are also a source of natural antioxidant compounds, such as polyphenols and vitamins. However, fruit crops are also among the cultivations threatened by the harmful effects of climate change. This study had the objective of investigating the physiological effects of deficit irrigation on apple (2020-2021), sour cherry (2020-2021-2022) and apricot (2021-2022) trees, with a special focus on fruit nutraceutical quality. In each trial, the main physiological parameters were monitored along the growing season: i) stem and leaf water potentials; ii) leaf gas exchanges; iii) fruit and shoot growth. At harvest, fruit quality was evaluated especially in terms of fruit size, flesh firmness and soluble solids content. Moreover, the following analyses were performed: i) determination of the total phenolic content; ii) evaluation of the anthocyanidin concentration; and iii) an untargeted metabolomic study. Irrigation scheduling in apricot, apple and sour cherry is surely overestimated by the decision support system available in the Emilia-Romagna region. The water stress imposed on the different fruit crops, each during two years of study, showed as a general conclusion that the decrease in irrigation water did not lead to a straightforward decrease in plant physiological performance. This can be due to a miscalculation of the real water needs of the considered fruit crops. For this reason, there is a need to improve this important tool for appropriate irrigation water management. Furthermore, there is also a need to study the behaviour of fruit crops under more severe deficit irrigation. In fact, it is likely that the application of lower water amounts will enhance the synthesis of specialized metabolites, with positive repercussions on human health. These hypotheses must be verified.
Abstract:
The increase of railways near urban areas is a significant cause of discomfort for inhabitants due to train-induced vibration and noise. Vibration characteristics can vary widely according to the train type: for high-speed trains, if the train speed becomes comparable to the ground wave speed, the vibration level becomes significant; for freight trains, due to their heavier weight and lower speed, the vibration amplitudes are greater and propagate at a more considerable distance from the track; for urban tramways, although the vibration amplitude is relatively low, they can have a negative structural effect on the closest buildings [51]. Therefore, to dampen the vibration level, it is possible to carry out interventions both on the track and on the transmission path. This thesis aims to propose and numerically investigate a novel method to dampen train-induced vibrations along the transmission path. The method is called "resonant filled-trench (RFT)" and consists of a combination of expanded polystyrene (EPS) geofoam, which stabilizes the trench wall against collapse, and cylindrical inclusions embedded inside the geofoam, which act as resonators, reflectors, and attenuators. By means of finite element simulations, we show that up to 50% higher attenuation than the open trench is achievable after overcoming the resonance frequency of the inclusion, i.e., 35 Hz, which covers the frequency content of train-induced vibration. Moreover, depending on the filling material used for the inclusions, the trench depth can be reduced by up to 17% compared to the open trench while showing the same screening performance as the open trench. Also, an RFT with DS inclusion installed in dense sand soil shows a high hindrance performance (i.e., IL ≥ 6 dB) when the trench depth is larger than 0.5λ_R, while it is 0.6λ_R for the open trench.
Abstract:
This thesis analyses the new “Scheduling Agreement” ordering procedure within ACMA S.p.A., a manufacturer of automatic machines. The term Scheduling Agreement identifies an ordering arrangement with the supplier, who keeps the produced parts in its own warehouse for a maximum of 18 months. The agreement is based on the principle that the supplier gains greater continuity of supply, while ACMA S.p.A. benefits from savings in company investments and in procurement time. The main focus of the study is the proposal of stocking solutions in search of an optimum point that minimizes the risk of stock-out while at the same time keeping inventory levels to a minimum. The analysis starts with the identification of the products for which it is worthwhile to open a Scheduling Agreement. This evaluation is carried out by analysing two fundamental parameters: frequency and consumption. Several TO-BE cases are then analysed in order to find the best Lead Time that minimizes stock while avoiding stock-outs. These studies use the data extraction software “SAP”, with the data then analysed in Excel, and they provide a realistic assessment of how this procedure is managed at ACMA S.p.A. The final solution resolves the stock-out problem by introducing a “safety stock” calculated on future demand. In particular, by using a procurement Lead Time of 21 days, warehouse stock was reduced by 75% compared to the most conservative case, while still keeping the risk of stock-out low. The results obtained will provide an effective improvement of the company’s performance, and the model will be used in the future for the iterative application of the described solution, saving company time and funds.
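The abstract does not state how the safety stock is computed; a common textbook formula (an assumption here, not necessarily the thesis's calculation) combines a service-level factor with demand variability over the procurement lead time:

```python
from math import sqrt

def safety_stock(daily_demand_std, lead_time_days=21, z_service=1.65):
    """Textbook safety-stock formula (an assumption, not necessarily the thesis's
    calculation): service-level z-factor times the demand standard deviation
    scaled over the procurement lead time."""
    return z_service * daily_demand_std * sqrt(lead_time_days)

# Example: daily demand std of 4 units, 21-day lead time, ~95% service level
print(safety_stock(4.0))   # ≈ 30 units of safety stock
```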