960 results for Scheduling, heuristic algorithms, blocking flow shop
Abstract:
This article introduces schedulability analysis for global fixed priority scheduling with deferred preemption (gFPDS) for homogeneous multiprocessor systems. gFPDS is a superset of global fixed priority preemptive scheduling (gFPPS) and global fixed priority non-preemptive scheduling (gFPNS). We show how schedulability can be improved under gFPDS via an appropriate choice of priority assignment and final non-preemptive region lengths, and we provide algorithms that optimize schedulability in this way. Through an experimental evaluation we compare the performance of multiprocessor scheduling using the global approaches gFPDS, gFPPS, and gFPNS, as well as partitioned approaches employing FPDS, FPPS, and FPNS on each processor.
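The abstract highlights optimizing schedulability through the choice of priority assignment. As a minimal, hedged illustration of the standard technique in this literature, the sketch below implements Audsley-style optimal priority assignment (OPA) around a black-box schedulability test; the function `is_schedulable` and the task representation are assumptions, and the paper's actual algorithms additionally search over final non-preemptive region lengths.

```python
def audsley_opa(tasks, is_schedulable):
    """Audsley-style optimal priority assignment (a sketch).

    Returns a list ordered from lowest to highest priority, or None if
    no feasible assignment exists. `is_schedulable(task, others)` is a
    hypothetical black-box test: can `task` meet its deadline assuming
    all tasks in `others` have higher priority?
    """
    unassigned = list(tasks)
    assignment = []  # filled lowest priority first
    for _ in range(len(tasks)):
        for task in unassigned:
            others = [t for t in unassigned if t is not task]
            if is_schedulable(task, others):  # feasible at this level
                assignment.append(task)
                unassigned.remove(task)
                break
        else:
            return None  # no task can take the current priority level
    return assignment
```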
Abstract:
Accepted at the 13th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia 2015), Amsterdam, Netherlands.
Abstract:
Scheduling is one of the most important decisions in the operation of a production line. Within the scope of this dissertation, the scheduling problem was described and several methods for optimizing scheduling problems were identified. The single-machine case was studied by testing several instances with the objective of minimizing total weighted tardiness, applying a metaheuristic based on Local Search (LS) and two SB-based algorithms. The results obtained show that the SB-based algorithms produced solutions closer to the optimum than the LS-based algorithm. The results also support the hypothesis that there is no algorithm specific to scheduling problems: the best way to find a good-quality solution within a useful amount of time is to experiment with different algorithms and compare the performance of the solutions obtained.
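As a rough, hedged illustration of a local-search metaheuristic for the single-machine total weighted tardiness problem discussed above, the sketch below performs a simple adjacent-swap descent; the job data layout (`p` processing times, `d` due dates, `w` weights) and the neighbourhood choice are assumptions, not the dissertation's actual algorithm.

```python
import random

def weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                        # completion time of job j
        total += w[j] * max(0, t - d[j])  # penalize lateness past d[j]
    return total

def local_search(jobs, p, d, w, iters=10_000, seed=0):
    """Random adjacent-swap descent; a toy stand-in for the metaheuristic."""
    rng = random.Random(seed)
    best = list(jobs)
    best_cost = weighted_tardiness(best, p, d, w)
    for _ in range(iters):
        i = rng.randrange(len(best) - 1)
        cand = best[:]
        cand[i], cand[i + 1] = cand[i + 1], cand[i]  # swap neighbours
        cost = weighted_tardiness(cand, p, d, w)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```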
Abstract:
Thesis submitted in fulfilment of the requirements for the Degree of Master of Science in Computer Science
Abstract:
Document submitted for peer review. To be published in the Journal of Parallel and Distributed Computing. ISSN 0743-7315.
Abstract:
Scheduling, job shop, uncertainty, mixed (disjunctive) graph, stability analysis
Abstract:
Report for the scientific sojourn at the University of California at Berkeley from September 2007 to February 2008. Globalization, combined with the success of containerization, has brought about tremendous increases in the transportation of containers across the world. This leads to increasing container ship sizes, which place higher demands on seaport container terminals and their equipment. In this situation, the success of container terminals rests on a fast transhipment process with reduced costs. For these reasons it is necessary to optimize the terminal's processes. There are three main logistic processes in a seaport container terminal: loading and unloading of container ships, storage, and reception/delivery of containers from/to the hinterland. Moreover, there is an additional process that ensures the interconnection between the previous logistic activities: the internal transport subsystem. The aim of this paper is to optimize the internal transport cycle in a marine container terminal managed by straddle carriers, one of the most widely used container transfer technologies. Three subsystems are analyzed in detail: landside transportation, storage of containers in the yard, and quayside transportation. The conflicts and decisions that arise in these three subsystems are analytically investigated, and optimization algorithms are proposed. Moreover, simulation has been applied to TCB (Barcelona Container Terminal) to test these algorithms and to compare different straddle carrier operation strategies, such as single cycle versus double cycle, and different sizes of the handling equipment fleet. The simulation model is explained in detail and the main decision-making algorithms from the model are presented and formulated.
Abstract:
Defining an efficient training set is one of the most delicate phases in the success of remote sensing image classification routines. The complexity of the problem, limited temporal and financial resources, as well as high intraclass variance, can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each family, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
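As a concrete, hedged example of one widely used posterior probability-based heuristic from this family, the sketch below ranks unlabeled pixels by the margin between their two most probable classes (often called margin or breaking-ties sampling); the probability matrix is assumed to come from any classifier exposing class posteriors, such as scikit-learn's `predict_proba`.

```python
import numpy as np

def margin_sampling(proba, n_queries):
    """Rank unlabeled pixels by the margin between the two most likely
    classes (smaller margin = more uncertain) and return the indices of
    the `n_queries` pixels the user should label next.

    proba: (n_samples, n_classes) posterior probabilities.
    """
    part = np.sort(proba, axis=1)        # ascending per row
    margin = part[:, -1] - part[:, -2]   # best minus second-best class
    return np.argsort(margin)[:n_queries]
```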
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
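A minimal sketch of the bidirectional step described above, under strong assumptions: the restriction/prolongation operators, the coarse solver, and the objective `f` with gradient `grad` are all hypothetical placeholders. Only the coarse-grid-correction-as-search-direction idea with a backtracking line search is shown.

```python
def coarse_grid_step(x, f, grad, restrict, prolong, coarse_solve):
    """One two-level step in the FMG/OPT spirit: (approximately) solve a
    coarse version of the problem, prolong the correction to the fine
    grid, and treat it as a search direction scaled by a line search.
    `x` is a NumPy array; all operators are assumed callables.
    """
    xc = restrict(x)                 # move current iterate to coarse grid
    xc_new = coarse_solve(xc)        # approximately optimize coarse problem
    d = prolong(xc_new - xc)         # coarse correction as fine direction
    # Simple backtracking (Armijo) line search along d.
    alpha, fx, g = 1.0, f(x), grad(x)
    while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d) and alpha > 1e-8:
        alpha *= 0.5
    return x + alpha * d
```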
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). Estimation of debris flow magnitude was not attempted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology, and debris flow hazard initiation maps, as input to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing the areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of debris flow spreading, provided results suitable for consequent hazard and risk studies. However, for the validation and transferability of the parameters and results to other study areas, more testing is needed.
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using a digital elevation model (DEM) with a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. It was shown that the quality of the DEM is the most important parameter for obtaining reliable propagation results, but also for identifying potential debris flow sources.
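As a hedged illustration of the multiple-flow-direction spreading used by models of this kind (Flow-R implements a modified Holmgren-style weighting, among other algorithms), the sketch below computes the fraction of flow routed to each downslope neighbour of a grid cell; the neighbour geometry and the exponent value are assumptions.

```python
import numpy as np

def mfd_weights(dz, dist, x=4.0):
    """Holmgren-style multiple-flow-direction weights for one cell.

    dz:   elevation drops to the 8 neighbours (positive = downslope)
    dist: horizontal distances to those neighbours
    x:    exponent controlling divergence (x=1 very divergent,
          large x approaches single-direction D8 routing)
    Returns the fraction of flow sent to each neighbour.
    """
    slope = np.maximum(dz / dist, 0.0)   # ignore upslope neighbours
    w = slope ** x
    s = w.sum()
    return w / s if s > 0 else w         # all-zero if the cell is a pit
```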
Abstract:
Genetically engineered bioreporters are an excellent complement to traditional methods of chemical analysis. The application of fluorescence flow cytometry to detection of bioreporter response enables rapid and efficient characterization of bacterial bioreporter population response on a single-cell basis. In the present study, intrapopulation response variability was used to obtain higher analytical sensitivity and precision. We have analyzed flow cytometric data for an arsenic-sensitive bacterial bioreporter using an artificial neural network-based adaptive clustering approach (a single-layer perceptron model). Results for this approach are far superior to other methods that we have applied to this fluorescent bioreporter (e.g., the arsenic detection limit is 0.01 microM, substantially lower than for other detection methods/algorithms). The approach is highly efficient computationally and can be implemented on a real-time basis, thus having potential for future development of high-throughput screening applications.
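For orientation, the sketch below shows a generic single-layer perceptron update rule, the model class named in the abstract; it is not the authors' adaptive-clustering implementation, and the training-data layout is an assumption.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic single-layer perceptron for labels in {-1, +1}.

    X: (n_samples, n_features) array, e.g. per-cell fluorescence features.
    Updates weights only on misclassified samples (Rosenblatt rule).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi.dot(w) + b) <= 0:  # misclassified: nudge boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b
```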
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol in the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) in the physical layer.
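As a loose, hedged illustration of demand-assigned capacity allocation, the toy sketch below grants MF-TDMA slots greedily by request size; the paper's heuristic is more elaborate (it also handles ACM modes, carrier frequencies, and periodicity), and every name here is hypothetical.

```python
def assign_slots(requests, frame_slots):
    """Greedy per-frame grant of MF-TDMA slots to capacity requests.

    requests:    dict mapping terminal id -> slots requested this frame
    frame_slots: total slots available in the superframe
    Larger requests are served first; grants are truncated when the
    frame runs out of capacity.
    """
    grant = {}
    remaining = frame_slots
    for term, req in sorted(requests.items(), key=lambda kv: -kv[1]):
        grant[term] = min(req, remaining)
        remaining -= grant[term]
    return grant

# Example: three terminals competing for a 10-slot frame.
# assign_slots({"t1": 6, "t2": 4, "t3": 3}, 10) -> {'t1': 6, 't2': 4, 't3': 0}
```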
Abstract:
Introduction: According to guidelines, patients with coronary artery disease (CAD) should undergo revascularization if myocardial ischemia is present. While coronary angiography (CXA) allows the morphological assessment of CAD, fractional flow reserve (FFR) has proved to be a complementary invasive test for assessing the functional significance of CAD, i.e. for detecting ischemia. Perfusion Cardiac Magnetic Resonance (CMR) has turned out to be a robust non-invasive technique for assessing myocardial ischemia. The objective is to compare the cost-effectiveness ratio, defined as the cost per patient correctly diagnosed, of two algorithms used to diagnose hemodynamically significant CAD in relation to the pretest likelihood of CAD: 1) CMR to assess ischemia before referring positive patients to CXA (CMR + CXA); 2) CXA in all patients combined with an FFR test in patients with angiographically positive stenoses (CXA + FFR). Methods: The costs, evaluated from the health care system perspective in the Swiss, German, United Kingdom (UK) and United States (US) contexts, included public prices of the different tests considered as outpatient procedures, the costs of complications, and the costs induced by diagnostic errors (false negatives). The effectiveness criterion was the ability to accurately identify a patient with significant CAD. Test performances used in the model were based on the clinical literature. Using a mathematical model, we compared the cost-effectiveness ratio of both algorithms for hypothetical patient cohorts with different pretest likelihoods of CAD. Results: The cost-effectiveness ratio decreased hyperbolically with increasing pretest likelihood of CAD for both strategies. CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 67% in Germany, 83% in the UK and 84% in the US, with costs of CHF 5'794, EUR 1'472, £2'685 and $2'126 per patient correctly diagnosed. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. Implications for the health care system/professionals/patients/society: These results facilitate decision making for the clinical use of new generations of imaging procedures to detect ischemia. They show to what extent the cost-effectiveness of diagnosing CAD depends on the prevalence of the disease.
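A simplified, hedged reconstruction of the cost model described above: the sketch computes the cost per patient correctly identified as having CAD for a two-step strategy in which a first test gates a confirmatory second test. All parameter names and example values are assumptions; the snippet only reproduces the qualitative hyperbolic decrease with pretest likelihood.

```python
def ce_ratio(pretest, cost_first, cost_second, se1, sp1, se2, cost_fn=0.0):
    """Cost per patient correctly identified as having CAD (a sketch).

    pretest:     pretest likelihood (prevalence) of significant CAD
    cost_first:  cost of the first test (e.g. CMR or CXA)
    cost_second: cost of the confirmatory test (e.g. CXA or FFR)
    se1/sp1:     sensitivity/specificity of the first test
    se2:         sensitivity of the confirmatory test
    cost_fn:     downstream cost charged per false negative
    """
    pos_first = pretest * se1 + (1 - pretest) * (1 - sp1)  # referred on
    expected_cost = (cost_first
                     + pos_first * cost_second
                     + pretest * (1 - se1) * cost_fn)      # missed cases
    true_pos = pretest * se1 * se2  # correctly diagnosed CAD patients
    return expected_cost / true_pos

# The ratio falls hyperbolically as pretest likelihood rises, e.g.:
# [round(ce_ratio(p, 1000, 2000, 0.9, 0.8, 0.95)) for p in (0.2, 0.5, 0.8)]
```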
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models called S-systems have recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have been recently proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favorably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
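For reference, an S-system has the canonical form dX_i/dt = α_i ∏_j X_j^{g_ij} − β_i ∏_j X_j^{h_ij}, where α, β are rate constants and g, h are kinetic orders. The sketch below evaluates this right-hand side with NumPy; estimating α, β, g, h from time-course profiles is the hard problem the abstract addresses, and this snippet only shows the model being fitted.

```python
import numpy as np

def s_system_rhs(X, alpha, beta, G, H):
    """Right-hand side of an S-system ODE:
        dX_i/dt = alpha_i * prod_j X_j**G[i, j] - beta_i * prod_j X_j**H[i, j]

    X:           state vector of positive concentrations, shape (n,)
    alpha, beta: production/degradation rate constants, shape (n,)
    G, H:        kinetic-order matrices, shape (n, n)
    """
    prod_g = np.prod(X ** G, axis=1)   # production power-law terms
    prod_h = np.prod(X ** H, axis=1)   # degradation power-law terms
    return alpha * prod_g - beta * prod_h
```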