988 results for Multicommodity flow algorithms
Abstract:
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques will be examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
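As a concrete illustration of the local (patchwise) differential approach described above, here is a minimal sketch in the spirit of Lucas-Kanade: the brightness-constancy constraint Ix*u + Iy*v + It = 0 is solved by least squares over a patch. Frames are assumed to be grayscale NumPy arrays; the function name and patch size are illustrative, not taken from the paper.

```python
import numpy as np

def patch_flow(I0, I1, cx, cy, half=7):
    """Estimate the flow (u, v) on a patch centred at (cx, cy) from the
    brightness-constancy constraint Ix*u + Iy*v + It = 0 (least squares)."""
    # Matched derivative estimates: central differences in space,
    # a simple frame difference in time.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    win = (slice(cy - half, cy + half + 1), slice(cx - half, cx + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    # Overdetermined system; assumes the patch has enough gradient
    # structure for A to have full rank (the "slow spatial variation"
    # constraint keeps a single (u, v) valid across the patch).
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

A larger `half` gives larger support, which helps the derivative estimates but remains valid only while the flow varies slowly across the patch, as the abstract notes.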
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, an identical device to that which is under attack, which allows him to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis, a detailed investigation of template attacks is conducted, along with how the selection of various attack parameters practically affects the efficiency of secret-key recovery, as well as an examination of the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, with their respective merits examined with regard to attack efficiency.
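To make the profiling idea concrete, here is a minimal sketch of the two phases of a template attack, assuming labelled power traces from a profiling device are available as NumPy arrays; all names are illustrative. Templates are multivariate Gaussians fitted per class (e.g. an intermediate value), and matching scores a target trace by log-likelihood.

```python
import numpy as np

def build_templates(traces, labels):
    """Profiling phase: fit a multivariate Gaussian (mean, covariance)
    to the traces observed for each class on the profiling device."""
    return {c: (traces[labels == c].mean(axis=0),
                np.cov(traces[labels == c], rowvar=False))
            for c in np.unique(labels)}

def match(trace, templates):
    """Attack phase: score a single target-device trace against every
    template by Gaussian log-likelihood; the best class is the key guess."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in templates.items():
        cov = cov + 1e-9 * np.eye(len(mu))   # guard against singularity
        d = trace - mu
        ll = -0.5 * (d @ np.linalg.solve(cov, d) + np.linalg.slogdet(cov)[1])
        if ll > best_ll:
            best, best_ll = c, ll
    return best
```

In practice traces are first compressed to a few points of interest so the covariance estimates stay well conditioned; the small diagonal term above is only a crude guard for that.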
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; we design algorithms that strive to balance the energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework to analyze non-clairvoyant algorithms for unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality for bounding the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
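As a toy illustration of the flow-time objective that runs through this thesis (not an algorithm from it), the sketch below simulates a preemptive highest-density-first rule on a single machine and reports total weighted flow time; the job data and the rule are illustrative choices.

```python
import heapq

def weighted_flow_time(jobs):
    """Discrete-time simulation of preemptive Highest-Density-First
    (priority = weight / original size) on one machine. A job's flow
    time is its completion time minus its release time.
    jobs: list of (release, size, weight) with integer times."""
    t, done, total, i = 0, 0, 0.0, 0
    pending = sorted(jobs)                 # by release time
    active = []                            # heap of (-density, release, remaining, weight)
    while done < len(jobs):
        while i < len(pending) and pending[i][0] <= t:
            r, p, w = pending[i]
            heapq.heappush(active, (-w / p, r, p, w))
            i += 1
        if active:
            key, r, rem, w = heapq.heappop(active)
            rem -= 1                       # run the densest job one step
            if rem == 0:
                total += w * (t + 1 - r)   # weighted flow time of this job
                done += 1
            else:
                heapq.heappush(active, (key, r, rem, w))
        t += 1
    return total

print(weighted_flow_time([(0, 3, 1.0), (1, 1, 2.0)]))  # -> 6.0
```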
Abstract:
This paper considers a variant of the classical problem of minimizing makespan in a two-machine flow shop. In this variant, each job has three operations, where the first operation must be performed on the first machine, the second operation can be performed on either machine but cannot be preempted, and the third operation must be performed on the second machine. The NP-hard nature of the problem motivates the design and analysis of approximation algorithms. It is shown that a schedule in which the operations are sequenced arbitrarily, but without inserted machine idle time, has a worst-case performance ratio of 2. Also, an algorithm that constructs four schedules and selects the best is shown to have a worst-case performance ratio of 3/2. A polynomial time approximation scheme (PTAS) is also presented.
Abstract:
A number of two-dimensional staggered unstructured discretisation schemes for the solution of fluid flow and heat transfer problems have been developed. All schemes store and solve velocity vector components at cell faces, with scalar variables solved at cell centres. The velocity is resolved into face-normal and face-parallel components, and the various schemes investigated differ in the treatment of the parallel component. Steady-state and time-dependent fluid flow and thermal energy equations are solved with the well-known pressure-correction scheme, SIMPLE, employed to couple continuity and momentum. The numerical methods developed are tested on well-known benchmark cases: the Lid-Driven Cavity, Natural Convection in a Cavity and Melting of Gallium in a rectangular domain. The results obtained are shown to be comparable to the benchmarks, with accuracy dependent on scheme selection.
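The staggered arrangement is easy to picture on a uniform Cartesian analogue of the schemes described (the paper's grids are unstructured; this simplified sketch only shows where the variables live and what the SIMPLE pressure correction must drive to zero):

```python
import numpy as np

nx, ny, h = 8, 8, 1.0 / 8   # cells per side and uniform spacing (illustrative)
p = np.zeros((ny, nx))      # scalar variables (pressure, temperature) at cell centres
u = np.zeros((ny, nx + 1))  # face-normal x-velocity on vertical faces
v = np.zeros((ny + 1, nx))  # face-normal y-velocity on horizontal faces

def divergence(u, v, h):
    """Cell-centred continuity residual built from face-normal velocities;
    the SIMPLE pressure-correction loop iterates until this is (nearly) zero."""
    return (u[:, 1:] - u[:, :-1]) / h + (v[1:, :] - v[:-1, :]) / h
```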
Abstract:
We study the two-machine flow shop problem with an uncapacitated interstage transporter. The jobs have to be split into batches, and upon completion on the first machine, each batch has to be shipped to the second machine by a transporter. The best known heuristic for the problem is a –approximation algorithm that outputs a two-shipment schedule. We design a –approximation algorithm that finds schedules with at most three shipments, and this ratio cannot be improved, unless schedules with more shipments are created. This improvement is achieved due to a thorough analysis of schedules with two and three shipments by means of linear programming. We formulate problems of finding an optimal schedule with two or three shipments as integer linear programs and develop strongly polynomial algorithms that find solutions to their continuous relaxations with a small number of fractional variables.
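The following brute-force sketch illustrates the two-shipment model only (it is not the paper's LP-based algorithm): with a fixed job order on the first machine, every split into two batches is tried, assuming a one-way travel time tau and that the transporter must return before carrying the second batch.

```python
from itertools import accumulate

def best_two_shipment(p1, p2, tau):
    """Best two-shipment makespan over all splits of a fixed job order.
    p1, p2: processing times on machines 1 and 2; tau: one-way travel
    time of the (uncapacitated) transporter. Illustrative only."""
    c1 = list(accumulate(p1))                  # M1 completion times
    best = float("inf")
    for k in range(1, len(p1)):                # batch 1 = jobs 0 .. k-1
        dep1 = c1[k - 1]                       # batch 1 leaves M1
        m2_free = dep1 + tau + sum(p2[:k])     # M2 finishes batch 1
        arr2 = max(c1[-1], dep1 + 2 * tau) + tau   # M1 done, transporter back
        best = min(best, max(m2_free, arr2) + sum(p2[k:]))
    return best
```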
Abstract:
In this note, we consider the scheduling problem of minimizing the sum of the weighted completion times on a single machine with one non-availability interval on the machine under the non-resumable scenario. Together with a recent 2-approximation algorithm designed by Kacem [I. Kacem, Approximation algorithm for the weighted flow-time minimization on a single machine with a fixed non-availability interval, Computers & Industrial Engineering 54 (2008) 401–410], this paper is the first successful attempt to develop a constant-ratio approximation algorithm for this problem. We present two approaches to designing such an algorithm. Our best algorithm guarantees a worst-case performance ratio of 2+ε. © 2008 Elsevier B.V. All rights reserved.
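For intuition, here is a simple WSPT-based heuristic for the model (illustrative only; it is neither Kacem's 2-approximation nor the (2+ε)-approximation of this note): jobs are taken in WSPT order, and any job that no longer fits before the non-availability interval [s, e] is deferred past it, reflecting the non-resumable scenario.

```python
def wspt_with_gap(jobs, s, e):
    """Schedule jobs (p, w) by WSPT around a non-availability interval
    [s, e]; a job that does not fit before s runs after e instead
    (non-resumable). Returns the total weighted completion time."""
    jobs = sorted(jobs, key=lambda j: j[0] / j[1])   # WSPT: p/w ascending
    t_before, t_after, total = 0.0, e, 0.0
    for p, w in jobs:
        if t_before + p <= s:          # fits before the gap
            t_before += p
            total += w * t_before
        else:                          # deferred until after the gap
            t_after += p
            total += w * t_after
    return total
```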
Abstract:
This paper presents a new method for complex power flow tracing that can be used for allocating the transmission loss to loads or generators. Two algorithms for upstream tracing (UST) and downstream tracing (DST) of the complex power are introduced. The UST algorithm traces the complex power extracted by loads back to source nodes and assigns a fraction of the complex power flow through each line to each load. The DST algorithm traces the output of the generators down to the sink nodes, determining the contributions of each generator to the complex power flow and losses through each line. While doing so, active- and reactive-power flows as well as complex losses are considered simultaneously, not separately as most of the available methods do. Transmission losses are taken into consideration during power flow tracing. Unbundling of line losses is carried out using an equation which has a physical basis and considers the coupling between active- and reactive-power flows as well as the cross effects of active and reactive powers on active and reactive losses. The tracing algorithms introduced can be considered direct to a good extent, as there is no need for an exhaustive search to determine the flow paths; these are determined in a systematic way during the course of tracing. Results of application of the proposed method are also presented.
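The proportional-sharing principle behind such tracing schemes can be sketched for the simpler active-power-only, lossless case (the method above traces complex power and handles losses, so this is only an analogue; the data structures are illustrative). Nodes must be supplied in upstream-to-downstream order.

```python
def upstream_shares(order, lines, gen):
    """Upstream tracing by proportional sharing on a lossless, acyclic
    active-power flow graph. order: nodes, upstream first;
    lines: {(i, j): MW on line i -> j}; gen: {node: MW injected}.
    Returns share[i][g], the fraction of the power flowing through
    node i that originates at generator g."""
    inflow = {i: gen.get(i, 0.0) for i in order}
    for (i, j), f in lines.items():
        inflow[j] += f
    share = {i: {} for i in order}
    for i in order:
        if gen.get(i, 0.0) > 0:
            share[i][i] = gen[i] / inflow[i]   # own injection
        for (a, b), f in lines.items():
            if a == i and inflow[i] > 0:       # pass shares downstream,
                for g, s in share[i].items():  # in proportion to line flows
                    share[b][g] = share[b].get(g, 0.0) + s * f / inflow[b]
    return share

# Two generators feeding a common load node:
print(upstream_shares(["g1", "g2", "load"],
                      {("g1", "load"): 60.0, ("g2", "load"): 40.0},
                      {"g1": 60.0, "g2": 40.0}))
# -> the load draws 60% of its power from g1 and 40% from g2
```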
Abstract:
The characterization of thermocouple sensors for temperature measurement in variable flow environments is a challenging problem. In this paper, novel difference equation-based algorithms are presented that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. Linear and non-linear least squares formulations of the characterization problem are introduced and compared in terms of their computational complexity, robustness to noise and statistical properties. With the aid of this analysis, least squares optimization procedures that yield unbiased estimates are identified. The main contribution of the paper is the development of a linear two-parameter generalized total least squares formulation of the sensor characterization problem. Monte Carlo simulation results are used to support the analysis.
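One linear least-squares formulation consistent with this description can be sketched as follows, assuming each thermocouple behaves as a first-order sensor y_i[k] = a_i*y_i[k-1] + (1 - a_i)*u[k-1] exposed to the same gas temperature u. Eliminating the unmeasured u yields a relation linear in the extended parameters (a1, a2, a1*a2); ordinary least squares is used here, whereas the paper develops a generalized total least-squares formulation to avoid noise-induced bias.

```python
import numpy as np

def estimate_time_constants(y1, y2, Ts):
    """In situ characterization of two first-order thermocouples with
    differing time constants from their outputs alone. Eliminating the
    common input u gives, for each sample k:
      y1[k] - y2[k] = a1*(y1[k-1] - y2[k]) + a2*(y1[k] - y2[k-1])
                      - a1*a2*(y1[k-1] - y2[k-1]),
    which is linear in (a1, a2, a1*a2)."""
    b = y1[1:] - y2[1:]
    A = np.column_stack([y1[:-1] - y2[1:],
                         y1[1:] - y2[:-1],
                         -(y1[:-1] - y2[:-1])])
    a1, a2, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return -Ts / np.log(a1), -Ts / np.log(a2)   # a_i = exp(-Ts / tau_i)
```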
Abstract:
Image segmentation plays an important role in the analysis of retinal images, as the extraction of the optic disc provides important cues for accurate diagnosis of various retinopathic diseases. In recent years, gradient vector flow (GVF) based algorithms have been used successfully to segment a variety of medical imagery. However, due to the compromise of internal and external energy forces within the resulting partial differential equations, these methods can lead to less accurate segmentation results in certain cases. In this paper, we propose the use of a new mean shift-based GVF segmentation algorithm that drives the internal/external energies towards the correct direction. The proposed method incorporates a mean shift operation within the standard GVF cost function to arrive at a more accurate segmentation. Experimental results on a large dataset of retinal images demonstrate that the presented method optimally detects the border of the optic disc.
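For reference, the classical GVF field that such methods build on (Xu and Prince's diffusion of the edge-map gradient) can be computed as below; this is the standard iteration, not the mean shift-based variant proposed in the paper, and the parameters are illustrative.

```python
import numpy as np

def gvf(f, mu=0.2, iters=200, dt=0.5):
    """Gradient vector flow field of an edge map f (assumed scaled to
    [0, 1]): iterate u_t = mu*Lap(u) - (u - f_x)*(f_x^2 + f_y^2), and
    likewise for v. Periodic boundaries are used for brevity."""
    fy, fx = np.gradient(f)
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    for _ in range(iters):
        u += dt * (mu * lap(u) - (u - fx) * mag2)
        v += dt * (mu * lap(v) - (v - fy) * mag2)
    return u, v
```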
Abstract:
This paper presents a lookup circuit with advanced memory techniques and algorithms that examines network packet headers at high throughput rates. Hardware solutions and test scenarios are introduced to evaluate the proposed approach. The experimental results show that the proposed lookup circuit is able to achieve at least 39 million packet header lookups per second, which facilitates the application of next-generation stateful packet classification at internet traffic throughput rates beyond 20 Gbps.
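Functionally, the operation being accelerated can be pictured as an exact-match lookup on packet header fields; a software analogue is sketched below, assuming a 5-tuple flow table (a common choice for stateful classification, though the abstract does not specify the key format). The paper's contribution is the memory architecture that sustains this in hardware.

```python
from typing import NamedTuple, Optional

class FiveTuple(NamedTuple):
    src_ip: int
    dst_ip: int
    src_port: int
    dst_port: int
    proto: int

class FlowTable:
    """Exact-match 5-tuple lookup: a software analogue of the
    header-lookup function the circuit implements in hardware."""
    def __init__(self) -> None:
        self._table: dict[FiveTuple, int] = {}   # header -> flow-state id

    def insert(self, key: FiveTuple, state: int) -> None:
        self._table[key] = state

    def lookup(self, key: FiveTuple) -> Optional[int]:
        return self._table.get(key)
```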
Abstract:
In this paper, we present a methodology for automatically implementing a complete Digital Signal Processing (DSP) system onto a heterogeneous network including Field Programmable Gate Arrays (FPGAs). The methodology aims to allow design refinement and real-time verification at the system level. The DSP application is constructed in the form of a Data Flow Graph (DFG), which provides an entry point to the methodology. The netlist for the parts that are mapped onto the FPGA(s), together with the corresponding software and hardware Application Programming Interface (API), is also generated. Using a set of case studies, we demonstrate that the design and development time can be significantly reduced using the methodology developed.
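The DFG entry point can be pictured as an adjacency mapping over which mapping and code generation are performed; a minimal sketch with a topological evaluation order follows (node names are illustrative, and the bit-width and hardware/software placement annotations a real flow would carry are omitted).

```python
from graphlib import TopologicalSorter

# Illustrative DFG for a small DSP kernel: node -> set of predecessors.
dfg = {
    "mul1": {"in_a", "coeff"},
    "mul2": {"in_b", "coeff"},
    "add":  {"mul1", "mul2"},
    "out":  {"add"},
}

# A valid evaluation/scheduling order of the DFG (inputs first).
order = list(TopologicalSorter(dfg).static_order())
print(order)
```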
Abstract:
In recent years, gradient vector flow (GVF) based algorithms have been successfully used to segment a variety of 2-D and 3-D imagery. However, due to the compromise of internal and external energy forces within the resulting partial differential equations, these methods may lead to biased segmentation results. In this paper, we propose MSGVF, a mean shift-based GVF segmentation algorithm that can successfully locate the correct borders. MSGVF is developed so that when the contour reaches equilibrium, the various forces resulting from the different energy terms are balanced. In addition, the smoothness constraint of image pixels is kept so that over- or under-segmentation can be reduced. Experimental results on publicly accessible datasets of dermoscopic and optic disc images demonstrate that the proposed method effectively detects the borders of the objects of interest.
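The mean shift operation that MSGVF embeds is, in its generic standalone form, a kernel-weighted mode-seeking update; a minimal sketch with a Gaussian kernel follows (bandwidth and names are illustrative, and this is not the coupled MSGVF update itself).

```python
import numpy as np

def mean_shift(x, points, bandwidth=1.0, iters=50, tol=1e-6):
    """Mean shift mode seeking: repeatedly move x to the Gaussian
    kernel-weighted mean of the sample points until convergence.
    points: (n, d) array of samples; x: (d,) starting point."""
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth**2))
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```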
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings in relation to the procedures employed in the teaching of Mathematics to students with learning problems are discussed in detail.
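The explicit algorithm at issue, subtraction with regrouping, can be written out step by step much as the flow charts used in the study present it; this rendering is illustrative and assumes the minuend is at least the subtrahend.

```python
def subtract_with_regrouping(top, bottom):
    """Column subtraction with regrouping (borrowing), one digit column
    at a time, rightmost first. Assumes top >= bottom >= 0."""
    top, bottom = list(map(int, str(top))), list(map(int, str(bottom)))
    bottom = [0] * (len(top) - len(bottom)) + bottom   # align the columns
    result = []
    for i in range(len(top) - 1, -1, -1):
        if top[i] < bottom[i]:     # regroup: borrow 1 from the column
            top[i] += 10           # to the left, worth 10 in this column
            top[i - 1] -= 1
        result.append(top[i] - bottom[i])
    return int("".join(map(str, reversed(result))))

print(subtract_with_regrouping(52, 27))  # -> 25
```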