934 results for Branch and bound algorithms
Abstract:
The basic multi-vehicle arc-routing problem is the Capacitated Arc Routing Problem (CARP). Practical applications of the CARP are found, for example, in waste collection and mail delivery. The objective is to compute a minimum-cost set of routes that services all required edges while respecting the vehicle capacity. In this thesis, a cut-first branch-and-price-second method is developed. In the first phase, cutting planes are generated and added to the master problem of the second phase. The subproblem is a shortest-path problem with resource constraints and is solved to supply new columns for the master problem. Integer CARP solutions are guaranteed by a new hierarchical branching scheme. Extensive computational studies demonstrate the effectiveness of this algorithm. Combined location and arc-routing problems allow a more realistic modeling of delivery options in mail delivery. This thesis presents two mathematical models each for Park and Loop and for Park and Loop with Curbline. The models for each problem differ in how feasible transfer routes are modeled. While the first model type uses subtour elimination constraints, the second model type employs flow variables and flow conservation constraints. The computational study shows that a MIP solver can often solve the second model type in shorter computation time or, when the time limit is reached, delivers better objective function values.
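The pricing step named above, a shortest-path problem with resource constraints, can be illustrated by a small label-setting sketch in Python. The network, capacity, and dual values below are invented toy data and the sketch does not reproduce the thesis's implementation; it only shows how labels of (reduced cost, accumulated load) are extended arc by arc and pruned by dominance, with a negative reduced cost signalling a new column for the master problem.

# Toy pricing sketch: label-setting shortest path with a load resource (SPPRC).
# Arc data: (travel cost, demand picked up when the arc is serviced, dual value
# from the master LP). All numbers are invented for illustration; the graph is acyclic.
arcs = {
    ("s", "a"): (2.0, 3, 4.0),
    ("s", "b"): (3.0, 2, 1.0),
    ("a", "b"): (1.0, 4, 6.0),
    ("a", "t"): (4.0, 0, 0.0),
    ("b", "t"): (2.0, 5, 5.0),
}
CAPACITY = 8  # vehicle capacity (the resource limit)

def dominated(cost, load, labels):
    # A new label is dominated if an existing one is at least as good in cost and load.
    return any(c <= cost and q <= load for c, q, _ in labels)

def spprc(source="s", sink="t"):
    nodes = {u for u, _ in arcs} | {v for _, v in arcs}
    labels = {n: [] for n in nodes}
    labels[source] = [(0.0, 0, [source])]
    stack = [(0.0, 0, [source])]
    while stack:
        cost, load, path = stack.pop()
        for (u, v), (travel, demand, dual) in arcs.items():
            if u != path[-1]:
                continue
            new_load = load + demand
            if new_load > CAPACITY:               # resource feasibility check
                continue
            new_cost = cost + travel - dual       # reduced cost of servicing the arc
            if dominated(new_cost, new_load, labels[v]):
                continue
            labels[v] = [l for l in labels[v]     # drop labels the new one dominates
                         if not (new_cost <= l[0] and new_load <= l[1])]
            label = (new_cost, new_load, path + [v])
            labels[v].append(label)
            stack.append(label)
    return min(labels[sink]) if labels[sink] else None

# A route with negative reduced cost would enter the master problem as a new column.
print("best reduced-cost route:", spprc())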
Abstract:
This thesis analyzes an optimization problem posed by retail stores that need to select and arrange their products in the shop. The problem arises from the need to maximize the total expected profit of the displayed products by finding a shelf location for each of them. The products are partitioned into departments, from each of which exactly one item must be selected and displayed. In addition, constraints on product placement and compatibility can be expressed. The resulting problem is a generalization of the well-known Multiple-Choice Knapsack Problem and Multiple Knapsack Problem. An exhaustive literature search showed that this problem has not yet been studied. The problem was therefore formalized as an integer linear programming model. An exact algorithm based on column generation and branch and price is proposed for its solution. Four different models were formulated for solving the pricing problem on which the column generation is based, in order to identify the most efficient one. Three of the four proposed models have comparable performance, while the last one proved to be less efficient. The results obtained show that the proposed solution method is suitable for instances of small to medium size.
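For context on the building block the abstract names, here is a minimal dynamic program for the Multiple-Choice Knapsack Problem (exactly one item per department under a single capacity), written with invented departments, profits, and widths; the branch-and-price method of the thesis itself is not reproduced.

# Minimal Multiple-Choice Knapsack DP: pick exactly one item per department
# so that total width fits the shelf capacity and total profit is maximal.
# Toy data; each item is (profit, width).
departments = [
    [(6, 2), (10, 4), (12, 6)],   # department 0
    [(3, 1), (7, 3)],             # department 1
    [(5, 2), (9, 5)],             # department 2
]
CAPACITY = 9

NEG = float("-inf")
dp = [NEG] * (CAPACITY + 1)
dp[0] = 0.0                        # before any department: zero width, zero profit
choice = []                        # choice[d][w] = (previous width, item index) used to reach dp[w]

for items in departments:
    new_dp = [NEG] * (CAPACITY + 1)
    picked = [None] * (CAPACITY + 1)
    for w in range(CAPACITY + 1):
        if dp[w] == NEG:
            continue
        for idx, (profit, width) in enumerate(items):
            w2 = w + width
            if w2 <= CAPACITY and dp[w] + profit > new_dp[w2]:
                new_dp[w2] = dp[w] + profit
                picked[w2] = (w, idx)
    dp, choice = new_dp, choice + [picked]

best_w = max(range(CAPACITY + 1), key=lambda w: dp[w])
print("best profit:", dp[best_w])

# Backtrack the chosen item for each department.
w, selection = best_w, []
for picked in reversed(choice):
    prev_w, idx = picked[w]
    selection.append(idx)
    w = prev_w
print("chosen item per department:", list(reversed(selection)))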
Abstract:
Clenshaw’s recurrence formula is used to derive recursive algorithms for the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT). The recursive DCT algorithm presented here requires one fewer delay element per coefficient and one fewer multiply operation per coefficient compared with two recently proposed methods. Clenshaw’s recurrence formula provides a unified development for the recursive DCT and IDCT algorithms. The recursive algorithms apply to arbitrary transform lengths and are appropriate for VLSI implementation.
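A minimal sketch of the idea follows, assuming the common unscaled DCT-II definition X[m] = sum_n x[n] cos(pi m (2n+1) / (2N)); it applies Clenshaw's backward recurrence to that sum and checks the result against direct summation, without reproducing the hardware-oriented recursion of the paper.

import math

def dct_clenshaw(x):
    # Unscaled DCT-II via Clenshaw's recurrence: X[m] = sum_n x[n]*cos(pi*m*(2n+1)/(2N)).
    N = len(x)
    X = []
    for m in range(N):
        theta = math.pi * m / N
        c = 2.0 * math.cos(theta)
        b1 = b2 = 0.0                     # b_{k+1}, b_{k+2}
        for xn in reversed(x):            # backward recurrence over the samples
            b0 = xn + c * b1 - b2
            b2, b1 = b1, b0
        # For phi_n = cos((n + 1/2)*theta) the sum collapses to cos(theta/2) * (b_0 - b_1).
        X.append(math.cos(theta / 2.0) * (b1 - b2))
    return X

def dct_direct(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * m * (2 * n + 1) / (2 * N)) for n in range(N))
            for m in range(N)]

x = [1.0, 2.0, 3.0, 5.0, 8.0]
a, b = dct_clenshaw(x), dct_direct(x)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
print("Clenshaw DCT matches direct summation:", [round(v, 4) for v in a])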
Abstract:
A main field in biomedical optics research is diffuse optical tomography, where intensity variations of the transmitted light traversing through tissue are detected. Mathematical models and reconstruction algorithms based on finite element methods and Monte Carlo simulations describe the light transport inside the tissue and determine differences in absorption and scattering coefficients. Precise knowledge of the sample's surface shape and orientation is required to provide boundary conditions for these techniques. We propose an integrated method based on structured light three-dimensional (3-D) scanning that provides detailed surface information of the object, which is usable for volume mesh creation and allows the normalization of the intensity dispersion between surface and camera. The experimental setup is complemented by polarization difference imaging to avoid overlaying byproducts caused by inter-reflections and multiple scattering in semitransparent tissue.
Abstract:
The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires, using means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means 1) that the privacy issues that we face online are not radically different from those of the pre-Internet era, and 2) that we need to reconceive of close reading as an activity of which both humans and computer algorithms are capable.
Abstract:
Electroencephalograms (EEG) are often contaminated with high amplitude artifacts limiting the usability of data. Methods that reduce these artifacts are often restricted to certain types of artifacts, require manual interaction or large training data sets. Within this paper we introduce a novel method, which is able to eliminate many different types of artifacts without manual intervention. The algorithm first decomposes the signal into different sub-band signals in order to isolate different types of artifacts into specific frequency bands. After signal decomposition with principal component analysis (PCA) an adaptive threshold is applied to eliminate components with high variance corresponding to the dominant artifact activity. Our results show that the algorithm is able to significantly reduce artifacts while preserving the EEG activity. Parameters for the algorithm do not have to be identified for every patient individually making the method a good candidate for preprocessing in automatic seizure detection and prediction algorithms.
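An illustrative sketch of this style of pipeline is given below on synthetic data: band-pass filters split the multichannel signal into sub-bands, PCA is applied per band, and components whose variance exceeds a simple adaptive threshold are removed. The filter orders, band edges, and threshold rule are assumptions for illustration only, not the authors' parameters.

import numpy as np
from scipy.signal import butter, filtfilt

def remove_high_variance_components(band, k=2.0):
    # PCA on one sub-band (channels x samples); drop components whose variance
    # exceeds mean + k*std of all component variances (a simple adaptive threshold).
    mean = band.mean(axis=1, keepdims=True)
    centered = band - mean
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    scores = U.T @ centered                        # component time courses
    var = scores.var(axis=1)
    keep = var <= var.mean() + k * var.std()
    return U[:, keep] @ scores[keep] + mean

def subband_pca_clean(eeg, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    # Split each channel into sub-bands, clean each band, and sum the bands back.
    # Note: content outside 1-30 Hz is discarded in this simplified sketch.
    out = np.zeros_like(eeg)
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out += remove_high_variance_components(filtfilt(b, a, eeg, axis=1))
    return out

# Synthetic demo: 8 channels of noise with a high-amplitude artifact burst on channel 0.
rng = np.random.default_rng(0)
fs, n = 256, 256 * 10
eeg = rng.standard_normal((8, n))
eeg[0, 1000:1200] += 50.0
cleaned = subband_pca_clean(eeg, fs)
print("peak amplitude before/after:", float(np.abs(eeg).max()), float(np.abs(cleaned).max()))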
Abstract:
Intraneural ganglion cysts expand within a nerve, causing neurological deficits in afflicted patients. Modeling the propagation of these cysts, originating in the articular branch and then expanding radially outward, will help prove the articular theory and ultimately allow for more purposeful treatment of this condition. In finite element analysis, traditional Lagrangian meshing methods fail to model the excessive deformation that occurs in the propagation of these cysts. This report explores manual adaptive remeshing as a way to retain Lagrangian meshing while circumventing the severe mesh distortions that typically arise when a Lagrangian mesh undergoes large deformation. Manual adaptive remeshing is the process of remeshing a deformed meshed part and then reapplying loads in order to achieve a larger deformation than a single mesh can achieve without excessive distortion. The methods of manual adaptive remeshing described in this Master's Report are sufficient for modeling large deformations.
Abstract:
Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. Accuracy of the results largely relies on many uncertain factors such as the user's memory, food knowledge, and portion estimation, so the accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos around the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system with existing methods in order to review the literature, identify its drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases that facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regularly shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the marker and introducing IMUs. Sensor fusion techniques combining measurements from the cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
Abstract:
In the daily operation of a less-than-truckload freight terminal, the operations manager or dispatcher first decides at which doors the vehicles should dock for loading and unloading. In addition, a time window must be assigned to each tour during which it occupies its door. This spatial and temporal vehicle-to-door assignment determines the resources required for the internal transshipment process, in the form of travel distances or forklift hours. One objective of the planning task is therefore to assign the vehicles to the doors so that the internal travel distances are minimal; this leads to a minimal number of required handling resources. Beyond that, it can also be useful to dock the vehicles at the doors as early as possible. Each tour has an individual schedule specifying its arrival time at and departure time from the terminal. Only within this time window may the dispatcher assign the tour to one of the doors. If the assignment does not take place immediately upon arrival at the terminal, the vehicle has to wait in a parking area. Minimizing waiting times is desirable so that the terminal yard is not congested by too many vehicles at the same time. Above all, it can also make sense to process vehicles as early as possible with a view to reserving doors for time-critical tours. At the Chair of Transport Systems and Logistics (VSL) of the University of Dortmund, this decision situation was modeled within a research project funded by the Stiftung Industrieforschung as a time-discrete multicommodity flow problem with unsplittable-flow conditions. The two objectives were integrated into a single one-dimensional objective function. The resulting mixed integer linear program (MILP) was implemented and, for medium-sized scenarios, solved with the exact branch-and-cut method implemented in the optimization solver CPLEX. In parallel, within a cooperation between the VSL chair and the company hafa Docking Systems, one of the world's leading manufacturers of industrial doors and loading ramps, a heuristic scheduling procedure and a dispatching control station called LoadDock Navigation were developed for the same planning task. The control station serves the optimal control of door assignments in logistics facilities. It combines planning intelligence in the form of the heuristic scheduling procedure, technical innovations in ramp technology in the form of sensors, and the expert knowledge of the dispatcher in a single tool. The mathematical model and the prototype with the integrated heuristic are presented in this article.
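To make the planning task concrete, a deliberately simple greedy sketch follows: tours are processed in order of arrival and each is docked at the feasible door minimizing a weighted sum of internal travel distance and waiting time. The data and weights are invented, and this toy logic is neither the MILP nor the LoadDock Navigation heuristic described above.

# Toy greedy door assignment: each tour gets the door minimizing
# (distance weight * internal travel distance + waiting weight * waiting time),
# subject to the tour's arrival/departure window. Invented data and weights.
tours = [  # (name, arrival, departure, handling time, internal distance per door)
    ("T1", 0, 6, 3, {"door1": 10, "door2": 40}),
    ("T2", 1, 5, 2, {"door1": 15, "door2": 20}),
    ("T3", 2, 9, 3, {"door1": 50, "door2": 12}),
]
W_DIST, W_WAIT = 1.0, 5.0
door_free_at = {"door1": 0, "door2": 0}
plan = []

for name, arrival, departure, handling, dist in sorted(tours, key=lambda t: t[1]):
    best = None
    for door, free_at in door_free_at.items():
        start = max(arrival, free_at)          # dock when both the tour and the door are ready
        if start + handling > departure:       # handling must finish before the tour departs
            continue
        cost = W_DIST * dist[door] + W_WAIT * (start - arrival)
        if best is None or cost < best[0]:
            best = (cost, door, start)
    if best is None:
        plan.append((name, "unassigned", None))
        continue
    _, door, start = best
    door_free_at[door] = start + handling
    plan.append((name, door, start))

print(plan)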
Abstract:
This paper presents an empirical study of affine invariant feature detectors to perform matching on video sequences of people with non-rigid surface deformation. Recent advances in feature detection and wide baseline matching have focused on static scenes. Video frames of human movement capture highly non-rigid deformation such as loose hair, cloth creases, skin stretching and free flowing clothing. This study evaluates the performance of six widely used feature detectors for sparse temporal correspondence on single view and multiple view video sequences. Quantitative evaluation is performed of both the number of features detected and their temporal matching, with and without ground truth correspondence. Recall-accuracy analysis of feature matching is reported for temporal correspondence on single view and multiple view sequences of people with variation in clothing and movement. This analysis identifies that existing feature detection and matching algorithms are unreliable for fast movement with common clothing.
Abstract:
A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistent constraint (ICC) is implemented. An inverse consistent and symmetric cost function using mutual information (MI) as a similarity measure is employed. The cost function also includes regularization of the transformation and the inverse consistent error (ICE). The uncertainties in balancing the various terms in the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. The diffeomorphism of the registration, which prevents folding and/or tearing in the deformation, is achieved by a composition scheme. The quality of image registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60). It is shown that with this registration technique: (1) the Jacobian determinant is positive for all voxels and (2) the average ICE is around 0.004 voxels with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, has yielded a high Dice similarity index (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
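One of the quality checks reported above, that the Jacobian determinant of the deformation is positive at every voxel, can be computed directly from a dense displacement field. The sketch below does this on a synthetic 3-D field, assuming the field is stored as voxel displacements on a regular grid; it is not the registration algorithm itself.

import numpy as np

def jacobian_determinant(disp):
    # disp: displacement field of shape (3, Z, Y, X) in voxel units.
    # Returns the Jacobian determinant of the mapping x -> x + disp(x) for each voxel.
    grads = np.empty((3, 3) + disp.shape[1:])
    for i in range(3):                       # d(disp_i)/d(axis_j) by central differences
        for j in range(3):
            grads[i, j] = np.gradient(disp[i], axis=j)
    jac = grads.copy()
    for i in range(3):                       # J = I + grad(disp)
        jac[i, i] += 1.0
    # Move the 3x3 matrix axes to the end so np.linalg.det works voxel-wise.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# Synthetic smooth displacement field on a 32^3 grid.
z, y, x = np.meshgrid(*[np.linspace(0, np.pi, 32)] * 3, indexing="ij")
disp = 0.5 * np.stack([np.sin(z), np.sin(y), np.sin(x)])
det = jacobian_determinant(disp)
print("all voxels diffeomorphic:", bool((det > 0).all()), "min det:", float(det.min()))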
Abstract:
This paper presents the application of a variety of techniques to study jet substructure. The performance of various modified jet algorithms, or jet grooming techniques, for several jet types and event topologies is investigated for jets with transverse momentum larger than 300 GeV. Jets subjected to the mass-drop filtering, trimming, and pruning algorithms are found to have reduced sensitivity to multiple proton-proton interactions, to be more stable at high luminosity, and to improve the physics potential of searches for heavy boosted objects. Studies of the expected discrimination power of jet mass and jet substructure observables in searches for new physics are also presented. Event samples enriched in boosted W and Z bosons and top-quark pairs are used to study both the individual jet invariant mass scales and the efficacy of algorithms to tag boosted hadronic objects. The analyses presented use the full 2011 ATLAS dataset, corresponding to an integrated luminosity of 4.7 +/- 0.1 /fb from proton-proton collisions produced by the Large Hadron Collider at a center-of-mass energy of sqrt(s) = 7 TeV.
Abstract:
Passive positioning systems produce user location information for third-party providers of positioning services. Since the tracked wireless devices do not participate in the positioning process, passive positioning can only rely on simple, measurable radio signal parameters, such as timing or power information. In this work, we provide a passive tracking system for WiFi signals with an enhanced particle filter using fine-grained power-based ranging. Our proposed particle filter provides an improved likelihood function on observation parameters and is equipped with a modified coordinated turn model to address the challenges in a passive positioning system. The anchor nodes for WiFi signal sniffing and target positioning use software defined radio techniques to extract channel state information to mitigate multipath effects. By combining the enhanced particle filter and a set of enhanced ranging methods, our system can track mobile targets with an accuracy of 1.5m for 50% and 2.3m for 90% in a complex indoor environment. Our proposed particle filter significantly outperforms the typical bootstrap particle filter, extended Kalman filter and trilateration algorithms.
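As a point of reference for the bootstrap baseline mentioned above, a minimal bootstrap particle filter for range-based positioning on synthetic 2-D data is sketched below; the anchor layout, noise levels, and random-walk motion model are invented and do not reproduce the paper's enhanced likelihood or coordinated turn model.

import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # sniffing nodes
N_PARTICLES, RANGE_STD, MOTION_STD = 2000, 0.5, 0.2

def simulate_ranges(pos):
    # Noisy ranges from each anchor to the true position (stand-in for power-based ranging).
    d = np.linalg.norm(anchors - pos, axis=1)
    return d + rng.normal(0.0, RANGE_STD, size=d.shape)

# Bootstrap particle filter: predict with a random walk, weight by range likelihood, resample.
particles = rng.uniform(0.0, 10.0, size=(N_PARTICLES, 2))
true_pos = np.array([2.0, 3.0])
for step in range(30):
    true_pos = true_pos + np.array([0.2, 0.1])                       # target drifts across the room
    z = simulate_ranges(true_pos)

    particles += rng.normal(0.0, MOTION_STD, size=particles.shape)   # predict
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    log_w = -0.5 * np.sum((d - z) ** 2, axis=1) / RANGE_STD ** 2     # Gaussian range likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    estimate = (w[:, None] * particles).sum(axis=0)                  # weighted mean estimate
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)             # multinomial resampling
    particles = particles[idx]

print("final true position:", true_pos, "estimate:", np.round(estimate, 2))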
Abstract:
This paper presents a software prototype of a personal digital assistant 2.0. Based on soft computing methods and cognitive computing this mobile application prototype improves calendar and mobility management in cognitive cities. Applying fuzzy cognitive maps and evolutionary algorithms, the prototype represents a next step towards the realization of cognitive cities (i.e., smart cities enhanced with cognition). A user scenario and a test version of the prototype are included for didactical reasons.
Abstract:
Fractures of the pelvic ring are comparatively rare, with an incidence of 2-8 % of all fractures depending on the study in question. The severity of pelvic ring fractures varies widely, ranging from simple and mostly "harmless" type A fractures up to life-threatening complex type C fractures. Although it was previously postulated that high-energy trauma was necessary to induce a pelvic ring fracture, over the past decades it became more and more evident, not least from data in the pelvic trauma registry of the German Society for Trauma Surgery (DGU), that low-energy minor trauma can also cause pelvic ring fractures of osteoporotic bone, and in a rapidly increasing population of geriatric patients insufficiency fractures of the pelvic ring are nowadays observed with no preceding trauma. Even in large trauma centers the number of patients with pelvic ring fractures is mostly insufficient to perform valid and sufficiently powerful monocentric studies on epidemiological, diagnostic or therapeutic issues. For this reason, in 1991 the first and still the only registry worldwide for the documentation and evaluation of pelvic ring fractures was introduced by the Working Group Pelvis (AG Becken) of the DGU. Originally, the main objectives of the documentation were epidemiological and diagnostic issues; however, in the course of time it developed into an increasingly expanding dataset with comprehensive parameters on injury patterns, operative and conservative therapy regimens, and the short-term and long-term outcome of patients. Originally starting with 10 institutions, more than 30 hospitals in Germany and other European countries now participate in the documentation of data. In the third phase of the registry alone, which was started in 2004, data from approximately 15,000 patients with pelvic ring and acetabular fractures were documented. In addition to the scientific impact of the pelvic trauma registry, which is reflected in numerous national and international publications, the dramatically changing epidemiology of pelvic ring fractures, further developments in diagnostics and the changes in operative procedures over time could be demonstrated. Last but not least, the now well-established diagnostic and therapeutic algorithms for pelvic ring fractures, which could be derived from the information collated in registry studies, reflect the clinical impact of the registry.