923 results for Mixed integer programming model
Abstract:
A linear programming model is used to optimally assign highway segments to highway maintenance garages using existing facilities. The model is also used to determine the possible operational savings or losses associated with four alternatives for expanding, closing and/or relocating some of the garages in a study area. The study area contains 16 highway maintenance garages and 139 highway segments. The study recommends alternative No. 3 (close the Tama and Blairstown garages and relocate a new garage at Jct. U.S. 30 and Iowa 21) at an annual operational savings of approximately $16,250. These operational savings, however, are only guidelines for decision-makers and are subject to the assumptions of the model used and the limitations of the study.
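The assignment structure of such a model can be illustrated on a toy instance. This is a minimal sketch, assuming hypothetical garages, segments, costs and capacities (the real study has 16 garages and 139 segments and would be solved with an LP/MIP solver, not enumeration):

```python
from itertools import product

# Toy instance: 3 garages, 4 highway segments (all numbers hypothetical).
# cost[g][s] = annual cost of maintaining segment s from garage g.
cost = [
    [10, 14, 30, 22],   # garage A
    [18,  9, 12, 25],   # garage B
    [28, 26, 11,  8],   # garage C
]
capacity = [2, 2, 2]    # max segments each garage can serve

# Brute-force the integer program: choose a garage for every segment,
# respect capacities, minimize total cost.
best_cost, best_assign = None, None
for assign in product(range(3), repeat=4):
    if any(assign.count(g) > capacity[g] for g in range(3)):
        continue                                  # violates a garage capacity
    total = sum(cost[g][s] for s, g in enumerate(assign))
    if best_cost is None or total < best_cost:
        best_cost, best_assign = total, assign

print(best_cost, best_assign)   # minimum cost and segment -> garage indices
```

At realistic sizes this enumeration is infeasible; the study's model would instead be solved with the simplex method or a MIP solver.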
Abstract:
In this paper, we consider active sampling to label pixels grouped by hierarchical clustering. The objective of the method is to match the data relationships discovered by the clustering algorithm with the user's desired class semantics. The first is represented as a complete tree to be pruned, and the second is iteratively provided by the user. The proposed active learning algorithm searches for the pruning of the tree that best matches the labels of the sampled points. By choosing the part of the tree to sample from according to the current pruning's uncertainty, sampling is focused on the most uncertain clusters. This way, large clusters whose class membership is already fixed are no longer queried, and sampling is focused on dividing clusters showing mixed labels. The model is tested on a VHR image in a multiclass classification setting. The method clearly outperforms random sampling in a transductive setting, but cannot generalize to unseen data, since it aims at optimizing the classification of a given cluster structure.
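The "sample where the current pruning is uncertain" rule can be sketched with label-count entropy per cluster. The cluster names, class labels and counts below are hypothetical, not from the paper:

```python
import math

def entropy(label_counts):
    """Shannon entropy of the labels observed so far in a cluster (0 if pure)."""
    total = sum(label_counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total)
                for c in label_counts.values() if c > 0)

# Hypothetical clusters of pixels with the labels sampled so far.
clusters = {
    "c1": {"road": 9, "roof": 0},   # pure: class membership already fixed
    "c2": {"road": 3, "roof": 3},   # mixed labels: most uncertain
    "c3": {"road": 5, "roof": 1},
}

# Focus the next query on the most uncertain cluster; pure clusters
# (entropy 0) are effectively no longer queried.
next_cluster = max(clusters, key=lambda c: entropy(clusters[c]))
print(next_cluster)
```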
Abstract:
The convergence of data and telecommunications networks has brought new requirements for service creation environments and posed challenges for their development. Modern service creation environments must be able to produce complex yet reliable services quickly. In addition, multi-protocol service creation environments must adapt to new conditions so that service providers remain competitive. The purpose of this work was to find methods and tools for the fast and reliable creation of services offered in converging networks. The work surveyed the service creation environments available on the market and introduced the Intellitel OSN service creation environment and its service creation model, which supports service development throughout the entire service creation process. In the practical part of the work, Intellitel's service creation model and the tools and utilities provided by the service creation environment were improved. A number translation service was implemented step by step with Intellitel's service creation environment, following the service creation model.
Abstract:
Techniques for evaluating the risks arising from the uncertainties inherent in agricultural activity should accompany planning studies. Risk analysis should be carried out through risk simulation, using techniques such as the Monte Carlo method. This study was carried out to develop a computer program, called P-RISCO, for applying risk simulations to linear programming models, to apply it to a case study, and to test the results against the @RISK program. In the risk analysis it was observed that the mean of the output variable, total net present value U, was considerably lower than the maximum U value obtained from the linear programming model. It was also found that the enterprise faces a considerable risk of water shortage in the month of April, which does not occur for the cropping pattern obtained by minimizing the irrigation requirement in the months of April of the four years. The scenario analysis indicated that the sale price of the passion fruit crop has a strong influence on the financial performance of the enterprise. The comparative analysis verified the equivalence of the P-RISCO and @RISK programs in executing the risk simulation for the scenario considered.
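The Monte Carlo step can be sketched in a few lines: sample the uncertain input (here, only the crop sale price), re-evaluate the output variable, and summarize its distribution. The price distribution, yield and cost figures below are hypothetical, not the study's data:

```python
import random
import statistics

random.seed(42)

def net_present_value(price):
    """Toy NPV of a cropping plan as a function of the crop sale price."""
    total_yield = 120.0      # assumed marketable yield over the horizon
    fixed_costs = 50_000.0   # assumed discounted costs
    return price * total_yield - fixed_costs

# Monte Carlo simulation: sample the price from a triangular
# distribution; note the argument order random.triangular(low, high, mode).
samples = [net_present_value(random.triangular(400, 900, 550))
           for _ in range(10_000)]

mean_npv = statistics.mean(samples)
risk_of_loss = sum(v < 0 for v in samples) / len(samples)
```

The simulated mean sits below the NPV evaluated at the most favorable price, which is the same qualitative effect the study reports: the average U falls below the deterministic maximum from the linear programming model.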
Abstract:
The objective of the present study was to determine to what extent, if any, swimming training applied before immobilization in a cast interferes with the rehabilitation process in rat muscles. Female Wistar rats, mean weight 260.52 ± 16.26 g, were divided into 4 groups of 6 rats each: control, 6 weeks under baseline conditions; trained, swimming training for 6 weeks; trained-immobilized, swimming training for 6 weeks and then immobilization for 1 week; trained-immobilized-rehabilitated, swimming training for 6 weeks, immobilization for 1 week and then remobilization with swimming for 2 weeks. The animals were then sacrificed and the soleus and tibialis anterior muscles were dissected, frozen in liquid nitrogen and processed histochemically (H&E and mATPase). Data were analyzed statistically by the mixed effects linear model (P < 0.05). Cytoarchitectural changes such as degenerative characteristics in the immobilized group, and regenerative characteristics such as centralized nuclei, fiber size variation and cell fragmentation in the groups submitted to swimming, were more pronounced in the soleus muscle. The lesser diameters of soleus type 1 and type 2A fibers were significantly reduced in the trained-immobilized group compared to the trained group (P < 0.001). In the tibialis anterior, there was an increase in the number of type 2B fibers and a reduction in type 2A fibers when trained-immobilized rats were compared to trained rats (P < 0.001). In trained-immobilized-rehabilitated rats, there was a reduction in type 2B fibers and an increase in type 2A fibers compared to trained-immobilized rats (P < 0.009). We conclude that swimming training did not minimize the deleterious effects of immobilization on the muscles studied and that remobilization did not favor tissue re-adaptation.
Abstract:
Transportation plays a major role in the gross domestic product of various nations. There are, however, many obstacles hindering the transportation sector. Achieving cost-efficiency along with proper delivery times, high frequency and reliability is not a straightforward task. Furthermore, environmental friendliness has increased in importance across the whole transportation sector. This development will change roles inside the transportation sector. Even now, but especially in the future, decisions regarding the transportation sector will be partly based on emission levels and other externalities originating from transportation, in addition to pure transportation costs. Several factors could have an impact on the transportation sector. The IMO's sulphur regulation is estimated to increase the costs of short sea shipping in the Baltic Sea. The price development of energy could change the roles of different transport modes. Higher awareness of the environmental impacts originating from transportation could also have an impact on the price level of more polluting transport modes. According to earlier research, increased inland transportation, modal shift and slow steaming are possible results of these changes in the transportation sector. Possible changes in the transportation sector and ways to settle potential obstacles are studied in this dissertation. Furthermore, means to improve cost-efficiency and to decrease the environmental impacts originating from transportation are researched. A hypothetical Finnish dry port network and the Rail Baltica transport corridor are studied in this dissertation. Benefits and disadvantages are studied with different methodologies. These include gravity models, which were optimized with linear integer programming, discrete-event and system dynamics simulation, an interview study and a case study. The geographical focus is on the Baltic Sea Region, but the results can be adapted to other geographical locations with discretion.
Results indicate that the dry port concept has benefits, but optimization of the location and number of dry ports plays an important role. In addition, the use of dry ports for freight transportation should be carefully managed, since only a certain share of the total freight volume can be cost-efficiently transported through dry ports. If dry ports are created and located without proper planning, they could actually increase the transportation costs and delivery times of the whole transportation system. With an optimized dry port network, transportation costs in Finland can be lowered with three to five dry ports. Environmental impacts can be lowered with up to nine dry ports. If more dry ports are added to the system, the benefits become very minor, i.e. the payback time of the investments becomes extremely long. Furthermore, the dry port network could support major transport corridors such as Rail Baltica. Based on an analysis of statistics and an interview study, there could be enough freight volume available for Rail Baltica, especially if North-West Russia is part of the northern end of the corridor. Transit traffic to and from Russia (especially through the Baltic States) plays a large role. It could be possible to increase transit traffic through Finland by connecting the potential Finnish dry port network and the studied transport corridor. Additionally, the sulphur emission regulation is expected to increase the attractiveness of Rail Baltica in the year 2015. Part of the transit traffic could be rerouted along Rail Baltica instead of the Baltic Sea, since the price level of sea transport could increase due to the sulphur regulation. Both the hypothetical Finnish dry port network and the Rail Baltica transport corridor could benefit from each other. The dry port network could gain more market share from Russia, but also from Central Europe, which is the other end of Rail Baltica.
In addition, countries further east could also be connected to achieve a higher potential freight volume by rail.
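The gravity models mentioned above estimate freight attraction between a region and a candidate dry-port site from their "masses" and mutual distance. A minimal sketch follows; the site and region names are real places, but the masses and rounded distances are purely illustrative, not the dissertation's data:

```python
def gravity(mass_i, mass_j, distance_km, k=1.0):
    """Classical gravity model: attraction ~ product of masses / distance^2."""
    return k * mass_i * mass_j / distance_km ** 2

# Hypothetical freight 'masses' for regions, and rough road distances (km)
# from each region to two candidate dry-port sites.
regions = {"Helsinki": 10.0, "Tampere": 6.0, "Oulu": 3.0}
distances = {
    "Kouvola":   {"Helsinki": 130, "Tampere": 215, "Oulu": 480},
    "Jyväskylä": {"Helsinki": 270, "Tampere": 150, "Oulu": 340},
}
site_mass = 5.0   # assumed handling capacity of a candidate site

# Total attraction of each candidate site over all regions; a location
# optimization would feed such scores into an integer program.
attraction = {site: sum(gravity(mass, site_mass, dist[region])
                        for region, mass in regions.items())
              for site, dist in distances.items()}
best_site = max(attraction, key=attraction.get)
print(best_site)
```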
Abstract:
Hub Location Problems play vital economic roles in transportation and telecommunication networks where goods or people must be efficiently transferred from an origin to a destination point whilst direct origin-destination links are impractical. This work investigates the single allocation hub location problem, and proposes a genetic algorithm (GA) approach for it. The effectiveness of using a single-objective criterion measure for the problem is first explored. Next, a multi-objective GA employing various fitness evaluation strategies such as Pareto ranking, sum of ranks, and weighted sum strategies is presented. The effectiveness of the multi-objective GA is shown by comparison with an Integer Programming strategy, the only other multi-objective approach found in the literature for this problem. Lastly, two new crossover operators are proposed and an empirical study is done using small to large problem instances of the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets.
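Of the fitness evaluation strategies listed, Pareto ranking can be sketched directly: rank 1 is the non-dominated front, which is removed, and the process repeats on what remains. The objective vectors below are hypothetical (cost, service-time) pairs for four hub configurations, both objectives minimized:

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse on every objective, better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(points):
    """Goldberg-style ranking: rank 1 = non-dominated front, strip it, repeat."""
    remaining = dict(enumerate(points))
    ranks, rank = {}, 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(remaining[j], remaining[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
            del remaining[i]
        rank += 1
    return [ranks[i] for i in range(len(points))]

# Hypothetical (cost, service-time) vectors for four hub configurations.
print(pareto_ranks([(1, 5), (2, 2), (3, 3), (5, 1)]))   # -> [1, 1, 2, 1]
```

In a GA, these ranks replace raw objective values as fitness, so selection pressure pushes the population toward the Pareto front rather than toward a single weighted optimum.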
Abstract:
Previous studies on the determinants of the choice of college major have assumed a constant probability of success across majors or a constant earnings stream across majors. Our model relaxes these two restrictive assumptions in computing an expected earnings variable to explain the probability that a student will choose a specific major among four choices of concentration. The construction of an expected earnings variable requires information on the student's perceived probability of success, the predicted earnings of graduates in all majors, and the student's expected earnings if he or she fails to complete a college program. Using data from the National Longitudinal Survey of Youth, we first evaluate the chances of success in all majors for all the individuals in the sample. Second, the individuals' predicted earnings as graduates in all majors are obtained using Rumberger and Thomas's (1993) regression estimates from a 1987 Survey of Recent College Graduates. Third, we obtain idiosyncratic estimates of the alternative earnings of not attending college or of dropping out, using a condition derived from our college major decision-making model applied to our sample of college students. Finally, with a mixed multinomial logit model, we explain the individuals' choice of a major. The results of the paper show that the expected earnings variable is essential in the choice of a college major. There are, however, significant differences in the impact of expected earnings by gender and race.
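The final step, a logit model of major choice driven by expected earnings, can be sketched as follows. The mixed (random-coefficient) part is omitted, and every number (success probabilities, earnings, the coefficient beta) is hypothetical, not the paper's estimates:

```python
import math

def logit_probs(utilities):
    """Multinomial-logit choice probabilities from deterministic utilities."""
    m = max(utilities)                       # stabilize the exponentials
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical inputs for four majors (earnings in $10k):
# expected earnings = P(success) * earnings_if_graduate
#                   + (1 - P(success)) * earnings_if_dropout.
beta = 0.8                                   # assumed taste coefficient
p_success = [0.9, 0.7, 0.8, 0.6]
earn_grad = [4.0, 5.5, 4.5, 6.0]
earn_drop = 2.5
expected = [p * g + (1 - p) * earn_drop
            for p, g in zip(p_success, earn_grad)]

probs = logit_probs([beta * e for e in expected])
print([round(p, 3) for p in probs])
```

Note how the dropout alternative pulls down the expected earnings of majors with low success probabilities, which is exactly why assuming a constant probability of success would distort the choice probabilities.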
Abstract:
We present a new approach for formulating and computing the separation times of events used in the analysis and verification of various cyclic and acyclic systems under linear-min-max constraints, with components having finite and infinite delays. Our approach consists in formulating the problem as a mixed integer program and then using the Cplex solver to obtain the separation times between events. To demonstrate the practical usefulness of our approach, we applied it to the verification and analysis of an asynchronous Intel chip for computing differential equations. Compared with previous work, our approach is based on an exact formulation, and it makes it possible not only to compute the maximum separation, but also to find a cyclic schedule and to compute the separation times corresponding to the different possible periods of this schedule.
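In the purely linear (max-type) special case, the maximum separation t_b - t_a under difference constraints t_j - t_i <= w reduces to a shortest-path computation, which the paper's mixed integer formulation generalizes to min/max and cyclic systems. A minimal Bellman-Ford sketch with hypothetical event indices and delay bounds:

```python
import math

def max_separation(n, constraints, a, b):
    """Max of t_b - t_a subject to difference constraints t_j - t_i <= w.

    Equals the shortest-path distance from a to b in the constraint graph
    (edge i -> j with weight w); math.inf means the separation is unbounded.
    """
    dist = [math.inf] * n
    dist[a] = 0
    for _ in range(n - 1):                  # Bellman-Ford relaxation
        for i, j, w in constraints:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    return dist[b]

# Toy acyclic timing graph: the direct bound t2 - t0 <= 5 tightens
# the path bound t2 - t0 <= 3 + 4 through event 1.
cons = [(0, 1, 3), (1, 2, 4), (0, 2, 5)]
print(max_separation(3, cons, 0, 2))
```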
Abstract:
Network design problems have received particular interest and have been widely studied owing to their numerous applications in different fields, such as transportation and telecommunications. In this thesis, we address the network design problem with capacity-expansion costs. The goal is to install a set of facilities on a network in order to satisfy demand while respecting capacity constraints, where each arc can accommodate several facilities. The objective is to minimize the variable costs of transporting products and the fixed costs of installing facilities or expanding their capacity. The method we propose for solving this problem is based on techniques used in integer linear programming, notably column generation and cut generation. These methods are embedded in a general branch-and-bound algorithm based on the linear relaxation. We tested our method on four groups of instances of different sizes and compared it with CPLEX, one of the best solvers for optimization problems, as well as with a method from the literature that combines exact and heuristic methods. Our method outperformed both, especially on very large instances.
Abstract:
In this thesis, we address the minimum-cardinality connected dominating set problem. We focus, in particular, on developing solution methods based on constraint programming and integer programming. We present a heuristic and several exact methods that can be used as heuristics if their running time is limited. We describe, notably, an algorithm based on the Benders decomposition approach, another combining the latter with an iterative probing strategy, a variant of this combination using constraint programming, and finally a method using constraint programming alone. Experimental results show that these methods are effective, improving on the methods known in the literature. In particular, the Benders decomposition method with an iterative probing strategy yields the best results.
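For reference, the problem itself can be stated exactly by exhaustive search on tiny graphs; the Benders and constraint programming methods above exist precisely because this enumeration explodes combinatorially. A minimal sketch on a hypothetical 5-node path graph:

```python
from itertools import combinations

def is_connected(graph, nodes):
    """DFS restricted to `nodes`: True if they induce a connected subgraph."""
    nodes = set(nodes)
    if not nodes:
        return False
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in graph[v] if u in nodes)
    return seen == nodes

def min_connected_dominating_set(graph):
    """Smallest connected set whose closed neighborhood covers every vertex."""
    vertices = list(graph)
    for size in range(1, len(vertices) + 1):   # smallest cardinality first
        for cand in combinations(vertices, size):
            dominated = set(cand) | {u for v in cand for u in graph[v]}
            if dominated == set(vertices) and is_connected(graph, cand):
                return set(cand)

# Hypothetical path graph 0-1-2-3-4 (adjacency lists).
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(min_connected_dominating_set(g))   # -> {1, 2, 3}
```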
Abstract:
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance.
If separate controllers for processing protocol requests are included, the protocol scales to 32k processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be able to be satisfied locally; at most 35% of the global references are allowed to reach the top level of the hierarchy.
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
Abstract:
Exam questions and solutions in PDF
Abstract:
Exam questions and solutions in LaTeX