11 results for Distributed Generator, Network Loss, Primal-Dual Interior Point Algorithm, Siting and Sizing

at Massachusetts Institute of Technology


Relevance: 100.00%

Abstract:

We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and thereby the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the preconditioning method.
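
For illustration, here is a minimal sketch of this style of block preconditioning, assuming the standard 2x2 augmented (KKT) system of an interior-point step and one common choice of preconditioner, a block-diagonal operator built from the (1,1) block and its Schur complement. The matrices, sizes, and data are hypothetical placeholders, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres, spsolve

# Hypothetical small augmented (KKT) system from an interior-point step:
#   [ H   A^T ] [dx]   [r1]
#   [ A    0  ] [dy] = [r2]
rng = np.random.default_rng(0)
n, m = 50, 20
A = sp.random(m, n, density=0.3, random_state=0, format="csc")
H = sp.diags(rng.uniform(0.1, 10.0, n))        # barrier-scaled diagonal block
K = sp.bmat([[H, A.T], [A, None]], format="csc")
rhs = rng.standard_normal(n + m)

# Block preconditioner sharing K's structure: diag(H, A H^{-1} A^T).
Hinv = sp.diags(1.0 / H.diagonal())
S = (A @ Hinv @ A.T).tocsc()                   # Schur-complement block

def apply_prec(v):
    # Apply the preconditioner blockwise; direct solve on the small Schur block.
    out = np.empty_like(v)
    out[:n] = Hinv @ v[:n]
    out[n:] = spsolve(S, v[n:])
    return out

P = LinearOperator((n + m, n + m), matvec=apply_prec)
sol, info = minres(K, rhs, M=P)                # MINRES suits symmetric indefinite K
print("converged:", info == 0, "residual:", np.linalg.norm(K @ sol - rhs))
```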

Relevance: 100.00%

Abstract:

Most logistics network design models assume exogenous customer demand that is independent of the service time or level. This paper examines the benefits of segmenting demand according to customers' lead-time sensitivity. To capture lead-time sensitivity in the network design model, we use a facility grouping method to ensure that the different demand classes are satisfied on time. In addition, we perform a series of computational experiments to develop a set of managerial insights for the network design decision-making process.
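
As a toy illustration of grouping facilities by lead time, the sketch below assigns each demand class only to facilities whose lead time meets the class deadline. The data and the greedy cheapest-first rule are hypothetical; the paper's actual model is an optimization formulation, not a greedy heuristic.

```python
# Hypothetical data: facility lead times (days), per-unit costs, and capacities.
facilities = {"F1": {"lead": 1, "cost": 9.0, "cap": 100},
              "F2": {"lead": 3, "cost": 6.0, "cap": 150},
              "F3": {"lead": 7, "cost": 4.0, "cap": 200}}
# Demand classes segmented by maximum acceptable lead time.
classes = [("express", 1, 60), ("standard", 3, 120), ("economy", 7, 90)]

def serve(classes, facilities):
    """Greedy assignment: each class may only use the facility group whose
    lead time meets its deadline; cheapest eligible facility first."""
    remaining = {f: d["cap"] for f, d in facilities.items()}
    plan, total = [], 0.0
    for name, max_lead, demand in classes:
        eligible = sorted((f for f, d in facilities.items() if d["lead"] <= max_lead),
                          key=lambda f: facilities[f]["cost"])
        for f in eligible:
            if demand == 0:
                break
            qty = min(demand, remaining[f])
            if qty:
                plan.append((name, f, qty))
                remaining[f] -= qty
                demand -= qty
                total += qty * facilities[f]["cost"]
        if demand:
            raise ValueError(f"class {name}: {demand} units cannot be served on time")
    return plan, total

plan, cost = serve(classes, facilities)
print(plan, cost)
```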

Relevance: 100.00%

Abstract:

The release of growth factors from tissue engineering scaffolds provides signals that influence the migration, differentiation, and proliferation of cells. The incorporation of a drug delivery platform that is capable of tunable release will give tissue engineers greater versatility in the direction of tissue regeneration. We have prepared a novel composite of two biomaterials with proven track records, apatite and poly(lactic-co-glycolic acid) (PLGA), as a drug delivery platform with promising controlled release properties. These composites have been tested in the delivery of a model protein, bovine serum albumin (BSA), as well as therapeutic proteins, recombinant human bone morphogenetic protein-2 (rhBMP-2) and rhBMP-6. The controlled release strategy is based on the use of a polymer with acidic degradation products to control the dissolution of the basic apatitic component, resulting in protein release. Therefore, any parameter that affects either polymer degradation or apatite dissolution can be used to control protein release. We have modified the protein release profile systematically by varying the polymer molecular weight, polymer hydrophobicity, apatite loading, apatite particle size, and other material and processing parameters. Biologically active rhBMP-2 was released from these composite microparticles over 100 days, in contrast to conventional collagen sponge carriers, which were depleted in approximately 2 weeks. The released rhBMP-2 was able to induce elevated alkaline phosphatase and osteocalcin expression in pluripotent murine embryonic fibroblasts. To augment tissue engineering scaffolds with tunable and sustained protein release capabilities, these composite microparticles can be dispersed in the scaffolds in different combinations to obtain a superposition of the release profiles. We have loaded rhBMP-2 into composite microparticles with a fast release profile, and rhBMP-6 into slow-releasing composite microparticles. An equi-mixture of these two sets of composite particles was then injected into a collagen sponge, allowing for dual release of the proteins from the collagenous scaffold. The ability of these BMP-loaded scaffolds to induce osteoblastic differentiation in vitro and ectopic bone formation in a rat model is being investigated. We anticipate that these apatite-polymer composite microparticles can be extended to the delivery of other signalling molecules, and can be incorporated into other types of tissue engineering scaffolds.
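
The superposition idea can be illustrated numerically. The first-order kinetics and half-lives below are purely hypothetical stand-ins for the measured profiles, which in the actual system are governed by polymer degradation and apatite dissolution.

```python
import numpy as np

# Hypothetical first-order release model: fraction released by time t.
def fraction_released(t_days, half_life_days):
    k = np.log(2.0) / half_life_days
    return 1.0 - np.exp(-k * t_days)

t = np.linspace(0, 100, 201)
fast = fraction_released(t, half_life_days=7.0)    # e.g. the fast rhBMP-2 batch
slow = fraction_released(t, half_life_days=45.0)   # e.g. the slow rhBMP-6 batch

# Equi-mixture of the two particle sets: total release is the average profile.
combined = 0.5 * fast + 0.5 * slow
print(f"day 14: {combined[t.searchsorted(14)]:.2f} of total protein released")
```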

Relevance: 100.00%

Abstract:

There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm, two measures of the HSD model's behavior are precisely controlled independently of the problem instance: (i) the sizes of ε-optimal solutions, and (ii) the maximum distance of ε-optimal solutions to the boundary of the cone of the HSD variables. This norm is also useful in developing a stopping-rule theory for HSD-based interior-point solvers such as SeDuMi. Under mild assumptions, we show that a standard stopping rule implicitly involves the sum of the sizes of the ε-optimal primal and dual solutions, as well as the size of the initial primal and dual infeasibility residuals. This theory suggests possible criteria for developing starting points for the homogeneous self-dual model that might improve the resulting solution time in practice.
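
As a rough sketch of such a stopping rule (the exact scaling used by SeDuMi may differ), a typical implementation stops when the relative primal infeasibility, dual infeasibility, and duality gap all fall below a tolerance:

```python
import numpy as np

def should_stop(A, b, c, x, y, s, eps=1e-8):
    """A standard relative stopping rule for LP in the form
    min c'x s.t. Ax = b, x >= 0, with dual variables (y, s)."""
    rp = np.linalg.norm(A @ x - b) / (1.0 + np.linalg.norm(b))       # primal infeasibility
    rd = np.linalg.norm(A.T @ y + s - c) / (1.0 + np.linalg.norm(c)) # dual infeasibility
    gap = abs(c @ x - b @ y) / (1.0 + abs(b @ y))                    # relative duality gap
    return max(rp, rd, gap) <= eps

# Toy usage with made-up data: feasible x, y = 0, s = c gives zero residuals
# but a nonzero gap, so the rule (correctly) does not yet stop.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
x = np.abs(rng.standard_normal(5))
b, c = A @ x, rng.standard_normal(5)
print(should_stop(A, b, c, x, np.zeros(3), c.copy()))
```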

Relevance: 100.00%

Abstract:

We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iteration counts when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and to the norms of the optimal solutions, (ii) the (Renegar) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the problem instances in the SDPLIB suite and, for those instances where the measures take finite values, compute the correlation between them and the IPM iteration counts obtained with the software SDPT3. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896) and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
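
The correlation computation itself is straightforward; the numbers below are hypothetical placeholders standing in for the SDPLIB measurements, not the paper's data.

```python
import numpy as np

# Hypothetical values: log of an aggregate geometry measure for a few
# instances, and the IPM iteration counts observed when solving them.
geometry = np.array([1.2, 2.5, 0.8, 3.9, 2.1, 4.4, 1.7])
iterations = np.array([14, 22, 12, 31, 19, 35, 17])

corr = np.corrcoef(geometry, iterations)[0, 1]   # sample Pearson correlation
print(f"CORR = {corr:.3f}")
```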

Relevance: 100.00%

Abstract:

Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution are distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest-path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
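
The Bellman-Ford connection rests on a classical fact: a simple temporal network is consistent exactly when its distance graph has no negative cycle. The sketch below runs that check centrally on a hypothetical two-activity example; DTP distributes the same relaxation across agents.

```python
# A simple temporal network (STN) is consistent iff its distance graph has
# no negative cycle, which Bellman-Ford relaxation detects.
def stn_consistent(n, edges):
    """edges: (u, v, w) means time(v) - time(u) <= w."""
    dist = [0.0] * n              # implicit source at distance 0 to every node
    for _ in range(n):            # n passes suffice with the implicit source
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            break
    # Any further possible relaxation implies a negative cycle (inconsistency).
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# Hypothetical example: activity A lasts 5..10, B starts 2..4 after A ends,
# and B must start within 12 time units of A's start.
edges = [(0, 1, 10), (1, 0, -5), (1, 2, 4), (2, 1, -2), (0, 2, 12), (2, 0, 0)]
print(stn_consistent(3, edges))   # True: the constraints are satisfiable
```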

Relevance: 100.00%

Abstract:

The Jellybean Machine is a scalable MIMD concurrent processor consisting of special-purpose RISC processors loosely coupled through a low-latency network. I have developed an operating system to provide the supportive environment required to efficiently coordinate the collective power of the distributed processing elements. The system services are developed in detail, and may be of interest to other designers of fine-grain, distributed-memory processing networks.

Relevance: 100.00%

Abstract:

The Saliency Network proposed by Shashua and Ullman is a well-known approach to the problem of extracting salient curves from images while performing gap completion. This paper analyzes the Saliency Network. The Saliency Network is attractive for several reasons. First, the network generally prefers long and smooth curves over short or wiggly ones. While computing saliencies, the network also fills in gaps with smooth completions and tolerates noise. Finally, the network is locally connected, and its size is proportional to the size of the image. Nevertheless, our analysis reveals certain weaknesses of the method. In particular, we show cases in which the most salient element does not lie on the perceptually most salient curve. Furthermore, in some cases the saliency measure changes its preferences when curves are scaled uniformly. Also, we show that for certain fragmented curves the measure prefers large gaps over a few small gaps of the same total size. In addition, we analyze the time complexity required by the method. We show that the number of steps required for convergence in serial implementations is quadratic in the size of the network, and in parallel implementations is linear in the size of the network. We discuss problems due to coarse sampling of the range of possible orientations. We show that with proper sampling the complexity of the network becomes cubic in the size of the network. Finally, we consider the possibility of using the Saliency Network for grouping. We show that the Saliency Network recovers the most salient curve efficiently, but it has problems with identifying any salient curve other than the most salient one.
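
For readers unfamiliar with the network, the core computation is a local iterative update of roughly the form E_i ← σ_i + ρ_i · max_j f_ij E_j, where σ marks actual edge elements, ρ attenuates saliency across gaps, and f_ij penalizes curvature. The one-dimensional sketch below (with coupling f ≡ 1 and made-up attenuation values) is a heavily simplified illustration, not the paper's 2-D orientation-indexed network.

```python
# Much-simplified 1-D saliency iteration on a chain of elements: each element
# accumulates its own response plus the attenuated saliency of its successor.
def saliency_chain(sigma, rho, iters):
    E = list(sigma)
    for _ in range(iters):
        nxt = E[1:] + [0.0]                      # saliency arriving from successor
        E = [s + r * e for s, r, e in zip(sigma, rho, nxt)]
    return E

sigma = [1, 1, 0, 0, 1, 1, 1]                    # a curve with a 2-element gap
rho   = [0.9 if s else 0.5 for s in sigma]       # gaps attenuate more strongly
print(saliency_chain(sigma, rho, iters=len(sigma)))
```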

Relevance: 100.00%

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and the normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments that test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
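
One common way to combine two such constraint sets is to stack their linearized residuals in a single weighted least-squares step. The sketch below uses random placeholder Jacobians and hypothetical weights rather than quantities derived from real stereo data, which is where an actual tracker would obtain them.

```python
import numpy as np

# One Gauss-Newton step of a hybrid tracker: stack ICP residuals and
# normal-flow residuals and solve jointly for a 6-DOF motion update
# xi = (tx, ty, tz, wx, wy, wz).
def hybrid_step(J_icp, r_icp, J_flow, r_flow, w_icp=1.0, w_flow=1.0):
    J = np.vstack([w_icp * J_icp, w_flow * J_flow])
    r = np.concatenate([w_icp * r_icp, w_flow * r_flow])
    xi, *_ = np.linalg.lstsq(J, -r, rcond=None)   # minimize ||J xi + r||^2
    return xi

# Placeholder Jacobians/residuals; a real tracker derives them from the
# depth map (ICP) and the brightness constancy equation (normal flow).
rng = np.random.default_rng(1)
xi = hybrid_step(rng.standard_normal((120, 6)), rng.standard_normal(120),
                 rng.standard_normal((200, 6)), rng.standard_normal(200))
print(xi)
```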

Relevance: 100.00%

Abstract:

This paper presents a DHT-based grid resource indexing and discovery (DGRID) approach. With DGRID, resource-information data is stored within its own administrative domain, and each domain, represented by an index server, is virtualized into several nodes (virtual servers) according to the number of resource types it owns. All nodes are then arranged in a structured overlay network, or distributed hash table (DHT). Compared with existing grid resource indexing and discovery schemes, DGRID improves the security of domains, increases the availability of data, and eliminates stale data.
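
A minimal sketch of the virtualization idea follows, with a hypothetical identifier scheme; the paper's actual ID construction and routing may differ.

```python
import hashlib

# Each administrative domain joins the DHT once per resource type it owns,
# so its resource-information data never leaves the domain.
def node_id(domain, resource_type, bits=32):
    key = f"{domain}/{resource_type}".encode()
    return int.from_bytes(hashlib.sha1(key).digest(), "big") >> (160 - bits)

# Hypothetical domains and their resource types.
domains = {"hpc.mit.edu": ["cpu", "gpu", "storage"], "grid.nus.edu": ["cpu"]}
overlay = {node_id(d, t): (d, t) for d, ts in domains.items() for t in ts}

def lookup(resource_type):
    # A query for a resource type reaches the matching virtual servers.
    return [v for v in overlay.values() if v[1] == resource_type]

print(lookup("cpu"))   # index servers advertising CPU resources
```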

Relevance: 100.00%

Abstract:

This article studies the static pricing problem of a network service provider that has a fixed capacity and faces different types of customers (classes). Each customer type may have its own capacity constraint, but all classes are assumed to have the same resource requirement. The provider must set a static price for each class. The customer types are characterized by their arrival process, with a price-dependent arrival rate, and by the random time they remain in the system. Many real-life situations fit this framework, for example an Internet provider or a call center, but the problem was originally motivated by a company that sells phone cards and needs to set the price per minute for each destination. Our goal is to characterize the optimal static prices that maximize the provider's revenue. We note that the model presented here, with slight modifications and additional assumptions, can be used when the objective is to maximize social welfare.
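
As a concrete single-class illustration, under assumptions not made in the paper (a linear demand curve and a fixed mean sojourn time τ), the revenue-maximizing static price can be found by a simple search over capacity-feasible prices:

```python
import numpy as np

# Hypothetical single-class model: arrival rate lambda(p) = a - b*p, mean
# sojourn time tau, so offered load is lambda(p)*tau (kept below capacity C)
# and the long-run revenue rate at per-minute price p is p*lambda(p)*tau.
a, b, tau, C = 100.0, 8.0, 3.0, 150.0

prices = np.linspace(0.0, a / b, 1000)
rates = np.maximum(a - b * prices, 0.0)
feasible = rates * tau <= C                    # respect the capacity constraint
revenue = np.where(feasible, prices * rates * tau, -np.inf)
p_star = prices[np.argmax(revenue)]
print(f"optimal static price: {p_star:.2f} per minute")
```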