5 results for Policy Design, Analysis, and Evaluation
in DRUM (Digital Repository at the University of Maryland)
Abstract:
Gemstone Team SnowMelt
Abstract:
In this work we introduce a new mathematical tool for optimizing routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of the vector field at every location, and the magnitude of the field at each location represents the density of data transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations closely analogous to Maxwell's equations in electrostatics. We show that, to minimize the cost, routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing: we raise the permittivity coefficient where nodes have high residual energy and lower it where nodes have little energy left. Our simulations show that this method significantly increases network lifetime compared to shortest-path and weighted shortest-path schemes. Our initial focus is the case of a single destination in the network; we later extend the approach to multiple destinations.
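The electrostatics analogy described above can be sketched numerically: solving a variable-permittivity Poisson equation on a grid, with a sensor as a positive charge and a destination as a negative one, yields a potential whose (permittivity-weighted) negative gradient gives the routing direction. This is an illustrative sketch only; the grid size, charge placement, and discretization are assumptions, not the dissertation's implementation.

```python
import numpy as np

# Sketch: solve div(eps * grad(phi)) = -rho by Gauss-Seidel relaxation on a
# grid with zero-potential boundaries, then route along D = -eps * grad(phi).
N = 21                       # grid size (assumed)
rho = np.zeros((N, N))
rho[5, 5] = 1.0              # sensor: source of information (+ charge)
rho[15, 15] = -1.0           # destination: sink of information (- charge)

# Permittivity field; the energy-aware scheme would raise it where nodes
# have high residual energy (uniform here for simplicity).
eps = np.ones((N, N))

phi = np.zeros((N, N))
for _ in range(2000):        # fixed number of relaxation sweeps (assumed)
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # face permittivities (one common variable-coefficient stencil)
            e = 0.5 * (eps[i, j] + eps[i + 1, j])
            w = 0.5 * (eps[i, j] + eps[i - 1, j])
            n = 0.5 * (eps[i, j] + eps[i, j + 1])
            s = 0.5 * (eps[i, j] + eps[i, j - 1])
            phi[i, j] = (e * phi[i + 1, j] + w * phi[i - 1, j]
                         + n * phi[i, j + 1] + s * phi[i, j - 1]
                         + rho[i, j]) / (e + w + n + s)

# Routing direction at any interior point follows D = -eps * grad(phi).
g0, g1 = np.gradient(phi)
D0, D1 = -eps * g0, -eps * g1
```

As in electrostatics, the potential is high near the source and low near the sink, so following the field carries data from sensors toward the destination.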
With multiple destinations, we must partition the network into areas known as the destinations' regions of attraction; each destination is responsible for collecting all messages generated in its region of attraction. The optimization problem in this case is to define the regions of attraction and to decide how much communication load to assign to each destination so as to optimize network performance. We use our vector field model to solve this problem. We define a vector field that is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the network's communication load to the destinations, the potential field takes equal values at the locations of all destinations. Another application of the vector field model is finding the optimal locations of the destinations: we show that the vector field gives the gradient of the cost function with respect to the destination locations, and based on this fact we suggest an algorithm, applied during the network design phase, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates, defined as the degree to which an aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values.
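The equal-potential optimality condition above can be written compactly in standard notation (the symbols here are assumed for illustration, not necessarily the dissertation's):

```latex
% Conservative information-flow field written via a scalar potential:
D(x) = -\nabla \varphi(x)
% Optimal load assignment across destinations d_1, \dots, d_m:
\varphi(d_1) = \varphi(d_2) = \cdots = \varphi(d_m)
```

Intuitively, if two destinations sat at different potentials, shifting load toward the lower-potential one would reduce the quadratic cost, so equality must hold at the optimum.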
The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease. This kind of test, by itself, is not robust to multiple tests performed simultaneously at different routers. We make it robust by using ideas from the CDMA approach to multiple-access channels in communication theory; based on this approach, we introduce responsiveness tests for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion-control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to estimate the proportion of a TCP traffic aggregate that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the aggregate's response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to choose the signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods.
As a result of this orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
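The CDMA idea behind CAPM can be sketched in a few lines: give each router an orthogonal signature (Walsh-Hadamard rows here, an assumption), let both perturb the same aggregate at once, and recover each router's responsiveness by correlation. The response magnitudes and noise level below are hypothetical, purely for illustration.

```python
import numpy as np

def hadamard(n):
    """Walsh-Hadamard matrix of order n (n a power of two); rows are
    mutually orthogonal signatures."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

np.random.seed(0)                   # deterministic noise for the sketch
H = hadamard(8)
sig_a, sig_b = H[1], H[2]           # orthogonal signatures for two routers

# Hypothetical aggregate: responsiveness 0.7 to router A's drop pattern and
# 0.3 to router B's, plus small measurement noise.
rate_change = 0.7 * sig_a + 0.3 * sig_b + 0.01 * np.random.randn(8)

# Each router correlates the observed rate change with its own signature;
# orthogonality cancels the other router's contribution.
resp_a = rate_change @ sig_a / len(sig_a)
resp_b = rate_change @ sig_b / len(sig_b)
```

This is why simultaneous tests at different routers do not interfere: the cross-terms vanish under correlation, just as in CDMA multiple access.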
Resumo:
This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present several novel results relating to its spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues; from this, we prove a similar result when only an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames, first showing an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, with an analysis of the resulting solution types: linear objectives encourage sparse scalings, while barrier objective functions force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict attention to particular frame classes to obtain more specific results. Using frames generated from distributions allows the placement of probabilistic bounds on scalability: for discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With this theory developed, we discuss graph applications of the scalability framework: we make a connection with graph conditioning and show the infeasibility of the problem in the general case; after a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès and give some background on Electron Energy-Loss Spectroscopy (EELS).
We design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning; we derive the problem, present an algorithm for its solution, and discuss the differences from RPCA that make theoretical guarantees difficult.
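For readers unfamiliar with scalability, the standard formulation (an assumption here, not the dissertation's algorithms) is: a frame {f_i} in R^d is scalable when nonnegative weights c_i exist with sum_i c_i f_i f_i^T = I, i.e. the rescaled vectors sqrt(c_i) f_i form a Parseval frame. A minimal sketch, finding candidate weights by least squares over the vectorized outer products:

```python
import numpy as np

def scaling_weights(F):
    """F: d x n matrix whose columns are the frame vectors.
    Returns the minimum-norm least-squares solution of
    sum_i c_i * vec(f_i f_i^T) = vec(I); nonnegativity of the
    weights must then be checked separately."""
    d, n = F.shape
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel()
                         for i in range(n)])
    c, *_ = np.linalg.lstsq(A, np.eye(d).ravel(), rcond=None)
    return c

# Example: a union of two orthonormal bases of R^2 is scalable with all
# weights equal to 1/2.
F = np.column_stack([np.eye(2),
                     np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)])
c = scaling_weights(F)
S = sum(ci * np.outer(f, f) for ci, f in zip(c, F.T))  # scaled frame operator
```

The dissertation's linear and barrier objectives correspond to choosing among the (generally non-unique) feasible weight vectors, trading off sparse versus dense scalings.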
Resumo:
This dissertation presents work on the design, modeling, and fabrication of magnetically actuated microrobot legs. Novel fabrication processes for multi-material compliant mechanisms have been used to build effective legged robots at both the meso and micro scales, where the meso scale refers to the transition between the macro and micro scales. This work discusses the development of a novel mesoscale manufacturing process, Laser Cut Elastomer Refill (LaCER), for prototyping millimeter-scale multi-material compliant mechanisms with elastomer hinges. It also extends previous work on a microscale manufacturing process for fabricating micrometer-scale multi-material compliant mechanisms with elastomer hinges, adding a method for incorporating magnetic materials so that mechanisms can be actuated by externally applied fields. Because both fabrication processes rely heavily on highly compliant elastomer hinges, a fast, accurate modeling method for these hinges was needed for mechanism characterization and design. An analytical model was developed for this purpose, using the pseudo-rigid-body (PRB) model and extending its utility to hinges with a significant stretch component, such as those fabricated from elastomers. This model comprises three springs whose stiffnesses relate to material stiffness and hinge geometry, with additional correction factors for aspects particular to common multi-material hinge geometries. The model has been verified against a finite element analysis (FEA) model, which in turn was matched to experimental data on mesoscale hinges manufactured using LaCER; the modeling methods have additionally been verified against experimental data from microscale hinges manufactured using the Si/elastomer/magnetics MEMS process.
The development of several mechanisms is also discussed, including: a mesoscale LaCER-fabricated hexapedal millirobot capable of walking at 2.4 body lengths per second; prototyped mesoscale LaCER-fabricated underactuated legs with asymmetric features for improved performance; one-cubic-centimeter LaCER-fabricated magnetically actuated hexapods that use the best-performing underactuated leg design to locomote at up to 10.6 body lengths per second; five microfabricated magnetically actuated single-hinge mechanisms; a 14-hinge, 11-link microfabricated gripper mechanism; a microfabricated robot leg mechanism demonstrated clearing a step height of 100 micrometers; and a 4 mm x 4 mm x 5 mm, 25 mg microfabricated magnetically actuated hexapod demonstrated walking at up to 2.25 body lengths per second.
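The three-spring hinge model mentioned above can be illustrated with textbook small-length flexural-pivot formulas: a torsional spring for bending plus axial and shear springs for the stretch that matters in soft elastomer hinges. These baseline formulas and all dimensions/material values below are assumptions for illustration; the dissertation's model adds correction factors specific to multi-material hinge geometry that are not reproduced here.

```python
# Hypothetical elastomer hinge (all values assumed, not from the thesis):
E = 1.0e6        # Young's modulus, Pa (order of a soft silicone)
nu = 0.49        # Poisson's ratio (near-incompressible elastomer)
L = 200e-6       # hinge length, m
w = 500e-6       # hinge width, m
t = 50e-6        # hinge thickness, m

G = E / (2 * (1 + nu))   # shear modulus from isotropic elasticity
A = w * t                # cross-sectional area
I = w * t**3 / 12        # second moment of area about the bending axis

# Baseline (uncorrected) spring constants of the 3-spring PRB hinge:
k_bend = E * I / L       # torsional spring, N*m/rad
k_axial = E * A / L      # stretch (axial) spring, N/m
k_shear = G * A / L      # shear spring, N/m
```

Because G < E, the shear spring is always softer than the axial one for a given geometry, which is one reason stretch and shear cannot be neglected in highly compliant elastomer hinges.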
Resumo:
Despite the extensive implementation of Superstreets on congested arterials, reliable methodologies for such designs remain unavailable. The purpose of this research is to fill that gap by offering reliable tools to assist traffic professionals in designing Superstreets with and without signal control. The toolset developed in this thesis consists of three models. The first determines the minimum U-turn offset length for an unsignalized Superstreet, given the headway distribution of the arterial traffic flows and the distribution of critical gaps among drivers. The second estimates the queue size and its variation on each critical link of a signalized Superstreet, based on the given signal plan and the range of observed volumes. Recognizing that a Superstreet's operational performance cannot be achieved without an effective signal plan, the third model provides a signal optimization method that generates progression offsets for heavy arterial flows moving into and out of such an intersection design.
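The gap-acceptance ingredient of the first model can be sketched with classical results (an illustration under assumed inputs, not the thesis's offset model): with Poisson arterial traffic, headways are exponential, so the probability that a headway admits a U-turn and the mean wait for an acceptable gap follow in closed form, and that wait drives the queue the offset must store.

```python
import math

def p_gap_accepted(q, t_c):
    """Probability a random exponential headway is at least the
    critical gap t_c, for flow rate q (veh/s)."""
    return math.exp(-q * t_c)

def mean_wait_for_gap(q, t_c):
    """Mean wait (s) until an acceptable gap appears in a Poisson stream
    (classical gap-acceptance result: (e^{q t_c} - q t_c - 1) / q)."""
    return (math.exp(q * t_c) - q * t_c - 1) / q

q = 900 / 3600        # arterial flow: 900 veh/h, in veh/s (assumed)
t_c = 6.0             # critical gap, s (assumed)
q_u = 200 / 3600      # U-turn demand, veh/s (assumed)

# Crude storage estimate (hypothetical sizing rule, for illustration only):
# vehicles arriving during the mean wait must fit within the offset.
queue = q_u * mean_wait_for_gap(q, t_c)
offset_m = queue * 7.5   # 7.5 m of storage per vehicle (assumed)
```

The thesis's model additionally accounts for the full headway and critical-gap distributions rather than these single-valued assumptions.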