50 results for Robotics Education, Distributed Control, Autonomous Robots, Programming, Computer Architecture


Relevance:

40.00%

Publisher:

Abstract:

A new computational tool is presented in this paper for suboptimal control design of a class of nonlinear distributed parameter systems. First, proper orthogonal decomposition (POD) based, problem-oriented basis functions are designed, which are then used in a Galerkin projection to arrive at a low-order lumped parameter approximation. Next, a suboptimal controller is designed using the emerging θ-D technique for lumped parameter systems. This time-domain suboptimal control solution is then mapped back to the distributed domain using the same basis functions, which essentially leads to a closed-form solution for the controller in state-feedback form. Numerical results for a real-life nonlinear temperature control problem indicate that the proposed method holds promise as a good suboptimal control design technique for distributed parameter systems.
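
A minimal sketch of the POD/Galerkin reduction step described above, assuming a snapshot matrix of sampled temperature fields is already available. The snapshot data, state dimension, and system matrices below are hypothetical placeholders, and the θ-D controller synthesis itself is not shown:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a sampled temperature field
# (n spatial grid points x m time samples).
n, m = 200, 60
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n, m))

# POD basis: left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                      # keep the r most energetic modes
Phi = U[:, :r]             # problem-oriented basis functions (n x r)

# Galerkin projection of a semi-discretized PDE  x_dot = A x + B u
# onto the POD basis: a = Phi^T x gives the low-order lumped model
#   a_dot = (Phi^T A Phi) a + (Phi^T B) u.
A = -np.eye(n)             # hypothetical stand-in for the discretized operator
B = np.ones((n, 1))        # hypothetical input influence vector
Ar = Phi.T @ A @ Phi       # reduced (r x r) dynamics
Br = Phi.T @ B             # reduced input map

# A state-feedback law u = -K a designed on (Ar, Br) is mapped back to the
# distributed domain through x ~= Phi a, i.e. u = -K Phi^T x.
print(Ar.shape, Br.shape)
```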

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a dan-based evolutionary approach for solving control problems. Three selected control problems, viz. the linear-quadratic, harvest, and push-cart problems, are solved using the proposed approach, and the results are compared with those of an evolutionary programming (EP) approach. In most cases, the proposed approach succeeds in obtaining (near-)optimal solutions for these problems.
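
For illustration only, a minimal (1+λ)-style evolutionary sketch applied to a scalar discrete-time linear-quadratic problem. The dynamics, horizon, weights, and mutation scheme below are invented and are not the encoding used in the paper:

```python
import numpy as np

# Hypothetical scalar LQ problem: x_{k+1} = a x_k + b u_k, minimize
# sum_k (q x_k^2 + r u_k^2) over a finite horizon.
a, b, q, r, N, x0 = 0.9, 0.5, 1.0, 0.1, 20, 5.0

def cost(u):
    x, J = x0, 0.0
    for k in range(N):
        J += q * x * x + r * u[k] * u[k]
        x = a * x + b * u[k]
    return J + q * x * x

rng = np.random.default_rng(1)
best = rng.standard_normal(N)          # candidate control sequence
best_J = cost(best)
sigma = 1.0                            # mutation step size
for gen in range(500):
    children = best + sigma * rng.standard_normal((10, N))
    costs = [cost(c) for c in children]
    i = int(np.argmin(costs))
    if costs[i] < best_J:
        best, best_J = children[i], costs[i]
    else:
        sigma *= 0.98                  # shrink step size when no improvement
print(f"best cost ~ {best_J:.3f}")
```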

Relevance:

40.00%

Publisher:

Abstract:

We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
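
For reference, the discounted-cost criterion and value function in a standard form; the notation below is generic and not necessarily that of the paper:

```latex
J(x, u) = \mathbb{E}_x \left[ \int_0^{\infty} e^{-\alpha t}\, c\bigl(X_t, u_t\bigr)\, dt \right],
\qquad V(x) = \inf_{u} J(x, u), \qquad \alpha > 0 .
```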

Relevance:

40.00%

Publisher:

Abstract:

Lifetime calculations of large, dense sensor networks with fixed energy resources, together with the remaining residual energy, show that for a constant energy budget the fault rate at the cluster head is invariant to network size when only the network layer is used with no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a limit of about 8 times. Because this is a serious limitation, much research has been done at the MAC layer to adapt to the specific connectivity, traffic, and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes; this further reduces communication overhead through idling and sleep scheduling, thereby extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate, based on joint coding, for a correlated data source at a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistent clustering, when node densities are the same and prior probabilities are stored at the network layer; and (2b) estimating an upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. We evaluate several MAC-based sensor network protocols and study their effect on sensor network lifetime, and we design a renewable-energy MAC routing protocol for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the single-hop error rate is bounded by P = 2P*. Using cross-layer simulation of a large sensor-network MAC setup, we study the effects of energy losses and of the error rate on finding node densities sufficient for reliable multi-hop communication when node densities are unknown. The simulation results show that, although the lifetime is comparable, the expected Bayesian posterior error bound is close to or higher than P ≥ 2P*.
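
As a hedged illustration of the two-class Bayes error referred to above, the sketch below estimates P* by Monte Carlo for two known class densities and prints the 2P* bound. The Gaussian densities, equal priors, and sample size are hypothetical choices made only to make the bound concrete:

```python
import numpy as np
from scipy.stats import norm

# Two hypothetical class densities omega1 ~ N(0,1) and omega2 ~ N(2,1),
# with equal priors.
rng = np.random.default_rng(2)
n = 200_000
labels = rng.integers(0, 2, n)
x = np.where(labels == 0, rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n))

# Bayes rule with known class densities: pick the class with the higher
# posterior (equal priors cancel out).
post1 = norm.pdf(x, 0.0, 1.0)
post2 = norm.pdf(x, 2.0, 1.0)
decisions = (post2 > post1).astype(int)

p_star = np.mean(decisions != labels)   # Monte Carlo estimate of P*
print(f"P* ~ {p_star:.4f}, 2P* bound ~ {2 * p_star:.4f}")
```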

Relevance:

40.00%

Publisher:

Abstract:

In achieving higher instruction-level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may therefore be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced, and scheduling these spill instructions in the compact schedule is a difficult task. Several heuristics have been proposed to schedule spill code, but they may generate more spill code than necessary, and scheduling it may necessitate increasing the initiation interval (II). We model the problem of register allocation with spill code generation and scheduling in software pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in II by optimally placing spill code and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation, and scheduling for software pipelined loops. The proposed formulation performs better than existing heuristics, preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average over loops extracted from the Perfect Club and SPEC benchmarks, with a moderate increase in compilation time.
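
A toy 0-1 ILP in the same spirit, only to show the mechanics: it decides which virtual registers to spill so that the live values in every cycle fit the register file, minimizing the amount of spill. The register file size, live ranges, and the use of the PuLP solver are assumptions; the paper's formulation additionally models spill code scheduling and the initiation interval, which this sketch omits:

```python
import pulp  # assumed available; any 0-1 ILP solver interface would do

REGS = 4                   # hypothetical number of physical registers
live = {                   # hypothetical live virtual registers per cycle
    0: ["v1", "v2", "v3", "v4", "v5"],
    1: ["v1", "v2", "v3", "v4", "v5", "v6"],
    2: ["v2", "v3", "v5", "v6"],
}

prob = pulp.LpProblem("spill_placement", pulp.LpMinimize)
vregs = sorted({v for vs in live.values() for v in vs})
spill = {v: pulp.LpVariable(f"spill_{v}", cat="Binary") for v in vregs}

# Objective: spill as few virtual registers as possible.
prob += pulp.lpSum(spill.values())

# In every cycle, the unspilled live values must fit in the register file.
for c, vs in live.items():
    prob += pulp.lpSum(1 - spill[v] for v in vs) <= REGS

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: int(spill[v].value()) for v in vregs})
```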

Relevance:

40.00%

Publisher:

Abstract:

The specific objective of this paper is to develop direct digital control strategies for an ammonia reactor using quadratic regulator theory and to compare the performance of the resultant control system with that under conventional PID regulators. The controller design studies are based on a ninth-order state-space model obtained from the exact nonlinear distributed model using linearization and lumping approximations. The evaluation of these controllers with reference to their disturbance rejection capabilities and transient response characteristics is carried out using hybrid computer simulation.
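
A minimal sketch of a quadratic-regulator design of the kind described, on a hypothetical low-order linear model; the matrices and weights below are placeholders, not the ninth-order reactor model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state linearized model x_dot = A x + B u (a stand-in for the
# ninth-order model obtained by linearization and lumping).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[0.1]])      # control weighting

# Continuous-time LQR: solve the algebraic Riccati equation and form the
# state-feedback gain K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# Closed-loop eigenvalues should all have negative real parts.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```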

Relevance:

40.00%

Publisher:

Abstract:

Flexible Manufacturing Systems (FMS), widely considered the manufacturing technology of the future, are gaining importance due to the immense advantages they provide in cost, quality, and productivity over conventional manufacturing. An FMS is a complex interconnection of capital-intensive resources, and a high level of system performance is crucial for survival in a competitive environment. Discrete event simulation is one of the most popular methods for performance evaluation of FMS during the planning, design, and operation phases. Fast simulators are needed in particular for selecting optimal strategies for flow control (which part type to enter and at what instant), AGV scheduling (which vehicle carries which part), routing (which machine processes the part), and part selection (which part to process next). In this paper we develop a C-net based model for an FMS and use it for distributed discrete event simulation. We illustrate with examples the efficacy of distributed discrete event simulation for the performance evaluation of FMSs.
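
A minimal discrete-event simulation sketch in the spirit described above: a single machine serving parts from an event queue. The arrival and processing distributions are hypothetical, and no C-net structure or distributed synchronization is modelled:

```python
import heapq
import random

# Tiny discrete-event simulation: parts arrive at one FMS machine, wait if it
# is busy, and are processed one at a time.
random.seed(3)
ARRIVAL, FINISH = 0, 1
events = [(random.expovariate(1.0), ARRIVAL)]   # (time, kind) priority queue
queue_len, busy, completed, t = 0, False, 0, 0.0

while events and completed < 100:
    t, kind = heapq.heappop(events)
    if kind == ARRIVAL:
        heapq.heappush(events, (t + random.expovariate(1.0), ARRIVAL))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(1.2), FINISH))
    else:  # FINISH
        completed += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (t + random.expovariate(1.2), FINISH))
        else:
            busy = False

print(f"{completed} parts completed by t = {t:.2f}")
```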

Relevance:

40.00%

Publisher:

Abstract:

Online remote visualization and steering of critical weather applications such as cyclone tracking are essential for effective and timely analysis by a geographically distributed climate science community. A steering framework for controlling high-performance simulations of critical weather events needs to take into account both the steering inputs of the scientists and the criticality needs of the application, including a minimum progress rate of the simulations and continuous visualization of significant events. In this work, we have developed INST, an integrated user-driven and automated steering framework for simulations, online remote visualization, and analysis of critical weather applications. INST gives the user control over various application parameters, including the region of interest, the resolution of the simulation, and the frequency of data for visualization. Unlike existing efforts, our framework considers both the steering inputs and the criticality of the application, namely the minimum progress rate needed for the application, as well as various resource constraints, including storage space and network bandwidth, to decide the best possible parameter values for simulation and visualization.

Relevance:

40.00%

Publisher:

Abstract:

Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulations and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulations and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, such as the criticality of the application, and resource dynamics, such as the storage space, network bandwidth, and available number of processors, to adapt various application and resource parameters like simulation resolution, simulation rate, and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem, which leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over application parameters such as the region of interest and the simulation resolution. We have also devised an adaptive algorithm to reduce the lag between the simulation and visualization times. Using experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce the lag as well as visualize the most representative frames.
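
A hedged sketch of the kind of linear program described above, choosing a simulation rate and visualization frequency under storage and bandwidth budgets. The decision variables, coefficients, and bounds are invented for illustration and are not the framework's actual formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [simulation_rate, visualization_frequency].
# Maximize a weighted combination of the two (linprog minimizes, so negate).
c = np.array([-1.0, -0.5])

# Hypothetical resource constraints:
#   storage:   2*rate + 4*freq <= 100   (storage budget)
#   bandwidth: 1*rate + 6*freq <= 60    (network budget)
A_ub = np.array([[2.0, 4.0],
                 [1.0, 6.0]])
b_ub = np.array([100.0, 60.0])

# Criticality: the simulation must make a minimum progress rate, and frames
# must be visualized at least occasionally.
bounds = [(10.0, None), (1.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
rate, freq = res.x
print(f"simulation rate = {rate:.1f}, visualization frequency = {freq:.1f}")
```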

Relevance:

40.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but they also have control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). An approach that maps the control-flow-dominated regions to the CPU and the data parallel regions to the GPU can therefore significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions, and we propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels; the problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependences across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
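
A much-simplified sketch of the clustering idea, grouping data-parallel statements into kernels and assigning each kernel to the CPU or GPU. The statement graph, flags, and greedy heuristic below are invented for illustration and are far simpler than MEGHA's constrained graph clustering:

```python
# Statements of a hypothetical MATLAB-like program, with flags marking the
# data-parallel ones and edges recording data dependences.
statements = ["s1", "s2", "s3", "s4", "s5"]
data_parallel = {"s1": True, "s2": True, "s3": False, "s4": True, "s5": True}
deps = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s5")]

# Greedy kernel formation: walk statements in order and merge consecutive
# data-parallel statements connected by a dependence into one kernel.
kernels, current = [], []
for s in statements:
    if data_parallel[s] and (not current or (current[-1], s) in deps):
        current.append(s)
    else:
        if current:
            kernels.append(current)
        current = [s] if data_parallel[s] else []
        if not data_parallel[s]:
            kernels.append([s])        # scalar region stays on its own
if current:
    kernels.append(current)

# Toy mapping heuristic: multi-statement data-parallel kernels go to the GPU,
# scalar regions stay on the CPU.
mapping = {tuple(k): ("GPU" if all(data_parallel[s] for s in k) and len(k) > 1
                      else "CPU") for k in kernels}
print(mapping)
```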

Relevance:

40.00%

Publisher:

Abstract:

Opportunistic selection selects the node that improves the overall system performance the most. Selecting the best node is challenging because the nodes are geographically distributed and have only local knowledge; yet, selection must be fast so that more time can be spent on data transmission, which exploits the selected node's services. We analyze the impact of imperfect power control on a fast, distributed, splitting-based selection scheme that exploits the capture effect by allowing the transmitting nodes to have different target receive powers, and that uses information about the total received power to speed up selection. Imperfect power control makes the received power deviate from the target and, hence, affects performance. Our analysis quantifies how it changes the selection probability, reduces the selection speed, and leads to the selection of no node or a wrong node. We show that the effect of imperfect power control is primarily driven by the ratio of target receive powers. Furthermore, we quantify its effect on the net system throughput.
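
A hedged sketch of the classic splitting-based selection idea referenced above: nodes whose local metric falls in the current threshold interval transmit, and the interval is narrowed on a collision or lowered on an idle slot. It does not model the capture effect, target receive powers, or the power-control errors that the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(4)
metrics = rng.random(20)          # each node's local channel metric in [0, 1]

lo, hi = 0.5, 1.0                 # initial threshold interval
best = None
for slot in range(10):            # selection slots
    contenders = np.where((metrics >= lo) & (metrics < hi))[0]
    if len(contenders) == 1:      # success: exactly one node transmitted
        best = int(contenders[0])
        break
    elif len(contenders) == 0:    # idle slot: move the interval down
        hi = lo
        lo = max(0.0, lo - 0.25)
    else:                         # collision: keep only the upper half
        lo = (lo + hi) / 2.0

print("selected node:", best, "true best:", int(np.argmax(metrics)))
```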

Relevance:

40.00%

Publisher:

Abstract:

Programming for parallel architectures that do not have a shared address space is extremely difficult due to the need for explicit communication between the memories of different compute devices. A heterogeneous system with CPUs and multiple GPUs, or a distributed-memory cluster, are examples of such systems. Past works that try to automate data movement for distributed-memory architectures can lead to excessive redundant communication. In this paper, we propose an automatic data movement scheme that minimizes the volume of communication between compute devices in heterogeneous and distributed-memory systems. We show that by partitioning data dependences in a particular non-trivial way, one can generate data movement code that results in the minimum volume for a vast majority of cases. The techniques are applicable to any sequence of affine loop nests and work on top of any choice of loop transformations, parallelization, and computation placement; the generated data movement code minimizes the communication volume for that particular configuration. We use a combination of powerful static analyses relying on the polyhedral compiler framework and lightweight runtime routines they generate to build a source-to-source transformation tool that automatically generates communication code. We demonstrate that the tool is scalable and leads to substantial gains in efficiency. On a heterogeneous system, the communication volume is reduced by a factor of 11X to 83X over the state of the art, translating into a mean execution time speedup of 1.53X. On a distributed-memory cluster, our scheme reduces the communication volume by a factor of 1.4X to 63.5X over the state of the art, resulting in a mean speedup of 1.55X. In addition, our scheme yields a mean speedup of 2.19X over hand-optimized UPC codes.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, a C^0 interior penalty method is proposed and analyzed for distributed optimal control problems governed by the biharmonic operator. The state and adjoint variables are discretized using continuous piecewise quadratic finite elements, while the control variable is discretized using piecewise constant approximations. A priori and a posteriori error estimates are derived for the state, adjoint, and control variables under minimal regularity assumptions. Numerical results justify the theoretical results obtained. The a posteriori error estimators are useful in adaptive finite element approximation, and the numerical results indicate that the sharp error estimators work efficiently in guiding the mesh refinement.
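
For orientation, a standard distributed optimal control problem governed by the biharmonic operator takes the following form; the exact functional-analytic setting, control constraints, and boundary conditions in the paper may differ:

```latex
\min_{u \in U_{ad}} \; J(y, u) = \frac{1}{2} \|y - y_d\|_{L^2(\Omega)}^2
  + \frac{\alpha}{2} \|u\|_{L^2(\Omega)}^2
\quad \text{subject to} \quad
\Delta^2 y = f + u \ \text{in } \Omega, \qquad
y = \frac{\partial y}{\partial n} = 0 \ \text{on } \partial\Omega .
```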