990 results for Task constraints


Relevance:

20.00%

Publisher:

Abstract:

Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of constraint logic programming languages that have been developed in recent years. Variable elimination in linear equations (Gaussian Elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier Elimination), on the other hand, is intimately related to polyhedral theory and aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints in certain discrete domains.
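
As a concrete illustration of the inequality case, the following sketch (ours, not from the survey) eliminates one variable from a system a·x <= b by Fourier's method; the dense-list representation and function name are illustrative choices.

```python
from fractions import Fraction

def fourier_eliminate(rows, j):
    """Eliminate variable j from a system of inequalities a.x <= b.
    Each row is (a, b) with a a list of coefficients; returns the projected
    system (column j zeroed out). Illustrative sketch, not the survey's code."""
    pos, neg, zero = [], [], []
    for a, b in rows:
        c = Fraction(a[j])
        (pos if c > 0 else neg if c < 0 else zero).append((a, b))
    out = list(zero)
    # Every positive-coefficient row pairs with every negative one; this
    # pairwise combination is the source of the method's worst-case blow-up.
    for ap, bp in pos:
        for an, bn in neg:
            lp, ln = Fraction(ap[j]), -Fraction(an[j])
            a = [ln * Fraction(u) + lp * Fraction(v) for u, v in zip(ap, an)]
            a[j] = Fraction(0)
            out.append((a, ln * Fraction(bp) + lp * Fraction(bn)))
    return out

# x + y <= 4, -x + y <= 2, -y <= 0; eliminating x leaves 2y <= 6 and -y <= 0.
print(fourier_eliminate([([1, 1], 4), ([-1, 1], 2), ([0, -1], 0)], 0))
```

Repeating this step until no variables remain decides satisfiability of the original system, which is the fact the survey builds on.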

Relevance:

20.00%

Publisher:

Abstract:

For an articulated manipulator with joint rotation constraints, we show that the maximum workspace is not necessarily obtained for equal link lengths; it is also determined by the ranges and mean positions of the joint motions. We present expressions for sectional area, workspace volume, overlap volume and work area in terms of link ratios, mean positions and ranges of joint motion. We present a numerical procedure to obtain the maximum rectangular area that can be embedded in the workspace of an articulated manipulator with joint motion constraints. We demonstrate the use of the analytical expressions and the numerical plots in the kinematic design of an articulated manipulator with joint rotation constraints.
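
To make the sectional-area idea concrete, here is a rough Monte Carlo estimate for a planar two-link arm with joint limits; the link lengths, limits, and occupancy-grid resolution are illustrative values of ours, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sectional_area(l1, l2, lim1, lim2, n=200_000, bins=200):
    """Estimate the area swept by the tip of a planar 2R arm whose joint
    angles are restricted to the intervals lim1 and lim2 (radians).
    Illustrative occupancy-grid estimate only."""
    t1 = rng.uniform(*lim1, n)
    t2 = rng.uniform(*lim2, n)
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    reach = l1 + l2
    h, _, _ = np.histogram2d(x, y, bins=bins,
                             range=[[-reach, reach], [-reach, reach]])
    return np.count_nonzero(h) * (2 * reach / bins) ** 2

# Equal vs. unequal link lengths under the same joint-motion ranges.
lims = ((-np.pi / 3, np.pi / 3), (-np.pi / 2, np.pi / 2))
print(sectional_area(1.0, 1.0, *lims), sectional_area(1.2, 0.8, *lims))
```

Comparing such estimates for different link ratios under fixed joint ranges is the numerical counterpart of the analytical expressions the abstract refers to.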

Relevance:

20.00%

Publisher:

Abstract:

Optimizing a shell-and-tube heat exchanger for a given duty is an important and relatively difficult task. There is a need for a simple, general and reliable method for carrying out this task. The authors present here one such method for optimizing single-phase shell-and-tube heat exchangers under given geometric and thermohydraulic constraints. They discuss the problem in detail and then introduce a basic algorithm for optimizing the exchanger. This algorithm is based on data from an earlier study of a large collection of feasible designs generated for different process specifications. The algorithm ensures a near-optimal design satisfying the given heat duty and geometric constraints. The authors also provide several sub-algorithms to satisfy imposed velocity limitations. They illustrate the usefulness of these sub-algorithms with several examples in which the exchanger weight is minimized.
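
The abstract does not reproduce the algorithm itself; the sketch below only shows the general shape of such a constrained selection, with toy stand-ins of ours for the rating, velocity, and weight models.

```python
from itertools import product

# Candidate discrete geometry choices (illustrative values only).
tube_counts = [100, 150, 200, 300]
tube_lengths_m = [2.0, 3.0, 4.0, 6.0]

def rating(n_tubes, length):
    """Placeholder rating model: returns (duty_kW, tube_velocity_m_s)."""
    duty = 0.8 * n_tubes * length          # stand-in for heat-transfer rating
    velocity = 120.0 / n_tubes             # stand-in for tube-side velocity
    return duty, velocity

def weight(n_tubes, length):
    return 2.5 * n_tubes * length          # stand-in for exchanger weight, kg

required_duty, v_min, v_max = 450.0, 0.5, 1.0
feasible = [(weight(n, L), n, L)
            for n, L in product(tube_counts, tube_lengths_m)
            if rating(n, L)[0] >= required_duty
            and v_min <= rating(n, L)[1] <= v_max]
print(min(feasible))  # lightest design meeting duty and velocity limits
```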

Relevance:

20.00%

Publisher:

Abstract:

Thermodynamic constraints on component chemical potentials in three-phase fields introduced by the various isograms suggested in the literature are derived for a ternary system containing compounds. When the compositions of two compounds lie on an isogram, the isogram has specific characteristics which can be used to obtain further understanding of the interplay of thermodynamic factors that determine phase equilibria. When two compounds are shared by adjacent three-phase fields, the constraints are dictated by the binary compositions generated by the intersection of a line passing through the shared compounds with the sides of the ternary triangle. Generalized expressions for an arbitrary line through the triangle are presented. These are consistent with special relations obtained along the Kohler, Colinet and Jacob isograms. Five axioms are introduced and proved. They provide valuable tools for checking the consistency of thermodynamic measurements and for deriving thermodynamic properties from phase diagrams. (C) 1997 Elsevier Science S.A.
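
For readers less familiar with the underlying argument, a minimal phase-rule statement (our paraphrase, not the paper's derivation) is:

```latex
% Sketch of the standard phase-rule argument; notation and line equation are ours.
At fixed $T$ and $P$, the reduced phase rule for a ternary system ($C = 3$)
in a three-phase field ($P = 3$) gives $F = C - P = 0$, so the component
chemical potentials $\mu_A$, $\mu_B$, $\mu_C$ are fixed throughout that field.
A general straight line through the composition triangle can be written as
\[
  \alpha\, x_B + \beta\, x_C = \gamma , \qquad x_A = 1 - x_B - x_C ,
\]
and the binary compositions mentioned above are the points where such a line
meets the sides $x_A = 0$, $x_B = 0$ or $x_C = 0$ of the triangle.
```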

Relevance:

20.00%

Publisher:

Abstract:

Oligomeric copper(I) clusters are formed by the insertion reaction of copper(I) aryloxides into heterocumulenes. The effect of varying the steric demands of the heterocumulene and the aryloxy group on the nuclearity of the oligomers formed has been probed. Reactions of copper(I) 2-methoxyphenoxide and copper(I) 2-methylphenoxide with PhNCS result in the formation of the hexameric complexes hexakis[N-phenylimino(aryloxy)methanethiolato copper(I)] 3 and 4, respectively. Single-crystal X-ray data confirmed the structure of 3. Similar insertion reactions of CS2 with the copper(I) aryloxides formed by 2,6-di-tert-butyl-4-methylphenol and 2,6-dimethylphenol result in oligomeric copper(I) complexes 7 and 8 bearing the (aryloxy)thioxanthate ligand. Complex 7 was confirmed to be a tetramer by single-crystal X-ray crystallography. Reactions carried out with 2-mercaptopyrimidine, which has ligating properties similar to N-alkylimino(aryloxy)methanethiolate, result in the formation of an insoluble polymeric complex 11. The fluorescence spectra of the oligomeric complexes are helpful in determining their nuclearity. It has been shown that a decrease in the steric requirements of either the heterocumulene or aryloxy parts of the ligand can compensate for steric constraints and facilitate oligomerization. (C) 1999 Elsevier Science Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

We report here results from a dynamo model developed along the lines of the Babcock-Leighton idea that the poloidal field is generated at the surface of the Sun from the decay of active regions. In this model magnetic buoyancy is handled with a realistic recipe wherein toroidal flux is made to erupt from the overshoot layer wherever it exceeds a specified critical field B_c (10^5 G). The erupted toroidal field is then acted upon by the alpha-effect near the surface to give rise to the poloidal field. In this paper we study the effect of buoyancy on the dynamo-generated magnetic fields. Specifically, we show that the mechanism of buoyant eruption and the subsequent depletion of the toroidal field inside the overshoot layer is capable of constraining the magnitude and distribution of the magnetic field there. We also believe that a critical study of this mechanism may give us new information about the solar interior, and we end with an example in which we propose a method for estimating an upper limit on the diffusivity within the overshoot layer.
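
A one-dimensional caricature of the buoyancy recipe described above, with the eruption fraction and the toy grid being our illustrative assumptions:

```python
import numpy as np

B_C = 1e5              # critical toroidal field in the overshoot layer, gauss
ERUPT_FRACTION = 0.5   # fraction of the excess field assumed to erupt (ours)

def buoyancy_step(B_toroidal):
    """Wherever the overshoot-layer toroidal field exceeds B_C, move part of
    it toward the surface; returns (depleted layer, erupted field)."""
    erupted = np.where(B_toroidal > B_C,
                       ERUPT_FRACTION * (B_toroidal - B_C), 0.0)
    return B_toroidal - erupted, erupted

layer = np.array([0.4e5, 0.9e5, 1.6e5, 2.5e5])   # toy latitudinal grid
layer, erupted = buoyancy_step(layer)
print(layer, erupted)   # the field is clipped back toward B_C where it erupted
```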

Relevance:

20.00%

Publisher:

Abstract:

We know, from the classical work of Tarski on real closed fields, that elimination is, in principle, a fundamental engine for mechanized deduction. But, in practice, the high complexity of elimination algorithms has limited their use in the realization of mechanical theorem proving. We advocate qualitative theorem proving, where elimination is attractive since most processes of reasoning take place through the elimination of middle terms, and because the computational complexity of the proof is not an issue. Indeed what we need is the existence of the proof and not its mechanization. In this paper, we treat the linear case and illustrate the power of this paradigm by giving extremely simple proofs of two central theorems in the complexity and geometry of linear programming.
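
A one-line instance of elimination acting as a proof (our illustration, not one of the paper's two theorems):

```latex
% Illustrative example only; the system and the multipliers below are ours.
\[
  \{\, x - y \le -1,\;\; y - z \le -1,\;\; z - x \le -1 \,\}
  \;\xrightarrow{\ \text{eliminate } x,\,y,\,z\ }\;
  0 \le -3 ,
\]
obtained as the sum of the three inequalities with multipliers $(1,1,1)$; the
nonnegative multipliers are exactly a Farkas-style certificate that the original
system has no real solution, and the certificate exists whether or not anyone
runs the elimination mechanically.
```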

Relevance:

20.00%

Publisher:

Abstract:

In this article we study the problem of joint congestion control, routing and MAC-layer scheduling in a multi-hop wireless mesh network, where the nodes in the network are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraint of nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory in order to decompose the problem into two sub-problems: a network-layer routing and congestion control problem and a MAC-layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links on it but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the prices of the links. We study the effects of the energy expenditure rate constraints of the nodes on the optimal throughput of the network.
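
A toy version of the price-based decomposition for a single source on a two-link path; the utility, step size, capacities and energy limits are illustrative assumptions of ours, and the MAC-layer subproblem is collapsed into fixed link-capacity proxies.

```python
import numpy as np

# One source using a 2-link path; log utility, link capacities c, node energy caps e.
c = np.array([1.0, 0.8])            # link capacity proxies (illustrative)
e = np.array([0.9, 0.9, 0.9])       # per-node energy-expenditure-rate limits
A_link = np.array([1.0, 1.0])       # the flow crosses both links
A_node = np.array([1.0, 2.0, 1.0])  # the relay node spends energy on rx and tx

lam = np.zeros(2)                   # link prices
mu = np.zeros(3)                    # node (energy) prices
step = 0.05

for _ in range(2000):
    path_price = A_link @ lam + A_node @ mu
    x = 1.0 / max(path_price, 1e-6)          # maximizes log(x) - price * x
    x = min(x, 5.0)                          # cap the rate while prices are ~0
    lam = np.maximum(0.0, lam + step * (A_link * x - c))   # link congestion
    mu = np.maximum(0.0, mu + step * (A_node * x - e))     # energy budgets
print(x)   # settles near the tightest constraint, here e/2 = 0.45 at the relay
```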

Relevance:

20.00%

Publisher:

Abstract:

The primary objective of the paper is to make use of a statistical digital human model to better understand the nature of the reach probability of points in the task space. The concept of a task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who would accomplish the task. For a given point of interest and task, the map of the acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the given population to identify the extreme individuals. To illustrate the concept, task-space mapping is done for the reach probability of human arms. Unlike boundary manikins, which are completely defined by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case. Hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins would help designers visualize how differently the extreme individuals would perform the task. Reach probability is computed at the points of a 3D grid in the operational space; for objects overlaid on this grid, approximate probabilities are derived from the grid and used to render the objects with colors indicating the reach probability. The method may also help in providing a rational basis for the selection of personnel for a given task.
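
A Monte Carlo caricature of the reach-probability computation, with the arm reduced to two segments and the anthropometric distribution invented for illustration (joint limits are ignored, unlike in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative population: upper-arm and forearm+hand lengths (metres),
# sampled from a correlated normal fitted to no particular survey.
mean = np.array([0.32, 0.44])
cov = np.array([[0.0006, 0.0004],
                [0.0004, 0.0008]])

def reach_probability(point, shoulder=(0.0, 0.0, 0.0), n=50_000):
    """Fraction of the sampled population whose total arm length reaches
    the point from a fixed shoulder position (joint limits ignored)."""
    lengths = rng.multivariate_normal(mean, cov, size=n)
    total = lengths.sum(axis=1)
    dist = np.linalg.norm(np.asarray(point) - np.asarray(shoulder))
    return np.mean(total >= dist)

for p in [(0.5, 0.3, 0.2), (0.6, 0.4, 0.2), (0.7, 0.4, 0.3)]:
    print(p, reach_probability(p))
```

Evaluating such a probability at every node of a 3D grid gives the task-space map the abstract describes; the acceptable-variation region would be the set of parameter values for which the point is reached.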

Relevance:

20.00%

Publisher:

Abstract:

The focus of this paper is on designing useful compliant micro-mechanisms of high-aspect-ratio which can be microfabricated by the cost-effective wet etching of (110) orientation silicon (Si) wafers. Wet etching of (110) Si imposes constraints on the geometry of the realized mechanisms because it allows only etch-through in the form of slots parallel to the wafer's flat with a certain minimum length. In this paper, we incorporate this constraint in the topology optimization and obtain compliant designs that meet the specifications on the desired motion for given input forces. Using this design technique and wet etching, we show that we can realize high-aspect-ratio compliant micro-mechanisms. For a (110) Si wafer of 250 µm thickness, the minimum length of the etch opening to get a slot is found to be 866 µm. The minimum achievable width of the slot is limited by the resolution of the lithography process and this can be a very small value. This is studied by conducting trials with different mask layouts on a (110) Si wafer. These constraints are taken care of by using a suitable design parameterization rather than by imposing the constraints explicitly. Topology optimization, as is well known, gives designs using only the essential design specifications. In this work, we show that our technique also gives manufacturable mechanism designs along with lithography mask layouts. Some designs obtained are transferred to lithography masks and mechanisms are fabricated on (110) Si wafers.
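
The constraint-by-parameterization idea can be sketched as follows: rather than free per-element densities, each design variable switches a whole candidate slot on or off, with every slot already parallel to the wafer flat and at least the minimum etchable length; the grid, cell size and one-cell slot width below are placeholders of ours, not the paper's formulation.

```python
import numpy as np

MIN_SLOT_LEN_UM = 866.0      # minimum etch-through slot length quoted in the paper
GRID = (40, 120)             # rows x cols of the design domain (placeholder)
CELL_UM = 50.0               # placeholder cell size

def material_map(slot_vars, slot_starts):
    """Each design variable opens one candidate slot (one cell wide, fixed
    length >= MIN_SLOT_LEN_UM) parallel to the wafer flat; everything else
    stays silicon.  The wet-etch constraint lives in the parameterization,
    so the optimizer needs no explicit geometric constraint."""
    length_cells = int(np.ceil(MIN_SLOT_LEN_UM / CELL_UM))
    solid = np.ones(GRID)
    for on, (row, col) in zip(slot_vars, slot_starts):
        if on > 0.5:
            solid[row, col:col + length_cells] = 0.0
    return solid

# Two candidate slots, only the first switched on.
starts = [(10, 20), (25, 60)]
print(material_map([1.0, 0.0], starts).sum(), "solid cells remain")
```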

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses a search problem with multiple limited-capability search agents in a partially connected dynamical networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of an increase in information exchange during decision-making. Some analytical results on the maximum number of self-assessment cycles, the effect of increasing communication range, the completeness of the algorithm, and lower and upper bounds on the search time are also obtained. The performance of the various self-assessment schemes, in terms of total uncertainty reduction in the search region under different information structures, is studied. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes and that its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out. Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out information-collection missions. These UAVs have limited sensor and communication ranges. In order to enhance the performance of the mission and to complete it quickly, cooperation between UAVs is important. Designing cooperative search strategies for multiple UAVs with these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms. In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and through cooperation with neighbors. The complexity of the decision-making scheme is very low. It can arrive at decisions quickly with low communication overheads, while accommodating the various information structures used for increasing the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm and the lower and upper bounds on the search time are also provided.
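
The decision cycle can be sketched roughly as follows, with the grid, sensor model and conflict resolution being crude simplifications of ours rather than the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
uncertainty = rng.uniform(0.2, 1.0, size=(10, 10))   # uncertainty map of the region
agents = {0: (2, 3), 1: (2, 5), 2: (7, 7)}            # agent id -> current cell

def candidate_moves(cell):
    """Neighbouring cells an agent with limited motion can visit next."""
    r, c = cell
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr or dc) and 0 <= r + dr < 10 and 0 <= c + dc < 10]

def self_assessment_round(agents, uncertainty):
    """One decision cycle: each agent ranks its neighbouring cells by the
    uncertainty it expects to remove and claims the best cell not already
    claimed earlier in the cycle (a crude stand-in for the yield step)."""
    claims = {}
    for aid, cell in agents.items():
        ranked = sorted(candidate_moves(cell),
                        key=lambda c: uncertainty[c], reverse=True)
        claims[aid] = next((c for c in ranked if c not in claims.values()),
                           ranked[0])
    return claims

moves = self_assessment_round(agents, uncertainty)
for aid, cell in moves.items():
    uncertainty[cell] *= 0.3   # crude sensor model: one visit cuts uncertainty
print(moves)
```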

Relevance:

20.00%

Publisher:

Abstract:

Designing and optimizing high-performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation, and the several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model-building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
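
The four-step procedure maps onto standard tools; in the sketch below a made-up analytic function stands in for detailed simulation, and scipy's RBFInterpolator stands in for the radial basis function networks mentioned above.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

D = 9                                    # microarchitectural design parameters
lower, upper = np.zeros(D), np.ones(D)   # normalized parameter ranges

def simulate_cpi(x):
    """Stand-in for detailed simulation: any smooth nonlinear response."""
    return 1.0 + 0.5 * np.sin(3 * x[0]) + x[1] * x[2] + 0.2 * np.sum(x**2)

# (i) Latin hypercube sample of the design space, (ii) "simulate" each point.
sampler = qmc.LatinHypercube(d=D, seed=0)
train_x = qmc.scale(sampler.random(n=200), lower, upper)
train_y = np.array([simulate_cpi(x) for x in train_x])

# (iii) fit a radial-basis-function model, (iv) validate on random points.
model = RBFInterpolator(train_x, train_y, kernel="thin_plate_spline")
test_x = np.random.default_rng(1).uniform(size=(50, D))
test_y = np.array([simulate_cpi(x) for x in test_x])
err = np.abs(model(test_x) - test_y) / test_y
print(f"mean relative CPI prediction error: {err.mean():.2%}")
```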