Abstract:
A posteriori error estimation and adaptive refinement techniques are the state of the art for fracture analysis of 2-D/3-D crack problems. The objective of the present paper is to propose a new a posteriori error estimator based on the strain energy release rate (SERR) or stress intensity factor (SIF) in the crack tip region, and to use it along with the stress-based error estimator available in the literature for the region away from the crack tip. The proposed a posteriori error estimator is called the K-S error estimator. Further, an adaptive mesh refinement (h-) strategy that can be used with the K-S error estimator has been proposed for fracture analysis of 2-D crack problems. The performance of the proposed a posteriori error estimator and the h-adaptive refinement strategy has been demonstrated by employing 4-noded, 8-noded and 9-noded plane stress finite elements. The proposed error estimator, together with the h-adaptive refinement strategy, will facilitate automation of the fracture analysis process and provide reliable solutions.
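As background on the second ingredient (an assumption about which literature estimator is meant, since the abstract does not name it), stress-based estimators of the Zienkiewicz-Zhu recovery type measure the energy norm of the difference between the finite element stress and a recovered, smoothed stress:

```latex
% Background sketch (an assumption, not necessarily the paper's form):
% sigma^* is a recovered (smoothed) stress field, sigma_h the finite
% element stress, D the elasticity matrix.
\[
  \|e\|_E^2 \;\approx\; \int_{\Omega}
  (\sigma^* - \sigma_h)^{\mathsf T}\, D^{-1}\, (\sigma^* - \sigma_h)\, d\Omega ,
\]
% with the element-level contributions driving h-refinement wherever
% the local error exceeds a preset tolerance.
```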
Abstract:
There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that puts together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them being new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we should find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables x and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimal communication.

These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
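To make the two-stage structure concrete, here is a minimal sketch on a toy equality-constrained problem; the toy objective, constraint and the dual-Newton multiplier update are illustrative assumptions, not the paper's network model. Note how the same Jacobian J from stage one reappears in the stage-two multiplier update, as the abstract points out:

```python
# Two-stage Lagrange-multiplier iteration on a toy problem (assumed,
# not the paper's model): minimize f(x) = 0.5*x.x  s.t.  sum(x) = 3.
import numpy as np

def grad_f(x):            # gradient of the (quadratic) objective
    return x

def hess_f(x):            # Hessian of the objective (identity here)
    return np.eye(x.size)

def g(x):                 # equality constraint ("flow balance")
    return np.array([x.sum() - 3.0])

def jac_g(x):             # constraint Jacobian J, shape (m, n)
    return np.ones((1, x.size))

x, lam = np.zeros(3), np.zeros(1)
for it in range(20):
    H, J = hess_f(x), jac_g(x)
    # Stage 1: Newton step on grad_x L = grad_f + J^T lam = 0, lam fixed.
    x = x - np.linalg.solve(H, grad_f(x) + J.T @ lam)
    # Stage 2: Newton step on the dual; the SAME Jacobian J reappears.
    S = J @ np.linalg.solve(H, J.T)        # dual (Schur) matrix
    lam = lam + np.linalg.solve(S, g(x))
    if abs(g(x)).max() < 1e-12:
        break

print(x, lam, it)   # converges to x = [1, 1, 1], lam = [-1] quickly
```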
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and therefore intensifies the search in the regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM), when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality.
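The abstract does not spell out the risk computation; purely as an illustration of the Monte Carlo step, the following sketch propagates assumed input distributions through a stand-in steady-state response and averages an assumed fuzzy membership of "low water quality" (all distributions, thresholds and the response function are placeholders, not the paper's model):

```python
# Illustrative Monte Carlo evaluation of a fuzzy risk of low water
# quality. The DO response, parameter distributions and membership
# function below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mu_low(do_mgl):
    # Fuzzy membership of "low water quality" in terms of dissolved
    # oxygen (DO): 1 below 4 mg/L, 0 above 6 mg/L, linear in between.
    return np.clip((6.0 - do_mgl) / 2.0, 0.0, 1.0)

# Sample uncertain inputs (e.g., deoxygenation rate, upstream BOD) and
# push them through a stand-in steady-state BOD-DO response.
n = 100_000
kd = rng.normal(0.35, 0.05, n)            # deoxygenation rate (1/day)
bod = rng.normal(20.0, 3.0, n)            # upstream BOD (mg/L)
do_sat = 8.0
do = do_sat - 0.25 * bod * np.exp(-kd)    # toy response, not the model

fuzzy_risk = mu_low(do).mean()            # expected membership of "low"
print(f"fuzzy risk of low water quality ~ {fuzzy_risk:.3f}")
```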
Abstract:
A new formulation is suggested for the fixed end-point regulator problem, which, in conjunction with the recently developed integration-free algorithms, provides an efficient means of obtaining numerical solutions to such problems.
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research.

The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues.

A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available have time complexity O(n²) or more, and space complexity O(n²), on these graphs. The Hamilton circuit problem is open for this class.

We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of a geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n²) time and O(n) space algorithm for the Hamilton circuit problem.
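As background to the geometric notion mentioned above, a permutation graph can be built directly from its defining permutation: in the matching diagram, vertices i and j are joined exactly when their chords cross, i.e. when the permutation reverses their relative order. A minimal sketch (brute-force O(n²) edge enumeration, not the paper's O(n) algorithm):

```python
# Standard matching-diagram representation of a permutation graph
# (background sketch, not the paper's algorithm).
def permutation_graph_edges(pi):
    """pi[k] is the value at position k (a permutation of 0..n-1).
    Vertices i < j are adjacent iff pi places i after j, i.e. the
    chords from i, j on the top line to their positions cross."""
    n = len(pi)
    pos = [0] * n                 # pos[v] = position of value v in pi
    for k, v in enumerate(pi):
        pos[v] = k
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if pos[i] > pos[j]]   # i < j but pi^{-1}(i) > pi^{-1}(j)

# Example: pi = (2, 0, 3, 1).
print(permutation_graph_edges([2, 0, 3, 1]))  # [(0, 2), (1, 2), (1, 3)]
```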
Abstract:
In this paper, a novel genetic algorithm is developed by generating artificial chromosomes with probability control to solve machine scheduling problems. This genetic algorithm with artificial chromosomes (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts the gene information from the current population. ACGA is considered a hybrid algorithm because both conventional genetic operators and a probability model are integrated. The ACGA proposed in this paper further employs the "evaporation concept" applied in Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The "evaporation concept" is used to reduce the effect of past experience and to explore new alternative solutions. In this paper, we propose three different methods for the probability of evaporation. This probability of evaporation is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that our ACGA with the evaporation concept gives better performance than some algorithms in the literature.
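As background, the probability model in such EAPM-style hybrids typically tracks job-position frequencies. Below is a minimal Python sketch with an assumed evaporation-style update; the rate rho, the update rule and the sampling scheme are illustrative assumptions, not the paper's three evaporation methods:

```python
# Illustrative EDA-style probability model with an ACO-like
# evaporation factor, in the spirit of the ACGA described above.
import numpy as np

rng = np.random.default_rng(1)

def update_model(P, population, rho=0.2):
    """P[j, pos] ~ probability that job j occupies position pos.
    Evaporation decays past frequencies before reinforcing with the
    gene frequencies observed in the current population."""
    n = P.shape[0]
    freq = np.zeros_like(P)
    for perm in population:               # perm: jobs by position
        freq[perm, np.arange(n)] += 1.0
    freq /= len(population)
    return (1.0 - rho) * P + rho * freq   # evaporate, then reinforce

def sample_artificial_chromosome(P):
    """Draw a permutation position by position from the model,
    renormalizing over the jobs not yet assigned."""
    n = P.shape[0]
    remaining, perm = list(range(n)), []
    for pos in range(n):
        w = np.asarray(P[remaining, pos], dtype=float)
        w = w / w.sum() if w.sum() > 0 else np.full(len(remaining), 1.0 / len(remaining))
        j = rng.choice(remaining, p=w)
        perm.append(int(j))
        remaining.remove(j)
    return perm

n = 5
P = np.full((n, n), 1.0 / n)                      # uniform initial model
pop = np.array([rng.permutation(n) for _ in range(10)])
P = update_model(P, pop)
print(sample_artificial_chromosome(P))            # one artificial chromosome
```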
Abstract:
An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal order L² norm a priori error estimates are derived for an adjoint consistent interior penalty method.
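For reference, a Gårding-type inequality for the bilinear form a(·,·) of the model problem has the standard form (with generic constants):

```latex
% Standard Garding-type inequality for a bilinear form a(.,.)
% associated with the elliptic operator (generic constants):
\[
  a(v, v) \;\ge\; \alpha\,\|v\|_{H^1(\Omega)}^2 \;-\; \beta\,\|v\|_{L^2(\Omega)}^2
  \qquad \text{for all } v \in H^1(\Omega), \quad \alpha > 0, \; \beta \ge 0 .
\]
```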
Abstract:
We consider functions that map the open unit disc conformally onto the complement of a bounded convex set. We call these functions concave univalent functions. In 1994, Livingston presented a characterization for these functions. In this paper, we observe that there is a minor flaw with this characterization. We obtain certain sharp estimates and the exact set of variability involving Laurent and Taylor coefficients for concave functions. We also present the exact set of variability of the linear combination of certain successive Taylor coefficients of concave functions.
Abstract:
A polygon is said to be a weak visibility polygon if every point of the polygon is visible from some point of an internal segment. In this paper we derive properties of shortest paths in weak visibility polygons and present a characterization of weak visibility polygons in terms of shortest paths between vertices. These properties lead to the following efficient algorithms: (i) an O(E) time algorithm for determining whether a simple polygon P is a weak visibility polygon and for computing a visibility chord if one exists, where E is the size of the visibility graph of P, and (ii) an O(n²) time algorithm for computing the maximum hidden vertex set in an n-sided polygon weakly visible from a convex edge.
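For illustration only, the definition can be checked by brute force: sample points along a candidate chord and test visibility with straight segments. This tests vertices only (a necessary condition) and is not the paper's O(E) algorithm; the example polygon, chord and sampling density are assumptions:

```python
# Brute-force illustration of the weak visibility definition using
# shapely (NOT the paper's algorithm): a vertex v is visible from a
# probe point p iff the segment p-v stays inside the closed polygon.
from shapely.geometry import LineString, Polygon

def weakly_visible_from(poly_pts, chord_pts, samples=50):
    poly = Polygon(poly_pts)
    chord = LineString(chord_pts)
    probes = [chord.interpolate(t / samples, normalized=True)
              for t in range(samples + 1)]
    for v in poly_pts:
        if not any(poly.covers(LineString([(p.x, p.y), v]))
                   for p in probes):
            return False      # this vertex sees no sampled chord point
    return True

# A polygon with a notch, and a chord along its base.
poly = [(0, 0), (6, 0), (6, 3), (4, 1), (2, 3), (0, 3)]
print(weakly_visible_from(poly, [(0.5, 0.5), (5.5, 0.5)]))  # True
```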
Abstract:
The set of attainable laws of the joint state-control process of a controlled diffusion is analyzed from a convex analytic viewpoint. Various equivalence relations depending on one-dimensional marginals thereof are defined on this set and the corresponding equivalence classes are studied.
Abstract:
In this work, we present a new monolithic strategy for solving fluid-structure interaction problems involving incompressible fluids, within the context of the finite element method. This strategy, like the underlying continuum dynamics, conserves certain properties, and thus provides a rational basis for the design of the time-stepping strategy; detailed proofs of the conservation of these properties are provided. The proposed algorithm works with displacement and velocity variables for the structure and fluid, respectively, and introduces no new variables to enforce velocity or traction continuity. Any existing structural dynamics algorithm can be used without change in the proposed method. Use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. An analytical solution is presented for one of the benchmark problems used in the literature, namely, the piston problem. A number of benchmark problems, including problems involving free surfaces such as sloshing and the breaking dam problem, are used to demonstrate the good performance of the proposed method.
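The quadratic convergence claim rests on the standard Newton argument: with the exact tangent of the monolithic residual, the error roughly squares at each iteration. A generic sketch on a stand-in residual (the toy equations are assumptions, not the paper's FSI formulation):

```python
# Generic within-time-step Newton solve of a monolithic scheme: one
# residual R(u) over all unknowns and its exact tangent dR/du.
import numpy as np

def residual(u):
    # Stand-in coupled nonlinear residual (NOT the paper's equations);
    # its root is u = (1, 1).
    return np.array([u[0]**2 + u[1] - 2.0,
                     u[0] + u[1]**2 - 2.0])

def tangent(u):
    # Exact consistent tangent (Jacobian) of the residual above.
    return np.array([[2.0 * u[0], 1.0],
                     [1.0, 2.0 * u[1]]])

u = np.array([1.5, 1.5])                               # initial guess
for k in range(10):
    r = residual(u)
    print(f"iter {k}: |R| = {np.linalg.norm(r):.2e}")  # error ~squares
    if np.linalg.norm(r) < 1e-12:
        break
    u = u - np.linalg.solve(tangent(u), r)             # Newton update
```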
Abstract:
The plane stress solution for the interaction analysis of a framed structure, with a foundation beam, resting on a layered soil has been studied using both theoretical and photoelastic methods. The theoretical analysis has been carried out using a combined analytical and finite element method, in which the analytical solution has been used for the semi-infinite layered medium and the finite element method for the framed structure. The experimental investigation has been carried out using two-dimensional photoelasticity, in which the modelling of the layered semi-infinite plane and a method to obtain the contact pressure distribution are discussed. The theoretical and experimental results with respect to the contact pressure distribution between the foundation beam and the layered soil medium, and the fibre stresses in the foundation beam and framed structure, have been compared. These results have also been compared with theoretical results obtained by idealizing the layered semi-infinite plane as (a) a Winkler model and (b) an equivalent homogeneous semi-infinite medium.
Abstract:
In linear elastic fracture mechanics (LEFM), Irwin's crack closure integral (CCI) is one of the significant concepts for the estimation of strain energy release rates (SERR), G, in individual as well as mixed-mode configurations. For effective utilization of this concept in conjunction with the finite element method (FEM), Rybicki and Kanninen [Engng Fracture Mech. 9, 931-938 (1977)] have proposed simple and direct estimations of the CCI in terms of nodal forces and displacements in the elements forming the crack tip, obtained from a single finite element analysis instead of the conventional two-configuration analysis. These modified CCI (MCCI) expressions are basically element dependent. A systematic derivation of these expressions using element stress and displacement distributions is required. In the present work, a general procedure is given for the derivation of MCCI expressions in 3-D problems with cracks. Further, a concept of sub-area integration is proposed which facilitates evaluation of the SERR at a large number of points along the crack front without refining the finite element mesh. Numerical data are presented for two standard problems: a thick centre-cracked tension specimen and a semi-elliptical surface crack in a thick slab. Estimates of the stress intensity factor based on MCCI expressions corresponding to eight-noded brick elements are obtained and compared with available results in the literature.
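For orientation, the classical Rybicki-Kanninen estimate in 2-D with 4-noded elements (per unit thickness) has the well-known form below; the paper's element-dependent 3-D expressions generalize this idea, so the form shown is background, not the paper's result:

```latex
% Classical 2-D MCCI/VCCT estimate for 4-noded elements (background):
% Delta a           : crack-tip element length
% F_x, F_y          : nodal forces at the crack-tip node
% Delta u, Delta v  : relative sliding/opening displacements of the
%                     node pair immediately behind the tip
\[
  G_I \;\approx\; \frac{F_y\,\Delta v}{2\,\Delta a},
  \qquad
  G_{II} \;\approx\; \frac{F_x\,\Delta u}{2\,\Delta a}.
\]
```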