963 results for ABSTRACT PARABOLIC PROBLEMS
Abstract:
We prove global existence of nonnegative solutions to one-dimensional degenerate parabolic problems containing a singular term. We also show the global quenching phenomenon for $L^1$ initial data. Moreover, the free boundary problem is considered in this paper.
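A representative model in this class (illustrative only; the exact problem studied may differ) is the degenerate diffusion equation with a singular absorption term
$$u_t = (u^m)_{xx} - \lambda\, u^{-\beta}, \qquad x \in (0,1),\ t>0, \qquad m \ge 1,\ \beta > 0,\ \lambda > 0,$$
where quenching means that $u$ reaches zero, the value at which $u^{-\beta}$ becomes singular, in finite time; global quenching means this happens throughout the spatial domain.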
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
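As a rough illustration of the graphical-lasso step described above (not the modified algorithm proposed in the dissertation, and with synthetic data standing in for circuit or fMRI measurements), a minimal sketch using scikit-learn might look as follows:

# Illustrative sketch: estimating a sparse inverse covariance (precision) matrix
# with the graphical lasso. The data below is synthetic; the chain topology and
# the penalty value alpha are arbitrary choices for the example.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_nodes, n_samples = 10, 200

# Synthetic "nodal signals": Gaussian samples with a sparse precision matrix.
true_precision = np.eye(n_nodes)
for i in range(n_nodes - 1):                      # simple chain topology
    true_precision[i, i + 1] = true_precision[i + 1, i] = 0.4
true_cov = np.linalg.inv(true_precision)
samples = rng.multivariate_normal(np.zeros(n_nodes), true_cov, size=n_samples)

# Graphical lasso: the L1 penalty (alpha) promotes zeros in the estimated
# precision matrix; nonzero off-diagonal entries are read as graph edges.
model = GraphicalLasso(alpha=0.05).fit(samples)
edges = np.argwhere(np.triu(np.abs(model.precision_) > 1e-3, k=1))
print("estimated edges:", edges.tolist())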
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
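For context, the classical fluid model that the abstract refers to (the simplified model in which every link sees the original source rate, not the refined buffer-aware model derived in the dissertation) can be sketched as a Kelly-style primal algorithm; the routing matrix, capacities and penalty-style price below are toy assumptions:

# Toy sketch of a classical primal fluid model of congestion control.
# Every link is assumed to see the original source rate x_r (the simplification
# the dissertation relaxes). All numbers are made up for illustration.
import numpy as np

R = np.array([[1, 1],      # link 0 carries sources 0 and 1
              [0, 1]])     # link 1 carries source 1 only
c = np.array([1.0, 0.8])   # link capacities
w = np.array([1.0, 1.0])   # source willingness-to-pay
k, dt = 0.1, 0.01

x = np.array([0.1, 0.1])   # initial source rates
for _ in range(20000):
    y = R @ x                                  # aggregate rate on each link
    price = np.maximum(y - c, 0.0) * 10.0      # toy penalty-style link price
    q = R.T @ price                            # path price seen by each source
    x += dt * k * (w - x * q)                  # primal update dx/dt = k (w - x q)
    x = np.maximum(x, 1e-6)
print("equilibrium rates:", x)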
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
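To make the “power over-delivery” idea concrete, here is a hedged toy sketch in which the nodal balance equality is relaxed to an inequality (loads may receive more than demanded); it is a crude network-flow-style model with made-up data, not the convex relaxation developed in the dissertation:

# Toy illustration of "power over-delivery": nodal balance is relaxed from an
# equality to an inequality, so generation may exceed delivered demand.
import cvxpy as cp
import numpy as np

demand = np.array([0.0, 0.6, 0.7])            # loads at buses 0, 1, 2 (per unit)
gen = cp.Variable(3, nonneg=True)             # generation at each bus
flow = cp.Variable(3)                         # flows on lines (0-1), (1-2), (0-2)
A = np.array([[ 1,  0,  1],                   # bus-line incidence (flow leaving bus)
              [-1,  1,  0],
              [ 0, -1, -1]])
cost = cp.sum_squares(gen) + cp.sum(gen)      # convex generation cost

constraints = [
    gen - demand >= A @ flow,                 # relaxed balance: over-delivery allowed
    cp.abs(flow) <= 0.5,                      # line flow limits
    gen <= np.array([1.5, 0.2, 0.2]),         # generator capacities
]
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("generation:", np.round(gen.value, 3), "cost:", round(prob.value, 3))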
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
Research on naïve physics investigates children’s intuitive understanding of physical objects, phenomena and processes. Children, and also many adults, have been found to hold a misconception of inertia, called the impetus theory. In order to investigate the development of this naïve concept and the mechanism underlying it, four age groups (5-year-olds, 2nd graders, 5th graders, and 8th graders) were included in this research. Modified experimental tasks were used to explore the effects of daily experience, perceptual cues and general information-processing ability on children’s understanding of inertia. The results of this research are: 1) Five- to thirteen-year-olds’ understanding of inertia problems involving two objects moving at the same speed undergoes an L-shaped developmental trend; children’s performance became worse as they got older, and their performance in the experiment did not necessarily improve with the improvement of their cognitive abilities. 2) The L-shaped developmental curve suggests that children at different ages used different strategies to solve inertia problems: five- to eight-year-olds only used a heuristic strategy, while eleven- to thirteen-year-olds solved problems by analyzing the details of inertial motion. 3) The different performance on familiar and unfamiliar problems showed that older children were not able to spontaneously transfer their knowledge and experience from daily action and observation of inertia to unfamiliar, abstract inertia problems. 4) Five- to eight-year-olds showed straight and fragmented response patterns, while more eleven- to thirteen-year-olds showed the standard impetus theory and revised impetus theory patterns, which indicates that younger children were influenced by perceptual cues and their understanding of inertia was fragmented, while older children held a coherent impetus theory. 5) When the perceptual cues were controlled, even 40 percent of 5-year-olds showed the information-processing ability to analyze the distance, speed and time of two objects traveling in two different directions at the same time, demonstrating that they had reached the level necessary to theorize their naïve concept of inertia.
Abstract:
The critical problems of the small-scale trawl fishery are the capture of juvenile specimens, a high presence of discards, incidental or accessory catch, and conflicts with artisanal fishers who use curtain gillnets. Across the entire study area, catch per unit effort (CPUE) was 142.4 kg/h and 477.5 kg/haul, and bycatch per unit effort (BPUE) was 27.2 kg/h and 91.1 kg/haul. The highest CPUE values were in the southern zone within 5 nautical miles, at 199.0 kg/h and 617.8 kg/haul. The catch composition by weight was dominated by falso volador (Prionotus stephanophrys, 24.6%) and carajito (Diplectrum conceptione, 21.4%). The most important incidental species were espejo (Selene peruviana, 9.8%), bereche (Larimus pacificus, 9.3%), cachema (Cynoscion analis, 4.0%), chiri (Peprilus medius, 2.9%), lenguado de boca chica (Etropus ectenes, 2.5%), and doncella (Hemanthias peruanus, 2.1%). Discards accounted for 19.1% of the catch; the main discarded resources were hake (Merluccius gayi peruanus, 39.1%), lengüeta (Symphurus sechurae, 10.9%), morena (Muraena clepsidra, 4.9%), pez hojita (Chloroscombrus orqueta, 4.8%), and other species (31.5%, including fish remains and echinoderms). The marine ecosystem impact index was 3.7 (1: unfavorable to 10: favorable). It is therefore a fishing gear that is not friendly to the marine ecosystem and should not be used within the coastal area.
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This can make BEM simpler to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to smaller memory requirements for solving the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
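For orientation, one standard boundary-reduction route for the heat equation (one of several possible BEM formulations; the formulation used in this work may differ) represents the solution by a single-layer heat potential,
$$u(x,t) = \int_0^t \int_{\Gamma} G(x-y,\, t-\tau)\, q(y,\tau)\, \mathrm{d}s_y\, \mathrm{d}\tau, \qquad G(z,t) = (4\pi t)^{-d/2} \exp\!\left(-\frac{|z|^2}{4t}\right),$$
so that imposing the Dirichlet (or Neumann) data on $\Gamma = \partial\Omega$ yields a space-time boundary integral equation for the unknown density $q$; it is this boundary density that is discretised, for example in a wavelet basis.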
Abstract:
This paper is concerned with the lower semicontinuity of attractors for semilinear non-autonomous differential equations in Banach spaces. We require the unperturbed attractor to be given as the union of unstable manifolds of time-dependent hyperbolic solutions, generalizing previous results valid only for gradient-like systems in which the hyperbolic solutions are equilibria. The tools employed are a study of the continuity of the local unstable manifolds of the hyperbolic solutions and results on the continuity of the exponential dichotomy of the linearization around each of these solutions.
Abstract:
In this paper we give general results on the continuity of pullback attractors for nonlinear evolution processes. We then revisit results of [D. Li, P.E. Kloeden, Equi-attraction and the continuous dependence of pullback attractors on parameters, Stoch. Dyn. 4 (3) (2004) 373-384] which show that, under certain conditions, continuity is equivalent to uniformity of attraction over a range of parameters ("equi-attraction"): we are able to simplify their proofs and weaken the conditions required for this equivalence to hold. Generalizing a classical autonomous result [A.V. Babin, M.I. Vishik, Attractors of Evolution Equations, North Holland, Amsterdam, 1992], we give bounds on the rate of convergence of attractors when the family is uniformly exponentially attracting. To apply these results in a more concrete situation we show that a non-autonomous regular perturbation of a gradient-like system produces a family of pullback attractors that are uniformly exponentially attracting: these attractors are therefore continuous, and we can give an explicit bound on the distance between members of this family. (C) 2009 Elsevier Ltd. All rights reserved.
Continuity of the dynamics in a localized large diffusion problem with nonlinear boundary conditions
Abstract:
This paper is concerned with singular perturbations in parabolic problems subjected to nonlinear Neumann boundary conditions. We consider the case for which the diffusion coefficient blows up in a subregion $\Omega_0$ which is interior to the physical domain $\Omega \subset \mathbb{R}^n$. We prove, under natural assumptions, that the associated attractors behave continuously as the diffusion coefficient blows up locally uniformly in $\Omega_0$ and converges uniformly to a continuous and positive function in $\Omega_1 = \overline{\Omega} \setminus \Omega_0$. (C) 2009 Elsevier Inc. All rights reserved.
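To fix ideas, a prototypical problem of the type described (an illustrative form only; the precise equation studied in the paper may differ) is
$$u_t - \operatorname{div}\bigl(p_\epsilon(x)\nabla u\bigr) + u = f(u) \ \text{ in } \Omega, \qquad p_\epsilon(x)\,\frac{\partial u}{\partial \vec{n}} = g(u) \ \text{ on } \partial\Omega,$$
where the family of diffusion coefficients $p_\epsilon$ blows up in $\Omega_0$ and converges to a continuous positive limit in $\Omega_1$ as $\epsilon \to 0$, and the question is whether the attractors of these problems converge to the attractor of the limiting localized large diffusion problem.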
Abstract:
In this paper we conclude the analysis started in [J.M. Arrieta, A.N. Carvalho, G. Lozada-Cruz, Dynamics in dumbbell domains I. Continuity of the set of equilibria, J. Differential Equations 231 (2006) 551-597] and continued in [J.M. Arrieta, A.N. Carvalho, G. Lozada-Cruz, Dynamics in dumbbell domains II. The limiting problem, J. Differential Equations 247 (1) (2009) 174-202 (this issue)] concerning the behavior of the asymptotic dynamics of a dissipative reaction-diffusion equation in a dumbbell domain as the channel shrinks to a line segment. In the first of these papers we established an appropriate functional analytic framework to address this problem and showed the continuity of the set of equilibria; in the second we analyzed the behavior of the limiting problem. In this paper we show that the attractors are upper semicontinuous and, moreover, if all equilibria of the limiting problem are hyperbolic, then they are also lower semicontinuous and therefore continuous. The continuity is obtained in the $L^p$ and $H^1$ norms. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
We consider a one-dimensional reaction-diffusion equation with nonlinear boundary conditions of logistic type with delay. We deal with non-negative solutions and analyze the stability behavior of its unique positive equilibrium solution, which is given by the constant function $u \equiv 1$. We show that if the delay is small, this equilibrium solution is asymptotically stable, as in the case without delay. We also show that, as the delay goes to infinity, this equilibrium becomes unstable and undergoes a cascade of Hopf bifurcations. The structure of this cascade depends on the parameters appearing in the equation. This equation exhibits dynamical behavior that differs from the case where the nonlinearity with delay is in the interior of the domain. (C) 2009 Elsevier Inc. All rights reserved.
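In prototype form (an illustrative writing of the problem; the exact scaling and nonlinearity in the paper may differ), this is a heat equation on an interval with a delayed logistic flux condition at the boundary,
$$u_t = u_{xx} \ \text{ in } (0,1) \times (0,\infty), \qquad \frac{\partial u}{\partial \vec{n}} = u\,\bigl(1 - u(\cdot, t - \tau)\bigr) \ \text{ on } \{0,1\},$$
for which the constant function $u \equiv 1$ is the unique positive equilibrium; the results concern how its stability changes as the delay $\tau$ grows.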
Abstract:
We study a one-dimensional nonlinear reaction-diffusion system coupled on the boundary. Such a system arises from modeling the temperature distribution in two bars of the same length, joined together, with different diffusion coefficients. We prove the transversality property of unstable and stable manifolds assuming that all equilibrium points are hyperbolic. To this end, we write the system as an equation with a discontinuous diffusion coefficient. We then study the nonincreasing property of the number of zeros of a linearized nonautonomous equation as well as the Sturm-Liouville properties of the solutions of a linear elliptic problem. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
In this work, we are interested in the dynamic behavior of a parabolic problem with nonlinear boundary conditions and delay in the boundary. We construct a reaction-diffusion problem with delay in the interior, where the reaction term is concentrated in a neighborhood of the boundary and this neighborhood shrinks to the boundary as a parameter epsilon goes to zero. We analyze the limit of the solutions of this concentrated problem and prove that these solutions converge, in certain continuous function spaces, to the unique solution of the parabolic problem with delay in the boundary. This convergence result allows us to approximate the solution of equations with delay acting on the boundary by solutions of equations with delay acting in the interior, and it may help to analyze the dynamic behavior of delay equations when the delay is at the boundary. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
2000 Mathematics Subject Classification: 35K55, 35K60.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not consistent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, a new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected by using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
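To illustrate the learning loop sketched above, here is a minimal, hedged toy in which the Bayesian network is simplified to independent per-step rule distributions (so the 'counting' estimation mentioned above is literal), and rules are picked by roulette-wheel selection; the fitness function and all parameters are made-up stand-ins, not the nurse-scheduling objective of [1]:

# Toy sketch: rule strings are scored, the per-step rule distribution is
# re-estimated by counting over the best strings, and new strings are sampled.
# A full BOA would learn a Bayesian network across steps; here the steps are
# treated independently for brevity.
import random

N_STEPS, N_RULES, POP, ELITE = 10, 4, 30, 10

def fitness(rule_string):          # hypothetical stand-in objective
    return sum(1.0 for step, r in enumerate(rule_string) if r == step % N_RULES)

def roulette(weights):             # roulette-wheel selection of one rule index
    total = sum(weights)
    pick, acc = random.uniform(0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if pick <= acc:
            return i
    return len(weights) - 1

# Start from a uniform distribution over rules at every construction step.
probs = [[1.0] * N_RULES for _ in range(N_STEPS)]
for generation in range(20):
    population = [[roulette(probs[s]) for s in range(N_STEPS)] for _ in range(POP)]
    best = sorted(population, key=fitness, reverse=True)[:ELITE]
    # "Counting": re-estimate each step's rule probabilities from the elite strings.
    for s in range(N_STEPS):
        counts = [1.0] * N_RULES                   # Laplace smoothing
        for string in best:
            counts[string[s]] += 1.0
        probs[s] = counts
print("best string:", max(population, key=fitness))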