886 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
Tackling a problem mostly requires an ability to read it, conceptualize it, represent it, and define it, and then to apply the necessary mechanisms to solve it. This may sound self-evident, except when the problem to be tackled happens to be “complex,” “ill-structured,” and/or “wicked.” Corruption is one of those kinds of problems. Both in its global and national manifestations it is ill-structured. Where it is structural in nature, endemic, and pervasive, it is perhaps even wicked. Qualities of this kind impose modest expectations regarding the possibility of any definitive solution to this insidious phenomenon. If so, it may not suffice to address the problem of corruption using existing categories of law and/or good governance, which overlook the “long-term memory” of the collective and the culture-specific dimensions of the subject. Such socio-historical conditions require focusing on the interactive and self-reproducing networks of corruption and attempting to ‘subvert’ that phenomenon’s entire matrix. Concepts such as collective responsibility, collective punishment, and sanctions are introduced as relevant categories in the structural, as well as behavioral, subversion of some of the most prevalent aspects of corruption. These concepts may help in the evolution of a new perspective on corruption-fighting strategies.
Abstract:
In this paper we show how to extend clausal temporal resolution to the ground eventuality fragment of monodic first-order temporal logic, which was recently introduced by Hodkinson, Wolter and Zakharyaschev. While a finite Hilbert-style axiomatization of complete monodic first-order temporal logic was developed by Wolter and Zakharyaschev, we propose a temporal resolution-based proof system which reduces the satisfiability problem for ground eventuality monodic first-order temporal formulae to the satisfiability problem for formulae of classical first-order logic.
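For orientation, the monodic fragment restricts every subformula whose outermost operator is temporal to at most one free variable; a standard illustration (our example, not taken from the paper):

```latex
% Monodicity: every temporal subformula has at most one free variable.
\[
  \forall x\, \Box\, \exists y\, P(x,y) \quad \text{(monodic)}
  \qquad\qquad
  \forall x\, \forall y\, \Box\, P(x,y) \quad \text{(not monodic)}
\]
% The second formula fails because \Box P(x,y) has two free variables.
```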
Abstract:
This note provides necessary and sufficient conditions for some specific multidimensional consumer’s surplus welfare measures to be well posed (path independent). We motivate the problem by investigating partial-equilibrium measures of the welfare costs of inflation. The results can also be used for checking path independence of alternative definitions of Divisia indexes of monetary services. Consumer theory classically approaches the integrability problem by considering compensated demands, homothetic preferences, or quasi-linear utility functions. Here, instead, we consider demands for monetary assets generated from a shopping-time perspective. Paralleling the above-mentioned procedure of finding special classes of utility functions that satisfy the integrability conditions, we try to infer what particular properties of the transacting technology could assure path independence of multidimensional welfare measures. We show that the integrability conditions are satisfied if and only if the transacting technology is blockwise weakly separable. We use two examples to clarify the point.
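For reference, the well-posedness requirement mentioned here is the classical integrability (path-independence) condition; the notation below, a demand vector x(p) integrated along a price path Γ, is generic rather than the note's own:

```latex
% Welfare measure as a line integral of the demand vector x(p) along a
% price path \Gamma (generic notation):
\[
  W \;=\; \int_{\Gamma} \sum_{i=1}^{n} x_i(p)\, \mathrm{d}p_i ,
\]
% which is path independent if and only if the Jacobian of x is symmetric:
\[
  \frac{\partial x_i}{\partial p_j} \;=\; \frac{\partial x_j}{\partial p_i}
  \qquad \text{for all } i \neq j .
\]
```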
Abstract:
The control of the spread of dengue fever by the introduction of the intracellular parasitic bacterium Wolbachia into populations of the vector Aedes aegypti is presently one of the most promising tools for eliminating dengue, in the absence of an efficient vaccine. The success of this operation requires careful local planning to determine the adequate number of mosquitoes carrying the Wolbachia parasite that need to be introduced into the natural population. The latter are expected to eventually replace the Wolbachia-free population and guarantee permanent protection against the transmission of dengue to humans. In this paper, we propose and analyze a model describing the fundamental aspects of the competition between mosquitoes carrying Wolbachia and mosquitoes free of the parasite. We then introduce a simple feedback control law to synthesize an introduction protocol, and prove that the population is guaranteed to converge to a stable equilibrium where all mosquitoes carry Wolbachia. The techniques are based on the theory of monotone control systems, following Angeli and Sontag. Due to bistability, the considered input-output system has multivalued static characteristics, but the existing results are unable to prove almost-global stabilization, and an ad hoc analysis has to be conducted.
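The abstract does not reproduce the model, so the following is only a minimal simulation sketch of the mechanism it describes: two-population competition with cytoplasmic incompatibility plus a hypothetical proportional feedback release law u = k·n_u. All equations and parameter values are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (NOT the paper's model): Wolbachia-infected (n_i) vs.
# uninfected (n_u) mosquitoes with cytoplasmic incompatibility and a
# hypothetical proportional feedback release law. Values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

b_u, b_i = 1.0, 0.9    # birth rates: Wolbachia imposes a small fitness cost
d_u, d_i = 0.20, 0.25  # death rates
K = 1000.0             # carrying capacity
k = 0.5                # feedback gain of the release law

def rhs(t, n):
    n_u, n_i = max(n[0], 0.0), max(n[1], 0.0)
    N = n_u + n_i
    comp = max(1.0 - N / K, 0.0)        # logistic competition factor
    frac_u = n_u / N if N > 0 else 0.0  # chance of an uninfected mate
    u = k * n_u                         # release while uninfected persist
    # cytoplasmic incompatibility: viable uninfected offspring arise only
    # from uninfected x uninfected matings
    dn_u = b_u * n_u * frac_u * comp - d_u * n_u
    dn_i = b_i * n_i * comp - d_i * n_i + u
    return [dn_u, dn_i]

sol = solve_ivp(rhs, (0.0, 200.0), [800.0, 50.0])
print(f"uninfected: {sol.y[0, -1]:.1f}, infected: {sol.y[1, -1]:.1f}")
```

With these illustrative values the trajectory converges to the full-infection equilibrium, mirroring the convergence result stated in the abstract.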
Abstract:
Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the search direction should be diversified. Accordingly, this work proposes the use of a reinforcement learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in the construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and it also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a given number of generations, whenever the diversity rate of the population falls below a certain limit L, to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm: the Q-learning algorithm receives an additional update to its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional implementations of the GRASP metaheuristic and the Genetic Algorithm against those obtained with the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
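The construction-phase idea is concrete enough to sketch. Below is an illustrative epsilon-greedy Q-learning tour constructor for a small random symmetric TSP, standing in for GRASP's greedy-random constructor; the instance, the reward definition (negative edge length), and all hyperparameters are our assumptions, not the thesis's exact formulation.

```python
# Illustrative sketch: epsilon-greedy Q-learning tour construction for a
# random symmetric TSP, in place of GRASP's greedy-random constructor.
import random
import numpy as np

rng = np.random.default_rng(0)
n = 10
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration
Q = np.zeros((n, n))               # Q[s, a]: learned value of moving s -> a

def build_tour():
    """Construct one tour, updating Q along the way (one learning episode)."""
    start = random.randrange(n)
    tour, visited = [start], {start}
    while len(tour) < n:
        s = tour[-1]
        candidates = [a for a in range(n) if a not in visited]
        if random.random() < eps:                      # explore
            a = random.choice(candidates)
        else:                                          # exploit estimates
            a = max(candidates, key=lambda c: Q[s, c])
        if len(tour) < n - 1:                          # cities remain after a
            future = max(Q[a, c] for c in range(n)
                         if c not in visited and c != a)
        else:
            future = 0.0
        # one-step Q-learning update with reward = -edge length
        Q[s, a] += alpha * (-dist[s, a] + gamma * future - Q[s, a])
        tour.append(a)
        visited.add(a)
    return tour

def tour_length(t):
    return sum(dist[t[i], t[(i + 1) % n]] for i in range(n))

tours = [build_tour() for _ in range(500)]
print("best constructed tour length:", round(min(map(tour_length, tours)), 3))
```

In the hybrid GRASP, each such constructed tour would then seed the local search phase, while the Q-matrix persists across iterations as the adaptive memory the abstract describes.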
Abstract:
According to World Health Organization (WHO) estimates for the year 2020, approximately 1.5 million people will commit suicide, and at least 10 times that many will make an attempt. This paper offers a brief overview of the current state of the epidemiology of suicide, a burgeoning public health problem. The information provided is based in large measure on reports of suicide mortality from 130 of 193 countries. In order to contextualize these data, this paper explores the contribution of both individual and sociocultural factors that influence suicidal behavior, from which much has been learned. Outlining the history of attempts by international and national organizations, such as WHO, the United Nations, member states of the European Community, and other countries, to regularize identification and suicide-reporting procedures, this paper also demonstrates that serious knowledge gaps remain. Minimal requirements for successful evidence-based interventions are presented.
Abstract:
We investigate and solve, in the context of general relativity, the apparent paradox which appears when bodies floating in a background fluid are set in relativistic motion. Suppose some macroscopic body, say a submarine, is designed to lie just in equilibrium when it rests totally immersed in a certain background fluid. The puzzle arises when different observers are asked to describe what is expected to happen when the submarine is given some high velocity parallel to the direction of the fluid surface. On the one hand, according to observers at rest with the fluid, the submarine would contract and thus sink as a consequence of the density increase. On the other hand, mariners at rest with the submarine, using an analogous reasoning for the fluid elements, would reach the opposite conclusion. The general relativistic extension of Archimedes' law for moving bodies shows that the submarine sinks. As an extra bonus, this problem suggests a new gedankenexperiment for the generalized second law of thermodynamics.
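The kinematic ingredient behind the paradox is textbook special relativity rather than the paper's full general-relativistic analysis: Lorentz contraction shrinks the moving body's volume, raising its rest-mass density as measured in the fluid frame:

```latex
% Textbook special-relativistic kinematics (illustrative only, not the
% paper's general-relativistic treatment): a body of proper volume V_0 and
% proper rest-mass density \rho_0 moving with speed v has, in the fluid frame,
\[
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
  V = \frac{V_0}{\gamma}, \qquad
  \rho = \gamma\, \rho_0 ,
\]
% and the mariners apply the same reasoning to the fluid elements instead.
```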
Abstract:
Traditional cutoff regularization schemes of the Nambu-Jona-Lasinio model limit the applicability of the model to energy-momentum scales much below the value of the regularizing cutoff. In particular, the model cannot be used to study quark matter with Fermi momenta larger than the cutoff. In the present work, an extension of the model to high temperatures and densities recently proposed by Casalbuoni, Gatto, Nardulli, and Ruggieri is used in connection with an implicit regularization scheme. This is done by making use of scaling relations of the divergent one-loop integrals that relate these integrals at different energy-momentum scales. Fixing the pion decay constant at the chiral symmetry breaking scale in the vacuum, the scaling relations predict a running coupling constant that decreases as the regularization scale increases, implementing in a schematic way the property of asymptotic freedom of quantum chromodynamics. If the regularization scale is allowed to increase with density and temperature, the coupling will decrease with density and temperature, extending in this way the applicability of the model to high densities and temperatures. These results are obtained without specifying an explicit regularization. As an illustration of the formalism, numerical results are obtained for the finite density and finite temperature quark condensate and applied to the problem of color superconductivity at high quark densities and finite temperature.
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that which causes economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions move the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that penalizes quadratic deviations from this level. The first problem was solved by applying Pontryagin's Maximum Principle, and dynamic programming was used to solve the second optimal pest control problem.
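The quadratic-deviation criterion mentioned in the second part has the generic form below; the notation (target equilibrium x*, control u, weights Q and R) is ours and not necessarily the paper's:

```latex
% Generic quadratic-deviation criterion: x^* is the target equilibrium below
% the economic injury level, u the control effort, Q \succeq 0 and R \succ 0
% weighting matrices (our notation).
\[
  \min_{u}\; J(u) = \int_{0}^{T}
  \Big[ \big(x(t)-x^{*}\big)^{\top} Q\, \big(x(t)-x^{*}\big)
        + u(t)^{\top} R\, u(t) \Big]\, \mathrm{d}t .
\]
```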
Abstract:
In this paper we argue that there is no ambiguity between the Pauli-Villars and other methods of regularization in (2+1)-dimensional quantum electrodynamics with respect to dynamical mass generation, provided we properly choose the couplings for the regulators.
Abstract:
Singular perturbation problems in dimension three which are approximations of discontinuous vector fields are studied in this paper. The main result states that the regularization process developed by Sotomayor and Teixeira produces a singular problem for which the discontinuity set is a center manifold. Moreover, the definition of the sliding vector field coincides with the reduced problem of the corresponding singular problem for a class of vector fields.
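For context, the Sotomayor-Teixeira regularization replaces the discontinuous field by a one-parameter family of smooth fields; conventions for the transition function vary between papers, so take this as the standard textbook statement rather than the paper's exact setup:

```latex
% Standard form of the Sotomayor-Teixeira regularization: X and Y are the
% smooth vector fields on either side of the switching manifold h = 0, and
% \varphi is a smooth monotone transition function with \varphi(s) = -1 for
% s <= -1 and \varphi(s) = 1 for s >= 1.
\[
  Z_{\varepsilon}(p) \;=\;
  \frac{1+\varphi\big(h(p)/\varepsilon\big)}{2}\, X(p)
  \;+\;
  \frac{1-\varphi\big(h(p)/\varepsilon\big)}{2}\, Y(p) .
\]
```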
Abstract:
The transmission network planning problem is a non-linear mixed-integer programming problem (NLIMP). Most of the algorithms used to solve this problem rely on a linear programming (LP) subroutine to solve the LP subproblems arising from the planning algorithms. Sometimes the resolution of these LPs represents a major computational effort. A particularity of these LPs is that, at the optimal solution, only some of the inequality constraints are binding. This work transforms the LP into an equivalent problem with only one equality constraint (the power flow equation) and many inequality constraints, and uses a dual simplex algorithm and a relaxation strategy to solve the LPs. The optimisation process starts with only one equality constraint and, at each step, the most violated constraint is added. The logic used is similar to a proposal for electric system operation planning. The results show a higher performance of the algorithm when compared to primal simplex methods.
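The relaxation strategy is easy to sketch: solve the LP with a small active set of inequality constraints, then repeatedly add the most violated remaining constraint and re-solve until feasibility. In the toy below, scipy's linprog (HiGHS) stands in for the dual simplex routine, and the random problem data are illustrative only.

```python
# Illustrative sketch of the relaxation strategy: start from the single
# equality constraint, add the most violated inequality, re-solve, repeat.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_var, m = 5, 40
c = rng.random(n_var)                     # objective: minimize c @ x
A_eq = np.ones((1, n_var))                # single equality constraint,
b_eq = [1.0]                              # playing the "power flow" role
A_ub = -rng.random((m, n_var))            # m ">=" constraints written as "<="
b_ub = -0.3 * rng.random(m)

active = []                               # inequality constraints in the model
for it in range(m):
    res = linprog(c,
                  A_ub=A_ub[active] if active else None,
                  b_ub=b_ub[active] if active else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_var)
    viol = A_ub @ res.x - b_ub            # positive entries are violated
    if active:
        viol[active] = -np.inf            # ignore constraints already added
    worst = int(np.argmax(viol))
    if viol[worst] <= 1e-6:               # everything satisfied: done
        break
    active.append(worst)                  # add most violated, re-solve

print(f"LP solves: {it + 1}, constraints added: {len(active)} of {m}, "
      f"objective: {res.fun:.4f}")
```

The point of the strategy is visible in the counters printed at the end: only the handful of constraints that end up binding ever enter the model, which is why the approach can beat solving the full LP with a primal simplex method.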