961 results for "Set covering theory"
Abstract:
This work presents a method to analyze characteristics of a set of genes that may influence a certain anomaly, such as a particular type of cancer. A measure is proposed with the objective of diagnosing individuals with respect to the anomaly under study, and some characteristics of the genes are analyzed. Maximum likelihood equations for general and particular cases are presented.
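The abstract does not specify the underlying statistical model, so as a generic textbook illustration only (not the paper's proposed measure), the simplest diagnostic case with one Bernoulli indicator per individual gives the following maximum likelihood equations:

```latex
% Generic illustration, not the paper's measure: for n individuals with
% binary diagnostic indicators x_1, ..., x_n ~ Bernoulli(p), the
% log-likelihood and the resulting maximum likelihood estimate are
\[
  \ell(p) = \sum_{i=1}^{n} \left[ x_i \ln p + (1 - x_i) \ln(1 - p) \right],
  \qquad
  \hat{p}_{\mathrm{ML}} = \frac{1}{n} \sum_{i=1}^{n} x_i .
\]
```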
Abstract:
Garrett Hardin's theory, entitled "The Tragedy of the Commons," presents privatization and government control as the way to avoid the depletion of natural resources. However, other authors have shown that resource users can develop efficient forms of management, combining human use with nature conservation. This thesis analyzes the use of common-pool resources in Conservation Units and Sustainable-Use Rural Settlements located in the Purus-Madeira interfluve, in the southern region of the State of Amazonas. The question guiding the research hypothesis was: given the specificities of the Amazon and the rules imposed by environmental and agrarian policies in the region, which conditions are necessary and sufficient for good performance in the use of common-pool resources? The analysis combined three methods: the comparative method Qualitative Comparative Analysis (QCA), the Institutional Analysis and Development (IAD) Framework, and fuzzy logic. Operationally, socioeconomic, productive, environmental and institutional aspects were treated as independent variables (X), on the assumption that government programs aimed at these Units should improve these indicators, which in turn would be reflected in good performance in the use of common-pool resources (dependent variable Y) under this institutional design. The results confirmed the hypotheses, showing that good performance in the use of common-pool resources, as prescribed by sustainability criteria, can only be achieved through a combination of satisfactory performance in the socioeconomic, productive, institutional and environmental variables, these variables being individually necessary and jointly sufficient for the phenomenon to occur.
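Since the thesis combines QCA with fuzzy logic, the standard fuzzy-set consistency measures for testing necessity and sufficiency (Ragin's formulation) may help orient the reader; the thesis's exact operationalization may differ.

```latex
% Standard fuzzy-set QCA consistency measures: x_i and y_i are the
% membership scores of case i in condition X and outcome Y.
\[
  \mathrm{Cons}_{\mathrm{suff}}(X \Rightarrow Y)
    = \frac{\sum_i \min(x_i, y_i)}{\sum_i x_i},
  \qquad
  \mathrm{Cons}_{\mathrm{nec}}(X \Leftarrow Y)
    = \frac{\sum_i \min(x_i, y_i)}{\sum_i y_i}
\]
```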
Abstract:
Some problems of the Calculus of Variations do not have solutions in the class of classical continuous and smooth arcs. This suggests the need for a relaxation or extension of the problem that ensures the existence of a solution in some enlarged class of arcs. This work aims at developing an extension for a more general optimal control problem with nonlinear control dynamics, in which the control function takes values in some closed, but not necessarily bounded, set. To achieve this goal, we exploit the approach of R.V. Gamkrelidze based on generalized controls, but related to discontinuous arcs. This leads to the notion of a generalized impulsive control. The proposed extension links various approaches to the extension problem found in the literature.
Abstract:
Starting induction motors on isolated or weak systems is a highly dynamic process that can cause motor and load damage as well as electrical network fluctuations. Mechanical damage is associated with the high starting current drawn by a ramping induction motor. To compensate for the load increase, the voltage of the electrical system decreases. Different starting methods can be applied to the electrical system to reduce these and other starting issues. The purpose of this thesis is to build accurate and usable simulation models that can aid the designer in choosing an appropriate motor starting method. The specific case addressed is one in which a diesel-generator set serves as the electrical supply source for the induction motor. Equivalent models of the most commonly used starting methods are simulated and compared with each other. The main contribution of this thesis is that the motor's dynamic impedance is continuously calculated and fed back to the generator model to simulate the coupling of the electrical system. The comparative analysis given by the simulations has shown reasonably similar characteristics to other comparative studies. The diesel-generator and induction motor simulations have shown good results and can adequately demonstrate the dynamics for testing and comparing the starting methods. Further work is suggested to refine the equivalent impedance presented in this thesis.
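As a rough illustration of the voltage dip described above, and not the thesis's simulation model, the sketch below treats the diesel-generator as an EMF behind an internal impedance and the starting motor as its locked-rotor impedance; all numeric values are illustrative placeholders.

```python
# Minimal sketch: voltage dip at motor terminals during direct-on-line
# starting. The source (diesel-generator plus cable) is an EMF behind an
# internal impedance, and the standstill motor is its locked-rotor
# impedance. All per-unit values are placeholders, not thesis data.

E_source = 1.0                     # per-unit internal EMF of the generator
Z_source = complex(0.02, 0.10)     # per-unit source impedance (weak system)
Z_motor = complex(0.04, 0.18)      # per-unit locked-rotor motor impedance

# Voltage divider: terminal voltage and current while the motor is at rest.
V_terminal = E_source * Z_motor / (Z_source + Z_motor)
I_start = E_source / (Z_source + Z_motor)

print(f"terminal voltage during start: {abs(V_terminal):.3f} pu")
print(f"starting current:              {abs(I_start):.3f} pu")
```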
Abstract:
Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
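For orientation, the standard formulation of Feldman's relaxation from the LP-decoding literature is sketched below; gamma denotes the vector of log-likelihood ratios computed from the channel output.

```latex
% Feldman's LP decoder (standard formulation): relax maximum-likelihood
% decoding over the code C to its fundamental polytope P, the
% intersection of the local codeword polytopes of the parity checks.
\[
  \hat{x}_{\mathrm{ML}} = \operatorname*{arg\,min}_{x \in C} \gamma^{\top} x
  \qquad \longrightarrow \qquad
  \hat{x}_{\mathrm{LP}} = \operatorname*{arg\,min}_{x \in P} \gamma^{\top} x,
  \qquad C \subseteq P \subseteq [0,1]^{n},
\]
% The nonintegral vertices of P are exactly the pseudocodewords
% mentioned in the abstract above.
```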
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
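DFT models of this kind are built on dynamic neural field dynamics; the Amari-style field equation below is the standard form such models use, given here for orientation rather than as the paper's exact parameterization.

```latex
% Amari-style dynamic neural field equation (the standard substrate of
% DFT models): field activation u(x,t) relaxes toward resting level h,
% driven by input S(x,t) and lateral interactions w convolved with the
% sigmoided field output f(u).
\[
  \tau \, \dot{u}(x,t) = -u(x,t) + h + S(x,t)
    + \int w(x - x') \, f\bigl(u(x',t)\bigr) \, dx'
\]
```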
Abstract:
Reasoning and change over inconsistent knowledge bases (KBs) is of utmost relevance in areas like medicine and law. Argumentation may bring the possibility to cope with both problems. Firstly, by constructing an argumentation framework (AF) from the inconsistent KB, we can decide whether to accept or reject a certain claim through the interplay among arguments and counterarguments. Secondly, by handling the dynamics of arguments in the AF, we can deal with the dynamics of knowledge in the underlying inconsistent KB. The dynamics of arguments has recently attracted attention, and although some approaches have been proposed, a full axiomatization within the theory of belief revision was still missing. A revision arises when we want the argumentation semantics to accept an argument. Argument Theory Change (ATC) encloses the revision operators that modify the AF by analyzing dialectical trees (arguments as nodes and attacks as edges) as the adopted argumentation semantics. In this article, we present a simple approach to ATC based on propositional KBs. This makes it possible to manage change of inconsistent KBs by relying upon classical belief revision, although, contrary to it, consistency restoration of the KB is avoided. Subsequently, a set of rationality postulates adapted to argumentation is given, and finally, the proposed model of change is related to the postulates through the corresponding representation theorem. Though we focus on propositional logic, the results can easily be extended to more expressive formalisms such as first-order logic and description logics, to handle the evolution of ontologies.
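To make the dialectical-tree machinery concrete, here is a minimal marking procedure of the kind ATC analyzes: an argument is undefeated exactly when all of its attackers are defeated. The tree encoding is our own illustration, not the article's implementation.

```python
# Minimal sketch of dialectical-tree marking: arguments are nodes and
# attacks are edges from a node to its children. A node is UNDEFEATED
# ('U') iff every child (attacker) is DEFEATED ('D'); a leaf is 'U'.
# The tuple encoding is an illustrative choice, not from the article.

def mark(tree):
    """tree = (argument_label, [subtrees of its attackers]).
    Returns 'U' (undefeated) or 'D' (defeated) for the root."""
    _, attackers = tree
    if all(mark(child) == 'D' for child in attackers):
        return 'U'
    return 'D'

# Example: argument A attacked by B and C; B is in turn attacked by D.
tree = ('A', [('B', [('D', [])]), ('C', [])])
print(mark(tree))  # 'D' -- C is an undefeated attacker of A
```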
Abstract:
This work evaluates the efficiency of economical levels of theory for the prediction of ³J(HH) spin-spin coupling constants, to be used when robust electronic structure methods are prohibitive. To that purpose, DFT methods like mPW1PW91, B3LYP and PBEPBE were used to obtain coupling constants for a test set whose coupling constants are well known. Satisfactory results were obtained in most cases, with mPW1PW91/6-31G(d,p)//B3LYP/6-31G(d,p) leading the set. In a second step, B3LYP was replaced by the semiempirical methods PM6 and RM1 in the geometry optimizations. Coupling constants calculated with these latter structures were at least as good as the ones obtained by pure DFT methods. This is a promising result, because some of the main objectives of computational chemistry (low computational cost and time, allied to high performance and precision) were attained together.
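For context, vicinal ³J(HH) couplings of the kind benchmarked above are classically related to the H-C-C-H dihedral angle by the Karplus equation, shown below in its textbook form with empirically fitted coefficients A, B and C.

```latex
% Karplus relation: a vicinal 3J(HH) coupling depends on the H-C-C-H
% dihedral angle phi through empirically fitted coefficients A, B, C.
\[
  {}^{3}J_{\mathrm{HH}}(\phi) = A \cos^{2}\phi + B \cos\phi + C
\]
```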
Abstract:
This paper presents a study of the facial feature detection performance achieved using the Viola-Jones framework. A set of classifiers using two different focuses to gather the training samples is created and tested on four different datasets covering a wide range of possibilities. The results achieved should help researchers choose the classifier that best fits their demands.
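As a usage illustration only (the paper trains its own classifiers), OpenCV ships pretrained Viola-Jones cascades that can be run as sketched below; the cascade file is one of OpenCV's bundled models and photo.jpg is a placeholder path.

```python
# Minimal sketch: running a pretrained Viola-Jones cascade with OpenCV.
# Uses one of OpenCV's bundled Haar cascades; 'photo.jpg' is a placeholder.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection rate against false positives.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_detected.jpg", image)
```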
Abstract:
This thesis is focused on the financial model for interest rates called the LIBOR Market Model. In the appendices, we provide the necessary mathematical theory. In the inner chapters, we first define the main interest rates and financial instruments relevant to interest rate models; then we set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR rates. This model, too, is justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. Therefore, we derive various analytical approximation formulae to price swaptions in the LIBOR market model, and we explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to both the cap and swaption markets, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
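For reference, the Black caplet formula that the LIBOR market model recovers is the standard one below, in the usual notation.

```latex
% Black's caplet formula (standard form): F is the forward LIBOR rate for
% the period [T, T+delta], K the strike, sigma the Black volatility,
% P(0, T+delta) the discount bond, and Phi the standard normal CDF.
\[
  \mathrm{Cpl}(0) = \delta \, P(0, T{+}\delta)
    \bigl[ F \, \Phi(d_1) - K \, \Phi(d_2) \bigr],
  \qquad
  d_{1,2} = \frac{\ln(F/K) \pm \tfrac{1}{2}\sigma^{2} T}{\sigma \sqrt{T}} .
\]
```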
Abstract:
The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. Because extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. There is a common belief that only if we solve the interoperability problem can we have a real Internet of Things. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application protocol level. Moreover, the primitives are network- and transport-independent and make no assumption in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data: after an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Finally, this dissertation presents our approach to increasing the throughput of filtering semantic data by a factor of ten.
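To illustrate what a protocol-independent primitive layer could look like, here is our own sketch (not the Ponte project's actual API) of publish/subscribe-style primitives onto which concrete protocols such as MQTT or CoAP could be mapped.

```python
# Hypothetical sketch of a protocol-independent primitive layer for IoT
# messaging: concrete protocols (MQTT, CoAP, ...) would be adapted onto
# these primitives. This is our illustration, not the Ponte project's API.
from collections import defaultdict
from typing import Callable

class PrimitiveBus:
    def __init__(self):
        self._subscribers = defaultdict(list)
        self._retained = {}          # last payload per topic, for 'get'

    def publish(self, topic: str, payload: bytes) -> None:
        self._retained[topic] = payload
        for handler in self._subscribers[topic]:
            handler(topic, payload)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def get(self, topic: str) -> bytes | None:
        """Request/response-style read, as a CoAP GET might map onto."""
        return self._retained.get(topic)

bus = PrimitiveBus()
bus.subscribe("home/temp", lambda t, p: print(t, p))
bus.publish("home/temp", b"21.5")    # prints: home/temp b'21.5'
print(bus.get("home/temp"))          # b'21.5'
```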
Abstract:
This dissertation models the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In chapter 1, we give a brief summary of Matching Theory. We present the first examples in Matching history along with the most influential papers and mechanisms. In chapter 2, we propose our mechanism. In the real-life application, that is, in Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is what we call a middle mechanism. On one subdomain, this mechanism coincides with the original basic mechanism; on the other partition, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process succeeds in finding a stable matching whenever stability exists. We also show that our mechanism easily and simply tells us if a profile lacks stability, using purified orderings. Finally, we show a method to find all the stable matchings when multiple stable matchings exist. The method is simply to run the mechanism for each of the top agents in the social preference.
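Since the abstract compares its mechanism with Gale and Shapley's algorithm, a compact reference implementation of proposer-optimal deferred acceptance is sketched below; it is the standard algorithm, not the thesis's semi-centralized mechanism.

```python
# Standard Gale-Shapley deferred acceptance (proposer-optimal), included
# as a reference point for the mechanism discussed above; it is not the
# thesis's semi-centralized mechanism.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """proposer_prefs / receiver_prefs: dict mapping each agent to an
    ordered list of acceptable partners, most preferred first."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    match = {}                       # receiver -> currently held proposer
    next_choice = {p: 0 for p in proposer_prefs}
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                 # p has exhausted their list
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])    # r trades up; old partner is free again
            match[r] = p
        else:
            free.append(p)           # r rejects p; p proposes again later
    return match

students = {"s1": ["c1", "c2"], "s2": ["c1", "c2"]}
colleges = {"c1": ["s2", "s1"], "c2": ["s1", "s2"]}
print(deferred_acceptance(students, colleges))  # {'c1': 's2', 'c2': 's1'}
```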