245 results for INTRACTABLE EPISTAXIS


Relevance:

10.00%

Publisher:

Abstract:

Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations in which it is impractical or impossible to draw from the transition kernel P. This is the case, for instance, with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory on the stability of Markov chains, we explore a variety of situations in which it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
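To make the setting concrete, here is a minimal Python sketch (an illustration, not the paper's construction) of an approximate kernel P̂: a Metropolis-Hastings step whose log-likelihood is estimated from a random subsample of a massive dataset, so the chain targets only an approximation of π. The model, sizes, and step size are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=100_000)  # a "massive" dataset

def loglik_hat(theta, batch_size=1_000):
    """Unbiased subsample estimate of the full-data Gaussian log-likelihood
    (up to a theta-independent constant)."""
    batch = rng.choice(data, size=batch_size, replace=False)
    return (len(data) / batch_size) * np.sum(-0.5 * (batch - theta) ** 2)

def mh_step_approx(theta, step=0.05):
    """One step of the approximate kernel P-hat; the exact kernel P would
    evaluate the likelihood on all 100,000 points instead."""
    prop = theta + step * rng.normal()
    if np.log(rng.uniform()) < loglik_hat(prop) - loglik_hat(theta):
        return prop
    return theta

theta, chain = 0.0, []
for _ in range(2_000):
    theta = mh_step_approx(theta)
    chain.append(theta)
print("approximate posterior mean:", np.mean(chain[500:]))
```

How far the law of this chain drifts from that of the exact chain is precisely the kind of question the stability theory above is used to quantify.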

Relevance:

10.00%

Publisher:

Abstract:

There is growing recognition that hybrid organizations can play a critical role in tackling intractable global sustainable development challenges. At the same time, acute social, environmental, and economic challenges are opening up “opportunity” spaces for hybrids. Different institutional contexts are also leading to variable hybrid forms linked to the focus of their mission and their profit-oriented status. This article presents a process for identifying, mapping, and building impact indicators based on a study of 20 hybrid organizations in Sub-Saharan Africa.

Relevance:

10.00%

Publisher:

Abstract:

Existing research on the legitimacy of the UN Security Council is conceptual or theoretical, for the most part, as scholars tend to make legitimacy assessments with reference to objective standards. Whether UN member states perceive the Security Council as legitimate or illegitimate has yet to be investigated systematically; nor do we know whether states care primarily about the Council's compliance with its legal mandate, its procedures, or its effectiveness. To address this gap, our article analyzes evaluative statements made by states in UN General Assembly debates on the Security Council, for the period 1991–2009. In making such statements, states confer legitimacy on the Council or withhold legitimacy from it. We conclude the following: First, the Security Council suffers from a legitimacy deficit because negative evaluations of the Council by UN member states far outweigh positive ones. Nevertheless, the Council does not find itself in an intractable legitimacy crisis because it still enjoys a rudimentary degree of legitimacy. Second, the Council's legitimacy deficit results primarily from states' concerns regarding the body's procedural shortcomings. Misgivings as regards shortcomings in performance rank second. Whether or not the Council complies with its legal mandate has failed to attract much attention at all.

Relevance:

10.00%

Publisher:

Abstract:

This chapter offers a fresh critique of the approach taken by the International Court of Justice to the relationship between humanitarian law and human rights law. In so doing, it seeks to move beyond the intractable debates that have dominated this area, offering an original account of the relationship that is firmly grounded in general international law concepts of treaty interpretation.

Relevance:

10.00%

Publisher:

Abstract:

Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that the no-filter recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, subject to the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable to compute. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model outperforms a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is the approach we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
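To fix ideas, here is a minimal Python sketch of the yes-no structure itself, leaving aside the ILP/ADP selection step; the hash construction, sizes, and number of hash functions are illustrative assumptions, not the paper's.

```python
import hashlib

class BloomFilter:
    """A standard Bloom filter over m bits with k hash functions."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, [False] * m

    def _indexes(self, item):
        for i in range(self.k):  # k independent hashes via salting
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores selected false positives."""
    def __init__(self, m_yes, m_no, k=3):
        self.yes, self.no = BloomFilter(m_yes, k), BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # Only safe for items known NOT to be true positives; choosing which
        # ones to include is the optimization problem the paper studies.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no
```

Membership in the combined structure thus requires a hit in the yes-filter and a miss in the no-filter, which is exactly the extra chance of rejecting a false positive described above.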

Relevance:

10.00%

Publisher:

Abstract:

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented. In some cases we observe an advantage in the use of biased weight estimates, and some support for their use is presented, but we advocate caution.
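As a toy illustration of the random-weight idea (a deliberately simplified sketch under invented model assumptions, not the paper's estimators), the importance weight below is an intractable likelihood replaced by an unbiased inner Monte Carlo estimate; the outer evidence estimate then remains unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)

def lik_hat(theta, n_inner=20):
    """Unbiased estimate of an 'intractable' likelihood
    L(theta) = E_u[exp(-(theta - u)^2)] with u ~ N(0, 1)."""
    u = rng.normal(size=n_inner)
    return np.mean(np.exp(-(theta - u) ** 2))

# Importance sampling from the N(0, 1) prior: the evidence is
# Z = E_prior[L(theta)], so averaging the random weights estimates Z.
thetas = rng.normal(size=5_000)
w_hat = np.array([lik_hat(t) for t in thetas])
print("evidence estimate:", w_hat.mean())
```

Replacing the unbiased inner estimate with a cheaper biased one gives the variant for which the abstract reports some support alongside a note of caution.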

Relevance:

10.00%

Publisher:

Abstract:

With a focus on key themes and debates, this article aims to illustrate and assess how the interaction between justice and politics has shaped the international regime and defined the nature of the international agreement signed at COP21 in Paris. The work demonstrates that despite the rise of neo-conservatism and self-interested power politics, questions of global distributive justice remain a central aspect of the international politics of climate change. However, while it is relatively easy to demonstrate that international climate politics is not beyond the reach of moral contestation, assessing exactly how much impact justice has on climate policies and on the broader normative structures of the climate governance regime remains a very difficult task. As the world digests the Paris Agreement, it is vital that scholars of climate justice, academics, and practitioners comprehensively understand the current state of justice issues within the international climate change regime, not least because how these intractable issues of justice are dealt with (or not) will be a crucial factor in determining the effectiveness of the emerging climate regime.

Relevance:

10.00%

Publisher:

Abstract:

Juvenile angiofibroma is a benign fibroangiomatous tumor of relatively rare occurrence, developing most frequently in male adolescents. It is locally aggressive and expansive. The treatment of choice is surgical excision. In this article, the advantages and disadvantages of the surgical technique using the Le Fort I osteotomy are described, and the literature is correlated with 2 case reports.

Relevance:

10.00%

Publisher:

Abstract:

In this article, we consider local influence analysis for the skew-normal linear mixed model (SN-LMM). As the observed-data log-likelihood associated with the SN-LMM is intractable, Cook's well-known approach cannot be applied to obtain measures of local influence. Instead, we develop local influence measures following the approach of Zhu and Lee (2001). This approach is based on the use of an EM-type algorithm and is invariant under reparametrizations. Four specific perturbation schemes are discussed. Results obtained for a simulated data set and a real data set are reported, illustrating the usefulness of the proposed methodology.

Relevance:

10.00%

Publisher:

Abstract:

This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, with enough packages for each customer, using the available resources, and, of course, as efficiently as possible.

Although this problem seems easy to solve with a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times, and truck capacities. This makes it a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the number of calculations grows exponentially, because all constraints have to be checked for each customer, and a solution that is good enough has to be found before the time allowed for the calculation runs out. The first chapter introduces the problem from its basis, the Traveling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize, and why, even though no best algorithm is known for huge numbers of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all packages were known?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters try to give an acceptable solution using two algorithms: the Genetic Algorithm and Simulated Annealing. Both are based on imitating processes from nature and materials science. These algorithms will hardly ever find the best solution to the problem, but in special cases they can give a very good solution within acceptable calculation time. In these chapters the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to their basic implementation. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and compares them to each other.

Finally, after the theory, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, along with the test results obtained. The possible improvements of these algorithms are then discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if that question even exists.
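As a flavour of one of the two algorithms just mentioned, here is a minimal Python sketch of simulated annealing on the routing core alone (a plain travelling-salesman tour with segment-reversal moves, ignoring time windows and capacities); the instance, cooling schedule, and move set are illustrative choices, not the thesis' settings.

```python
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
temp = 1.0
while temp > 1e-3:
    i, j = sorted(random.sample(range(len(tour)), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
    delta = tour_length(cand) - tour_length(tour)
    # Always accept improvements; accept worsenings with probability
    # exp(-delta / temp), which shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        tour = cand
    temp *= 0.999  # geometric cooling schedule
print("final tour length:", round(tour_length(tour), 3))
```

The genetic algorithm replaces this single cooling trajectory with a population of tours recombined and mutated over generations; the thesis compares the two approaches on the same instances.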

Relevance:

10.00%

Publisher:

Abstract:

The traveling salesman problem looks very simple but is an important combinatorial problem. In this thesis I have tried to find the shortest tour in which each city is visited exactly once before returning to the starting city, using a multilevel graph partitioning approach. The traveling salesman problem is itself very difficult, as it belongs to the NP-complete problems, and multilevel graph partitioning also belongs to the NP-complete problems. I have used the k-means partitioning algorithm, which divides the problem into multiple partitions; each partition is solved separately, and its solution is used to improve the overall tour by applying the Lin-Kernighan algorithm to it. Through all this I obtained optimal solutions, which shows that solving the traveling salesman problem through a graph partitioning scheme works well for this NP-problem, and that this intractable problem can be solved within a few minutes.
Keywords: Graph Partitioning Scheme, Traveling Salesman Problem.
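A minimal Python sketch of the partition-and-stitch idea follows: cluster the cities with k-means, build a nearest-neighbour sub-tour in each cluster, and concatenate them (the Lin-Kernighan refinement applied in the thesis is omitted here). Sizes and the cluster count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
cities = rng.random((60, 2))
K = 4

def kmeans(points, k, iters=20):
    """Plain k-means; keeps the old center if a cluster empties out."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def nearest_neighbour_tour(idx):
    """Greedy sub-tour over the city indices in one partition."""
    tour, rest = [idx[0]], set(idx[1:])
    while rest:
        last = cities[tour[-1]]
        nxt = min(rest, key=lambda c: np.linalg.norm(cities[c] - last))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

labels = kmeans(cities, K)
stitched = []
for j in range(K):
    members = list(np.where(labels == j)[0])
    if members:
        stitched += nearest_neighbour_tour(members)
print("stitched tour visits", len(stitched), "of", len(cities), "cities")
```

Because each partition is small, the per-cluster work stays cheap, and the stitched tour gives a Lin-Kernighan refinement step a good starting point.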

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to deal with the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
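As a one-dimensional toy of the underlying device (approximating an intractable random-effect integral by a second-order Taylor expansion of its log-integrand, i.e. a Laplace-type approximation, which is a relative of, not the same as, the paper's two-step pseudo-likelihood estimator):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def neg_log_f(u):
    """-log of a toy integrand: a standard-normal random effect u times a
    Bernoulli-style contribution (3 'successes' through a logistic link).
    Illustrative only."""
    return 0.5 * u ** 2 - 3.0 * np.log(1.0 / (1.0 + np.exp(-u)))

# Second-order Taylor expansion of neg_log_f around its mode u*:
# integral of exp(-neg_log_f) ~= exp(-neg_log_f(u*)) * sqrt(2*pi / f''(u*)).
u_star = minimize_scalar(neg_log_f).x
eps = 1e-4
curv = (neg_log_f(u_star + eps) - 2 * neg_log_f(u_star)
        + neg_log_f(u_star - eps)) / eps ** 2  # numerical second derivative
laplace = np.exp(-neg_log_f(u_star)) * np.sqrt(2 * np.pi / curv)
exact, _ = quad(lambda u: np.exp(-neg_log_f(u)), -10, 10)
print(f"Taylor/Laplace approximation: {laplace:.4f}, exact: {exact:.4f}")
```

The paper's setting is multivariate and embeds the approximation inside a two-step pseudo-likelihood procedure, but the mechanism of replacing an intractable integral by a tractable quadratic expansion is the same.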

Relevance:

10.00%

Publisher:

Abstract:

In modeling complex systems, traditional analytical approaches based on differential equations often result in intractable solutions. To circumvent this problem, agent-based models emerge as a complementary tool, in which the system is modeled from its constituent entities and their interactions. Financial markets are examples of complex systems, and as such the use of agent-based models is applicable. This work implements an artificial financial market composed of market makers, information diffusers, and a set of heterogeneous agents that trade an asset through a continuous double auction mechanism. Several aspects of the simulation are investigated to consolidate its understanding and thus contribute to model design, among them: differences between the continuous and the discrete double auction; implications of varying the spread practiced by the market maker; the effect of budget constraints on the agents; and an analysis of price formation in the submission of offers. To keep the model adherent to the reality of the Brazilian market, an auxiliary technique called inverse simulation is used to calibrate the input parameters, so that the resulting simulated price trajectories are close to historical price series observed in the market.
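To illustrate the trading mechanism at the heart of the simulation, here is a minimal Python sketch of a continuous double auction: an incoming limit order trades immediately against any crossing orders resting in the book, and the remainder rests. Prices, quantities, and the lack of time priority within a price level are illustrative simplifications, not the implemented market's rules.

```python
import heapq

bids, asks = [], []  # bids as (-price, qty) max-heap; asks as (price, qty)

def submit(side, price, qty):
    """Match an incoming limit order on arrival; rest any unfilled remainder."""
    if side == "buy":
        while qty and asks and asks[0][0] <= price:  # crossing asks exist
            ask_price, ask_qty = heapq.heappop(asks)
            traded = min(qty, ask_qty)
            print(f"trade: {traded} @ {ask_price}")
            qty -= traded
            if ask_qty > traded:
                heapq.heappush(asks, (ask_price, ask_qty - traded))
        if qty:
            heapq.heappush(bids, (-price, qty))
    else:
        while qty and bids and -bids[0][0] >= price:  # crossing bids exist
            neg_bid, bid_qty = heapq.heappop(bids)
            traded = min(qty, bid_qty)
            print(f"trade: {traded} @ {-neg_bid}")
            qty -= traded
            if bid_qty > traded:
                heapq.heappush(bids, (neg_bid, bid_qty - traded))
        if qty:
            heapq.heappush(asks, (price, qty))

submit("sell", 10.10, 5)
submit("sell", 10.00, 5)
submit("buy", 10.05, 7)  # trades 5 @ 10.00, then rests 2 @ 10.05
```

In a discrete double auction, by contrast, orders accumulate and clear together at a single price at fixed instants, which is one of the differences the simulation investigates.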

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

Ehrlichiosis is a disease of worldwide importance in veterinary medicine, an infectious disease whose prevalence has increased significantly in recent years in the Brazilian states. For this reason, this study was designed to correlate hematological findings, clinical signs, and PCR, the latter being the most sensitive method. The study evaluated twenty dogs seen at the veterinary hospital of the UNESP Botucatu campus from August 3 to September 28, 2009. Of these animals, 65% were positive in the PCR test. Among the most prominent clinical findings were anorexia in 76.92% (10/13), hepatosplenomegaly in 53.84% (7/13), apathy in 46.15% (6/13), and epistaxis in 38.46% (5/13). Of the thirteen PCR-positive animals, 92.30% (12/13) showed thrombocytopenia (<150,000 platelets) and 61.53% (8/13) were anemic (<5.50 x10). Thus, we conclude that PCR was a good method for the differential detection of canine ehrlichiosis and may be adopted together with the clinical and hematological findings for an accurate diagnosis of the disease.