87 results for Solving-problem algorithms


Relevance:

30.00%

Publisher:

Abstract:

In this work, an efficient third-order non-linear finite difference scheme for adaptively solving hyperbolic systems of one-dimensional conservation laws is developed. The method is based on applying an interpolating wavelet transform to the solution of the differential equation at each time step, generating a multilevel representation of the solution, which is thresholded to produce a sparse point representation. The numerical fluxes obtained by a Lax-Friedrichs flux splitting are evaluated on the sparse grid by an essentially non-oscillatory (ENO) approximation, which chooses the locally smoothest stencil among all possibilities for each point of the sparse grid. The time evolution of the differential operator is performed on this sparse representation by a total variation diminishing (TVD) Runge-Kutta method. Four classical examples of initial value problems for the Euler equations of gas dynamics are accurately solved and their sparse solutions are analyzed with respect to the threshold parameters, confirming the efficiency of the wavelet transform as an adaptive grid generation technique. (C) 2008 IMACS. Published by Elsevier B.V. All rights reserved.
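For orientation, the sketch below illustrates two of the generic building blocks named above, Lax-Friedrichs flux splitting and third-order TVD (Shu-Osher) Runge-Kutta time stepping, on a toy problem: linear advection on a uniform periodic grid with a first-order stencil in place of the paper's third-order ENO reconstruction, and without the wavelet-based grid adaptation. It is a minimal sketch of the time-stepping framework, not the authors' adaptive scheme.

```python
import numpy as np

def lax_friedrichs_split(u, f, alpha):
    """Split the physical flux into f+ and f- with alpha >= max |f'(u)|."""
    return 0.5 * (f(u) + alpha * u), 0.5 * (f(u) - alpha * u)

def spatial_operator(u, f, alpha, dx):
    """Approximate -d f(u)/dx with upwinded split fluxes on a periodic grid.
    (An ENO scheme would choose the locally smoothest stencil here; this toy
    version uses plain first-order upwind differences.)"""
    fp, fm = lax_friedrichs_split(u, f, alpha)
    dfp = fp - np.roll(fp, 1)       # backward difference for the + flux
    dfm = np.roll(fm, -1) - fm      # forward difference for the - flux
    return -(dfp + dfm) / dx

def tvd_rk3_step(u, dt, L):
    """One third-order TVD (SSP) Runge-Kutta step of Shu and Osher."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Toy usage: advect a sine wave with unit speed.
a, nx = 1.0, 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2.0 * np.pi * x)
dt = 0.4 * dx / abs(a)
L = lambda v: spatial_operator(v, lambda w: a * w, abs(a), dx)
for _ in range(100):
    u = tvd_rk3_step(u, dt, L)
```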

Relevance:

20.00%

Publisher:

Abstract:

Biological systems are surprisingly flexible in processing information from the real world. Some biological organisms have a central processing unit called the brain. The human brain consists of 10^11 neurons and performs intelligent processing in both exact and subjective ways. Artificial Intelligence (AI) attempts to bring the heuristics of biological systems into the world of digital computing in several ways, but much remains to be done before this is achieved. Nevertheless, techniques such as artificial neural networks and fuzzy logic have proven effective at solving complex problems using the heuristics of biological systems. Recently, the number of applications of AI methods to animal science systems has increased significantly. The objective of this article is to explain the basic principles of problem solving using heuristics and to demonstrate how AI can be applied to build an expert system for solving problems in animal science.
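As a minimal illustration of the heuristic problem solving discussed above, the sketch below implements simple forward chaining over if-then rules, the core mechanism of a rule-based expert system; the rules and facts are hypothetical and are not taken from the article.

```python
def forward_chain(facts, rules):
    """Fire rules whose premises are all satisfied until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base for a livestock feeding question.
rules = [
    (("low_weight_gain", "adequate_feed_intake"), "suspect_parasites"),
    (("suspect_parasites",), "recommend_fecal_exam"),
]
print(forward_chain({"low_weight_gain", "adequate_feed_intake"}, rules))
```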

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items, and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, little research has explored the multi-plant capacitated lot sizing problem (MPCLSP), and consequently few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over exists in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a previously adopted methodology without setup carry-over. Computational tests showed that incorporating setup carry-over yields a significant improvement in solution value at a small increase in computational time.
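For orientation, the skeleton below shows the generic structure of GRASP with path relinking; the construction, local search, cost, and relinking routines are placeholder callables, not the authors' lot-sizing-specific procedures.

```python
import random

def grasp_with_path_relinking(construct, local_search, cost, path_relink,
                              iterations=100, alpha=0.3, elite_size=5):
    """Generic GRASP loop: randomized greedy construction, local search,
    and path relinking against a pool of elite solutions."""
    elite, best = [], None
    for _ in range(iterations):
        s = local_search(construct(alpha))            # greedy-randomized start + improvement
        if elite:
            guide = random.choice(elite)
            s = min(s, path_relink(s, guide), key=cost)   # best solution along the relinking path
        if best is None or cost(s) < cost(best):
            best = s
        elite = sorted(elite + [s], key=cost)[:elite_size]
    return best
```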

Relevance:

20.00%

Publisher:

Abstract:

Contemporary Western society has extended the period of adolescence, which is no longer regarded merely as a preparation for adult life but has come to have meaning in itself, as a stage of the life cycle. This article seeks to describe how adolescents have been viewed and treated, from antiquity to the present day, drawing on literary and philosophical texts and scientific studies. The bibliographic material on adolescence is characterized by three distinct stages: the description of patterns of behavior, personal adjustment, and relationships; the resolution of real problems through scientific knowledge; and the positive development of the individual, regarding adolescents as the future of humanity.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Work disability is a major consequence of rheumatoid arthritis (RA), associated not only with traditional disease activity variables, but also, more significantly, with demographic, functional, occupational, and societal variables. Recent reports suggest that the use of biologic agents offers potential for reduced work disability rates, but the conclusions are based on surrogate disease activity measures derived from studies primarily from Western countries. Methods: The Quantitative Standard Monitoring of Patients with RA (QUEST-RA) multinational database of 8,039 patients in 86 sites in 32 countries, 16 high gross domestic product (GDP) countries (>24K US dollars (USD) per capita) and 16 low-GDP countries (<11K USD), was analyzed for work and disability status at onset and over the course of RA, and for the clinical status of patients who continued working or had stopped working in high-GDP versus low-GDP countries according to all RA Core Data Set measures. Associations of work disability status with RA Core Data Set variables and indices were analyzed using descriptive statistics and regression analyses. Results: At the time of first symptoms, 86% of men (range 57%-100% among countries) and 64% (19%-87%) of women <65 years were working. More than one third (37%) of these patients reported subsequent work disability because of RA. Among 1,756 patients whose symptoms had begun during the 2000s, the probabilities of continuing to work were 80% (95% confidence interval (CI) 78%-82%) at 2 years and 68% (95% CI 65%-71%) at 5 years, with similar patterns in high-GDP and low-GDP countries. Patients who continued working versus stopped working had significantly better clinical status on all clinical status measures and patient self-report scores, with similar patterns in high-GDP and low-GDP countries. However, patients who had stopped working in high-GDP countries had better clinical status than patients who continued working in low-GDP countries. The most significant identifier of work disability in all subgroups was the Health Assessment Questionnaire (HAQ) functional disability score. Conclusions: Work disability rates remain high among people with RA during this millennium. In low-GDP countries, people remain working with high levels of disability and disease activity. Cultural and economic differences between societies affect work disability as an outcome measure for RA.

Relevance:

20.00%

Publisher:

Abstract:

Context. The turbulent pumping effect corresponds to the transport of magnetic flux due to the presence of density and turbulence gradients in convectively unstable layers. In the induction equation it appears as an advective term and for this reason it is expected to be important in solar and stellar dynamo processes. Aims. We explore the effects of turbulent pumping in a flux-dominated Babcock-Leighton solar dynamo model with a solar-like rotation law. Methods. In a first step, only vertical pumping was considered through the inclusion of a radial diamagnetic term in the induction equation. In a second step, a latitudinal pumping term was included, and then a near-surface shear layer was added. Results. The results reveal the importance of the pumping mechanism in solving current limitations in mean-field dynamo modeling, such as the storage of magnetic flux and the latitudinal distribution of sunspots. If a meridional flow is assumed to be present only in the upper part of the convective zone, it is the full turbulent pumping that regulates both the period of the solar cycle and the latitudinal distribution of sunspot activity. In models that consider shear near the surface, a second shell of toroidal field is generated above r = 0.95 R☉ at all latitudes. If the full pumping is also included, the polar toroidal fields are efficiently advected inwards, and the toroidal magnetic activity survives only at the observed latitudes near the equator. With regard to the parity of the magnetic field, only models that combine turbulent pumping with near-surface shear always converge to the dipolar parity. Conclusions. This result suggests that, under the Babcock-Leighton approach, the equatorward motion of the observed magnetic activity is governed by the latitudinal pumping of the toroidal magnetic field rather than by a large-scale coherent meridional flow. Our results support the idea that the parity problem is related to the quadrupolar imprint of the meridional flow on the poloidal component of the magnetic field, and that turbulent pumping helps to wash out this imprint.
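For reference, in mean-field dynamo models turbulent pumping enters the induction equation as an effective advective velocity added to the mean flow; a standard form (not necessarily the exact parameterization adopted by the authors) is

```latex
\frac{\partial \langle \mathbf{B} \rangle}{\partial t}
  = \nabla \times \Big[ \big( \langle \mathbf{U} \rangle + \boldsymbol{\gamma} \big) \times \langle \mathbf{B} \rangle
  + \alpha \, \langle \mathbf{B} \rangle
  - \eta_t \, \nabla \times \langle \mathbf{B} \rangle \Big],
\qquad
\boldsymbol{\gamma}_{\mathrm{dia}} = -\tfrac{1}{2} \, \nabla \eta_t ,
```

where the diamagnetic contribution γ_dia corresponds to the radial (vertical) pumping term mentioned in the Methods.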

Relevance:

20.00%

Publisher:

Abstract:

Aims. An analytical solution for the discrepancy between observed core-like profiles and predicted cusp profiles in dark matter halos is studied. Methods. We calculate the distribution function for Navarro-Frenk-White halos and extract energy from the distribution, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that we can reproduce the evolution of a cusp to a flat density profile by a decrease of the initial potential energy.
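For context, the cuspy profile referred to above is the standard Navarro-Frenk-White form (the authors' normalization is not reproduced here),

```latex
\rho_{\mathrm{NFW}}(r) = \frac{\rho_s}{\left( r/r_s \right) \left( 1 + r/r_s \right)^{2}} ,
```

which diverges as ρ ∝ r^(-1) for r ≪ r_s (the cusp), whereas the observed core-like profiles flatten toward a roughly constant central density.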

Relevance:

20.00%

Publisher:

Abstract:

We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the existing Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
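For reference, a minimal sketch of the generalization measure named above, the Kullback-Leibler divergence between two discrete distributions, is given below; how it is applied to the HMM output process in the paper is not reproduced.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions given as probability vectors."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Toy usage: divergence between two categorical distributions.
print(kl_divergence([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))
```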

Relevance:

20.00%

Publisher:

Abstract:

The energy spectrum of an electron confined in a quantum dot (QD) with a three-dimensional anisotropic parabolic potential in a tilted magnetic field was found analytically. The theory describes exactly the mixing of in-plane and out-of-plane motions of an electron caused by a tilted magnetic field, which could be seen, for example, in the level anticrossing. For charged QDs in a tilted magnetic field we predict three strong resonant lines in the far-infrared-absorption spectra.
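For reference, the single-electron model described above is usually written with an anisotropic parabolic confinement and the tilted field entering through the vector potential; a standard form (the authors' gauge and notation may differ) is

```latex
H = \frac{\left( \mathbf{p} + e \mathbf{A} \right)^{2}}{2 m^{*}}
  + \frac{m^{*}}{2} \left( \omega_x^{2} x^{2} + \omega_y^{2} y^{2} + \omega_z^{2} z^{2} \right),
\qquad
\mathbf{B} = B \left( \sin\theta,\, 0,\, \cos\theta \right),
```

where the in-plane component B sin θ is what couples the in-plane and out-of-plane motions.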

Relevance:

20.00%

Publisher:

Abstract:

We investigate the performance of a variant of Axelrod's model for the dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
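As a rough illustration of the lattice dynamics described above, the sketch below places binary-string agents on a periodic square lattice and lets each agent copy a single differing trait from a better-scoring neighbor. It uses a toy cost (Hamming distance to a hidden target string) instead of the Boolean perceptron classification cost, and a simplified imitation rule, so it only conveys the flavor of the ACH rather than the exact update used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L_side, F = 10, 16                               # lattice side, string length
target = rng.integers(0, 2, F)                   # hidden optimum (toy cost only)
agents = rng.integers(0, 2, (L_side, L_side, F)) # one binary string per site

def cost(s):
    """Toy cost: Hamming distance to the hidden target string."""
    return int(np.sum(s != target))

def step():
    i, j = rng.integers(0, L_side, 2)            # pick a random agent
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
    ni, nj = (i + di) % L_side, (j + dj) % L_side
    a, b = agents[i, j], agents[ni, nj]
    diff = np.flatnonzero(a != b)
    if diff.size and cost(b) < cost(a):           # imitate only lower-cost neighbors
        k = rng.choice(diff)
        a[k] = b[k]                               # copy one differing trait

for _ in range(50000):
    step()
print(min(cost(agents[i, j]) for i in range(L_side) for j in range(L_side)))
```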

Relevance:

20.00%

Publisher:

Abstract:

A planar k-restricted structure is a simple graph whose blocks are planar and each has at most k vertices. Planar k-restricted structures are used by approximation algorithms for Maximum Weight Planar Subgraph, which motivates this work. The planar k-restricted ratio is the infimum, over simple planar graphs H, of the ratio of the number of edges in a maximum k-restricted structure subgraph of H to the number of edges of H. We prove that, as k tends to infinity, the planar k-restricted ratio tends to 1/2. The same result holds for the weighted version. Our results are based on analyzing the analogous ratios for outerplanar and weighted outerplanar graphs. Here both ratios tend to 1 as k goes to infinity, and we provide good estimates of the rates of convergence, showing that the rates differ between the weighted and unweighted cases.

Relevance:

20.00%

Publisher:

Abstract:

Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationship. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, and consequently exact methods can usually not be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step to any multiple alignment or repeats inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists of verifying several strong necessary conditions that can be checked quickly. We implemented three versions of the filter. The first is simply a straightforward extension to the case of multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, enables filtering out considerably more with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, at a non-negligible additional time cost in many circumstances; our experiments show that it is particularly useful with large error rates. The latter version was applied as a preprocessing step to a multiple alignment tool, reducing the overall time (filter plus alignment) on average 63-fold and at best 530-fold compared with direct alignment, in most cases with a better quality alignment. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for dealing with error rates greater than 10% of the repeat length.

Relevance:

20.00%

Publisher:

Abstract:

Background: Feature selection is a pattern recognition approach to choose important variables according to some criteria in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). There are many genomic and proteomic applications that rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins, and external stimuli. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions, and graphic visualization tools such as scatterplots, parallel coordinates, and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions, and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in other pattern recognition applications, although its main focus is bioinformatics tasks.
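As an illustration of the kind of routine such an environment wraps, the sketch below implements plain sequential forward selection with a user-supplied criterion function (higher is better); the algorithms and criterion functions actually shipped with the software are not specified here, and `criterion` is a placeholder.

```python
def forward_selection(n_features, criterion, k):
    """Greedily add, one at a time, the feature that most improves the criterion."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: criterion(selected + [f]))
        selected.append(best)
    return selected

# Toy usage with a made-up criterion favoring features 2 and 5.
print(forward_selection(8, lambda subset: sum(1.0 for f in subset if f in (2, 5)), 3))
```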

Relevance:

20.00%

Publisher:

Abstract:

An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning embedding nearly spanning bounded degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction inspired by Wigderson-Zuckerman expander graphs for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).

Relevance:

20.00%

Publisher:

Abstract:

Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum over the space of trees of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
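For illustration, a generic augmenting-path (Ford-Fulkerson / Edmonds-Karp) max-flow routine of the kind the reduction above relies on is sketched below; how the supremum over context trees is encoded as a flow network is specific to the paper and is not reproduced here.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow; capacity is a dict of dicts {u: {v: c}}."""
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():              # add reverse edges with zero capacity
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}                  # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink                       # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                        # augment along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy usage: the maximum flow from 's' to 't' below is 3.
print(max_flow({'s': {'a': 2, 'b': 1}, 'a': {'t': 2}, 'b': {'t': 2}}, 's', 't'))
```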