943 results for POLYNOMIAL-MAPPINGS
Abstract:
In a seminal paper [10], Weitz gave a deterministic fully polynomial approximation scheme for counting exponentially weighted independent sets (which is the same as approximating the partition function of the hard-core model from statistical physics) in graphs of degree at most d, up to the critical activity for the uniqueness of the Gibbs measure on the infinite d-regular tree. More recently Sly [8] (see also [1]) showed that this is optimal in the sense that if there is an FPRAS for the hard-core partition function on graphs of maximum degree d for activities larger than the critical activity on the infinite d-regular tree then NP = RP. In this paper we extend Weitz's approach to derive a deterministic fully polynomial approximation scheme for the partition function of general two-state anti-ferromagnetic spin systems on graphs of maximum degree d, up to the corresponding critical point on the d-regular tree. The main ingredient of our result is a proof that for two-state anti-ferromagnetic spin systems on the d-regular tree, weak spatial mixing implies strong spatial mixing. This in turn uses a message-decay argument which extends a similar approach proposed recently for the hard-core model by Restrepo et al. [7] to the case of general two-state anti-ferromagnetic spin systems.
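The critical activity mentioned above has a well-known closed form, and the uniqueness/non-uniqueness transition can be seen directly in the hard-core tree recursion. A minimal Python sketch (parameter values are illustrative and not taken from the paper):

    # Occupancy-ratio recursion for the hard-core model on the infinite
    # d-regular tree: R -> lam / (1 + R)**(d - 1). Below the critical
    # activity lam_c(d) = (d-1)**(d-1) / (d-2)**d the iteration has a
    # unique attracting fixed point (uniqueness of the Gibbs measure);
    # above it the fixed point repels and iterates settle into a 2-cycle.

    def critical_activity(d):
        return (d - 1) ** (d - 1) / (d - 2) ** d

    def tree_recursion(lam, d, r0=1.0, steps=200):
        r_prev, r = None, r0
        for _ in range(steps):
            r_prev, r = r, lam / (1.0 + r) ** (d - 1)
        return r_prev, r  # (nearly) equal below lam_c, a 2-cycle above it

    d = 5
    lam_c = critical_activity(d)            # 256/243, about 1.053 for d = 5
    print(tree_recursion(0.9 * lam_c, d))   # consecutive iterates agree
    print(tree_recursion(1.5 * lam_c, d))   # consecutive iterates differ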
Abstract:
Understanding the different background landscapes in which malaria transmission occurs is fundamental to understanding malaria epidemiology and to designing effective local malaria control programs. Geology, geomorphology, vegetation, climate, land use, and anopheline distribution were used as a basis for an ecological classification of the state of Roraima, Brazil, in the northern Amazon Basin, focused on the natural history of malaria and its transmission. We applied unsupervised maximum likelihood classification, principal components analysis, and equal-contribution weighted overlay analyses to fine-scale thematic maps, which resulted in clustered regions. We used ecological niche modeling techniques to develop a fine-scale picture of malaria vector distributions in the state. Eight ecoregions were identified, including 5 types of dense tropical rain forest and 3 types of savannah, and malaria-related aspects are discussed based on this classification. Ecoregions formed by dense tropical rain forest were named montane (ecoregion I), submontane (II), plateau (III), lowland (IV), and alluvial (V). Ecoregions formed by savannah were divided into steppe (VI, campos de Roraima), savannah (VII, cerrado), and wetland (VIII, campinarana). Such ecoregional mappings are important tools in integrated malaria control programs that aim to identify specific characteristics of malaria transmission, classify transmission risk, and define priority areas and appropriate interventions. For some areas, extension of these approaches to still finer resolutions will provide an improved picture of malaria transmission patterns.
Abstract:
BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factors that publish original articles, letters, and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with a logarithmic link function was used to assess the evolution of the number of SDM publications across the period according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119, respectively). CONCLUSION: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
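As an illustration of the trend model named in the Methods, a polynomial Poisson regression with a log link can be fit as below. This is an editorial sketch with made-up yearly counts, not the study's data or code:

    # Fit a quadratic-in-time Poisson regression with a log link to
    # yearly publication counts; an exponential trend corresponds to a
    # significant linear term with a negligible quadratic term.
    import numpy as np
    import statsmodels.api as sm

    years = np.arange(1996, 2012)
    counts = np.array([46, 50, 55, 62, 70, 78, 85, 95, 104, 112,
                       120, 130, 140, 150, 158, 165])  # hypothetical

    t = years - years.min()
    X = sm.add_constant(np.column_stack([t, t ** 2]))
    model = sm.GLM(counts, X, family=sm.families.Poisson())  # log link
    print(model.fit().summary())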
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e., the number of labels that can be used) as a means of simplifying the management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switched (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. In this letter, however, the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By discarding this tree-shape assumption, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-time solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, the label space can be reduced further.
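The practical effect of label merging can be illustrated with a toy count (an editorial sketch; the paths are hypothetical and the internals of the Full Label Merging algorithm are not taken from the letter). Without merging, an LSR consumes one label per LSP traversing it; with MP2P merging, all LSPs bound for the same egress/FEC can share a single label:

    # Compare per-node label usage with and without MP2P label merging.
    from collections import defaultdict

    # Each LSP is (egress/FEC, path as a list of nodes); all invented.
    lsps = [
        ("F1", ["A", "B", "C", "E"]),
        ("F1", ["D", "B", "C", "E"]),
        ("F1", ["G", "C", "E"]),
        ("F2", ["A", "B", "E"]),
    ]

    per_lsp = defaultdict(int)   # one label per LSP per node
    per_fec = defaultdict(set)   # one label per FEC per node (merging)
    for fec, path in lsps:
        for node in path:
            per_lsp[node] += 1
            per_fec[node].add(fec)

    for node in sorted(per_lsp):
        print(node, "no merging:", per_lsp[node],
              "with merging:", len(per_fec[node]))
    # e.g. node C carries 3 LSPs but needs only 1 label with merging.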
Abstract:
All-optical label swapping (AOLS) forms a key technology towards the implementation of all-optical packet switching nodes (AOPS) for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e., the number of labels used), since a special optical device is needed for each recognized label on every node. Label space sizes are affected by the way in which demands are routed. For instance, while shortest-path routing leads to the usage of fewer labels but high link utilization, minimum interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture. AOLStack aims at reducing label spaces while easing the compromise with link utilization. In this paper, an integer linear program is proposed with the objective of analyzing how AOLStack softens the aforementioned trade-off. Furthermore, a heuristic aiming at finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with a low increase in link utilization or, similarly, b) makes better use of the residual bandwidth to decrease the number of labels even further.
Abstract:
Remote sensing and geographical information technologies were used to discriminate areas of high and low risk for contracting kala-azar or visceral leishmaniasis. Satellite data were digitally processed to generate maps of land cover and spectral indices, such as the normalised difference vegetation index and wetness index. To map estimated vector abundance and indoor climate data, local polynomial interpolation based on the weightage values was used. Attribute layers were prepared based on illiteracy and the unemployed proportion of the population and associated with village boundaries. Pearson's correlation coefficient was used to estimate the relationship between environmental variables and disease incidence across the study area. The cell values for each input raster in the analysis were assigned values from the evaluation scale. Simple weightings/ratings based on the degree of favourable conditions for kala-azar transmission were used for all the variables, leading to a geo-environmental risk model. Variables such as land use/land cover, vegetation conditions, surface dampness, the indoor climate, illiteracy rates, and the size of the unemployed population were considered for inclusion in the geo-environmental kala-azar risk model. The risk model was stratified into areas of "risk" and "non-risk" for the disease, based on calculation of risk indices. The described approach constitutes a promising tool for microlevel kala-azar surveillance and aids in directing control efforts.
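A minimal sketch of the weighted-overlay risk index described above; the layers, weights, and threshold below are assumptions for illustration, not the study's actual values:

    # Each input raster is rated on a common evaluation scale, the
    # rasters are combined as a weighted sum, and the result is
    # thresholded into "risk" / "non-risk" strata.
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (4, 4)
    # Rated input layers on a 1-5 evaluation scale (hypothetical).
    layers = {
        "vegetation_index": rng.integers(1, 6, shape),
        "wetness_index":    rng.integers(1, 6, shape),
        "illiteracy_rate":  rng.integers(1, 6, shape),
    }
    weights = {"vegetation_index": 0.4, "wetness_index": 0.4,
               "illiteracy_rate": 0.2}   # assumed weights

    risk_index = sum(w * layers[name] for name, w in weights.items())
    risk_mask = risk_index >= 3.5        # assumed stratification threshold
    print(risk_index)
    print(risk_mask.astype(int))         # 1 = "risk", 0 = "non-risk"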
Abstract:
The purpose of this paper is to reflect on the possibilities and challenges of Community Development Banks (CDBs) as an innovative method of socioeconomic management of microcredit for poor populations. To this end, we discuss the case of Banco Palmas in Conjunto Palmeiras in the city of Fortaleza, in the northeastern state of Ceará, as an empirical case study. The analyses presented here are based on information obtained from Banco Palmas between late 2011 and early 2012. In addition, previous studies of the bank by other researchers, as well as other studies on CDBs, were important sources. The primary data collected at Banco Palmas came from documents made available by the bank, such as reports and mappings. The analyses describe some of the characteristics of the granting of microcredit and allow one to situate it in the universe of microfinance and solidarity finance. They also show the significant growth of local consumption, mostly through the use of the Palmas social currency. The Banco Palmas experience, aside from influencing national public policies on solidarity finance, initiated a network of CDBs that encourages the replication of these experiences throughout the country.
Abstract:
The recently released Affymetrix Human Gene 1.0 ST array has two major differences compared with standard 3'-based arrays: (i) it interrogates the entire mRNA transcript, and (ii) it uses DNA targets. To assess the impact of these differences on array performance, we performed a series of comparative hybridizations between the Human Gene 1.0 ST and the Affymetrix HG-U133 Plus 2.0 and the Illumina HumanRef-8 BeadChip arrays. Additionally, both RNA and DNA targets were hybridized on HG-U133 Plus 2.0 arrays. The results show that the overall reproducibility of the Gene 1.0 ST array is best. When looking only at the high intensity probes, the reproducibilities of the Gene 1.0 ST array and the Illumina BeadChip array are equally good. Concordance of array results was assessed using different inter-platform mappings. Agreement is best between the two labeling protocols on the HG-U133 Plus 2.0 array. The Gene 1.0 ST array is most concordant with the HG-U133 array hybridized with cDNA targets. This may reflect the impact of the target type. Overall, the high degree of correspondence provides strong evidence for the reliability of the Gene 1.0 ST array.
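At its core, such a concordance assessment joins expression values through an inter-platform mapping and correlates them. A small hypothetical sketch (tables and gene identifiers invented for illustration, not the study's pipeline):

    # Join two platforms' expression values via a shared gene mapping
    # and measure concordance with Pearson correlation.
    import pandas as pd

    gene_st = pd.DataFrame({"gene": ["TP53", "GAPDH", "MYC", "ACTB"],
                            "log2_expr": [8.1, 12.3, 9.4, 13.0]})
    u133 = pd.DataFrame({"gene": ["TP53", "GAPDH", "MYC", "ACTB"],
                         "log2_expr": [7.9, 12.6, 9.1, 12.7]})

    merged = gene_st.merge(u133, on="gene", suffixes=("_st", "_u133"))
    r = merged["log2_expr_st"].corr(merged["log2_expr_u133"])  # Pearson
    print(f"inter-platform concordance: r = {r:.3f}")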
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other hand. Moreover, we explain the workings of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
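The PGP methodology can be sketched as a two-stage optimization. The following Python illustration uses assumed return data, unit preference powers, and the third central moment as the skewness objective; the paper's actual formulation may differ in detail:

    # Stage 1 finds the ideal value of each moment separately; stage 2
    # minimizes polynomially weighted deviations from those ideals.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    R = rng.normal(0.01, 0.05, size=(500, 3))   # hypothetical returns

    def moments(w):
        p = R @ w
        return p.mean(), p.var(), ((p - p.mean()) ** 3).mean()

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    bnds = [(0, 1)] * 3
    w0 = np.ones(3) / 3

    m_star = -minimize(lambda w: -moments(w)[0], w0, bounds=bnds,
                       constraints=cons).fun          # max mean
    v_star = minimize(lambda w: moments(w)[1], w0, bounds=bnds,
                      constraints=cons).fun           # min variance
    s_star = -minimize(lambda w: -moments(w)[2], w0, bounds=bnds,
                       constraints=cons).fun          # max skewness

    def pgp(w, p1=1, p2=1, p3=1):
        m, v, s = moments(w)
        return (abs(m_star - m) ** p1 + abs(v - v_star) ** p2
                + abs(s_star - s) ** p3)

    w_opt = minimize(pgp, w0, bounds=bnds, constraints=cons).x
    print("PGP portfolio weights:", w_opt.round(3))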
Abstract:
OBJECTIVE: Previous research suggested that proper blood pressure (BP) management in acute stroke may need to take into account the underlying etiology. METHODS: All patients with acute ischemic stroke registered in the ASTRAL registry between 2003 and 2009 were analyzed. Unfavorable outcome was defined as a modified Rankin Scale score >2. A local polynomial surface algorithm was used to assess the effect of baseline and 24- to 48-hour systolic BP (SBP) and mean arterial pressure (MAP) on outcome in patients with lacunar, atherosclerotic, and cardioembolic stroke. RESULTS: A total of 791 patients were included in the analysis. For lacunar and atherosclerotic strokes, there was no difference in the predicted probability of unfavorable outcome between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, or >160 mm Hg (15.3% vs 12.1% vs 20.8%, respectively, for lacunar, p = 0.15; 41.0% vs 41.5% vs 45.5%, respectively, for atherosclerotic, p = 0.75), or between patients with BP increase vs decrease at 24-48 hours (18.7% vs 18.0%, respectively, for lacunar, p = 0.84; 43.4% vs 43.6%, respectively, for atherosclerotic, p = 0.88). For cardioembolic strokes, an increase of BP at 24-48 hours was associated with a higher probability of unfavorable outcome compared to BP reduction (53.4% vs 42.2%, respectively, p = 0.037). Also, the predicted probability of unfavorable outcome was significantly different between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, and >160 mm Hg (34.8% vs 42.3% vs 52.4%, respectively, p < 0.01). CONCLUSIONS: This study provides evidence that BP management in acute stroke may have to be tailored with respect to the underlying etiopathogenetic mechanism.
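A local-polynomial surface over two BP covariates can be sketched as a kernel-weighted local linear regression. The kernel, bandwidth, and simulated data below are assumptions for illustration, not the registry's actual analysis:

    # Local-linear surface: at each grid point, fit a weighted plane to
    # the data and keep the local intercept as the fitted value.
    import numpy as np

    def local_linear_surface(x1, x2, y, grid1, grid2, bandwidth=15.0):
        surface = np.empty((len(grid1), len(grid2)))
        for i, g1 in enumerate(grid1):
            for j, g2 in enumerate(grid2):
                d2 = ((x1 - g1) ** 2 + (x2 - g2) ** 2) / bandwidth ** 2
                sw = np.sqrt(np.exp(-0.5 * d2))     # Gaussian kernel
                X = np.column_stack([np.ones_like(x1), x1 - g1, x2 - g2])
                beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw,
                                           rcond=None)
                surface[i, j] = beta[0]             # local intercept
        return surface

    rng = np.random.default_rng(2)
    sbp = rng.normal(155, 20, 300)                  # admission SBP (mm Hg)
    delta = rng.normal(0, 15, 300)                  # SBP change at 24-48 h
    bad = (rng.random(300) < 0.3 + 0.002 * delta).astype(float)
    surf = local_linear_surface(sbp, delta, bad,
                                np.linspace(120, 190, 4),
                                np.linspace(-30, 30, 4))
    print(surf.round(2))  # estimated probability of unfavorable outcome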
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
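The linear-time idea described here, stripped of frame compatibility and the Gene Model for brevity, can be sketched as a single sweep over the exons sorted by acceptor and by donor position (exon coordinates and scores are hypothetical):

    # Scan exons by increasing acceptor position while folding exons
    # whose donor has already passed into a stored running maximum, so
    # each exon is touched a constant number of times.

    exons = [  # (acceptor/start, donor/end, score)
        (10, 50, 3.0), (40, 90, 2.5), (60, 120, 4.0),
        (130, 180, 1.5), (150, 210, 3.5),
    ]

    by_acceptor = sorted(exons, key=lambda e: e[0])
    by_donor = sorted(exons, key=lambda e: e[1])

    best = {}          # exon -> best score of a gene ending at that exon
    best_closed = 0.0  # best gene score over exons with donor < acceptor
    j = 0
    for exon in by_acceptor:
        acceptor, _, score = exon
        # Fold in every exon whose donor lies before this acceptor; any
        # such exon was already processed since its acceptor is earlier.
        while j < len(by_donor) and by_donor[j][1] < acceptor:
            best_closed = max(best_closed, best[by_donor[j]])
            j += 1
        best[exon] = score + best_closed

    print(max(best.values()))  # score of the highest scoring gene: 10.5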
Abstract:
Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with a minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (where sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of the human Chromosome X data we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect continental as well as population divide based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in our future work, we plan to investigate more non-traditional questions via these structures computed by our methodology.
Abstract:
Background: The understanding of whole genome sequences in higher eukaryotes depends to a large degree on the reliable definition of transcription units including exon/intron structures, translated open reading frames (ORFs) and flanking untranslated regions. The best currently available chicken transcript catalog is the Ensembl build, based on the mappings of a relatively small number of full length cDNAs and ESTs to the genome as well as on in silico gene predictions derived from the genome sequence. Results: We use Long Serial Analysis of Gene Expression (LongSAGE) in bursal lymphocytes and the DT40 cell line to verify the quality and completeness of the annotated transcripts. 53.6% of the more than 38,000 unique SAGE tags (unitags) match to full length bursal cDNAs, the Ensembl transcript build or the genome sequence. The majority of all matching unitags show single matches to the genome, but no matches to the genome derived Ensembl transcript build. Nevertheless, most of these tags map close to the 3' boundaries of annotated Ensembl transcripts. Conclusions: These results suggest that rather few genes are missing in the current Ensembl chicken transcript build, but that the 3' ends of many transcripts may not have been accurately predicted. The tags with no match in the transcript sequences can now be used to improve gene predictions, pinpoint the genomic location of entirely missed transcripts and optimize the accuracy of gene finder software.
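For orientation, a LongSAGE tag is the 3'-most NlaIII anchoring-enzyme site (CATG) of a transcript plus the next 17 bases, giving the 21-bp tag that is then matched against cDNAs, the transcript build, or the genome. A small illustrative helper (an assumption for exposition, not the paper's pipeline):

    # Extract the 21-bp LongSAGE tag from a transcript sequence.

    def longsage_tag(transcript, anchor="CATG", tag_tail=17):
        """Return the tag at the 3'-most anchor site, or None."""
        pos = transcript.rfind(anchor)   # 3'-most NlaIII site
        if pos == -1 or pos + len(anchor) + tag_tail > len(transcript):
            return None
        return transcript[pos:pos + len(anchor) + tag_tail]

    mrna = "AAGCTTCATGGCTAGCTAGCATGTTACGGATCCAGTAAGGACGTCAAAAA"
    print(longsage_tag(mrna))  # CATGTTACGGATCCAGTAAGG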
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, that provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
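For background on multiplicativity (an illustration of multiplicative LSSSs in general, not of this paper's constructions): Shamir's scheme is multiplicative, since pointwise products of shares are evaluations of a degree-2t polynomial whose constant term is the product of the secrets, recoverable when n >= 2t + 1. A self-contained Python sketch:

    # Shamir (t, n) sharing over GF(P) and reconstruction of a product
    # of two secrets from pointwise-multiplied shares.
    import random

    P = 2 ** 61 - 1   # a prime field modulus

    def share(secret, t, n):
        """Evaluations of a random degree-t polynomial at x = 1..n."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def interpolate_at_zero(points):
        """Lagrange interpolation at x = 0 over GF(P)."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    t, n = 2, 5                       # n >= 2t + 1 parties
    a, b = 123456789, 987654321
    sa, sb = share(a, t, n), share(b, t, n)
    prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
    assert interpolate_at_zero(prod_shares) == a * b % P
    print("recovered product of secrets:", interpolate_at_zero(prod_shares))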
Abstract:
Extreme Vocal Effects (EVE) in music are so recent that few studies have been carried out on how they are physiologically produced and whether or not they are harmful to the human voice. Real-time voice transformations are possible nowadays thanks to new technologies and voice processing algorithms. This Master's thesis aims to define and classify these new singing techniques and to create a mapping from the physiological aspects of each EVE to its corresponding spectrum variations. Voice transformation models based on these mappings are proposed and discussed for each one of these EVEs. We also discuss different transformation methods and strategies in order to obtain better results. A subjective evaluation of the results of the transformations is also presented and discussed, along with further work, improvements, and lines of work in this field.