952 results for polynomial superinvariant


Relevance: 10.00%

Abstract:

We give the first systematic study of strong isomorphism reductions, a notion of reduction more appropriate than polynomial-time reduction when, for example, comparing the computational complexity of the isomorphism problem for different classes of structures. We show that the partial ordering of its degrees is quite rich. We analyze its relationship to a further type of reduction between classes of structures based purely on comparing, for every n, the number of nonisomorphic structures of cardinality at most n in both classes. Furthermore, in a more general setting we address the question of the existence of a maximal element in the partial ordering of the degrees.

Relevance: 10.00%

Abstract:

Assume that the problem Q₀ is not solvable in polynomial time. For theories T containing a sufficiently rich part of true arithmetic we characterize T ∪ {Con_T} as the minimal extension of T proving for some algorithm that it decides Q₀ as fast as any algorithm B with the property that T proves that B decides Q₀. Here, Con_T claims the consistency of T. Moreover, we characterize problems with an optimal algorithm in terms of arithmetical theories.

Relevance: 10.00%

Abstract:

In a seminal paper [10], Weitz gave a deterministic fully polynomial approximation scheme for counting exponentially weighted independent sets (which is the same as approximating the partition function of the hard-core model from statistical physics) in graphs of degree at most d, up to the critical activity for the uniqueness of the Gibbs measure on the infinite d-regular tree. More recently, Sly [8] (see also [1]) showed that this is optimal in the sense that if there is an FPRAS for the hard-core partition function on graphs of maximum degree d for activities larger than the critical activity on the infinite d-regular tree, then NP = RP. In this paper we extend Weitz's approach to derive a deterministic fully polynomial approximation scheme for the partition function of general two-state anti-ferromagnetic spin systems on graphs of maximum degree d, up to the corresponding critical point on the d-regular tree. The main ingredient of our result is a proof that for two-state anti-ferromagnetic spin systems on the d-regular tree, weak spatial mixing implies strong spatial mixing. This in turn uses a message-decay argument which extends a similar approach proposed recently for the hard-core model by Restrepo et al. [7] to the case of general two-state anti-ferromagnetic spin systems.
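As a concrete illustration of the correlation-decay picture behind these results, the following sketch (an assumption-laden toy, not the paper's algorithm) iterates the standard hard-core recursion on the d-regular tree from the two extreme boundary conditions; below the tree threshold the boundary influence decays and the two runs agree, above it they do not:

```python
# Toy sketch of the hard-core recursion on the d-regular tree: at an internal
# vertex, the occupation ratio is R = lam / (1 + R_child)^(d-1). Iterating from
# the two extreme boundary conditions (all unoccupied vs. all occupied) and
# checking whether they merge gives a quick numerical probe of uniqueness /
# weak spatial mixing for a given activity lam.

def hardcore_ratio(lam, d, levels, boundary_ratio):
    """Occupation ratio P(occupied)/P(unoccupied) at the root of a depth-`levels`
    (d-1)-ary tree whose leaves all carry the ratio `boundary_ratio`."""
    r = boundary_ratio
    for _ in range(levels):
        r = lam / (1.0 + r) ** (d - 1)
    return r

d = 5
lam_c = (d - 1) ** (d - 1) / (d - 2) ** d   # uniqueness threshold on the d-regular tree

for lam in (0.5 * lam_c, 1.5 * lam_c):
    r_free = hardcore_ratio(lam, d, levels=60, boundary_ratio=0.0)   # boundary unoccupied
    r_occ = hardcore_ratio(lam, d, levels=60, boundary_ratio=1e6)    # boundary (nearly) occupied
    print(f"lambda={lam:.3f}  root ratios: {r_free:.6f} vs {r_occ:.6f}")
# Below lam_c the two boundary conditions converge to the same root ratio
# (decay of the boundary influence); above lam_c they stay apart.
```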

Relevance: 10.00%

Abstract:

BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119 respectively). CONCLUSION: This review in full-text showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
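As a rough sketch of the kind of model described (a polynomial Poisson regression with a logarithmic link for yearly publication counts), assuming statsmodels is available and using invented counts rather than the study's data:

```python
# Hedged sketch of a Poisson GLM with a log link and polynomial terms in
# publication year. Only the first (46) and last (165) counts come from the
# abstract; the intermediate values are placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

years = np.arange(1996, 2012)                 # 1996..2011
counts = np.array([46, 50, 55, 61, 68, 75, 83, 92,
                   101, 110, 120, 130, 141, 150, 158, 165])

t = years - years.min()                        # rescaled time axis
X = sm.add_constant(np.column_stack([t, t ** 2]))   # linear + quadratic trend
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()  # log link is the default
print(fit.summary())
# To model the *relative* frequency of SDM publications, the log of the total
# number of publications per year could be supplied via the `exposure=` argument.
```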

Relevance: 10.00%

Abstract:

Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switched (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of its origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this fact, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape assumption, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, the label space can be reduced even further.
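The following toy sketch is not the Full Label Merging algorithm, whose details are beyond this abstract; under a simplified model it only illustrates why MP2P merging shrinks label spaces: without merging each LSP needs its own label at every node it traverses, whereas with merging all LSPs bound for the same egress (FEC) can share a single label per node:

```python
# Toy illustration (assumed model, not the letter's algorithm): compare per-node
# label usage without merging (one label per LSP per node) and with MP2P merging
# (one label per egress/FEC per node).
from collections import defaultdict

# Each LSP is a (path, egress) pair; paths are node sequences, egress = path[-1].
lsps = [
    (["A", "B", "C", "E"], "E"),
    (["D", "B", "C", "E"], "E"),
    (["A", "B", "C", "F"], "F"),
]

no_merge = defaultdict(int)    # node -> number of labels, one per traversing LSP
merged = defaultdict(set)      # node -> set of FECs (one shared label per FEC)

for path, egress in lsps:
    for node in path[1:]:              # labels are allocated at downstream nodes
        no_merge[node] += 1
        merged[node].add(egress)

print("labels without merging:", sum(no_merge.values()))
print("labels with MP2P merging:", sum(len(fecs) for fecs in merged.values()))
```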

Relevance: 10.00%

Abstract:

All-optical label swapping (AOLS) forms a key technology towards the implementation of all-optical packet switching nodes (AOPS) for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e. the number of labels used), since a special optical device is needed for each label recognized at every node. Label space sizes are affected by the way in which demands are routed. For instance, while shortest-path routing leads to the use of fewer labels but high link utilization, minimum interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture. AOLStack aims at reducing label spaces while easing the compromise with link utilization. In this paper, an integer linear program is proposed with the objective of analyzing how AOLStack softens the aforementioned trade-off. Furthermore, a heuristic aimed at finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either (a) reduces the label space with only a small increase in link utilization or, equivalently, (b) makes better use of the residual bandwidth to decrease the number of labels even further.

Relevance: 10.00%

Abstract:

Remote sensing and geographical information technologies were used to discriminate areas of high and low risk for contracting kala-azar, or visceral leishmaniasis. Satellite data were digitally processed to generate maps of land cover and spectral indices, such as the normalised difference vegetation index and wetness index. To map estimated vector abundance and indoor climate data, local polynomial interpolations were used based on the weighting values. Attribute layers were prepared based on illiteracy and the unemployed proportion of the population and associated with village boundaries. Pearson's correlation coefficient was used to estimate the relationship between environmental variables and disease incidence across the study area. The cell values for each input raster in the analysis were assigned values from the evaluation scale. Simple weights/ratings based on the degree of favourable conditions for kala-azar transmission were used for all the variables, leading to a geo-environmental risk model. Variables such as land use/land cover, vegetation conditions, surface dampness, the indoor climate, illiteracy rates and the size of the unemployed population were considered for inclusion in the geo-environmental kala-azar risk model. The risk model was stratified into areas of "risk" and "non-risk" for the disease, based on the calculation of risk indices. The described approach constitutes a promising tool for microlevel kala-azar surveillance and aids in directing control efforts.
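A simplified, hypothetical sketch of the weighted-overlay step (rated layers multiplied by assumed weights, summed into a risk index, and thresholded into "risk"/"non-risk"); the layer values, weights and cut-off below are illustrative only:

```python
# Weighted raster overlay sketch with invented ratings and weights.
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                                    # tiny raster for illustration
layers = {
    "ndvi":        rng.integers(1, 6, shape),     # each layer already rated 1..5
    "wetness":     rng.integers(1, 6, shape),
    "indoor_clim": rng.integers(1, 6, shape),
    "illiteracy":  rng.integers(1, 6, shape),
    "unemployed":  rng.integers(1, 6, shape),
}
weights = {"ndvi": 0.25, "wetness": 0.25, "indoor_clim": 0.2,
           "illiteracy": 0.15, "unemployed": 0.15}           # assumed weights

risk_index = sum(weights[k] * layers[k] for k in layers)
risk_map = np.where(risk_index >= 3.5, "risk", "non-risk")   # assumed cut-off
print(risk_index.round(2))
print(risk_map)
```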

Relevance: 10.00%

Abstract:

This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other hand. Moreover, we explain the working of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
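For readers unfamiliar with PGP, here is a minimal sketch of a mean-variance-skewness formulation; the return history, aspiration levels and preference powers are invented, and in practice the aspiration levels would come from optimizing each moment separately:

```python
# Minimal polynomial goal programming (PGP) sketch: minimize a weighted sum of
# powered deviations from assumed aspiration levels for mean, variance, skewness.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew

rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.05, size=(500, 4))        # fake return history, 4 assets

def moments(w):
    p = R @ w
    return p.mean(), p.var(), skew(p)

m_star, v_star, s_star = 0.012, 0.0015, 0.1      # assumed aspiration levels
p1, p2, p3 = 1, 1, 1                             # assumed preference powers

def pgp_objective(w):
    m, v, s = moments(w)
    d1 = max(m_star - m, 0.0)                    # shortfall in mean
    d2 = max(v - v_star, 0.0)                    # excess variance
    d3 = max(s_star - s, 0.0)                    # shortfall in skewness
    return d1 ** p1 + d2 ** p2 + d3 ** p3

n = R.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(pgp_objective, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
               constraints=cons, method="SLSQP")
print("weights:", res.x.round(3), "moments:", [round(float(x), 5) for x in moments(res.x)])
```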

Relevance: 10.00%

Abstract:

OBJECTIVE: Previous research suggested that proper blood pressure (BP) management in acute stroke may need to take into account the underlying etiology. METHODS: All patients with acute ischemic stroke registered in the ASTRAL registry between 2003 and 2009 were analyzed. Unfavorable outcome was defined as a modified Rankin Scale score >2. A local polynomial surface algorithm was used to assess the effect of baseline and 24- to 48-hour systolic BP (SBP) and mean arterial pressure (MAP) on outcome in patients with lacunar, atherosclerotic, and cardioembolic stroke. RESULTS: A total of 791 patients were included in the analysis. For lacunar and atherosclerotic strokes, there was no difference in the predicted probability of unfavorable outcome between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, or >160 mm Hg (15.3% vs 12.1% vs 20.8%, respectively, for lacunar, p = 0.15; 41.0% vs 41.5% vs 45.5%, respectively, for atherosclerotic, p = 0.75), or between patients with a BP increase vs decrease at 24-48 hours (18.7% vs 18.0%, respectively, for lacunar, p = 0.84; 43.4% vs 43.6%, respectively, for atherosclerotic, p = 0.88). For cardioembolic strokes, an increase of BP at 24-48 hours was associated with a higher probability of unfavorable outcome compared to BP reduction (53.4% vs 42.2%, respectively, p = 0.037). Also, the predicted probability of unfavorable outcome was significantly different between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, and >160 mm Hg (34.8% vs 42.3% vs 52.4%, respectively, p < 0.01). CONCLUSIONS: This study provides evidence to support that BP management in acute stroke may have to be tailored with respect to the underlying etiopathogenetic mechanism.
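As a simplified stand-in for the local polynomial surface used in the study (here degree 0, i.e. a kernel-weighted average, with invented data and bandwidths), one can estimate the probability of an unfavorable outcome over a grid of admission SBP and 24- to 48-hour MAP values:

```python
# Simplified stand-in: a Nadaraya-Watson (degree-0 local) estimate of
# P(unfavorable outcome) as a function of admission SBP and 24-48h MAP.
# The data, outcome mechanism and bandwidths are all invented.
import numpy as np

rng = np.random.default_rng(2)
n = 791
sbp = rng.normal(155, 25, n)                      # admission systolic BP (mm Hg)
map48 = rng.normal(100, 15, n)                    # 24-48h mean arterial pressure
unfav = (rng.random(n) < 1 / (1 + np.exp(-(sbp - 160) / 20))).astype(float)

def local_probability(q_sbp, q_map, h_sbp=15.0, h_map=10.0):
    """Kernel-weighted estimate of P(unfavorable) at the query point."""
    w = np.exp(-0.5 * (((sbp - q_sbp) / h_sbp) ** 2 + ((map48 - q_map) / h_map) ** 2))
    return float(np.sum(w * unfav) / np.sum(w))

for q in (130, 150, 170):
    print(f"SBP={q} mm Hg, MAP=100 mm Hg -> P(unfavorable) ~ {local_probability(q, 100):.2f}")
```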

Relevance: 10.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring of the genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
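A minimal sketch of the linear-scan idea described above, ignoring frame compatibility and the external Gene Model for brevity; the exon coordinates and scores are invented:

```python
# Sketch of the linear scan: exons are processed by increasing acceptor (start)
# position while a pointer over exons sorted by donor (end) position maintains
# the best gene score seen so far, so each exon is extended in O(1) instead of
# scanning all compatible preceding exons.
def assemble(exons):
    """exons: list of (acceptor, donor, score) with acceptor <= donor."""
    by_acceptor = sorted(exons, key=lambda e: e[0])
    by_donor = sorted(exons, key=lambda e: e[1])
    best = {}              # exon -> best score of a gene ending in that exon
    best_prefix = 0.0      # best gene score among exons whose donor has been passed
    j = 0
    for acc, don, score in by_acceptor:
        # advance the donor pointer over all exons ending strictly before `acc`
        while j < len(by_donor) and by_donor[j][1] < acc:
            best_prefix = max(best_prefix, best.get(by_donor[j], 0.0))
            j += 1
        best[(acc, don, score)] = score + best_prefix
    return max(best.values())

exons = [(1, 100, 3.0), (150, 300, 2.0), (120, 400, 4.5), (450, 600, 1.0)]
print(assemble(exons))   # best chain: (1,100) + (120,400) + (450,600) = 8.5
```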

Relevance: 10.00%

Abstract:

Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with the minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (the function sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of the human Chromosome X data, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect the continental as well as the population divide based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in our future work we plan to investigate more non-traditional questions via these structures computed by our methodology.

Relevance: 10.00%

Abstract:

Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
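As a concrete, standard example of a multiplicative LSSS (Shamir's scheme over a prime field, not anything specific to this paper): with threshold t and n = 2t + 1 parties, the pointwise products of two share vectors lie on a degree-2t polynomial whose constant term is the product of the secrets, so that product can be reconstructed by Lagrange interpolation at 0:

```python
# Multiplicative property of Shamir secret sharing over GF(P) (canonical example
# of a multiplicative LSSS; parameters below are assumed for the demo).
import random

P = 2_147_483_647          # prime field size (2^31 - 1)
t, n = 2, 5                # threshold t, n = 2t + 1 parties

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]          # share of party x is f(x)

def reconstruct(points):
    """Lagrange interpolation at 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

a, b = 1234, 5678
sa, sb = share(a), share(b)
products = [(x, sa[x - 1] * sb[x - 1] % P) for x in range(1, n + 1)]
print(reconstruct(products), a * b % P)        # both print the same value
```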

Relevance: 10.00%

Abstract:

The properties and cosmological importance of a class of non-topological solitons, Q-balls, are studied. Aspects of Q-ball solutions and Q-ball cosmology discussed in the literature are reviewed. Q-balls are particularly considered in the Minimal Supersymmetric Standard Model with supersymmetry broken by a hidden-sector mechanism mediated by either gravity or gauge interactions. Q-ball profiles, charge-energy relations and evaporation rates for realistic Q-ball profiles are calculated for general polynomial potentials and for the gravity-mediated scenario. In all of the cases, the evaporation rates are found to increase with decreasing charge. Q-ball collisions are studied by numerical means in the two supersymmetry breaking scenarios. It is noted that the collision processes can be divided into three types: fusion, charge transfer and elastic scattering. Cross-sections are calculated for the different types of processes in the different scenarios. The formation of Q-balls from the fragmentation of the Affleck-Dine condensate is studied by numerical and analytical means. The charge distribution is found to depend strongly on the initial energy-to-charge ratio of the condensate. The final state is typically noted to consist of Q-balls and anti-Q-balls in a state of maximum entropy. By studying the relaxation of excited Q-balls, the rate at which excess energy can be emitted is calculated in the gravity-mediated scenario. The Q-ball is also found to withstand excess energy well without significant charge loss. The possible cosmological consequences of these Q-ball properties are discussed.

Relevance: 10.00%

Abstract:

Objective: To assess the level of hemoglobin (Hb) during pregnancy before and after the fortification of flours with iron. Method: A cross-sectional study with data from 12,119 pregnant women attended in public prenatal care in five macro-regions of Brazil. The sample was divided into two groups: before-fortification (birth before June/2004) and after-fortification (last menstruation after June/2005). Hb curves were compared with national and international references. Polynomial regression models were built, with a significance level of 5%. Results: Although Hb levels were higher in all gestational months after fortification, the polynomial regression did not show a fortification effect (p=0.3). Curves in the two groups were above the references in the first trimester, with a subsequent decrease and stabilization at the end of pregnancy. Conclusion: Although the fortification effect was not confirmed, the study presents the variation of Hb levels during pregnancy, which is important for care practice and the evaluation of public policies.
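A hedged sketch of the kind of comparison described (separate polynomial fits of Hb against gestational month for the two groups); the Hb values below are invented, not the study's data:

```python
# Quadratic fits of Hb by gestational month for the before- and after-
# fortification groups, using invented values for illustration.
import numpy as np

months = np.arange(1, 10)                                  # gestational months 1..9
hb_before = np.array([12.6, 12.4, 12.1, 11.8, 11.6, 11.5, 11.4, 11.4, 11.5])
hb_after = np.array([12.7, 12.5, 12.3, 12.0, 11.8, 11.6, 11.5, 11.5, 11.6])

coef_before = np.polyfit(months, hb_before, deg=2)         # quadratic trend
coef_after = np.polyfit(months, hb_after, deg=2)

for m in (3, 6, 9):
    print(m, round(np.polyval(coef_before, m), 2), round(np.polyval(coef_after, m), 2))
# A formal test of the fortification effect would add a group indicator (and its
# interactions with the polynomial terms) to a single regression model.
```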


Relevance: 10.00%

Abstract:

BACKGROUND: Different studies have shown circadian variation of ischemic burden among patients with ST-elevation myocardial infarction (STEMI), but with controversial results. The aim of this study was to analyze circadian variation of myocardial infarction size and in-hospital mortality in a large multicenter registry. METHODS: This retrospective, registry-based study used data from AMIS Plus, a large multicenter Swiss registry of patients who suffered myocardial infarction between 1999 and 2013. Peak creatine kinase (CK) was used as a proxy measure for myocardial infarction size. Associations between peak CK, in-hospital mortality, and the time of day at symptom onset were modelled using polynomial-harmonic regression methods. RESULTS: 6,223 STEMI patients were admitted to 82 acute-care hospitals in Switzerland and treated with primary angioplasty within six hours of symptom onset. Only the 24-hour harmonic was significantly associated with peak CK (p = 0.0001). The maximum average peak CK value (2,315 U/L) was for patients with symptom onset at 23:00, whereas the minimum average (2,017 U/L) was for onset at 11:00. The amplitude of variation was 298 U/L. In addition, no correlation was observed between ischemic time and circadian peak CK variation. Of the 6,223 patients, 223 (3.58%) died during the index hospitalization. Remarkably, only the 24-hour harmonic was significantly associated with in-hospital mortality. The risk of death from STEMI was highest for patients with symptom onset at 00:00 and lowest for those with onset at 12:00. DISCUSSION: In this first large study of STEMI patients treated with primary angioplasty in Swiss hospitals, a circadian pattern was confirmed in both peak CK and in-hospital mortality, independent of total ischemic time. Accordingly, this study proposes that symptom onset time be incorporated as a prognostic factor in patients with myocardial infarction.
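A sketch of a first-order (24-hour) harmonic regression of peak CK on symptom-onset hour, fitted by least squares; the onset times and CK values are simulated (roughly matching the magnitudes quoted above), not AMIS Plus data:

```python
# 24-hour harmonic regression sketch: peak CK modelled as a sinusoid of the
# symptom-onset hour via ordinary least squares, on simulated data whose mean,
# amplitude and peak time mimic the figures quoted in the abstract.
import numpy as np

rng = np.random.default_rng(3)
onset_hour = rng.uniform(0, 24, 6223)
peak_ck = (2166 + 149 * np.cos(2 * np.pi * (onset_hour - 23) / 24)
           + rng.normal(0, 400, onset_hour.size))          # simulated peak around 23:00

X = np.column_stack([np.ones_like(onset_hour),
                     np.sin(2 * np.pi * onset_hour / 24),
                     np.cos(2 * np.pi * onset_hour / 24)])
beta, *_ = np.linalg.lstsq(X, peak_ck, rcond=None)
amplitude = np.hypot(beta[1], beta[2])
peak_time = (np.arctan2(beta[1], beta[2]) * 24 / (2 * np.pi)) % 24
print(f"mean={beta[0]:.0f} U/L, amplitude={amplitude:.0f} U/L, peak at ~{peak_time:.1f} h")
# Higher harmonics (12 h, 8 h, ...) can be added as extra sin/cos columns; a
# logistic analogue with the same regressors would model in-hospital mortality.
```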