946 results for Polynomial Invariants


Relevance: 10.00%

Abstract:

All-optical label swapping (AOLS) is a key technology for implementing all-optical packet switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e., the number of labels used), since a dedicated optical device is needed for each label recognized at every node. Label space size is affected by the way demands are routed: shortest-path routing uses fewer labels but yields high link utilization, whereas minimum-interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture that aims at reducing label spaces while easing the trade-off with link utilization. An integer linear program is proposed to analyze how AOLStack softens this trade-off, and a heuristic for finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with only a small increase in link utilization or b) makes better use of the residual bandwidth to decrease the number of labels even further.
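The abstract does not spell out the formulation, but the flavour of such a model can be sketched as a toy integer program: the topology, candidate paths, label-count proxy, and objective weights below are all hypothetical, and PuLP is used only as a convenient modelling layer.

```python
import pulp

# Hypothetical 4-node ring: links with capacities (toy data, not the paper's).
links = {("A", "B"): 10, ("B", "C"): 10, ("C", "D"): 10, ("A", "D"): 10}

# Candidate paths per demand, each path given as a list of links.
paths = {
    ("A", "C"): [[("A", "B"), ("B", "C")], [("A", "D"), ("C", "D")]],
    ("B", "D"): [[("B", "C"), ("C", "D")], [("A", "B"), ("A", "D")]],
}
volume = {("A", "C"): 6, ("B", "D"): 6}

def nodes_on(path):
    return {n for link in path for n in link}

prob = pulp.LpProblem("labels_vs_utilization", pulp.LpMinimize)

# x[d, k] = 1 if demand d is routed over its k-th candidate path.
x = {(d, k): pulp.LpVariable(f"x_{d[0]}{d[1]}_{k}", cat="Binary")
     for d in paths for k in range(len(paths[d]))}
u_max = pulp.LpVariable("u_max", lowBound=0)      # maximum link utilization

# Each demand is routed over exactly one candidate path.
for d in paths:
    prob += pulp.lpSum(x[d, k] for k in range(len(paths[d]))) == 1

# Link load divided by capacity is bounded by the maximum utilization.
for link, cap in links.items():
    load = pulp.lpSum(volume[d] * x[d, k]
                      for d in paths for k, p in enumerate(paths[d]) if link in p)
    prob += load <= cap * u_max

# Crude proxy for label-space size: one label per traversed node per demand.
label_count = pulp.lpSum(len(nodes_on(p)) * x[d, k]
                         for d in paths for k, p in enumerate(paths[d]))

alpha, beta = 1.0, 10.0          # hypothetical weights on the two objectives
prob += alpha * label_count + beta * u_max
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("labels used (proxy):", pulp.value(label_count),
      "| max utilization:", u_max.value())
```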

Relevance: 10.00%

Abstract:

Remote sensing and geographical information technologies were used to discriminate areas of high and low risk for contracting kala-azar, or visceral leishmaniasis. Satellite data were digitally processed to generate maps of land cover and spectral indices, such as the normalised difference vegetation index and the wetness index. Local polynomial interpolations based on weighting values were used to map estimated vector abundance and indoor climate data. Attribute layers based on illiteracy and the unemployed proportion of the population were prepared and associated with village boundaries. Pearson's correlation coefficient was used to estimate the relationship between environmental variables and disease incidence across the study area. The cell values for each input raster in the analysis were assigned values from the evaluation scale, and simple weightings/ratings based on how favourable conditions are for kala-azar transmission were applied to all variables, leading to a geo-environmental risk model. Variables such as land use/land cover, vegetation conditions, surface dampness, indoor climate, illiteracy rates and the size of the unemployed population were considered for inclusion in the geo-environmental kala-azar risk model. The risk model was stratified into areas of "risk" and "non-risk" for the disease, based on the calculation of risk indices. The described approach constitutes a promising tool for microlevel kala-azar surveillance and aids in directing control efforts.
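The weighted-overlay step can be illustrated with a minimal sketch; the layer ratings, weights, and risk cut-off below are hypothetical placeholders, not the calibrated values of the study.

```python
import numpy as np

# Each raster layer holds per-cell ratings on a common evaluation scale
# (e.g. 1 = least, 4 = most favourable for kala-azar transmission).
ndvi_rating       = np.array([[1, 2], [3, 4]])
wetness_rating    = np.array([[2, 2], [3, 3]])
landcover_rating  = np.array([[1, 3], [2, 4]])
illiteracy_rating = np.array([[2, 3], [1, 4]])

layers  = [ndvi_rating, wetness_rating, landcover_rating, illiteracy_rating]
weights = [0.3, 0.3, 0.2, 0.2]          # hypothetical layer weights, sum to 1

# Geo-environmental risk index: weighted sum of the rated layers.
risk_index = sum(w * layer for w, layer in zip(weights, layers))

# Stratify into "risk" / "non-risk" with a hypothetical cut-off.
risk_map = np.where(risk_index >= 2.5, "risk", "non-risk")
print(risk_index)
print(risk_map)
```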

Relevance: 10.00%

Abstract:

This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other. Moreover, we explain the workings of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
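A compact sketch of the PGP idea for a single mean-variance-skewness portfolio follows; the simulated returns, preference powers, and solver choice are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(500, 3))        # simulated returns, 3 assets

def moments(w):
    port = R @ w
    return port.mean(), port.var(), ((port - port.mean()) ** 3).mean()

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
bnds = [(0, 1)] * 3
w0 = np.full(3, 1 / 3)

# Stage 1: ideal (aspired) levels of each moment taken separately.
mean_star = -minimize(lambda w: -moments(w)[0], w0, bounds=bnds, constraints=cons).fun
var_star  =  minimize(lambda w:  moments(w)[1], w0, bounds=bnds, constraints=cons).fun
skew_star = -minimize(lambda w: -moments(w)[2], w0, bounds=bnds, constraints=cons).fun

# Stage 2: minimise a polynomial of the deviations from the ideal levels.
lam = (1, 1, 1)                                   # hypothetical preference powers
def pgp_objective(w):
    m, v, s = moments(w)
    d = (max(mean_star - m, 0), max(v - var_star, 0), max(skew_star - s, 0))
    return sum(di ** li for di, li in zip(d, lam))

w_pgp = minimize(pgp_objective, w0, bounds=bnds, constraints=cons).x
print("PGP portfolio weights:", w_pgp.round(3))
```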

Relevance: 10.00%

Abstract:

OBJECTIVE: Previous research suggested that proper blood pressure (BP) management in acute stroke may need to take into account the underlying etiology. METHODS: All patients with acute ischemic stroke registered in the ASTRAL registry between 2003 and 2009 were analyzed. Unfavorable outcome was defined as a modified Rankin Scale score >2. A local polynomial surface algorithm was used to assess the effect of baseline and 24- to 48-hour systolic BP (SBP) and mean arterial pressure (MAP) on outcome in patients with lacunar, atherosclerotic, and cardioembolic stroke. RESULTS: A total of 791 patients were included in the analysis. For lacunar and atherosclerotic strokes, there was no difference in the predicted probability of unfavorable outcome between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, or >160 mm Hg (15.3% vs 12.1% vs 20.8%, respectively, for lacunar, p = 0.15; 41.0% vs 41.5% vs 45.5%, respectively, for atherosclerotic, p = 0.75), or between patients with BP increase vs decrease at 24-48 hours (18.7% vs 18.0%, respectively, for lacunar, p = 0.84; 43.4% vs 43.6%, respectively, for atherosclerotic, p = 0.88). For cardioembolic strokes, an increase of BP at 24-48 hours was associated with a higher probability of unfavorable outcome compared to BP reduction (53.4% vs 42.2%, respectively, p = 0.037). Also, the predicted probability of unfavorable outcome was significantly different between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, and >160 mm Hg (34.8% vs 42.3% vs 52.4%, respectively, p < 0.01). CONCLUSIONS: This study provides evidence that BP management in acute stroke may need to be tailored to the underlying etiopathogenetic mechanism.
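As an illustration of the local-polynomial idea, the sketch below fits a degree-1, tricube-weighted smoother of outcome probability against admission SBP; the data, bandwidth, and risk curve are simulated, not ASTRAL registry values.

```python
import numpy as np

rng = np.random.default_rng(1)
sbp = rng.uniform(100, 220, 500)                         # admission SBP, mm Hg
p_true = 1 / (1 + np.exp(-(sbp - 170) / 15))             # simulated risk curve
unfavourable = (rng.random(500) < p_true).astype(float)  # mRS > 2 indicator

def local_poly(x0, x, y, bandwidth=25.0):
    """Fit a tricube-weighted degree-1 polynomial around x0; return the fit at x0."""
    u = np.abs(x - x0) / bandwidth
    w = np.sqrt(np.where(u < 1, (1 - u ** 3) ** 3, 0.0))  # tricube kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta[0]

for x0 in range(120, 201, 20):
    print(f"SBP {x0} mm Hg -> predicted P(unfavourable outcome) = "
          f"{local_poly(x0, sbp, unfavourable):.2f}")
```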

Relevance: 10.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the best among the highest-scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest-scoring gene can be stored and updated; this requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Instead, the definition of valid gene structures is given externally in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
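The core of the linear-time idea can be sketched as follows, omitting frame compatibility and the full Gene Model; exon coordinates and scores are hypothetical.

```python
# Sweep exons by increasing acceptor position while folding in, from a second
# sweep ordered by donor position, every exon whose donor lies strictly
# upstream; the best score of an upstream-ending gene is stored and updated.
exons = [  # (acceptor_start, donor_end, score) -- hypothetical candidates
    (10, 50, 3.0), (60, 120, 2.5), (55, 90, 1.0), (130, 180, 4.0),
]

by_acceptor = sorted(exons, key=lambda e: e[0])
by_donor = sorted(exons, key=lambda e: e[1])

best_upstream = 0.0   # best total score of a gene ending upstream so far
best_ending_at = {}   # exon -> best score of a gene ending with that exon
j = 0                 # pointer into the donor-ordered list

for acc, don, score in by_acceptor:
    # Fold in every exon whose donor ends before this acceptor begins.
    while j < len(by_donor) and by_donor[j][1] < acc:
        best_upstream = max(best_upstream, best_ending_at[by_donor[j]])
        j += 1
    best_ending_at[(acc, don, score)] = best_upstream + score

print("highest-scoring gene:", max(best_ending_at.values()))
```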

Relevance: 10.00%

Abstract:

Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with the minimum number of additional recombinations. We describe a polynomial-time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (where sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of human Chromosome X data, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect continental as well as population divides based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in our future work we plan to investigate more non-traditional questions via these structures computed by our methodology.

Relevance: 10.00%

Abstract:

Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
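The multiplicative property can be illustrated with the standard example, Shamir's scheme over a prime field: share-wise products of two sharings lie on a polynomial of twice the degree whose constant term is the product of the secrets. The parameters below are illustrative only.

```python
import random

random.seed(0)
P = 2**13 - 1           # small prime modulus (8191)
T = 2                   # degree of the sharing polynomial (threshold t)
N = 2 * T + 1           # players needed to reconstruct a product of secrets

def share(secret, degree, n, prime=P):
    """Shamir sharing: evaluate a random degree-`degree` polynomial with
    constant term `secret` at points 1..n."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(degree)]
    return {i: sum(c * pow(i, k, prime) for k, c in enumerate(coeffs)) % prime
            for i in range(1, n + 1)}

def reconstruct(shares, prime=P):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, yi in shares.items():
        li = 1
        for j in shares:
            if j != i:
                li = li * j * pow(j - i, -1, prime) % prime
        secret = (secret + yi * li) % prime
    return secret

a, b = 1234, 567
shares_a = share(a, T, N)
shares_b = share(b, T, N)

# Share-wise products lie on a degree-2t polynomial with constant term a*b,
# so 2t+1 players holding the locally multiplied shares can recover the product.
product_shares = {i: shares_a[i] * shares_b[i] % P for i in range(1, N + 1)}
assert reconstruct(product_shares) == a * b % P
print("reconstructed product (mod P):", reconstruct(product_shares))
```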

Relevance: 10.00%

Abstract:

The properties and cosmological importance of a class of non-topological solitons, Q-balls, are studied. Aspects of Q-ball solutions and Q-ball cosmology discussed in the literature are reviewed. Q-balls are particularly considered in the Minimal Supersymmetric Standard Model with supersymmetry broken by a hidden-sector mechanism mediated by either gravity or gauge interactions. Q-ball profiles, charge-energy relations and evaporation rates for realistic Q-ball profiles are calculated for general polynomial potentials and for the gravity-mediated scenario. In all cases, the evaporation rates are found to increase with decreasing charge. Q-ball collisions are studied by numerical means in the two supersymmetry breaking scenarios. It is noted that the collision processes can be divided into three types: fusion, charge transfer and elastic scattering. Cross-sections are calculated for the different types of processes in the different scenarios. The formation of Q-balls from the fragmentation of the Affleck-Dine condensate is studied by numerical and analytical means. The charge distribution is found to depend strongly on the initial energy-charge ratio of the condensate. The final state is typically found to consist of Q-balls and anti-Q-balls in a state of maximum entropy. By studying the relaxation of excited Q-balls, the rate at which excess energy can be emitted is calculated in the gravity-mediated scenario. The Q-ball is also found to withstand excess energy well without significant charge loss. The possible cosmological consequences of these Q-ball properties are discussed.

Relevance: 10.00%

Abstract:

Objective: To assess the level of hemoglobin (Hb) during pregnancy before and after the fortification of flours with iron. Method: A cross-sectional study with data from 12,119 pregnant women attended in public prenatal care services in five macro-regions of Brazil. The sample was divided into two groups: before fortification (delivery before June 2004) and after fortification (last menstruation after June 2005). Hb curves were compared with national and international references. Polynomial regression models were built, with a significance level of 5%. Results: Although Hb levels were higher in all gestational months after fortification, the polynomial regression did not show a fortification effect (p = 0.3). Curves in the two groups were above the references in the first trimester, with a subsequent decrease and stabilization at the end of pregnancy. Conclusion: Although a fortification effect was not confirmed, the study presents the variation of Hb levels during pregnancy, which is important for care practice and the evaluation of public policies.
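A minimal sketch of the polynomial-regression step is shown below; the gestational-month Hb values, noise, and polynomial degree are simulated assumptions, not the study data.

```python
import numpy as np

rng = np.random.default_rng(2)
month = np.tile(np.arange(1, 10), 50)                       # gestational month
hb_before = 12.5 - 0.35 * month + 0.03 * month**2 + rng.normal(0, 0.6, month.size)
hb_after  = 12.7 - 0.35 * month + 0.03 * month**2 + rng.normal(0, 0.6, month.size)

# Second-degree polynomial fit of Hb on gestational month, per group.
coef_before = np.polyfit(month, hb_before, deg=2)
coef_after  = np.polyfit(month, hb_after, deg=2)

for m in range(1, 10):
    print(f"month {m}: fitted Hb before = {np.polyval(coef_before, m):.2f} g/dL, "
          f"after = {np.polyval(coef_after, m):.2f} g/dL")
```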


Relevance: 10.00%

Abstract:

BACKGROUND: Different studies have shown circadian variation of ischemic burden among patients with ST-elevation myocardial infarction (STEMI), but with controversial results. The aim of this study was to analyze circadian variation of myocardial infarction size and in-hospital mortality in a large multicenter registry. METHODS: This retrospective, registry-based study was based on data from AMIS Plus, a large multicenter Swiss registry of patients who suffered myocardial infarction between 1999 and 2013. Peak creatine kinase (CK) was used as a proxy measure for myocardial infarction size. Associations between peak CK, in-hospital mortality, and the time of day at symptom onset were modelled using polynomial-harmonic regression methods. RESULTS: 6,223 STEMI patients were admitted to 82 acute-care hospitals in Switzerland and treated with primary angioplasty within six hours of symptom onset. Only the 24-hour harmonic was significantly associated with peak CK (p = 0.0001). The maximum average peak CK value (2,315 U/L) was for patients with symptom onset at 23:00, whereas the minimum average (2,017 U/L) was for onset at 11:00. The amplitude of variation was 298 U/L. In addition, no correlation was observed between ischemic time and circadian peak CK variation. Of the 6,223 patients, 223 (3.58%) died during the index hospitalization. Remarkably, only the 24-hour harmonic was significantly associated with in-hospital mortality. The risk of death from STEMI was highest for patients with symptom onset at 00:00 and lowest for those with onset at 12:00. DISCUSSION: In this first large study of STEMI patients treated with primary angioplasty in Swiss hospitals, a circadian pattern was confirmed in both peak CK and in-hospital mortality, independent of total ischemic time. Accordingly, this study proposes that symptom onset time be incorporated as a prognostic factor in patients with myocardial infarction.
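The 24-hour harmonic component of such a polynomial-harmonic model can be sketched as a sine/cosine regression; the simulated onset times, amplitude, and acrophase below are illustrative, not AMIS Plus estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
onset_h = rng.uniform(0, 24, 2000)                               # onset time (h)
peak_ck = (2150 + 150 * np.cos(2 * np.pi * (onset_h - 23) / 24)
           + rng.normal(0, 400, onset_h.size))                   # peak CK, U/L

# Design matrix: intercept plus the 24-hour harmonic (sine/cosine pair).
X = np.column_stack([np.ones_like(onset_h),
                     np.sin(2 * np.pi * onset_h / 24),
                     np.cos(2 * np.pi * onset_h / 24)])
beta, *_ = np.linalg.lstsq(X, peak_ck, rcond=None)

amplitude = np.hypot(beta[1], beta[2])                           # harmonic amplitude
acrophase = (np.degrees(np.arctan2(beta[1], beta[2])) / 15) % 24  # hour of the peak
print(f"mesor {beta[0]:.0f} U/L, amplitude {amplitude:.0f} U/L, "
      f"peak CK around {acrophase:.1f} h")
```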

Relevance: 10.00%

Abstract:

One aspect of person-job fit reflects congruence between personal preferences and job design; as congruence increases so should satisfaction. We hypothesized that power distance would moderate whether fit is related to satisfaction with degree of job formalization. We obtained measures of job-formalization, fit and satisfaction, as well as organizational commitment from employees (n = 772) in a multinational firm with subsidiaries in six countries. Confirming previous findings, individuals from low power-distance cultures were most satisfied with increasing fit. However, the extent to which individuals from high power-distance cultures were satisfied did not necessarily depend on increasing fit, but mostly on whether the degree of formalization received was congruent to cultural norms. Irrespective of culture, satisfaction with formalization predicted a broad measure of organizational commitment. Apart from our novel extension of fit theory, we show how moderation can be tested in the context of polynomial response surface regression and how specific hypotheses can be tested regarding different points on the response surface.
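A minimal sketch of a quadratic (Edwards-style) response-surface regression of satisfaction on preferred versus received formalization is shown below; the simulated data and variable names are assumptions, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 772
pref = rng.normal(0, 1, n)                 # preferred degree of formalization
recv = rng.normal(0, 1, n)                 # formalization actually received
satisfaction = 4 - 0.5 * (pref - recv) ** 2 + rng.normal(0, 0.5, n)

# Quadratic surface: b0 + b1*P + b2*R + b3*P^2 + b4*P*R + b5*R^2
X = np.column_stack([np.ones(n), pref, recv, pref**2, pref * recv, recv**2])
b, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)

# Surface tests along the congruence (P = R) and incongruence (P = -R) lines.
slope_congruence   = b[1] + b[2]           # linear slope along the fit line
curve_incongruence = b[3] - b[4] + b[5]    # curvature along the misfit line
print(f"slope along fit line: {slope_congruence:.2f}; "
      f"curvature along misfit line: {curve_incongruence:.2f} "
      "(negative => satisfaction falls as misfit grows)")
```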

Relevance: 10.00%

Abstract:

This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that a) this approach is very robust to the choice of different alternative polynomials and b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
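A compact sketch of the Longstaff-Schwartz procedure for an American put follows; the GBM parameters and the plain power-polynomial basis are illustrative choices.

```python
import numpy as np

def lsmc_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                      steps=50, paths=100_000, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion price paths.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))

    cash = np.maximum(K - S[:, -1], 0.0)           # payoff at maturity
    for t in range(steps - 2, -1, -1):
        cash *= disc                               # discount back one step
        itm = K - S[:, t] > 0                      # regress on in-the-money paths
        if not itm.any():
            continue
        # Least-squares regression of continuation value on a power basis.
        coeffs = np.polyfit(S[itm, t], cash[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
    return disc * cash.mean()

print("LSMC American put price ~", round(lsmc_american_put(), 3))
```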

Relevance: 10.00%

Abstract:

The principal objective of knot theory is to provide a simple way of classifying and ordering all the knot types. Here, we propose a natural classification of knots based on their intrinsic position in the knot space that is defined by the set of knots to which a given knot can be converted by individual intersegmental passages. In addition, we characterize various knots using a set of simple quantum numbers that can be determined upon inspection of the minimal crossing diagram of a knot. These numbers include the crossing number, the average three-dimensional writhe, the number of topological domains, and the average relaxation value.

Relevance: 10.00%

Abstract:

In pediatric echocardiography, cardiac dimensions are often normalized for weight, height, or body surface area (BSA). The combined influence of height and weight on cardiac size is complex and likely varies with age. We hypothesized that increasing weight for height, as represented by body mass index (BMI) adjusted for age, is poorly accounted for in Z scores normalized for weight, height, or BSA. We aimed to evaluate whether a bias related to BMI was introduced when proximal aorta diameter Z scores are derived from bivariate models (only one normalizing variable), and whether such a bias was reduced when multivariable models are used. We analyzed 1,422 echocardiograms read as normal in children ≤18 years. We computed Z scores of the proximal aorta using allometric, polynomial, and multivariable models with four body size variables. We then assessed the level of residual association between the Z scores and BMI adjusted for age and sex. In children ≥6 years, we found a significant residual linear association between BMI-for-age and Z scores for most regression models. Only a multivariable model including weight and height as independent predictors produced a Z score free of linear association with BMI. We concluded that a bias related to BMI was present in Z scores of proximal aorta diameter when normalization was done using bivariate models, regardless of the regression model or the normalizing variable. The use of multivariable models with weight and height as independent predictors should be explored to reduce this potential pitfall when pediatric echocardiography reference values are evaluated.
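The bivariate (allometric) normalization and the residual-association check can be sketched as follows; the simulated diameters, exponent, and BMI-for-age values are assumptions, not published reference data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1422
bsa = rng.uniform(0.4, 2.0, n)                              # body surface area, m^2
aorta = 20.0 * bsa ** 0.5 * np.exp(rng.normal(0, 0.08, n))  # aortic diameter, mm
bmi_z = rng.normal(0, 1, n)                                 # BMI-for-age Z score

# Allometric (bivariate) model: ln(diameter) = a + b * ln(BSA) + error.
b, a = np.polyfit(np.log(bsa), np.log(aorta), deg=1)
residual = np.log(aorta) - (a + b * np.log(bsa))
z_score = residual / residual.std(ddof=1)

# Residual association between the Z scores and BMI-for-age; an unbiased
# normalization should leave essentially no correlation.
r = np.corrcoef(z_score, bmi_z)[0, 1]
print(f"allometric exponent b = {b:.2f}, corr(Z, BMI-for-age) = {r:.2f}")
```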