960 results for Graph partitioning
Abstract:
A systematic method for constructing trigonometric R-matrices corresponding to the (multiplicity-free) tensor product of any two affinizable representations of a quantum algebra or superalgebra has been developed by the Brisbane group and its collaborators. This method has been referred to as the Tensor Product Graph Method. Here we describe applications of this method to untwisted and twisted quantum affine superalgebras.
Abstract:
Pearl millet landraces from Rajasthan, India, yield significantly less than improved cultivars under optimum growing conditions, but not under stressed conditions. To successfully develop a simulation model for pearl millet, capable of capturing such genotype × environment (G × E) interactions for grain yield, we need to understand the causes of the observed yield interaction. The aim of this paper is to quantify the key parameters that determine the accumulation and partitioning of biomass: the light extinction coefficient, radiation use efficiency (RUE), pattern of dry matter allocation to the leaf blades, the determination of grain number, and the rate and duration of dry matter accumulation into individual grains. We used data on improved cultivars and landraces, obtained from both published and unpublished sources collected at ICRISAT, Patancheru, India. Where possible, the effects of cultivar and axis (main shoot vs. tillers) on these parameters were analysed, as previous research suggested that G × E interactions for grain yield are associated with differences in tillering habit. Our results indicated there were no cultivar differences in extinction coefficient, RUE, and biomass partitioning before anthesis, and differences between axes in biomass partitioning were negligible. This indicates there was no basis for cultivar differences in the potential grain yield. Landraces, however, produced consistently less grain yield for a given rate of dry matter accumulation at anthesis than did improved cultivars. This was caused by a combination of low grain number and small grain size. The latter was predominantly due to a lower grain growth rate, as genotypic differences in the duration of grain filling were relatively small. Main shoot and tillers also had a similar duration of grain filling.
The low grain yield of the landraces was associated with profuse nodal tillering, supporting the hypothesis that grain yield was below the potential yield that could be supported by assimilate availability. We hypothesise this is a survival strategy, which enhances the prospects to escape the effects of stress around anthesis. © 2002 E.J. van Oosterom. Published by Elsevier Science B.V. All rights reserved.
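The yield components described above combine multiplicatively; a minimal sketch, with hypothetical parameter values (not taken from the study), illustrates how a lower grain number and a lower grain growth rate depress yield even when filling duration is the same:

```python
# Hypothetical values for illustration only; grain yield = grain number x
# individual grain mass, where grain mass = growth rate x filling duration.
def grain_yield(grains_per_m2, fill_rate_mg_per_day, fill_days):
    grain_mass_mg = fill_rate_mg_per_day * fill_days
    return grains_per_m2 * grain_mass_mg / 1000.0  # mg -> g per m^2

# Landrace: fewer grains and slower grain growth, similar filling duration
improved = grain_yield(grains_per_m2=20000, fill_rate_mg_per_day=0.45, fill_days=22)
landrace = grain_yield(grains_per_m2=15000, fill_rate_mg_per_day=0.35, fill_days=22)
print(improved, landrace)  # 198.0 vs 115.5 g m^-2
```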
Abstract:
We study partitions of the set of all C(v, 3) triples chosen from a v-set into pairwise disjoint planes with three points per line. Our partitions may contain copies of PG(2, 2) only (Fano partitions) or copies of AG(2, 3) only (affine partitions) or copies of some planes of each type (mixed partitions). We find necessary conditions for Fano or affine partitions to exist. Such partitions are already known in several cases: Fano partitions for v = 8 and affine partitions for v = 9 or 10. We construct such partitions for several sporadic orders, namely, Fano partitions for v = 14, 16, 22, 23, 28, and an affine partition for v = 18. Using these as starter partitions, we prove that Fano partitions exist for v = 7^n + 1, 13^n + 1, 27^n + 1, and affine partitions for v = 8^n + 1, 9^n + 1, 17^n + 1. In particular, both Fano and affine partitions exist for v = 3^(6n) + 1. Using properties of 3-wise balanced designs, we extend these results to show that affine partitions also exist for v = 3^(2n). Similarly, mixed partitions are shown to exist for v = 8^n, 9^n, 11^n + 1.
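The counting conditions behind such partitions are easy to check: PG(2, 2) has 7 lines and AG(2, 3) has 12, so a Fano (respectively affine) partition can exist only if 7 (respectively 12) divides C(v, 3). A small sketch of this divisibility screen (necessary, not sufficient):

```python
from math import comb

def fano_ok(v):
    # a Fano partition splits all C(v, 3) triples into planes of 7 lines each
    return comb(v, 3) % 7 == 0

def affine_ok(v):
    # an affine partition uses copies of AG(2, 3), which has 12 lines
    return comb(v, 3) % 12 == 0

print([v for v in range(7, 30) if fano_ok(v)])
print([v for v in range(9, 30) if affine_ok(v)])
```

The known and constructed orders from the abstract (v = 8, 14, 16, 22, 23, 28 for Fano; v = 9, 10, 18 for affine) all pass this screen.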
Abstract:
Predicting plant leaf area production is required for modelling carbon balance and tiller dynamics in plant canopies. Plant leaf area production can be studied using a framework based on radiation intercepted, radiation use efficiency (RUE) and leaf area ratio (LAR) (ratio of leaf area to net above-ground biomass). The objective of this study was to test this framework for predicting leaf area production of sorghum during vegetative development by examining the stability of the contributing components over a large range of plant density. Four densities, varying from 2 to 16 plants m(-2), were implemented in a field experiment. Plants were either allowed to tiller or were maintained as uniculm by systematic tiller removal. In all cases, intercepted radiation was recorded daily and leaf area and shoot dry matter partitioning were quantified weekly at individual culm level. Up to anthesis, a unique relationship applied between fraction of intercepted radiation and leaf area index, and between shoot dry weight accumulation and amount of intercepted radiation, regardless of plant density. Partitioning of shoot assimilate between leaf, stem and head was also common across treatments up to anthesis, at both plant and culm levels. The relationships of specific leaf area (SLA) and LAR of tillering plants with thermal time (TT) from emergence did not change with plant density. In contrast, SLA of uniculm plants was appreciably lower under low-density conditions at any given TT from emergence. This was interpreted as a consequence of assimilate surplus arising from the inability of the plant to compensate by increasing the leaf area a culm could produce. It is argued that the stability of the extinction coefficient, RUE and plant LAR of tillering plants observed in these conditions provides a reliable way to predict leaf area production regardless of plant density. Crown Copyright © 2002 Published by Elsevier Science B.V. All rights reserved.
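The intercepted radiation / RUE / LAR framework can be sketched as a daily loop; the parameter values below are illustrative assumptions, not those measured in the experiment:

```python
import math

# Illustrative (assumed) parameters: extinction coefficient, radiation use
# efficiency (g dry matter per MJ), and leaf area ratio (m^2 leaf per g biomass)
K, RUE, LAR = 0.4, 1.6, 0.008

def simulate_lai(days, radiation_mj=20.0, lai0=0.1):
    lai = lai0
    biomass = lai / LAR  # net above-ground biomass consistent with initial LAI
    for _ in range(days):
        f = 1.0 - math.exp(-K * lai)       # fraction of radiation intercepted
        biomass += RUE * f * radiation_mj  # shoot dry matter from intercepted light
        lai = LAR * biomass                # leaf area via the stable leaf area ratio
    return lai

print(simulate_lai(30))
```

The point of the framework is that, because K, RUE and LAR were stable across densities up to anthesis, a single loop of this form predicts leaf area production regardless of plant density.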
Abstract:
In this paper, we show that the complete graph K_{10n} can be factored into alpha C_5-factors and beta 1-factors for all non-negative integers alpha and beta satisfying 2*alpha + beta = 10n - 1.
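An edge-count sanity check (not a construction) confirms that the condition 2*alpha + beta = 10n - 1 exactly balances the edges of K_{10n}: each C_5-factor contributes 10n edges (2n disjoint 5-cycles) and each 1-factor contributes 5n edges:

```python
from math import comb

def edge_balance(n, alpha, beta):
    # do alpha C5-factors plus beta 1-factors use every edge of K_{10n}?
    v = 10 * n
    return alpha * v + beta * (v // 2) == comb(v, 2)

n = 2
v = 10 * n
ok = all(edge_balance(n, a, (v - 1) - 2 * a) for a in range((v - 1) // 2 + 1))
print(ok)  # every (alpha, beta) with 2*alpha + beta = 10n - 1 balances the edges
```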
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright © 2003 John Wiley & Sons, Ltd.
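As a point of reference for the explicit branch of this family, a minimal sketch of the explicit Newmark scheme (beta = 0, gamma = 1/2, undamped special case; not the generalized alpha method itself) for a single-degree-of-freedom oscillator m*a + k*u = 0:

```python
import math

def newmark_explicit(m, k, u0, v0, h, steps):
    u, v = u0, v0
    a = -k * u / m                       # initial acceleration from the EOM
    for _ in range(steps):
        u = u + h * v + 0.5 * h * h * a  # displacement predictor (beta = 0)
        a_new = -k * u / m               # acceleration from the new displacement
        v = v + 0.5 * h * (a + a_new)    # trapezoidal velocity update (gamma = 1/2)
        a = a_new
    return u, v

# One period of a unit oscillator (omega = 1, T = 2*pi) nearly returns to the
# initial state when h is well below the stability limit h < 2/omega.
u, v = newmark_explicit(m=1.0, k=1.0, u0=1.0, v0=0.0, h=2 * math.pi / 1000, steps=1000)
print(round(u, 3))  # close to the initial displacement 1.0
```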
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
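For intuition about the underlying problem, here is a crude (non-variance-reduced) Monte Carlo estimator of two-terminal reliability that samples edge failures directly; unlike the evolution-based and hybrid schemes discussed, it degrades badly on highly reliable networks:

```python
import random

def connected(n, edges, s, t):
    # union-find connectivity check on the surviving edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return find(s) == find(t)

def reliability_mc(n, edges, s, t, q, trials=20000, seed=1):
    # each edge fails independently with probability q
    rng = random.Random(seed)
    hits = sum(
        connected(n, [e for e in edges if rng.random() > q], s, t)
        for _ in range(trials)
    )
    return hits / trials

# Two parallel 2-edge paths from 0 to 3; exact reliability is 1 - (1 - p^2)^2
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
print(reliability_mc(4, edges, 0, 3, q=0.1))
```

With q = 0.1 the exact answer is 1 - (1 - 0.81)^2 = 0.9639, and the estimate lands close to it; as q shrinks, ever more samples are needed to see any failure at all, which is exactly the regime the evolution-model techniques target.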
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
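The syndrome-based Slepian-Wolf idea can be illustrated with a toy brute-force decoder standing in for the paper's LDPC belief-propagation machinery; the small parity-check matrix below is an arbitrary example, not one from the paper:

```python
import itertools

# Toy Slepian-Wolf sketch: the encoder sends only the syndrome of x; the
# decoder picks the word with that syndrome closest to the side information y.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(H, x):
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decode(H, s, y):
    # brute force over all words (fine for 6 bits; LDPC decoders scale this up)
    best = min(
        (cand for cand in itertools.product((0, 1), repeat=len(y))
         if syndrome(H, cand) == s),
        key=lambda cand: sum(a != b for a, b in zip(cand, y)),
    )
    return list(best)

x = [1, 0, 1, 1, 0, 0]  # source sequence at the encoder
y = [1, 0, 1, 0, 0, 0]  # side information: x with one bit flipped
print(decode(H, syndrome(H, x), y))  # recovers x from syndrome + side info
```

Multiple SI hypotheses, as in the paper, amount to running such decoders against several candidate y vectors and fusing their soft information.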
Abstract:
Modern multicore processors for the embedded market are often heterogeneous in nature. One feature often available is a set of multiple sleep states with varying transition costs for entering and leaving them. This research effort explores energy-efficient task mapping on such a heterogeneous multicore platform to reduce the overall energy consumption of the system. This is performed in the context of a partitioned scheduling approach and a very realistic power model, which improves on some of the simplifying assumptions often made in the state of the art. The developed heuristic consists of two phases: in the first phase, tasks are allocated to minimise their active energy consumption, while the second phase trades off a higher active energy consumption for an increased ability to exploit savings through more efficient sleep states. Extensive simulations demonstrate the effectiveness of the approach.
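The first phase of such a heuristic can be sketched as a greedy allocation; the task set, utilisations, and energy figures below are hypothetical, and the sleep-state trade-off of the second phase is not modelled:

```python
# Sketch of phase one only: map each task to the core where its active energy
# is lowest, subject to a per-core utilisation bound (values are hypothetical).
def allocate(tasks, cores, cap=1.0):
    load = {c: 0.0 for c in cores}
    mapping = {}
    # place tasks with expensive cheapest-options first
    for name, util, energy in sorted(tasks, key=lambda t: -min(t[2].values())):
        feasible = [c for c in cores if load[c] + util[c] <= cap]
        if not feasible:
            raise ValueError(f"no feasible core for task {name}")
        best = min(feasible, key=lambda c: energy[c])  # cheapest feasible core
        mapping[name] = best
        load[best] += util[best]
    return mapping, load

cores = ["big", "little"]
tasks = [  # (name, utilisation per core, active energy per job per core)
    ("t1", {"big": 0.3, "little": 0.6}, {"big": 5.0, "little": 3.0}),
    ("t2", {"big": 0.4, "little": 0.8}, {"big": 6.0, "little": 4.5}),
    ("t3", {"big": 0.2, "little": 0.4}, {"big": 2.0, "little": 1.5}),
]
mapping, load = allocate(tasks, cores)
print(mapping, load)
```

A second phase would then migrate tasks to consolidate idle time on some cores, accepting higher active energy in exchange for reaching deeper sleep states.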
Abstract:
We derive a framework in integer programming, based on the properties of a linear ordering of the vertices in interval graphs, that acts as an edge completion model for obtaining interval graphs. This model can be applied to problems of sequencing cutting patterns, namely the minimization of open stacks problem (MOSP). By making small modifications to the objective function and using only some of the inequalities, the MOSP model is applied to another pattern sequencing problem that aims to minimize not only the number of stacks but also the order spread (the minimization of the stack occupation problem), and the model is tested.
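For intuition about the MOSP objective, here is a brute-force baseline on a tiny instance (the IP formulation described above replaces this exhaustive search):

```python
import itertools

# patterns[i] is the set of piece types cut by pattern i; a piece's stack is
# open from the first to the last pattern in the sequence that produces it.
def max_open_stacks(order, patterns):
    first, last = {}, {}
    for pos, p in enumerate(order):
        for piece in patterns[p]:
            first.setdefault(piece, pos)
            last[piece] = pos
    return max(
        sum(1 for piece in first if first[piece] <= pos <= last[piece])
        for pos in range(len(order))
    )

def mosp(patterns):
    # minimize the maximum number of simultaneously open stacks over sequences
    return min(
        max_open_stacks(order, patterns)
        for order in itertools.permutations(range(len(patterns)))
    )

patterns = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"d", "a"}]
print(mosp(patterns))  # 3: the cyclic piece structure forces 3 open stacks
```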
Abstract:
The minimum interval graph completion problem consists of, given a graph G = (V, E), finding a supergraph H = (V, E ∪ F) that is an interval graph, while adding the least number of edges |F|. We present an integer programming formulation for solving the minimum interval graph completion problem, resorting to a characterization of interval graphs that produces a linear ordering of the maximal cliques of the solution graph.
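The clique-ordering characterization invoked here can be checked by brute force on tiny graphs: G is an interval graph iff its maximal cliques admit a linear order in which the cliques containing each vertex are consecutive. A sketch (exhaustive, so only suitable for very small graphs):

```python
import itertools

def maximal_cliques(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cliques = [
        set(sub)
        for r in range(1, len(vertices) + 1)
        for sub in itertools.combinations(vertices, r)
        if all(b in adj[a] for a, b in itertools.combinations(sub, 2))
    ]
    return [c for c in cliques if not any(c < d for d in cliques)]

def is_interval(vertices, edges):
    # try every linear order of maximal cliques for the consecutiveness property
    for order in itertools.permutations(maximal_cliques(vertices, edges)):
        if all(
            (lambda pos: pos == list(range(pos[0], pos[-1] + 1)) if pos else True)(
                [i for i, c in enumerate(order) if v in c]
            )
            for v in vertices
        ):
            return True
    return False

V = [0, 1, 2, 3]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle: not an interval graph
print(is_interval(V, C4), is_interval(V, C4 + [(0, 2)]))  # False True
```

Adding the single chord (0, 2) completes the 4-cycle into an interval graph, so |F| = 1 for this instance; the IP formulation searches for such minimum completions via the same clique-ordering structure.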