928 results for Hypergraph Partitioning
Abstract:
In a complex multitrophic plant-animal interaction system in which there are direct and indirect interactions between species, comprehending the dynamics of these multiple partners is essential for understanding how the system is structured. We investigated the plant Ficus racemosa L. (Moraceae) and its community of obligatory mutualistic and parasitic fig wasps (Hymenoptera: Chalcidoidea) that develop within the fig inflorescence or syconium, as well as their interaction with opportunistic ants. We focused on temporal resource partitioning among members of the fig wasp community over the development cycle of the fig syconia, during which wasp oviposition and development occur, and we studied the activity rhythm of the ants associated with this community. We found that the seven members of the wasp community partitioned their oviposition across fig syconium development phenology and showed interspecific variation in activity across the day-night cycle. The wasps presented a distinct sequence in their arrival at fig syconia for oviposition, with the parasitoid wasps following the galling wasps. Although fig wasps are known to be largely diurnal, we documented night oviposition in several fig wasp species for the first time. Ant activity on the fig syconia was correlated with wasp activity and depended on whether the ants were predatory or trophobiont-tending species; only the numbers of predatory ants increased during peak arrivals of the wasps.
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k ≥ 3. The best known polynomial time approximation algorithms require n^δ colours (for a positive constant δ depending on k) to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than algorithms of the first type. Our algorithms work as long as p(n) ≥ n^(-1+ε), where ε is a constant greater than 1/4.
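A minimal sketch (not the paper's algorithms) of the random model described above: the vertex set is first partitioned into k colour classes, and each allowed edge, i.e. one joining two distinct classes, is then included independently with probability p(n). Drawing the partition uniformly at random is one possible instantiation; the function name `planted_k_colourable` is illustrative.

```python
import random

def planted_k_colourable(n, k, p, seed=0):
    """Sample a k-colourable graph: partition n vertices into k colour
    classes, then keep each allowed (cross-class) edge with probability p."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in range(n)}     # hidden partition into k classes
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if colour[u] != colour[v] and rng.random() < p]
    return colour, edges

# Example: n = 100 vertices, k = 3 planted classes, edge probability p(n) = n**(-0.75)
colouring, edges = planted_k_colourable(100, 3, 100 ** -0.75)
print(len(edges), "edges; the planted colouring is proper by construction")
```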
Abstract:
Condensation from the vapor state is an important technique for the preparation of nanopowders. Levitational gas condensation is one such technique, with the unique ability of attaining a steady state. Here, we present the results of applying this technique to an iron-copper alloy (96Fe-4Cu). A qualitative model is proposed to understand the process and the characteristics of the resultant powder. A phase diagram of the alloy system in the liquid-vapor region was calculated to help understand the course of condensation, especially partitioning and coring during processing. The phase diagram could not explain coring in view of the simultaneous occurrence of solidification and the fast homogenization through diffusion in the nanoparticles; however, it could predict the very low levels of copper observed in the levitated drop. The enrichment of copper observed near the surface of the powder was considered to be a manifestation of the lower surface energy of copper compared with that of iron. Heat transfer calculations indicated that most condensed particles can undergo solidification even while they are still in the proximity of the levitated drop; these calculations also allowed us to predict the temperature and the cooling rate of the powder particles as they move away from the levitated drop. The particles formed by the process appear to be single-domain, single-crystal and magnetic in nature. They can thus agglomerate by forming a chain-like structure, which manifests as a three-dimensional network enclosing a large unoccupied space, as noticed in scanning electron microscopy and transmission electron microscopy studies. This also explains the observed low packing density of the nanopowders.
Abstract:
Clustering is a process of partitioning a given set of patterns into meaningful groups. The clustering process can be viewed as consisting of three phases: (i) a feature selection phase, (ii) a classification phase, and (iii) a description generation phase. Conventional clustering algorithms implicitly use knowledge about the clustering environment to a large extent in the feature selection phase. This reduces the need for environmental knowledge in the remaining two phases, permitting the use of a simple numerical measure of similarity in the classification phase. Conceptual clustering algorithms proposed by Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] and Stepp and Michalski [Artif. Intell., pp. 43–69 (1986)] make use of knowledge about the clustering environment, in the form of a set of predefined concepts, to compute the conceptual cohesiveness during the classification phase. Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] have argued that the results obtained with conceptual clustering algorithms are superior to those of conventional methods of numerical classification. However, this claim was not supported by the experimental results obtained by Dale [IEEE Trans. PAMI, PAMI-7, 241–244 (1985)]. In this paper a theoretical framework, based on an intuitively appealing set of axioms, is developed to characterize the equivalence between conceptual clustering and conventional clustering. In other words, it is shown that any classification obtained using conceptual clustering can also be obtained using conventional clustering, and vice versa.
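As a rough illustration of the three-phase view (not of the conceptual clustering algorithms discussed above), the sketch below separates feature selection, a classification phase based on a simple numerical similarity, and description generation. The variance-based feature selection and the centroid descriptions are illustrative choices, not anything prescribed by the paper.

```python
import numpy as np

def three_phase_clustering(X, k, n_features, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Phase (i): feature selection -- keep the n_features highest-variance columns.
    keep = np.argsort(X.var(axis=0))[-n_features:]
    Xs = X[:, keep]
    # Phase (ii): classification using a simple numerical similarity (Euclidean k-means).
    centres = Xs[rng.choice(len(Xs), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Xs[:, None, :] - centres) ** 2).sum(axis=-1), axis=1)
        centres = np.array([Xs[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    # Phase (iii): description generation -- summarise each cluster by its centroid.
    descriptions = {j: centres[j].round(3) for j in range(k)}
    return labels, descriptions

labels, descriptions = three_phase_clustering(np.random.rand(60, 5), k=3, n_features=2)
print(descriptions)
```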
Abstract:
The K-means clustering algorithm is highly dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of a given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging; in most cases, on data sets having well-separated clusters, the proposed scheme reached the global minimum.
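A minimal sketch of the idea, under assumed details that are not the paper's: each chromosome is a candidate set of K initial centres (row indices of the data), fitness is the within-cluster sum of squares reached by K-means started from those seeds, and simple truncation selection, crossover and mutation operators are used.

```python
import numpy as np

def kmeans_sse(X, centres, iters=15):
    """Run K-means from the given initial centres; return the final SSE."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(axis=-1), axis=1)
        centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(len(centres))])
    return ((X - centres[labels]) ** 2).sum()

def ga_kmeans_seeds(X, k, pop=20, gens=30, seed=0):
    """Search for good K-means seed indices with a simple genetic algorithm."""
    rng = np.random.default_rng(seed)
    popn = [rng.choice(len(X), size=k, replace=False) for _ in range(pop)]
    best_sse, best_seeds = np.inf, popn[0]
    for _ in range(gens):
        scored = sorted(popn, key=lambda c: kmeans_sse(X, X[c]))
        sse = kmeans_sse(X, X[scored[0]])
        if sse < best_sse:
            best_sse, best_seeds = sse, scored[0].copy()
        parents = scored[: pop // 2]                      # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            i, j = rng.choice(len(parents), size=2, replace=False)
            pool = np.union1d(parents[i], parents[j])     # crossover: mix the parents' seeds
            child = rng.choice(pool, size=k, replace=False)
            if rng.random() < 0.2:                        # mutation: replace one seed index
                child[rng.integers(k)] = rng.integers(len(X))
            children.append(child)
        popn = parents + children
    return best_sse, best_seeds

sse, seeds = ga_kmeans_seeds(np.random.rand(200, 2), k=4)
print("best SSE found:", round(float(sse), 3), "seed indices:", seeds)
```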
Abstract:
Cylindrical specimens of commercially pure titanium have been compressed at strain rates in the range of 0.1 to 100 s⁻¹ and temperatures in the range of 25 °C to 400 °C. At strain rates of 10 and 100 s⁻¹, the specimens exhibited adiabatic shear bands. At lower strain rates, the material deformed in an inhomogeneous fashion. These material-related instabilities are examined in the light of the "phenomenological model" and the "dynamic materials model". It is found that the regime of adiabatic shear band formation is predicted by the phenomenological model, while the dynamic materials model is able to predict the inhomogeneous deformation zone. The criterion based on power partitioning can predict the variations within the inhomogeneous deformation zone.
Abstract:
Periglacial processes act in cold, non-glacial regions where landscape development is mainly controlled by frost activity. Approximately 25 percent of the Earth's surface can be considered periglacial. Geographical Information Systems combined with advanced statistical modeling methods provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modeling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modeling methods, Generalized Linear Models and Generalized Additive Models, 4) study the effect of modeling resolution on prediction, and 5) study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points that were collected during the summers of 2009 and 2010 in the study area at Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbations, slope processes, weathering, deflation, nivation and fluvial processes. The features were modeled using Generalized Linear Models (GLM) and Generalized Additive Models (GAM) based on Poisson errors. The abundance of periglacial features was predicted from these models onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modeling resolution was investigated in a small independent study area with a spatial resolution of 0.01 hectare. The models explained 45-70% of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data. Spearman's correlation between the observed and predicted values varied from 0.258 to 0.754. Based on explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modeling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The models' ability to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modeling methods were small, although the explained deviance was higher with GLMs than with GAMs. In turn, GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which had strong correlations with many other explanatory variables. Ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
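A hedged sketch of the kind of workflow described above, using synthetic data rather than the Kilpisjärvi observations: a Poisson-error GLM is fitted to feature counts, predictions are made for held-out points, and agreement is scored with Spearman's correlation. The predictor names (`altitude`, `slope`, `ndvi`) and the train/test split are placeholders, not the study's variables.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 369  # same number of observation points as in the study; the data here are synthetic
altitude, slope, ndvi = rng.normal(size=(3, n))
counts = rng.poisson(np.exp(0.4 * altitude + 0.2 * slope - 0.1 * ndvi))

X = sm.add_constant(np.column_stack([altitude, slope, ndvi]))
train, test = slice(0, 300), slice(300, n)

# Poisson-error GLM, analogous to the GLMs fitted to periglacial feature counts.
glm = sm.GLM(counts[train], X[train], family=sm.families.Poisson()).fit()
pred = glm.predict(X[test])

# Evaluate predictions on independent data with Spearman's rank correlation.
rho, _ = spearmanr(counts[test], pred)
print(f"explained deviance: {1 - glm.deviance / glm.null_deviance:.2f}, Spearman rho: {rho:.2f}")
```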
Functional Analysis of an Acid Adaptive DNA Adenine Methyltransferase from Helicobacter pylori 26695
Abstract:
HP0593 DNA-(N-6-adenine)-methyltransferase (HP0593 MTase) is a member of a Type III restriction-modification system in Helicobacter pylori strain 26695. HP0593 MTase has been cloned, overexpressed heterologously in Escherichia coli and purified. The recognition sequence of the purified MTase was determined as 5'-GCAG-3' and the site of methylation was found to be adenine. The activity of HP0593 MTase was found to be optimal at pH 5.5, a unique property in the context of the natural adaptation of H. pylori to its acidic niche. A dot-blot assay using antibodies that react specifically with DNA containing the m6A modification confirmed that HP0593 MTase is an adenine-specific MTase. HP0593 MTase occurred as both monomer and dimer in solution, as determined by gel-filtration chromatography and chemical-crosslinking studies. The nonlinear dependence of methylation activity on enzyme concentration indicated that more than one molecule of enzyme was required for activity. Analysis of initial velocity with AdoMet as a substrate showed that two molecules of AdoMet bind to HP0593 MTase, the first such example among Type III MTases. Interestingly, metal ion cofactors such as Co2+, Mn2+ and also Mg2+ stimulated HP0593 MTase activity. Preincubation and isotope-partitioning analyses clearly indicated that the HP0593 MTase-DNA complex is catalytically competent, and suggested that DNA binds to the MTase first, followed by AdoMet. HP0593 MTase shows a distributive mechanism of methylation on DNA having more than one recognition site. Considering the occurrence of the GCAG sequence in the potential promoter regions of physiologically important genes in H. pylori, our results provide impetus for exploring the role of this DNA MTase in the cellular processes of H. pylori.
Abstract:
Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to a register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a higher number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components in clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split decoder design and propose a new energy-aware instruction scheduling algorithm that provides 14.5% and 17.3% benefit in decoder power consumption on average over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines. In the case of register files, we propose two new scheduling algorithms that exploit a limited register snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with performance improvements of 4.81% and 5.34% (9.39% and 11.16%) over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Several researchers have looked into various issues related to the automatic parallelization of sequential programs for multicomputers, but there is a need for a coherent framework that encompasses all these issues. In this paper we present such a framework, which takes best advantage of the multicomputer architecture. We resort to the tiling transformation for iteration space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. We have tried a simple implementation of our scheme on a transputer-based multicomputer [1] and the results are encouraging.
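To make the tiling idea concrete, here is a generic sketch (not the paper's implementation) of partitioning a two-dimensional iteration space into tiles that could each be assigned to a different processor of a multicomputer; the tile size is an illustrative parameter.

```python
import numpy as np

def tiled_matvec_accumulate(A, x, tile=64):
    """Untiled loop: for i in range(N): for j in range(M): y[i] += A[i, j] * x[j].
    The tiling transformation walks the same iteration space tile by tile,
    so each (ii, jj) block becomes a unit of work that can be distributed."""
    N, M = A.shape
    y = np.zeros(N)
    for ii in range(0, N, tile):          # tiles partition the i-dimension
        for jj in range(0, M, tile):      # ...and the j-dimension
            for i in range(ii, min(ii + tile, N)):
                for j in range(jj, min(jj + tile, M)):
                    y[i] += A[i, j] * x[j]
    return y

A, x = np.random.rand(200, 300), np.random.rand(300)
assert np.allclose(tiled_matvec_accumulate(A, x), A @ x)
```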
Abstract:
The reactions of p-nitrophenyl alkanoate esters with dialkylaminopyridine (DAAP) and its related mono- and di-anionic water-soluble derivatives have been studied separately in three different microemulsion (ME) media: (a) an oil-in-water ME (O/W), (b) a water-in-oil ME (W/O) and (c) a bicontinuous ME, in which oil and water are present in nearly comparable amounts. All the ME systems were stabilized by the cationic surfactant cetyltrimethylammonium bromide (CTABr) with butanol as a cosurfactant. The second-order rate constants (k_2) in the microemulsion media were also determined over a phase volume (phi) of approximately 0.13-0.46. In order to account for the effective concentration of the nucleophiles in the aqueous pseudophase, corrected rate constants k_2phi = k_2(1 - phi) were obtained. The rate constants of the corresponding hydrolytic reactions were also examined in CTABr micelles. While the DAAP catalysts were partitioned between the micellar and aqueous pseudophases in the ME, the hydrophobic substrates were found to be mainly confined to the oil-rich phases. The present results indicate that the main effect of the ME media on the hydrolysis reaction is due to both electrostatic effects and substrate partitioning.
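A small numerical illustration of the phase-volume correction quoted above, k_2phi = k_2(1 - phi), using a hypothetical uncorrected rate constant; the values below are not taken from the paper.

```python
# Corrected second-order rate constant k_2phi = k_2 * (1 - phi), as defined above.
k2 = 12.0  # hypothetical uncorrected rate constant, M^-1 s^-1
for phi in (0.13, 0.25, 0.35, 0.46):   # phase volumes spanning the reported range
    print(f"phi = {phi:.2f}  ->  k_2phi = {k2 * (1 - phi):.2f} M^-1 s^-1")
```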
Abstract:
A parallel matrix multiplication algorithm is presented, and studies of its performance and estimation are discussed. The algorithm is implemented on a network of transputers connected in a ring topology. An efficient scheme for partitioning the input matrices is introduced which enables overlapping computation with communication. This makes the algorithm achieve near-ideal speed-up for reasonably large matrices. Analytical expressions for the execution time of the algorithm have been derived by analysing its computation and communication characteristics. These expressions are validated by comparing the theoretical results of the performance with the experimental values obtained on a four-transputer network for both square and irregular matrices. The analytical model is also used to estimate the performance of the algorithm for a varying number of transputers and varying problem sizes. Although the algorithm is implemented on transputers, the methodology and the partitioning scheme presented in this paper are quite general and can be implemented on other processors which have the capability of overlapping computation with communication. The equations for performance prediction can also be extended to other multiprocessor systems.
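A sequential simulation sketch of a ring-style parallel matrix multiplication of the kind described above (assumed details, not the paper's transputer code): A is partitioned into row blocks and B into column blocks, each "processor" multiplies its row block by the column block it currently holds, and the column blocks are rotated around the ring until every pairing has been computed.

```python
import numpy as np

def ring_matmul(A, B, P=4):
    """Simulate P processors in a ring: processor p owns row block p of A and
    initially column block p of B; the column blocks rotate one step per phase."""
    row_blocks = np.array_split(A, P, axis=0)
    col_blocks = np.array_split(B, P, axis=1)
    row_starts = np.cumsum([0] + [b.shape[0] for b in row_blocks])
    col_starts = np.cumsum([0] + [b.shape[1] for b in col_blocks])
    C = np.zeros((A.shape[0], B.shape[1]))
    holding = list(range(P))                    # which column block each processor holds
    for _ in range(P):                          # P phases: local multiply, then shift
        for p in range(P):
            q = holding[p]
            C[row_starts[p]:row_starts[p + 1], col_starts[q]:col_starts[q + 1]] = \
                row_blocks[p] @ col_blocks[q]
        holding = holding[-1:] + holding[:-1]   # rotate blocks to the next processor
    return C

A, B = np.random.rand(9, 7), np.random.rand(7, 10)
assert np.allclose(ring_matmul(A, B), A @ B)
```

In a real implementation the local multiply of one phase would be overlapped with the communication that moves the column block to the next processor, which is the source of the near-ideal speed-up discussed above.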
Abstract:
The isothermal section of the phase diagram for the system NiO-MgO-SiO2 at 1373 K is established. The tie lines between the (NiXMg1-X)O solid solution with rock salt structure and the orthosilicate solid solution (NiYMg1-Y)Si0.5O2, and between the orthosilicate and metasilicate (NiZMg1-Z)SiO3 crystalline solutions, are determined using electron probe microanalysis (EPMA) and lattice parameter measurements on equilibrated samples. Although the monoxides and orthosilicates of Ni and Mg form a continuous range of solid solutions, the metasilicate phase exists only for 0 < Z < 0.096. The activity of NiO in the rock salt solid solution is determined as a function of composition and temperature in the range of 1023 to 1377 K using a solid state galvanic cell. The Gibbs energy of mixing of the monoxide solid solution can be expressed by a pseudo-subregular solution model: Delta G_ex = X(1 - X)[(-2430 + 0.925T)X + (-5390 + 1.758T)(1 - X)] J/mol. The thermodynamic data for the rock salt phase are combined with information on interphase partitioning of Ni and Mg to generate the mixing properties of the orthosilicate and the metasilicate solid solutions. The regular solution model describes the orthosilicate and the metasilicate solid solutions at 1373 K within experimental uncertainties. The regular solution parameter Delta G_ex/[Y(1 - Y)] is -820 (+/-70) J/mol for the orthosilicate solid solution; the corresponding value for the metasilicate solid solution is -220 (+/-150) J/mol. The derived activities for the orthosilicate solid solution are discussed in relation to the intracrystalline ion exchange equilibrium between the M1 and M2 sites. The tie line information, in conjunction with the activity data for the orthosilicate and metasilicate solid solutions, is used to calculate the Gibbs energy changes for the intercrystalline ion exchange reactions. Combining this with the known data for NiSi0.5O2, the Gibbs energies of formation of MgSi0.5O2, MgSiO3, and metastable NiSiO3 are calculated. The Gibbs energy of formation of NiSiO3 from its component oxides is 7.67 (+/-0.6) kJ/mol at 1373 K.
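A small worked evaluation of the pseudo-subregular model quoted above, Delta G_ex = X(1 - X)[(-2430 + 0.925T)X + (-5390 + 1.758T)(1 - X)] J/mol, at illustrative compositions and a temperature within the reported range; the chosen X values are arbitrary.

```python
def delta_g_ex(X, T):
    """Excess Gibbs energy of mixing (J/mol) of the (Ni_X Mg_1-X)O rock salt
    solution, pseudo-subregular model as quoted in the abstract."""
    return X * (1 - X) * ((-2430 + 0.925 * T) * X + (-5390 + 1.758 * T) * (1 - X))

for X in (0.2, 0.5, 0.8):
    print(f"X = {X:.1f}, T = 1373 K  ->  Delta G_ex = {delta_g_ex(X, 1373):.0f} J/mol")
```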
Abstract:
We give a simple linear algebraic proof of the following conjecture of Frankl and Furedi [7, 9, 13]. (Frankl-Furedi Conjecture) If F is a hypergraph on X = {1, 2, 3, ..., n} such that 1 ≤ |E ∩ F| ≤ k for all E, F ∈ F with E ≠ F, then |F| ≤ Σ_{i=0}^{k} C(n-1, i). We generalise a method of Palisse, and our proof technique can be viewed as a variant of the technique used by Tverberg to prove a result of Graham and Pollak [10, 11, 14]. Our proof technique is easily described. First, we derive an identity satisfied by a hypergraph F using its intersection properties. From this identity, we obtain a set of homogeneous linear equations. We then show that this system defines the zero subspace of R^|F|. Finally, the desired bound on |F| is obtained from the bound on the number of linearly independent equations. This proof technique can also be used to prove a more general theorem (Theorem 2). We conclude by indicating how this technique can be generalised to uniform hypergraphs by proving the uniform Ray-Chaudhuri-Wilson theorem. (C) 1997 Academic Press.
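A brute-force sanity check (ours, not part of the proof above) of the stated bound for small n and k: the family of all sets of the form {1} ∪ S with S ⊆ {2, ..., n} and |S| ≤ k has pairwise intersections of size between 1 and k, and its size equals Σ_{i=0}^{k} C(n-1, i), showing that the bound is attained.

```python
from itertools import combinations
from math import comb

n, k = 7, 2
ground = range(2, n + 1)
# Family: the element 1 together with at most k further elements -- any two
# distinct members intersect in a set containing 1 and of size at most k.
family = [frozenset({1}) | frozenset(S)
          for i in range(k + 1) for S in combinations(ground, i)]

assert all(1 <= len(E & F) <= k for E, F in combinations(family, 2))
bound = sum(comb(n - 1, i) for i in range(k + 1))
print(len(family), "sets; Frankl-Furedi bound =", bound)   # equal, so the bound is tight
```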
Abstract:
This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth first search (BFS) of a directed graph, (3) lexicographic depth first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered are ones that do not require too much preprocessing.
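For concreteness, a generic sketch (not one of the algorithms analysed in the paper) of one of the incremental problems listed above: maintaining BFS levels of a directed graph from a fixed source while edges are inserted, re-processing only those vertices whose level actually improves.

```python
from collections import deque

def insert_edge(adj, level, u, v):
    """Incrementally update BFS levels from a fixed source after inserting edge u -> v.
    Only vertices whose level improves are pushed onto the work queue."""
    adj.setdefault(u, []).append(v)
    if level.get(u) is None or (level.get(v) is not None and level[v] <= level[u] + 1):
        return                                   # u unreachable, or v already at least as close
    level[v] = level[u] + 1
    queue = deque([v])
    while queue:                                 # propagate the improvement outwards
        x = queue.popleft()
        for y in adj.get(x, []):
            if level.get(y) is None or level[y] > level[x] + 1:
                level[y] = level[x] + 1
                queue.append(y)

# Source 0; start with edges 0->1 and 1->2, then insert the shortcut 0->2.
adj = {0: [1], 1: [2]}
level = {0: 0, 1: 1, 2: 2}
insert_edge(adj, level, 0, 2)
print(level)   # {0: 0, 1: 1, 2: 1}
```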