928 results for Hypergraph Partitioning
Abstract:
The association of sigma factors with RNA polymerase dictates the expression profile of a bacterial cell. Major changes to the transcription profile are achieved by the use of multiple sigma factors that confer distinct promoter selectivity on the holoenzyme. The cellular concentration of a sigma factor is regulated by diverse mechanisms involving transcription, translation and post-translational events. The number of sigma factors varies substantially across bacteria. The interactions between sigma factors are also diverse, ranging from collaboration and competition to partial redundancy in some cellular or environmental contexts. These interactions can be rationalized by a mechanistic model referred to as the partitioning of sigma space model of bacterial transcription. The structural similarity between different sigma/anti-sigma complexes, despite poor sequence conservation and differing cellular localization, reveals an elegant route to incorporate diverse regulatory mechanisms within a structurally conserved scaffold. These features are described here with a focus on sigma/anti-sigma complexes from Mycobacterium tuberculosis. In particular, we discuss recent data on the conditional regulation of sigma/anti-sigma factor interactions. Specific stages of M. tuberculosis infection, such as the latent phase, as well as the remarkable adaptability of this pathogen to diverse environmental conditions, can be rationalized by the synchronized action of different sigma factors.
Abstract:
Programming for parallel architectures that do not have a shared address space is extremely difficult due to the need for explicit communication between the memories of different compute devices. A heterogeneous system with CPUs and multiple GPUs, or a distributed-memory cluster, are examples of such systems. Past works that try to automate data movement for distributed-memory architectures can lead to excessive redundant communication. In this paper, we propose an automatic data movement scheme that minimizes the volume of communication between compute devices in heterogeneous and distributed-memory systems. We show that by partitioning data dependences in a particular non-trivial way, one can generate data movement code that results in the minimum volume for a vast majority of cases. The techniques are applicable to any sequence of affine loop nests and work on top of any choice of loop transformations, parallelization, and computation placement; the generated data movement code minimizes the communication volume for that particular configuration. We use a combination of powerful static analyses relying on the polyhedral compiler framework and lightweight runtime routines they generate to build a source-to-source transformation tool that automatically generates communication code. We demonstrate that the tool is scalable and leads to substantial gains in efficiency. On a heterogeneous system, the communication volume is reduced by a factor of 11X to 83X over the state of the art, translating into a mean execution time speedup of 1.53X. On a distributed-memory cluster, our scheme reduces the communication volume by a factor of 1.4X to 63.5X over the state of the art, resulting in a mean speedup of 1.55X. In addition, our scheme yields a mean speedup of 2.19X over hand-optimized UPC codes.
Abstract:
In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.
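Fractional covering arguments of the kind invoked above can be illustrated on a toy hypergraph (our own example, not one of the paper's constructions): the fractional cover number is the optimum of a small linear program, solved below with SciPy.

```python
from scipy.optimize import linprog

# Toy illustration of a hypergraph fractional cover: weight the vertices
# so that every hyperedge receives total weight >= 1, minimising the
# total weight.  The hypergraph here is a triangle, whose fractional
# cover number is 1.5, while any integral cover needs 2 vertices.
edges = [(0, 1), (1, 2), (2, 0)]
n = 3
c = [1.0] * n                                            # total vertex weight
A_ub = [[-1.0 if v in e else 0.0 for v in range(n)] for e in edges]
b_ub = [-1.0] * len(edges)                               # cover constraints
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print(res.fun)  # 1.5 (weight 0.5 on each vertex)
```

The gap between the fractional optimum (1.5) and the integral one (2) is exactly the slack such bounds exploit.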
Abstract:
The efficiency of long-distance acoustic signalling of insects in their natural habitat is constrained in several ways. Acoustic signals are not only subjected to changes imposed by the physical structure of the habitat such as attenuation and degradation but also to masking interference from co-occurring signals of other acoustically communicating species. Masking interference is likely to be a ubiquitous problem in multi-species assemblages, but successful communication in natural environments under noisy conditions suggests powerful strategies to deal with the detection and recognition of relevant signals. In this review we present recent work on the role of the habitat as a driving force in shaping insect signal structures. In the context of acoustic masking interference, we discuss the ecological niche concept and examine the role of acoustic resource partitioning in the temporal, spatial and spectral domains as sender strategies to counter masking. We then examine the efficacy of different receiver strategies: physiological mechanisms such as frequency tuning, spatial release from masking and gain control as useful strategies to counteract acoustic masking. We also review recent work on the effects of anthropogenic noise on insect acoustic communication and the importance of insect sounds as indicators of biodiversity and ecosystem health.
Abstract:
This paper presents a GPU implementation of normalized cuts for the road extraction problem using panchromatic satellite imagery. The roads are extracted in three stages, namely pre-processing, image segmentation and post-processing. Initially, the image is pre-processed to improve the tolerance by reducing the clutter (which mostly represents buildings, vegetation, and fallow regions). The road regions are then extracted using the normalized cuts algorithm, a graph-based partitioning approach whose focus lies in extracting the global impression (perceptual grouping) of an image rather than local features. For the segmented image, post-processing is carried out using the morphological operations of erosion and dilation. Finally, the extracted road image is overlaid on the original image. A GPGPU (General Purpose computing on Graphics Processing Units) approach has been adopted to implement the algorithm on the GPU for fast processing. A performance comparison of the proposed GPU implementation of the normalized cuts algorithm with the earlier CPU implementation is presented. From the results, we observe that the computational advantage of the proposed GPU implementation of normalized cuts grows with image size. A qualitative and quantitative assessment of the segmentation results is also presented.
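The core of normalized cuts can be sketched on the CPU in a few lines (a generic illustration of the Shi-Malik relaxation, not this paper's GPU code): threshold the second-smallest generalized eigenvector of (D - W) y = lambda * D y. The 6-node graph below is invented for the example.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal sketch of the normalized-cuts relaxation: the graph is two
# tight triangles joined by one weak edge, so the second-smallest
# generalized eigenvector separates the two clusters.
W = np.zeros((6, 6))
for i, j, w in [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0),   # cluster A
                (3, 4, 1.0), (3, 5, 1.0), (4, 5, 1.0),   # cluster B
                (2, 3, 0.1)]:                            # weak bridge
    W[i, j] = W[j, i] = w
D = np.diag(W.sum(axis=1))
# Generalized symmetric eigenproblem; eigenvalues come back ascending.
vals, vecs = eigh(D - W, D)
fiedler = vecs[:, 1]                  # second-smallest eigenvector
labels = (fiedler > 0).astype(int)    # threshold at zero -> two segments
print(labels)                         # one label per triangle (polarity may flip)
```

On an image, W would hold pixel affinities, which is where the GPU pays off as the matrix grows.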
Abstract:
The climatic effects of Solar Radiation Management (SRM) geoengineering have often been modeled by simply reducing the solar constant. This is most likely valid only for space sunshades and not for atmosphere- and surface-based SRM methods. In this study, a global climate model is used to evaluate the differences in the climate response to SRM by uniform solar constant reduction and by stratospheric aerosols. Our analysis shows that when the global mean warming from a doubling of CO2 is nearly cancelled by either of these methods, they are similar when important surface and tropospheric climate variables are considered. However, a difference of 1 K in the global mean stratospheric (61-9.8 hPa) temperature is simulated between the two SRM methods. Further, while the global mean surface diffuse radiation increases by ~23% and direct radiation decreases by about 9% in the case of the sulphate aerosol SRM method, both direct and diffuse radiation decrease by similar fractional amounts (~1.0%) when the solar constant is reduced. When CO2 fertilization effects from elevated CO2 concentration levels are removed, the contribution of shaded leaves to gross primary productivity (GPP) increases by 1.8% in aerosol SRM because of increased diffuse light. However, this increase is almost offset by a 15.2% decline in the sunlit contribution due to reduced direct light. Overall, both SRM simulations show similar decreases in GPP (~8%) and net primary productivity (~3%). Based on our results, we conclude that the climate states produced by a reduction in the solar constant and by the addition of aerosols into the stratosphere can be considered almost similar, except for two important aspects: the stratospheric temperature change, with its consequent implications for the dynamics and chemistry of the stratosphere, and the partitioning of direct versus diffuse radiation reaching the surface.
Further, the likely dependence of global hydrological cycle response on aerosol particle size and the latitudinal and height distribution of aerosols is discussed.
Abstract:
Prediction of the queue waiting times of jobs submitted to production parallel batch systems is important for providing overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times for jobs by employing multi-class classification of similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs multi-class classification using Support Vector Machines (SVMs) among all the job classes. The probabilities given by the SVM for the class predicted by kNN and its neighboring classes are used to provide a set of ranges of predicted wait times with associated probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize the wait times of the jobs. Experiments with different production supercomputer job traces show that our prediction strategies give correct predictions for about 77-87% of the jobs, and also result in about 12% better accuracy than the next best existing method. Experiments with our meta-scheduling strategy using different production and synthetic job traces for various system sizes, partitioning schemes and workloads show that it gives much improved performance compared to existing scheduling policies, reducing the overall average queue waiting time of the jobs by about 47%.
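The hierarchical kNN-then-SVM idea can be sketched with scikit-learn on synthetic data (the features, wait-time bins and model settings below are all invented, not the paper's): a kNN regressor gives a point wait-time estimate, and an SVM gives probabilities over wait-time ranges to report alongside it.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

# Toy sketch: jobs described by two features (e.g. requested cores and
# runtime), with a made-up linear-plus-noise wait time in minutes.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
wait = np.clip(X[:, 0] * 30 + X[:, 1] * 5 + rng.normal(0, 5, 200), 0, None)
bins = [0, 60, 180, np.inf]                          # wait-time ranges
y_class = np.digitize(wait, bins) - 1                # class 0, 1 or 2

knn = KNeighborsRegressor(n_neighbors=5).fit(X, wait)    # point estimate
svm = SVC(probability=True, random_state=0).fit(X, y_class)  # range probs

job = np.array([[8.0, 6.0]])                         # a new job's features
point = knn.predict(job)[0]
probs = svm.predict_proba(job)[0]
print(f"point estimate: {point:.0f} min, range probabilities: {probs.round(2)}")
```

A meta-scheduler can then prefer the queue whose predicted range carries the highest probability of a short wait.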
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
Abstract:
The present paper reports a new class of Co-based superalloys that have a gamma-gamma' microstructure and exhibit much lower density compared to other commercially available Co superalloys, including Co-Al-W based alloys. The basic composition is Co-10Al-5Mo (at%) with an addition of 2 at% Ta for stabilization of the gamma' phase. The gamma-gamma' microstructure evolves through solutionising and aging treatment. Using first-principles calculations, we observe that Ta plays a crucial role in stabilizing the gamma' phase. On adding Ta to the basic stoichiometric composition Co-3(Al, Mo), the enthalpy of formation (Delta H-f) of the L1(2) structure (gamma' phase) becomes more negative in comparison to the D0(19) structure. The Delta H-f of the L1(2) structure becomes further negative with the occupancy of Ni and Ti atoms in the lattice, suggesting an increase in the stability of the gamma' precipitates. Among the large number of alloys studied experimentally, the paper presents results of detailed investigations on Co-10Al-5Mo-2Ta, Co-30Ni-10Al-5Mo-2Ta and Co-30Ni-10Al-5Mo-2Ta-2Ti. To evaluate the role of alloying elements, atom probe tomography investigations were carried out to obtain partition coefficients for the constituent elements. The results show strong partitioning of Ni, Al, Ta and Ti into the ordered gamma' precipitates. (C) 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The link between the atmospheric CO2 level and the ventilation state of the deep ocean is poorly understood due to the lack of coherent observations on the partitioning of carbon between atmosphere and ocean. In this Southern Ocean study, we classified the Southern Ocean into different zones based on its hydrological features and binned the variability in latitudinal air-CO2 concentration and its isotopic ratios. Together with air-CO2, we analysed the surface water for the isotopic ratios of dissolved inorganic carbon (DIC). Using the binary mixing approach on the isotopic ratio of atmospheric CO2 and its concentration, we identified the delta C-13 value of the source CO2. The isotopic composition of the source CO2 was around -9.22 +/- 0.26 parts per thousand for the years 2011 and 2012, while a composition of -13.49 +/- 4.07 parts per thousand was registered for the year 2013. We used the delta C-13 of DIC to predict the CO2 composition in air under equilibrium and compared our estimates with actual observations. We suggest that the degeneration of DIC in the presence of warm water in the region was the factor responsible for adding CO2 to the atmosphere above. The place of observation coincides with a zone of high wind speed, which promotes the exsolution of CO2 from sea water. (C) 2015 Elsevier Ltd. All rights reserved.
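Binary-mixing estimates of the source delta C-13 like the one above are commonly obtained as the intercept of a Keeling-style plot (observed delta C-13 against 1/[CO2]). The sketch below uses entirely synthetic numbers with an assumed source of -9.2 parts per thousand; it is an illustration of the method, not the study's data.

```python
import numpy as np

# Keeling-plot sketch of the binary-mixing approach: by mass balance the
# observed d13C is linear in 1/[CO2], and the intercept of that line is
# the d13C of the added source.  Synthetic background air: -8.0 per mil
# at 400 ppm; assumed source: -9.2 per mil.
d13c_source, d13c_bg, c_bg = -9.2, -8.0, 400.0
c_obs = np.array([402.0, 405.0, 410.0, 420.0, 435.0])    # ppm
# mass balance: d_obs * C_obs = d_bg * C_bg + d_src * (C_obs - C_bg)
d13c_obs = (d13c_bg * c_bg + d13c_source * (c_obs - c_bg)) / c_obs
slope, intercept = np.polyfit(1.0 / c_obs, d13c_obs, 1)
print(f"recovered source d13C: {intercept:.2f} per mil")  # -9.20
```

With real measurements the intercept carries an uncertainty, which is where spreads like the +/- 4.07 reported for 2013 come from.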
Abstract:
The broader goal of the research described here is to automatically acquire diagnostic knowledge from documents in the domain of manual and mechanical assembly of aircraft structures. These documents are treated as a discourse used by experts to communicate with others. It therefore becomes possible to use discourse analysis to enable machine understanding of the text. The research challenge addressed in the paper is to identify documents, or sections of documents, that are potential sources of knowledge. In a subsequent step, domain knowledge will be extracted from these segments. The segmentation task requires partitioning the document into relevant segments and understanding the context of each segment. In discourse analysis, the division of a discourse into segments is achieved through certain indicative clauses called cue phrases that signal changes in the discourse context. However, in formal documents such language may not be used. Hence the use of a domain-specific ontology and an assembly process model is proposed to segregate chunks of the text based on a local context. Elements of the ontology/model and their related terms serve as indicators of the current context of a segment and of changes in context between segments. Local contexts are aggregated over increasingly larger segments to identify whether the document (or portions of it) pertains to the topic of interest, namely assembly. Knowledge acquired through such processes enables the acquisition and reuse of knowledge during any part of the lifecycle of a product.
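The use of ontology terms as context indicators can be sketched with a toy segmenter (the mini-ontology, sentences and scoring rule below are all invented for illustration; the paper's ontology and process model are far richer):

```python
import re

# Hypothetical sketch: each sentence gets the local context (ontology
# topic) whose terms it mentions most, and a segment boundary is
# declared wherever the context changes.
ontology = {
    "assembly": {"rivet", "fastener", "jig", "torque", "align"},
    "inspection": {"crack", "gauge", "tolerance", "defect"},
}

def local_context(sentence):
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    scores = {topic: len(words & terms) for topic, terms in ontology.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

doc = [
    "Align the spar and insert each rivet with the jig.",
    "Apply the specified torque to every fastener.",
    "Check the joint for any crack with a feeler gauge.",
]
segments = []
for sentence in doc:
    ctx = local_context(sentence)
    if not segments or segments[-1][0] != ctx:
        segments.append((ctx, [sentence]))   # context change -> new segment
    else:
        segments[-1][1].append(sentence)
print([topic for topic, _ in segments])      # ['assembly', 'inspection']
```

Aggregating these per-segment contexts over larger spans is what lets the approach decide whether a whole document pertains to assembly.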
Abstract:
We develop a scheme based on a real-space microscopic analysis of particle dynamics to ascertain the relevance of dynamical facilitation as a mechanism of structural relaxation in glass-forming liquids. By analyzing the spatial organization of localized excitations within clusters of mobile particles in a colloidal glass former and examining their partitioning into shell-like and core-like regions, we establish the existence of a crossover from a facilitation-dominated regime at low area fractions to a collective activated hopping-dominated one close to the glass transition. This crossover occurs in the vicinity of the area fraction at which the peak of the mobility transfer function exhibits a maximum and the morphology of cooperatively rearranging regions changes from string-like to a compact form. Collectively, our findings suggest that dynamical facilitation is dominated by collective hopping close to the glass transition, thereby constituting a crucial step towards identifying the correct theoretical scenario for glass formation.
Abstract:
The polyhedral model provides an expressive intermediate representation that is convenient for the analysis and subsequent transformation of affine loop nests. Several heuristics exist for achieving complex program transformations in this model. However, there is also considerable scope to utilize this model to tackle the problem of automatic memory footprint optimization. In this paper, we present a new automatic storage optimization technique which can be used to achieve both intra-array and inter-array storage reuse with a pre-determined schedule for the computation. Our approach works by finding statement-wise storage partitioning hyperplanes that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our heuristic is driven by a fourfold objective function which not only minimizes the dimensionality and storage requirements of the arrays required for each high-level statement, but also maximizes inter-statement storage reuse. The storage mappings obtained using our heuristic can be asymptotically better than those obtained by any existing technique. We implement our technique and demonstrate its practical impact by evaluating its effectiveness on several benchmarks chosen from the domains of image processing, stencil computations, and high-performance computing.
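The flavour of intra-array storage reuse targeted here can be illustrated with a standard modulo-folding example rather than the paper's hyperplane heuristic: a 1-D Jacobi stencil iterated T times nominally writes T+1 rows, but only two rows are ever live at once, so the time dimension can be folded to size 2.

```python
import numpy as np

# Full storage: a (T+1) x N array, one row per time step.
def jacobi_full(a, T):
    N = len(a)
    A = np.empty((T + 1, N)); A[0] = a
    for t in range(T):
        A[t + 1, 1:-1] = 0.5 * (A[t, :-2] + A[t, 2:])
        A[t + 1, 0], A[t + 1, -1] = A[t, 0], A[t, -1]   # fixed boundaries
    return A[T]

# Folded storage: only 2 rows, mapping time step t to row t % 2, since
# no value's live range spans more than one step.
def jacobi_folded(a, T):
    N = len(a)
    A = np.empty((2, N)); A[0] = a
    for t in range(T):
        cur, nxt = t % 2, (t + 1) % 2
        A[nxt, 1:-1] = 0.5 * (A[cur, :-2] + A[cur, 2:])
        A[nxt, 0], A[nxt, -1] = A[cur, 0], A[cur, -1]
    return A[T % 2]

a = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
print(np.allclose(jacobi_full(a, 10), jacobi_folded(a, 10)))  # True
```

The paper's hyperplane technique generalizes this kind of folding: it searches for mappings that keep values with overlapping live ranges apart while shrinking array dimensionality.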
Abstract:
Introduction: Our purpose was to assess how pairs of sibling horseshoe bats coexist when their morphology and echolocation are almost identical. We collected data on echolocation, wing morphology, diet, and habitat use of sympatric Rhinolophus mehelyi and R. euryale. We compared our results with literature data collected in allopatry with similar protocols and at the same time of the year (breeding season). Results: Echolocation frequencies recorded in sympatry for R. mehelyi (mean = 106.8 kHz) and R. euryale (105.1 kHz) were similar to those reported in allopatry (R. mehelyi 105-111 kHz; R. euryale 101-109 kHz). Wing parameters were larger in R. mehelyi than in R. euryale under both sympatric and allopatric conditions. Moths constitute the bulk of the diet of both species in sympatry and allopatry, with minor variation in the amounts of other prey. There were no inter-specific differences in the use of foraging habitats in allopatry in terms of structural complexity; however, we found inter-specific differences between sympatric populations: R. mehelyi foraged in less complex habitats. The subtle inter-specific differences in echolocation frequency seem unlikely to facilitate dietary niche partitioning; the overall divergences observed in diet may be explained as a consequence of differential prey availability among foraging habitats. Inter-specific differences in the use of foraging habitats in sympatry seem to be the main dimension of niche partitioning between R. mehelyi and R. euryale, probably due to differences in wing morphology. Conclusions: Coexistence between sympatric sibling horseshoe bats is likely allowed by a displacement in the spatial niche dimension, presumably due to the wing morphology of each species, and by shifts in the niche domains that minimise competition. Effective measures for the conservation of sibling/similar horseshoe bats should guarantee the structural diversity of foraging habitats.
Abstract:
We present a scheme to generate cluster submodels with stage ordering from a (symmetric or nonsymmetric) multistage stochastic mixed-integer optimization model using a break stage. We consider a stochastic model in compact representation and MPS format with a known scenario tree. The cluster submodels are built by storing first the 0-1 variables, stage by stage, and then the continuous ones, also stage by stage. A C++ experimental code has been implemented for reordering the stochastic model as well as for the cluster decomposition after relaxation of the non-anticipativity constraints up to the so-called break stage. The computational experience shows better performance of the stage ordering in terms of elapsed time on a randomly generated testbed of multistage stochastic mixed-integer problems.