939 results for niche partitioning
Abstract:
Transcription is the most fundamental step in gene expression in any living organism. Various environmental cues guide the maturation of core RNA polymerase (RNAP; alpha(2)beta beta'omega) with different sigma-factors, leading to the directed recruitment of RNAP to different promoter DNA sequences. It is therefore essential to determine the factors that affect the preferential partitioning of core RNAP among the various sigma-factors, and the role of sigma-switching in transcriptional gene regulation. Further, the macromolecular assembly of holo RNAP takes place in an extremely crowded environment within the cell, and thus far the kinetics and thermodynamics of this molecular recognition process have not been well addressed. In this study we used a site-directed bioaffinity immobilization method to evaluate the relative binding affinities of three different Escherichia coli sigma-factors to the same core RNAP under variations in temperature and ionic strength while emulating the crowded cellular milieu. Our data indicate that the core RNAP-sigma interaction is susceptible to changes in external stimuli such as osmotic and thermal stress, and that the degree of susceptibility varies among sigma-factors. This allows for reversible sigma-switching from housekeeping to alternate sigma-factors when the organism senses a change in its physiological conditions.
Abstract:
Fast content-addressable data access mechanisms have compelling applications in today's systems. Many of these exploit the powerful wildcard-matching capabilities provided by ternary content addressable memories (TCAMs). For example, TCAM-based implementations of important data mining algorithms have been developed in recent years; these achieve an order-of-magnitude speedup over prevalent techniques. However, large hardware TCAMs are still prohibitively expensive in terms of power consumption and cost per bit. This has been a barrier to extending their exploitation beyond niche and special-purpose systems. We propose an approach to overcome this barrier by extending the traditional virtual memory hierarchy to scale up the user-visible capacity of TCAMs while mitigating the power consumption overhead. By exploiting the notion of content locality (as opposed to spatial locality), we devise a novel combination of software and hardware techniques to provide the abstraction of a large virtual ternary content-addressable space. In the long run, such abstractions enable applications to disassociate considerations of spatial locality and contiguity from the way data is referenced. If successful, making content addressability a first-class abstraction in computing systems could open up a radical shift in the way applications are optimized for memory locality, just as storage-class memories are soon expected to shift applications away from optimizing for disk access locality.
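The wildcard matching that makes TCAMs attractive can be illustrated with a minimal software model (class and rule names here are illustrative, not from the paper): each entry stores a value and a mask, and mask bits of 0 act as "don't care" positions.

```python
# Minimal software model of a ternary CAM (TCAM): each entry stores a
# value and a mask; mask bits set to 0 are wildcards ("don't care").
# Entries are searched in priority (insertion) order, as in hardware TCAMs.

class SoftTCAM:
    def __init__(self):
        self.entries = []  # list of (value, mask, tag)

    def add(self, value, mask, tag):
        self.entries.append((value, mask, tag))

    def lookup(self, key):
        # Return the tag of the first entry whose non-wildcard bits match.
        for value, mask, tag in self.entries:
            if (key & mask) == (value & mask):
                return tag
        return None

tcam = SoftTCAM()
tcam.add(0b1010_0000, 0b1111_0000, "rule-A")   # matches 1010xxxx
tcam.add(0b0000_0000, 0b0000_0000, "default")  # matches anything

print(tcam.lookup(0b1010_0110))  # -> rule-A
print(tcam.lookup(0b0110_0110))  # -> default
```

A hardware TCAM performs all these comparisons in parallel in a single cycle; the paper's contribution is scaling the apparent size of that structure through a virtual-memory-style hierarchy.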
Abstract:
The effectiveness of the last-level shared cache is crucial to the performance of a multi-core system. In this paper, we observe and make use of the DelinquentPC - Next-Use characteristic to improve shared cache performance. We propose a new PC-centric cache organization, NUcache, for the shared last-level cache of multi-cores. NUcache logically partitions the associative ways of a cache set into MainWays and DeliWays. While all lines have access to the MainWays, only lines brought in by a subset of delinquent PCs, selected by a PC selection mechanism, are allowed to enter the DeliWays. The PC selection mechanism is an intelligent cost-benefit-analysis-based algorithm that uses Next-Use information to select the set of PCs that maximizes the hits experienced in the DeliWays. Performance evaluation reveals that NUcache improves performance over a baseline design by 9.6%, 30% and 33% for dual-, quad- and eight-core workloads composed of SPEC benchmarks, respectively. We also show that NUcache is more effective than other well-known cache-partitioning algorithms.
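The MainWays/DeliWays partitioning can be sketched in a few lines (a simplified model with illustrative names; the real PC-selection cost-benefit algorithm and replacement policy are not reproduced here). Insertions from PCs flagged as delinquent go to the DeliWays; all other insertions go to the MainWays, each partition running plain LRU.

```python
# Simplified sketch of NUcache-style way partitioning for one cache set:
# all PCs may insert into MainWays, but only selected "delinquent" PCs
# may insert into DeliWays. Replacement here is plain LRU within each
# partition; the paper's PC-selection mechanism is not modeled.

from collections import OrderedDict

class NUCacheSet:
    def __init__(self, main_ways, deli_ways, delinquent_pcs):
        self.main = OrderedDict()   # addr -> None, in LRU order
        self.deli = OrderedDict()
        self.main_ways = main_ways
        self.deli_ways = deli_ways
        self.delinquent_pcs = delinquent_pcs

    def access(self, addr, pc):
        for part in (self.main, self.deli):
            if addr in part:                 # hit: refresh LRU position
                part.move_to_end(addr)
                return True
        # miss: choose the partition by the PC that caused the miss
        if pc in self.delinquent_pcs:
            part, ways = self.deli, self.deli_ways
        else:
            part, ways = self.main, self.main_ways
        if len(part) >= ways:
            part.popitem(last=False)         # evict the LRU line
        part[addr] = None
        return False

set0 = NUCacheSet(main_ways=6, deli_ways=2, delinquent_pcs={0x400})
print(set0.access(0x1000, pc=0x400))  # cold miss -> False
```

The point of the split is that lines from a few delinquent PCs, which would otherwise thrash the whole set, are confined to (and retained longer in) the DeliWays.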
Abstract:
Low-frequency sounds are advantageous for long-range acoustic signal transmission, but for small animals they constitute a challenge for signal detection and localization. The efficient detection of sound in insects is enhanced by mechanical resonance either in the tracheal or tympanal system before subsequent neuronal amplification. Making small structures resonant at low sound frequencies poses challenges for insects and has not been adequately studied. Similarly, detecting the direction of long-wavelength sound using interaural signal amplitude and/or phase differences is difficult for small animals. Pseudophylline bushcrickets predominantly call at high, often ultrasonic frequencies, but a few paleotropical species use lower frequencies. We investigated the mechanical frequency tuning of the tympana of one such species, Onomarchus uninotatus, a large bushcricket that produces a narrow bandwidth call at an unusually low carrier frequency of 3.2 kHz. Onomarchus uninotatus, like most bushcrickets, has two large tympanal membranes on each fore-tibia. We found that both these membranes vibrate like hinged flaps anchored at the dorsal wall and do not show higher modes of vibration in the frequency range investigated (1.5-20 kHz). The anterior tympanal membrane acts as a low-pass filter, attenuating sounds at frequencies above 3.5 kHz, in contrast to the high-pass filter characteristic of other bushcricket tympana. Responses to higher frequencies are partitioned to the posterior tympanal membrane, which shows maximal sensitivity at several broad frequency ranges, peaking at 3.1, 7.4 and 14.4 kHz. This partitioning between the two tympanal membranes constitutes an unusual feature of peripheral auditory processing in insects. The complex tracheal shape of O. uninotatus also deviates from the known tube or horn shapes associated with simple band-pass or high-pass amplification of tracheal input to the tympana.
Interestingly, while the anterior tympanal membrane shows directional sensitivity at conspecific call frequencies, the posterior tympanal membrane is not directional at conspecific frequencies and instead shows directionality at higher frequencies.
Abstract:
This paper presents an improved hierarchical clustering algorithm for the land cover mapping problem using quasi-random distributions. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo- or quasi-random distribution is used for splitting the data into a number of cluster centers satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends strongly on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random distribution with the pseudo-random distribution in NPSO for splitting the data set. Here, the Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC) and NPSO with a pseudo-random distribution, is compared with the proposed approach: NPSO with a quasi-random distribution (Faure). These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm results in more accurate data classification.
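Faure-type quasi-random sequences are built from radical-inverse digit expansions. As a minimal illustration of the idea (the 1-D van der Corput building block, not the full multi-dimensional Faure construction used in the paper), the n-th point is obtained by reflecting the base-b digits of n about the radix point:

```python
# Van der Corput radical-inverse sequence in base b: the 1-D building
# block of Faure-type quasi-random sequences. Successive points fill
# [0, 1) far more evenly than pseudo-random draws.

def van_der_corput(n, base=2):
    """Return the n-th point of the base-`base` van der Corput sequence."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

points = [van_der_corput(i, base=2) for i in range(1, 9)]
print(points)  # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```

Note how each new point lands in the largest remaining gap; this low-discrepancy property is what gives quasi-random initialization of NPSO particles better coverage of the search space than pseudo-random initialization.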
Abstract:
Video decoders used in emerging applications need to be flexible to handle a large variety of video formats and must deliver scalable performance to handle wide variations in workloads. In this paper we propose a unified software and hardware architecture for video decoding that achieves scalable performance with flexibility. The lightweight processor tiles and reconfigurable hardware tiles in our architecture enable software and hardware implementations to co-exist, while a programmable interconnect enables dynamic interconnection of the tiles. Our process-network-oriented compilation flow achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multi-processor, semi-hardware and full-hardware implementations of a video decoder. An application quality-of-service-aware scheduler monitors and controls the operation of the entire system. We prove the concept through a prototype of the architecture on an off-the-shelf FPGA. The FPGA prototype shows performance scaling from QCIF to 1080p resolutions in four discrete steps. We also demonstrate that the reconfiguration time is short enough to allow migration from one configuration to another without any frame loss.
Abstract:
It is well accepted that technology plays a critical role in socio-technical transitions and sustainable development pathways. A society's amenability to an intervening (sustainable) technology is fundamental to permitting these transitions. The current age is at a juncture wherein technological advancements and capacities provide the common individual with affordable and unlimited choice. Technological advancement and complexity can either remain simple and unseen to the user, or may daunt users into keeping away, in which case the intended pathways remain unexploited. This paper explores the reasons behind the rejection of technology and proposes a solution model to address these factors in accommodating socio-technical transitions. The paper begins by structuring the societal levels at which technological rejection occurs and proceeds to discuss technology rejection at the individual user (niche) level. The factors influencing decisions regarding technology rejection are identified and discussed with particular relevance to the progressive world (Asia).
Abstract:
The presence of a large number of spectral bands in hyperspectral images increases the capability to distinguish between various physical structures. However, hyperspectral images suffer from the high dimensionality of the data. Hence, their processing is carried out in two stages: dimensionality reduction followed by unsupervised classification. The high dimensionality of the data is reduced with the help of Principal Component Analysis (PCA). The selected dimensions are then classified using the Niche Hierarchical Artificial Immune System (NHAIS). NHAIS combines a splitting method, which searches for optimal cluster centers using a niching procedure, with a merging method that groups data points based on majority voting. Results are presented for two hyperspectral images, namely an EO-1 Hyperion image and the Indian Pines image. A performance comparison of the proposed hierarchical clustering algorithm with three earlier unsupervised algorithms is presented. From the results obtained, we deduce that NHAIS is efficient.
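The PCA dimensionality-reduction stage can be sketched with plain NumPy (synthetic data stands in for the hyperspectral bands; this shows the standard eigendecomposition route, not necessarily the paper's exact implementation):

```python
# PCA dimensionality reduction: project n-band pixel vectors onto the
# top-k principal components of the band-by-band covariance matrix.
# Synthetic data stands in for the hyperspectral inputs.

import numpy as np

def pca_reduce(X, k):
    """Project rows of X (samples x bands) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                 # center each band
    cov = np.cov(Xc, rowvar=False)          # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))              # 100 "pixels", 20 "bands"
Z = pca_reduce(X, k=3)
print(Z.shape)  # (100, 3)
```

The reduced matrix Z then feeds the NHAIS clustering stage in place of the full-dimensional data.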
Abstract:
Clustering has been the most popular method for data exploration. Clustering partitions a data set into sub-partitions based on some measure, such as a distance measure, where each partition carries its own significant information. A number of algorithms have been explored for this purpose; one such algorithm is Particle Swarm Optimization (PSO), a population-based heuristic search technique derived from swarm intelligence. In this paper we present an improved version of Particle Swarm Optimization in which each feature of the data set is given significance by assigning it a random weight, which also minimizes any distortions in the data set. The performance of the proposed algorithm is evaluated using benchmark datasets from the Machine Learning Repository. The experimental results show that our proposed methodology performs significantly better than previously reported methods.
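The feature-weighting idea, scaling each feature's contribution to the distance measure, can be sketched as follows (weights and data are illustrative, not the paper's):

```python
# Weighted Euclidean distance: each feature's contribution to the
# clustering distance measure is scaled by a per-feature weight, as in
# the feature-weighted PSO variant described above. Weights illustrative.

import math

def weighted_distance(x, y, w):
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

x, y = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
uniform = weighted_distance(x, y, (1.0, 1.0, 1.0))   # plain Euclidean: 5.0
weighted = weighted_distance(x, y, (0.5, 2.0, 1.0))  # emphasizes feature 2
print(uniform, weighted)
```

With uniform weights this reduces to the ordinary Euclidean distance; non-uniform weights let the clustering emphasize or suppress individual features.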
Abstract:
Past studies use deterministic models to evaluate optimal cache configurations or to explore the cache design space. However, with the increasing number of components present on a chip multiprocessor (CMP), deterministic approaches do not scale well. Hence, we apply probabilistic genetic algorithms (GA) to determine a near-optimal cache configuration for a sixteen-tile CMP. We propose and implement a faster trace-based approach to estimate the fitness of a chromosome. It shows up to 218x simulation speedup over cycle-accurate architectural simulation. Our methodology can be applied to other cache optimization problems such as design space exploration of caches and their partitioning among applications/virtual machines.
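A minimal GA skeleton over bit-string chromosomes shows the search structure involved (a toy fitness function stands in for the paper's trace-based estimator; all names and parameters are illustrative):

```python
# Minimal genetic-algorithm skeleton over bit-string chromosomes. In the
# cache-configuration setting, each bit (or bit group) would encode one
# parameter choice; here a toy fitness stands in for the trace-based
# fitness estimator described in the abstract.

import random

def evolve(fitness, n_bits=16, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)              # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # toy fitness: maximize the number of set bits
print(sum(best))
```

The paper's speedup comes from replacing cycle-accurate simulation inside `fitness` with a much cheaper trace-based estimate, which the GA calls thousands of times.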
Abstract:
1. The relationship between species richness and ecosystem function, as measured by productivity or biomass, is of long-standing theoretical and practical interest in ecology. This is especially true for forests, which represent a majority of global biomass, productivity and biodiversity. 2. Here, we conduct an analysis of relationships between tree species richness, biomass and productivity in 25 forest plots of area 8-50 ha from across the world. The data were collected using standardized protocols, obviating the need to correct for methodological differences that plague many studies on this topic. 3. We found that at very small spatial grains (0.04 ha) species richness was generally positively related to productivity and biomass within plots, with a doubling of species richness corresponding to an average 48% increase in productivity and 53% increase in biomass. At larger spatial grains (0.25 ha, 1 ha), results were mixed, with negative relationships becoming more common. The results were qualitatively similar but much weaker when we controlled for stem density: at the 0.04 ha spatial grain, a doubling of species richness corresponded to a 5% increase in productivity and 7% increase in biomass. Productivity and biomass were themselves almost always positively related at all spatial grains. 4. Synthesis. This is the first cross-site study of the effect of tree species richness on forest biomass and productivity that systematically varies spatial grain within a controlled methodology. The scale-dependent results are consistent with theoretical models in which sampling effects and niche complementarity dominate at small scales, while environmental gradients drive patterns at large scales. Our study shows that the relationship of tree species richness with biomass and productivity changes qualitatively when moving from scales typical of forest surveys (0.04 ha) to slightly larger scales (0.25 and 1 ha).
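A "% increase per doubling of richness" figure corresponds to a power-law exponent: if productivity scales as richness^b, doubling richness multiplies productivity by 2^b, so b = log2(1 + increase). Converting the percentages reported above (an arithmetic restatement only, not an analysis from the paper):

```python
# If productivity scales as richness^b, a doubling of richness multiplies
# productivity by 2^b. Convert the reported percentage increases per
# doubling into equivalent power-law exponents.

import math

def doubling_exponent(pct_increase):
    """Exponent b such that doubling richness gives the stated % increase."""
    return math.log2(1 + pct_increase / 100)

for label, pct in [("productivity (0.04 ha)", 48),
                   ("biomass (0.04 ha)", 53),
                   ("productivity, density-controlled", 5)]:
    print(f"{label}: b = {doubling_exponent(pct):.2f}")
```

The drop from b of roughly 0.57 to roughly 0.07 after controlling for stem density quantifies how much of the raw richness effect is attributable to stem density.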
This needs to be recognized in forest conservation policy and management.
Abstract:
Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and an additional 7% with topology-aware mapping over the default sequential strategy.
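The idea of carving the 2-D processor grid into disjoint rectangles sized by per-domain load can be sketched with a simple column-slab heuristic (illustrative only; the paper's allocator uses performance prediction and topology-aware mapping, neither of which is modeled here):

```python
# Toy allocator: split a P x Q processor grid into disjoint vertical
# slabs, one per nested domain, with widths proportional to each
# domain's predicted load. A stand-in for the paper's allocation method.

def slab_partition(p_rows, q_cols, loads):
    total = sum(loads)
    regions, col = [], 0
    for i, load in enumerate(loads):
        if i == len(loads) - 1:
            width = q_cols - col          # last slab absorbs rounding
        else:
            width = max(1, round(q_cols * load / total))
        regions.append((0, col, p_rows, width))  # (row0, col0, rows, cols)
        col += width
    return regions

regions = slab_partition(8, 16, loads=[4.0, 2.0, 2.0])
print(regions)  # [(0, 0, 8, 8), (0, 8, 8, 4), (0, 12, 8, 4)]
```

Because the regions are disjoint, all nested domains advance concurrently instead of serializing on the full processor set, which is where the reported speedup comes from.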
Abstract:
This paper presents a study of the nature of the degrees-of-freedom of spatial manipulators based on the concept of partition of degrees-of-freedom. In particular, the partitioning of degrees-of-freedom is studied in five lower-mobility spatial parallel manipulators possessing different combinations of degrees-of-freedom. An extension of the existing theory is introduced so as to analyse the nature of the gained degree(s)-of-freedom at a gain-type singularity. The gain of one- and two-degrees-of-freedom is analysed in several well-studied, as well as newly developed manipulators. The formulations also present a basis for the analysis of the velocity kinematics of manipulators of any architecture. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Data clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning and data mining. Clustering is the grouping of a data set, or more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait according to some defined distance measure. In this paper we present a genetically improved version of the particle swarm optimization algorithm, a population-based heuristic search technique derived from the analysis of particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts such as the velocity and position update rules with GA concepts such as selection, crossover and mutation. The performance of the proposed algorithm is evaluated using benchmark datasets from the Machine Learning Repository. Our method outperforms both k-means and the standard PSO algorithm.
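The PSO update combined with a GA-style mutation step can be sketched for a single particle as follows (the standard PSO velocity/position equations with typical coefficients; all parameter values are illustrative, not taken from the paper):

```python
# Core PSO update for one particle, plus a GA-style point mutation, as in
# hybrid PSO-GA schemes. Coefficients (w, c1, c2) are typical textbook
# values, not the paper's.

import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             mutation_rate=0.05, rng=random):
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(pos, vel, pbest, gbest):
        v = (w * v
             + c1 * rng.random() * (pb - x)   # pull toward personal best
             + c2 * rng.random() * (gb - x))  # pull toward global best
        x = x + v
        if rng.random() < mutation_rate:      # GA-style point mutation
            x += rng.gauss(0, 0.1)
        new_pos.append(x)
        new_vel.append(v)
    return new_pos, new_vel

pos, vel = [0.0, 0.0], [0.1, -0.1]
pos, vel = pso_step(pos, vel, pbest=[1.0, 1.0], gbest=[2.0, 2.0])
print(pos)
```

In the clustering setting, each particle's position encodes a set of candidate cluster centers, and fitness is the clustering quality under the chosen distance measure.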
Abstract:
Most ecosystems have multiple predator species that not only compete for shared prey, but also pose direct threats to each other. These intraguild interactions are key drivers of carnivore community structure, with ecosystem-wide cascading effects. Yet, behavioral mechanisms for coexistence of multiple carnivore species remain poorly understood. The challenges of studying large, free-ranging carnivores have resulted in mainly coarse-scale examination of behavioral strategies without information about all interacting competitors. We overcame some of these challenges by examining the concurrent fine-scale movement decisions of almost all individuals of four large mammalian carnivore species in a closed terrestrial system. We found that the intensity of intraguild interactions did not follow a simple hierarchical allometric pattern, because spatial and behavioral tactics of subordinate species changed with threat and resource levels across seasons. Lions (Panthera leo) were generally unrestricted and anchored themselves in areas rich in not only their principal prey, but also, during periods of resource limitation (dry season), rich in the main prey for other carnivores. Because of this, the greatest cost (potential intraguild predation) for subordinate carnivores was spatially coupled with the highest potential benefit of resource acquisition (prey-rich areas), especially in the dry season. Leopard (P. pardus) and cheetah (Acinonyx jubatus) overlapped with the home range of lions but minimized their risk using fine-scaled avoidance behaviors and restricted resource acquisition tactics. The cost of intraguild competition was most apparent for cheetahs, especially during the wet season, as areas with energetically rewarding large prey (wildebeest) were avoided when they overlapped highly with the activity areas of lions. 
Contrary to expectation, the smallest species (African wild dog, Lycaon pictus) did not avoid only lions, but also used multiple tactics to minimize encountering all other competitors. Intraguild competition thus forced wild dogs into areas with the lowest resource availability year round. Coexistence of multiple carnivore species has typically been explained by dietary niche separation, but our multi-scaled movement results suggest that differences in resource acquisition may instead be a consequence of avoiding intraguild competition. We generate a more realistic representation of hierarchical behavioral interactions that may ultimately drive spatially explicit trophic structures of multi-predator communities.