883 results for Scale-free network
Abstract:
Background: One of the major challenges in understanding enzyme catalysis is to identify the different conformations and their populations at the detailed molecular level in response to ligand binding and the environment. A detailed description of ligand-induced conformational changes provides meaningful insights into the mechanism of action of an enzyme and thus its function. Results: In this study, we have explored the ligand-induced conformational changes in H. pylori LuxS and the associated mechanistic features. LuxS, a dimeric protein, produces 4,5-dihydroxy-2,3-pentanedione, the precursor of autoinducer-2, a signalling molecule for bacterial quorum sensing. We have performed molecular dynamics simulations on H. pylori LuxS in its various ligand-bound forms and analyzed the simulation trajectories using several techniques, including structure network analysis, free energy evaluation and water dynamics at the active site. The results bring out mechanistic details such as cooperativity and asymmetry between the two subunits, subtle changes in conformation in response to the binding of active and inactive forms of ligands, and the population distribution of different conformations in equilibrium. These investigations have enabled us to probe the free energy landscape and identify the corresponding conformations in terms of network parameters. In addition, we have elucidated the variations in the dynamics of water coordination to the Zn2+ ion in LuxS and their relation to the rigidity of the active sites. Conclusions: In this article, we provide details of a novel method for the identification of conformational changes in the different ligand-bound states of the protein, the evaluation of ligand-induced free energy changes and the biological relevance of our results in the context of LuxS structure-function. The methodology outlined here is highly general and can illuminate the linkage between structure and function in any protein of known structure.
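As an illustration of the water-dynamics part of such an analysis, here is a minimal sketch that counts, frame by frame, the water oxygens coordinating the Zn2+ ion; it assumes trajectory coordinates have already been extracted into NumPy arrays, and the 2.5 Å cutoff and array names are illustrative choices, not details taken from the paper.

```python
import numpy as np

def water_coordination_counts(zn_xyz, water_o_xyz, cutoff=2.5):
    """Count water oxygens within `cutoff` (Angstrom) of the Zn2+ ion
    for every frame of a trajectory.

    zn_xyz      : (n_frames, 3) Zn2+ position per frame
    water_o_xyz : (n_frames, n_waters, 3) water oxygen positions per frame
    """
    # Distance of every water oxygen to Zn2+ in each frame
    d = np.linalg.norm(water_o_xyz - zn_xyz[:, None, :], axis=-1)
    return (d < cutoff).sum(axis=1)  # coordination number per frame

# e.g. mean coordination number and fraction of frames with a bound water:
# counts = water_coordination_counts(zn, waters)
# print(counts.mean(), (counts > 0).mean())
```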
Abstract:
This study is about the challenges of learning in the creation and implementation of new sustainable technologies. The system of biogas production in the Programme of Sustainable Swine Production (3S Programme), conducted by the Sadia food processing company in Santa Catarina State, Brazil, is used as a case example for exploring the challenges, possibilities and obstacles of learning in the use of biogas production as a way to increase the environmental sustainability of swine production. The aim is to contribute to the discussion about the possibilities of developing systems of biogas production for sustainability (BPfS). In the study I develop hypotheses concerning the central challenges and possibilities for developing systems of BPfS in three phases. First, I construct a model of the network of activities involved in BPfS in the case study. Next, I construct a) an idealised model of the historically evolved concepts of BPfS through an analysis of the development of forms of BP, and b) a hypothesis of the current central contradictions within and between the activity systems involved in BPfS in the case study. This hypothesis is further developed through two empirical analyses: an analysis of the actors' senses in taking part in the system, and an analysis of the disturbance processes in the implementation and operation of the BP system in the 3S Programme. The historical analysis shows that BPfS in the 3S Programme emerged as a feasible solution to the contradiction between environmental protection and the concentration, intensification and specialisation of swine production. This contradiction created a threat to the supply of swine to the food processing company. In the food production activity, the contradiction was expressed as one between the desire of the company to become a sustainable company and the situation on the outsourced farms. For the swine producers, the contradiction was expressed in contradictory rules: the market exerted pressure for continual increases in scale, specialisation and concentration to keep production economically viable, while the environmental rules imposed a limit to this expansion. Although the observed disturbances in the biogas system seemed to be merely technical and localised within the farms, the analysis proposed that these disturbances were formed in and between the activity systems involved in the network of BPfS during the implementation. The disturbances observed could be explained by four contradictions: a) contradictions between the new, more expanded activity of sustainable swine production and the old activity, b) a contradiction between the concept of BP for carbon credits and BP for local use in the BPfS that was implemented, c) contradictions between the new UNFCCC methodology for applying for carbon credits and the small size of the farms, and d) a contradiction between the technologies of biogas use and burning available in the market and the small size of the farms. The main finding of this study relates to the zone of proximal development (ZPD) of BPfS in the Sadia food production chain. The model is first developed as a general model of concepts of BPfS and then developed further for the specific case of BPfS in the 3S Programme. The model is composed of two developmental dimensions: societal and functional integration. The dimension of societal integration refers to the level of integration with other activities outside the farm.
At one extreme, biogas production is self-sufficient and highly independent, and the products of BP are consumed within the farm; at the other extreme, BP is highly integrated into markets and networks of collaboration, and BP products are exchanged within the markets. The dimension of functional integration refers to the level of integration between products and production processes, so that economies of scope can be achieved by combining several functions using the same utility. At one extreme, BP is specialised in only one product, which allows economies of scale to be achieved; at the other extreme there is an integrated production in which several biogas products are produced in order to maximise the outcomes from the BP system. The analysis suggests that BP is moving towards societal integration, towards the market, and towards a functional integration in which several biogas products are combined. The model is a hypothesis to be further tested through interventions by collectively constructing the newly proposed concept of BPfS. Another important contribution of this study concerns the concept of the learning challenge. Three central learning challenges for developing a sustainable system of BP in the 3S Programme were identified: 1) the development of cheaper and more practical technologies for burning and measuring the gas, as well as the reduction of the costs of the certification process, 2) the development of new ways of using biogas within farms, and 3) the creation of new local markets and networks for selling BP products. One general learning challenge is to find more varied and synergistic ways of using BP products than solely for the production of carbon credits. Both the model of the ZPD of BPfS and the identified learning challenges could be used as learning tools to facilitate the development of biogas production systems. The proposed model of the ZPD could be used to analyse different types of agricultural activities that face a similar contradiction. The findings could be used in interventions to help actors find their own expansive actions and developmental projects for change. Rather than proposing a standardised best concept of BPfS, the idea of these learning tools is to facilitate the analysis of local situations and to help actors make their activities more sustainable.
Abstract:
We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√(n log n)). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, enabling approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of the nodes. This construction may also prove useful in other communication settings on the random geometric graph.
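To make the supercritical regime concrete, the sketch below samples a random geometric graph whose expected degree is held at a constant well above the continuum percolation threshold and reports the fraction of nodes in the giant component; the values of n and c and the use of networkx are illustrative assumptions, not parameters from the paper.

```python
import math
import networkx as nx

# Supercritical (thermodynamic) regime: connection radius r chosen so the
# mean degree n*pi*r^2 stays at a constant c above the percolation
# threshold (roughly 4.5 for the 2D continuum model). The graph need not
# be connected, but the giant component covers most nodes.
n, c = 5000, 8.0
r = math.sqrt(c / (math.pi * n))

G = nx.random_geometric_graph(n, r)
giant = max(nx.connected_components(G), key=len)
print(f"giant component fraction: {len(giant) / n:.3f}")
```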
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
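As a baseline for task (iii), here is the textbook greedy approximation for minimum dominating set, run on a random geometric graph standing in for a sensor deployment. This is only the classic centralised greedy, not the thesis's local algorithms or its identifying/locating–dominating code constructions; the graph parameters are made up.

```python
import networkx as nx

def greedy_dominating_set(G):
    """Classic greedy ln(n)-approximation for minimum dominating set:
    repeatedly add the node that dominates the most not-yet-dominated
    nodes (itself plus its neighbours)."""
    undominated = set(G.nodes)
    D = set()
    while undominated:
        v = max(G.nodes, key=lambda u: len(({u} | set(G[u])) & undominated))
        D.add(v)
        undominated -= {v} | set(G[v])
    return D

# e.g. 200 sensors placed uniformly at random with a fixed sensing radius
G = nx.random_geometric_graph(200, 0.15)
print(len(greedy_dominating_set(G)))
```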
Abstract:
The enzymes of the family of tRNA synthetases perform their functions with high precision by synchronously recognizing the anticodon region and the aminoacylation region, which are separated by ~70 Å in space. This precision in function is brought about by establishing good communication paths between the two regions. We have modeled the structure of the complex consisting of Escherichia coli methionyl-tRNA synthetase (MetRS), tRNA, and the activated methionine. Molecular dynamics simulations have been performed on the modeled structure to obtain the equilibrated structure of the complex, and the cross-correlations between the residues in MetRS have been evaluated. Furthermore, network analysis of these simulated structures has been carried out to elucidate the paths of communication between the activation site and the anticodon recognition site. This study has provided detailed paths of communication that are consistent with experimental results. Similar studies have also been carried out on the complexes (MetRS + activated methionine) and (MetRS + tRNA), along with the ligand-free native enzyme. A comparison of the paths derived from the four simulations has clearly shown that the communication path is strongly correlated and unique to the enzyme complex that is bound to both the tRNA and the activated methionine. The details of the method of our investigation and the biological implications of the results are presented in this article. The method developed here could also be used to investigate any protein system in which function takes place through long-distance communication.
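A toy version of this kind of path search, under assumptions of my own rather than the paper's exact scheme: given a residue cross-correlation matrix, connect residue pairs whose |correlation| exceeds an arbitrary 0.4 threshold, weight edges by -log|C_ij| so strongly correlated edges are "short", and take the shortest path between the two functional sites.

```python
import numpy as np
import networkx as nx

def communication_path(corr, source, target, threshold=0.4):
    """Shortest 'communication path' on a correlation-weighted
    residue network (raises NetworkXNoPath if the threshold
    disconnects the two sites)."""
    n = corr.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            c = abs(corr[i, j])
            if c > threshold:
                G.add_edge(i, j, weight=-np.log(c))
    return nx.dijkstra_path(G, source, target)

# corr: (n_residues, n_residues) cross-correlation matrix from the MD run
# path = communication_path(corr, activation_site, anticodon_site)
```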
Abstract:
Time scales associated with activated transitions between glassy metastable states of a free-energy functional appropriate for a dense hard-sphere system are calculated by using a new Monte Carlo method for the local density variables. In particular, we calculate the time the system, initially placed in a shallow glassy minimum of the free energy, spends in the neighborhood of this minimum before making a transition to the basin of attraction of another free-energy minimum. This time scale is found to increase as the average density is increased. We find a crossover density near which this time scale increases very sharply and becomes longer than the longest times accessible in our simulation. This time scale does not show any evidence of increasing with sample size.
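The idea of timing a residence before escape can be illustrated with a one-dimensional toy model of my own construction (not the paper's density-functional Monte Carlo): a Metropolis walker in a double-well potential, timed until it first crosses into the other basin.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_time(beta, x0=-1.0, step=0.2, max_steps=10**6):
    """Metropolis walk in the double-well potential V(x) = (x^2 - 1)^2;
    count the steps spent near the starting minimum (x < 0) before the
    walker first enters the basin of the other minimum (x > 0)."""
    V = lambda x: (x * x - 1.0) ** 2
    x = x0
    for t in range(max_steps):
        xp = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-beta * (V(xp) - V(x))):
            x = xp
        if x > 0.0:          # crossed into the other basin
            return t
    return max_steps

# Residence times grow rapidly as beta (inverse temperature) increases,
# loosely mirroring the sharp growth of the glassy time scale with density.
print(np.mean([escape_time(beta=6.0) for _ in range(20)]))
```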
Abstract:
In this article, we present a novel application of a quantum clustering (QC) technique to objectively cluster the conformations sampled by molecular dynamics simulations performed on different ligand-bound structures of a protein. We further portray each conformational population in terms of dynamically stable network parameters, which beautifully capture the ligand-induced variations in the ensemble in atomistic detail. The conformational populations thus identified by the QC method and verified by network parameters are evaluated for different ligand-bound states of the protein pyrrolysyl-tRNA synthetase (DhPylRS) from D. hafniense. The ligand/environment-induced redistribution of protein conformational ensembles forms the basis for understanding several important biological phenomena such as allostery and enzyme catalysis. The atomistic-level characterization of each population in the conformational ensemble in terms of the re-orchestrated networks of amino acids is a challenging problem, especially when the changes are minimal at the backbone level. Here we demonstrate that the QC method is sensitive to such subtle changes and is able to cluster MD snapshots that are similar at the side-chain interaction level. Although we have applied these methods to simulation trajectories of a modest time scale (20 ns each), we emphasize that our methodology provides a general approach towards an objective clustering of large-scale MD simulation data and may be applied to probe multistate equilibria at longer time scales, and to problems related to protein folding, for any protein or protein-protein/RNA/DNA complex of interest with a known structure.
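For reference, a minimal sketch of the quantum clustering idea in the generic Horn–Gottlieb formulation (my rendering, assuming snapshots are already reduced to feature vectors; sigma, the step size and the iteration count are placeholders): each point rolls downhill on the quantum potential V derived from a Gaussian Parzen estimator, and points that settle in the same minimum of V form one cluster.

```python
import numpy as np

def quantum_potential(x, data, sigma):
    """V(x) for quantum clustering: with the Gaussian Parzen estimator
    psi(x) = sum_i exp(-|x-xi|^2 / 2 sigma^2), the potential is, up to
    an additive constant,
    V(x) = sum_i |x-xi|^2 exp(-|x-xi|^2 / 2 sigma^2) / (2 sigma^2 psi)."""
    d2 = ((data - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (d2 * w).sum() / (2 * sigma ** 2 * w.sum())

def descend(data, sigma, step=0.05, iters=200, eps=1e-4):
    """Move every point downhill on V by a finite-difference gradient;
    points that settle in the same minimum belong to one cluster."""
    pts = data.copy()
    for _ in range(iters):
        for k, x in enumerate(pts):
            g = np.array([(quantum_potential(x + eps * e, data, sigma)
                           - quantum_potential(x - eps * e, data, sigma))
                          / (2 * eps) for e in np.eye(len(x))])
            pts[k] = x - step * g
    return pts  # near-identical rows -> same conformational cluster

# data: (n_snapshots, n_features), e.g. projected MD coordinates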
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received on their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements used at the sinks. We give an algorithm that employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and we illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
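For orientation, single-generation linear network coding in its simplest textbook instance, the butterfly network over GF(2) (this is background, not the algorithm of this work): each sink decodes from symbols of a single generation, so no cross-generation buffering is needed.

```python
# Source sends bits a and b down its two branches; the bottleneck node
# forwards a XOR b; each sink recovers both bits of the same generation.
def butterfly(a: int, b: int):
    x = a ^ b                 # coded symbol on the bottleneck edge
    sink1 = (a, a ^ x)        # receives a directly and a^b -> recovers b
    sink2 = (b ^ x, b)        # receives a^b and b directly -> recovers a
    return sink1, sink2

assert butterfly(1, 0) == ((1, 0), (1, 0))
assert butterfly(1, 1) == ((1, 1), (1, 1))
```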
Abstract:
Neural network models of associative memory exhibit a large number of spurious attractors of the network dynamics which are not correlated with any memory state. These spurious attractors, analogous to "glassy" local minima of the energy or free energy of a system of particles, degrade the performance of the network by trapping trajectories starting from states that are not close to one of the memory states. Different methods for reducing the adverse effects of spurious attractors are examined, with emphasis on the role of synaptic asymmetry.
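A minimal Hopfield-model sketch (the standard symmetric Hebbian version, my own construction; it exhibits spurious attractors but does not implement the synaptic-asymmetry remedies the paper examines):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5

# Hebbian weights for P random memory patterns (symmetric, zero diagonal)
xi = rng.choice([-1, 1], size=(P, N))
W = (xi.T @ xi) / N
np.fill_diagonal(W, 0.0)

def relax(s, sweeps=50):
    """Asynchronous updates for a fixed number of sweeps, in practice
    enough to settle into an attractor of the dynamics."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Starting far from every memory, the fixed point reached often has low
# overlap with all stored patterns: a spurious ("glassy") attractor.
s = relax(rng.choice([-1, 1], size=N))
print(np.abs(xi @ s) / N)   # overlaps with the stored memories
```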
Abstract:
An investigation has been made of the structure of the motion above a heated plate inclined at a small angle (about 10°) to the horizontal. The turbulence is considered in terms of its similarities to and differences from the motion above an exactly horizontal surface. One effect of inclination is, of course, that there is also a mean motion. Accurate data on the mean temperature field and the intensity of the temperature fluctuations have been obtained with platinum resistance thermometers, the signals being processed electronically. More approximate information on the velocity field has been obtained with quartz fibre anemometers. These results have been supplemented qualitatively by simultaneous observations of the temperature and velocity fluctuations and also by smoke experiments. The principal features of the flow inferred from these observations are as follows. The heat transfer and the mean temperature field are not much altered by the inclination, though small, not very systematic, variations may result from the complexities of the velocity field. This supports the view that the mean temperature field is largely governed by the large-scale motions. The temperature fluctuations show a systematic variation with distance from the lower edge and resemble those above a horizontal plate when this distance is large. The large-scale motions of the turbulence start close to the lower edge, but the smaller eddies do not attain full intensity until the air has moved some distance up the plate. The mean velocity receives a sizable contribution from a ‘through-flow’ between the side-walls. Superimposed on this are developments that show that the momentum transfer processes are complex and certainly not capable of representation by any simple theory such as an eddy viscosity. On the lower part of the plate there is a surprisingly large acceleration, but further up the mixing action of the small eddies has a decelerating effect.
Abstract:
Scalable Networks-on-Chip (NoCs) are needed to match the ever-increasing communication demands of large-scale Multi-Processor Systems-on-Chip (MPSoCs) for multimedia communication applications. The heterogeneous nature of application-specific on-chip cores, along with the specific communication requirements among the cores, calls for the design of application-specific NoCs for improved performance in terms of communication energy, latency, and throughput. In this work, we propose a methodology for the design of customized irregular networks-on-chip. The proposed method exploits a priori knowledge of the application's communication characteristics to generate an optimized network topology and the corresponding routing tables.
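One way to picture such a flow, with entirely made-up traffic figures and core names (this is not the paper's algorithm): keep the heaviest core-to-core flows as direct links under a per-core port budget, then derive each core's routing table from shortest paths on the resulting topology.

```python
import networkx as nx

# Hypothetical core-to-core traffic demands (e.g. GB/s)
traffic = {("cpu", "mem"): 8.0, ("dsp", "mem"): 6.0, ("dsp", "vlc"): 4.0,
           ("cpu", "dsp"): 1.0, ("vlc", "mem"): 0.5}
ports = 2  # max links per core

# Greedily realize the heaviest flows as direct links
G = nx.Graph()
G.add_nodes_from({c for pair in traffic for c in pair})
for (a, b), bw in sorted(traffic.items(), key=lambda kv: -kv[1]):
    if G.degree(a) < ports and G.degree(b) < ports:
        G.add_edge(a, b, bw=bw)

# Routing table: for each (source, destination), the next hop to take
routes = {src: {dst: path[1]
                for dst, path in nx.single_source_shortest_path(G, src).items()
                if dst != src}
          for src in G}
print(routes["cpu"])
```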
Abstract:
The planform structure of turbulent free convection over a heated horizontal surface has been visualized and analyzed for different boundary conditions at the top and for different aspect ratios, for flux Rayleigh numbers ranging from 10^8 to 10^10. The different boundary conditions correspond to Rayleigh-Benard convection, open convection with evaporation at the top, and convection with an imposed external flow on the heated boundary. Without the external flow, the planform is one of randomly oriented line plumes. At large Ra, these line plumes seem to align along the diagonal, presumably due to a large-scale flow, as visualized in the side view. When the external flow is imposed, the line plumes clearly align in the direction of the external flow. Flow visualization reveals that at these Ra, the shear tends to break up the plumes, which would otherwise reach the opposite boundary.
Abstract:
Interpenetrating polymer networks (IPNs) of trimethylol propane triacrylate (TMPTA) and 1,6-hexanediol diacrylate (HDDA) at different weight ratios were synthesized. Temperature-modulated differential scanning calorimetry (TMDSC) was used to determine whether the formation resulted in a copolymer or an interpenetrating polymer network (IPN). These polymers are used as binders for microstereolithography (MSL)-based ceramic microfabrication. The kinetics of thermal degradation of these polymers are important for optimizing the debinding process when fabricating 3D-shaped ceramic objects by the MSL-based rapid prototyping technique. Therefore, the thermal and thermo-oxidative degradation of these IPNs has been studied by dynamic and isothermal thermogravimetry (TGA). Non-isothermal model-free kinetic methods (isoconversional differential and Kissinger-Akahira-Sunose (KAS)) have been adopted to calculate the apparent activation energy (Ea) as a function of conversion (α) in N2 and air. The degradation of these polymers in an N2 atmosphere occurs via two mechanisms: chain-end scission plays a dominant role at lower temperatures, while the kinetics are governed by random chain scission at higher temperatures. Oxidative degradation shows multiple degradation steps with higher activation energies than in N2. Isothermal degradation was also carried out to determine the reaction model, which is found to be decelerating. It was shown that the degradation of PTMPTA follows a contracting-sphere reaction model in N2. However, as the HDDA content increases in the IPNs, the degradation follows the Avrami-Erofeev model and diffusion-governed mechanisms; the intermediate IPN compositions show both types of mechanism. Based on this study, a debinding strategy for MSL-based microfabricated ceramic structures has been proposed.
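As a worked illustration of the KAS method in its generic form (with hypothetical TGA numbers, not data from this study): at a fixed conversion α, ln(β/T²) is regressed against 1/T across heating rates β, and Ea follows from the slope via ln(β/T²) = const - Ea/(RT).

```python
import numpy as np

R = 8.314  # J / (mol K)

def kas_activation_energy(betas, T_alpha):
    """Kissinger-Akahira-Sunose fit at one conversion level:
    ln(beta / T^2) = const - Ea / (R T), so Ea = -slope * R from a
    linear fit of ln(beta/T^2) against 1/T across heating rates.

    betas   : heating rates (K/min)
    T_alpha : temperature (K) at which each run reaches the conversion
    """
    x = 1.0 / np.asarray(T_alpha)
    y = np.log(np.asarray(betas) / np.asarray(T_alpha) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R  # J/mol

# Hypothetical data at alpha = 0.5 for three heating rates:
print(kas_activation_energy([5, 10, 20], [612.0, 624.0, 637.0]) / 1e3, "kJ/mol")
```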
Abstract:
Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulations and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulations and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, like the criticality of the application, and resource dynamics, like the storage space, network bandwidth and available number of processors, to adapt various application and resource parameters such as simulation resolution, simulation rate and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem. This leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over various application parameters, like the region of interest and the simulation resolution. We have also devised an adaptive algorithm to reduce the lag between the simulation and visualization times. Using experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce the lag as well as visualize the most representative frames.
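Such an LP might look like the following sketch (an illustrative stand-in with invented coefficients, not the paper's actual model): choose a simulation rate and a visualization frame rate to maximize a weighted utility subject to CPU, network and storage budgets.

```python
from scipy.optimize import linprog

# maximise 3*s + v, where s = simulation rate (steps/s), v = frames/s
c = [-3.0, -1.0]                      # linprog minimises, so negate
A_ub = [[2.0, 0.0],                   # CPU:  2 core-s per step, 16 cores
        [0.0, 5.0],                   # net:  5 MB per frame, 40 MB/s link
        [1.0, 2.0],                   # disk: combined write budget 20 MB/s
        [-1.0, 1.0]]                  # v <= s: can't visualize unsimulated steps
b_ub = [16.0, 40.0, 20.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
s_opt, v_opt = res.x
print(f"simulation rate {s_opt:.1f} steps/s, visualization {v_opt:.1f} fps")
```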
Abstract:
Introduction: Advances in genomics technologies are providing very large amounts of data on genome-wide gene expression profiles, protein molecules and their interactions with other macromolecules and metabolites. Molecular interaction networks provide a useful way to capture this complex data and comprehend it. Networks are beginning to be used in drug discovery, in many steps of the modern discovery pipeline, with large-scale molecular networks being particularly useful for understanding the molecular basis of disease. Areas covered: The authors discuss network approaches used for drug target discovery and lead identification in the drug discovery pipeline. By reconstructing networks of targets, drugs and drug candidates, as well as gene expression profiles under normal and disease conditions, the paper illustrates how it is possible to find relationships between different diseases, find biomarkers, explore drug repurposing and study the emergence of drug resistance. Furthermore, the authors also look at networks which address particularly important aspects such as off-target effects, combination targets, mechanisms of drug action and drug safety. Expert opinion: The network approach represents another paradigm shift in drug discovery science. A network approach provides a fresh perspective for understanding important proteins in the context of their cellular environments, providing a rational basis for deriving useful strategies in drug design. Besides drug target identification and inferring mechanisms of action, networks enable us to address new ideas that could prove extremely useful for new drug discovery, such as drug repositioning, drug synergy, polypharmacology and personalized medicine.