883 results for Scale-free network
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction, and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it for real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
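To make the notion of a gapless (connectivity-problem-free) network concrete, the sketch below performs the kind of reachability check such an analysis implies: metabolites producible from a seed set are expanded to a fixed point, and only reactions whose substrates are all producible are kept. The toy reactions, seed set, and dict encoding are illustrative assumptions, not the algorithms of the thesis.

```python
# Minimal sketch: keep reactions whose substrates are all producible,
# expanding the producible metabolite set to a fixed point.
# The toy network and seed set below are illustrative only.

reactions = {
    "r1": ({"glc"}, {"g6p"}),          # substrates -> products
    "r2": ({"g6p"}, {"f6p"}),
    "r3": ({"f6p", "atp"}, {"fbp"}),   # blocked unless atp is producible
}
seed = {"glc", "atp"}

def gapless_reactions(reactions, seed):
    producible = set(seed)
    active = set()
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in active and subs <= producible:
                active.add(name)
                producible |= prods
                changed = True
    return active, producible

active, producible = gapless_reactions(reactions, seed)
print(sorted(active))      # reactions usable without connectivity gaps
print(sorted(producible))  # metabolites reachable from the seed set
```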
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most out of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
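To make the max-min linear program concrete, one standard reformulation maximises an auxiliary variable t subject to each objective row being at least t. The sketch below does this with scipy.optimize.linprog on a made-up two-flow example; it is only a hedged illustration of the problem class, not of the local approximation algorithms studied in the thesis.

```python
# Minimal sketch of a max-min LP:  maximise t  s.t.  t <= x_k for each flow k,
# A x <= b (resource/battery constraints), x >= 0.  The numbers are illustrative.
from scipy.optimize import linprog

# variables: x1, x2, t
c = [0.0, 0.0, -1.0]               # linprog minimises, so minimise -t
A_ub = [
    [1.0, 2.0, 0.0],               # energy constraint at node A
    [2.0, 1.0, 0.0],               # energy constraint at node B
    [-1.0, 0.0, 1.0],              # t <= x1
    [0.0, -1.0, 1.0],              # t <= x2
]
b_ub = [10.0, 10.0, 0.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)             # optimal flows and the max-min value
```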
Abstract:
Tuberculosis continues to be a major health challenge, warranting newer strategies for therapeutic intervention and newer approaches to discover them. Here, we report the identification of efficient metabolism disruption strategies by analysis of a reactome network. Protein-protein dependencies at a genome scale are derived from the curated metabolic network, from which insights into the nature and extent of inter-protein and inter-pathway dependencies have been obtained. A functional distance matrix, and a nearness index subsequently derived from it, help in understanding how the influence of a given protein can pervade the metabolic network. Thus, the nearness index can be viewed as a metabolic disruptability index, which suggests possible strategies for achieving maximal metabolic disruption by inhibition of the least number of proteins. A greedy approach has been used to identify the most influential singleton, and its combination with the other most pervasive proteins, to obtain highly influential pairs, triplets and quadruplets. The effect of deletion of these combinations on cellular metabolism has been studied by flux balance analysis. An obvious outcome of this study is the rational identification of drug targets that efficiently bring down mycobacterial metabolism.
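As a hedged sketch of the greedy idea described above, the snippet below ranks nodes of a toy dependency graph by a closeness-style coverage score and then greedily extends the best singleton to a pair. The random graph, the distance cutoff, and the coverage score are illustrative assumptions, not the paper's curated reactome or its exact nearness index.

```python
# Sketch: greedy selection of influential nodes on a toy dependency graph.
# "Influence" is approximated by how many nodes lie within a shortest-path
# cutoff -- a stand-in for the nearness index described in the abstract.
import networkx as nx

G = nx.erdos_renyi_graph(30, 0.1, seed=1)   # placeholder for a reactome-derived graph
CUTOFF = 2

def influence(nodes):
    covered = set()
    for n in nodes:
        covered |= set(nx.single_source_shortest_path_length(G, n, cutoff=CUTOFF))
    return len(covered)

best_single = max(G.nodes, key=lambda n: influence([n]))
best_partner = max((n for n in G.nodes if n != best_single),
                   key=lambda n: influence([best_single, n]))
print(best_single, influence([best_single]))
print((best_single, best_partner), influence([best_single, best_partner]))
```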
Abstract:
The interdependence of allostery and enzymatic catalysis, and the idea that both are guided by conformational mobility, is gaining increased prominence. However, to gain a molecular-level understanding of allostery and hence of enzymatic catalysis, it is of utmost importance that the networks of amino acids participating in allostery be deciphered. Our lab has been exploring methods of network analysis combined with molecular dynamics simulations to understand allostery at the molecular level. Earlier, we outlined methods to obtain communication paths and then to map the rigid/flexible regions of proteins through network parameters such as the shortest correlated paths, cliques, and communities. In this article, we advance the methodology to estimate the conformational populations in terms of cliques/communities formed by interactions including the side chains and then to compute the ligand-induced population shift. Finally, we obtain the free-energy landscape of the protein in equilibrium, characterizing the free-energy minima accessed by the protein complexes. For this investigation, we have chosen human tryptophanyl-tRNA synthetase (hTrpRS), a protein responsible for charging tryptophan onto its cognate tRNA during protein biosynthesis. This is a multidomain protein exhibiting excellent allosteric communication. Our approach has provided valuable structural as well as functional insights into the protein. The methodology adopted here is highly general and can illuminate the linkage between protein structure networks and the conformational mobility involved in the allosteric mechanism of any protein with a known structure.
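As an illustration of the network operations mentioned above (shortest communication paths and clique/community detection), the sketch below runs them on a tiny weighted toy graph with networkx; the residue labels, interaction strengths, and weighting scheme are placeholders rather than anything derived from the hTrpRS simulations.

```python
# Sketch: shortest paths and communities on a toy residue-interaction graph.
# Edge weights mimic interaction strengths; strong contacts get short distances.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("A1", "A2", 0.9), ("A2", "A3", 0.8), ("A3", "B1", 0.2),
         ("B1", "B2", 0.9), ("B2", "B3", 0.7), ("A1", "B3", 0.1)]
G = nx.Graph()
for u, v, strength in edges:
    G.add_edge(u, v, distance=1.0 / strength)   # strong contact -> short path

path = nx.shortest_path(G, "A1", "B2", weight="distance")   # a "communication path"
communities = greedy_modularity_communities(G)               # clustered regions
print(path)
print([sorted(c) for c in communities])
```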
Abstract:
Optimizing the energy consumption of existing synchronization mechanisms can lead to substantial gains in network lifetime in Wireless Sensor Networks (WSNs). In this paper, we analyze ERBS and TPSN, two existing synchronization algorithms for WSNs which use widely different approaches, and compare their performance in large-scale WSNs, each consisting of a different type of platform and varying node density. We then propose a novel algorithm, PROBESYNC, which takes advantage of the differences in power required to transmit and receive a message in ERBS and TPSN and mitigates the shortcomings of each of these algorithms. This leads to considerable improvement in energy conservation and enhanced lifetime of large-scale WSNs.
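As a rough, hedged illustration of why the transmit/receive power asymmetry matters when comparing receiver-receiver (ERBS-style) and sender-receiver (TPSN-style) synchronisation, the snippet below tallies message energy for one toy synchronisation round; the per-message costs and message counts are assumptions, not values from the paper.

```python
# Toy energy accounting for one synchronisation round between two nodes.
# E_TX and E_RX are hypothetical per-message costs (arbitrary units).
E_TX, E_RX = 2.0, 1.0

def receiver_receiver_round():
    # One reference broadcast heard by both nodes, then one exchange between them.
    return (E_TX + 2 * E_RX) + (E_TX + E_RX)

def sender_receiver_round():
    # Two-way handshake between the pair (request + reply), each heard once.
    return 2 * (E_TX + E_RX)

print("receiver-receiver:", receiver_receiver_round())
print("sender-receiver:  ", sender_receiver_round())
```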
Abstract:
The Earth's ecosystems are protected from the dangerous part of the solar ultraviolet (UV) radiation by stratospheric ozone, which absorbs most of the harmful UV wavelengths. Severe depletion of stratospheric ozone has been observed in the Antarctic region, and to a lesser extent in the Arctic and midlatitudes. Concern about the effects of increasing UV radiation on human beings and the natural environment has led to ground-based monitoring of UV radiation. In order to achieve high-quality UV time series for scientific analyses, proper quality control (QC) and quality assurance (QA) procedures have to be followed. In this work, practices of QC and QA are developed for Brewer spectroradiometers and NILU-UV multifilter radiometers, which measure in the Arctic and Antarctic regions, respectively. These practices are applicable to other UV instruments as well. The spectral features and the effect of different factors affecting UV radiation were studied for the spectral UV time series at Sodankylä. The QA of the Finnish Meteorological Institute's (FMI) two Brewer spectroradiometers included daily maintenance, laboratory characterizations, the calculation of long-term spectral responsivity, data processing and quality assessment. New methods for the cosine correction, the temperature correction and the calculation of long-term changes of spectral responsivity were developed. Reconstructed UV irradiances were used as a QA tool for spectroradiometer data. The actual cosine correction factor was found to vary between 1.08-1.12 and 1.08-1.13. The temperature characterization showed a linear dependence between the instrument's internal temperature and the photon counts per cycle. Both Brewers have participated in international spectroradiometer comparisons and have shown good stability. The differences between the Brewers and the portable reference spectroradiometer QASUME have been within 5% during 2002-2010. The features of the spectral UV radiation time series at Sodankylä were analysed for the time period 1990-2001. No statistically significant long-term changes in UV irradiances were found, and the results were strongly dependent on the time period studied. Ozone was the dominant factor affecting UV radiation during the springtime, whereas clouds played a more important role during the summertime. During this work, the Antarctic NILU-UV multifilter radiometer network was established by the Instituto Nacional de Meteorología (INM) as a joint Spanish-Argentinian-Finnish cooperation project. As part of this work, the QC/QA practices of the network were developed. They included training of the operators, daily maintenance, regular lamp tests and solar comparisons with the travelling reference instrument. Drifts of up to 35% in the sensitivity of the channels of the NILU-UV multifilter radiometers were found during the first four years of operation. This work emphasized the importance of proper QC/QA, including regular lamp tests, for multifilter radiometers as well. The effects of the drifts were corrected by a method that scales the site NILU-UV channels to those of the travelling reference NILU-UV. After correction, the mean ratios of erythemally weighted UV dose rates measured during solar comparisons between the reference NILU-UV and the site NILU-UVs were 1.007±0.011 and 1.012±0.012 for Ushuaia and Marambio, respectively, for solar zenith angles up to 80°.
Solar comparisons between the NILU-UVs and spectroradiometers showed differences within ±5% near local noon, which can be seen as proof of successful QC/QA procedures and of the transfer of irradiance scales. This work also showed that UV measurements made in the Arctic and Antarctic can be comparable with each other.
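A minimal sketch of the scaling correction described above: site-instrument channels are multiplied by factors obtained from a side-by-side comparison with the travelling reference. The channel names, readings, and resulting factors are invented for illustration and do not reproduce the NILU-UV processing chain.

```python
# Sketch: correct drifted site-radiometer channels by scaling them to a
# travelling reference measured side by side.  All numbers are illustrative.
site_reading = {"305nm": 0.82, "320nm": 1.45, "340nm": 2.10}   # drifted site channels
reference    = {"305nm": 1.00, "320nm": 1.60, "340nm": 2.05}   # travelling reference

scale = {ch: reference[ch] / site_reading[ch] for ch in site_reading}

def correct(raw):
    return {ch: raw[ch] * scale[ch] for ch in raw}

print(scale)
print(correct({"305nm": 0.75, "320nm": 1.30, "340nm": 2.00}))
```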
Abstract:
Carbon nanotubes, seamless cylinders made from carbon atoms, have outstanding characteristics: inherent nano-size, record-high Young's modulus, high thermal stability and chemical inertness. They also have extraordinary electronic properties: in addition to extremely high conductance, they can be both metals and semiconductors without any external doping, just due to minute changes in the arrangements of atoms. As traditional silicon-based devices are reaching the level of miniaturisation where leakage currents become a problem, these properties make nanotubes a promising material for applications in nanoelectronics. However, several obstacles must be overcome for the development of nanotube-based nanoelectronics. One of them is the ability to locally modify the electronic structure of carbon nanotubes and to create reliable interconnects between nanotubes and metal contacts, which could then be used to integrate nanotubes into macroscopic electronic devices. In this thesis, the possibility of using ion and electron irradiation as a tool to introduce defects in nanotubes in a controllable manner and to achieve these goals is explored. Defects are known to modify the electronic properties of carbon nanotubes. Some defects are always present in pristine nanotubes, and more are naturally introduced during irradiation. Obviously, their density can be controlled by the irradiation dose. Since different types of defects have very different effects on the conductivity, knowledge of their abundance as induced by ion irradiation is central for controlling the conductivity. In this thesis, the response of single-walled carbon nanotubes to ion irradiation is studied. It is shown that, indeed, the conductance can be controlled by energy-selective irradiation. Not only the conductivity but also the local electronic structure of single-walled carbon nanotubes can be changed by the defects. The presented studies show a variety of changes in the electronic structures of semiconducting single-walled nanotubes, ranging from individual new states in the band gap to changes in the band gap width. The extensive simulation results for various types of defect make it possible to unequivocally identify defects in single-walled carbon nanotubes by combining electronic structure calculations and scanning tunneling spectroscopy, offering reference data for a wide scientific community of researchers studying nanotubes with surface probe microscopy methods. In electronics applications, carbon nanotubes have to be interconnected to the macroscopic world via metal contacts. Interactions between the nanotubes and metal particles are also essential for nanotube synthesis, as single-walled nanotubes are always grown from metal catalyst particles. In this thesis, both the growth and the creation of nanotube-metal nanoparticle interconnects driven by electron irradiation are studied. Surface curvature and the size of metal nanoparticles are demonstrated to determine the local carbon solubility in these particles. As for nanotube-metal contacts, previous experiments have demonstrated that junctions between carbon nanotubes and metal nanoparticles can be created under irradiation in a transmission electron microscope. In this thesis, the microscopic mechanism of junction formation is studied by atomistic simulations carried out at various levels of sophistication.
It is shown that structural defects created by the electron beam and efficient reconstruction of the nanotube atomic network, inherently related to the nanometer size and quasi-one dimensional structure of nanotubes, are the driving force for junction formation. Thus, the results of this thesis not only address practical aspects of irradiation-mediated engineering of nanosystems, but also contribute to our understanding of the behaviour of point defects in low-dimensional nanoscale materials.
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples: our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
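The sketch below illustrates the tree algorithm in its simplest form: node values are aggregated up a BFS spanning tree towards the operator station, one hop per round, so the round count equals the tree depth. The random geometric graph, radius, and values are placeholders, and interference and scheduling, which drive the scaling laws above, are deliberately ignored.

```python
# Sketch: max computation up a BFS spanning tree of a random geometric graph.
# Rounds ~ tree depth; contention and scheduling are deliberately ignored.
# Assumes the sampled graph is connected (radius chosen well above threshold).
import random
import networkx as nx

random.seed(0)
G = nx.random_geometric_graph(100, 0.25, seed=0)   # placeholder deployment
values = {v: random.random() for v in G.nodes}     # sensor measurements
root = 0                                           # operator station

tree = nx.bfs_tree(G, root)
depth = nx.dag_longest_path_length(tree)           # rounds needed, one hop per round

agg = dict(values)
for v in reversed(list(nx.topological_sort(tree))):   # process leaves first
    for child in tree.successors(v):
        agg[v] = max(agg[v], agg[child])

print(agg[root] == max(values.values()), "rounds ~", depth)
```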
Abstract:
An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km² watershed in southern India, and various components of the water balance were then calculated. Later, the water level data underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used in re-calculation of the water-balance components. It was established that such an optimized network requires far fewer measurement points without considerably changing the conclusions regarding the groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (watersheds greater than 50 km²).
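A hedged sketch of the redundancy idea behind the network optimisation: each measurement point is dropped in turn and re-estimated from the remaining points, and points that are predicted well by their neighbours are the most redundant. Inverse-distance weighting is used here only as a stand-in for the geostatistical (kriging) cross-validation of the study, and the coordinates and water levels are synthetic.

```python
# Sketch: leave-one-out redundancy ranking of water-level measurement points.
# Inverse-distance weighting stands in for kriging cross-validation.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(25, 2))                  # piezometer coordinates (km)
levels = 50 + 0.5 * xy[:, 0] + rng.normal(0, 0.2, 25)  # synthetic water levels (m)

def idw(target, pts, vals, power=2.0):
    d = np.linalg.norm(pts - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * vals) / np.sum(w)

errors = []
for i in range(len(xy)):
    mask = np.arange(len(xy)) != i
    pred = idw(xy[i], xy[mask], levels[mask])
    errors.append(abs(pred - levels[i]))

redundant_first = np.argsort(errors)       # smallest error = most redundant point
print(redundant_first[:5], np.round(np.sort(errors)[:5], 3))
```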
Abstract:
My doctoral dissertation in sociology and Russian studies, Social Networks and Everyday Practices in Russia, employs a "micro" or "grassroots" perspective on the transition. The study is a collection of articles detailing social networks in five different contexts. The first article examines Russian birthdays from a network perspective. The second takes a look at health care to see whether networks have become obsolete in a sector that is still overwhelmingly public, but increasingly being monetarised. The third article investigates neighbourhood relations. The fourth details relationships at work, particularly from the vantage point of internal migration. The fifth explores housing and the role of networks and money in both the Soviet and post-Soviet eras. The study is based on qualitative social network and interview data gathered among three groups (teachers, doctors and factory workers) in St. Petersburg during 1993-2000. Methodologically, it builds on a qualitative social network approach. The study adds a critical element to the discussion on networks in post-socialism. A considerable consensus exists that social networks were vital in state socialist societies and were used to bypass various difficulties caused by endemic shortages and bureaucratic rigidities, but a more debated issue has been their role in post-socialism. Some scholars have argued that the importance of networks has been dramatically reduced in the new market economy, whereas others have stressed their continuing importance. If a common denominator in both has been a focus on networks in relation to the past, a more overlooked aspect has been the question of inequality. To what extent is access to networks unequally distributed? What are the limits and consequences of networks, for those who have access, those outside networks, or society at large? My study provides some evidence about inequalities. It shows that some groups are privileged over others, for instance, middle-class people in informal access to health care. Moreover, analysing the formation of networks sheds additional light on inequalities, as it highlights the importance of migration as a mechanism of inequality, for example. The five articles focus on how networks are actually used in everyday life. The article on health care, for instance, shows that personal connections are still important and popular in post-Soviet Russia, despite the growing importance of money and the emergence of "fee for service" medicine. Fifteen of twenty teachers were involved in informal medical exchange during a two-week study period, using their networks to bypass formal market mechanisms or official procedures. Medicines were obtained through personal connections because some were unavailable at local pharmacies or because these connections could provide medicines at a cheaper price or even for free. The article on neighbours shows that "mutual help" was the central feature of neighbouring: the exchange of goods, services and information covered almost half of the reported contacts with neighbours. Neighbours did not provide merely small-scale help but were often exchange partners because they possessed important professional qualities, had access to workplace resources, or knew somebody useful. The article on the Russian work collective details workplace-related relationships in a tractor factory and shows that interaction with and assistance from one's co-workers remains important.
The most interesting finding was that co-workers were even more important to those who had migrated to the city than to those who were born there, which is explained by the specifics of Soviet migration. As a result, the workplace heavily influenced or absorbed the contexts in which worker migrants could establish relationships, whereas many meeting places commonly available in Western countries were largely absent or at least did not function as trusted public places for initiating relationships. More results can be found in my dissertation: Anna-Maria Salmi, Social Networks and Everyday Practices in Russia, Kikimora Publications, 2006; see www.kikimora-publications.com.
Abstract:
Following a growth-doping technique, highly luminescent (quantum yield >50%) Mn-doped ZnS nanocrystals are synthesized via a colloidal synthetic technique. The dopant emission has been optimized by varying the reaction parameters, and the ratio of Zn to S as well as the percentage of dopant introduced in the reaction mixture were found to be key factors for controlling the intensity. The method is simple, hassle-free, and can be scaled to the gram level without compromising the quality of the nanocrystals. These nanocrystals retain their emission during various ligand-exchange processes and in aqueous dispersion.
Abstract:
An expression for the free energy of mixing of a divalent basic oxide (MO) with SiO2, derived from a model of silicate structure, takes into account the distribution of O2- (from MO) into the silica network, the mixing of silicate ions with O2-, and the enthalpy of mixing. The resulting expression is ΔG_mix = RT{ N1 ln[(2N1 − N)² / (4N1(1 − N))] + N2 ln[(N2 − N) / (1 − N)] }, where N = [(β + N1) − √((β + N1)² − 8βN1N2)] / (2β), β is a characteristic constant for the system, N1 is the mole fraction of silica, and N2 is the mole fraction of MO. For a proper choice of β, calculated values of the activity of MO for the systems PbO-SiO2, MnO-SiO2, FeO-SiO2 and CaO-SiO2 are in good agreement with experiment. The model predicts that the activity of the basic oxide decreases with increasing temperature.
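For concreteness, the snippet below evaluates the expression above at one illustrative composition and temperature; the value of β is a placeholder, since the fitted constants for the individual MO-SiO2 systems are not given in the abstract.

```python
# Sketch: evaluate the reconstructed free-energy-of-mixing expression.
# beta is hypothetical; composition and temperature are illustrative only.
import math

R = 8.314          # J mol-1 K-1
T = 1873.0         # K, an illustrative high-temperature value
beta = 2.0         # hypothetical characteristic constant
N1, N2 = 0.6, 0.4  # mole fractions of SiO2 and MO

N = ((beta + N1) - math.sqrt((beta + N1) ** 2 - 8 * beta * N1 * N2)) / (2 * beta)
dG_mix = R * T * (N1 * math.log((2 * N1 - N) ** 2 / (4 * N1 * (1 - N)))
                  + N2 * math.log((N2 - N) / (1 - N)))
print(round(N, 4), round(dG_mix / 1000, 2), "kJ/mol")
```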
Abstract:
We present a new computationally efficient method for large-scale polypeptide folding using coarse-grained elastic networks and gradient-based continuous optimization techniques. The folding is governed by minimization of an energy based on Miyazawa-Jernigan contact potentials. Using this method, we are able to substantially reduce the computation time required on ordinary desktop computers to simulate polypeptide folding starting from a fully unfolded state. We compare our results with available native-state structures from the Protein Data Bank (PDB) for a few de-novo proteins and two natural proteins, ubiquitin and lysozyme. Based on our simulations, we are able to draw the energy landscape for a small de-novo protein, Chignolin. We also use two well-known protein structure prediction software packages, MODELLER and GROMACS, to compare our results. Finally, we show how a modification of the normal elastic network model can lead to higher accuracy and a shorter simulation time.
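As a toy analogue of gradient-based continuous minimisation on a coarse-grained network, the snippet below relaxes a short bead chain under harmonic bond springs plus one attractive end-to-end contact using plain gradient descent; the energy function is a simple stand-in, not the Miyazawa-Jernigan contact potential or elastic-network model used in the work.

```python
# Toy sketch: gradient descent on a coarse-grained bead-chain energy
# (harmonic bonds + one end-to-end contact), not the MJ potential.
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = np.cumsum(rng.normal(1.0, 0.1, size=(n, 3)), axis=0)   # extended initial chain

def energy_and_grad(x, k_bond=10.0, r0=1.0, k_contact=2.0, contact=(0, n - 1)):
    e, g = 0.0, np.zeros_like(x)
    pairs = [(i, i + 1, k_bond) for i in range(n - 1)] + [(*contact, k_contact)]
    for i, j, k in pairs:
        d = x[i] - x[j]
        r = np.linalg.norm(d)
        e += 0.5 * k * (r - r0) ** 2
        f = k * (r - r0) * d / r       # dE/dx_i for this spring
        g[i] += f
        g[j] -= f
    return e, g

step = 1e-2
for _ in range(2000):                   # plain gradient descent
    e, g = energy_and_grad(x)
    x -= step * g
print(round(e, 4))                      # relaxed energy; end-to-end contact forms
```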
Abstract:
The near flow field of small-aspect-ratio elliptic turbulent free jets (issuing from a nozzle and an orifice) was experimentally studied using 2D PIV. Two-point velocity correlations in these jets revealed the extent and orientation of the large-scale structures in the major and minor planes. Spatial filtering of the instantaneous velocity field using a Gaussian convolution kernel shows that while a single large vortex ring circumscribing the jet seems to be present at the exit of the nozzle, the orifice jet exhibited a number of smaller vortex-ring pairs close to the jet exit. The smaller length scale observed in the case of the orifice jet is representative of the smaller azimuthal vortex rings that generate an axial vortex field as they are convected. This results in axis switching in the case of the orifice jet and may involve a mechanism different from the self-induction process observed in the case of the contoured nozzle jet flow.
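The sketch below applies the kind of Gaussian spatial filtering mentioned above to a synthetic 2D velocity field with scipy.ndimage, separating a large-scale component from small-scale fluctuations; the field, grid, and kernel width are placeholders rather than the PIV data of the study.

```python
# Sketch: Gaussian spatial filtering of a synthetic 2D velocity field,
# separating large-scale motion from small-scale fluctuations.
import numpy as np
from scipy.ndimage import gaussian_filter

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
rng = np.random.default_rng(3)
u = np.sin(2 * np.pi * x / nx) + 0.3 * rng.standard_normal((ny, nx))  # streamwise
v = np.cos(2 * np.pi * y / ny) + 0.3 * rng.standard_normal((ny, nx))  # cross-stream

sigma = 4.0                              # kernel width in grid points (illustrative)
u_large, v_large = gaussian_filter(u, sigma), gaussian_filter(v, sigma)
u_small, v_small = u - u_large, v - v_large

print(np.std(u), np.std(u_large), np.std(u_small))
```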
Abstract:
In the framework of the ECSK [Einstein-Cartan-Sciama-Kibble] theory of cosmology, a scalar field nonminimally coupled to the gravitational field is considered. For a Robertson-Walker open universe (k=0) in the radiation era, the field equations admit a singularity-free solution for the scale factor. In this theory, the torsion is generated through the nonminimal coupling of a scalar field to the gravitational field. The nonsingular nature of the cosmological model automatically solves the flatness problem. Furthermore, the absence of an event horizon and a particle horizon explains the high degree of isotropy, especially of the 2.7-K background radiation.