979 results for network protocol


Relevance:

20.00%

Publisher:

Abstract:

Glioblastoma (GBM; grade IV astrocytoma) is a very aggressive form of brain cancer with poor survival and few qualified predictive markers. This study integrates experimentally validated genes that showed specific upregulation in GBM along with their protein-protein interaction information. A system-level analysis was used to construct a GBM-specific network. Computation of topological parameters of the networks showed a scale-free pattern and hierarchical organization. From the large network involving 1,447 proteins, we synthesized subnetworks and annotated them with highly enriched biological processes. A careful dissection of the functional modules, important nodes, and their connections identified two novel intermediary molecules, CSK21 and protein phosphatase 1 alpha (PP1A), connecting the two subnetworks CDC2-PTEN-TOP2A-CAV1-P53 and CDC2-CAV1-RB-P53-PTEN, respectively. Real-time quantitative reverse transcription-PCR analysis revealed CSK21 to be moderately upregulated and PP1A to be overexpressed by 20-fold in GBM tumor samples. Immunohistochemical staining revealed nuclear expression of PP1A only in GBM samples. Thus, CSK21 and PP1A, whose functions are intimately associated with cell cycle regulation, might play a key role in gliomagenesis. Cancer Res; 70(16); 6437-47. ©2010 AACR.
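
As a rough illustration of the topological analysis described above, the sketch below computes the degree distribution, average clustering and candidate hubs of a small protein-protein interaction graph with networkx. The edge list is a hypothetical toy example, not the GBM data, and this is not the authors' pipeline.

```python
# Minimal sketch: degree distribution and hub nodes of a toy PPI network.
# The edge list is hypothetical illustrative data, not the GBM network.
import networkx as nx
from collections import Counter

edges = [("TP53", "CDC2"), ("TP53", "PTEN"), ("CDC2", "CAV1"),
         ("PTEN", "TOP2A"), ("CAV1", "RB1"), ("TP53", "CAV1")]
G = nx.Graph(edges)

# Scale-free networks show a heavy-tailed degree distribution.
degree_counts = Counter(d for _, d in G.degree())
print("degree distribution:", dict(degree_counts))

# Hierarchical organization is often probed via the clustering coefficient.
print("average clustering:", nx.average_clustering(G))

# Candidate hubs: the highest-degree nodes.
print("top hubs:", sorted(G.degree(), key=lambda kv: kv[1], reverse=True)[:3])
```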

Relevance:

20.00%

Publisher:

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that puts together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them being new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, a message, or goods), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well.

A second solution procedure, called the total residue approach, has also been embedded into the first one. It changes the equality constraints so that faster convergence of the iterations can be obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
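
To make the Lagrange multiplier setup concrete, the sketch below solves a tiny quadratic network-flow problem directly through its KKT (necessary-condition) system. The three-arc network, cost coefficients and injections are hypothetical, and this is not the paper's two-stage algorithm or total residue approach.

```python
# Minimal sketch (hypothetical data): min 0.5 * x^T H x  subject to the node
# balance equations A x = b, solved via the Lagrange (KKT) system
#   [H  A^T] [x     ]   [0]
#   [A   0 ] [lambda] = [b]
import numpy as np

# Node-arc incidence matrix for a 3-node, 3-arc network.
A = np.array([[ 1.0,  0.0,  1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0, -1.0]])[:-1]  # drop one row: the balances are dependent
b = np.array([1.0, 0.0])                 # net injections at the retained nodes
H = np.diag([2.0, 1.0, 3.0])             # quadratic arc-cost (loss) coefficients

n, m = H.shape[0], A.shape[0]
kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print("arc flows:", x, "multipliers:", lam)
```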

Relevance:

20.00%

Publisher:

Abstract:

Frequency response analysis is critical in understanding the steady-state and transient behavior of any electrical network. A network analyzer, or frequency response analyzer, is used to determine the frequency response of an electrical network. This paper deals with the design of an inexpensive, digitally controlled network analyzer. The frequency range of the analyzer is from 10 Hz to 50 kHz (a suitable range for system studies on most power electronics apparatus). It is composed of a microcontroller (as the central processing unit) and a personal computer (as the analyzer and display). The communication between the microcontroller and the personal computer is established through one of the USB ports. The testing and evaluation of the analyzer are done with RC, RLC and multi-resonant circuits. The design steps, basis of analysis, experimental results, limitations in bandwidth, and possible techniques for improving performance are presented.
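
One common way a digitally controlled analyzer estimates gain and phase at a single test frequency is quadrature correlation of the sampled response (a single-bin DFT). The sketch below illustrates that idea on a simulated first-order RC circuit; the sample rate, test frequency and circuit model are assumptions, not values from the paper.

```python
# Minimal sketch: estimate gain and phase at one test frequency by correlating
# the sampled response with quadrature references (a single-bin DFT).
import numpy as np

fs, f_test, n = 200_000.0, 1_000.0, 4_000      # sample rate, test freq, samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_test * t)             # excitation

# Simulated device under test: first-order RC low-pass with fc = 1 kHz.
fc = 1_000.0
h = 1.0 / (1.0 + 1j * f_test / fc)             # ideal response at f_test
y = abs(h) * np.sin(2 * np.pi * f_test * t + np.angle(h))

# Quadrature correlation extracts the complex response at f_test.
ref = np.exp(-1j * 2 * np.pi * f_test * t)
resp = np.mean(y * ref) / np.mean(x * ref)
print("gain  [dB]:", 20 * np.log10(abs(resp)))     # about -3 dB
print("phase [deg]:", np.degrees(np.angle(resp)))  # about -45 degrees
```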

Relevance:

20.00%

Publisher:

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on a radial basis function (RBF) neural network and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities; this shows the potential of soft computing methods for specific signal-processing applications. ©2005 Elsevier B.V. All rights reserved.
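
A weighted recursive median filter of the kind explored in the study replicates each window sample according to an integer weight and uses already-filtered outputs on the causal side of the window. The sketch below is a minimal illustration with assumed weights and a synthetic damage-indicator signal, not the genetically optimised filter from the paper.

```python
# Minimal sketch of a weighted recursive median (WRM) filter with assumed weights.
import numpy as np

def wrm_filter(x, weights):
    """weights: odd-length integer weights; the centre weight applies to x[n]."""
    half = len(weights) // 2
    y = np.array(x, dtype=float)
    for n in range(len(x)):
        window = []
        for k, w in enumerate(weights):
            i = n + k - half
            if i < 0 or i >= len(x):
                continue
            sample = y[i] if i < n else x[i]   # recursive: reuse past outputs
            window.extend([sample] * int(w))   # weight = number of replications
        y[n] = np.median(window)
    return y

# Toy damage-indicator signal: slow growth, an abrupt step fault, noise, outliers.
rng = np.random.default_rng(0)
t = np.arange(200)
signal = 0.002 * t + 0.5 * (t > 120) + 0.05 * rng.standard_normal(200)
signal[[40, 90, 150]] += 1.5                   # non-Gaussian outliers
denoised = wrm_filter(signal, weights=[1, 2, 3, 2, 1])
```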

Relevance:

20.00%

Publisher:

Abstract:

We propose a novel algorithm for placement of standard cells in VLSI circuits based on an analogy of this problem with neural networks. By employing some of the organising principles of these nets, we have attempted to improve the behaviour of the bipartitioning method as proposed by Kernighan and Lin. Our algorithm yields better quality placements compared with the above method, and also makes the final placement independent of the initial partition.
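
For reference, the Kernighan-Lin bipartitioning step that the proposed algorithm starts from is available in networkx. The sketch below applies it to a hypothetical toy netlist and reports the cut size; it is not the neural placement algorithm itself.

```python
# Minimal sketch: Kernighan-Lin bisection of a toy netlist modelled as a graph.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy netlist: nodes are standard cells, edges are nets connecting them.
G = nx.Graph([("c1", "c2"), ("c2", "c3"), ("c3", "c4"),
              ("c4", "c5"), ("c5", "c6"), ("c1", "c6"), ("c2", "c5")])

left, right = kernighan_lin_bisection(G, seed=42)
cut = sum(1 for u, v in G.edges if (u in left) != (v in left))
print("partition:", left, right, "cut size:", cut)
```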

Relevance:

20.00%

Publisher:

Abstract:

We present an implementation of a multicast network of processors. The processors are connected in a fully connected network and it is possible to broadcast data in a single instruction. The network works at the processor-memory speed and therefore provides a fast communication link among processors. A number of interesting architectures are possible using such a network. We show some of these architectures which have been implemented and are functional. We also show the system software calls which allow programming of these machines in parallel mode.
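
The broadcast instruction and system software calls are specific to the hardware described and are not given in the abstract. As a generic stand-in, the sketch below shows how a one-call broadcast among processors looks in MPI via mpi4py.

```python
# Minimal sketch (generic stand-in, not this machine's system calls): broadcast
# data from one processor to all others with a single collective call.
# Run with, for example: mpiexec -n 4 python bcast_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = {"coeffs": [1.0, 2.5, 3.3]} if rank == 0 else None
data = comm.bcast(data, root=0)        # every processor receives the same data
print(f"rank {rank} received {data}")
```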

Relevance:

20.00%

Publisher:

Abstract:

A generalised formulation of the mathematical model developed for the analysis of transients in a canal network, under subcritical flow, with any realistic combination of control structures and their multiple operations, has been presented. The model accounts for a large variety of control structures such as weirs, gates and notches discharging under different conditions, namely submerged and unsubmerged. A numerical scheme to compute an approximate steady-state flow condition as the initial condition has also been presented. The model can handle complex situations that may arise from multiple gate operations. This has been demonstrated with a problem in which the boundary conditions change from a gate discharge equation to an energy equation and back to a gate discharge equation. In such a situation the wave strikes a fixed gate and leads to large and rapid fluctuations in both discharge and depth.
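
As an illustration of the gate boundary conditions mentioned above, the sketch below uses standard textbook sluice-gate relations that switch between free and submerged discharge; these are assumed forms, not necessarily the exact structure equations used in the model.

```python
# Minimal sketch: sluice-gate discharge switching between free and submerged
# formulas depending on the downstream depth (standard textbook relations).
import math

G = 9.81  # gravitational acceleration, m/s^2

def gate_discharge(h_up, h_down, opening, width, cd=0.61):
    """Discharge (m^3/s) through a vertical sluice gate; cd is an assumed coefficient."""
    if h_down <= opening:                 # free (unsubmerged) outflow
        head = h_up
    else:                                 # submerged outflow
        head = h_up - h_down
    return cd * width * opening * math.sqrt(2 * G * max(head, 0.0))

print(gate_discharge(h_up=2.0, h_down=0.3, opening=0.4, width=3.0))  # free
print(gate_discharge(h_up=2.0, h_down=1.5, opening=0.4, width=3.0))  # submerged
```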

Relevance:

20.00%

Publisher:

Abstract:

We present a low-complexity algorithm for intrusion detection in the presence of clutter arising from wind-blown vegetation, using Passive Infra-Red (PIR) sensors in a Wireless Sensor Network (WSN). The algorithm is based on a combination of Haar Transform (HT) and Support Vector Machine (SVM) based training and was field tested in a network setting comprising 15-20 sensing nodes. Also contained in this paper is a closed-form expression for the signal generated by an intruder moving at a constant velocity. It is shown how this expression can be exploited to determine the direction of motion and the velocity of the intruder from the signals of three well-positioned sensors.
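
The sketch below illustrates the overall Haar-plus-SVM idea: wavelet sub-band energies of a PIR window are used as features for an SVM that separates intruder windows from clutter. The feature layout, window length and synthetic signals are assumptions, not the paper's trained detector.

```python
# Minimal sketch: Haar-wavelet sub-band energies as features for an SVM that
# separates synthetic "intruder" windows from synthetic clutter windows.
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def haar_features(window):
    # Multi-level Haar decomposition; use the energy of each sub-band as a feature.
    coeffs = pywt.wavedec(window, "haar", level=4)
    return np.array([np.sum(c ** 2) for c in coeffs])

def synth_window(intruder):
    t = np.linspace(0, 1, 256)
    clutter = 0.3 * rng.standard_normal(256)            # wind-blown vegetation proxy
    pulse = np.exp(-((t - 0.5) ** 2) / 0.005) if intruder else 0.0
    return pulse + clutter

X = np.array([haar_features(synth_window(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```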

Relevance:

20.00%

Publisher:

Abstract:

This study is about the challenges of learning in the creation and implementation of new sustainable technologies. The system of biogas production in the Programme of Sustainable Swine Production (3S Programme) conducted by the Sadia food processing company in Santa Catarina State, Brazil, is used as a case example for exploring the challenges, possibilities and obstacles of learning in the use of biogas production as a way to increase the environmental sustainability of swine production. The aim is to contribute to the discussion about the possibilities of developing systems of biogas production for sustainability (BPfS). In the study I develop hypotheses concerning the central challenges and possibilities for developing systems of BPfS in three phases. First, I construct a model of the network of activities involved in BP for sustainability in the case study. Next, I construct a) an idealised model of the historically evolved concepts of BPfS through an analysis of the development of forms of BP and b) a hypothesis of the current central contradictions within and between the activity systems involved in BP for sustainability in the case study. This hypothesis is further developed through two empirical analyses: an analysis of the actors' senses in taking part in the system, and an analysis of the disturbance processes in the implementation and operation of the BP system in the 3S Programme. The historical analysis shows that BP for sustainability in the 3S Programme emerged as a feasible solution for the contradiction between environmental protection and concentration, intensification and specialisation in swine production. This contradiction created a threat to the supply of swine to the food processing company. In the food production activity, the contradiction was expressed as a contradiction between the desire of the company to become a sustainable company and the situation on the outsourced farms. For the swine producers, the contradiction was expressed between contradictory rules: the market exerted pressure for continual increases in scale, specialisation and concentration to keep production economically viable, while the environmental rules imposed a limit on this expansion. Although the observed disturbances in the biogas system seemed to be merely technical and localised within the farms, the analysis proposed that these disturbances were formed in and between the activity systems involved in the network of BPfS during the implementation. The disturbances observed could be explained by four contradictions: a) contradictions between the new, more expanded activity of sustainable swine production and the old activity, b) a contradiction between the concept of BP for carbon credits and BP for local use in the BPfS that was implemented, c) contradictions between the new UNFCCC methodology for applying for carbon credits and the small size of the farms, and d) contradictions between the technologies of biogas use and burning available in the market and the small size of the farms. The main finding of this study relates to the zone of proximal development (ZPD) of the BPfS in the Sadia food production chain. The model is first developed as a general model of concepts of BPfS and further developed here to the specific case of the BPfS in the 3S Programme. The model is composed of two developmental dimensions: societal and functional integration. The dimension of societal integration refers to the level of integration with other activities outside the farm.
At one extreme, biogas production is self-sufficient and highly independent and the products of BP are consumed within the farm, while at the other extreme BP is highly integrated in markets and networks of collaboration, and BP products are exchanged within the markets. The dimension of functional integration refers to the level of integration between products and production processes so that economies of scope can be achieved by combining several functions using the same utility. At one extreme, BP is specialised in only one product, which allows achieving economies of scale, while at the other extreme there is an integrated production in which several biogas products are produced in order to maximise the outcomes from the BP system. The analysis suggests that BP is moving towards a societal integration, towards the market and towards a functional integration in which several biogas products are combined. The model is a hypothesis to be further tested through interventions by collectively constructing the new proposed concept of BPfS. Another important contribution of this study refers to the concept of the learning challenge. Three central learning challenges for developing a sustainable system of BP in the 3S Programme were identified: 1) the development of cheaper and more practical technologies of burning and measuring the gas, as well as the reduction of costs of the process of certification, 2) the development of new ways of using biogas within farms, and 3) the creation of new local markets and networks for selling BP products. One general learning challenge is to find more varied and synergic ways of using BP products than solely for the production of carbon credits. Both the model of the ZPD of BPfS and the identified learning challenges could be used as learning tools to facilitate the development of biogas production systems. The proposed model of the ZPD could be used to analyse different types of agricultural activities that face a similar contradiction. The findings could be used in interventions to help actors to find their own expansive actions and developmental projects for change. Rather than proposing a standardised best concept of BPfS, the idea of these learning tools is to facilitate the analysis of local situations and to help actors to make their activities more sustainable.

Relevance:

20.00%

Publisher:

Abstract:

We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will have an arbitrarily large fraction of nodes in order to enable approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
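
The supercritical regime mentioned above can be visualised directly: with the connection radius a constant factor above the percolation threshold, a random geometric graph is usually not connected, yet its largest component contains most of the nodes. The sketch below checks this with networkx; the node count and radius are illustrative choices, not the paper's constants.

```python
# Minimal sketch: size of the giant component of a random geometric graph in a
# supercritical regime (radius above the percolation threshold, below the
# full-connectivity threshold). Parameters are illustrative assumptions.
import networkx as nx

n = 2000
radius = 0.03                       # assumed value above the percolation threshold
G = nx.random_geometric_graph(n, radius, seed=7)

components = sorted(nx.connected_components(G), key=len, reverse=True)
print("number of components:", len(components))
print("fraction of nodes in the largest component:", len(components[0]) / n)
```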

Relevance:

20.00%

Publisher:

Abstract:

The enzymes of the family of tRNA synthetases perform their functions with high precision by synchronously recognizing the anticodon region and the aminoacylation region, which are separated by ~70 Å in space. This precision in function is brought about by establishing good communication paths between the two regions. We have modeled the structure of the complex consisting of Escherichia coli methionyl-tRNA synthetase (MetRS), tRNA, and the activated methionine. Molecular dynamics simulations have been performed on the modeled structure to obtain the equilibrated structure of the complex, and the cross-correlations between the residues in MetRS have been evaluated. Furthermore, network analysis of these simulated structures has been carried out to elucidate the paths of communication between the activation site and the anticodon recognition site. This study has provided detailed paths of communication, which are consistent with experimental results. Similar studies have also been carried out on the complexes (MetRS + activated methionine) and (MetRS + tRNA), along with the ligand-free native enzyme. A comparison of the paths derived from the four simulations has clearly shown that the communication path is strongly correlated and unique to the enzyme complex that is bound to both the tRNA and the activated methionine. The details of the method of our investigation and the biological implications of the results are presented in this article. The method developed here could also be used to investigate any protein system where the function takes place through long-distance communication.
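
A generic version of the network step described above is sketched below: residues become nodes, strongly cross-correlated pairs become weighted edges, and communication paths are shortest paths on that graph. The correlation matrix is random stand-in data, and the cutoff and site indices are assumptions, not the authors' protocol.

```python
# Minimal sketch: communication paths as shortest paths on a residue graph
# built from a (stand-in) cross-correlation matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_res = 50
corr = rng.uniform(-1, 1, size=(n_res, n_res))
corr = (corr + corr.T) / 2                       # symmetrise the toy matrix

G = nx.Graph()
G.add_nodes_from(range(n_res))
for i in range(n_res):
    for j in range(i + 1, n_res):
        c = abs(corr[i, j])
        if c > 0.5:                              # assumed correlation cutoff
            # Higher correlation -> "shorter" edge for path searches.
            G.add_edge(i, j, weight=-np.log(c))

activation_site, anticodon_site = 0, 49          # hypothetical residue indices
if nx.has_path(G, activation_site, anticodon_site):
    path = nx.shortest_path(G, activation_site, anticodon_site, weight="weight")
    print("communication path:", path)
```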

Relevance:

20.00%

Publisher:

Abstract:

Background. Several types of networks, such as transcriptional, metabolic or protein-protein interaction networks of various organisms, have been constructed and have provided a variety of insights into metabolism and regulation. Here, we seek to exploit the reaction-based networks of three organisms for comparative genomics. We use concepts from spectral graph theory to systematically determine how differences in the basic metabolism of organisms are reflected at the systems level and in the overall topological structures of their metabolic networks. Methodology/Principal Findings. Metabolome-based reaction networks of Mycobacterium tuberculosis, Mycobacterium leprae and Escherichia coli have been constructed from the KEGG LIGAND database, followed by graph spectral analysis of the networks to identify hubs as well as the sub-clustering of reactions. The shortest and alternate paths in the reaction networks have also been examined. Sub-cluster profiling demonstrates that reactions of the mycolic acid pathway in mycobacteria form a tightly connected sub-cluster. Identification of hubs reveals reactions involving glutamate to be central to mycobacterial metabolism, and pyruvate to be at the centre of the E. coli metabolome. The analysis of shortest paths between reactions has revealed several paths that are shorter than well-established pathways. Conclusions. We conclude that severe downsizing of the M. leprae genome has not significantly altered the global structure of its reaction network but has reduced the total number of alternate paths between its reactions while keeping the shortest paths between them intact. The hubs in the mycobacterial networks that are absent in the human metabolome can be explored as potential drug targets. This work demonstrates the usefulness of constructing metabolome-based networks of organisms and the feasibility of their analysis through graph spectral methods. The insights obtained from such studies provide a broad overview of the similarities and differences between organisms, taking comparative genomics studies to a higher dimension.
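
As a small illustration of the graph spectral step, the sketch below uses the Fiedler vector of the graph Laplacian to split a toy reaction network into two sub-clusters; the reaction graph is hypothetical, not a KEGG-derived network.

```python
# Minimal sketch: spectral sub-clustering of a toy reaction network using the
# Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue).
import numpy as np
import networkx as nx

# Two loosely connected reaction groups (hypothetical reaction IDs).
edges = [("R1", "R2"), ("R2", "R3"), ("R1", "R3"),
         ("R4", "R5"), ("R5", "R6"), ("R4", "R6"),
         ("R3", "R4")]                            # single bridging reaction
G = nx.Graph(edges)
nodes = list(G.nodes)

L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

clusters = {node: int(fiedler[i] > 0) for i, node in enumerate(nodes)}
print(clusters)                                   # the sign splits the two groups
```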

Relevance:

20.00%

Publisher:

Abstract:

A neural network approach for solving the two-dimensional assignment problem is proposed. The design of the neural network is discussed and simulation results are presented. On the examples considered, the neural network obtains placements with 10-15% lower cost than the adjacent pairwise exchange method.
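
For context, small instances of the two-dimensional assignment problem can be solved exactly with the Hungarian method, which provides a cost baseline against which heuristic placements can be compared. The sketch below is such a baseline, not the proposed neural network.

```python
# Minimal sketch: exact solution of a small assignment problem as a baseline.
# The cost matrix is hypothetical illustrative data.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
cost = rng.uniform(0, 10, size=(6, 6))            # hypothetical cell-to-slot costs

rows, cols = linear_sum_assignment(cost)
print("assignment:", list(zip(rows, cols)))
print("optimal cost:", cost[rows, cols].sum())
```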

Relevance:

20.00%

Publisher:

Abstract:

A novel universal approach to understanding self-deflagration in solids has been attempted using the basic thermodynamic equation of partial differentiation, in which the burning rate depends on the initial temperature and pressure of the system. Self-deflagrating solids are rare and are reported in only a few compounds such as ammonium perchlorate (AP), polystyrene peroxide and tetrazole. This approach has led us to understand the unique characteristics of AP, viz. the existence of a low-pressure deflagration limit (LPL, about 20 atm), hitherto not sufficiently understood. The analysis infers that the overall surface activation energy comprises two components, governed by the condensed-phase and gas-phase processes. The most attractive feature of the model is the identification of a new subcritical regime I' below the LPL where AP does not burn. The model is aptly supported by thermochemical computations and temperature-profile analyses of the combustion train. The thermodynamic model is further corroborated by the kinetic analysis of high-pressure (1-30 atm) DTA thermograms, which affords distinct empirical decomposition rate laws in regimes I' and I (20-60 atm). Using the Fourier-Kirchhoff one-dimensional heat transfer differential equation, the phase transition thickness and the melt-layer thickness have been computed, and these conform to the experimental data.
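
The phase-transition depth mentioned above can be estimated from the standard exponential condensed-phase temperature profile of a steadily deflagrating solid. The sketch below uses that textbook relation with assumed property values; it is not the paper's computation or its numbers.

```python
# Minimal sketch: depth below the burning surface at which a phase transition
# temperature is reached, from the steady condensed-phase profile
#   T(x) = T0 + (Ts - T0) * exp(-r * x / alpha).
# All property values are assumptions for illustration only.
import math

alpha = 1.5e-7               # thermal diffusivity of the solid, m^2/s (assumed)
r = 2.0e-3                   # linear burning rate, m/s (assumed)
T0, Ts = 300.0, 900.0        # initial and surface temperatures, K (assumed)
T_pt = 513.0                 # phase transition temperature, K (assumed)

x_pt = (alpha / r) * math.log((Ts - T0) / (T_pt - T0))
print(f"phase-transition depth: {x_pt * 1e6:.1f} micrometres")
```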

Relevance:

20.00%

Publisher:

Abstract:

Increasing network lifetime is important in wireless sensor/ad hoc networks. In this paper, we are concerned with algorithms to increase network lifetime and the amount of data delivered during the lifetime by deploying multiple mobile base stations in the sensor network field. Specifically, we allow multiple mobile base stations to be deployed along the periphery of the sensor network field and develop algorithms to dynamically choose the locations of these base stations so as to improve network lifetime. We propose energy-efficient, low-complexity algorithms to determine the locations of the base stations: i) the Top-K-max algorithm, ii) the Max-Min-RE algorithm, which maximizes the minimum residual energy, and iii) the MinDiff-RE algorithm, which minimizes the residual energy difference. We show that the proposed base station placement algorithms provide longer network lifetimes and a greater amount of data delivered during the network lifetime than both the single-base-station and the multiple-static-base-stations scenarios, and values close to those obtained by solving an integer linear program (ILP) to determine the locations of the mobile base stations. We also investigate the lifetime gain when an energy-aware routing protocol is employed along with multiple base stations.
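
The sketch below is one possible reading of the Max-Min-RE idea: candidate base-station positions on the field periphery are searched, and the combination that maximises the minimum residual energy after a round of nearest-base-station reporting is kept. The field, energy model and parameters are hypothetical, and this is not the paper's exact algorithm.

```python
# Minimal sketch (one reading of the Max-Min-RE idea, with toy data): pick the
# pair of periphery positions that maximises the minimum residual energy after
# one round of nearest-base-station reporting.
import itertools
import numpy as np

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 100, size=(60, 2))        # hypothetical sensor field
energy = np.full(60, 1.0)                          # residual energy per node

# Candidate base-station positions along the field periphery.
side = np.linspace(0, 100, 6)
periphery = [(x, y) for x in side for y in side if x in (0, 100) or y in (0, 100)]

def round_cost(bs_positions):
    bs = np.array(bs_positions)
    d = np.linalg.norm(sensors[:, None, :] - bs[None, :, :], axis=2).min(axis=1)
    return 1e-4 * d ** 2                           # toy energy cost per report

best = max(itertools.combinations(periphery, 2),
           key=lambda combo: np.min(energy - round_cost(combo)))
print("chosen base-station positions:", best)
```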