18 results for Scale-free network

in Aston University Research Archive


Relevance:

100.00%

Abstract:

In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload, where the network reaction was modelled as the rerouting of traffic away from failed or congested elements. Here we model network reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where a local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses caused by noise in the input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
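The onset-of-congestion mechanism can be illustrated with a toy model (not the authors' model): a single link of fixed capacity fed by random arrivals loses traffic whenever instantaneous demand exceeds capacity, and losses fluctuate strongly only when mean demand approaches capacity. All rates and parameters below are hypothetical.

```python
import random

def loss_series(rate, capacity, steps=10_000, seed=1):
    """Toy link: Poisson-like arrivals vs. fixed capacity; traffic is lost
    whenever instantaneous demand exceeds capacity."""
    rng = random.Random(seed)
    losses = []
    for _ in range(steps):
        # crude Poisson-like sample: Binomial(2*rate, 0.5) has mean = rate
        arrivals = sum(1 for _ in range(int(rate * 2)) if rng.random() < 0.5)
        losses.append(max(0, arrivals - capacity))
    return losses

def mean(xs):
    return sum(xs) / len(xs)

# Far below capacity: losses are essentially absent.
quiet = loss_series(rate=50, capacity=100)
# Near capacity: short-lived but strong loss fluctuations appear.
critical = loss_series(rate=98, capacity=100)

print(mean(quiet), mean(critical))
```

With these toy parameters the under-loaded link loses nothing, while the near-critical link shows bursty, intermittent losses driven purely by input noise.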

Relevance:

90.00%

Abstract:

This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source, single-sink (MSSS) WSN with energy constraints. The optimization problem for an MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). With linear MSSS and planar single-source, single-sink (SSSS) topologies, we successfully use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network. To deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds on the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
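The role of relay burden in network lifetime can be sketched for the linear topology. This is a deliberately simplified special case, not the paper's convex programme: routing is fixed, energy is spent only on transmission, and all parameters are hypothetical.

```python
def chain_lifetime(energies, gen_rate, e_bit):
    """Network lifetime of a linear-chain WSN where node i (0 = farthest
    from the sink) forwards its own data plus everything generated upstream.
    Lifetime = time until the first node exhausts its battery."""
    lifetimes = []
    for i, e in enumerate(energies):
        relayed = (i + 1) * gen_rate   # own stream + i upstream streams (bits/s)
        power = relayed * e_bit        # J/s spent transmitting
        lifetimes.append(e / power)
    return min(lifetimes)

# Equal batteries: the node next to the sink relays the most and dies first.
print(chain_lifetime([100.0, 100.0, 100.0], gen_rate=1.0, e_bit=0.1))
```

The min() over nodes is what makes simultaneous exhaustion (as in the paper's KKT analysis) the optimal operating point: any node dying early wastes energy left in the others.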

Relevance:

90.00%

Abstract:

In studies of complex heterogeneous networks, particularly of the Internet, significant attention was paid to analyzing network failures caused by hardware faults or overload, where the network reaction was modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along the communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.

Relevance:

40.00%

Abstract:

Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a back-propagation neural network approach to Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. Neural network requirements for computer memory and CPU time are far lower than those of conventional DEA methods, so neural networks can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.
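For intuition about what DEA computes: in the single-input, single-output special case the input-oriented CCR efficiency score reduces to each DMU's output/input ratio normalized by the best ratio, which makes a minimal sketch possible without a linear-programming solver. The general multi-input/multi-output model, which is what the neural network approximates at scale, requires solving one LP per DMU. The data below is made up.

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency for single-input, single-output DMUs.
    Score = (y_j / x_j) / max_k (y_k / x_k); a score of 1.0 means the DMU
    lies on the efficient frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical DMUs: input consumed vs. output produced.
print(ccr_efficiency([2, 4, 8], [2, 3, 4]))  # → [1.0, 0.75, 0.5]
```

DMU 0 achieves the best output-per-input ratio and is efficient; the others are scored against it.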

Relevance:

40.00%

Abstract:

Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). The need for huge computer resources in terms of memory and CPU time in DEA is inevitable for a large-scale data set, especially one with negative measures. In recent years, a wide range of studies has been conducted in the area of combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; therefore, it can be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.

Relevance:

40.00%

Abstract:

GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study about the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events, and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user have a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events, and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration.
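The low-reciprocity finding rests on a simple measure: the fraction of directed social ties (follows) that are mutual. A minimal sketch with made-up users, not GitHub data:

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse (v, u) also exists."""
    edge_set = set(edges)
    mutual = sum(1 for u, v in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

# 'alice' and 'bob' follow each other; 'carol' follows 'alice' unreciprocated.
follows = [("alice", "bob"), ("bob", "alice"), ("carol", "alice")]
print(reciprocity(follows))  # 2 mutual edges out of 3
```

On a follower graph like GitHub's, a value far below that of conversational social networks indicates that following mostly expresses interest rather than acquaintance.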

Relevance:

30.00%

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, this method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance, which gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification, based on the memorisation of random features, can be a powerful alternative to slower cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases when they do so are relatively easy to determine with the analytical tools we provide.
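A minimal binary n-tuple classifier along the lines reviewed above can be sketched as follows. This is an illustrative toy, not the thesis implementation; the class names and all parameters are arbitrary.

```python
import random

class RAMnet:
    """Minimal binary n-tuple (RAM) classifier: random groups of n input
    bits form an address into a lookup table per class; training memorises
    the addresses seen, and scoring counts how many tuple addresses of a
    test pattern were seen during training."""

    def __init__(self, input_bits, n, n_tuples, seed=0):
        rng = random.Random(seed)
        self.tuples = [tuple(rng.sample(range(input_bits), n))
                       for _ in range(n_tuples)]
        self.memory = {}  # (class_label, tuple_index) -> set of addresses

    def _address(self, x, t):
        return tuple(x[i] for i in t)

    def train(self, x, label):
        for k, t in enumerate(self.tuples):
            self.memory.setdefault((label, k), set()).add(self._address(x, t))

    def score(self, x, label):
        return sum(self._address(x, t) in self.memory.get((label, k), set())
                   for k, t in enumerate(self.tuples))

    def classify(self, x, labels):
        return max(labels, key=lambda c: self.score(x, c))

net = RAMnet(input_bits=8, n=3, n_tuples=10)
net.train([0] * 8, "dark")
net.train([1] * 8, "bright")
print(net.classify([0] * 8, ["dark", "bright"]))  # → dark
```

Training is a single memorisation pass with no cost function, which is exactly why the method is fast and why, as the thesis argues, it is potentially sub-optimal.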

Relevance:

30.00%

Abstract:

This thesis introduces and develops a novel real-time predictive maintenance system to estimate the machine system parameters using the motion current signature. Recently, motion current signature analysis has been addressed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need for the implementation and maintenance of expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, due to the lack of knowledge regarding the performance of higher-order and nonlinear factors such as backlash and compliance. The failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. Results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Outcomes of the experiment show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
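Surrogate data testing of the kind described can be sketched with phase-randomized (FFT) surrogates: these preserve a signal's amplitude spectrum, and hence its linear correlations, while destroying any nonlinear structure, so a nonlinear statistic computed on the original can be compared against its distribution over many surrogates. A minimal sketch with a synthetic stand-in signal (not motor data):

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """FFT surrogate: keep Fourier amplitudes, scramble phases (respecting
    the symmetry constraints a real-valued signal requires)."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0           # keep the mean (DC component) real
    if n % 2 == 0:
        phases[-1] = 0.0      # Nyquist bin must stay real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def time_asymmetry(x):
    """A simple nonlinear statistic: skewness of one-step increments;
    linear Gaussian processes have an expected value of zero."""
    d = np.diff(x)
    return float(np.mean(d ** 3))

rng = np.random.default_rng(0)
signal = rng.standard_normal(256).cumsum()   # stand-in for a signature
surrogate = phase_randomized_surrogate(signal, rng)

# The surrogate preserves the amplitude spectrum exactly.
print(np.allclose(np.abs(np.fft.rfft(surrogate)),
                  np.abs(np.fft.rfft(signal))))  # → True
```

In a real test one would generate many surrogates and reject the linear-process hypothesis when the original's statistic falls outside the surrogate distribution.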

Relevance:

30.00%

Abstract:

The EU intends to increase the fraction of fuels from biogenic energy sources from 2% in 2005 to 8% in 2020. This means a minimum of 30 million TOE/a of fuels from biomass. This makes technical-scale generation of syngas from high-grade biomass, e.g. straw, hay, bark, or paper/cardboard waste, and the production of synthetic fuels by Fischer-Tropsch (FT) synthesis highly attractive. The BTL (Biomass to Liquids) concept of the Karlsruhe Research Center, labeled bioliq, focuses on this challenge by locally concentrating the biomass energy content by fast pyrolysis in a coke/oil slurry, followed by slurry conversion to syngas in a central entrained-flow gasifier at 1200 °C and pressures above 4 MPa. FT synthesis generates intermediate products for synthetic fuels. To prevent the sensitive catalysts from being poisoned, the syngas must be free of tar and particulates, and trace concentrations of H2S, COS, CS2, HCl, NH3, and HCN must be on the order of a few ppb. Moreover, maximum conversion efficiency will be achieved by cleaning the gas at conditions above those of the synthesis (T > 350 °C, P > 4 MPa). The concept of an innovative dry high-temperature, high-pressure (HTHP) syngas cleaning process is presented. Based on HT particle filtration and suitable sorption and catalysis processes for the relevant contaminants, an overall concept is derived which leads to the syngas quality required for FT synthesis in only two combined stages. Results of filtration experiments on a pilot scale are presented. The influence of temperature on the separation and conversion, respectively, of particulates and gaseous contaminants is discussed on the basis of experimental results obtained on a laboratory and pilot scale. Extensive studies of this concept are performed in a scientific network comprising the Karlsruhe Research Center and five universities; funding is provided by the Helmholtz Association of National Research Centers in Germany.

Relevance:

30.00%

Abstract:

In order to study the structure and function of a protein, it is generally required that the protein in question is purified away from all others. For soluble proteins, this process is greatly aided by the lack of any restriction on the free and independent diffusion of individual protein particles in three dimensions. This is not the case for membrane proteins, as the membrane itself forms a continuum that joins the proteins within the membrane with one another. It is therefore essential that the membrane is disrupted in order to allow separation and hence purification of membrane proteins. In the present review, we examine recent advances in the methods employed to separate membrane proteins before purification. These approaches move away from solubilization methods based on the use of small surfactants, which have been shown to suffer from significant practical problems. Instead, the present review focuses on methods that stem from the field of nanotechnology and use a range of reagents that fragment the membrane into nanometre-scale particles containing the protein complete with the local membrane environment. In particular, we examine a method employing the amphipathic polymer poly(styrene-co-maleic acid), which is able to reversibly encapsulate the membrane protein in a 10 nm disc-like structure ideally suited to purification and further biochemical study.

Relevance:

30.00%

Abstract:

Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time consuming, thus various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. A system has been produced which allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD. The system has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited due to the paucity of structures and the computational expense in running large scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.

Relevance:

30.00%

Abstract:

Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Rating Scale (UPDRS). By analyzing these data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data were validated against the gold-standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. By using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics.
Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled by using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. This fluctuation exceeds the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding about the disease progression, currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
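The two-sample Kolmogorov-Smirnov comparison used in the validation measures the largest gap between two empirical distribution functions. A minimal generic implementation (in practice one would use a library routine such as scipy.stats.ks_2samp; the sample values below are made up):

```python
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the empirical distribution functions of the two samples."""
    a, b = sorted(a), sorted(b)

    def ecdf(sample, v):
        # fraction of the sample <= v
        return bisect_right(sample, v) / len(sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in a + b)

print(ks_statistic([1, 2, 3], [1, 2, 3]))     # identical samples -> 0.0
print(ks_statistic([1, 2, 3], [10, 20, 30]))  # disjoint samples  -> 1.0
```

A statistic near 0 means the two distributions (here, self-reported vs. clinical UPDRS scores) are broadly consistent; a statistic near 1 means they barely overlap.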

Relevance:

30.00%

Abstract:

This paper surveys the literature on scale and scope economies in the water and sewerage industry. The magnitude of scale and scope economies determines the cost-efficient configuration of any industry. In the case of a regulated sector, reliable estimates of these economies are relevant to inform reform proposals that promote vertical (un)bundling and mergers. The empirical evidence allows some general conclusions. First, there is considerable evidence for the existence of vertical scope economies between upstream water production and distribution. Second, there is only mixed evidence on the existence of (dis)economies of scope between water and sewerage activities. Third, economies of scale exist up to a certain output level, and diseconomies of scale arise if the company increases its size beyond this level. However, the optimal scale of utilities also appears to vary considerably between countries. Finally, we briefly consider the implications of our findings for water pricing and point to several directions for necessary future empirical research on the measurement of these economies and on explaining their cross-country variation.

Relevance:

30.00%

Abstract:

The ALBA 2002 Call for Papers asks the question ‘How do organizational learning and knowledge management contribute to organizational innovation and change?’. Intuitively, we would argue, the answer should be relatively straightforward as links between learning and change, and knowledge management and innovation, have long been commonly assumed to exist. On the basis of this assumption, theories of learning tend to focus ‘within organizations’, and assume a transfer of learning from individual to organization which in turn leads to change. However, empirically, we find these links are more difficult to articulate. Organizations exist in complex embedded economic, political, social and institutional systems, hence organizational change (or innovation) may be influenced by learning in this wider context. Based on our research in this wider interorganizational setting, we first make the case for the notion of network learning that we then explore to develop our appreciation of change in interorganizational networks, and how it may be facilitated. The paper begins with a brief review of literature on learning in the organizational and interorganizational context, which locates our stance on organizational learning versus the learning organization, and on social, distributed versus technical, centred views of organizational learning and knowledge. Developing from the view that organizational learning is “a normal, if problematic, process in every organization” (Easterby-Smith, 1997: 1109), we introduce the notion of network learning: learning by a group of organizations as a group. We argue this is also a normal, if problematic, process in organizational relationships (as distinct from interorganizational learning), which has particular implications for network change. Part two of the paper develops our analysis, drawing on empirical data from two studies of learning.
The first study addresses the issue of learning to collaborate between industrial customers and suppliers, leading to the case for network learning. The second, larger scale study goes on to develop this theme, examining learning around several major change issues in a healthcare service provider network. The learning processes and outcomes around the introduction of a particularly controversial and expensive technology are described, providing a rich and contrasting case with the first study. In part three, we then discuss the implications of this work for change, and for facilitating change. Conclusions from the first study identify potential interventions designed to facilitate individual and organizational learning within the customer organization to develop individual and organizational ‘capacity to collaborate’. Translated to the network example, we observe that network change entails learning at all levels – network, organization, group and individual. However, presenting findings in terms of interventions is less meaningful in an interorganizational network setting given: the differences in authority structures; the less formalised nature of the network setting; and the importance of evaluating performance at the network rather than organizational level. Academics challenge both the idea of managing change and of managing networks. Nevertheless practitioners are faced with the issue of understanding and influencing change in the network setting. Thus we conclude that a network learning perspective is an important development in our understanding of organizational learning, capability and change, locating this in the wider context in which organizations are embedded. This in turn helps to develop our appreciation of facilitating change in interorganizational networks, both in terms of change issues (such as introducing a new technology), and change orientation and capability.

Relevance:

30.00%

Abstract:

Large-scale massively parallel molecular dynamics (MD) simulations of the human class I major histocompatibility complex (MHC) protein HLA-A*0201 bound to the decameric tumor-specific antigenic peptide GVYDGREHTV were performed using a scalable MD code on high-performance computing platforms. Such computational capabilities put us in reach of simulations of various scales and complexities. The supercomputing resources available for this study allow us to compare directly differences in the behavior of very large molecular models; in this case, the entire extracellular portion of the peptide–MHC complex vs. the isolated peptide-binding domain. Comparison of the results from the partial and the whole-system simulations indicates that the peptide is less tightly bound in the partial system than in the whole system. From a detailed study of conformations, solvent-accessible surface area, the nature of the water network structure, and the binding energies, we conclude that, when considering the conformation of the α1–α2 domain, the α3 and β2m domains cannot be neglected. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1803–1813, 2004