6 results for scale free
in Aston University Research Archive
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload. There, the network reaction was modelled as the rerouting of traffic away from failed or congested elements. Here we model the network reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where a local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses, caused by noise in input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network reaction was modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where a local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations on a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.
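Both abstracts above describe the same qualitative mechanism: routes whose mean load sits just below capacity exhibit strong, intermittent loss fluctuations driven by traffic noise and a heavy-tailed load distribution. The sketch below is a minimal toy illustration of that regime, not the papers' actual model; the capacity, noise level, and power-law exponent are assumptions chosen only for demonstration.

```python
import numpy as np

# Toy model (illustrative only): each route carries a mean load drawn
# from a heavy-tailed (power-law) distribution, plus Gaussian noise.
# Losses occur whenever instantaneous traffic exceeds the route capacity.
rng = np.random.default_rng(0)

n_routes, n_steps = 2_000, 500
mean_load = rng.pareto(1.5, n_routes) + 1.0  # Pareto tail, exponent 1.5 (assumed)
capacity = 5.0                               # identical capacity per route (assumed)
sigma = 0.5                                  # std of input-traffic noise (assumed)

# Instantaneous traffic and the resulting losses on every route
traffic = mean_load[:, None] + sigma * rng.normal(size=(n_routes, n_steps))
losses = np.clip(traffic - capacity, 0.0, None)

# Routes just below capacity: the onset-of-congestion regime where the
# abstracts locate the strong, short-lived loss fluctuations
near_onset = (mean_load > 0.9 * capacity) & (mean_load < capacity)
mean_loss = losses[near_onset].mean()
print("routes near onset:", int(near_onset.sum()))
print("mean loss near onset:", mean_loss)
print("relative fluctuation (std/mean):", losses[near_onset].std() / mean_loss)
```

Running the sketch gives a relative fluctuation well above one near the onset, i.e. losses are dominated by rare bursts rather than a steady mean, which is the kind of signal a congestion-control mechanism could overreact to.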
Abstract:
In order to study the structure and function of a protein, it is generally required that the protein in question be purified away from all others. For soluble proteins, this process is greatly aided by the lack of any restriction on the free and independent diffusion of individual protein particles in three dimensions. This is not the case for membrane proteins, as the membrane itself forms a continuum that joins the proteins embedded within it to one another. It is therefore essential that the membrane be disrupted in order to allow separation, and hence purification, of membrane proteins. In the present review, we examine recent advances in the methods employed to separate membrane proteins from the membrane before purification. These approaches move away from solubilization methods based on small surfactants, which have been shown to suffer from significant practical problems. Instead, the present review focuses on methods that stem from the field of nanotechnology and use a range of reagents to fragment the membrane into nanometre-scale particles containing the protein complete with its local membrane environment. In particular, we examine a method employing the amphipathic polymer poly(styrene-co-maleic acid), which is able to reversibly encapsulate the membrane protein in a 10 nm disc-like structure ideally suited to purification and further biochemical study.
Abstract:
Epitopes recognized by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide- or epitope-based vaccines. Antigenic T cell epitopes are presented to T cell receptors by major histocompatibility complex (MHC) molecules. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming; various computational methods have therefore been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. The system allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD, and has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
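As a rough illustration of the downstream step the abstract describes, the sketch below shows one hypothetical way to launch NAMD on a configuration file obtained from the server. The directory and file names are placeholders (MHCsim's actual output layout is not stated here), and a multicore namd2 binary on the PATH is assumed.

```python
import subprocess
from pathlib import Path

# Placeholder paths: MHCsim's real output naming is not specified
# in the abstract, so these are assumptions for illustration.
workdir = Path("mhcsim_output")
config = workdir / "complex.namd"  # NAMD configuration from the server

# NAMD is invoked as `namd2 <config>`; `+p4` requests four cores
# (multicore build assumed). Run from the directory holding the
# structure/coordinate files the configuration refers to.
result = subprocess.run(
    ["namd2", "+p4", config.name],
    cwd=workdir,
    capture_output=True,
    text=True,
)
print(result.stdout[-2000:])  # tail of the NAMD log
```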
Abstract:
Human mesenchymal stem cell (hMSC) therapies are currently progressing through clinical development, driving the need for consistent and cost-effective manufacturing processes to meet the lot sizes required for commercial production. The use of animal-derived serum is common in hMSC culture but has many drawbacks, such as limited supply, lot-to-lot variability, increased regulatory burden, the possibility of pathogen transmission, and reduced scope for process optimization. These constraints may impact the development of a consistent large-scale process and therefore must be addressed. The aim of this work was therefore to run a pilot study in the systematic development of a serum-free hMSC manufacturing process. Human bone-marrow-derived hMSCs were expanded on fibronectin-coated, non-porous plastic microcarriers in 100 mL stirred spinner flasks at a density of 3×10⁵ cells mL⁻¹ in serum-free medium. The hMSCs were successfully harvested by our recently developed technique, using animal-free enzymatic cell detachment accompanied by agitation, followed by filtration to separate the hMSCs from the microcarriers, with a post-harvest viability of 99.63±0.03%. The hMSCs were found to be in accordance with the ISCT characterization criteria and maintained hMSC outgrowth and colony-forming potential. The hMSCs were held in suspension post-harvest to simulate a typical pooling time for a scaled expansion process and were cryopreserved in a serum-free vehicle solution using a controlled-rate freezing process. Post-thaw viability was 75.8±1.4%, with a similar 3 h attachment efficiency also observed, indicating successful hMSC recovery and attachment. This approach therefore demonstrates that, once an hMSC line and an appropriate medium have been selected for production, multiple unit operations can be integrated to generate an animal-component-free hMSC production process from expansion through to cryopreservation.
Abstract:
In recent years, we have witnessed the mushrooming of pro-democracy and protest movements not only in the Arab world, but also within Europe and the Americas. Such movements have ranged from popular upheavals, as in Tunisia and Egypt, to the organization of large-scale demonstrations against unpopular policies, as in Spain, Greece and Poland. What connects these different events are not only their democratic aspirations, but also their innovative forms of communication and organization through online means, which are sometimes considered to be outside of the State's control. At the same time, however, it has become increasingly apparent that countries are attempting to increase their understanding of, and control over, their citizens' actions in the digital sphere. This involves striving to develop surveillance instruments, control mechanisms and processes engineered to dominate the digital public sphere, which necessitates the assistance and support of private actors such as Internet intermediaries. Examples include the growing use of Internet surveillance technology with which online data traffic is analysed, and the extensive monitoring of social networks. Despite increased media attention, academic debate on the ambivalence of these technologies, mechanisms and techniques remains relatively limited, as is discussion of the involvement of corporate actors. The purpose of this edited volume is to reflect on how Internet-related technologies, mechanisms and techniques may be used as a means to enable expression, but also to restrict speech, manipulate public debate and govern global populaces.