59 results for NETWORK ANALYSIS
Abstract:
The Team Formation Problem (TFP) has become a well-known problem in the OR literature over the last few years. In this problem, a group of individuals must be selected so that, collectively, they match a required set of skills while maximising one or several positive social attributes. Specifically, the aim of the current research is two-fold. First, two new dimensions are added to the TFP by considering multiple projects and fractions of people's dedication. This new problem is named the Multiple Team Formation Problem (MTFP). Second, an optimization model consisting of a quadratic objective function, linear constraints and integer variables is proposed for the problem. The optimization model is solved by three algorithms: a Constraint Programming approach provided by a commercial solver, a Local Search heuristic and a Variable Neighbourhood Search metaheuristic. These three algorithms constitute the first attempt to solve the MTFP, with the Variable Neighbourhood Search metaheuristic proving the most efficient in almost all cases. Applications of this problem commonly appear in real-life situations, particularly with the current and ongoing development of social network analysis. Therefore, this work opens multiple paths for future research.
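To make the model class concrete, the following is a minimal sketch of a possible MTFP formulation with a quadratic objective, linear constraints and integer variables; the abstract does not give the actual model, so every symbol here (x, a, s, r, F) is an assumption for illustration only.

```latex
% Sketch only -- not the paper's model. x_{ip} counts the dedication
% fractions (each 1/F of full time) that person i assigns to project p;
% a_{ij} is the affinity between persons i and j; s_{ik}=1 if person i
% has skill k; r_{pk} is the number of fractions of skill k needed by
% project p.
\max \sum_{p}\sum_{i<j} a_{ij}\, x_{ip}\, x_{jp}
\qquad \text{s.t.} \qquad
\sum_{p} x_{ip} \le F \;\; \forall i, \qquad
\sum_{i} s_{ik}\, x_{ip} \ge r_{pk} \;\; \forall p, k, \qquad
x_{ip} \in \{0, 1, \dots, F\}.
```

The quadratic term rewards assigning mutually compatible people to the same project, while the linear constraints cap each person's total dedication and enforce per-project skill coverage.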
Abstract:
This paper deals with the selection of centres for radial basis function (RBF) networks. A novel mean-tracking clustering algorithm is described as a way in which centres can be chosen based on a batch of collected data. A direct comparison is made between the mean-tracking algorithm and k-means clustering, and it is shown that mean-tracking clustering is significantly better at producing an RBF network that performs accurate function modelling.
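The abstract does not spell out the mean-tracking algorithm itself, but the surrounding pipeline is standard; the sketch below shows the k-means baseline it is compared against: cluster a batch of data to obtain centres, then fit the RBF output weights by least squares. The function names and the Gaussian width heuristic are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centres=20, seed=0):
    """Baseline RBF fit: k-means centre selection + least-squares weights."""
    centres = KMeans(n_clusters=n_centres, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    # Width heuristic (an assumption): mean inter-centre distance.
    d = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
    sigma = d[d > 0].mean()
    # Design matrix of Gaussian activations, one column per centre.
    Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, sigma, w
```

A mean-tracking variant would replace only the centre-selection step; everything downstream is unchanged, which is what makes a head-to-head comparison of the two clustering methods clean.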
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference on under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_{i,j}, the probability that a visit made by the i-th pollinator species takes place on the j-th plant species; (2) q_{i,j}, the probability that a visit received by the j-th plant species is made by the i-th pollinator species. The method applies the Dirichlet distribution to estimate these two probabilities from a given empirical data set. The estimated mean values for p_{i,j} and q_{i,j} reflect the relative differences between the recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_{i,j} and q_{i,j} decreases with higher numbers of recorded visits.
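As a rough sketch of how Dirichlet-based estimates of p_{i,j} and q_{i,j} might be computed from a visitation count matrix (the uniform prior strength alpha and all names are assumptions, not the paper's exact procedure):

```python
import numpy as np

def interaction_distributions(counts, alpha=1.0):
    """Dirichlet posterior means for a pollinator-by-plant count matrix.

    counts[i, j] = recorded visits by pollinator species i to plant
    species j. Returns (p, q): p[i, j] estimates the probability that a
    visit made by pollinator i lands on plant j; q[i, j] estimates the
    probability that a visit received by plant j was made by pollinator i.
    """
    counts = np.asarray(counts, dtype=float)
    n_poll, n_plant = counts.shape
    # Row-wise Dirichlet(alpha) posterior: each pollinator's visits over plants.
    p = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * n_plant)
    # Column-wise posterior: each plant's received visits over pollinators.
    q = (counts + alpha) / (counts.sum(axis=0, keepdims=True) + alpha * n_poll)
    return p, q
```

Because the posterior concentration grows with the total number of recorded visits, the spread of each estimate shrinks as sampling improves, matching the abstract's closing remark about uncertainty.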
Abstract:
Deep brain stimulator devices are becoming widely used for therapeutic benefit in movement disorders such as Parkinson's disease. Prolonging the battery life span of such devices could dramatically reduce the risks and cumulative costs associated with surgical replacement. This paper demonstrates how an artificial neural network can be trained, using pre-processing frequency analysis of deep brain electrode recordings, to detect the onset of tremor in Parkinsonian patients. Implementing this solution in an 'intelligent' neurostimulator device would remove the need for the continuous stimulation currently used and open up the possibility of demand-driven stimulation. Such a methodology could potentially decrease the power consumption of a deep brain pulse generator.
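The abstract names the ingredients (frequency-domain pre-processing of electrode recordings feeding an artificial neural network) without the specifics, so the following is a hypothetical sketch: Welch band powers around the Parkinsonian tremor band feeding a small MLP classifier. The sampling rate, frequency bands and network size are all assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 1000  # sampling rate in Hz (assumed)

def band_powers(segment, fs=FS, bands=((3, 7), (8, 12), (13, 30), (30, 90))):
    """Spectral band powers of one electrode segment (tremor band ~3-7 Hz)."""
    f, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    return np.array([psd[(f >= lo) & (f < hi)].sum() for lo, hi in bands])

def train_detector(X_raw, y):
    """X_raw: (n_segments, n_samples) recordings; y: 1 = tremor onset."""
    X = np.array([band_powers(seg) for seg in X_raw])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return clf.fit(X, y)
```

In a demand-driven device, a detector of this kind would gate the pulse generator, enabling stimulation only when tremor onset is predicted.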
Abstract:
Chemical and meteorological parameters measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the African Monsoon Multidisciplinary Analysis (AMMA) campaign are presented to show the impact of NOx emissions from recently wetted soils in West Africa. NO emissions from soils have previously been observed in many geographical areas with different types of soil/vegetation cover during small-scale studies, and have been inferred at large scales from satellite measurements of NOx. This study is the first dedicated to showing the emissions of NOx at an intermediate scale between local surface sites and continental satellite measurements. The measurements reveal pronounced mesoscale variations in NOx concentrations closely linked to spatial patterns of antecedent rainfall. Fluxes required to maintain the NOx concentrations observed by the BAe 146 in a number of case studies, and for a range of assumed OH concentrations (1×10^6 to 1×10^7 molecules cm^-3), are calculated to be in the range 8.4 to 36.1 ng N m^-2 s^-1. These values are comparable to the range of fluxes from 0.5 to 28 ng N m^-2 s^-1 reported from small-scale field studies in a variety of non-nutrient-rich tropical and sub-tropical locations in the review of Davidson and Kingerlee (1997). The fluxes calculated in the present study have been scaled up to cover the area of the Sahel bounded by 10° to 20° N and 10° E to 20° W, giving an estimated emission of 0.03 to 0.30 Tg N from this area for July and August 2006. The observed chemical data also suggest that the NOx emitted from soils is taking part in ozone formation, as ozone concentrations exhibit fine-scale structure similar to that of the NOx, with enhancements over the wet soils. Such variability cannot be explained on the basis of transport from other areas. Delon et al. (2008) is a companion paper which models the impact of soil NOx emissions on the NOx and ozone concentrations over West Africa during AMMA; it employs an artificial neural network to define the emissions of NOx from soils, integrated into a coupled chemistry-dynamics model, and compares the results to the observed data presented in this paper. Here we compare the fluxes deduced from the observed data with the model-derived values of Delon et al. (2008).
Abstract:
Nitrogen oxide biogenic emissions from soils are driven by soil and environmental parameters. The relationship between these parameters and NO fluxes is highly non-linear. A new algorithm, based on a neural network calculation, is used to reproduce the NO biogenic emissions linked to precipitation in the Sahel on 6 August 2006 during the AMMA campaign. This algorithm has been coupled into the surface scheme of a coupled chemistry-dynamics model (MesoNH Chemistry) to estimate the impact of the NO emissions on NOx and O3 formation in the lower troposphere for this particular episode. Four different simulations on the same domain and for the same period are compared: one with anthropogenic emissions only, one with soil NO emissions from a static inventory at low temporal and spatial resolution, one with NO emissions from the neural network, and one with NO from the neural network plus lightning NOx. The influence of NOx from lightning is limited to the upper troposphere. The NO emissions from soils calculated with the neural network respond to changes in soil moisture, giving enhanced emissions over the wetted soil, as observed by aircraft measurements after the passage of a convective system. The subsequent enhancement of NOx and ozone is limited to the lowest layers of the atmosphere in the model, whereas measurements show higher concentrations above 1000 m. The neural network algorithm, applied in the Sahel region for one particular day of the wet season, allows an immediate response of fluxes to environmental parameters, unlike static emission inventories. Stewart et al. (2008) is a companion paper which looks at NOx and ozone concentrations in the boundary layer as measured on a research aircraft, examines how they vary with respect to soil moisture, as indicated by surface temperature anomalies, and deduces NOx fluxes. In the current paper the model-derived results are compared to the observations and calculated fluxes presented by Stewart et al. (2008).
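A minimal sketch of a neural-network flux parameterisation of this kind, regressing soil NO flux on environmental drivers (the choice of inputs, units and network size are assumptions, not the paper's configuration):

```python
from sklearn.neural_network import MLPRegressor

def train_no_flux_model(env_params, observed_fluxes):
    """Regress soil NO flux (ng N m^-2 s^-1) on environmental drivers.

    env_params: rows such as (soil moisture, soil temperature, sand
    fraction, wind speed) -- assumed inputs, not the paper's list.
    """
    model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                         random_state=0)
    return model.fit(env_params, observed_fluxes)

# Unlike a static inventory lookup, the fitted model responds at once when
# soil moisture jumps after a convective rain event:
#   model.predict([[moisture_after_rain, temp, sand, wind]])
```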
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measurements to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated with its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, little comparison exists between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a one-dimensional space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
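For the two-landmark case, the Hilbert index can be computed with the classic bit-twiddling mapping; the sketch below quantizes a pair of landmark latencies onto a 2^k x 2^k grid and returns the curve index as the peer's scalar ID. The quantization range and function names are assumptions, not the paper's implementation.

```python
def xy2d(n, x, y):
    """Hilbert-curve index of cell (x, y) on an n-by-n grid (n a power of 2).

    Transcribed from the classic bit-twiddling formulation of the mapping.
    """
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the sub-curve is canonically oriented.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def peer_id(lat_ms, order=8, max_ms=500.0):
    """Scalar peer ID from two landmark latencies (assumed pipeline)."""
    n = 2 ** order
    scale = (n - 1) / max_ms
    x, y = (min(int(v * scale), n - 1) for v in lat_ms)
    return xy2d(n, x, y)
```

Nearby latency vectors then receive nearby scalar IDs most of the time, which is the locality-preserving property the paper evaluates against Sammon's mapping and PCA.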
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated-evolution network we derive displays the familiar power-law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure of, and evolutionary changes to, protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. Given these rules of assembly, power-law-scaled networks naturally emerge from simple principles of selection, yielding protein-interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
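A toy simulation of the 'variable rate of attachment' idea (illustrative only; the propensity distribution and all parameters are assumptions, not the paper's fitted model): each endpoint of every new edge is drawn in proportion to a node's intrinsic propensity, never its current degree, yet the degree distribution still develops a heavy tail.

```python
import numpy as np

rng = np.random.default_rng(0)

def variable_attachment_network(n_nodes=2000, n_edges=6000):
    """Degrees under propensity-proportional (not degree-proportional) wiring."""
    lam = rng.lognormal(mean=0.0, sigma=1.5, size=n_nodes)  # propensities
    prob = lam / lam.sum()
    degree = np.zeros(n_nodes, dtype=int)
    for _ in range(n_edges):
        i, j = rng.choice(n_nodes, size=2, replace=False, p=prob)
        degree[i] += 1
        degree[j] += 1
    return degree

deg = variable_attachment_network()
# Heterogeneous propensities alone produce a heavy right tail:
print(np.percentile(deg, [50, 90, 99]), deg.max())
```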
Abstract:
A novel bis(glycinato)copper(II) paradodecatungstate, Na8[{Cu(gly)2}2{H2(H2W12O42)}]·24H2O (1), has been synthesized under hydrothermal conditions. The crystal structure of 1 reveals an infinite one-dimensional chain along the [100] direction, built from paradodecatungstate (H2W12O42)^10- clusters joined through [Cu(gly)2] moieties. Parallel chains are interlinked by NaO6 octahedra to generate a two-dimensional network.
Abstract:
The hydrothermal reactions of Ni(NO3)2·6H2O, disodium fumarate (fum) and 1,2-bis(4-pyridyl)ethane (bpe) / 1,3-bis(4-pyridyl)propane (bpp) in aqueous-methanol medium yield one 3-D and one 2-D metal-organic hybrid material, [Ni(fum)(bpe)] (1) and [Ni(fum)(bpp)(H2O)] (2), respectively. Complex 1 possesses an unprecedented structure, the first example of an "unusual mode" of a five-fold distorted interpenetrated network with metal-ligand linkages, in which the four six-membered windows in each adamantane-type cage are different. The structural characterization of complex 2 evidences a buckled sheet in which nickel ions are in a distorted octahedral geometry, with two carboxylate groups, one acting as a bis-chelate and the other as a bis-monodentate ligand; the metal ion completes its coordination sphere through one water molecule and two bpp nitrogens in cis positions. Variable-temperature magnetic measurements of complexes 1 and 2 reveal the existence of very weak antiferromagnetic intramolecular interactions and/or the presence of single-ion zero-field splitting (D) of isolated Ni(II) ions in both compounds. Experimentally, the J parameters of the two compounds are comparable and very small. Considering zero-field splitting of Ni(II), the calculated D values are in agreement with values reported in the literature for Ni(II) ions. Complex 3, [{Co(phen)}2(fum)2] (phen = 1,10-phenanthroline), is obtained by diffusing a methanolic solution of 1,10-phenanthroline onto an aqueous layer of disodium fumarate and Co(NO3)2·6H2O. It consists of dimeric Co(II)(phen) units doubly bridged by carboxylate groups in a distorted syn-syn fashion. The fumarate anions act as bis-chelates to form corrugated sheets. The 2-D layer has a (4,4) topology, with the nodes represented by the centres of the dimers. The magnetic data were fitted ignoring the very weak coupling through the fumarate pathway and using a dimer model.
Abstract:
The major technical objective of the RC-NSPES is to provide a framework for the concurrent operation of reactive and proactive security functions, delivering efficient and optimised intrusion-detection schemes as well as enhanced, highly correlated rule sets for more effective alert management and root-cause analysis. The design and implementation of the RC-NSPES solution includes a number of innovative features, both in the real-time programmable embedded hardware (FPGA) deployment and in the integrated management station. These have been devised to deliver enhanced detection of attacks and contextualised alerts against threats arising from both network-layer and application-layer protocols. The resulting architecture represents an efficient and effective framework for the future deployment of network security systems.
Abstract:
The deployment of Quality of Service (QoS) techniques involves careful analysis of several areas, including business requirements, corporate strategy and the technical implementation process, which can lead to conflict or contradiction between the goals of the various user groups involved in policy definition. In addition, long-term change management presents a challenge, as these implementations typically require a high skill set and experience level, exposing organisations to effects such as "hyperthymestria" [1] and "The Seven Sins of Memory" defined by Schacter, discussed further within this paper. It is proposed that, given the information embedded within the packets of IP traffic, an opportunity exists to augment traffic management with a machine-learning, agent-based mechanism. This paper describes the process by which current policies are defined and the research required to support the development of an application that enables adaptive, intelligent Quality of Service controls to augment or replace the policy-based mechanisms currently in use.