828 results for network models
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Our understanding of how anthropogenic habitat change shapes species interactions is in its infancy. This is in large part because analytical approaches such as network theory have only recently been applied to characterize complex community dynamics. Network models are a powerful tool for quantifying how ecological interactions are affected by habitat modification because they provide metrics that quantify community structure and function. Here, we examine how large-scale habitat alteration has affected ecological interactions among mixed-species flocking birds in Amazonian rainforest. These flocks provide a model system for investigating how habitat heterogeneity influences non-trophic interactions and the subsequent social structure of forest-dependent mixed-species bird flocks. We analyse 21 flock interaction networks throughout a mosaic of primary forest, fragments of varying sizes and secondary forest (SF) at the Biological Dynamics of Forest Fragments Project in central Amazonian Brazil. Habitat type had a strong effect on network structure at the levels of both species and flock. Frequency of associations among species, as summarized by weighted degree, declined with increasing levels of forest fragmentation and SF. At the flock level, clustering coefficients and overall attendance positively correlated with mean vegetation height, indicating a strong effect of habitat structure on flock cohesion and stability. Prior research has shown that trophic interactions are often resilient to large-scale changes in habitat structure because species are ecologically redundant. By contrast, our results suggest that behavioural interactions and the structure of non-trophic networks are highly sensitive to environmental change. Thus, a more nuanced, system-by-system approach may be needed when thinking about the resiliency of ecological networks.
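The species- and flock-level metrics named above (weighted degree, clustering coefficient) can be computed directly from a weighted association matrix. A minimal sketch in Python; the matrix below is toy data for three hypothetical species, not values from the study:

```python
def weighted_degree(W, i):
    """Weighted degree (strength): sum of association weights at node i."""
    return sum(W[i][j] for j in range(len(W)) if j != i)

def clustering(W, i):
    """Unweighted clustering coefficient of node i: the fraction of pairs of
    its neighbours that are themselves connected (any weight > 0 counts)."""
    nbrs = [j for j in range(len(W)) if j != i and W[i][j] > 0]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if W[nbrs[a]][nbrs[b]] > 0)
    return 2.0 * links / (k * (k - 1))

# toy association matrix: W[i][j] = number of times species i and j flocked together
W = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]
```

In a fragmentation analysis, these per-node values would then be averaged per flock network and regressed against habitat variables such as vegetation height.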
Abstract:
We model complex non-linear interactions between banks and assets by two time-dependent Erdos-Renyi network models in which each node, representing a bank, can invest either in a single asset (model I) or in multiple assets (model II). We use a dynamical network approach to evaluate the collective financial failure (systemic risk), quantified by the fraction of active nodes. The systemic risk can be calculated over any future time period, divided into sub-periods, where within each sub-period banks may fail contagiously due to links to either (i) assets or (ii) other banks, controlled by two parameters: the probability of internal failure p and the threshold T_h (a "solvency" parameter). The systemic risk decreases with the average network degree faster when all assets are equally distributed across banks than when assets are randomly distributed. The more inactive banks each bank can sustain (smaller T_h), the smaller the systemic risk; for some T_h values in model I we report a discontinuity in the systemic risk. When the contagious spreading (ii) becomes stochastic, controlled by a probability p_2 (so that the condition for a bank to remain solvent, i.e. active, is itself stochastic), the systemic risk decreases with decreasing p_2. We also analyse the asset allocation of the U.S. banks. Copyright (C) EPLA, 2014
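The contagion mechanism described above can be sketched as a minimal simulation; all parameter values are assumed for illustration, and the asset channel (i) is folded into a single internal-failure probability p, so this toy shows only internal failure plus the interbank channel (ii) with a solvency threshold:

```python
import random

def simulate_contagion(n_banks=100, avg_degree=4.0, p_internal=0.05,
                       threshold=2, steps=20, seed=42):
    """Sketch of a time-dependent Erdos-Renyi contagion model: each period
    the interbank network is redrawn; an active bank fails either internally
    (probability p_internal) or contagiously, once the number of its failed
    neighbours reaches `threshold`. Returns the final fraction of active banks."""
    rng = random.Random(seed)
    p_edge = avg_degree / (n_banks - 1)
    active = [True] * n_banks
    for _ in range(steps):
        # redraw the Erdos-Renyi interbank network for this sub-period
        neighbours = [[] for _ in range(n_banks)]
        for i in range(n_banks):
            for j in range(i + 1, n_banks):
                if rng.random() < p_edge:
                    neighbours[i].append(j)
                    neighbours[j].append(i)
        nxt = active[:]
        for i in range(n_banks):
            if not active[i]:
                continue
            failed_nb = sum(1 for j in neighbours[i] if not active[j])
            if rng.random() < p_internal or failed_nb >= threshold:
                nxt[i] = False
        active = nxt
    return sum(active) / n_banks

frac = simulate_contagion()
```

The systemic risk in the abstract's sense would then be one minus this surviving fraction, averaged over many realizations.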
Abstract:
Graduate Program in Electrical Engineering (Pós-graduação em Engenharia Elétrica) - FEB
Abstract:
Knowing which individuals can be more efficient in spreading a pathogen throughout a determinate environment is a fundamental question in disease control. Indeed, over recent years the spread of epidemic diseases and its relationship with the topology of the involved system have been a recurrent topic in complex network theory, taking into account both network models and real-world data. In this paper we explore possible correlations between the heterogeneous spread of an epidemic disease governed by the susceptible-infected-recovered (SIR) model, and several attributes of the originating vertices, considering Erdos-Renyi (ER), Barabasi-Albert (BA) and random geometric graphs (RGG), as well as a real case study, the US air transportation network, which comprises the 500 busiest airports in the US along with inter-connections. Initially, the heterogeneity of the spreading is achieved by considering the RGG networks, in which we analytically derive an expression for the distribution of the spreading rates among the established contacts, by assuming that such rates decay exponentially with the distance that separates the individuals. Such a distribution is also considered for the ER and BA models, where we observe topological effects on the correlations. In the case of the airport network, the spreading rates are empirically defined, assumed to be directly proportional to the seat availability. Among both the theoretical and real networks considered, we observe a high correlation between the total epidemic prevalence and the degree, as well as the strength and the accessibility of the epidemic sources. For attributes such as the betweenness centrality and the k-shell index, however, the correlation depends on the topology considered.
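The distance-dependent spreading rule described above can be sketched as a discrete-time SIR simulation on a random geometric graph. This is a simplified illustration with assumed parameter values (connection radius, decay constant alpha, recovery rate gamma), not the paper's exact formulation:

```python
import math
import random

def sir_prevalence_from(seed_node, pos, radius=0.15, alpha=5.0,
                        gamma=0.2, steps=200, rng=None):
    """Discrete-time SIR on a random geometric graph in the unit square,
    where the per-contact infection probability decays exponentially with
    Euclidean distance: beta_ij = exp(-alpha * d_ij). Returns the total
    epidemic prevalence (fraction ultimately recovered)."""
    rng = rng or random.Random(0)
    n = len(pos)
    edges = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if d <= radius:
                beta = math.exp(-alpha * d)
                edges[i].append((j, beta))
                edges[j].append((i, beta))
    state = ['S'] * n
    state[seed_node] = 'I'
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 'I':
                for j, beta in edges[i]:
                    if state[j] == 'S' and rng.random() < beta:
                        new[j] = 'I'
                if rng.random() < gamma:
                    new[i] = 'R'
        state = new
        if 'I' not in state:
            break
    return sum(s == 'R' for s in state) / n

rng = random.Random(1)
pos = [(rng.random(), rng.random()) for _ in range(200)]
prev = sir_prevalence_from(0, pos, rng=random.Random(2))
```

Running this from every vertex in turn and correlating `prev` with each vertex's degree, strength or accessibility reproduces the kind of analysis the abstract describes.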
Abstract:
Spectral decomposition has rarely been used to investigate complex networks. In this work we apply this concept to define two kinds of link-directed attacks and quantify their respective effects on the topology. Several other, more traditional kinds of attacks are also adopted and compared. These attacks had substantially diverse effects depending on the specific network (both models and real-world structures). It is also shown that the spectrally based attacks are particularly effective in altering the transitivity of the networks.
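One plausible way to build a spectrally guided link attack (an assumed interpretation for illustration, not necessarily the authors' exact definition) is to rank each link by the product of the dominant-eigenvector entries of its endpoints and delete the top-ranked links:

```python
def leading_eigenvector(A, iters=500):
    """Power iteration for the dominant eigenvector of an adjacency matrix
    given as a list of lists."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            return v
        v = [x / norm for x in w]
    return v

def spectral_link_attack(A, n_remove):
    """Rank every edge (i, j) by |v1[i] * v1[j]|, v1 being the dominant
    eigenvector, and remove the n_remove top-ranked links. Returns a new
    adjacency matrix; the input is left unchanged."""
    A_new = [row[:] for row in A]
    v1 = leading_eigenvector(A_new)
    n = len(A_new)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A_new[i][j]]
    edges.sort(key=lambda e: abs(v1[e[0]] * v1[e[1]]), reverse=True)
    for i, j in edges[:n_remove]:
        A_new[i][j] = A_new[j][i] = 0
    return A_new

# 4-cycle adjacency matrix as a small usage example
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
A_attacked = spectral_link_attack(A, 1)
```

The effect of such an attack would then be measured by recomputing topological quantities (e.g. transitivity) on `A_attacked`.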
Abstract:
Abstract Background To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems remain to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) using a smaller number of microarray experiments (samples). This situation, which is rather frequent in bioinformatics, makes it difficult to perform statistical tests with methods that model large gene-gene networks. In addition, most of the models rely on dimension reduction through clustering techniques; the resulting network is therefore not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets.
Conclusion The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
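The idea of lagged, directed dependence that SVAR formalizes can be illustrated with a much cruder toy: scoring a directed edge by the lag-1 correlation on data simulated from a known VAR(1) network. This sketch is for intuition only; it is not the SVAR estimator and performs no sparsity penalization or false-discovery control:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def lag1_score(series, j, i):
    """Toy directed-edge score for gene j -> gene i: correlation between
    gene j at time t and gene i at time t + 1."""
    return pearson(series[j][:-1], series[i][1:])

# simulate a known VAR(1) network: gene 0 drives gene 1; gene 2 is independent
rng = random.Random(0)
T = 500
x0 = [rng.gauss(0, 1) for _ in range(T)]
x2 = [rng.gauss(0, 1) for _ in range(T)]
x1 = [0.0] * T
for t in range(1, T):
    x1[t] = 0.9 * x0[t - 1] + rng.gauss(0, 0.1)
series = [x0, x1, x2]
```

On this simulated data the true edge 0 -> 1 receives a high score while the absent edge 2 -> 1 scores near zero; SVAR achieves the same separation in the much harder regime where genes outnumber time points.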
Abstract:
This research work analyses the importance of network organizational models in responding to the challenge of the complexity of the current historical, political, sociological and economic moment; in particular in healthcare, where they address the need to place the person at the centre of the care pathway and to achieve better integration of care. Although the advantages of networks are well described in the literature, studies evaluating them are still few. The case study concerned the Romagna oncology network as perceived by key informants, by professionals (physicians, nurses, administrative staff) and by people with an experience of cancer. The analysis of the key informants clearly shows that the network was created to give high-quality answers to patients' needs, while for the professionals what matters most is the dimension of human relationships and having shared values in order to achieve effectiveness and quality of care. As for the perception of people with an experience of cancer, what emerges is the importance of an appropriate care trajectory and of continuity along a pathway that is already difficult in itself, together with the importance of the humanization of services and of proper doctor-patient communication.
Abstract:
The research work starts from an economic premise. The phenomenon of business networks arises from the economic reality of the markets, and in this context the work cannot avoid outlining a picture of the economic situation, including the downturn, that has particularly affected Italian firms. From this perspective, it was necessary to investigate the phenomenon of globalization, with reference to its origins, characteristics and consequences. The work then dwells on the dogmatic reconstruction of the phenomenon. It starts from its reconstruction in terms of a multilateral contract, whether with a common purpose or as a multilateral exchange contract, and criticizes this approach as not entirely satisfactory, since it is considered too deferential to the current expansive force of the multilateral contract. More convincing is the scheme of linked contracts, which has the merit of preserving the autonomy and independence of the participating entrepreneurs, even though they are part of a unitary economic operation aimed at pursuing a common goal, the "network interest", considered worthy of protection under the legal system pursuant to Art. 1322, para. 2, of the Italian Civil Code. Indeed, the contract lends itself well to designing network models with either a symmetric distribution of decision-making power or an asymmetric one, that is, with a high level of internal hierarchy. Moreover, an affinity must be recognized with the hypotheses of linked contracts in the production phase, consisting in delegating part of production to a third party, and in the distribution phase, where distribution takes place through networks of contracts. The work addresses the issue of the network's liability, framing the problem from two perspectives: internal and external liability. The former is resolved on the basis of the mutual reliance developed by each entrepreneur.
The latter is divided into non-contractual liability, traced in this case to Art. 2050 of the Italian Civil Code, and contractual liability.
Abstract:
The carbonate outcrops of the anticline of Monte Conero (Italy) were studied in order to characterize the geometry of the fractures and to establish their influence on the petrophysical properties (hydraulic conductivity) and on the vulnerability to pollution. The outcrops form an analogue for a fractured aquifer and belong to the Maiolica Fm. and the Scaglia Rossa Fm. The geometrical properties of the fractures, such as orientation, length, spacing and aperture, were collected and statistically analyzed. Five types of mechanical fractures were observed: veins, joints, stylolites, breccias and faults. The types of fractures are arranged in different sets and geometric assemblages which form fracture networks. In addition, the fractures were analyzed at the microscale using thin sections. The fracture age relationships proved similar to those observed at the outcrop scale, indicating that at least three geological episodes have occurred at Monte Conero. A conceptual model for fault development was based on the observations of veins and stylolites. The fracture sets were modelled with the code FracSim3D to generate fracture network models. The permeability of a breccia zone was estimated at the microscale by point-counting and binary-image methods, and at the outcrop scale with Oda's method. Microstructure analysis revealed that only faults and breccias are potential pathways for fluid flow, since all the veins observed are filled with calcite. On this basis, three scenarios were designed to assess the vulnerability to pollution of the analogue aquifer: the first scenario considers Monte Conero without fractures, the second with all observed systematic fractures, and the third with open veins, joints and faults/breccias. The fractures influence the carbonate aquifer by increasing its porosity and hydraulic conductivity. The vulnerability to pollution also depends on the presence of karst zones, detrital zones and the material of the vadose zone.
Abstract:
Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which overestimated the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry, using newly recommended mercury contact angle values.
The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, together with the pore connectivity values.
The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment; it was found that the mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid penetration, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure; the surface-area-to-volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio will be the most permeable.
The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All the named methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model. The measured pore structural parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from the flow-through pores to the mesopores in small-scaled silica monoliths with a narrow distribution.
The optimum regimes of the pore structural parameters for the given target parameters in HPLC separations were predicted. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.
The study on the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for the column efficiency, the latter being independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and the mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with total porosity, although this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
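The Hagen-Poiseuille relation invoked above can be sketched numerically. The pore is idealized here as a straight cylinder, which real flow-through pores are not, and the input values are illustrative only:

```python
import math

def hagen_poiseuille_dp(flow_m3s, radius_m, length_m, viscosity_pas):
    """Pressure drop for laminar flow through a cylindrical channel:
    Delta_P = 8 * mu * L * Q / (pi * r**4). A simplified stand-in for a
    monolith's flow-through pores."""
    return 8 * viscosity_pas * length_m * flow_m3s / (math.pi * radius_m ** 4)

# illustrative comparison: halving the pore radius raises the pressure
# drop sixteen-fold at fixed flow (Delta_P scales as r**-4)
dp_small = hagen_poiseuille_dp(1e-12, 1e-6, 0.1, 1e-3)
dp_large = hagen_poiseuille_dp(1e-12, 2e-6, 0.1, 1e-3)
```

The strong r**-4 dependence is why the text emphasizes that pore geometry, rather than pore size alone, must be characterized carefully before predicting column back pressure.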
Abstract:
Peru is a developing country with abundant fresh water resources, yet the lack of infrastructure leaves much of the population without access to safe water for domestic uses. The author of this report was a Peace Corps Volunteer in the sector of water & sanitation in the district of Independencia, Ica, Peru. Independencia is located in the arid coastal region of the country, receiving on average 15 mm of rain annually. The water source for this district comes from the Pisco River, originating in the Andean highlands and outflowing into the Pacific Ocean near the town of Pisco, Peru. The objectives of this report are to assess the water supply and sanitation practices, model the existing water distribution system, and make recommendations for future expansion of the distribution system in the district of Independencia, Peru. The assessment of water supply will be based on the results from community surveys done in the district of Independencia, water quality testing done by a detachment of the U.S. Navy, as well as on the results of a hydraulic model built in EPANET 2.0 to represent the distribution system. Sanitation practice assessments will be based on the surveys as well as observations from the author while living in Peru. Recommendations for system expansions will be made based on results from the EPANET model and the municipality’s technical report for the existing distribution system. Household water use and sanitation surveys were conducted with 84 families in the district revealing that upwards of 85% store their domestic water in regularly washed containers with lids. Over 80% of those surveyed are drinking water that is treated, mostly boiled. Of those surveyed, over 95% reported washing their hands and over 60% mentioned at least one critical time for hand washing when asked for specific instances. From the surveys, it was also discovered that over 80% of houses are properly disposing of excrement, in either latrines or septic tanks. 
There were 43 families interviewed with children five years of age or under, and just over 18% reported the child had a case of diarrhea within the last month at the time of the interview. Finally, from the surveys it was calculated that the average water use per person per day is about 22 liters. Water quality testing carried out by a detachment of the U.S. Navy revealed that the water intended for consumption in the houses surveyed was not suitable for consumption, with a median E. coli most probable number of 47/100 ml for the 61 houses sampled. The median total coliforms was 3,000 colony forming units per 100 ml. EPANET was used to simulate the water delivery system and evaluate its performance. EPANET is designed for continuous water delivery systems, assuming all pipes are always flowing full. To account for the intermittent nature of the system, multiple EPANET network models were created to simulate how water is routed to the different parts of the system throughout the day. The models were created from interviews with the water technicians and a map of the system created using handheld GPS units. The purpose is to analyze the performance of the water system that services approximately 13,276 people in the district of Independencia, Peru, as well as provide recommendations for future growth and improvement of the service level. Performance evaluation of the existing system is based on meeting 25 liters per person per day while maintaining positive pressure at all nodes in the network. The future performance is based on meeting a minimum pressure of 20 psi in the main line, as proposed by Chase (2000). The EPANET model results yield an average nodal pressure for all communities of 71 psi, with a range from 1.3 – 160 psi. Thus, if the current water delivery schedule obtained from the local municipality is followed, all communities should have sufficient pressure to deliver 25 l/p/d, with the exception of Los Rosales, which can only supply 3.25 l/p/d. 
However, if the line to Los Rosales were increased from one to four inches, the system could supply this community with 25 l/p/d. The district of Independencia could greatly benefit from increasing the service level to 24-hour water delivery and a minimum of 50 l/p/d, so that communities without reliable access due to insufficient pressure would become equal beneficiaries of this invaluable resource. To evaluate the feasibility of this, EPANET was used to model the system with a range of population growth rates, system lifetimes, and demands. In order to meet a minimum pressure of 20 psi in the main line, the 6-inch diameter main line must be increased and approximately two miles of trench must be excavated up to 30 feet deep. The sections of the main line that must be excavated are mile 0-1 and 1.5-2.5, and the first 3.4 miles of the main line must be increased from 6 to 16 inches, contracting to 10 inches for the remaining 5.8 miles. Doing this would allow 24-hour water delivery and provide 50 l/p/d for a range of population growth rates and system lifetimes. It is expected that improving the water delivery service would reduce the morbidity and mortality from diarrheal diseases by decreasing the recontamination of the water due to transport and household storage, as well as by maintaining continuous pressure in the system to prevent infiltration of contaminated groundwater. However, this expansion must be carefully planned so as not to affect aquatic ecosystems or other districts utilizing water from the Pisco River. It is recommended that stream gaging of the Pisco River and precipitation monitoring of the surrounding watershed be initiated in order to begin a hydrological study that would be integrated into the district's water resource planning. It is also recommended that the district begin routine water quality testing, with the results available to the public.
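The head-loss reasoning behind the pipe-diameter recommendations can be illustrated with the Hazen-Williams formula, one of the friction models EPANET supports. The flow value and roughness coefficient below are assumed figures for illustration, not numbers from the report:

```python
def hazen_williams_headloss(q_m3s, d_m, length_m, c=150.0):
    """Head loss (m) along a pipe by the Hazen-Williams formula in SI form:
    h_f = 10.67 * L * Q**1.852 / (C**1.852 * d**4.87),
    with Q in m^3/s, d and L in m, and C the roughness coefficient."""
    return 10.67 * length_m * q_m3s ** 1.852 / (c ** 1.852 * d_m ** 4.87)

# same assumed demand of 0.5 L/s through 100 m of 1-inch vs 4-inch pipe
q = 0.0005
h1 = hazen_williams_headloss(q, 0.0254, 100.0)
h4 = hazen_williams_headloss(q, 0.1016, 100.0)
```

Because head loss scales roughly as d**-4.87, upsizing a line from one to four inches cuts friction losses by several hundredfold at the same flow, which is why the small-diameter branch to Los Rosales starves that community of pressure.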
Abstract:
This tutorial gives a step-by-step explanation of how one uses experimental data to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. The tutorial is not tied to a specific simulation engine; rather, it covers the kind of data needed for constructing a model, how they are used, and potential pitfalls in the process.
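A minimal example of the kind of model such a tutorial builds up to is a passive two-compartment cell (soma plus dendrite) integrated with forward Euler. Every parameter here is an illustrative placeholder, not a measured value from any particular cell:

```python
def simulate_two_compartments(i_inj=0.1, t_stop=50.0, dt=0.01):
    """Passive two-compartment model: each compartment obeys
    C * dV/dt = -(V - E_leak)/R_m + g_c * (V_other - V) [+ I_inj],
    with current injected into the soma only. Units are arbitrary but
    internally consistent; returns final (soma, dendrite) voltages."""
    c_m, r_m, e_leak, g_c = 1.0, 10.0, -65.0, 0.5   # placeholder parameters
    v_soma = v_dend = e_leak
    t = 0.0
    while t < t_stop:
        dv_s = (-(v_soma - e_leak) / r_m + g_c * (v_dend - v_soma) + i_inj) / c_m
        dv_d = (-(v_dend - e_leak) / r_m + g_c * (v_soma - v_dend)) / c_m
        v_soma += dt * dv_s
        v_dend += dt * dv_d
        t += dt
    return v_soma, v_dend

vs, vd = simulate_two_compartments()
```

In a realistic workflow each placeholder (membrane capacitance, leak resistance, coupling conductance) would instead be constrained by the experimental data the tutorial discusses, and the tutorial's point is precisely how imprecise each of those constraints can be.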
Abstract:
BACKGROUND The diagnostic performance of biochemical scores and artificial neural network models for portal hypertension and cirrhosis is not well established. AIMS To assess the diagnostic accuracy of six serum scores, artificial neural networks and liver stiffness measured by transient elastography for diagnosing cirrhosis, clinically significant portal hypertension and oesophageal varices. METHODS 202 consecutive compensated patients requiring liver biopsy and hepatic venous pressure gradient measurement were included. Several serum tests (alone and combined into scores) and liver stiffness were measured. Artificial neural networks with and without liver stiffness as an input variable were also created. RESULTS The best non-invasive method for diagnosing cirrhosis, portal hypertension and oesophageal varices was liver stiffness (C-statistics = 0.93, 0.94 and 0.90, respectively). Among the serum tests/scores, the best for diagnosing cirrhosis was Fibrosis-4, and the best for portal hypertension and oesophageal varices was the Lok score. Artificial neural networks including liver stiffness had high diagnostic performance for cirrhosis, portal hypertension and oesophageal varices (accuracy > 80%), but were not statistically superior to liver stiffness alone. CONCLUSIONS Liver stiffness was the best non-invasive method to assess the presence of cirrhosis, portal hypertension and oesophageal varices. The use of artificial neural networks integrating different non-invasive tests did not increase the diagnostic accuracy of liver stiffness alone.
Abstract:
This final-year project studies the core network in next generation networks (NGN) and how the evolution of current networks towards these concepts will change the way the communications networks of the future are conceived and developed. The study is divided into three main parts. It begins with an analysis of the evolution of the core network in digital mobile communications networks, from the deployment of the first digital networks to the present day, covering both core and access networks, the changes that have taken place within operators' own network structures, and the way operators interconnect their networks. The second part forms the theoretical body of the work: it studies, at the functional and network-architecture level, the new network models defined by the standardization bodies that give rise to next generation networks (NGN), which will constitute the next step in the evolution of communications networks towards a common network infrastructure for all current access networks. The third part studies the degree of transformation that the core of current mobile and fixed communications networks must undergo, assesses the current state of that integration and the difficulties that manufacturers and service providers are encountering in deploying such networks in the present technological and economic context, and analyses how this change will affect the business models of telecommunications service providers. Finally, the work examines how this process is being carried out through a practical case: the deployment and interconnection, in the commercial network of an operator in Spain, of the solution proposed by an equipment manufacturer based on the models described above, together with all the implications of this specific case.