940 results for: Complex network. Optimal path. Optimal path cracks
Abstract:
Transition metals such as Fe, Cu, Mn, Ni, or Co are essential nutrients, as they are constitutive elements of a significant fraction of cell proteins. Such metals are present in the active site of many enzymes, and also participate as structural elements in different proteins. From a chemical point of view, metals have a defined order of binding affinity, designated the Irving-Williams series (Irving and Williams, 1948): Mg2+ < Mn2+ < Fe2+ < Co2+ < Ni2+ < Cu2+ > Zn2+. Since cells contain a high number of different proteins harbouring different metal ions, a simplistic model in which proteins are synthesized and metals are imported into a "cytoplasmic soup" cannot explain the final product that we find in the cell. Instead, we need to envisage a complex model in which specific ligands are present in definite amounts, leaving the right amounts of available metals and protein binding sites so that specific pairs can bind appropriately. A critical control on the amounts of ligands and metals present is exerted through specific metal-responsive regulators able to induce the synthesis of the right amounts of ligands (essentially metal-binding proteins) and of import and efflux proteins. These systems are adapted to establish the metal-protein equilibria compatible with the formation of the right metalloprotein complexes. Understanding this complex network of interactions is central to the understanding of metal metabolism for the synthesis of metalloenzymes, a key topic in the Rhizobium-legume symbiosis. In the case of the Rhizobium leguminosarum bv. viciae (Rlv) UPM791 - Pisum sativum symbiotic system, the concentration of nickel in the plant nutrient solution is a limiting factor for hydrogenase expression, and provision of high amounts of this element in the plant nutrient solution is required to ensure optimal levels of enzyme synthesis (Brito et al., 1994).
Abstract:
Understanding flow path connectivity within a geothermal reservoir is a critical component for efficiently producing sustained flow rates of hot fluids from the subsurface. I present a new approach for characterizing subsurface fracture connectivity that combines petrographic and cold cathodoluminescence (CL) microscopy with stable isotope analysis (δ18O and δ13C) and clumped isotope (Δ47) thermometry of fracture-filling calcite cements from a geothermal reservoir in northern Nevada. Calcite cement samples were derived from both drill cuttings and core samples taken at various depths from wells within the geothermal field. CL microscopy of some fracture-filling cements shows banding parallel to the fracture walls as well as brecciation, indicating that the cements are related to fracture opening and fault slip. Variations in trace element composition indicated by the luminescence patterns reflect variations in the composition and source of fluids moving through the fractures as they opened episodically. Calcite δ13C and δ18O results also show significant variation among the sampled cements, reflecting multiple generations of fluids and fracture connectivity. Clumped isotope analyses performed on a subset of the cements analyzed for conventional δ18O and δ13C mostly show calcite growth temperatures around 150°C, above the current ambient rock temperature, indicating a common temperature trend for the geothermal reservoir. However, calcite cements sampled along faults located within the well field showed both cold (18.7°C) and hot (226.1°C) temperatures. The anomalously cool temperature estimated from clumped isotope thermometry along the fault suggests a possible connection between the geothermal source fluids of this system and surface waters. This may indicate that some of the faults within the well field transport meteoric water from the surface to be heated at depth, which is then circulated through a complex network of fractures and other faults.
Abstract:
Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms, with static output nonlinearities responsible for masking, is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers) and others leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, is most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.
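To make the stage ordering concrete, the following is a toy sketch of the excitatory pathway just described, assuming left- and right-eye signals as NumPy arrays. The stage order follows the account above, but every operation, weight, and constant is an illustrative assumption, not the fitted psychophysical model.

```python
import numpy as np

def toy_contrast_response(left, right, p=2.0, noise_sd=0.1, rng=None):
    """Toy sketch of the excitatory contrast pathway stages listed above.
    Stage order follows the abstract; all operations and constants are
    illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    # Half-wave rectification (on-channel only, for brevity).
    L, R = np.maximum(left, 0.0), np.maximum(right, 0.0)
    # Preliminary half-binocular summation: each eye's signal gains a
    # fraction of the other's (the 0.5 weight is an arbitrary stand-in).
    Lh, Rh = L + 0.5 * R, R + 0.5 * L
    # Square-law transducer.
    Lt, Rt = Lh ** p, Rh ** p
    # Full binocular summation, then linear summation over area
    # (phase pooling is folded into the area sum in this toy version).
    resp = (Lt + Rt).sum()
    # Additive noise feeding the decision stage.
    return resp + rng.normal(scale=noise_sd)
```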
Abstract:
On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method is introduced for explicitly taking into account a priori probabilities of alternative hypotheses on the structure of the information initiating memory trace retrieval, together with modified ROCs (mROCs: a posteriori probabilities of correct recall vs. false alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
Abstract:
A network can be analyzed at different topological scales, ranging from single nodes to motifs and communities, up to the complete structure. We propose a novel approach which extends from single nodes to the whole-network level by considering non-overlapping subgraphs (i.e. connected components) and their interrelationships and distribution through the network. Though such subgraphs can be completely general, our methodology focuses on the cases in which the nodes of these subgraphs share some special feature, such as being critical for the proper operation of the network. The methodology of subgraph characterization involves two main aspects: (i) the generation of histograms of subgraph sizes and of distances between subgraphs, and (ii) a merging algorithm, developed to assess the relevance of nodes outside subgraphs by progressively merging subgraphs until the whole network is covered. The latter procedure complements the histograms by taking into account the nodes lying between subgraphs, as well as the relevance of these nodes to the overall subgraph interconnectivity. Experiments were carried out using four types of network models and five instances of real-world networks, in order to illustrate how subgraph characterization can help complement complex network-based studies.
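As a minimal sketch of aspect (i), assuming NetworkX and taking the subgraphs to be the connected components induced by nodes sharing some special feature (a stand-in degree criterion is used here; the merging algorithm of aspect (ii) is not sketched):

```python
import networkx as nx
from collections import Counter

def subgraph_size_histogram(G, special_nodes):
    """Histogram of the sizes of the connected components induced by a
    set of 'special' nodes (aspect (i) of the characterization above)."""
    H = G.subgraph(special_nodes)              # induced subgraph
    sizes = [len(c) for c in nx.connected_components(H)]
    return Counter(sizes)                      # size -> number of subgraphs

# Illustrative example: high-degree nodes in a random graph.
G = nx.erdos_renyi_graph(200, 0.03, seed=1)
special = [n for n in G if G.degree[n] >= 8]   # stand-in 'special feature'
print(subgraph_size_histogram(G, special))
```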
Abstract:
There are several ways of controlling the propagation of a contagious disease. For instance, to reduce the spreading of an airborne infection, individuals can be encouraged to remain in their homes and/or to wear face masks outside their domiciles. However, when a limited number of masks is available, who should use them: the susceptible subjects, the infective persons, or both populations? Here we employ susceptible-infective-recovered (SIR) models, described in terms of ordinary differential equations and of probabilistic cellular automata, in order to investigate how the deletion of links in the random complex network representing the social contacts among individuals affects the dynamics of a contagious disease. The inspiration for this study comes from recent discussions about the impact of measures usually recommended by public health organizations for preventing the propagation of the swine influenza A (H1N1) virus. Our answer to this question may also be valid for other eco-epidemiological systems.
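For reference, a minimal sketch of the ODE side of such an SIR model, using SciPy; the parameter values are illustrative, and deleting links in the contact network acts roughly like lowering the effective transmission rate beta:

```python
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR ordinary differential equations (population fractions)."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

# Illustrative parameters; mask use or link deletion lowers beta.
sol = solve_ivp(sir, (0, 160), [0.999, 0.001, 0.0], args=(0.3, 0.1))
print("Peak infected fraction:", sol.y[1].max())
```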
Abstract:
Complex systems, i.e. systems composed of a large set of elements interacting in a non-linear way, are found all around us. In the last decades, different approaches have been proposed toward their understanding, one of the most interesting being the complex network perspective. This legacy of the mathematical concepts proposed by Leonhard Euler in the 18th century is still current, and increasingly relevant to real-world problems. In recent years, it has been demonstrated that network-based representations can yield relevant knowledge about complex systems. In spite of that, several problems have been detected, mainly related to the degree of subjectivity involved in the creation and evaluation of such network structures. In this Thesis, we propose addressing these problems by means of different data mining techniques, thus obtaining a novel hybrid approach intermingling complex networks and data mining. Results indicate that such techniques can be effectively used to i) enable the creation of novel network representations, ii) reduce the dimensionality of analyzed systems by pre-selecting the most important elements, iii) describe complex networks, and iv) assist in the analysis of different network topologies. The soundness of this approach is validated through different validation cases drawn from actual biomedical problems, e.g. the diagnosis of cancer from tissue analysis, or the study of the dynamics of the brain under different neurological disorders.
Abstract:
Doctoral thesis in Health Sciences.
Abstract:
Water transport in wood is vital for the survival of trees. With synchrotron radiation X-ray tomographic microscopy (SRXTM), it has become possible to characterize and quantify the three-dimensional (3D) network formed by the vessels that are responsible for longitudinal transport. In the present study, the spatial size dependence of vessels and their organization inside single growth rings, in terms of vessel-induced porosity, were studied by SRXTM. Network characteristics, such as connectivity, were deduced from the processed tomographic data by digital image analysis and related to known complex network topologies.
Abstract:
Many routes have been described for percutaneous adrenal gland biopsy. They require either a complex non-axial path, a long hydrodissection, or even passage through an organ, thereby increasing complications. We describe here an approach using an artificially induced carbon dioxide (CO2) pneumothorax, performed as an outpatient procedure in a 57-year-old woman. Under local anaesthesia, 200 ml of CO2 was injected into the pleural space through a Veress needle under computed tomography fluoroscopy, to clear the lung parenchyma from the biopsy route. Using this technique, transthoracic adrenal biopsy can be performed under simple local anaesthesia as a safe outpatient procedure.
Abstract:
We study a Kuramoto model in which the oscillators are associated with the nodes of a complex network and the interactions include a phase frustration, thus preventing full synchronization. The system organizes into a regime of remote synchronization where pairs of nodes with the same network symmetry are fully synchronized, despite their distance on the graph. We provide analytical arguments to explain this result, and we show how the frustration parameter affects the distribution of phases. An application to brain networks suggests that anatomical symmetry plays a role in neural synchronization by determining correlated functional modules across distant locations.
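A minimal sketch of such a frustrated (Kuramoto-Sakaguchi-type) model on a network, assuming NetworkX and plain Euler integration; all parameter values are illustrative:

```python
import numpy as np
import networkx as nx

def kuramoto_frustrated(G, K=1.0, alpha=0.8, dt=0.01, steps=5000, seed=0):
    """Euler integration of a Kuramoto model with phase frustration alpha
    on graph G. A sketch only; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    theta = rng.uniform(0, 2 * np.pi, len(G))
    omega = rng.normal(0.0, 0.1, len(G))        # natural frequencies
    for _ in range(steps):
        # Frustrated coupling: sin(theta_j - theta_i - alpha) on each edge.
        diff = theta[None, :] - theta[:, None] - alpha
        theta += dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
    return theta % (2 * np.pi)

G = nx.random_regular_graph(4, 50, seed=1)
print(kuramoto_frustrated(G)[:5])
```

Comparing the final phases of structurally symmetric node pairs with those of arbitrary pairs is one way to probe the remote synchronization described above.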
Abstract:
Introduction: The field of connectomic research is growing rapidly, driven by methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005, Sporns et al, 2005). These exhibit both spatial and topological information that constrains functional imaging studies and is relevant to their interpretation. The need has grown for a special-purpose software tool to support investigations of such connectome data by both clinical researchers and neuroscientists.
Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and to access a multitude of scientific libraries.
Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve more node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org).
Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant data types, and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
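As an illustration of the kind of scripting the Python shell enables, here is a hypothetical NetworkX snippet; the graph and the weight threshold are stand-ins, not the ConnectomeViewer API:

```python
import networkx as nx

# Stand-in for a loaded connectome network with weighted edges.
G = nx.les_miserables_graph()

# Threshold edges by weight, as the interactive edge filters do.
strong = nx.Graph([(u, v, d) for u, v, d in G.edges(data=True)
                   if d.get("weight", 0) >= 3])

# Basic complex-network measures via NetworkX.
print("Nodes kept:", strong.number_of_nodes())
print("Mean clustering:", nx.average_clustering(strong))
```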
Abstract:
Biological systems are complex dynamical systems whose relationships with the environment have strong implications for their regulation and survival. From the interactions between plant and environment, a quite complex network of plant responses can emerge that is rarely observed through classical analytical approaches. The objective of the current study was to test the hypothesis that the photosynthetic responses of different tree species to increasing irradiance are related to changes in the network connectances of gas exchange and of the photochemical apparatus, and to alterations in plant autonomy in relation to the environment. The heat dissipative capacity, assessed through daily changes in leaf temperature, was also evaluated. It indicated that the early successional species (Citharexylum myrianthum Cham. and Rhamnidium elaeocarpum Reiss.) were more efficient as dissipative structures than the late successional one (Cariniana legalis (Mart.) Kuntze), suggesting that the parameter ΔT (Tair - Tleaf, in ºC) could be a simple tool to help classify successional classes of tropical trees. Our results indicated a pattern of network responses and autonomy changes under high irradiance. Considering the maintenance of daily CO2 assimilation, the species tolerant to high irradiance (C. myrianthum and R. elaeocarpum) tended to keep the gas exchange network connectance stable and to increase their autonomy in relation to the environment. On the other hand, the late successional species (C. legalis) tended to lose autonomy, decreasing the network connectance of gas exchange. All species showed lower autonomy and higher network connectance of the photochemical apparatus under high irradiance.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
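A minimal sketch of the pairwise-connectivity objective and a greedy removal loop, assuming NetworkX; plain degree stands in for the DFSH ranking function, which actually combines several local vertex characteristics:

```python
import networkx as nx

def pairwise_connectivity(G):
    """Number of connected node pairs: the objective that critical node
    detection tries to minimize."""
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def remove_critical(G, k):
    """Greedy sketch: repeatedly remove the highest-ranked vertex.
    Degree is a stand-in for the DFSH ranking function."""
    H = G.copy()
    for _ in range(k):
        v = max(H.degree, key=lambda nd: nd[1])[0]
        H.remove_node(v)
    return H

G = nx.barabasi_albert_graph(300, 2, seed=7)
print("before:", pairwise_connectivity(G))
print("after :", pairwise_connectivity(remove_critical(G, 10)))
```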
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network, whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread, as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task, as graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allow networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
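As a small illustration of distinguishing graph models through well-known network measures (the thesis's measure set and meta-analysis procedure are more extensive), assuming NetworkX:

```python
import networkx as nx

def measure_vector(G):
    """A few well-known network properties of the kind compared above;
    the selection here is illustrative."""
    return {
        "avg_clustering": nx.average_clustering(G),
        "degree_assortativity": nx.degree_assortativity_coefficient(G),
        "n_components": nx.number_connected_components(G),
    }

# Networks from two different graph models with matched size and density.
ba = nx.barabasi_albert_graph(500, 3, seed=0)                # preferential attachment
er = nx.gnm_random_graph(500, ba.number_of_edges(), seed=0)  # uniform random

print("BA:", measure_vector(ba))
print("ER:", measure_vector(er))
```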