904 results for Optimisation of methods
Marine biotoxins in the Catalan littoral: could biosensors be integrated into monitoring programmes?
Abstract:
This article describes the electrochemical enzyme sensors and immunosensors that have been developed by our groups for the detection of the marine biotoxin okadaic acid (OA), and discusses the possibility of integrating them into monitoring programmes. The enzyme sensors for OA reported herein are based on the inhibition of immobilised protein phosphatase 2A (PP2A) by this toxin and the electrochemical measurement of the enzyme activity through the use of appropriate enzyme substrates, which become electrochemically active after dephosphorylation by the enzyme. The electrochemical immunosensors described in this article are based on a competitive indirect enzyme-linked immunosorbent assay (ciELISA), using alkaline phosphatase (ALP) or horseradish peroxidase (HRP) as labels, and an enzymatic recycling system with diaphorase (DI). The biosensors presented herein have been applied to the analysis of dinoflagellates, mussels and oysters. Preliminary validations with colorimetric assays and LC-MS/MS have demonstrated the possibility of using the developed biotools for the preliminary screening of marine biotoxins in field or cultured samples, offering complementary information to chromatography. In conclusion, although optimisation of some experimental parameters is still required, the integration of biosensors into monitoring programmes is viable and may provide advantages over other analytical techniques in terms of analysis time, simplicity, selectivity, sensitivity, disposability of electrodes and cost effectiveness.
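Both the enzyme sensors and the immunosensors ultimately rely on a calibration curve relating the measured electrochemical signal to OA concentration. The sketch below is a minimal, hypothetical illustration of fitting a four-parameter logistic curve to competitive-assay calibration standards and inverting it for an unknown sample; the data, the `four_pl` helper and the `estimate_oa` function are invented for illustration and do not reproduce the authors' data processing.

```python
# Minimal sketch: 4-parameter logistic (4PL) calibration for a competitive
# assay (signal decreases as toxin concentration increases). All values are
# hypothetical; this is not the published biosensor data-processing method.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """4PL model: signal as a function of analyte concentration x (ng/mL)."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Hypothetical calibration standards: OA concentration (ng/mL) vs. signal (nA)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
signal = np.array([98.0, 95.0, 85.0, 62.0, 35.0, 15.0, 6.0])

params, _ = curve_fit(four_pl, conc, signal, p0=[100.0, 5.0, 5.0, 1.0])
top, bottom, ic50, hill = params

def estimate_oa(measured_signal):
    """Invert the fitted 4PL curve to estimate OA concentration."""
    ratio = (top - bottom) / (measured_signal - bottom) - 1.0
    return ic50 * ratio ** (1.0 / hill)

print(f"IC50 = {ic50:.2f} ng/mL, estimated OA = {estimate_oa(50.0):.2f} ng/mL")
```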
Abstract:
MOTIVATION: Microarray results accumulated in public repositories are widely reused in meta-analytical studies and secondary databases. The quality of the data obtained with this technology varies from experiment to experiment, and an efficient method for quality assessment is necessary to ensure their reliability. RESULTS: The lack of a good benchmark has hampered evaluation of existing methods for quality control. In this study, we propose a new independent quality metric that is based on evolutionary conservation of expression profiles. We show, using 11 large organ-specific datasets, that IQRray, a new quality metric developed by us, exhibits the highest correlation with this reference metric among the 14 metrics tested. IQRray outperforms other methods in the identification of poor-quality arrays in datasets composed of arrays from many independent experiments. In contrast, the performance of methods designed for detecting outliers in a single experiment, such as Normalized Unscaled Standard Error and Relative Log Expression, was low, because these methods are unable to detect datasets containing only low-quality arrays and because their scores cannot be directly compared between experiments. AVAILABILITY AND IMPLEMENTATION: The R implementation of IQRray is available at: ftp://lausanne.isb-sib.ch/pub/databases/Bgee/general/IQRray.R. CONTACT: Marta.Rosikiewicz@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
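The reference metric is based on evolutionary conservation of expression profiles, and IQRray itself is distributed as an R script at the URL above. The following Python sketch is not the IQRray algorithm; it only illustrates, under simplified assumptions, one way a dataset-level quality score can be assigned to each array, by correlating its expression profile against a leave-one-out consensus of the remaining arrays.

```python
# Simplified sketch (not the published IQRray algorithm): score each array by
# how well its expression profile agrees with a leave-one-out consensus of
# the other arrays; a low score flags a potentially low-quality array.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_genes, n_arrays = 1000, 12
gene_effect = rng.normal(5.0, 1.0, size=n_genes)               # shared expression profile
expr = gene_effect[:, None] + rng.normal(0.0, 0.3, size=(n_genes, n_arrays))
expr[:, 0] = rng.normal(5.0, 1.0, size=n_genes)                # array 0: profile lost (low quality)

scores = []
for j in range(n_arrays):
    consensus = np.median(np.delete(expr, j, axis=1), axis=1)  # leave-one-out consensus
    rho, _ = spearmanr(expr[:, j], consensus)
    scores.append(rho)

for j, score in enumerate(scores):
    print(f"array {j:2d}: quality score {score:.3f}")
```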
Abstract:
In this work we study the detection and description of feature points, a technology that makes it possible to extract information contained in images. We first present the state of the art together with an evaluation of the most relevant methods. We then propose the new detection and description methods we have created, together with an optimised algorithm called DART, which outperforms the state of the art. Finally, we show some applications in which DART points are used. Based on an approximation of the Gaussian scale space, the proposed detector can extract points of different sizes that are invariant to changes in viewpoint, rotation and illumination. Reusing the scale space during the description process, together with the use of simplified and optimised structures, allows the whole procedure to be carried out in less computational time than previously achieved. The result is fast extraction of invariant and distinctive points, which enables their use in applications such as object tracking, 3D scene reconstruction and visual search engines.
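The DART detector itself is not described here in enough detail to reproduce, but the Gaussian scale-space idea it builds on can be sketched. The hypothetical `dog_keypoints` function below builds a small Gaussian scale space, takes differences of Gaussians and keeps local maxima across space and scale; it is a generic illustration, not the proposed method.

```python
# Minimal sketch of scale-space interest point detection (not the DART
# detector itself): build a Gaussian scale space, take differences of
# Gaussians, and keep local maxima across space and scale.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.2), threshold=0.02):
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)])
    # A pixel is a keypoint if it is the maximum of its 3x3x3 neighbourhood
    # in (scale, y, x) and its response exceeds the threshold.
    local_max = maximum_filter(dog, size=3) == dog
    strong = dog > threshold
    scale_idx, ys, xs = np.nonzero(local_max & strong)
    return list(zip(xs, ys, [sigmas[i + 1] for i in scale_idx]))

rng = np.random.default_rng(1)
img = rng.random((128, 128))
print(f"{len(dog_keypoints(img))} candidate keypoints found")
```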
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'under fit' models, having insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'over fit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
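The under- versus over-fitting trade-off can be made concrete with a toy example. The sketch below simulates presence/absence data with a unimodal response to one environmental covariate and compares occurrence models of increasing complexity by cross-validation; the data, covariate and model choices are illustrative only and do not come from the paper.

```python
# Sketch: comparing SDMs of different complexity on simulated presence/absence
# data. Cross-validated performance shows when extra flexibility stops paying
# off. Data and model choices are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500
temp = rng.uniform(0, 30, n)                                   # environmental covariate
p = 1 / (1 + np.exp(-(-6 + 0.8 * temp - 0.02 * temp**2)))      # unimodal true response
y = rng.binomial(1, p)                                         # presence/absence
X = temp.reshape(-1, 1)

for degree in (1, 2, 6):
    model = make_pipeline(PolynomialFeatures(degree), StandardScaler(),
                          LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"polynomial degree {degree}: mean cross-validated AUC = {auc:.3f}")
```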
Abstract:
Summary: Global warming has led to an average earth surface temperature increase of about 0.7 °C in the 20th century, according to the 2007 IPCC report. In Switzerland, the temperature increase over the same period was even higher: 1.3 °C in the Northern Alps and 1.7 °C in the Southern Alps. The impacts of this warming on ecosystems, especially on climatically sensitive systems such as the treeline ecotone, are already visible today. Alpine treeline species show increased growth rates, more establishment of young trees in forest gaps is observed in many locations, and treelines are migrating upwards. With the forecasted warming, this globally visible phenomenon is expected to continue. This PhD thesis aimed to develop a set of methods and models to investigate current and future climatic treeline positions and treeline shifts in the Swiss Alps in a spatial context. The focus was therefore on: 1) the quantification of current treeline dynamics and their potential causes, 2) the evaluation and improvement of temperature-based treeline indicators and 3) the spatial analysis and projection of past, current and future climatic treeline positions and their respective elevational shifts. The methods used involved a combination of field temperature measurements, statistical modeling and spatial modeling in a geographical information system. To determine treeline shifts and assign the respective drivers, neighborhood relationships between forest patches were analyzed using moving window algorithms. Time series regression modeling was used in the development of an air-to-soil temperature transfer model to calculate thermal treeline indicators. The indicators were then applied spatially to delineate the climatic treeline, based on interpolated temperature data. Observation of recent forest dynamics in the Swiss treeline ecotone showed that changes were mainly due to forest in-growth, but also partly to upward altitudinal shifts. The recent reduction in agricultural land use was found to be the dominant driver of these changes. Climate-driven changes were identified only at the uppermost limits of the treeline ecotone. Seasonal mean temperature indicators were found to be the best for predicting climatic treelines. Applying dynamic seasonal delimitations and the air-to-soil temperature transfer model improved the indicators' applicability for spatial modeling. Reproducing the climatic treelines of the past 45 years revealed regionally different altitudinal shifts, the largest being located near the highest mountain mass. Modeling climatic treelines based on two IPCC climate warming scenarios predicted major shifts in treeline altitude. However, the currently observed treeline is not expected to reach this limit easily, due to lagged reaction, possible climate feedback effects and other limiting factors.
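As a concrete, simplified illustration of the time-series regression step mentioned above (not the thesis' transfer model), the sketch below regresses synthetic daily soil temperature on current and lagged air temperature and reports the fitted coefficients and error.

```python
# Minimal sketch of an air-to-soil temperature transfer model: ordinary
# least-squares regression of daily soil temperature on current and lagged
# air temperature. Synthetic data; not the model developed in the thesis.
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365)
air = 5 + 10 * np.sin(2 * np.pi * (days - 100) / 365) + rng.normal(0, 2, days.size)
# Soil temperature: damped and lagged version of air temperature plus noise.
soil = 4 + 0.6 * np.roll(air, 5) + rng.normal(0, 0.5, days.size)

lags = [0, 1, 3, 5, 7]
X = np.column_stack([np.ones(days.size)] + [np.roll(air, k) for k in lags])[10:]
y = soil[10:]                       # drop the first days affected by np.roll wrap-around

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"intercept and lag coefficients: {np.round(coef, 2)}, RMSE = {rmse:.2f} °C")
```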
Abstract:
The Internet keeps growing, which may mean that the quality of its content is not always guaranteed. This work examines how individuals use a series of methods, or ethnomethods (Garfinkel, 1968), more or less systematic and more or less informal, to find the most valid information. Thanks to these methods, individuals evaluate the credibility of web pages in their everyday practice.
Abstract:
Paleopathology is the study of disease, physiological disruptions and impairment in the past. After two centuries of mainly descriptive studies, efforts are being made towards better methodological approaches to the study of diseases in human populations of ancient times whose remains are recovered by archaeology. Paleoepidemiology can be defined as an interdisciplinary area that aims to develop more suitable epidemiological methods, and to apply those in current use, to the study of disease determinants in human populations in the past. In spite of the limits of funerary or other archaeological series of human remains, paleoepidemiology tries to reconstruct past conditions of disease and health in those populations and their relation to lifestyle and environment. Despite the limits of studying populations of the deceased, most of them represented exclusively by bones and teeth, the frequency of lesions and other biological signs of interest to investigations of health, and their relative distribution in the skeletal remains by age and sex, can be calculated and interpreted according to the ecological and cultural information available in each case. Building better models for bone pathology and bone epidemiology, as well as a more complex theoretical frame for paleoepidemiological studies, is a major task for the future that will require the incorporation of methods and technology from many areas, including the tools of molecular biology.
Abstract:
To assess the preferred methods to quit smoking among current smokers. Cross-sectional, population-based study conducted in Lausanne between 2003 and 2006 including 988 current smokers. Preference was assessed by questionnaire. Evidence-based (EB) methods were nicotine replacement, bupropion, and physician or group consultations; non-EB methods were acupuncture, hypnosis and autogenic training. EB methods were frequently (physician consultation: 48%, 95% confidence interval (45-51); nicotine replacement therapy: 35% (32-38)) or rarely (bupropion and group consultations: 13% (11-15)) preferred by the participants. Non-EB methods were preferred by a third (acupuncture: 33% (30-36)), a quarter (hypnosis: 26% (23-29)) or a seventh (autogenic training: 13% (11-15)) of respondents. On multivariate analysis, women preferred both EB and non-EB methods more frequently than men (odds ratio and 95% confidence interval: 1.46 (1.10-1.93) and 2.26 (1.72-2.96) for any EB and non-EB method, respectively). Preference for non-EB methods was higher among highly educated participants, while no such relationship was found for EB methods. Many smokers are unaware of the full variety of methods to quit smoking. Better information regarding these methods is necessary.
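For readers unfamiliar with how such estimates are reported, the sketch below shows how an unadjusted odds ratio and its Wald 95% confidence interval are computed from a 2x2 table; the counts are invented, and the study's published estimates were adjusted in a multivariate model, which this sketch does not reproduce.

```python
# Sketch: unadjusted odds ratio and Wald 95% confidence interval from a
# 2x2 table (preference for a method by sex). Counts are invented and the
# published estimates came from an adjusted multivariate model.
import math

# rows: women, men; columns: prefers method, does not prefer it
a, b = 260, 240   # women: prefer / do not prefer (hypothetical)
c, d = 210, 278   # men:   prefer / do not prefer (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```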
Abstract:
It was important to us to engage with as many students as possible throughout the process of developing a new name for the reformed junior cycle. In this vein, we used a wide variety of methods to engage with students in order to capture as many ideas as possible: text messaging, Facebook, Twitter, email and consultation sessions. We circulated posters to all schools via post and/or email, and contacted schools in catchment areas for the consultation sessions by phone. In our consultation sessions, we had discussions with the participating students about what the new junior cycle would be, closely guided by the content of “Towards a Framework for Junior Cycle” from the National Council for Curriculum and Assessment. In these sessions, students then gave feedback on what they thought of the reformed junior cycle, developed their own ideas, and identified what they thought should be reflected in the name of the reformed junior cycle.
Abstract:
BACKGROUND: The strength of the association between intensive care unit (ICU)-acquired nosocomial infections (NIs) and mortality might differ according to the methodological approach taken. OBJECTIVE: To assess the association between ICU-acquired NIs and mortality using the concept of population-attributable fraction (PAF) for patient deaths caused by ICU-acquired NIs in a large cohort of critically ill patients. SETTING: Eleven ICUs of a French university hospital. DESIGN: We analyzed surveillance data on ICU-acquired NIs collected prospectively during the period from 1995 through 2003. The primary outcome was mortality from ICU-acquired NI stratified by site of infection. A matched-pair, case-control study was performed. Each patient who died before ICU discharge was defined as a case patient, and each patient who survived to ICU discharge was defined as a control patient. The PAF was calculated after adjustment for confounders by use of conditional logistic regression analysis. RESULTS: Among 8,068 ICU patients, a total of 1,725 deceased patients were successfully matched with 1,725 control patients. The adjusted PAF due to ICU-acquired NI for patients who died before ICU discharge was 14.6% (95% confidence interval [CI], 14.4%-14.8%). Stratified by the type of infection, the PAF was 6.1% (95% CI, 5.7%-6.5%) for pulmonary infection, 3.2% (95% CI, 2.8%-3.5%) for central venous catheter infection, 1.7% (95% CI, 0.9%-2.5%) for bloodstream infection, and 0.0% (95% CI, -0.4% to 0.4%) for urinary tract infection. CONCLUSIONS: ICU-acquired NI had an important effect on mortality. However, the statistical association between ICU-acquired NI and mortality tended to be less pronounced in findings based on the PAF than in study findings based on estimates of relative risk. Therefore, the choice of methods does matter when the burden of NI needs to be assessed.
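A minimal sketch of the population-attributable fraction idea, using Miettinen's case-based formula rather than the conditional logistic regression used in the study; the input values below are illustrative, not the study's data.

```python
# Sketch of the population-attributable fraction (PAF) using Miettinen's
# case-based formula: PAF = p_c * (RR - 1) / RR, where p_c is the proportion
# of cases (deaths) exposed to the ICU-acquired infection and RR is the
# adjusted relative risk (often approximated by an odds ratio). Values are
# hypothetical; the study estimated the PAF with conditional logistic
# regression on matched pairs, which this sketch does not reproduce.
def attributable_fraction(p_cases_exposed: float, relative_risk: float) -> float:
    """Fraction of deaths attributable to the exposure among all deaths."""
    return p_cases_exposed * (relative_risk - 1.0) / relative_risk

p_exposed_among_deaths = 0.20   # hypothetical: 20% of deceased patients had an NI
adjusted_rr = 1.8               # hypothetical adjusted relative risk
print(f"PAF = {attributable_fraction(p_exposed_among_deaths, adjusted_rr):.1%}")
```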
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
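The paper describes an R library; as a language-neutral illustration of one of the log-ratio techniques it covers, the Python sketch below applies the centred log-ratio (clr) transform to hypothetical ceramic compositions and then runs an ordinary principal components analysis.

```python
# Minimal sketch of one log-ratio approach to compositional data: apply the
# centred log-ratio (clr) transform, then ordinary PCA. This illustrates the
# idea only; the paper describes a documented library of such methods in R.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Hypothetical oxide compositions for 50 ceramic sherds (rows sum to 1).
raw = rng.dirichlet(alpha=[10, 5, 3, 2, 1], size=50)

def clr(compositions):
    """Centred log-ratio transform: log of each part minus the row mean log."""
    logs = np.log(compositions)
    return logs - logs.mean(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(clr(raw))
print("first two principal component scores of the clr-transformed data:")
print(np.round(scores[:5], 3))
```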
Abstract:
Optimisation of reproductive investment is crucial for Darwinian fitness, and detailed long-term studies are especially suited to unravel reproductive allocation strategies. Allocation strategies depend on the timing of resource acquisition, the timing of resource allocation, and trade-offs between different life-history traits. A distinction can be made between capital breeders that fuel reproduction with stored resources and income breeders that use recently acquired resources. In capital breeders, but not in income breeders, energy allocation may be decoupled from energy acquisition. Here, we tested the influence of extrinsic (weather conditions) and intrinsic (female characteristics) factors during energy storage, vitellogenesis and early gestation on reproductive investment, including litter mass, litter size, offspring mass and the litter size and offspring mass trade-off. We used data from a long-term study of the viviparous lizard, Lacerta (Zootoca) vivipara. In terms of extrinsic factors, rainfall during vitellogenesis was positively correlated with litter size and mass, but temperature did not affect reproductive investment. With respect to intrinsic factors, litter size and mass were positively correlated with current body size and postpartum body condition of the previous year, but negatively with parturition date of the previous year. Offspring mass was negatively correlated with litter size, and the strength of this trade-off decreased with the degree of individual variation in resource acquisition, which confirms theoretical predictions. The combined effects of past intrinsic factors and current weather conditions suggest that common lizards combine both recently acquired and stored resources to fuel reproduction. The effect of past energy stores points to a trade-off between current and future reproduction.
Abstract:
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
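The key distinction above, between autophagosome numbers and flux through the pathway, can be summarized with a toy calculation. One commonly described strategy is to compare a marker of autophagosomes with and without a lysosomal degradation inhibitor; the inhibitor-dependent accumulation is a proxy for flux. The numbers below are hypothetical, and the guidelines themselves should be consulted for appropriate assays.

```python
# Sketch of the flux idea: compare an autophagosome marker with and without
# a lysosomal inhibitor. A large difference implies material is being turned
# over (flux); similar values despite many autophagosomes suggest a block in
# degradation. Marker units and values are hypothetical.
def autophagic_flux(marker_with_inhibitor: float, marker_without: float) -> float:
    """Inhibitor-dependent accumulation of the marker, a proxy for flux."""
    return marker_with_inhibitor - marker_without

high_turnover = autophagic_flux(marker_with_inhibitor=8.0, marker_without=2.0)
blocked = autophagic_flux(marker_with_inhibitor=8.2, marker_without=8.0)
print(f"induced condition: flux proxy = {high_turnover:.1f}")
print(f"blocked condition: flux proxy = {blocked:.1f} despite abundant autophagosomes")
```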
Abstract:
Summary: The origin of obesity, which has reached epidemic proportions, is complex and may be linked to differences in lifestyle and physical activity behaviour. Measuring individuals' physical activity behaviour towards their environment, and the distribution of physical activity in terms of type, volume, duration, frequency, intensity and energy expenditure, is of great importance. Nowadays, there is a lack of methods for accurate and objective assessment of physical activity and of individuals' physical activity behaviour. In order to complement research relating physical activity to obesity and related diseases, the first aim of this thesis was to develop a model for the objective identification of physical activity types in real-life conditions and the estimation of energy expenditure, based on a combination of two accelerometers and one GPS device. The model takes into account that a given activity can be performed in many different ways in real life. Daily activities could be classified into 8 categories, from sedentary to active physical activity, with 1-min resolution, and physical activity patterns determined. Energy expenditure could be predicted accurately, with an error below 10%. Furthermore, individuals' physical activity behaviour is an expression of individual choices and of their interaction with the neighbourhood environment. In a second study, we hypothesized that, in an environment characterized by inclines, obese individuals are tempted to avoid steep positive slopes and to decrease walking speed during spontaneous outdoor physical activity, as well as during prescribed, structured bouts of exercise. Finally, we characterized, by means of the developed model, the physical activity behaviour of obese individuals in a hilly urban environment. Quantifying how individuals tackle a hilly environment or avoid slopes in their everyday displacements should also be considered when prescribing extra walking in free-living conditions in order to increase physical activity.
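As a simplified illustration of the classification step described above (not the thesis' model), the sketch below trains a random forest on simulated per-minute accelerometer and GPS features; the feature set, the three activity classes and all values are invented, whereas the thesis distinguished 8 categories.

```python
# Sketch: classifying 1-minute windows of activity from accelerometer and GPS
# features with a random forest. Features, labels and data are simulated; this
# is not the model developed in the thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_windows = 2000
labels = rng.integers(0, 3, n_windows)            # 0 = sedentary, 1 = walking, 2 = cycling
# Hypothetical per-minute features: trunk acceleration mean, thigh acceleration
# variance, and GPS speed (m/s), each loosely related to the activity class.
features = np.column_stack([
    0.2 + 0.5 * labels + rng.normal(0, 0.1, n_windows),
    0.1 + 0.4 * labels + rng.normal(0, 0.1, n_windows),
    np.where(labels == 0, 0.0, 1.2 + 2.0 * labels) + rng.normal(0, 0.3, n_windows),
])

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy over 1-min windows: {clf.score(X_test, y_test):.2f}")
```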
Abstract:
MR connectomics is an emerging framework in neuroscience that combines diffusion MRI and whole-brain tractography methodologies with the analytical tools of network science. In the present work we review the current methods enabling structural connectivity mapping with MRI and show how such data can be used to infer new information about both brain structure and function. We also list the technical challenges that should be addressed in the future to achieve high-resolution maps of structural connectivity. Given the tremendous amount of data that will soon be accumulated, we discuss the new challenges that must be tackled in terms of methods for advanced network analysis and visualization, as well as data organization and distribution. This new framework is well suited to investigating key questions on brain complexity, and we try to foresee which fields will benefit most from these approaches.
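As an illustration of the "analytical tools of network science" mentioned above, the sketch below builds a weighted graph from a random structural connectivity matrix and computes a few standard metrics with networkx; the matrix, the threshold and the chosen metrics are arbitrary examples rather than a recommended pipeline.

```python
# Sketch of the network-analysis step in MR connectomics: treat a structural
# connectivity matrix (e.g., streamline counts between brain regions) as a
# weighted graph and compute standard graph metrics. The matrix here is random.
import numpy as np
import networkx as nx

rng = np.random.default_rng(11)
n_regions = 20
weights = rng.random((n_regions, n_regions))
conn = np.triu(weights, 1) + np.triu(weights, 1).T      # symmetric, zero diagonal
conn[conn < 0.6] = 0.0                                   # keep only stronger connections

G = nx.from_numpy_array(conn)                            # weighted undirected graph
degree = dict(G.degree(weight="weight"))
clustering = nx.average_clustering(G, weight="weight")
print(f"mean weighted degree: {np.mean(list(degree.values())):.2f}")
print(f"average weighted clustering coefficient: {clustering:.3f}")
if nx.is_connected(G):
    print(f"characteristic path length: {nx.average_shortest_path_length(G):.2f}")
```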