42 results for Joint research projects


Relevance:

80.00%

Publisher:

Abstract:

Food intake increases to a varying extent during pregnancy to provide extra energy for the growing fetus. Measuring the respiratory quotient (RQ) during the course of pregnancy (by quantifying O2 consumption and CO2 production with indirect calorimetry) could potentially be useful, since it gives an insight into the evolution of the proportion of carbohydrate vs. fat oxidized during pregnancy and thus allows recommendations on macronutrients for achieving a balanced (or slightly positive) substrate intake. A systematic search of the literature for papers reporting RQ changes during normal pregnancy identified 10 papers reporting original research. The existing evidence supports an increased RQ of varying magnitude in the third trimester of pregnancy, while the discrepant results reported for the first and second trimesters (i.e. no increase in RQ), explained by limited statistical power (small sample sizes) or fragmentary data, preclude safe conclusions about the evolution of RQ during early pregnancy. From a clinical point of view, measuring RQ during pregnancy not only requires sophisticated and costly indirect calorimeters but also appears of limited value outside pure research projects, because of several confounding variables: (1) spontaneous changes in food intake and food composition during the course of pregnancy (which influence RQ); (2) inter-individual differences in weight gain and in the composition of tissue growth; and (3) technical factors, notwithstanding the relatively small contribution of fetal metabolism per se (RQ close to 1.0) to the overall metabolism of the pregnant mother.
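
The quantities involved can be illustrated with a short computation. This is a sketch under stated assumptions: the gas-exchange values are hypothetical, and the carbohydrate fraction uses the common non-protein approximation of interpolating linearly between the pure-fat (about 0.707) and pure-carbohydrate (1.0) quotients.

```python
def rq(vco2, vo2):
    """Respiratory quotient: CO2 produced over O2 consumed (same units, e.g. L/min)."""
    return vco2 / vo2

def carb_fraction(nprq, fat_rq=0.707, carb_rq=1.0):
    """Approximate share of non-protein energy derived from carbohydrate,
    interpolated linearly between the pure-fat and pure-carbohydrate RQs."""
    return (nprq - fat_rq) / (carb_rq - fat_rq)

# Hypothetical gas-exchange measurements (not data from the review):
q = rq(vco2=0.20, vo2=0.25)   # RQ of 0.80
share = carb_fraction(q)      # roughly a third of oxidation from carbohydrate
print(round(q, 2), round(share, 2))
```

In this scheme, a rising RQ across trimesters, as the reviewed papers report for late pregnancy, would correspond to a growing carbohydrate share of substrate oxidation.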

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Synthesizing research evidence using systematic and rigorous methods has become a key feature of evidence-based medicine and knowledge translation. Systematic reviews (SRs) may or may not include a meta-analysis, depending on the suitability of the available data. They are often criticised as 'secondary research' and denied the status of original research. Scientific journals play an important role in the publication process. How they appraise a given type of research influences the status of that research in the scientific community. We investigated the attitudes of editors of core clinical journals towards SRs and their value for publication.
METHODS: We identified the 118 journals labelled as "core clinical journals" by the National Library of Medicine, USA in April 2009. The journals' editors were surveyed by email in 2009 and asked whether they considered SRs to be original research projects; whether they published SRs; and for which section of the journal they would consider an SR manuscript.
RESULTS: The editors of 65 journals (55%) responded. Most respondents considered SRs to be original research (71%) and almost all journals (93%) published SRs. Several editors regarded the use of Cochrane methodology or a meta-analysis as quality criteria; for some respondents these criteria were prerequisites for considering SRs to be original research. Journals placed SRs in various sections such as "Review" or "Feature article". Characterization of the non-responding journals showed that about two thirds do publish systematic reviews.
DISCUSSION: Currently, the editors of most core clinical journals consider SRs to be original research. Our findings are limited by a non-responder rate of 45%. Individual comments suggest that this is a grey area and that attitudes differ widely. A debate about the definition of 'original research' in the context of SRs is warranted.

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: To identify characteristics of consultations that do not conform to the traditionally understood communication 'dyad', in order to highlight implications for medical education and develop a reflective 'toolkit' for use by medical practitioners and educators in the analysis of consultations. DESIGN: A series of interdisciplinary research workshops spanning 12 months explored the social impact of globalisation and computerisation on the clinical consultation, focusing specifically on contemporary challenges to the clinician-patient dyad. Researchers presented detailed case studies of consultations, taken from their recent research projects. Drawing on concepts from applied sociolinguistics, further analysis of selected case studies prompted the identification of key emergent themes. SETTING: University departments in the UK and Switzerland. PARTICIPANTS: Six researchers with backgrounds in medicine, applied linguistics, sociolinguistics and medical education. One workshop was also attended by PhD students conducting research on healthcare interactions. RESULTS: The contemporary consultation is characterised by a multiplicity of voices. Incorporation of additional voices in the consultation creates new forms of order (and disorder) in the interaction. The roles of 'clinician' and 'patient' are blurred as they become increasingly distributed between different participants. These new consultation arrangements make new demands on clinicians, which lie beyond the scope of most educational programmes for clinical communication. CONCLUSIONS: The consultation is changing. Traditional consultation models that assume a 'dyadic' consultation do not adequately incorporate the realities of many contemporary consultations. A tension emerges between the need to manage consultations in a 'super-diverse' multilingual society and the increasing requirement for standardised, protocol-driven approaches to care prompted by computer use. This tension between standardisation and flexibility needs to be addressed in educational contexts. Drawing on concepts from applied sociolinguistics and the findings of these research observations, the authors offer a reflective 'toolkit' of questions to ask of the consultation in the context of enquiry-based learning.

Relevance:

80.00%

Publisher:

Abstract:

Research projects aimed at proposing fingerprint statistical models based on the likelihood ratio framework have shown that low-quality finger impressions left at crime scenes may have significant evidential value. These impressions are currently either not recovered, considered to be of no value when first analyzed by fingerprint examiners, or lead to inconclusive results when compared to control prints. There are growing concerns within the fingerprint community that recovering and examining these low-quality impressions will result in a significant increase in the workload of fingerprint units and, ultimately, in the number of backlogged cases. This study was designed to measure the number of impressions currently not recovered or not considered for examination, and to assess the usefulness of these impressions in terms of the number of additional detections that would result from their examination.
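
In the likelihood ratio framework the abstract refers to, the evidential value of a mark is the ratio of the probability of the observed features under the prosecution proposition to their probability under the defence proposition. A minimal sketch with illustrative, hypothetical probabilities (not values from any fingerprint model):

```python
def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """Evidential value of findings E: how much more probable they are
    if the suspect left the mark (Hp) than if an unknown person did (Hd)."""
    return p_e_given_hp / p_e_given_hd

# Hypothetical values for a low-quality mark with few clear minutiae:
lr = likelihood_ratio(p_e_given_hp=0.6, p_e_given_hd=0.001)
print(round(lr))  # an LR well above 1 supports Hp even for a poor-quality mark
```

The point made by such models is that even a sparse feature set can yield an LR far from 1, i.e. non-negligible evidential value.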

Relevance:

80.00%

Publisher:

Abstract:

While it has often been stated that the prevalence of schizophrenia is the same around the world, many publications have shown that this illness is twice as frequent in urban areas. Although many hypotheses have been proposed, the mechanisms explaining this phenomenon are still unknown. Besides potential biological explanations, a number of hypotheses emerging from the social sciences have recently enriched the debate. This article reviews the literature related to this issue and describes the development of a research project conducted in collaboration between the Institute of Geography at the University of Neuchâtel, the Department of Psychiatry at the University of Lausanne and the Swiss branch of ISPS, a society promoting the psychological treatment of schizophrenia and other psychoses.

Relevance:

80.00%

Publisher:

Abstract:

Many research projects in the life sciences require purified, biologically active recombinant protein. In addition, different formats of a given protein may be needed at different steps of experimental studies. Thus, the number of protein variants to be expressed and purified in short periods of time can expand very quickly. We have therefore developed a rapid and flexible expression system, based on previously described episomal vector replication, to generate semi-stable cell pools that secrete recombinant proteins. We cultured these pools in serum-containing medium to avoid time-consuming adaptation of cells to serum-free conditions, to maintain cell viability and to reuse the cultures for multiple rounds of protein production. Accordingly, an efficient single-step affinity process to purify recombinant proteins from serum-containing medium was optimized. Furthermore, a series of multi-cistronic vectors were designed to enable simultaneous expression of proteins and their biotinylation in vivo, as well as fast selection of protein-expressing cell pools. Combining these improved procedures and innovative steps, exemplified with seven cytokines and cytokine receptors, we were able to produce biologically active, endotoxin-free recombinant protein at the milligram scale in 4-6 weeks from molecular cloning to protein purification.

Relevance:

80.00%

Publisher:

Abstract:

In IVF, around 70% of embryos fail to implant. Often more than one embryo is transferred in order to enhance the chances of pregnancy, but this comes at the price of an increased risk of multiple pregnancy. With the aim of increasing the success rate with a single embryo, research projects on prognostic factors of embryo viability have been initiated, but no marker has found routine clinical application to date. Effects of soluble human leukocyte antigen-G (sHLA-G) on both NK cell activity and the Th1/Th2 cytokine balance suggest a role in the embryo implantation process, but reports on the relevance of sHLA-G measurements in embryo culture medium and in follicular fluid (FF) have been inconsistent to date. In this study, we investigated the potential of sHLA-G for predicting the achievement of a pregnancy after IVF-ICSI in a large number of patients (n = 221). sHLA-G was determined in media and in FF by ELISA. In both FF and embryo medium, no significant differences in sHLA-G concentrations were observed between the "pregnancy" and "implantation failure" groups, or between "ongoing" versus "miscarried" pregnancies. Our results do not favour routine sHLA-G determination in either the FF or embryo-conditioned media with the currently available assay technology.

Relevance:

80.00%

Publisher:

Abstract:

The National Center of Competence in Research project "SYNAPSY" aims at identifying mechanisms of psychiatric and cognitive disorders, in order to improve the understanding of the genesis of such pathologies and to promote the development of better diagnostic tools and of new therapeutic approaches. It provides an excellent opportunity for clinical psychiatrists and neuroscientists to develop a synergic mode of collaboration. On the basis of questions stemming from clinical practice, and within the frame of patient cohorts, various research projects in neuroscience should lead to progress that may have a considerable impact on clinical practice.

Relevance:

80.00%

Publisher:

Abstract:

Abstract: The aim of this thesis was to investigate how the presence of multiple queens (polygyny) affects social organization in colonies of the ant Formica exsecta. This is important because polygyny results in reduced relatedness among colony members and therefore represents a potential paradox for altruistic cooperation as explained by inclusive fitness theory. The reason for this is that workers in polygynous colonies no longer rear only their siblings (high inclusive fitness gain) but also more distantly related or even unrelated brood (low or no inclusive fitness gain). All research projects conducted in this thesis are novel and significant contributions to the understanding of the social evolution of insect societies. We used a mixture of experimental and observational methodologies in laboratory and field colonies of F. exsecta to examine four important aspects of social life that are impacted by polygyny. First, we investigated the influence of queen number on colony sex allocation and found that the number of queens present in a colony significantly affects colony sex ratio investment. The data were consistent with the queen-replenishment hypothesis, which is based on the observation that newly mated queens are often recruited back to their parental nest. According to this theory, colonies containing many queens should produce only males, owing to local resource competition (i.e. related queens compete for common resources), whereas colonies hosting few queens benefit most from producing new queens to ensure colony survival. Second, we examined how reproduction is partitioned among nestmate queens. We detected a novel pattern of reproductive partitioning whereby a high proportion of queens were completely specialized in the production of only a subset of the offspring classes produced within a colony, which might translate into great differences in reproductive success between queens. Third, we demonstrated that F. exsecta workers indiscriminately reared highly related and unrelated brood, although such nepotistic behaviour (preferential rearing of relatives) would be predicted by inclusive fitness theory. The absence of nepotism is probably best explained by its negative effects on overall colony efficiency. Finally, we conducted a detailed population genetic analysis, which revealed that the genetic population structure is different for queens and workers. Our data were best explained by queens forming family-based groups (a multicolonial population structure), whereas workers from several nests seemed to be grouped into larger units (a unicolonial population structure), with workers moving freely between neighbouring nests. Altogether, the presented work significantly increases our understanding of the complex organization of polygynous social insect colonies and shows how an important life history trait such as queen number affects social organization at various levels.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Meta-analyses are particularly vulnerable to the effects of publication bias. Despite methodologists' best efforts to locate all evidence for a given topic, even the most comprehensive searches are likely to miss unpublished studies and studies that are published in the gray literature only. If the results of the missing studies differ systematically from the published ones, a meta-analysis will be biased, with an inaccurate assessment of the intervention's effects. As part of the OPEN project (http://www.open-project.eu) we will conduct a systematic review with the following objectives: (1) to assess the impact of studies that are not published or are published in the gray literature on pooled effect estimates in meta-analyses (quantitative measure); and (2) to assess whether the inclusion of unpublished studies or studies published in the gray literature leads to different conclusions in meta-analyses (qualitative measure).
METHODS/DESIGN: Inclusion criteria: methodological research projects on a cohort of meta-analyses that compare the effect of the inclusion or exclusion of unpublished studies or studies published in the gray literature. Literature search: to identify relevant research projects we will conduct electronic searches in Medline, Embase and The Cochrane Library; check reference lists; and contact experts. Outcomes: (1) the extent to which the effect estimate in a meta-analysis changes with the inclusion or exclusion of studies that were not published or were published in the gray literature; and (2) the extent to which the inclusion of unpublished studies impacts the meta-analyses' conclusions. Data collection: information will be collected on the area of health care; the number of meta-analyses included in the methodological research project; the number of studies included in the meta-analyses; the number of study participants; the number and type of unpublished studies, studies published in the gray literature and published studies; the sources used to retrieve studies that are unpublished, published in the gray literature, or commercially published; and the validity of the methodological research project. Data synthesis: data synthesis will involve descriptive and statistical summaries of the findings of the included methodological research projects.
DISCUSSION: Results are expected to be publicly available in the middle of 2013.
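
The planned quantitative measure, how a pooled estimate moves when unpublished studies are added, can be sketched with a fixed-effect inverse-variance meta-analysis. All numbers below are hypothetical log odds ratios, not data from the OPEN project:

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three published trials, plus one unpublished trial with a near-null effect:
published_effects, published_vars = [-0.50, -0.40, -0.60], [0.04, 0.05, 0.06]
all_effects = published_effects + [-0.05]
all_vars = published_vars + [0.05]

pub, _ = pooled_effect(published_effects, published_vars)
full, _ = pooled_effect(all_effects, all_vars)
# Adding the unpublished near-null trial pulls the pooled estimate toward zero.
print(round(pub, 3), round(full, 3))
```

Comparing the two pooled values (and whether any significance threshold is crossed) is exactly the quantitative/qualitative distinction the protocol draws.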

Relevance:

80.00%

Publisher:

Abstract:

While the development of early psychosis intervention programs has improved the outcome of such disorders, primary prevention strategies are still out of reach. The elaboration, over the last 15 years, of scales and criteria to identify populations at high risk for psychosis is real progress, but their low specificity is still a major obstacle to their use outside of research projects. For this reason, even if "ultra high risk" subjects present with real psychiatric disorders and sometimes a significant decrease in functioning level, the fact that only a small proportion will eventually develop full-blown psychosis will probably lead to the rejection of a "psychosis risk syndrome" from the future DSM-V classification.
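
Why low specificity is such an obstacle becomes clear from the positive predictive value: when transition to psychosis is uncommon in the screened population, even a reasonably accurate instrument flags mostly people who will never convert. The figures below are illustrative assumptions, not values from the at-risk literature:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screen really precedes transition to psychosis."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 80% sensitivity, 70% specificity, 5% transition rate.
ppv = positive_predictive_value(0.8, 0.7, 0.05)
print(round(ppv, 2))  # only a small fraction of positives actually convert
```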

Relevance:

80.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies.
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps in understanding the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially in the case of emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for realizing fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydro-geological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
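
The GRNN used for automatic mapping is, in essence, a Gaussian-kernel weighted average of the training observations (the Nadaraya-Watson estimator), with a single bandwidth parameter to tune. A minimal sketch on hypothetical measurements, not the SIC 2004 data:

```python
import math

def grnn_predict(train_xy, train_z, query, sigma=1.0):
    """GRNN prediction at one location: Gaussian-kernel weighted
    average of the training values (Nadaraya-Watson estimator)."""
    num = den = 0.0
    for (x, y), z in zip(train_xy, train_z):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += w * z
        den += w
    return num / den

# Hypothetical measurements at the corners of a unit square:
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
values = [1.0, 2.0, 3.0, 4.0]
center = grnn_predict(points, values, (0.5, 0.5), sigma=0.5)  # equidistant -> plain mean
near_origin = grnn_predict(points, values, (0.1, 0.1), sigma=0.5)
print(round(center, 2), round(near_origin, 2))
```

A small sigma localizes the estimate around nearby samples; a large sigma smooths toward the global mean, which is why bandwidth selection is the key step in automatic mapping.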

Relevance:

80.00%

Publisher:

Abstract:

Nanotechnology encompasses the design, characterisation, production and application of materials and systems by controlling shape and size at the nanoscale (nanometres). Nanomaterials may differ from other materials because of their relatively large specific surface area, such that surface properties become particularly important. There has been rapid growth in investment in nanotechnology by both the public and private sectors worldwide. In the EU, nanotechnology is expected to become an important strategic contributor to achieving economic gain and societal and individual benefits. At the same time there is continuing scientific uncertainty and controversy about the safety of nanomaterials. It is important to ensure that timely policy development takes this into consideration. Uncertainty about safety may lead to polarised public debate and to business unwillingness to invest further. A clear regulatory framework to address potential health and environmental impacts, within the wider context of evaluating and communicating the benefit-risk balance, must be a core part of Europe's integrated efforts for nanotechnology innovation. While a number of studies have been carried out on the effect of environmental nanoparticles, e.g. from combustion processes, on human health, there is as yet no generally accepted paradigm for the safety assessment of nanomaterials in consumer and other products. Therefore, a working group was established to consider issues relating to the possible impact of nanomaterials on human health, focussing specifically on engineered nanomaterials. This represents the first joint initiative between EASAC and the Joint Research Centre of the European Commission. 
The working group was given the remit to describe the state of the art of benefits and potential risks and the current methods for safety assessment; to evaluate their relevance; to identify knowledge gaps in studying the safety of current nanomaterials; and to make recommendations on priorities for nanomaterial research and the regulatory framework. This report focuses on key principles and issues, cross-referencing other sources for detailed information, rather than attempting a comprehensive account of the science. The focus is on human health, although environmental effects are also discussed when directly relevant to health.
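
The opening point about specific surface area can be made quantitative: for monodisperse spheres, surface area per unit mass scales inversely with diameter, SSA = 6 / (rho * d). A sketch with an assumed density (roughly that of amorphous silica; the figures are illustrative, not from the report):

```python
def specific_surface_area(diameter_nm, density_g_cm3):
    """Specific surface area (m^2/g) of monodisperse spheres: SSA = 6 / (rho * d)."""
    d_m = diameter_nm * 1e-9        # diameter in metres
    rho = density_g_cm3 * 1e3       # density in kg/m^3
    return 6.0 / (rho * d_m) / 1e3  # m^2/kg -> m^2/g

# Shrinking particles from 1 um to 10 nm multiplies the area per gram by 100:
for d_nm in (1000, 100, 10):
    print(d_nm, "nm ->", round(specific_surface_area(d_nm, 2.2), 1), "m^2/g")
```

This inverse scaling is why surface-driven properties (reactivity, adsorption, biological interaction) dominate at the nanoscale.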

Relevance:

80.00%

Publisher:

Relevance:

80.00%

Publisher:

Abstract:

Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives for enabling research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and a novel platform for protein function analysis, FuncNet.