22 results for Scientific network evolution
Abstract:
We propose a procedure for analyzing and characterizing complex networks. We apply it to the social network constructed from email communications within a medium-sized university with about 1700 employees. Email networks provide an accurate and nonintrusive description of the flow of information within human organizations. Our results reveal the self-organization of the network into a state where the distribution of community sizes is self-similar. This suggests that a universal mechanism, responsible for the emergence of scaling in other self-organized complex systems such as river networks, could also be the underlying driving force in the formation and evolution of social networks.
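As a minimal illustration of the kind of analysis described above (a generic sketch, not the authors' actual procedure), the snippet below builds a toy graph standing in for an email network, detects communities with greedy modularity maximization, and tabulates community sizes; the edge list and choice of community detector are assumptions.

```python
# Hypothetical sketch: build a toy "email" graph, detect communities,
# and inspect the community-size distribution. Not the paper's method.
import collections
import networkx as nx

# Assumed toy input: (sender, recipient) pairs standing in for email logs.
emails = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3), (6, 7)]

G = nx.Graph()
G.add_edges_from(emails)  # one undirected edge per communicating pair

# Greedy modularity maximization as a generic community detector.
communities = nx.algorithms.community.greedy_modularity_communities(G)

# Tabulate how many communities have each size; on a real email network,
# the shape of this distribution is what a self-similarity analysis examines.
size_counts = collections.Counter(len(c) for c in communities)
for size, count in sorted(size_counts.items()):
    print(f"communities of size {size}: {count}")
```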
Abstract:
What determines which inputs are initially considered and eventually adopted in the production of new or improved goods? Why are some inputs much more prominent than others? We model the evolution of input linkages as a process where new producers first search for potentially useful inputs and then decide which ones to adopt. A new product initially draws a set of 'essential suppliers'. The search stage is then confined to the network neighborhood of the latter, i.e., to the inputs used by the essential suppliers. The adoption decision is driven by a tradeoff between the benefits accruing from input variety and the costs of input adoption. This has important implications for the number of forward linkages that a product (input variety) develops over time. Input diffusion is fostered by network centrality: an input that is initially represented in many network neighborhoods is subsequently more likely to be adopted. This mechanism also delivers a power-law distribution of forward linkages. Our predictions continue to hold when varieties are aggregated into sectors. We can thus test them using detailed sectoral US input-output tables. We show that initial network proximity of a sector in 1967 significantly increases the likelihood of adoption throughout the subsequent four decades. The same is true for rapid productivity growth in an input-producing sector. Our empirical results highlight two conditions for new products to become central nodes: initial network proximity to prospective adopters, and technological progress that reduces their relative price. Semiconductors met both conditions.
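The search-and-adopt mechanism can be illustrated with a small simulation (an interpretation of the model, not the authors' code): each new product draws a few 'essential suppliers', searches among the inputs those suppliers use, and adopts each candidate with a fixed probability. The number of entrants, suppliers per entrant, adoption probability, and seed economy below are arbitrary assumptions.

```python
# Hypothetical sketch of a search-and-adopt copying process for input linkages.
# Parameters and initialization are illustrative, not taken from the paper.
import random

random.seed(0)
N_PRODUCTS = 5000   # number of new products entering over time (assumed)
N_ESSENTIAL = 2     # essential suppliers drawn by each entrant (assumed)
P_ADOPT = 0.5       # probability of adopting a searched input (assumed)

# inputs_used[i] = set of input indices that product i uses.
inputs_used = [set(), {0}, {0, 1}]  # small seed economy (assumed)

for new_id in range(len(inputs_used), N_PRODUCTS):
    existing = range(len(inputs_used))
    essentials = random.sample(existing, k=min(N_ESSENTIAL, len(inputs_used)))
    adopted = set(essentials)  # the essential suppliers are adopted as inputs
    # Search stage: candidates are the inputs used by the essential suppliers.
    candidates = set().union(*(inputs_used[e] for e in essentials))
    # Adoption stage: keep each searched candidate with probability P_ADOPT.
    adopted |= {c for c in candidates if random.random() < P_ADOPT}
    inputs_used.append(adopted)

# Forward linkages: how many later products adopted each input.
forward = {}
for user in inputs_used:
    for inp in user:
        forward[inp] = forward.get(inp, 0) + 1

top = sorted(forward.values(), reverse=True)[:10]
print("ten largest forward-linkage counts:", top)  # heavy-tailed in this toy run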
Abstract:
Background: Reconstruction of gene and/or protein networks from automated analysis of the literature is one of the current targets of text mining in biomedical research. Some user-friendly tools already perform this analysis on precompiled databases of abstracts of scientific papers. Other tools allow expert users to elaborate and analyze the full content of a corpus of scientific documents. However, to our knowledge, no user-friendly tool is available that simultaneously analyzes the latest set of scientific documents available online and reconstructs the set of genes referenced in those documents. Results: This article presents such a tool, Biblio-MetReS, and compares its functioning and results to those of other widely used user-friendly applications (iHOP, STRING). Under similar conditions, Biblio-MetReS creates networks that are comparable to those of the other user-friendly tools. Furthermore, analysis of full-text documents provides more complete reconstructions than those that result from using only the abstract of the document. Conclusions: Literature-based automated network reconstruction is still far from providing complete reconstructions of molecular networks. However, its value as an auxiliary tool is high and it will increase as standards for reporting biological entities and relationships become more widely accepted and enforced. Biblio-MetReS is an application that can be downloaded from http://metres.udl.cat/. It provides an easy-to-use environment for researchers to reconstruct their networks of interest from an always up-to-date set of scientific documents.
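As a rough illustration of what literature-based network reconstruction involves (a generic co-occurrence sketch, not the Biblio-MetReS algorithm), the snippet below links gene names that are mentioned together in the same document; the document list and gene dictionary are made-up placeholders.

```python
# Hypothetical sketch: link genes that co-occur in the same document.
# This is a generic co-occurrence heuristic, not the Biblio-MetReS algorithm.
from itertools import combinations
from collections import Counter

# Placeholder corpus and gene dictionary (illustrative only).
documents = [
    "TP53 interacts with MDM2 to regulate apoptosis.",
    "MDM2 overexpression suppresses TP53 activity; BRCA1 is unaffected.",
    "BRCA1 and TP53 mutations co-occur in some tumours.",
]
genes = {"TP53", "MDM2", "BRCA1"}

edge_counts = Counter()
for doc in documents:
    mentioned = sorted(g for g in genes if g in doc)  # crude exact-match lookup
    for pair in combinations(mentioned, 2):           # every co-mentioned pair
        edge_counts[pair] += 1

# Each counted pair becomes a candidate edge in the reconstructed network.
for (a, b), n in edge_counts.most_common():
    print(f"{a} -- {b}: co-mentioned in {n} document(s)")
```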
Abstract:
This article explores how to enrich scaffolding processes among university students using specific Computer Supported Collaborative Learning (CSCL) software. A longitudinal case study was designed in which eighteen students participated in a twelve-month learning project. During this period the students followed an instructional process, using the CSCL software to support and improve their interaction processes, in particular the processes of giving and receiving assistance. Our research analyzed the evolution of the quality of the students’ interaction processes and the students’ learning results. The effects of the students’ participation in the CSCL environment are described in terms of their development of affective, cognitive and metacognitive learning processes. Our results showed that the specific activities that students performed while working with the CSCL system triggered specific learning processes, which had a positive impact on their learning results.
Abstract:
We are progressively immersed in technology to such an extent that in our everyday life we are and do what technology allows us to be and to do. In this process, cyborgs and robots constitute elements that we analyze from a number of technoscientific and philosophical approaches. Additionally, we propose a new concept, the GEH (Genetically Engineered Human), as a potential new element of the social imaginary: the human being enhanced by genetic engineering in the broad sense (that is, changing many genes through genetic engineering, modifying the genome, cloning, and so on). If our aspirations as humans pass through technology, and in particular through cyborgs, robots and GEH, the bidirectional links between these theoretical or real entities and our personal identities will become more and more substantial in our society.
Abstract:
Although approximately 50% of Down Syndrome (DS) patients have heart abnormalities, they exhibit an overprotection against cardiac abnormalities related to the connective tissue, for example a lower risk of coronary artery disease. A recent study reported the case of a person affected by DS who carried mutations in FBN1, the gene causative for a connective tissue disorder called Marfan Syndrome (MFS). The fact that the person did not have any cardiac alterations suggested compensation effects due to DS. This observation is supported by a previous DS meta-analysis at the molecular level in which we found an overall upregulation of FBN1 (which is usually downregulated in MFS). Additionally, that result was cross-validated with independent expression data from DS heart tissue. The aim of this work is to elucidate the role of FBN1 in DS and to establish a molecular link to MFS and MFS-related syndromes using a computational approach. To that end, we conducted different analytical approaches over two DS studies (our previous meta-analysis and independent expression data from DS heart tissue) and revealed expression alterations in the FBN1 interaction network, in FBN1 co-expressed genes and in FBN1-related pathways. After merging the significant results from the different datasets with a Bayesian approach, we prioritized 85 genes that were able to distinguish control from DS cases. We further found evidence that several of these genes (47%), such as FBN1, DCN, and COL1A2, are dysregulated in MFS and MFS-related diseases. Consequently, we encourage the scientific community to take into account FBN1 and its related network in the study of DS cardiovascular characteristics.
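The prioritization step above merges evidence from two DS expression studies with a Bayesian approach, which is not reproduced here. As a generic stand-in only, the sketch below combines per-gene p-values from two hypothetical datasets with Fisher's method and ranks genes by the combined value; the gene names echo those in the abstract, but all p-values are made up.

```python
# Hypothetical sketch: rank genes by combining p-values from two datasets.
# Fisher's method is a generic stand-in for the paper's Bayesian merging.
from scipy.stats import combine_pvalues

# Made-up per-gene p-values from two independent DS expression analyses.
dataset_a = {"FBN1": 0.002, "DCN": 0.04, "COL1A2": 0.01, "GAPDH": 0.60}
dataset_b = {"FBN1": 0.005, "DCN": 0.03, "COL1A2": 0.20, "GAPDH": 0.75}

combined = {}
for gene in dataset_a:
    stat, p = combine_pvalues([dataset_a[gene], dataset_b[gene]], method="fisher")
    combined[gene] = p

# Genes with the smallest combined p-value are prioritized first.
for gene, p in sorted(combined.items(), key=lambda kv: kv[1]):
    print(f"{gene}: combined p = {p:.4g}")
```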
Abstract:
Real-time predictions are an indispensable requirement for traffic management, making it possible to evaluate the effects of the different available strategies or policies. Combining predictions of the network state with the evaluation of different traffic management strategies in the short-term future allows system managers to anticipate the effects of traffic control strategies ahead of time in order to mitigate congestion. This paper presents the current framework of decision support systems for traffic management based on short- and medium-term predictions, and includes some reflections on their likely evolution, based on current scientific research and on the availability of new types of data and their associated methodologies.
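As an illustration of the kind of short-term prediction such decision support systems build on (a minimal sketch, not any specific system described in the paper), the code below produces one-step-ahead forecasts of a traffic count series with simple exponential smoothing; the series and the smoothing factor are assumed.

```python
# Hypothetical sketch: one-step-ahead traffic-flow forecast via
# simple exponential smoothing. Series and alpha are illustrative.
def exponential_smoothing_forecast(series, alpha=0.3):
    """Return a forecast for the next point after each observed point."""
    forecasts = [series[0]]  # initialize with the first observation
    for observation in series[1:]:
        # The new forecast blends the latest observation with the previous forecast.
        forecasts.append(alpha * observation + (1 - alpha) * forecasts[-1])
    return forecasts

# Assumed 5-minute vehicle counts on a single link.
counts = [120, 135, 150, 160, 158, 170, 190, 185, 200, 210]
predictions = exponential_smoothing_forecast(counts, alpha=0.4)
print("next-interval forecast:", round(predictions[-1], 1))
```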