937 results for INFORMATION AND COMPUTING SCIENCES
Abstract:
This is an exploratory study that aims, on the one hand, to examine in more detail how children between 12 and 16 years of age use different audiovisual technologies, what they feel and think when using them, and whom they like to speak to about such experiences. On the other hand, we look more deeply into the interactions between adults and children, particularly between parents and their children, in relation to these technologies when children use them at home or in other places. We analysed responses to questionnaires with several common items, administered separately to parents and children. Children's responses reflect a considerable level of dissatisfaction when talking with different adults about media activities. Our findings support the thesis that more and more children socialise through new information and communication technologies with little or no recourse to adult criteria, giving rise to the emergence of specific children's cultures. Cross-referencing parents' responses with those of their own children shows which aspects of media reality adults overestimate or underestimate in comparison to children, and to what degree certain judgements coincide or differ between generations. The results can be applied to improving relations between adults and adolescents, taking advantage of adolescents' strong motivation to engage in activities using audiovisual media.
Abstract:
Reliable information is a crucial factor influencing decision-making and, thus, fitness in all animals. A common source of information comes from inadvertent cues produced by the behavior of conspecifics. Here we use a system of experimental evolution with robots foraging in an arena containing a food source to study how communication strategies can evolve to regulate information provided by such cues. The robots could produce information by emitting blue light, which the other robots could perceive with their cameras. Over the first few generations, the robots quickly evolved to successfully locate the food, while emitting light randomly. This behavior resulted in a high intensity of light near food, which provided social information allowing other robots to more rapidly find the food. Because robots were competing for food, they were quickly selected to conceal this information. However, they never completely ceased to produce information. Detailed analyses revealed that this somewhat surprising result was due to the strength of selection on suppressing information declining concomitantly with the reduction in information content. Accordingly, a stable equilibrium with low information and considerable variation in communicative behaviors was attained by mutation-selection balance. Because a similar coevolutionary process should be common in natural systems, this may explain why communicative strategies are so variable in many animal species.
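The equilibrium described above, where weak residual signalling persists because selection against it fades as the signal loses informational value, can be illustrated with a deliberately simple simulation. This is a hedged sketch, not the authors' robot controllers or evolutionary protocol, which the abstract does not specify; the fitness function, the benefit and cost coefficients, and the population, generation and mutation settings are all illustrative assumptions.

```python
import random

POP, GENS, MUT = 200, 300, 0.05  # illustrative settings

def fitness(p_emit, mean_emit):
    # Foraging benefit from social information scales with how much light
    # the rest of the population emits near food (illustrative assumption).
    social_gain = 0.5 * mean_emit
    # Cost of emitting: rivals exploit your signal, but only insofar as the
    # signal still carries information; modelled here as proportional to the
    # population's overall emission level (illustrative assumption).
    own_cost = 0.8 * p_emit * mean_emit
    return 1.0 + social_gain - own_cost

pop = [random.random() for _ in range(POP)]          # emission probabilities
for gen in range(GENS):
    mean_emit = sum(pop) / POP
    fits = [fitness(p, mean_emit) for p in pop]
    # Fitness-proportional selection with Gaussian mutation, clipped to [0, 1].
    pop = [min(1.0, max(0.0, random.choices(pop, weights=fits)[0]
                        + random.gauss(0.0, MUT))) for _ in range(POP)]

mean_emit = sum(pop) / POP
var_emit = sum((p - mean_emit) ** 2 for p in pop) / POP
print(f"mean emission {mean_emit:.3f}, variance {var_emit:.3f}")
```

With these toy settings, the mean emission probability typically drops quickly and then stalls at a low but nonzero value with persistent variance, the qualitative pattern the abstract describes.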
Abstract:
One of the most relevant difficulties faced by first-year undergraduate students is settling into the educational environment of universities. This paper presents a case study that proposes a computer-assisted collaborative experience designed to help students in their transition from high school to university. This is done by facilitating their first contact with the campus and its services, the university community, and its methodologies and activities. The experience combines individual and collaborative activities, conducted in and out of the classroom, structured following the Jigsaw Collaborative Learning Flow Pattern. A specific environment including portable technologies with network and computer applications has been developed to support and facilitate the orchestration of a flow of learning activities into a single integrated learning setting. The result is a Computer-Supported Collaborative Blended Learning scenario, which has been evaluated with first-year university students of the degrees in Software and Audiovisual Engineering within the subject Introduction to Information and Communications Technologies. The findings reveal that the scenario significantly improves students' interest in their studies and their understanding of the campus and the services provided. The environment also proved to be an innovative approach to successfully supporting the heterogeneous activities conducted by both teachers and students during the scenario. This paper introduces the goals and context of the case study, describes how the technology was employed to conduct the learning scenario, and presents the evaluation methods and the main results of the experience.
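The Jigsaw Collaborative Learning Flow Pattern mentioned above structures a class in two phases: students first work in "expert" groups, each focused on one subtopic, and are then regrouped so that every "jigsaw" group contains one expert per subtopic. The sketch below shows that grouping logic only; it is not the paper's orchestration environment, and the student identifiers and topic names are made up for illustration.

```python
import random

def jigsaw_groups(students, topics, seed=0):
    """Minimal sketch of the generic Jigsaw pattern: phase 1 assigns each
    student to an expert group for one topic; phase 2 builds jigsaw groups
    containing one expert per topic."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    expert = {t: shuffled[i::len(topics)] for i, t in enumerate(topics)}
    size = min(len(members) for members in expert.values())
    jigsaw = [[expert[t][k] for t in topics] for k in range(size)]
    return expert, jigsaw

# Hypothetical class of 12 students and three campus-related subtopics.
students = [f"s{i:02d}" for i in range(12)]
experts, jigsaw = jigsaw_groups(students, ["campus", "services", "community"])
print(experts)
print(jigsaw)
```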
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly and securely access and work on data distributed across multiple sites, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
Independent Auditor's Reports, Basic Financial Statements, Supplementary Information and Schedule of Findings
Abstract:
Rural Iowa Waste Management Association Independent Auditor's Reports, Financial Statement, Required Supplementary Information and Schedule of Findings for the Period Ending June 30, 2004.
Abstract:
Drawing on Social Representations Theory, this study investigates focalisation and anchoring during the diffusion of information concerning the Large Hadron Collider (LHC), the particle accelerator at the European Organisation for Nuclear Research (CERN). We hypothesised that people focus on striking elements of the message, abandoning others, that the nature of the initial information affects diffusion of information, and that information is anchored in prior attitudes toward CERN and science. A serial reproduction experiment with two generations and four chains of reproduction diffusing controversial versus descriptive information about the LHC shows a reduction of information through generations, the persistence of terminology regarding the controversy and a decrease of other elements for participants exposed to polemical information. Concerning anchoring, positive attitudes toward CERN and science increase the use of expert terminology unrelated to the controversy. This research highlights the relevance of a social representational approach in the public understanding of science.
Abstract:
In some markets, such as the market for drugs or for financial services, sellers have better information than buyers regarding the match between the buyer's needs and the good's actual characteristics. Depending on the market structure, this may lead to conflicts of interest and/or the underprovision of information by the seller. This paper studies this issue in the market for financial services. The analysis presents a new model of competition between banks, as banks' price competition influences the ensuing incentives for truthful information revelation. We compare two different firm structures: specialized banking, where financial institutions provide a unique financial product, and one-stop banking, where a financial institution is able to provide several financial products which are horizontally differentiated. We show first that, although conflicts of interest may prevent information disclosure under monopoly, competition forces full information provision for sufficiently high reputation costs. Second, in the presence of market power, one-stop banks will use information strategically to increase product differentiation and therefore will always provide reliable information and charge higher prices than specialized banks, thus providing a new justification for the creation of one-stop banks. Finally, we show that, if independent financial advisers are able to provide reliable information, this increases product differentiation and therefore market power, so that it is in the interest of financial intermediaries to promote external independent financial advice.
Abstract:
Previous works on asymmetric information in asset markets tend to focus on the potential gains in the asset market itself. We focus on the market for information and conduct an experimental study to explore, in a game of finite but uncertain duration, whether reputation can be an effective constraint on deliberate misinformation. At the beginning of each period, an uninformed potential asset buyer can purchase information, at a fixed price and from a fully-informed source, about the value of the asset in that period. The informational insiders cannot purchase the asset and are given short-term incentives to provide false information when the asset value is low. Our model predicts that, in accordance with the Folk Theorem, Pareto-superior outcomes featuring truthful revelation should be sustainable. However, this depends critically on beliefs about rationality and behavior. We find that, overall, sellers are truthful 89% of the time. More significantly, the observed frequency of truthfulness is 81% when the asset value is low. Our result is consistent with both mixed-strategy and trigger-strategy interpretations and provides evidence that most subjects correctly anticipate rational behavior. We discuss applications to financial markets, media regulation, and the stability of cartels.
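The reputational logic referenced above can be made concrete with the textbook sustainability condition for truth-telling under a grim-trigger strategy in a repeated game with continuation probability \(\delta\). This is a generic illustration, not the authors' specific model or parameterization: let \(g\) denote the seller's one-period gain from misreporting when the asset value is low, \(\pi_T\) the per-period profit while trusted, and \(\pi_L\) the per-period profit after reputation is lost. Truthful revelation is then sustainable whenever the one-shot gain is outweighed by the expected discounted loss of future rents:

g \;\le\; \frac{\delta}{1-\delta}\,\bigl(\pi_T - \pi_L\bigr)

With an uncertain end date, \(\delta\) is naturally read as the probability that the interaction continues for another period.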
Abstract:
There is no doubt about the necessity of protecting digital communication: citizens are entrusting their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures on which we depend in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. Yet most digital communication is not secure today, although some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand, complex, technically challenging, and costly, yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite its technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
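The key-establishment primitive referred to above can be conveyed with a purely classical toy simulation of BB84-style basis sifting. This sketch is not part of the thesis and is not a secure implementation: it assumes an ideal, eavesdropper-free channel and omits error estimation, error correction, and privacy amplification, all of which a real QKD system requires; the block length is an arbitrary illustrative choice.

```python
import secrets

def bb84_sifted_key(n_qubits=256):
    """Toy BB84 sketch: ideal channel, no eavesdropper, no post-processing."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_qubits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_qubits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n_qubits)]
    # When Bob measures in Alice's basis he recovers her bit; otherwise the
    # outcome is random and the position is discarded during sifting.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
    return sifted

key = bb84_sifted_key()
print(len(key), "sifted key bits")
```

On average about half of the transmitted positions survive sifting, since the two independently chosen bases coincide with probability 1/2.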
Abstract:
Aim: The paper examines the current situation of recognition of patients' right to information in international standards and in the national laws of Belgium, France, Italy, Spain (and Catalonia), Switzerland and the United Kingdom. Methodology: International standards, laws and codes of ethics of physicians and librarians that are currently in force were identified and analyzed with regard to patients' right to information and the ownership of this right. The related subjects of access to clinical history, advance directives and informed consent were not taken into account. Results: All the standards, laws and codes analyzed deal with guaranteeing access to information. The codes of ethics of both physicians and librarians establish the duty to inform. Conclusions: Librarians must collaborate with physicians in the process of informing patients.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints on the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models.
In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion.
Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume, together with its location, are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
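The pixel-based probabilistic inversion described in the second part of the abstract rests on standard MCMC machinery: a data likelihood combined with a structural prior, sampled with a random-walk Metropolis algorithm. The sketch below illustrates that machinery on a toy linear problem; it is a hedged stand-in, not the thesis code. The forward operator G, the Gaussian likelihood, the smoothness prior, the noise level, and all tuning parameters are illustrative assumptions, and the real application uses nonlinear plane-wave EM (and ERT) forward solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward problem standing in for the nonlinear plane-wave EM
# solver; G, m_true and the noise level are illustrative.
n_model, n_data = 20, 15
G = rng.normal(size=(n_data, n_model))
m_true = np.sin(np.linspace(0, np.pi, n_model))
sigma = 0.05
d_obs = G @ m_true + rng.normal(scale=sigma, size=n_data)

def log_posterior(m, lam=10.0):
    misfit = np.sum((G @ m - d_obs) ** 2) / (2 * sigma ** 2)   # Gaussian likelihood
    rough = lam * np.sum(np.diff(m) ** 2)                      # smoothness prior
    return -(misfit + rough)

m = np.zeros(n_model)
logp = log_posterior(m)
samples = []
for it in range(20000):
    prop = m + rng.normal(scale=0.02, size=n_model)   # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:       # Metropolis acceptance
        m, logp = prop, logp_prop
    if it > 5000 and it % 20 == 0:                    # thin after burn-in
        samples.append(m.copy())

post = np.array(samples)
print("posterior mean of first cell:", post[:, 0].mean(), "+/-", post[:, 0].std())
```

In this toy setting, increasing the regularization weight lam visibly shrinks the posterior spread, which mirrors the trade-off the abstract points to between faster convergence and over-confident uncertainty estimates in poorly resolved regions.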
Abstract:
BACKGROUND: Little is known about how to most effectively deliver relevant information to patients scheduled for endoscopy. METHODS: To assess the effects of combined written and oral information, compared with oral information alone, on the quality of information before endoscopy and on the level of anxiety, we designed a prospective study in two Swiss teaching hospitals which enrolled consecutive patients scheduled for endoscopy over a three-month period. Patients were randomized either to receive, along with the appointment notice, an explanatory leaflet about the upcoming examination, or to receive oral information delivered by each patient's doctor. Quality of information was rated on scales from 0 (none received) to 5 (excellent). Outcome variables were analysed on an intention-to-treat basis. Multivariate analysis of predictors of information scores was performed by linear regression. RESULTS: Of 718 eligible patients, 577 (80%) returned their questionnaire. Patients who received written leaflets (N = 278) rated the quality of information they received higher than those informed verbally (N = 299) for all 8 quality-of-information items. Differences were significant regarding information about the risks of the procedure (3.24 versus 2.26, p < 0.001), how to prepare for the procedure (3.56 versus 3.23, p = 0.036), what to expect after the procedure (2.99 versus 2.59, p < 0.001), and the mean across the 8 quality-of-information items (3.35 versus 3.02, p = 0.002). The two groups reported similar levels of anxiety before the procedure (p = 0.66), pain during the procedure (p = 0.20), tolerability throughout the procedure (p = 0.76), problems after the procedure (p = 0.22), and overall rating of the procedure from poor to excellent (p = 0.82). CONCLUSION: Written information led to more favourable assessments of the quality of information and had no impact on patient anxiety or on the overall assessment of the endoscopy. Because structured and comprehensive written information is perceived as beneficial by patients, gastroenterologists should clearly explain to their patients the risks, benefits and alternatives of endoscopic procedures. Trial registration: Current Controlled Trials number ISRCTN34382782.
Abstract:
In the last few years, a need to account for molecular flexibility in drug-design methodologies has emerged, even if the dynamic behavior of molecular properties is seldom made explicit. For a flexible molecule, it is indeed possible to compute different values of a given conformation-dependent property, and the ensemble of such values defines a property space that can be used to describe its molecular variability; the most representative case is the lipophilicity space. In this review, a number of applications of the lipophilicity space and other property spaces are presented, showing that this concept can be fruitfully exploited: to investigate the constraints exerted by media with different levels of structural organization, to examine processes of molecular recognition and binding at an atomic level, to derive informative descriptors to be included in quantitative structure-activity relationships, and to analyze protein simulations, extracting the relevant information. Much molecular information is neglected in the descriptors used by medicinal chemists, and the concept of property space can fill this gap by accounting for the often-disregarded dynamic behavior of both small ligands and biomacromolecules. Property space also introduces some innovative concepts, such as molecular sensitivity and plasticity, which appear best suited to explore the ability of a molecule to adapt itself to the environment by variously modulating its property and conformational profiles. Globally, such concepts can enhance our understanding of biological phenomena, providing fruitful descriptors for drug design and the pharmaceutical sciences.
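The property-space idea above, one conformation-dependent property value per conformer with the ensemble of values describing molecular variability, can be summarized numerically in a few lines. The sketch below uses hypothetical per-conformer logP values, and its "range" and "dispersion" summaries are loose stand-ins for the sensitivity and plasticity descriptors discussed in the review, not their formal definitions.

```python
import numpy as np

# Hypothetical per-conformer logP values for one flexible ligand; in practice
# these would come from conformer generation plus a conformation-dependent
# lipophilicity calculation, which is not reproduced here.
logp_conformers = np.array([1.8, 2.1, 2.4, 1.6, 2.9, 2.7, 2.0])

# Simple property-space summaries: overall spread of the ensemble and its
# dispersion around the mean (illustrative labels only).
prop_range = logp_conformers.max() - logp_conformers.min()
prop_std = logp_conformers.std(ddof=1)
print(f"logP mean {logp_conformers.mean():.2f}, range {prop_range:.2f}, std {prop_std:.2f}")
```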
Abstract:
The capacity to interact socially and share information underlies the success of many animal species, humans included. Researchers in many fields have emphasized the evolutionary significance of how patterns of connections between individuals, or social networks, together with learning abilities, affect the information obtained by animal societies. To date, studies have focused on the dynamics either of social networks or of the spread of information. The present work aims to study them together. We make use of mathematical and computational models to study the dynamics of networks where social learning and information sharing affect the structure of the population the individuals belong to. The number and strength of the relationships between individuals, in turn, affect the accessibility and the diffusion of the shared information. Moreover, we investigate how different strategies for evaluating and choosing interaction partners affect the processes of knowledge acquisition and social structure rearrangement. First, we look at how different evaluations of social interactions affect the availability of information and the network topology. We compare a first case, where individuals evaluate social exchanges by the amount of information that can be shared by the partner, with a second case, where they evaluate interactions by considering their partners' social status. We show that, even if both strategies take into account the knowledge endowments of the partners, they have very different effects on the system. In particular, we find that the first case generally enables individuals to accumulate larger amounts of information, thanks to the more efficient patterns of social connections they are able to build. Then, we study the effects that homophily, or the tendency to interact with similar partners, has on knowledge accumulation and social structure. We compare the case where individuals who know the same information are more likely to learn socially from each other with the opposite case, where individuals who know different information are instead more likely to learn socially from each other. We find that it is not trivial to claim that one strategy is better than the other. Depending on the possibility of forgetting information, the way new social partners can be chosen, and the population size, we delineate the conditions under which each strategy allows more information to be accumulated, or to be accumulated faster. For these conditions, we also discuss the topological characteristics of the resulting social structure, relating them to the outcome of the information dynamics. In conclusion, this work paves the way for modeling the joint dynamics of the spread of information among individuals and their social interactions. It also provides a formal framework to study jointly the effects of different partner-choice strategies on social structure, and how they favor the accumulation of knowledge in the population.
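A minimal agent-based sketch of the kind of joint dynamics studied in the thesis is given below: agents hold sets of information items, learn items socially from partners, and occasionally rewire ties, with two illustrative partner-evaluation rules (novel information offered versus social status proxied by in-degree). It is a hedged toy model under assumptions of my own (population size, tie counts, rewiring probability, no forgetting), not the thesis's actual formulation.

```python
import random

N_AGENTS, N_ITEMS, STEPS = 50, 40, 4000  # illustrative settings
random.seed(2)

def run(evaluate):
    """Toy dynamic-network social-learning model. Each agent holds a set of
    information items; at each step one agent learns an item from a partner
    chosen according to `evaluate`, or rewires a tie that offers nothing new."""
    know = [{random.randrange(N_ITEMS)} for _ in range(N_AGENTS)]
    ties = {i: set(random.sample([j for j in range(N_AGENTS) if j != i], 4))
            for i in range(N_AGENTS)}
    for _ in range(STEPS):
        i = random.randrange(N_AGENTS)
        neighbors = sorted(ties[i])
        weights = [evaluate(i, j, know, ties) + 1e-6 for j in neighbors]
        j = random.choices(neighbors, weights=weights)[0]
        novel = know[j] - know[i]
        if novel:
            know[i].add(random.choice(sorted(novel)))            # social learning
        elif random.random() < 0.2:                              # rewire a useless tie
            ties[i].discard(j)
            ties[i].add(random.choice([k for k in range(N_AGENTS) if k != i]))
    return sum(len(k) for k in know) / N_AGENTS

# Two illustrative partner-evaluation strategies from the comparison above:
info_based = lambda i, j, know, ties: len(know[j] - know[i])             # novel info offered
status_based = lambda i, j, know, ties: sum(j in ties[k] for k in ties)  # partner's in-degree

print("information-based choice, mean items known:", run(info_based))
print("status-based choice, mean items known:", run(status_based))
```

With these toy settings, information-based partner choice tends to yield higher average knowledge, echoing the qualitative result reported for the first comparison in the abstract.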