78 results for One-shot information theory
Abstract:
Background: All patients should be fully informed about the risks and benefits of anaesthetic procedures before giving written consent. Moreover, the satisfaction level may vary in proportion to the information given. We aimed to determine, in a single-blind randomized-controlled study, whether an information form given before the pre-anaesthetic consultation could improve perceived information, information gain and satisfaction level. Methods: Two hundred ASA 1-3 patients scheduled for elective orthopaedic surgery were randomized into two groups: a group that received an information form before the pre-anaesthetic consultation (IF group) and a control group (no information form). A standardized questionnaire was administered after the pre-anaesthetic consultation and after the operation. This 17-item questionnaire explored perceived information (five items), information gain (three items) and satisfaction level (nine items). The items of each topic were pooled and compared between groups. Results: One hundred and eighty-five patients (92.5%) completed the study. The IF group had better perceived information (IF group 73% vs. control group 63%, P=0.002), higher information gain (IF group 75% vs. control group 62%, P=0.001) and a higher satisfaction level (IF group 95% vs. control group 92%, P=0.048). Conclusions: Our study suggests that an information form given before the pre-anaesthetic consultation enhances perceived information, information gain and satisfaction level. Methods: This prospective, single-blind randomized study was conducted at the Orthopaedic Hospital of the Centre Hospitalier Universitaire Vaudois. Two hundred patients scheduled for elective orthopaedic surgery were recruited between April and June 2008 and assigned to two groups according to a randomization table: one group received an information form 30 minutes before the pre-anaesthetic consultation, the other did not. Patients in both groups were then examined at the preoperative consultation by an anaesthetist independent of the study and subsequently received a standardized questionnaire. This questionnaire, derived from existing questionnaires and validated beforehand on a sample of 50 patients, comprised 17 questions exploring perceived information (5 questions), information gain (3 questions) and satisfaction level (9 questions). Of these 17 questions, 3 were asked 24 h after surgery, during a ward visit or by telephone. Responses were analysed and compared between the two groups. Results: One hundred and eighty-five patients completed the study. The group that received the information form had better perceived information (73% vs. 63% in the control group, p = 0.002), a higher information gain (75% vs. 62% in the control group, p = 0.001) and a higher satisfaction level (95% vs. 92% in the control group, p = 0.048). Discussion and conclusion: This study demonstrated that handing out an explanatory information form before the pre-anaesthetic consultation is a simple and inexpensive way to improve perceived information and satisfaction level.
Abstract:
Background: Medical prescription after organ transplant must prevent both rejection and infectious complications. We assessed the 1-year effectiveness and cost of introducing a new combined regimen in kidney transplantation. Methods: Patients transplanted from January 2000 to March 2003 (Period 1) were compared to patients transplanted from April 2003 to July 2005 (Period 2). In period 1, patients were treated with Basiliximab, Cyclosporin, steroids and Mycophenolate (MMF) or Azathioprine. Prophylaxis with Valacyclovir was prescribed only in CMV D+/R- patients. In period 2, immunosuppressive drugs were Basiliximab, Tacrolimus, steroids and MMF. A 3-month universal CMV prophylaxis with Valganciclovir was used. Outpatient medical charts were used to identify drug use and laboratory and radiological tests, and the hospital information system provided causes of hospitalisation and length of stay (LOS) over the first year after transplant. Patients with incomplete cost data were excluded. Results: 53 patients were analysed in period 1, and 60 in period 2. CMV serostatus patterns were not significantly different between the 2 periods. Over 12 months, acute rejection decreased from 22 patients (42%) in period 1 to 4 patients (7%) in period 2 (p<0.001), and CMV infection from 25 patients (47%) to 9 patients (15%, p<0.001). Average total rehospitalisation LOS decreased from 28±19 to 20±11 days (p<0.007). Average outpatient visits decreased from 49±10 to 39±8 (p<0.001). Average immunosuppression and CMV prophylaxis costs increased from US$ 18,362±6,546 to 24,637±5,457 (p<0.001), while average graft rejection costs decreased from US$ 4,135±9,164 to 585±2,850 (p=0.005), and average CMV treatment costs from US$ 2,043±5,545 to 91±293 (p=0.008). Average outpatient visit costs decreased from US$ 7,619±1,549 to 6,074±1,043 (p<0.001), and other hospital costs from US$ 3,801±6,519 to 1,196±3,146 (p=0.007). Altogether, average 1-year treatment costs decreased from US$ 35,961±14,916 to 32,584±6,211 (p=0.115). Cost-effectiveness ratios to avoid graft rejection and CMV infection decreased from US$ 61,482±9,292 to 34,911±1,639 (p=0.006) and from US$ 68,070±11,122 to 39,899±2,650 (p=0.015), respectively. Conclusion: The new combined regimen administered in period 2 was significantly more effective. Its additional cost was more than offset by savings linked to avoided complications.
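The reported cost-effectiveness ratios are consistent with dividing the average 1-year cost per patient by the proportion of patients who avoided the event. A minimal Python sketch of that reading, using figures from the abstract (the formula itself is our reconstruction, not stated by the authors):

```python
# Hedged reconstruction: cost-effectiveness ratio (CER) read as the
# average 1-year cost per patient divided by the proportion of patients
# free of the event. The formula is an assumption; figures are from the
# abstract.

def cer(avg_cost: float, events: int, patients: int) -> float:
    """Average cost per patient who avoided the event."""
    return avg_cost / (1 - events / patients)

# Period 1: 53 patients, 22 acute rejections, 25 CMV infections
print(round(cer(35_961, 22, 53)))  # ~61,482 (abstract: 61,482), rejection
print(round(cer(35_961, 25, 53)))  # ~68,069 (abstract: 68,070), CMV

# Period 2: 60 patients, 4 acute rejections, 9 CMV infections
print(round(cer(32_584, 4, 60)))   # ~34,911 (abstract: 34,911), rejection
print(round(cer(32_584, 9, 60)))   # ~38,334 (abstract: 39,899; the gap
                                   # presumably reflects per-patient averaging)
```

Three of the four reported ratios match this formula almost exactly, which supports the reading, but it remains an inference from the numbers rather than the authors' stated method.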
Abstract:
OBJECTIVE: When potentially dangerous patients reveal criminal fantasies to their therapists, the latter must decide whether this information has to be transmitted to a third person in order to protect potential victims. We were interested in how medical and legal professionals handle such situations in the context of prison medicine and forensic evaluations. We aimed to explore the motives behind their actions and to compare these professional groups. METHOD: A mail survey was conducted among medical and legal professionals using five fictitious case vignettes. For each vignette, participants were asked to answer questions exploring what the professional should do in the situation and to explain their justification for the chosen response. RESULTS: A total of 147 questionnaires were analysed. Agreement between participants varied from one scenario to another. Overall, legal professionals tended to disclose information to a third party more easily than medical professionals, the latter tending to privilege confidentiality and patient autonomy over security. Perception of potential danger in a given situation was not consistently associated with actions. CONCLUSION: Professionals' opinions and attitudes regarding the confidentiality of potentially dangerous patients differ widely and appear to be subjectively determined. Shared discussions about clinical situations could enhance knowledge and competencies and reduce differences between professional groups.
Abstract:
Drawing on Social Representations Theory, this study investigates focalisation and anchoring during the diffusion of information concerning the Large Hadron Collider (LHC), the particle accelerator at the European Organisation for Nuclear Research (CERN). We hypothesised that people focus on striking elements of the message, abandoning others, that the nature of the initial information affects diffusion of information, and that information is anchored in prior attitudes toward CERN and science. A serial reproduction experiment with two generations and four chains of reproduction diffusing controversial versus descriptive information about the LHC shows a reduction of information through generations, the persistence of terminology regarding the controversy and a decrease of other elements for participants exposed to polemical information. Concerning anchoring, positive attitudes toward CERN and science increase the use of expert terminology unrelated to the controversy. This research highlights the relevance of a social representational approach in the public understanding of science.
Abstract:
Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and entails substantial financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide and determination of its volume and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change its physical parameters, in particular decreasing the seismic velocities and the density of the material.
Therefore, seismic methods are well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. In practice, this assumption is seldom correct, in particular for landslides, in which reshaped layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes so much that it prevents determination of the dispersion curves. Our approach to using surface-wave dispersion analysis on sites that include lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve so obtained, and interpolating the resulting velocity models. The choice of the location as well as of the geophone-array length is important. It takes into account the location of the heterogeneities revealed by the interpretation of seismic refraction data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be efficient for determining dispersion curves using short arrays. It consists in building, from an array of geophones, a time-offset record covering a wide range of source-to-receiver offsets by gathering seismograms acquired from different shot positions. When assembling the different data, a phase correction is applied in order to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. On the Arnex site, 22 dispersion curves were determined with 10 m long geophone arrays. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the interpretation of the vertical seismic profile is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole, in which a silty-clay layer with a shear-wave velocity of about 175 m/s overlies, at 9 m depth, clayey-sandy till deposits characterized by an S-wave velocity of about 300 m/s down to 14 m and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments.
In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The obtained curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into consideration in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles, acquired with a source built as part of this work, complete and corroborate the velocity model revealed by surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
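The phase-correction step, estimating the static phase shift between two successive shots from the cross power spectral density of common-offset traces, can be sketched with scipy (trace names, the synthetic signals, sampling rate and all parameter values are illustrative assumptions, not the thesis's data):

```python
import numpy as np
from scipy.signal import csd

# Two traces recorded at the same source-receiver offset for two
# successive shot positions (synthetic stand-ins for field data).
fs = 1000.0                                  # sampling frequency [Hz]
t = np.arange(0, 1.0, 1 / fs)
trace_a = np.sin(2 * np.pi * 20 * t)
trace_b = np.sin(2 * np.pi * 20 * t + 0.6)   # same signal, phase-shifted

# Cross power spectral density between the common-offset traces.
f, Pab = csd(trace_a, trace_b, fs=fs, nperseg=256)

# The phase of the CPSD at each frequency estimates the static phase
# shift induced by heterogeneities between the two shot positions.
phase_correction = np.angle(Pab)

band = (f >= 7) & (f <= 35)                  # frequency band used in the study
print(f[band], phase_correction[band])
```

In this toy case the recovered phase near 20 Hz is close to the 0.6 rad shift built into `trace_b`, which is the quantity one would subtract before assembling seismograms from different shots.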
Abstract:
Guided by a modified information-motivation-behavioral skills model, this study identified predictors of condom use among heterosexual people living with HIV with their steady partners. Consecutive patients at 14 European HIV outpatient clinics received an anonymous, standardized, self-administered questionnaire between March and December 2007. Data were analyzed using descriptive statistics and two-step backward elimination regression analyses stratified by gender. The survey included 651 participants (n = 364, 56% women; n = 287, 44% men). Mean age was 39 years for women and 43 years for men. Most had acquired HIV sexually and more than half were in a serodiscordant relationship. Sixty-three percent (n = 229) of women and 59% of men (n = 169) reported at least one sexual encounter with a steady partner in the 6 months prior to the survey. Fifty-one percent (n = 116) of women and 59% of men (n = 99) used condoms consistently with that partner. In both genders, condom use was positively associated with a subjective norm conducive to condom use and with self-efficacy to use condoms. Having a partner whose HIV status was positive or unknown reduced condom use. In men, higher education and knowledge about condom use additionally increased condom use, while the use of erectile-enhancing medication decreased it. For women, HIV disclosure to partners additionally reduced the likelihood of condom use. Positive attitudes to condom use and subjective norm increased self-efficacy in both genders; however, a number of gender-related differences appeared to influence self-efficacy. Service providers should pay attention to the identified predictors of condom use and adopt comprehensive and gender-related approaches for preventive interventions with people living with HIV.
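Backward elimination regression of the kind named in the methods can be sketched as follows: fit a full logistic model, then repeatedly drop the least significant predictor until every remaining p-value clears a threshold. A minimal statsmodels sketch on toy data (variable names, the toy data and the 0.05 cutoff are assumptions, not the paper's exact protocol):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(y: pd.Series, X: pd.DataFrame, alpha: float = 0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    X = sm.add_constant(X)
    while True:
        result = sm.Logit(y, X).fit(disp=0)
        pvals = result.pvalues.drop("const")
        if pvals.empty or pvals.max() < alpha:
            return result
        X = X.drop(columns=pvals.idxmax())

# Toy data standing in for survey predictors of consistent condom use.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 3)),
                 columns=["subjective_norm", "self_efficacy", "noise"])
y = pd.Series((X["self_efficacy"] + rng.normal(size=300) > 0).astype(int))

model = backward_eliminate(y, X)
print(model.params)  # the uninformative "noise" column is eliminated
```

Stratifying by gender, as the study does, would simply mean running this procedure separately on the male and female subsamples.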
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflict situations in economics and sociology are well described neither by fixed geographical positions of the individuals on a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could give explanations as to why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, as well as with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
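The mechanism this abstract turns on, an evolutionary game whose players sit on a network and imitate better-scoring neighbours, is easy to sketch. A toy prisoner's dilemma on a small-world graph in Python (the payoff values, the unconditional-imitation rule and all parameters are illustrative assumptions, not the thesis's exact dynamics):

```python
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)   # small-world topology
strategy = {v: random.choice(["C", "D"]) for v in G}

# Prisoner's dilemma payoffs: (my move, neighbour's move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_round():
    """Each player's accumulated payoff against all neighbours."""
    return {v: sum(PAYOFF[(strategy[v], strategy[u])] for u in G[v])
            for v in G}

for step in range(50):
    score = play_round()
    new = {}
    for v in G:
        # Imitate the best-scoring neighbour if it beats your own score.
        best = max(G[v], key=score.get, default=v)
        new[v] = strategy[best] if score[best] > score[v] else strategy[v]
    strategy = new

coop = sum(s == "C" for s in strategy.values()) / len(strategy)
print(f"fraction of cooperators after 50 steps: {coop:.2f}")
```

Re-running this with a regular lattice, a random graph, or a scale-free network in place of `watts_strogatz_graph` is exactly the kind of comparison the thesis carries out to see which structural properties sustain cooperation.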
Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
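The Hamilton (1964) starting point can be written compactly. A sketch in standard notation, under the stated assumptions of additive gene action, weak selection and constant environment and demography (this formulation is a paraphrase, not quoted from the paper):

```latex
% Direction of change in the frequency p of an allele coding for a
% social trait (paraphrased sketch, standard notation):
\Delta p \;\propto\; p(1-p)\,\bigl(-c + r\,b\bigr),
% so the allele spreads, at any frequency p, exactly when Hamilton's
% rule  -c + r b > 0  holds.
```

Here $c$ is the phenotypic cost to the actor, $b$ the benefit to the recipient, and $r$ their genealogical relatedness; because the sign of $\Delta p$ does not depend on $p$, the prediction holds at any trait frequency, as the abstract notes.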
Abstract:
Cannabis use among adolescents and young adults has become a major public health challenge. Several European countries are currently developing short screening instruments to identify 'problematic' forms of cannabis use in general population surveys. One such instrument is the Cannabis Use Disorders Identification Test (CUDIT), a 10-item questionnaire based on the Alcohol Use Disorders Identification Test. Previous research found that some CUDIT items did not perform well psychometrically. In the interests of improving the psychometric properties of the CUDIT, this study replaces the poorly performing items with new items that specifically address cannabis use. Analyses are based on a sub-sample of 558 recent cannabis users from a representative population sample of 5722 individuals (aged 13-32) who were surveyed in the 2007 Swiss Cannabis Monitoring Study. Four new items were added to the original CUDIT. Psychometric properties of all 14 items, as well as the dimensionality of the supplemented CUDIT were then examined using Item Response Theory. Results indicate the unidimensionality of CUDIT and an improvement in its psychometric performance when three original items (usual hours being stoned; injuries; guilt) are replaced by new ones (motives for using cannabis; missing out leisure time activities; difficulties at work/school). However, improvements were limited to cannabis users with a high problem score. For epidemiological purposes, any further revision of CUDIT should therefore include a greater number of 'easier' items.
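Item Response Theory links each respondent's latent problem severity to the probability of endorsing an item; an 'easier' item is one endorsed at lower severity. A sketch of the standard two-parameter logistic model (the abstract does not say which IRT model was fitted, so the exact form is an assumption):

```latex
% Two-parameter logistic IRT model (assumed form):
P\bigl(X_{ij} = 1 \mid \theta_i\bigr)
  \;=\; \frac{1}{1 + \exp\bigl(-a_j(\theta_i - b_j)\bigr)}
```

Here $\theta_i$ is respondent $i$'s latent severity, $b_j$ the difficulty of item $j$ and $a_j$ its discrimination. In these terms, the call for more 'easier' items amounts to adding items with lower $b_j$, so that the test is informative below the high problem-score range where the revised CUDIT currently performs best.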
Abstract:
BACKGROUND: Healthcare professionals regularly read the summary of product characteristics (SmPC) as one of the various sources of information on the risks of drug use in women of childbearing age and during pregnancy. The aim of this article is to present an overview of the teratogenic potential of various antiepileptic drugs and to compare these data with the information provided by the SmPCs. METHODS: A literature search on the teratogenic risks of 19 antiepileptic agents was conducted and the results were compared with the information on the use in women of childbearing age and during pregnancy provided by the SmPCs of 38 commercial products available in Switzerland and Germany. RESULTS: The teratogenic risk is discussed in all available SmPCs. Quantification of the risk for birth defects and the numbers of documented pregnancies are mostly missing. Reproductive safety information in SmPCs showed poor concordance with risk levels reported in the literature. Recommendations concerning the need to monitor plasma levels and possibly perform dose adjustments during pregnancy to prevent treatment failure were missing in five Swiss and two German SmPCs. DISCUSSION: The information regarding use in women of childbearing age and during pregnancy provided by the SmPCs is heterogeneous and poorly reflects the current state of knowledge. Regular updates of SmPCs are warranted in order for these documents to be of reliable use for health care professionals.
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of, is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have been standardly levelled at other Representative Theories of Perception. As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Abstract:
Technological developments in the information society bring new challenges, both to the applicability and to the enforceability of the law. One major challenge is posed by new entities such as pseudonyms, avatars, and software agents that operate at an increasing distance from the physical persons "behind" them (the "principal"). In case of accidents or misbehavior, current laws require that the physical or legal principal behind the entity be found so that she can be held to account. This may be problematic if the linkability of the principal and the operating entity is questionable. In light of the ongoing developments in electronic agents, there is sufficient reason to conduct a review of the literature in order to more closely examine arguments for and against legal personhood for some nonhuman acting entities. This article also includes a discussion of alternative approaches to solving the "accountability gap."
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process where policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the process that results, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
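The aggregate dynamics described in the last sentences can be illustrated with a Granovetter-style threshold model, with each actor adopting once the share of prior adopters, standing in here for the expected utility of the policy, reaches an idiosyncratic threshold. A toy Python sketch (the threshold distribution and the seeding of early adopters are our assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Idiosyncratic thresholds: an actor adopts once the share of prior
# adopters (a crude stand-in for the policy's expected utility) reaches
# its threshold. Distribution chosen for illustration only.
thresholds = np.clip(rng.normal(0.25, 0.15, n), 0.0, 1.0)

adopted = thresholds == 0.0          # unconditional early adopters
history = [adopted.mean()]
while True:
    new = adopted | (thresholds <= adopted.mean())
    if (new == adopted).all():
        break                        # equilibrium: no further adoptions
    adopted = new
    history.append(adopted.mean())

# Shifting the threshold distribution makes diffusion cascade to near-
# universal adoption or stall early: convergence is only one outcome.
print([round(h, 2) for h in history])
```

With these parameters adoption cascades to near-universality, but raising the mean threshold slightly makes it stall, which echoes the article's point that a wide range of diffusion processes can arise from the same model.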
Abstract:
In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust, nonlinear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVM perform particularly well when the input space is high dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
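A minimal sketch of an SVM classifier on spatially distributed data, using scikit-learn (the synthetic coordinates and labels, the RBF kernel and the parameter values are illustrative, not those of Kanevski and Maignan 2004):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic monitoring data: 2-D coordinates with a nonlinear class
# boundary standing in for a real spatially distributed variable.
X = rng.uniform(0, 10, size=(200, 2))
y = (np.sin(X[:, 0]) + 0.1 * X[:, 1] > 0.5).astype(int)

# RBF-kernel SVM: model capacity is controlled by C and gamma, in the
# spirit of structural risk minimisation (trading fit vs. complexity).
model = SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X, y)

# Only the support vectors determine the decision boundary.
print("support vectors:", model.support_vectors_.shape[0], "of", len(X))

# Predict on a grid of unsampled locations (spatial mapping).
xx, yy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
grid = np.column_stack([xx.ravel(), yy.ravel()])
print(model.predict(grid))
```

The count of support vectors relative to the full training set illustrates the data-redundancy point above: observations that are not support vectors could be dropped without changing the decision boundary.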