732 results for Reputation
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflict situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using various statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on how individual payoffs are calculated.
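Since the abstract rests on the contrast between small-world networks (short social distances combined with local clustering) and purely random or regular graphs, a minimal sketch can make that contrast concrete. The snippet below is not taken from the thesis; it simply compares a Watts-Strogatz small-world graph with an Erdős-Rényi random graph of similar size and mean degree using networkx, and the parameter values are illustrative assumptions.

```python
# Minimal sketch (not from the thesis): contrast a small-world graph with a
# random graph of similar size and mean degree. Parameter values are illustrative.
import networkx as nx

n, k, p_rewire = 1000, 6, 0.1

# Watts-Strogatz small-world graph: a ring lattice with a fraction of rewired edges.
sw = nx.connected_watts_strogatz_graph(n, k, p_rewire, seed=42)

# Erdos-Renyi random graph with roughly the same mean degree; keep its giant
# component so that the average shortest path length is well defined.
er = nx.erdos_renyi_graph(n, k / (n - 1), seed=42)
er = er.subgraph(max(nx.connected_components(er), key=len))

for name, g in (("small-world", sw), ("random", er)):
    print(f"{name:11s}  clustering = {nx.average_clustering(g):.3f}  "
          f"avg path length = {nx.average_shortest_path_length(g):.2f}")

# Expected pattern: both graphs have short average path lengths, but the
# small-world graph retains a much higher clustering coefficient.
```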
Abstract:
Integrative review (IR) has an international reputation in nursing research and evidence-based practice. This IR aimed at identifying and analyzing the concepts and methods recommended for undertaking IR in nursing. Nine information resources, including electronic databases and grey literature, were searched. Seventeen studies were included. The results indicate that: primary studies were mostly from the USA; it is possible to have several research questions or hypotheses and to include primary studies with different theoretical and methodological approaches in the review; it is a type of review that can go beyond the analysis and synthesis of findings from primary studies, allowing other research dimensions to be explored; and it presents potential for the development of new theories and new research problems. Conclusion: IR is understood as a very complex type of review, and it is expected to be developed using standardized and systematic methods to ensure the rigor required of scientific research and therefore the legitimacy of the established evidence.
Abstract:
The spectacular failure of top-rated structured finance products has brought renewed attention to the conflicts of interest of Credit Rating Agencies (CRAs). We model both the CRA conflict of understating credit risk to attract more business, and the issuer conflict of purchasing only the most favorable ratings (issuer shopping), and examine the effectiveness of a number of proposed regulatory solutions of CRAs. We find that CRAs are more prone to inflate ratings when there is a larger fraction of naive investors in the market who take ratings at face value, or when CRA expected reputation costs are lower. To the extent that in booms the fraction of naive investors is higher, and the reputation risk for CRAs of getting caught understating credit risk is lower, our model predicts that CRAs are more likely to understate credit risk in booms than in recessions. We also show that, due to issuer shopping, competition among CRAs in a duopoly is less efficient (conditional on the same equilibrium CRA rating policy) than having a monopoly CRA, in terms of both total ex-ante surplus and investor surplus. Allowing tranching decreases total surplus further. We argue that regulatory intervention requiring upfront payments for rating services (before CRAs propose a rating to the issuer) combined with mandatory disclosure of any rating produced by CRAs can substantially mitigate the conflicts of interest of both CRAs and issuers.
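The abstract's key comparative static (ratings inflation is more likely when more investors take ratings at face value and when expected reputation costs are low) can be illustrated with a deliberately stylized payoff comparison. The function below is not the paper's model; the parameters and the linear form of the trade-off are assumptions made purely for illustration.

```python
# Stylized illustration (not the authors' model): a CRA inflates a rating when the
# extra fee revenue it expects to capture outweighs the expected reputation cost.
def inflate_rating(fee, naive_fraction, failure_prob, reputation_cost):
    """Return True if inflating is profitable in this toy trade-off.

    fee              -- revenue from the extra business attracted by a high rating
    naive_fraction   -- share of investors who take ratings at face value
    failure_prob     -- chance the product fails and the inflation is exposed
    reputation_cost  -- loss of future business if the CRA is caught
    """
    expected_gain = fee * naive_fraction
    expected_penalty = failure_prob * reputation_cost
    return expected_gain > expected_penalty

# Boom-like parameters: many trusting investors, low perceived failure risk.
print(inflate_rating(fee=1.0, naive_fraction=0.6, failure_prob=0.1, reputation_cost=3.0))  # True
# Recession-like parameters: fewer trusting investors, higher failure risk.
print(inflate_rating(fee=1.0, naive_fraction=0.2, failure_prob=0.3, reputation_cost=3.0))  # False
```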
Abstract:
We first establish that policymakers on the Bank of England's Monetary Policy Committee choose lower interest rates with experience. We then reject increasing confidence in private information or learning about the structure of the macroeconomy as explanations for this shift. Instead, a model in which voters signal their hawkishness to observers better fits the data. The motivation for signalling is consistent with wanting to control inflation expectations, but not career concerns or pleasing colleagues. There is also no evidence of capture by industry. The paper suggests that policy-motivated reputation building may be important for explaining dynamics in experts' policy choices.
Abstract:
The collapse of so many AAA-rated structured finance products in 2007-2008 has brought renewed attention to the causes of ratings failures and the conflicts of interest in the Credit Ratings Industry. We provide a model of competition among Credit Ratings Agencies (CRAs) in which there are three possible sources of conflicts: 1) the CRA conflict of interest of understating credit risk to attract more business; 2) the ability of issuers to purchase only the most favorable ratings; and 3) the trusting nature of some investor clienteles who may take ratings at face value. We show that when combined, these give rise to three fundamental equilibrium distortions. First, competition among CRAs can reduce market efficiency, as competition facilitates ratings shopping by issuers. Second, CRAs are more prone to inflate ratings in boom times, when there are more trusting investors, and when the risks of failure which could damage CRA reputation are lower. Third, the industry practice of tranching of structured products distorts market efficiency as its role is to deceive trusting investors. We argue that regulatory intervention requiring: i) upfront payments for rating services (before CRAs propose a rating to the issuer), ii) mandatory disclosure of any rating produced by CRAs, and iii) oversight of ratings methodology can substantially mitigate ratings inflation and promote efficiency.
Abstract:
I study a repeated buyer-seller relationship for the exchange of a given good. Asymmetric information over the buyer's reservation price, which is subject to random shocks, may lead the seller to use a rigid pricing policy despite the possibility of making higher profits through price discrimination across the different states of the buyer's reservation price. The existence of a flexible-price subgame perfect equilibrium is shown for buyers sufficiently locked in. When the seller faces a population of buyers whose degree of involvement in the relationship is unknown, the flexible-price equilibrium is not necessarily optimal. Thus, typically, the seller will prefer to use the rigid-price strategy. A learning process allowing the seller to screen the population of buyers is derived, and the existence of a switching point between the two regimes (i.e. price rigidity and price flexibility) is shown.
Abstract:
In some markets, such as the market for drugs or for financial services, sellers have better information than buyers regarding the match between the buyer's needs and the good's actual characteristics. Depending on the market structure, this may lead to conflicts of interest and/or the underprovision of information by the seller. This paper studies this issue in the market for financial services. The analysis presents a new model of competition between banks, as banks' price competition influences the ensuing incentives for truthful information revelation. We compare two different firm structures: specialized banking, where financial institutions provide a unique financial product, and one-stop banking, where a financial institution is able to provide several financial products which are horizontally differentiated. We show first that, although conflicts of interest may prevent information disclosure under monopoly, competition forces full information provision for sufficiently high reputation costs. Second, in the presence of market power, one-stop banks will use information strategically to increase product differentiation and therefore will always provide reliable information and charge higher prices than specialized banks, thus providing a new justification for the creation of one-stop banks. Finally, we show that, if independent financial advisers are able to provide reliable information, this increases product differentiation and therefore market power, so that it is in the interest of financial intermediaries to promote external independent financial advice.
Abstract:
Whereas much literature exists on choice overload, little is known about the effects of the number of alternatives in donation decisions. How does this affect both the size and the distribution of donations? We hypothesize that donations are affected by the reputation of recipients and increase with their number, albeit at a decreasing rate. Allocations to recipients reflect different concepts of fairness: equity and equality. Both may be employed but, since they differ in cognitive and emotional costs, the number of recipients is important. Using a cognitive (emotional) argument, distributions become more uniform (skewed) as numbers increase. In a survey, respondents indicated how they would donate lottery winnings of 50 euros. Results indicated, first, that more was donated to NGOs that respondents knew better. Second, total donations increased with the number of recipients, albeit at a decreasing rate. Third, distributions of donations became more skewed as numbers increased. We comment on theoretical and practical implications.
Abstract:
This work seeks to analyze how digital media can be used to help manage Corporate Reputation, in order to aid understanding of the phenomenon and to address it in practice from frames of reference developed and validated in academia. First, a qualitative methodology and a content analysis are used to examine the main digital media (websites, blogs, Facebook and Twitter) in terms of their potential for generating dialogue between organizations and their stakeholders. The digital media of 5 companies in the Top 20 of the 2012 RepTrak Ranking are analyzed in depth. Second, an analysis of these digital media is proposed along the seven dimensions that drive Corporate Reputation, following the RepTrak methodology of the Reputation Institute.
Abstract:
The research deals with the different mechanisms that cities and regions use to improve their image. In this respect, the work focuses on the city brand, city marketing and the different communication models used to enhance the reputation of territories. The second part of the research consists of a study of the Barcelona city brand in the areas of business, the knowledge society, tourism, sustainability and quality of life, and culture. The purpose of this research is therefore to understand which actions the main actors undertake to encourage business in Barcelona, and to discover which initiatives have been taken to strengthen the knowledge economy, tourism, sustainability and quality of life, and the cultural industry in the city of Barcelona.
Abstract:
As the nation’s leading producer of ethanol and biodiesel, Iowa is building upon its national reputation as an innovative renewable fuel and energy leader by aggressively pursuing more wind energy production. We invite you to take a closer look at Iowa as we harness the winds of renewable energy.
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
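Part II of the series describes the actual comparison algorithms; as a rough, self-contained illustration of what comparing ink samples "in an objective and automated way" can look like, the sketch below scores HPTLC-style intensity profiles with a Pearson correlation and ranks a small in-memory reference collection against a questioned sample. The data, the profile representation and the choice of metric are assumptions for illustration only.

```python
# Rough illustration (not Neumann & Margot's algorithms): rank reference inks in a
# small in-memory "library" by their similarity to a questioned sample, where each
# sample is represented as an intensity profile sampled at the same points.
import numpy as np

def similarity(profile_a, profile_b):
    """Pearson correlation between two equally sampled intensity profiles."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical reference collection (values are made up for the example).
library = {
    "ink_A": [0.10, 0.80, 0.30, 0.05, 0.60],
    "ink_B": [0.70, 0.20, 0.10, 0.90, 0.30],
    "ink_C": [0.12, 0.75, 0.35, 0.10, 0.55],
}
questioned = [0.11, 0.78, 0.32, 0.07, 0.58]

for name, profile in sorted(library.items(),
                            key=lambda kv: similarity(questioned, kv[1]),
                            reverse=True):
    print(name, round(similarity(questioned, profile), 3))

# In the papers, scores of this kind feed a probabilistic model (rather than a fixed
# threshold) so that the evidential value of a correspondence can be reported transparently.
```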
Abstract:
The identity [r]evolution is happening. Who are you, who am I in the information society? In recent years, the convergence of several factors - technological, political, economic - has accelerated a fundamental change in our networked world. On a technological level, information becomes easier to gather, to store, to exchange and to process. The belief that more information brings more security has been a strong political driver to promote information gathering since September 11. Profiling aims to transform information into knowledge in order to anticipate a person's behaviour, needs or preferences. It can lead to categorizations according to some specific risk criteria, for example, or to direct and personalized marketing. As a consequence, new forms of identities appear. They are not necessarily related to our names anymore. They are based on information, on traces that we leave when we act or interact, when we go somewhere or just stay in one place, or even sometimes when we make a choice. They are related to the SIM cards of our mobile phones, to our credit card numbers, to the pseudonyms that we use on the Internet, to our email addresses, to the IP addresses of our computers, to our profiles... Like traditional identities, these new forms of identities can allow us to distinguish an individual within a group of people, or describe this person as belonging to a community or a category. How far have we moved through this process? The identity [r]evolution is already becoming part of our daily lives. People are eager to share information with their "friends" in social networks like Facebook, in chat rooms, or in Second Life. Customers take advantage of the numerous bonus cards that are made available. Video surveillance is becoming the rule. In several countries, traditional ID documents are being replaced by biometric passports with RFID technologies. This raises several privacy issues and might actually even result in changing the perception of the concept of privacy itself, in particular by the younger generation. In the information society, our (partial) identities become the illusory masks that we choose - or that we are assigned - to interact and communicate with each other. Rights, obligations, responsibilities, even reputation are increasingly associated with these masks. On the one hand, these masks become the key to access restricted information and to use services. On the other hand, in case of fraud or negative reputation, the owner of such a mask can be penalized: doors remain closed, access to services is denied. Hence the current worrying growth of impersonation, identity theft and other identity-related crimes. Where is the path of the identity [r]evolution leading us? The booklet gives a glimpse of possible scenarios in the field of identity.
Abstract:
The topic of cardiorespiratory interactions is of extreme importance to the practicing intensivist. It also has a reputation for being intellectually challenging, due in part to the enormous volume of relevant, at times contradictory literature. Another source of difficulty is the need to simultaneously consider the interrelated functioning of several organ systems (not necessarily limited to the heart and lung), in other words, to adopt a systemic (as opposed to analytic) point of view. We believe that the proper understanding of a few simple physiological concepts is of great help in organizing knowledge in this field. The first part of this review will be devoted to demonstrating this point. The second part, to be published in a coming issue of Intensive Care Medicine, will apply these concepts to clinical situations. We hope that this text will be of some use, especially to intensivists in training, to demystify a field that many find intimidating.