32 results for Context Model
Abstract:
Introduction: With the establishment of the new Athlete's Biological Passport antidoping programme, novel guidelines have been introduced to guarantee results beyond reproach. In this context, we investigated the effect of storage time on the variables commonly measured for the haematological passport. We also assessed the within- and between-analyzer variation for these variables. Methods: Blood samples were obtained from top-level male professional cyclists (27 samples for the first part of the study and 102 for the second part) taking part in major stage races. After collection, they were transported under refrigerated conditions (2 °C < T < 12 °C), delivered to the antidoping laboratory, analysed and then stored at approximately 4 °C for analysis at different time points up to 72 h after delivery. A mixed-model procedure was used to determine the stability of the different variables. Results: As expected, haemoglobin concentration was not affected by storage and remained stable for at least 72 h. Under the conditions of our investigation, the reticulocyte percentage showed much better stability than previously published data (> 48 h), and the technical comparison of the haematology analyzers yielded excellent results. Conclusion: Our data clearly demonstrate that as long as the World Anti-Doping Agency's guidelines are followed rigorously, all blood results reach the quality level required in the antidoping context.
Abstract:
The methylation status of the O(6)-methylguanine-DNA methyltransferase (MGMT) gene is an important predictive biomarker for benefit from alkylating agent therapy in glioblastoma. Recent studies in anaplastic glioma suggest a prognostic value for MGMT methylation. Investigation of the pathogenetic and epigenetic features of this intriguingly distinct behavior requires accurate MGMT classification to assess high-throughput molecular databases. Promoter methylation-mediated gene silencing is strongly dependent on the location of the methylated CpGs, complicating classification. Using the HumanMethylation450 (HM-450K) BeadChip, interrogating 176 CpGs annotated for the MGMT gene, with 14 located in the promoter, two distinct regions in the CpG island of the promoter were identified with high importance for gene silencing and outcome prediction. A logistic regression model (MGMT-STP27) comprising probes cg1243587 and cg12981137 provided good classification properties and prognostic value (kappa = 0.85; log-rank p < 0.001) using a training set of 63 glioblastomas from homogeneously treated patients, for whom MGMT methylation was previously shown to be predictive for outcome based on classification by methylation-specific PCR. MGMT-STP27 was successfully validated in an independent cohort of chemo-radiotherapy-treated glioblastoma patients (n = 50; kappa = 0.88; outcome, log-rank p < 0.001). A lower prevalence of MGMT methylation among CpG island methylator phenotype (CIMP)-positive tumors was found in glioblastomas from The Cancer Genome Atlas than in low-grade and anaplastic glioma cohorts, while in CIMP-negative gliomas MGMT was classified as methylated in approximately 50 % regardless of tumor grade. The proposed MGMT-STP27 prediction model allows mining of datasets derived on the HM-450K or HM-27K BeadChip to explore effects of the distinct epigenetic context of MGMT methylation suspected to modulate treatment resistance in different tumor types.
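A two-probe logistic classifier of the MGMT-STP27 type can be sketched as follows. The simulated methylation values, effect sizes and the use of scikit-learn are assumptions for illustration; the published model's fitted coefficients are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical training set of 63 tumours: methylation signal of the two
# probes used by MGMT-STP27 (cg1243587 and cg12981137, per the abstract);
# promoter methylation shifts both probe values upward.
n = 63
methylated = rng.random(n) < 0.5
X = np.where(methylated[:, None],
             rng.normal(2.0, 1.0, (n, 2)),    # methylated promoters
             rng.normal(-2.0, 1.0, (n, 2)))   # unmethylated promoters
y = methylated.astype(int)

# Two-probe logistic regression, analogous in spirit to MGMT-STP27.
clf = LogisticRegression().fit(X, y)
p_methylated = clf.predict_proba(X)[:, 1]    # P(methylated) per tumour
print(clf.score(X, y))
```

With two informative probes, such a model yields a per-tumour methylation probability that can be thresholded into a binary methylated/unmethylated call.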
Abstract:
In this study we propose an evaluation of the angular effects altering the spectral response of land cover across multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by examining the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
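One common kernel-based distance between pixel distributions is the (squared) maximum mean discrepancy. The abstract does not state which measure the study used, so the following is a generic sketch with hypothetical spectral data.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.05):
    """Biased estimate of the squared maximum mean discrepancy between
    samples X and Y under the RBF kernel k(a, b) = exp(-gamma ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(2)
# Hypothetical 8-band pixel spectra from two acquisition angles; the
# off-nadir distribution is shifted, mimicking angular effects on reflectance.
nadir = rng.normal(0.0, 1.0, (200, 8))
nadir_repeat = rng.normal(0.0, 1.0, (200, 8))
off_nadir = rng.normal(0.6, 1.0, (200, 8))

same = rbf_mmd2(nadir, nadir_repeat)
shifted = rbf_mmd2(nadir, off_nadir)
print(same, shifted)
```

A larger distance between angles signals a distribution shift that will degrade classifier portability unless it is compensated by angle-invariant preprocessing.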
Abstract:
This research was conducted in the context of the project IRIS 8A Health and Society (2002-2008) and financially supported by the University of Lausanne. It aimed at developing a model based on elderly people's experience, which allowed us to develop a "Portrait evaluation" of fear of falling using their examples and words. It is a very simple evaluation that can be used by professionals, but also by elderly people themselves. The "Portrait evaluation" and the user's guide are freely accessible, but we would very much appreciate knowing whether other people or scientists have used it, and to collect their comments (contact: Chantal.Piot-Ziegler@unil.ch). The purpose of this study is to create a model grounded in elderly people's experience, allowing the development of an original instrument to evaluate fear of falling (FOF). In a previous study, 58 semi-structured interviews were conducted with community-dwelling elderly people. The qualitative thematic analysis showed that fear of falling was defined through the functional, social and psychological long-term consequences of falls (Piot-Ziegler et al., 2007). In order to reveal patterns in the expression of fear of falling, an original qualitative thematic pattern analysis (QUAlitative Pattern Analysis, QUAPA) was developed and applied to these interviews. The results of this analysis show an internal coherence across the three dimensions (functional, social and psychological). Four different patterns were found, corresponding to four degrees of fear of falling; they are formalized in a fear-of-falling intensity model. This model leads to a portrait evaluation for fallers and non-fallers. The evaluation must still be tested on large samples of elderly people living in different environments. It presents an original alternative to the concept of self-efficacy for evaluating fear of falling in older people. The model of FOF presented in this article is grounded in elderly people's experience. It gives an experiential description of the three dimensions constitutive of FOF and of their evolution as fear increases, and defines an evaluation tool using situations and wordings based on elderly people's discourse.
Abstract:
There is a lack of dedicated tools for business model design at a strategic level. However, in today's economic world the ability to quickly reinvent a company's business model is essential to staying competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using design science research methodology, a series of techniques and prototypes have been designed and evaluated to offer solutions to the problem. The work is a collection of articles which can be grouped into three parts. The first part establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the way in which CAD can contribute to the design activity. The second part builds on this by proposing new techniques and tools which support the elicitation, evaluation (assessment) and evolution of business model designs with CAD. This includes features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features adapted to the business model proficiency level of the users. A new way to describe and visualize multiple versions of a business model, and thereby help address the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas, such as an intermediary model which helps IT alignment by connecting business model and enterprise architecture, and a business model pattern for privacy in a mobile environment, using privacy as a key value proposition. The prototyped techniques and the proposition for using CAD tools in business model design will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.
Abstract:
Cataract surgery is a common ocular surgical procedure consisting of the implantation of an artificial intraocular lens (IOL) to replace the ageing, dystrophic or damaged natural one. The management of postoperative ocular inflammation is a major challenge, especially in the context of pre-existing uveitis. The association of the implanted IOL with a drug delivery system (DDS) allows the prolonged intraocular release of anti-inflammatory agents after surgery. Thus IOL-DDS represents an "all in one" strategy that simultaneously addresses both cataract and inflammation issues. Polymeric DDS loaded with two model anti-inflammatory drugs (triamcinolone acetonide (TA) and cyclosporine A (CsA)) were manufactured in a novel way and tested for their efficiency in the management of intraocular inflammation during the 3 months following surgery. The study involved experimentally induced uveitis in rabbits. Experimental results showed that medicated DDS efficiently reduced ocular inflammation (decreased protein concentration in the aqueous humour, fewer inflammatory cells in the aqueous humour and a lower clinical score). Additionally, more than 60% of the loading dose remained in the DDS at the end of the experiment, suggesting that the system could potentially cover longer inflammatory episodes. Thus, IOL-DDS were demonstrated to inhibit intraocular inflammation for at least 3 months after cataract surgery, representing a potential novel approach to cataract surgery in eyes with pre-existing uveitis.
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; it is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected result of evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security Program are defined. On this basis, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: (1) identification of the key elements within the dimension; (2) identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; (3) identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension. The second phase concerns the evaluation of each Information Security dimension by: (1) implementing the evaluation model, based on the elements identified in the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; (2) using the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the final remarks, supporting resources and annexes. With reference to the objectives of our thesis, the final remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The supporting resources comprise the bibliographic resources that were used to elaborate and justify our approach. The annexes include all the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations.
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. The evaluation of security in general, and of information security in particular, has become for organizations not only a crucial mission to carry out, but also an increasingly complex one. At present, this evaluation is based mainly on methodologies, best practices, norms or standards which treat separately the different aspects that make up information security. We believe that this way of evaluating security is inefficient, because it does not take into account the interaction of the different dimensions and components of security with one another, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain. We identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm for security evaluation in order to satisfy the effectiveness and efficiency needs of a given organization. We then propose a model which aims to evaluate all the dimensions of security in a holistic manner, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure which takes into account all the elements of a security system or programme. We thus propose a methodological evaluation framework which considers information security from a global perspective. Structure of the thesis and topics addressed. Our document is structured in three parts. The first, entitled "The problem of information security evaluation", is composed of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional manner of evaluating security is critically analysed in order to identify the principal and invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a matter of redefining the notion of security, but of putting into perspective the dimensions, criteria and indicators to be used as a reference base in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in what constitutes the holistic character of security, as well as the constituent elements of a security reference level, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between the processes of Risk Management and Security Management, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the manner in which it meets the evaluation requirements as we previously presented them. In this chapter the underlying concepts relating to the notions of assurance and trust are analysed. On the basis of these two concepts, the structure of the evaluation model is developed to obtain a platform which offers a certain level of guarantee by relying on three evaluation attributes, namely: "assurance structure", "process quality", and "achievement of requirements and objectives". The problems associated with each of these evaluation attributes are analysed on the basis of the state of the art of research and of the literature, of the various existing methods, and of the most common norms and standards in the security field. On this basis, three different evaluation levels are constructed, namely: the assurance level, the quality level and the maturity level, which constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", is likewise composed of four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions, which are: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is the subject of a distinct chapter. For each of the dimensions, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements which constitute the basis of the evaluation: (1) identification of the key elements of the evaluation; (2) identification of the "Focus Areas" for each dimension, which represent the problem areas found in the dimension; (3) identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that contribute to resolving or reducing the impact of the risks. The second phase concerns the evaluation of each dimension previously presented. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned by: (1) relying on the elements specified during the first phase; (2) identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to each dimension, which is to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization in order to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document relates to the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and is based on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal responds to a genuine, as yet unresolved problem faced by all organizations, regardless of size and profile. It would allow them to specify their particular requirements in terms of the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can ensure that their information security is managed appropriately, thus providing a certain level of confidence in the degree of protection afforded. We have integrated into our model the best know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while providing answers to the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool stemming from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the company itself, without recourse to additional resources, and also gives it the opportunity to better govern its information security.
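The weakest-link principle that motivates the model can be sketched in a few lines. The numeric maturity scale and the plain min-aggregation below are illustrative assumptions, not ISAAM's actual scoring rules.

```python
# Illustrative sketch of weakest-link aggregation across ISAAM's four
# Information Security dimensions; the 0-5 maturity scale and the use of
# a simple minimum are hypothetical simplifications.
DIMENSIONS = ["organizational", "functional", "human", "legal"]

def overall_security_level(maturity: dict) -> int:
    """The global security level is capped by the least mature dimension."""
    return min(maturity[d] for d in DIMENSIONS)

levels = {"organizational": 4, "functional": 3, "human": 2, "legal": 4}
print(overall_security_level(levels))  # the human dimension caps the level
```

Under this reading, raising the overall level requires improving the weakest dimension first, which is exactly the behaviour a per-dimension maturity model is meant to expose.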
Abstract:
Although the precise signaling mechanisms underlying the vulnerability of some sub-populations of motoneurons in ALS remain unclear, critical factors such as metalloproteinase 9 expression, neuronal activity and endoplasmic reticulum (ER) stress have been shown to be involved. In the context of the SOD1(G93A) ALS mouse model, we previously showed that a two-fold decrease in calreticulin (CRT) occurs in the vulnerable fast motoneurons. Here, we asked to what extent the decrease in CRT levels was causative of muscle denervation and/or motoneuron degeneration. Toward this goal, a hemizygous deletion of the crt gene in SOD1(G93A) mice was generated, since the complete ablation of crt is embryonic lethal. We observed that SOD1(G93A);crt(+/-) mice display increased and earlier muscle weakness and muscle denervation compared to SOD1(G93A) mice. While CRT reduction in motoneurons leads to a strong upregulation of two factors important in motoneuron dysfunction, ER stress and mTOR activation, it does not aggravate motoneuron death. Our results underline a prevalent role for CRT levels in the early phase of muscle denervation and support CRT regulation as a potential therapeutic approach.
Abstract:
Continuous positive airway pressure, aimed at preventing pulmonary atelectasis, has been used for decades to reduce lung injury in critically ill patients. In neonatal practice, it is increasingly used worldwide as a primary form of respiratory support due to its low cost and because it reduces the need for endotracheal intubation and conventional mechanical ventilation. We studied the anesthetized rat in vivo and determined the optimal circuit design for the delivery of continuous positive airway pressure. We then investigated the effects of continuous positive airway pressure following lipopolysaccharide administration in the anesthetized rat. Whereas neither continuous positive airway pressure nor lipopolysaccharide alone caused lung injury, continuous positive airway pressure applied following intravenous lipopolysaccharide resulted in increased microvascular permeability, elevated cytokine protein and mRNA production, and impaired static compliance. A dose-response relationship was demonstrated whereby higher levels of continuous positive airway pressure (up to 6 cmH(2)O) caused greater lung injury. Lung injury was attenuated by pretreatment with dexamethasone. These data demonstrate that despite optimal circuit design, continuous positive airway pressure causes significant lung injury (proportional to the airway pressure) in the setting of circulating lipopolysaccharide. Although we would currently avoid direct extrapolation of these findings to clinical practice, we believe that in the context of increasing clinical use, these data are grounds for concern and warrant further investigation.
Abstract:
The goal of this dissertation is to find and provide the basis for a managerial tool that allows a firm to easily express its business logic. The methodological basis for this work is design science, where the researcher builds an artifact to solve a specific problem. In this case the aim is to provide an ontology that makes it possible to make a firm's business model explicit. In other words, the proposed artifact helps a firm to formally describe its value proposition, its customers, the relationship with them, the necessary intra- and inter-firm infrastructure and its profit model. Such an ontology is relevant because until now there has been no model that expresses a company's global business logic from a pure business point of view. Previous models essentially take an organizational or process perspective or cover only parts of a firm's business logic. The four main pillars of the ontology, which are inspired by management science and enterprise and process modeling, are product, customer interface, infrastructure and finance. The ontology is validated by case studies and a panel of experts and managers. The dissertation also provides a software prototype to capture a company's business model in an information system. The last part of the thesis consists of a demonstration of the value of the ontology in business strategy and Information Systems (IS) alignment. Structure of this thesis: The dissertation is structured in nine chapters. Chapter 1 presents the motivations of this research, the research methodology with which the goals shall be achieved, and why this dissertation presents a contribution to research. Chapter 2 investigates the origins, the term and the concept of business models. It defines what is meant by business models in this dissertation and how they are situated in the context of the firm. In addition this chapter outlines the possible uses of the business model concept.
Chapter 3 gives an overview of the research done in the field of business models and enterprise ontologies. Chapter 4 introduces the major contribution of this dissertation: the business model ontology. In this part of the thesis the elements, attributes and relationships of the ontology are explained and described in detail. Chapter 5 presents a case study of the Montreux Jazz Festival, whose business model was captured by applying the structure and concepts of the ontology. In fact, it gives an impression of what a business model description based on the ontology looks like. Chapter 6 shows an instantiation of the ontology in a prototype tool: the Business Model Modelling Language BM2L. This is an XML-based description language that allows one to capture and describe the business model of a firm and has a large potential for further applications. Chapter 7 is about the evaluation of the business model ontology. The evaluation builds on a literature review, a set of interviews with practitioners and case studies. Chapter 8 gives an outlook on possible future research and applications of the business model ontology. The main areas of interest are the alignment of business and information technology (IT)/information systems (IS) and business model comparison. Finally, chapter 9 presents some conclusions.
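A minimal sketch of the ontology's four pillars as a data structure, using hypothetical attribute names and a festival-flavoured example; the real ontology and BM2L define far richer elements, attributes and relationships than shown here.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified rendering of the ontology's four pillars;
# the attribute names below are illustrative, not the ontology's actual
# elements.
@dataclass
class BusinessModel:
    product: dict             # value proposition
    customer_interface: dict  # target customers, channels, relationships
    infrastructure: dict      # capabilities, partnerships, value configuration
    finance: dict             # revenue model and cost structure

bm = BusinessModel(
    product={"value_proposition": "live jazz festival experience"},
    customer_interface={"target_customer": "music fans", "channel": "ticketing"},
    infrastructure={"partnership": "sponsors", "capability": "event production"},
    finance={"revenues": ["tickets", "sponsoring"], "costs": ["artists", "venue"]},
)
print(bm.product["value_proposition"])
```

Capturing a business model in such a structured form is what makes machine-readable serializations (as BM2L does with XML) and cross-model comparison possible.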
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Proposing a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other comprised university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles emerging in the process concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, obstacles were related to the computer program, and affordances emerged from the combination of experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.
Abstract:
Many species are able to learn to associate behaviours with rewards, as this confers fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals who use information about payoffs associated with untried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which is the evolutionarily stable learning rule under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse, through stochastic approximation theory and simulations, the learning dynamics on the behavioural timescale, and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also arise in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
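The contrast the abstract draws is between reinforcing only the action actually tried (trial-and-error learning) and also updating propensities for untried actions using hypothetical payoffs. The sketch below illustrates the first kind of rule for a symmetric two-action repeated game; the payoff matrix, learning rate, and number of rounds are illustrative assumptions, not values or code from the paper.

```python
import random

# Illustrative payoff matrix for a symmetric two-action game
# (a Prisoner's Dilemma, chosen here only as an example).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 4, ("D", "D"): 1}

def play(rounds=5000, rate=0.1, seed=0):
    """Two trial-and-error learners playing a repeated two-action game."""
    rng = random.Random(seed)
    # One propensity per action per player; actions are sampled in
    # proportion to propensity, and only realized payoffs are used.
    prop = [{"C": 1.0, "D": 1.0}, {"C": 1.0, "D": 1.0}]
    for _ in range(rounds):
        acts = []
        for p in prop:
            total = p["C"] + p["D"]
            acts.append("C" if rng.random() < p["C"] / total else "D")
        for i, p in enumerate(prop):
            payoff = PAYOFF[(acts[i], acts[1 - i])]
            # Trial-and-error rule: reinforce only the action actually
            # played; no hypothetical update of the untried action.
            p[acts[i]] += rate * payoff
    return prop
```

A hypothetical-reinforcement learner would differ only in the update step, crediting the untried action with the payoff it would have earned against the partner's realized action.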
Abstract:
Evidence collected from smartphone users shows a growing desire for the personalization offered by services for mobile devices. However, the need to accurately identify users' contexts has important implications for users' privacy, and it increases the amount of trust that users are asked to place in service providers. In this paper, we introduce a model that describes the role of personalization and control in users' assessment of the costs and benefits associated with the disclosure of private information. We present an instantiation of this model, a context-aware application for smartphones based on the Android operating system, in which users' private information is protected. Focus group interviews were conducted to examine users' privacy concerns before and after using our application. The results obtained confirm the utility of our artifact and provide support for our theoretical model, which extends previous literature on privacy calculus and users' acceptance of context-aware technology.
Abstract:
Diagrams and tools help to support task modelling in engineering and process management. Unfortunately, they are ill-suited to a business context at a strategic level, because of the flexibility needed for creative thinking and user-friendly interactions. We propose a tool that bridges the gap between freedom of action, encouraging creativity, and constraints, allowing validation and advanced features.
Abstract:
Bandura (1986) developed the concept of moral disengagement to explain how individuals can engage in detrimental behavior while experiencing low levels of negative feelings such as guilt. Most of the research conducted on moral disengagement has investigated it as a global concept (e.g., Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Moore, Detert, Klebe Treviño, Baker, & Mayer, 2012), whereas Bandura (1986, 1990) initially developed eight distinct mechanisms of moral disengagement grouped into four categories representing the various means through which moral disengagement can operate. In our work, we propose to develop measures of this concept based on its categories, namely rightness of actions, rejection of personal responsibility, distortion of negative consequences, and negative perception of the victims, that are not specific to a particular area of research. Through these measures, we aim to better understand the cognitive process leading individuals to behave unethically by investigating which category plays a role in explaining unethical behavior depending on the situation individuals are in. To this end, we conducted five studies to develop the measures and to test their predictive validity. In particular, we assessed the ability of the newly developed measures to predict two types of unethical behavior, i.e. discriminatory behavior and cheating behavior. Confirmatory factor analyses demonstrated a good fit of the model, and the findings generally supported our predictions.