81 results for Video Game Industry
at Université de Lausanne, Switzerland
Abstract:
There is no agreement about the distinction between pathological, excessive, and normal gaming. The present study compared two classifications for defining pathological gaming: the polythetic format (gamers who met at least half of the criteria) and the monothetic format (gamers who met all criteria). Associations with mental health and social issues were examined to assess differences between subgroups of gamers. A representative sample of 5,663 young Swiss men filled in a questionnaire as part of the ongoing Cohort Study on Substance Use Risk Factors (C-SURF). Game use was assessed with the Game Addiction Scale. Mental, social, and physical factors (depression, anxiety, aggressiveness, physical and mental health, social and health consequences), gambling, and substance use (illicit drug use, alcohol dependence, and problematic cannabis use) were also assessed. The results indicated that monothetic gamers shared problems with polythetic gamers but were even more prone to mental health issues (depression, anxiety, and aggressiveness) and more vulnerable to other dependencies such as substance use, alcohol dependence, or gambling. A second analysis using latent class analysis confirmed the distinction between monothetic and polythetic gamers. These findings support the use of a monothetic format to diagnose pathological gaming and to differentiate it from excessive gaming.
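As a minimal illustration of the two scoring rules described above (not the study's actual scoring code), the classification reduces to a criteria count over the seven Game Addiction Scale items; the endorsement cut-off of 3 used here is an assumption for the example only.

```python
# Minimal sketch of the polythetic vs. monothetic formats compared in the study.
# The endorsement cut-off is illustrative, not the study's choice.

def met_criteria(item_scores, endorsement_cutoff=3):
    """Count how many of the 7 GAS items are endorsed at or above the cut-off."""
    return sum(1 for score in item_scores if score >= endorsement_cutoff)

def classify_gamer(item_scores):
    """Return the gamer's group under the two formats compared in the study."""
    n_met = met_criteria(item_scores)
    n_items = len(item_scores)          # 7 for the Game Addiction Scale
    if n_met == n_items:
        return "monothetic"             # all criteria met
    if n_met >= n_items / 2:
        return "polythetic"             # at least half of the criteria met
    return "non-problematic"

# Example: a gamer endorsing 4 of the 7 items is polythetic but not monothetic.
print(classify_gamer([4, 4, 3, 3, 1, 2, 1]))   # -> "polythetic"
```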
Abstract:
Background: In children, video game experience improves spatial performance, a predictor of surgical performance. This study aimed to compare the laparoscopic virtual reality (VR) task performance of children with different levels of video game experience and of surgical residents. Participants and methods: A total of 32 children (8.4 to 12.1 years), 20 residents, and 14 board-certified surgeons (total n = 66) performed several VR tasks and 2 conventional tasks (cube/spatial and pegboard/fine motor). Performance was compared between the groups (primary outcome). VR performance was correlated with conventional task performance (secondary outcome). Results: The lowest VR performance was found in children with low video game experience, followed by those with high video game experience, residents, and board-certified surgeons. VR performance correlated well with the spatial test and moderately with the fine motor test. Conclusions: Computer games can be considered not only pure entertainment; they may also contribute to the development of skills relevant to adequate performance in VR laparoscopic tasks. Spatial skills are relevant to VR laparoscopic task performance.
Abstract:
The self is a polysemic notion that enjoys a relative consensus across several fields, including developmental psychology. It accounts for the capacity to experience oneself as the same over time and to distinguish the "I" that observes from the "me" that is observed. It guarantees a more or less coherent sense of self over time, despite the changes that occur in the course of life, contributing to coherence and providing a sort of Ariadne's thread in memory. The self combines processes of reflexivity and intersubjectivity. We analysed three of its functional components, working memory, episodic memory, and narration, using an experimental protocol documenting its ontogeny in children aged 6 to 9 years (n = 24, divided into two groups of 6-7 and 8-9 year-olds). We designed the "elf computer game" ("jeu informatique du lutin"), which offers a semi-guided journey through an imaginary world whose landscape changes with the seasons, in which the children move an elf in order to free a princess from a mischievous wizard. It is a self-narrative that gives meaning to the temporalities and spaces to which the events refer. Two weeks after this "adventure", the children's narration of their episodic memories of the story was collected, and the narrated episode was assessed for its coherence and continuity; a non-verbal visuospatial working memory test was also administered. Developmental differences affected the narrative dimensions of the memory of the game episode as well as the efficiency of visuospatial working memory. These developments reflect an increase in the "temporal thickness of consciousness" between 6 and 9 years of age. The thickness of consciousness fundamentally refers to the capacity of the self to live time in a cyclicity including the past, the present, and the anticipated future. The development observed broadens the possibilities of linking memories and future scenarios, as well as of making sense of relations with others and with oneself.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because they cost significantly less than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA, and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike the CISC processor business, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface, and application market, which give customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet market formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies, both in software and in hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based, or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition among incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents, and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux, and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung, and others, has created new markets for personal computers, smartphones, and tablets, and will eventually also affect industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Among the negative consequences of video gaming disorder, decreased participation in sport and exercise has received little attention. This study aimed to assess the longitudinal association between video gaming disorder and the level of sport and exercise in emerging adult men. A questionnaire was completed at baseline and 15-month follow-up by a representative national sample of 4,933 respondents. The seven items of the Game Addiction Scale were used to construct a latent variable representing video gaming disorder. Level of sport and exercise was also self-reported. Cross-lagged path modeling indicated a reciprocal causality between video gaming disorder and the level of sport and exercise, even after adjusting for a large set of confounders. These findings support the need for better promotion of sport and exercise among emerging adults in order to contribute to the prevention of video gaming disorder, and to raise the level of sport and exercise activity in addicted gamers.
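The cross-lagged structure can be illustrated with a deliberately simplified sketch: gaming disorder is approximated here by an observed GAS sum score rather than the latent variable used in the study, the confounder set is reduced to a single placeholder, and two separate lagged OLS regressions stand in for the joint cross-lagged path model. The file and column names (`csurf_waves.csv`, `gas_t0`, `sport_t0`, ...) are hypothetical.

```python
# Simplified cross-lagged sketch (two separate lagged regressions), not the
# study's latent-variable path model; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("csurf_waves.csv")   # hypothetical data with baseline (t0) and follow-up (t1) measures

# Path a: does baseline gaming disorder predict follow-up sport/exercise?
sport_model = smf.ols("sport_t1 ~ sport_t0 + gas_t0 + age_t0", data=df).fit()

# Path b: does baseline sport/exercise predict follow-up gaming disorder?
gaming_model = smf.ols("gas_t1 ~ gas_t0 + sport_t0 + age_t0", data=df).fit()

# A reciprocal association shows up as both cross-lagged coefficients
# (gas_t0 in the first model, sport_t0 in the second) being non-zero.
print(sport_model.params["gas_t0"], gaming_model.params["sport_t0"])
```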
Abstract:
INTRODUCTION: Although long-term video-EEG monitoring (LVEM) is routinely used to investigate paroxysmal events, short-term video-EEG monitoring (SVEM) lasting <24 h is increasingly recognized as a cost-effective tool. Since relatively few studies have addressed the yield of SVEM among different diagnostic groups, we undertook the present study to investigate this aspect. METHODS: We retrospectively analyzed 226 consecutive SVEM recordings over 6 years. All patients were referred because routine EEGs were inconclusive. Patients were classified into 3 suspected diagnostic groups: (1) epileptic seizures, (2) psychogenic nonepileptic seizures (PNESs), and (3) other or undetermined diagnoses. We assessed recording lengths, interictal epileptiform discharges, epileptic seizures, PNESs, and the definitive diagnoses obtained after SVEM. RESULTS: The mean age was 34 (±18.7) years, and the median recording length was 18.6 h. Of the 127 patients referred for suspected epilepsy, 73 had a diagnosis of epilepsy, none had a diagnosis of PNESs, and 54 had other or undetermined diagnoses post-SVEM. Of the 24 patients with suspected PNESs pre-SVEM, 1 had epilepsy, 12 had PNESs, and 11 had other or undetermined diagnoses. Of the 75 patients with other diagnoses pre-SVEM, 17 had epilepsy, 11 had PNESs, and 47 had other or undetermined diagnoses. After SVEM, 15 patients had definite diagnoses other than epilepsy or PNESs, while in 96 patients the diagnosis remained unclear. Overall, a definitive diagnosis could be reached in 129/226 (57%) patients. CONCLUSIONS: This study demonstrates that in nearly 3 out of 5 patients without a definitive diagnosis after routine EEG, SVEM allowed a diagnosis to be reached. This procedure should be encouraged in this setting, given its time-effectiveness compared with LVEM.
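The overall yield quoted above can be re-derived from the per-group counts; the short sketch below simply reproduces that arithmetic with the numbers as reported.

```python
# Post-SVEM outcomes per pre-SVEM referral group, as reported in the abstract above.
outcomes = {
    "suspected epilepsy":           {"epilepsy": 73, "PNES": 0,  "other/undetermined": 54},
    "suspected PNES":               {"epilepsy": 1,  "PNES": 12, "other/undetermined": 11},
    "other/undetermined pre-SVEM":  {"epilepsy": 17, "PNES": 11, "other/undetermined": 47},
}

total = sum(sum(group.values()) for group in outcomes.values())            # 226 patients
definite_other = 15                                                        # definite diagnoses other than epilepsy or PNES
definite = sum(g["epilepsy"] + g["PNES"] for g in outcomes.values()) + definite_other
print(f"definitive diagnosis in {definite}/{total} patients ({definite / total:.0%})")   # 129/226 (57%)
```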
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far. In the case of imperfect information, by contrast, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief, and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology is to characterize existing game-theoretic solution concepts in terms of epistemic assumptions as well as to propose novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
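Since backward induction recurs throughout the abstract, a compact illustration may help: the sketch below solves a finite perfect-information game tree by backward induction. The tree, actions, and payoffs are invented for the example and are not taken from the thesis.

```python
# Backward induction on a finite perfect-information game tree (illustrative example only).

class Node:
    def __init__(self, player=None, children=None, payoffs=None):
        self.player = player              # index of the player who moves at this node (None at a leaf)
        self.children = children or {}    # action label -> child Node
        self.payoffs = payoffs            # tuple of utilities at a leaf, else None

def backward_induction(node):
    """Return the payoff vector reached when every player chooses optimally at her own nodes."""
    if node.payoffs is not None:          # leaf: payoffs are given directly
        return node.payoffs
    # The player moving here picks the action whose subgame value maximises her own payoff.
    return max(
        (backward_induction(child) for child in node.children.values()),
        key=lambda payoffs: payoffs[node.player],
    )

# Two-stage example: player 0 moves first ("L" or "R"), then player 1 ("l" or "r").
game = Node(player=0, children={
    "L": Node(player=1, children={"l": Node(payoffs=(2, 1)), "r": Node(payoffs=(0, 0))}),
    "R": Node(player=1, children={"l": Node(payoffs=(1, 2)), "r": Node(payoffs=(3, 0))}),
})
print(backward_induction(game))   # -> (2, 1): player 1 would play "l" after either move, so player 0 plays "L"
```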
Abstract:
We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time, and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled for the subjects' reliance on egocentric and/or allocentric frames of reference for the elaboration of a spatial representation of the image locations enabling maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, as compared to when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that changes in viewpoint produced by the movement of images placed in front of a stationary subject are not equivalent to the movement of the subject around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments on spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.
Abstract:
Nandrolone (19-nortestosterone) is an anabolic steroid widely used in sports where strength plays an essential role. Once nandrolone has been metabolised, two major metabolites are excreted in urine: 19-norandrosterone (NA) and 19-noretiocholanolone (NE). In 1997, in France, quite a few sportsmen had concentrations of 19-norandrosterone very close to the IOC cut-off limit (2 ng/ml). At that time, a debate took place about the capability of the human male body to produce these metabolites by itself without any intake of nandrolone or related compounds. The International Football Federation (FIFA) was very concerned with this issue, especially because the World Cup was about to start in France. In this respect, a statistical study was conducted with all football players from the first and second divisions of the Swiss Football National League. All players gave a urine sample after effort, and around 6% of them showed traces of 19-norandrosterone. These results were compared with those of amateur football players (control group): around 6% of them had very small amounts of 19-norandrosterone and/or 19-noretiocholanolone in urine after effort, whereas none of them had detectable traces of either metabolite before effort. The origin of these compounds in urine after strenuous physical activity is still unknown, but three hypotheses can be put forward. First, an endogenous production of nandrolone metabolites takes place. Second, nandrolone metabolites are released from the fatty tissues after an intake of nandrolone, related compounds, or contaminated nutritive supplements. Finally, the sportsmen may have taken something during or just before the football game.