20 results for Digital Games
in Helda - Digital Repository of University of Helsinki
Abstract:
The main goal of this study was to explore the experiences induced by playing digital games (i.e. the meanings of playing). In addition, the study aimed at structuring the larger entities of gaming experience, using both theory-driven and data-grounded approaches. Gaming experiences have not previously been explored as a whole, and they have rarely been considered on the basis of psychological theories and studies. The secondary goal of this study was to clarify whether the individual meanings of playing are connected with flow experience in an occasional gaming situation. Flow is an enjoyable experience, and activities that induce flow are usually gladly repeated. Flow has previously proved to be an essential concept in the context of playing, but the relations between the meanings of playing and flow have not been studied. The relations between gender and gaming experiences were examined throughout the study, as was the relationship between gaming frequency and experiences. The study was divided into two sections, the first of which was designed around the main goals. Its data were gathered with an Internet questionnaire. The second section covered themes formulated on the basis of the secondary goal: the participants played a driving game for 40 minutes and then filled in a questionnaire measuring flow-related experiences. In both sections, the participants were mainly young Finnish adults. All the participants in the second section (n = 60) had already participated in the first section (n = 267). Both qualitative and quantitative research techniques were used. In the first section, freely described gaming experiences were classified using grounded theory. The most common categories were then further classified into the basic structures of gaming experience, some according to existing theories of experience structure and some according to the data (i.e. grounded theory). In the second section, flow constructs were measured and used as grouping variables in a cluster analysis, and three meaningful groups were compared with respect to the meanings of gaming explored in the first section. The descriptions of gaming experiences were classified into four main categories: conceptions of the gaming process, emotions, motivations and focused attention. All the theory-driven categories were found in the data. This frame of reference can be utilized in the future when the reliability and validity of existing methods for measuring gaming experiences are evaluated or new methods are developed. The connection between the individual relevance of gaming and flow was minor. However, when the scope was narrowed to the relations between the primary meanings of playing and flow, it was noticed that attributing enjoyment to gaming did not lead to the strongest flow experiences, which suggests that the issue should be studied further. As a whole, this study shows that gamer-related research from numerous vantage points can benefit from concentrating on gaming experiences.
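As an illustration of the grouping step described above, the following minimal Python sketch clusters hypothetical flow-construct scores for 60 participants into three groups with k-means. The number of constructs, the score distribution and the choice of k-means are assumptions for illustration, not the study's actual instrument or method.

```python
# Minimal sketch, assuming k-means on three hypothetical flow constructs;
# the abstract states only that flow constructs served as grouping
# variables in a cluster analysis yielding three meaningful groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
flow_scores = rng.normal(loc=3.5, scale=0.8, size=(60, 3))  # stand-in data

scaled = StandardScaler().fit_transform(flow_scores)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(np.bincount(groups))  # participants per flow-profile group
```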
Abstract:
In daily life, rich experiences evolve in every environmental and social interaction. Because experience has a strong impact on how people behave, scholars in different fields are interested in understanding what constitutes an experience. Yet even though interest in conscious experience is on the increase, there is no consensus on how such experience should be studied. Whatever approach is taken, the subjective and psychologically multidimensional nature of experience should be respected. This study endeavours to understand and evaluate conscious experiences. First I introduce a theoretical approach to psychologically based and content-oriented experience. In the experiential cycle presented here, classical psychology and orienting-environmental content are connected. This generic approach is applicable to any human-environment interaction. Here I apply the approach to entertainment virtual environments (VEs) such as digital games and develop a framework with the potential for studying experiences in VEs. The development of the methodological framework drew on subjective and objective data from experiences in the Cave Automatic Virtual Environment (CAVE) and with numerous digital games (N = 2,414). The final framework consisted of fifteen factor-analytically formed subcomponents of the sense of presence, involvement and flow. Together, these show the multidimensional experiential profile of VEs. The results present general experiential laws of VEs and show that the interface of a VE is related to (physical) presence, which psychologically means attention, perception and the cognitively evaluated realness and spatiality of the VE. The narrative of the VE elicits (social) presence and involvement and affects emotional outcomes. Psychologically, these outcomes are related to social cognition, motivation and emotion. The mechanics of a VE affect the cognitive evaluations and emotional outcomes related to flow. In addition, at the very least, user background, prior experience and use context affect the experiential variation. VEs are part of many people's lives, and many different outcomes are related to them, such as enjoyment, learning and addiction, depending on who is making the evaluation. This makes VEs societally important and psychologically fruitful to study. The approach and framework presented here contribute to our understanding of experiences in general and of VEs in particular. The research can provide VE developers with a state-of-the-art method (www.eveqgp.fi) that can be utilized whenever new product and service concepts are designed, prototyped and tested.
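To make the factor-analytic step concrete, here is a minimal sketch in the same spirit: questionnaire items are reduced to latent experiential subcomponents. The item count and the synthetic data are assumptions; the thesis itself derived fifteen subcomponents from N = 2,414 responses.

```python
# Minimal sketch, assuming 30 seven-point questionnaire items reduced to
# 15 latent factors; synthetic data stands in for the real responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(500, 30)).astype(float)

fa = FactorAnalysis(n_components=15, random_state=1).fit(responses)
loadings = fa.components_.T  # item-by-factor loading matrix, shape (30, 15)
print(loadings.shape)
```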
Abstract:
This thesis is a comparative case study of Japanese video game localization, examining the video games Sairen, Sairen 2 and Sairen Nyûtoransurêshon and the English-language localized versions of the same games as published in Scandinavia and Australia/New Zealand. All of the games were developed by Sony Computer Entertainment Inc. and published exclusively for the PlayStation 2 and PlayStation 3 consoles. The fictional world of the Sairen games draws much influence from Japanese history, as well as from popular and contemporary culture, and in doing so caters mainly to a Japanese audience. For localization, i.e. the adaptation of a product to make it accessible to users outside the market it was originally intended for, this is a challenging issue. Video games are media of entertainment, and localization practice must therefore preserve the games' effects on the players' emotions. Further, video games are digital products composed of a multitude of distinct elements, some of which are part of the game world, while others regulate the connection between the player as part of the real world and the game as a digital medium. As a result, video game localization is also a practice that has to cope with the technical restrictions inherent to the medium. The main theory used throughout the thesis is Anthony Pym's framework for localization studies, which considers the user of the localized product a defining part of the localization process. This concept presupposes that localization is an adaptation performed to make a product better suited for use in a specific reception situation. Pym also addresses the fact that certain products may resist distribution into certain reception situations because of their content, and that certain aspects of localization aim to reduce this resistance through significant alterations of the original product. While Pym developed his ideas mainly with conventional software in mind, they can also be adapted well to studying video games from a localization angle. Since modern video games are highly complex entities that often switch between interactive and non-interactive modes, Pym's ideas are adapted throughout the thesis to suit the particular elements being studied. The instances analyzed in this thesis include menu screens, video clips, in-game action and websites. The main research questions focus on how the games' rules influence localization, and how the games' fictional domain influences localization. Because so many peculiarities are inherent to the medium of the video game, other theories are introduced to complement the research at hand. These include Lawrence Venuti's discussion of foreignizing and domesticating translation methods in literary translation, and Jesper Juul's definition of games. Additionally, knowledge gathered from interviews with video game localization professionals in Japan during September and October 2009 is utilized in this study. Apart from answering the aforementioned research questions, one aim of this thesis is to enrich the still rather small field of game localization studies, and in particular the study of Japanese video games, one of Japan's most successful cultural exports.
Abstract:
This thesis examines the professionalism of Finnish television subtitlers, their translation process, and the effects of digital subtitling software on the subtitling process from the perspective of professional subtitlers. The digitalization of Finnish television has caused upheavals in the subtitling field as well, as the video material to be subtitled is now delivered to translation agencies and subtitlers in digital form. The theoretical section discusses translation and subtitling research and training in Finland, professional skill and professionalism, and translation aids. Subtitling is presented as a specialized form of translation, though it should also be noted that translation is only one stage in the subtitling process. The theoretical section concludes with a discussion of the everyday life of Finnish television subtitlers and the current state of their field: subtitlers work under a wide variety of terms of employment, and quality criteria may have to be re-evaluated. The empirical section opens by noting that surprisingly few Finnish television subtitlers have been interviewed and, drawing on Jääskeläinen's ideas, that much in the field of subtitling remains unstudied, the Finnish subtitling process in particular. The subjects of the study are translators who produce television subtitles for a living. In early winter 2008, a questionnaire was sent to subtitlers working for a Finnish translation agency specializing in subtitling; using both multiple-choice and open questions, it surveyed their professionalism, working methods, translation and subtitling process, professional pride and identity, time management, and the digital subtitling software they used. The study revealed that nearly a third of the respondents have a neutral or even negative view of their profession; what these subtitlers have in common is that all of them have less than five years of experience in the field. The majority of respondents, however, are proud to work as professionals of the Finnish language. In the questionnaire, the subtitling process was divided into a preview stage, a translation stage, a timing stage and a revision viewing stage. Among other things, the subtitlers were asked to estimate the total duration of their subtitling process. Large differences emerged in the durations, at least some of which correlated with experience. Just over half of the respondents have acquired digital subtitling software for their own use, while some still do the timing at the translation agency, partly because of the high price of the software. Digital software has brought changes to the subtitling process and working practices, as subtitlers have moved from video recorders and television sets to working on a computer alone. It is now possible to work remotely from distant locations, to alternate between translating and timing, or to pre-time first and then translate. Digital technology has thus enabled the subtitling process to change and made alternative working methods possible, but not all of these methods necessarily benefit the subtitler. The traditional subtitling process (preview, marking subtitle divisions in the script, translating and composing the subtitles, corrections and a final check viewing) still appears to be the most efficient. Although working practices differ, the overall impression is that after the initial stumbling blocks of digitalization, subtitlers' work has become more efficient.
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors in the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented as a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning together with local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the vertical error of the DEM increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged, because none of the DEM derivatives investigated in the study showed maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
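The simulation-based analysis with process convolution can be sketched briefly: spatially autocorrelated error realisations are produced by convolving white noise with a smoothing kernel, added to the DEM, and propagated through a derivative such as slope. The grid size, error magnitude and kernel width below are illustrative assumptions, not the thesis' parameters.

```python
# Minimal sketch of Monte Carlo error propagation for DEM-derived slope,
# with a process-convolution error model (smoothed white noise). All
# parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_error(shape, sigma_z, corr_cells, rng):
    """Spatially autocorrelated error field via process convolution."""
    field = gaussian_filter(rng.standard_normal(shape), sigma=corr_cells)
    return field * (sigma_z / field.std())  # rescale to the target SD

def slope_deg(dem, cell):
    """Slope in degrees from central differences."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(42)
x, y = np.meshgrid(np.linspace(0, 5000, 200), np.linspace(0, 5000, 200))
dem = 100 * np.sin(x / 800) * np.cos(y / 600)  # synthetic 25 m grid DEM

slopes = [slope_deg(dem + correlated_error(dem.shape, 1.5, 8, rng), 25.0)
          for _ in range(100)]
print(f"mean per-cell slope SD: {np.std(slopes, axis=0).mean():.3f} deg")
```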
Abstract:
In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games defining the truth of a sentence in a given logic to, for example, games on real numbers whose determinacy has important effects on the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models in modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used to do this is domain theory. Moving back to classical wellfounded set theory, we also study games on stationary sets. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets, and the second player tries to keep their intersection stationary. The game is connected to the precipitousness of the corresponding ideal. In the pressing-down game, the first player plays regressive functions defined on stationary sets, and the second player responds with a stationary set on which the function is constant, trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraïssé game. We show that it is consistent that these games are not equivalent.
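Following the description above, the two games on stationary sets can be stated roughly as follows; the notation is ours, and the exact formulations in the thesis may differ.

```latex
% Rough statements of the two games, following the abstract's wording.
\begin{itemize}
  \item \emph{Banach--Mazur (ideal) game:} players I and II alternately
        choose stationary sets
        $A_0 \supseteq A_1 \supseteq A_2 \supseteq \dots$,
        and II wins the play iff $\bigcap_{n<\omega} A_n$ is stationary.
  \item \emph{Pressing-down game:} I plays regressive functions
        $f_n \colon A_n \to \kappa$ (i.e.\ $f_n(\alpha) < \alpha$) on
        stationary $A_n \subseteq \kappa$, and II responds with a
        stationary $A_{n+1} \subseteq A_n$ on which $f_n$ is constant;
        II wins iff $\bigcap_{n<\omega} A_n$ is stationary.
\end{itemize}
```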
Abstract:
Purpose: The aim of the present study was to develop and test new digital imaging equipment and methods for the diagnosis and follow-up of ocular diseases. Methods: The material comprised 398 subjects (469 examined eyes), including 241 patients with melanocytic choroidal tumours, 56 patients with melanocytic iris tumours, 42 patients with diabetes, a 52-year-old patient with the chronic phase of Vogt-Koyanagi-Harada (VKH) disease, a 30-year-old patient with an old blunt eye injury, and 57 normal healthy subjects. Digital 50° (Topcon TRC 50 IA) and 45° (Canon CR6-45NM) fundus cameras, a new handheld digital colour video camera for eye examinations (MediTell), a new subtraction method using the Topcon Image Net program (Topcon Corporation, Tokyo, Japan), a new method we developed for digital infrared transillumination (IRT) imaging of the iris, and a Zeiss photo slit lamp with a digital camera body were used for digital imaging. Results: Digital 50° red-free imaging had a sensitivity of 97.7%, and two-field 45° and 50° colour imaging a sensitivity of 88.9-94%. The specificity of the digital 45°-50° imaging modalities was 98.9-100% versus the reference standard, with 1.2-1.6% of images ungradeable. Using the handheld digital colour video camera alone, only the optic disc and the central fundus within 20° of the fovea could be recorded, with a sensitivity of 6.9% for the detection of at least mild non-proliferative diabetic retinopathy (NPDR) compared with the reference standard. Comparative use of digital colour, red-free and red-light imaging showed 85.7% sensitivity, 99% specificity and 98.2% exact agreement versus the reference standard in differentiating small choroidal melanoma from pseudomelanoma. The new subtraction method showed growth in four of 94 melanocytic tumours (4.3%) during a mean ± SD follow-up of 23 ± 11 months. The new digital IRT imaging of the iris showed the sphincter muscle and the radial contraction folds of Schwalbe in the pupillary zone, and the radial structural folds of Schwalbe and the circular contraction furrows in the ciliary zone of the iris. The 52-year-old patient with the chronic phase of VKH disease showed extensive atrophy and occasional pigment clumps in the iris stroma, detachment of the ciliary body with severe ocular hypotony, and shallow retinal detachment of the posterior pole in both eyes. Infrared transillumination imaging and fluorescein angiographic findings of the iris showed that IR translucence (p = 0.53), complete masking of fluorescence (p = 0.69), the presence of disorganized vessels (p = 0.32) and fluorescein leakage (p = 1.0) at the site of the lesion did not differentiate an iris nevus from a melanoma. Conclusions: Digital 50° red-free and two-field 50° or 45° colour imaging were suitable for diabetic retinopathy (DR) screening, whereas the handheld digital video camera did not fulfill the needs of DR screening. Comparative use of digital colour, red-free and red-light imaging was a suitable method for differentiating small choroidal melanoma from various pseudomelanomas. The subtraction method may reveal early growth of melanocytic choroidal tumours. Digital IRT imaging may be used to study changes in the stroma and posterior surface of the iris in various diseases of the uvea; it helped reveal iris atrophy and serous detachment of the ciliary body with ocular hypotony, together with shallow retinal detachment of the posterior pole, as new findings in the chronic phase of VKH disease.
Infrared translucence and angiographic findings are useful in the differential diagnosis of melanocytic iris tumours, but they cannot be used to determine whether a lesion is benign or malignant.
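Since the results above are reported as sensitivity and specificity against a reference standard, a minimal sketch of how those figures are computed from a confusion table may help; the counts below are illustrative choices that reproduce the reported order of magnitude, not the study's data.

```python
# Minimal sketch: sensitivity and specificity from confusion-table counts.
# The counts are illustrative, chosen so the output resembles the reported
# 97.7% sensitivity and 98.9% specificity; they are not the study's data.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of reference-positive eyes the method detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of reference-negative eyes the method correctly clears."""
    return tn / (tn + fp)

print(f"sensitivity: {sensitivity(43, 1):.1%}")   # ~97.7%
print(f"specificity: {specificity(89, 1):.1%}")   # ~98.9%
```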
Abstract:
The methods for estimating patient exposure in x-ray imaging are based on measuring the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected; real-time monitoring of radiation exposure is therefore important. According to international recommendations, the measurement uncertainty should be lower than 7% (at a 95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibrated against a reference KAP meter. This method provides a practical way to calibrate field KAP meters; the reference KAP meters, however, require comprehensive calibration. In the calibration laboratory, the use of standard radiation qualities is recommended, but these qualities do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter, and the accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, where part of the radiation beam is always aimed directly at the detector without attenuation by tissue. This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. New optimization of radiation qualities and dose levels is needed when other detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type; for this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
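The tandem idea can be sketched in a few lines: the field KAP meter stays in its clinical position on the x-ray device, a calibrated reference KAP meter is placed in the beam, and the ratio of the two readings gives the field meter's calibration factor. The function name and numbers below are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of the tandem calibration described above; names and
# values are illustrative, not from the thesis.
def tandem_factor(kap_reference_gycm2: float, field_reading: float) -> float:
    """Calibration factor N such that true KAP = N * field meter reading."""
    return kap_reference_gycm2 / field_reading

N = tandem_factor(kap_reference_gycm2=1.25, field_reading=1.10)
patient_kap = N * 0.87  # corrected KAP for a later patient exposure
print(f"N = {N:.3f}, patient KAP = {patient_kap:.3f} Gy*cm^2")
```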
Abstract:
In this paper we define a game played between two players, I and II, on two mathematical structures A and B. On each move the players choose elements from both structures, and at the end of the game player II wins if the chosen structures are isomorphic. The difference between this and the ordinary Ehrenfeucht-Fraïssé game is thus that the isomorphism can be arbitrary, whereas in the ordinary EF-game it is determined by the moves of the players. We investigate the determinacy of the weak EF-game for different lengths of the game and its relation to the ordinary EF-game.
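One way to state the weak EF-game described above, for a game of length $\delta$, is given below; the notation is ours, "the chosen structures" is read as the substructures determined by the chosen elements, and the paper's official definition may differ.

```latex
% A rough statement of the weak EF-game of length $\delta$ on structures
% $\mathcal{A}$ and $\mathcal{B}$, following the description above.
On move $i < \delta$ the players choose elements $a_i \in A$ and
$b_i \in B$. Player II wins the play iff the substructures generated by
$\{a_i : i < \delta\}$ and $\{b_i : i < \delta\}$ are isomorphic via some
arbitrary isomorphism; in the ordinary EF-game, by contrast, II wins iff
the map $a_i \mapsto b_i$ is itself a partial isomorphism.
```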