16 results for Localization accuracy metrics
in Helda - Digital Repository of University of Helsinki
Abstract:
This thesis is a comparative case study of Japanese video game localization, examining the video games Sairen, Sairen 2 and Sairen Nyûtoransurêshon and the English-language localized versions of the same games as published in Scandinavia and Australia/New Zealand. All games are developed by Sony Computer Entertainment Inc. and published exclusively for the PlayStation 2 and PlayStation 3 consoles. The fictional world of the Sairen games draws much influence from Japanese history, as well as from popular and contemporary culture, and in doing so caters mainly to a Japanese audience. For localization, i.e. the adaptation of a product to make it accessible to users outside the market it was originally intended for, this is a challenging issue. Video games are media of entertainment, and localization practice must therefore preserve the games' effects on the players' emotions. Further, video games are digital products comprised of a multitude of distinct elements, some of which are part of the game world, while others regulate the connection between the player as part of the real world and the game as a digital medium. As a result, video game localization is also a practice that has to cope with the technical restrictions inherent to the medium. The main theory used throughout the thesis is Anthony Pym's framework for localization studies, which considers the user of the localized product a defining part of the localization process. This concept presupposes that localization is an adaptation performed to make a product better suited for use in a specific reception situation. Pym also addresses the fact that certain products may resist distribution into certain reception situations because of their content, and that certain aspects of localization aim to reduce this resistance through significant alterations of the original product. While Pym developed his ideas mainly with conventional software in mind, they can also be adapted well to study video games from a localization angle. Since modern video games are highly complex entities that often switch between interactive and non-interactive modes, Pym's ideas are adapted throughout the thesis to suit the particular elements being studied. Instances analyzed in this thesis include menu screens, video clips, in-game action and websites. The main research questions focus on how the games' rules and the games' fictional domain each influence localization. Because there are so many peculiarities inherent to the medium of the video game, other theories are introduced as well to complement the research at hand. These include Lawrence Venuti's discussions of foreignizing and domesticating translation methods for literary translation, and Jesper Juul's definition of games. Additionally, knowledge gathered from interviews with video game localization professionals in Japan during September and October 2009 is also utilized in this study. Apart from answering the aforementioned research questions, one aim of this thesis is to enrich the still rather small field of game localization studies, and in particular the study of Japanese video games, one of Japan's most successful cultural exports.
Abstract:
The tackling of coastal eutrophication requires water protection measures based on status assessments of water quality. The main purpose of this thesis was to evaluate whether it is possible, both scientifically and within the terms of the European Union Water Framework Directive (WFD), to assess the status of coastal marine waters reliably by using phytoplankton biomass (ww) and chlorophyll a (Chl) as indicators of eutrophication in Finnish coastal waters. Empirical approaches were used to study whether the criteria established for determining an indicator are fulfilled. The first criterion (i) was that an indicator should respond to anthropogenic stresses in a predictable manner and show low variability in its response. Summertime Chl could be predicted accurately from nutrient concentrations, but not from the external annual loads alone, because of the rapid effects of primary production and sedimentation close to the loading sources in summer. The most accurate predictions were achieved in the Archipelago Sea, where total phosphorus (TP) and total nitrogen (TN) alone accounted for 87% and 78% of the variation in Chl, respectively. In river estuaries, the TP mass-balance regression model predicted Chl most accurately when nutrients originated from point sources, whereas land-use regression models were most accurate when nutrients originated mainly from diffuse sources. The inclusion of morphometry (e.g. mean depth) in the nutrient models improved the accuracy of the predictions. The second criterion (ii) was associated with the WFD. It requires that an indicator have type-specific reference conditions, which are defined as "conditions where the values of the biological quality elements are at high ecological status". In establishing reference conditions, the empirical approach could only be used in the outer coastal water types, where historical observations of Secchi depth from the early 1900s are available. The most accurate prediction was achieved in the Quark. In the inner coastal water types, reference Chl values estimated from present monitoring data are imprecise, not only because of the less accurate estimation method but also because the intrinsic characteristics, described for instance by morphometry, vary considerably within these extensive inner coastal types. For phytoplankton biomass, the reference values were less accurate than those for Chl, because reference conditions for biomass could only be estimated from the reconstructed Chl values, not from the historical Secchi observations. A paleoecological approach was also applied to estimate annual average reference conditions for Chl. In Laajalahti, an urban embayment off Helsinki that was heavily loaded by municipal waste waters in the 1960s and 1970s, reference conditions prevailed in the mid- and late 1800s. The recovery of the bay from pollution has been delayed by the benthic release of nutrients, and Laajalahti will probably not achieve the good quality objectives of the WFD on time. The third criterion (iii) was associated with coastal management and the resources available to it. Analyses of Chl are cheap and fast to carry out compared with analyses of phytoplankton biomass and species composition, which affects the number of samples that can be taken and thereby the reliability of assessments. However, analyses of phytoplankton biomass and species composition provide more metrics for ecological classification, metrics that reveal aspects of eutrophication which Chl alone cannot.
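These nutrient-Chl relationships are ordinary regression models. As a minimal sketch only, assuming a log-log linear form and entirely synthetic data (the thesis's actual models, coefficients and observations are not reproduced here), fitting and scoring such a model could look like this:

```python
# Minimal sketch of a nutrient-chlorophyll regression of the kind described
# above. All data below are synthetic; the thesis's actual coefficients,
# transformations, and observations are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summertime observations: total phosphorus (ug/l) and Chl (ug/l).
tp = rng.uniform(10, 80, size=50)
chl = 0.3 * tp * rng.lognormal(0, 0.2, size=50)  # assumed roughly linear link

# Fit log(Chl) = a + b*log(TP) by ordinary least squares.
X = np.column_stack([np.ones_like(tp), np.log(tp)])
y = np.log(chl)
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient of determination, analogous to the "% of variation" figures above.
resid = y - X @ np.array([a, b])
r2 = 1 - resid.var() / y.var()
print(f"log(Chl) = {a:.2f} + {b:.2f} * log(TP), R^2 = {r2:.2f}")
```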
Abstract:
The removal of non-coding sequences, introns, is an essential part of messenger RNA processing. In most metazoan organisms, the U12-type spliceosome processes a subset of introns containing highly conserved recognition sequences. U12-type introns constitute less than 0.5% of all introns and reside preferentially in genes related to information processing functions, as opposed to genes encoding metabolic enzymes. It has previously been shown that the excision of U12-type introns is inefficient compared to that of U2-type introns, supporting the model that these introns could provide a rate-limiting control for gene expression. The low efficiency of U12-type splicing is believed to have important consequences for gene expression by limiting the production of mature mRNAs from genes containing U12-type introns. This inefficiency has been attributed to the low abundance of the components of the U12-type spliceosome in cells, but this hypothesis has not been proven. The aim of the first part of this work was to study the effect of the abundance of the spliceosomal snRNA components on splicing. Cells with a low abundance of the U12-type spliceosome were found to process U12-type introns encoded by a transfected construct inefficiently, but the expression levels of endogenous genes were not affected by the abundance of the U12-type spliceosome. However, significant levels of endogenous unspliced U12-type intron-containing pre-mRNAs were detected in cells. Together these results support the idea that U12-type splicing may limit gene expression in some situations. The inefficiency of U12-type splicing has also promoted the idea that the U12-type spliceosome may control gene expression by limiting the mRNA levels of some U12-type intron-containing genes. While the identities of the primary target genes that contain U12-type introns are relatively well known, little has previously been known about the downstream genes and pathways potentially affected by the efficiency of U12-type intron processing. Here, the effects of U12-type splicing efficiency on a whole organism were studied in a Drosophila line with a mutation in an essential U12-type spliceosome component. Genes containing U12-type introns showed variable gene-specific responses to the splicing defect, which points to variation in the susceptibility of different genes to changes in splicing efficiency. Surprisingly, microarray screening revealed that metabolic genes were enriched among the downstream effects, and that the phenotype could largely be attributed to a single U12-type intron-containing mitochondrial gene. Gene expression control by the U12-type spliceosome could thus have widespread effects on metabolic functions in the organism. The subcellular localization of the U12-type spliceosome components was studied in response to a recent dispute over the localization of the U12-type spliceosome. All components studied were found to be nuclear, indicating that the processing of U12-type introns occurs within the nucleus and thus clarifying a question central to the field. The results suggest that the U12-type spliceosome can limit the expression of genes that contain U12-type introns in a gene-specific manner. Through its limiting role in pre-mRNA processing, U12-type splicing activity can affect specific genetic pathways, which in the case of Drosophila are involved in metabolic functions.
Abstract:
Viruses are submicroscopic infectious agents that are obligate intracellular parasites. They adopt various strategies for their parasitic replication and proliferation in infected cells. The nucleic acid genome of a virus contains information that redirects the molecular machinery of the cell to the replication and production of new virions. Viruses that replicate in the cytoplasm and are unable to use the nuclear transcription machinery of the host cell have developed their own transcription and capping systems. This thesis describes the replication strategies of two distantly related viruses, hepatitis E virus (HEV) and Semliki Forest virus (SFV), which belong to the alphavirus-like superfamily of positive-strand RNA viruses. We have demonstrated that HEV and SFV share a unique cap formation pathway specific to the alphavirus-like superfamily. The capping enzyme first acts as a methyltransferase, catalyzing the transfer of a methyl group from S-adenosylmethionine to GTP to yield m7GTP. It then transfers the methylated guanosine to the end of the viral mRNA. Both reactions are virus-specific and differ from those described for the host cell; these capping reactions therefore offer attractive targets for the development of antiviral drugs. Additionally, it has been shown that the replication of SFV and HEV takes place in association with cellular membranes. The origin of these membranes and the intracellular localization of the components of the replication complex were studied by modern microscopy techniques. It was demonstrated that SFV replicates in cytoplasmic membranes derived from endosomes and lysosomes. According to our studies, the site of HEV replication appears to be the intermediate compartment, which mediates traffic between the endoplasmic reticulum and the Golgi complex. As a result of this work, a unique mechanism of cap formation for the hepatitis E virus replicase has been characterized; it represents a novel target for the development of specific inhibitors of viral replication.
Abstract:
Background: The aging population is placing increasing demands on surgical services, while the supply of professional labor is decreasing and the economic situation worsening. Under growing financial constraints, successful operating room management will be one of the key issues in the struggle for technical efficiency. This study focused on several issues affecting operating room efficiency. Materials and methods: The current formal operating room management in Finland, and the performance metrics and information systems used to support it, were explored using a postal survey. We also studied the feasibility of a wireless patient tracking system as a tool for managing the process. The reliability of the system, as well as the accuracy and precision of its automatically recorded time stamps, were analyzed. The benefits of a separate anesthesia induction room were compared in a prospective setting with the traditional way of working, in which anesthesia is induced in the operating room. Using computer simulation, several models of parallel processing for the operating room were compared with the traditional model with respect to cost-efficiency. Moreover, international differences in operating room times for two common procedures, laparoscopic cholecystectomy and open lung lobectomy, were investigated. Results: The managerial structure of Finnish operating units was not clearly defined. Operating room management information systems were found to be out of date, offering little support for online evaluation of the care process; only about half of the information systems provided information in real time. Operating room performance was most often measured by the number of procedures per time unit, operating room utilization, and turnover time. The wireless patient tracking system was found to be feasible for hospital use. The system's automatic documentation facilitated patient flow management by increasing process transparency through more available and accurate data, while reducing the workload of the staff. Every parallel work flow model was more cost-efficient than the traditional practice of performing anesthesia induction in the operating room. Mean operating times for the two common procedures differed by 50% among eight hospitals in different countries. Conclusions: The structure of daily operative management of an operating room warrants redefinition. Performance measures as well as information systems require updating. Parallel work flows are more cost-efficient than the traditional induction-in-room model.
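The cost-efficiency comparison of the work flows was carried out with computer simulation. The toy Monte Carlo sketch below illustrates the basic idea, using invented duration distributions and a simplified overlap rule; it is not the thesis's simulation model:

```python
# Toy Monte Carlo comparison of the two work flows discussed above: anesthesia
# induction inside the operating room versus induction in a separate room in
# parallel with the previous case. All durations are invented for
# illustration; the thesis's simulation models are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
N_CASES, N_DAYS = 6, 1000

def day_or_time(parallel: bool) -> float:
    """Total operating room time (min) for one day of N_CASES cases."""
    induction = rng.gamma(4, 5, N_CASES)      # ~20 min inductions
    surgery = rng.gamma(9, 10, N_CASES)       # ~90 min operations
    turnover = rng.gamma(4, 4, N_CASES - 1)   # ~16 min cleanups
    if parallel:
        # Induction of the next patient overlaps with the previous case,
        # so only the first induction of the day occupies the operating room.
        return surgery.sum() + turnover.sum() + induction[0]
    return surgery.sum() + turnover.sum() + induction.sum()

trad = np.mean([day_or_time(False) for _ in range(N_DAYS)])
par = np.mean([day_or_time(True) for _ in range(N_DAYS)])
print(f"traditional: {trad:.0f} min/day, parallel induction: {par:.0f} min/day")
```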
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using the megavoltage beams of linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in the tissues. This dependence is normally steep, so it is crucial that the actual dose delivered within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, which requires strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, remains within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program, which defines the measurement interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data, and the uncertainty of these data sets the limit for the best achievable calculation accuracy. All of these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement interval to be extended from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
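The model fitting proposed for estimating output stability and measurement reproducibility can be illustrated with a minimal sketch. A linear output drift with independent measurement noise is assumed here, and all numbers are synthetic; the thesis's actual model is not specified in the abstract:

```python
# Sketch of separating accelerator output drift from measurement noise by
# model fitting, as described above. A linear drift model is an assumption
# made here; the thesis's actual model and data are not reproduced.
import numpy as np

rng = np.random.default_rng(1)

days = np.arange(0, 180, 7.0)                # hypothetical weekly QC checks
true_drift = 0.005                           # assumed output drift, %/day
output = 100 + true_drift * days + rng.normal(0, 0.3, days.size)  # measured output (%)

# Least-squares fit: output(t) = o0 + k*t
X = np.column_stack([np.ones_like(days), days])
(o0, k), *_ = np.linalg.lstsq(X, output, rcond=None)

# Residual scatter around the fitted drift estimates measurement reproducibility.
resid = output - X @ np.array([o0, k])
repro_sd = resid.std(ddof=2)                 # 2 fitted parameters

print(f"estimated drift: {k:.4f} %/day, reproducibility (1 SD): {repro_sd:.2f} %")
```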
Abstract:
This study contributes to the neglect effect literature by examining relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and relative trading volume. Market capitalisation and analyst coverage have been used as proxies for neglect in prior studies; these measures, however, do not take into account the effort analysts put in when estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
Abstract:
The aim of this study was to evaluate and test methods that could improve the local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible with respect to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than with the general model, but with the methods that segmented the study area, the variation in the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
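As a rough illustration of the neighbourhood correction described above, the following sketch adjusts a global trend prediction with an inverse-distance-weighted mean of the residuals of the k nearest neighbours. This replaces the variogram-based weights of true kriging with a simpler weighting, and all data are synthetic:

```python
# Sketch of localizing a global model with neighbourhood residuals, in the
# spirit of the kriging approach described above. For simplicity the
# variogram-based weights of kriging are replaced here by inverse-distance
# weights over the k nearest neighbours; all data are synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observations: coordinates, a predictor, and a response with
# a spatially varying component that the global model cannot capture.
xy = rng.uniform(0, 100, size=(200, 2))
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 0.05 * xy[:, 0] + rng.normal(0, 0.5, 200)

# "Global" model fitted over the whole area (trend in x only).
b = np.polyfit(x, y, 1)
resid = y - np.polyval(b, x)

def localized_predict(p_xy, p_x, k=30):
    """Global prediction corrected by residuals of the k nearest neighbours."""
    d = np.linalg.norm(xy - p_xy, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)                 # inverse-distance weights
    correction = np.sum(w * resid[nn]) / np.sum(w)
    return np.polyval(b, p_x) + correction

print(localized_predict(np.array([50.0, 50.0]), 5.0))
```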
Abstract:
The work presented here focused on the role of cation-chloride cotransporters (CCCs) in (1) the regulation of the intracellular chloride concentration within postsynaptic neurons and (2) the consequent effects on the actions of the neurotransmitter gamma-aminobutyric acid (GABA) mediated by GABAA receptors (GABAARs) during development and in pathophysiological conditions such as epilepsy. In addition, (3) we found that a member of the CCC family, the K-Cl cotransporter isoform 2 (KCC2), has a structural role in the development of dendritic spines during the differentiation of pyramidal neurons. Despite the large number of publications dedicated to the regulation of intracellular Cl-, our understanding of the underlying mechanisms is not complete. Experiments on GABA actions under resting steady-state conditions have shown that the effect of GABA shifts from depolarizing to hyperpolarizing during the maturation of cortical neurons. However, it remains unclear whether conclusions from these steady-state measurements can be extrapolated to the highly dynamic situation within an intact and active neuronal network. Indeed, GABAergic signaling in active neuronal networks results in a continuous Cl- load, which must be constantly removed by efficient Cl- extrusion mechanisms. Therefore, it seems plausible that the key parameters are the efficacy and subcellular distribution of Cl- transporters rather than the polarity of steady-state GABA actions. A further related question is: what are the mechanisms of Cl- regulation and homeostasis during pathophysiological conditions such as epilepsy in adults and neonates? Here I present results obtained with a newly developed method for measuring the efficacy of K-Cl cotransport. In Study I, the developmental profile of KCC2 functionality was analyzed both in dissociated neuronal cultures and in acute hippocampal slices. A novel method of photolysis of caged GABA, in combination with Cl- loading of the somata, was used to assess the extrusion efficacy of KCC2. We demonstrated that these two preparations exhibit different temporal profiles of functional KCC2 upregulation. In Study II, we reported an observation of highly distorted dendritic spines in neurons cultured from KCC2-/- embryos. During their development in the culture dish, KCC2-lacking neurons failed to develop mature, mushroom-shaped dendritic spines and instead maintained an immature phenotype of long, branching and extremely motile protrusions. It was shown that the role of KCC2 in spine maturation is not based on its transport activity but is mediated by interactions with cytoskeletal proteins. Another important player in Cl- regulation, NKCC1, and its role in the induction and maintenance of native Cl- gradients between the axon initial segment (AIS) and the soma, was the subject of Study III. There we demonstrated that this transporter mediates the accumulation of Cl- in the axon initial segment of neocortical and hippocampal principal neurons. The results suggest that the reversal potential of the GABAA response triggered by distinct populations of interneurons shows large subcellular variation. Finally, a novel mechanism of fast post-translational upregulation of the membrane-inserted, functionally active KCC2 pool during in-vivo neonatal seizures and epileptiform-like activity in vitro was identified and characterized in Study IV. This seizure-induced KCC2 upregulation may act as an intrinsic antiepileptogenic mechanism.
Abstract:
Understory trees that form beneath even-aged forest stands are significant for harvesting, forest regeneration, visibility and landscape analyses, and the assessment of biodiversity and carbon balance. Airborne laser scanning has proven to be an efficient remote sensing method for measuring mature tree stands. The adoption of laser scanning in operational forest planning makes it possible to produce more accurate information on understory trees, provided that the properties of the understory can be interpreted from the laser data. This work used precisely measured field plots and discrete return LiDAR data from several years (flying altitudes of 1-2 km, pulse densities of 0.9-9.7 pulses m-2). The laser scanning data were acquired with Optech ALTM3100 and Leica ALS50-II sensors. The plots represent Finnish even-aged pine stands at different stages of development. The research questions were: 1) What is the laser signal obtained from the understory at the level of an individual pulse, and which factors affect this signal? 2) How well do the area-based laser features used in practical applications explain the properties of understory trees? A particular aim was to determine how transmission losses of laser pulse energy to the upper canopy layers affect the received signal, and whether the intensity of laser echoes can be corrected for these losses. Differences between tree species in echo intensity were small and varied from scan to scan, so the potential of intensity for interpreting the tree species of the understory is very limited. Transmission losses to the upper canopy layers introduced noise into the laser signal obtained from the understory. A transmission-loss correction was applied to the 2nd and 3rd echoes of pulses returned from the understory. The correction reduced the within-class dispersion of intensity and improved the classification accuracy of targets in the understory layer. Using 2nd echoes, the overall accuracy of the classification between ground and the most common tree species was 49.2-54.9% before the correction and 57.3-62.0% after it; the corresponding kappa values were 0.03-0.13 and 0.10-0.22. The most important factor explaining the transmission losses was the intensity of the earlier echoes of the same pulse, but the intersection geometry of the pulse with the trees of the upper canopy layer also had some effect. Classification accuracy also improved for 3rd echoes. Tree species were found to differ in how readily they produce an echo when hit by a laser pulse: spruce produced an echo with a higher probability than deciduous trees, and this difference was especially clear for pulses with transmission losses. The height distribution features of laser echoes may therefore depend on tree species. Clear differences in the intensity distributions were observed between the sensors, which complicates the combination of data acquired with different sensors. Echo probabilities also differed somewhat between the sensors, causing small differences in the height distributions of the echoes. Among the area-based laser features, features were found that explained the stem number and mean height of the understory well when the analysis was restricted to trees over 1 m in height. The explanatory power of the features was better for stem number than for mean height, and it did not decrease markedly with decreasing pulse density, which is advantageous for practical applications. The proportion of deciduous trees could not be explained. Based on the results, discrete return laser scanning may be useful, for example, in assessing the need for pre-harvest clearing, whereas a more detailed classification of the understory (e.g. tree species interpretation) may be difficult. The smallest understory trees cannot be detected. Further studies are needed to generalize the results to different kinds of stands.
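The idea of a transmission-loss correction can be sketched in code. The formula below, which rescales a later echo by the fraction of pulse energy estimated to remain after its earlier echoes, and the reference intensity level are assumptions made for illustration; the actual correction model of the thesis is not reproduced here:

```python
# Sketch of a transmission-loss correction for later LiDAR echoes, along the
# lines described above: the energy lost to earlier echoes of the same pulse
# is estimated from their intensities, and the later echo is scaled back up.
# The exact correction used in the thesis is not given here; this particular
# formula and the reference level I_REF are assumptions for illustration.
import numpy as np

I_REF = 1000.0  # assumed intensity of an unobstructed single echo

def correct_intensity(intensities):
    """Scale each echo of one pulse by the estimated fraction of pulse
    energy remaining after its earlier echoes. `intensities` lists the
    echoes of one pulse in return order."""
    corrected = []
    remaining = 1.0                          # fraction of pulse energy left
    for i in intensities:
        corrected.append(i / remaining if remaining > 0 else np.nan)
        remaining -= i / I_REF               # energy spent on this echo
    return corrected

# One hypothetical pulse: strong 1st echo from the upper canopy, weak 2nd
# echo from the understory.
print(correct_intensity([600.0, 150.0]))     # 2nd echo is rescaled upwards
```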