14 results for GPS tracking device
in Helda - Digital Repository of University of Helsinki
Abstract:
The use of head-mounted displays (HMDs) can produce both positive and negative experiences. In an effort to increase positive experiences and avoid negative ones, researchers have identified a number of variables that may cause sickness and eyestrain, although the exact nature of their relationship to HMDs may vary depending on the tasks and the environments. Other, non-sickness-related aspects of HMDs, such as users' opinions and future decisions associated with task enjoyment and interest, have attracted little attention in the research community. In this thesis, user experiences associated with the use of monocular and bi-ocular HMDs were studied. These include eyestrain and sickness caused by current HMDs, the advantages and disadvantages of adjustable HMDs, HMDs as accessories for small multimedia devices, and the impact of individual characteristics and evaluated experiences on reported outcomes and opinions. The results indicate that today's commercial HMDs do not induce serious sickness or eyestrain. Reported adverse symptoms have some influence on HMD-related opinions, but the nature of the impact depends on the tasks and the devices used. As an accessory to handheld devices and as a personal viewing device, HMDs may increase use duration and enable users to perform tasks not suitable for small screens. Well-designed, functional, adjustable HMDs, especially monocular HMDs, increase viewing comfort and usability, which in turn may have a positive effect on product-related satisfaction. The role of individual characteristics in understanding HMD-related experiences has not changed significantly. Explaining other HMD-related experiences, especially forward-looking interests, also requires understanding more stable individual traits and motivations.
Abstract:
Screening of wastewater effluents from municipal and industrial wastewater treatment plants with biotests showed that the treated wastewater effluents possess only minor acute toxic properties towards whole organisms (e.g. bacteria, algae, daphnia), if any. In vitro tests (sub-mitochondrial membranes and fish hepatocytes) were generally more susceptible to the effluents. Most of the effluents indicated the presence of hormonally active compounds, as the production of vitellogenin, an egg yolk precursor protein, was induced in fish hepatocytes exposed to wastewater. In addition, indications of slight genotoxic potential were found in one effluent concentrate with a recombinant bacterial test. Reverse electron transport (RET) of mitochondrial membranes was used as a model test to conduct effluent assessment followed by toxicant characterisation and identification. Using a modified U.S. EPA Toxicity Identification Evaluation Phase I scheme and additional case-specific methods, the main compound causing RET inhibition in a pulp and paper mill effluent was characterised as an organic, relatively hydrophilic, high molecular weight (HMW) compound. The toxicant was verified as HMW lignin by structural analyses using nuclear magnetic resonance. In the confirmation step, commercial and in-house extracted lignin products were used. The possible toxicity-related structures were characterised by statistical analysis of the chemical breakdown structures of laboratory-scale pulping and bleaching effluents and the toxicities of these effluents. Finally, the biological degradation of the identified toxicant and other wastewater constituents was evaluated using bioassays in combination with chemical analyses. Biological methods have not been used routinely in establishing effluent discharge limits in Finland. However, the biological effects observed in this study could not have been predicted using only routine physical and chemical effluent monitoring parameters. Therefore, chemical parameters alone cannot be considered sufficient for controlling effluent discharges, especially in the case of unknown, possibly bioaccumulative compounds that may be present in small concentrations and may cause chronic effects.
Abstract:
Topic detection and tracking (TDT) is an area of information retrieval research that focuses on news events. The problems TDT deals with relate to segmenting news text into cohesive stories, detecting something new and previously unreported, tracking the development of a previously reported event, and grouping together news stories that discuss the same event. The performance of traditional information retrieval techniques based on full-text similarity has remained inadequate for online production systems; it has been difficult to distinguish between same and similar events. In this work, we explore ways of representing and comparing news documents in order to detect new events and track their development. First, however, we put forward a conceptual analysis of the notions of topic and event. The purpose is to clarify the terminology and align it with the process of news-making and the tradition of story-telling. Second, we present a framework for document similarity that is based on semantic classes, i.e., groups of words with similar meaning. We adopt people, organizations, and locations as semantic classes in addition to general terms. As each semantic class can be assigned its own similarity measure, document similarity can make use of ontologies, e.g., geographical taxonomies. The documents are compared class-wise, and the outcome is a weighted combination of the class-wise similarities. Third, we incorporate temporal information into document similarity. We formalize the natural language temporal expressions occurring in the text and use them to anchor the rest of the terms onto the time-line. When comparing documents for event-based similarity, we look not only at matching terms but also at how near their anchors are on the time-line. Fourth, we experiment with an adaptive variant of the semantic class similarity system. The news stream reflects changes in the real world, and in order to keep up, the system has to change its behavior based on the contents of the stream. We put forward two strategies for rebuilding the topic representations and report experiment results. We ran experiments with three annotated TDT corpora. The use of semantic classes increased the effectiveness of topic tracking by 10-30% depending on the experimental setup. The gain in spotting new events remained lower, around 3-4%. Anchoring the text to a time-line based on the temporal expressions gave a further 10% increase in the effectiveness of topic tracking. The gains in detecting new events, again, remained smaller. The adaptive systems did not improve the tracking results.
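The weighted class-wise comparison can be made concrete with a short sketch. The snippet below is illustrative only: the class names, weights, and the use of cosine similarity for every class are assumptions, since the framework lets each class plug in its own measure (e.g. a geographical-taxonomy distance for locations).

```python
from collections import Counter

# Illustrative class names and weights; not the thesis's actual values.
CLASS_WEIGHTS = {"terms": 0.4, "persons": 0.2,
                 "organizations": 0.2, "locations": 0.2}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def document_similarity(doc1: dict, doc2: dict) -> float:
    """Weighted combination of per-class similarities; each document maps
    a semantic class to a Counter of the items found in that class."""
    return sum(w * cosine(doc1.get(c, Counter()), doc2.get(c, Counter()))
               for c, w in CLASS_WEIGHTS.items())
```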
Abstract:
Free and Open Source Software (FOSS) has attracted increasing interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. Information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, projects vary in their growth rate, complexity, modularity and team structure.
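As a rough illustration of what automatic metric collection from public FOSS data can look like, here is a hedged sketch that mines two simple process metrics from a local Git clone. The choice of metrics and the use of `git log` are illustrative assumptions, not the thesis's actual toolchain.

```python
import subprocess
from collections import Counter

def git_log(repo_path: str, *args: str) -> list[str]:
    """Run `git log` in the given repository and return non-empty output lines."""
    out = subprocess.run(["git", "-C", repo_path, "log", *args],
                         capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if line]

def commits_per_author(repo_path: str) -> Counter:
    """Commit counts per author -- a crude proxy for team structure."""
    return Counter(git_log(repo_path, "--format=%an"))

def commits_per_month(repo_path: str) -> Counter:
    """Commit counts per YYYY-MM -- a crude proxy for growth rate."""
    return Counter(git_log(repo_path, "--date=format:%Y-%m", "--format=%ad"))
```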
Abstract:
Accurate and stable time series of geodetic parameters can be used to help in understanding the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven to be invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and, second, the surface loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmospheric load and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method to slant delays derived from a regional numerical weather model, and we present a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and the atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases; thus, the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The result was that the loading factors can be seen in the GPS coordinate time series. Reducing the computed deformation from the vertical time series of GPS coordinates reduces the scatter of the time series; however, the long-term trends are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the GPS time series variation. On the other hand, the atmospheric loading admittance in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both the atmospheric loading and the hydrological models need further analysis and improvements.
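To make the "explain up to 30% of the variation" figure concrete, one common way to quantify it is to subtract the modelled loading deformation from the GPS vertical time series and measure the variance reduction. The sketch below uses synthetic data and an illustrative metric; it is an assumption about the bookkeeping, not the thesis's actual processing chain.

```python
import numpy as np

def explained_variance(gps_up: np.ndarray, modelled_load: np.ndarray) -> float:
    """Fraction of time-series variance removed by the loading model."""
    residual = gps_up - modelled_load
    return 1.0 - np.var(residual) / np.var(gps_up)

rng = np.random.default_rng(0)
t = np.arange(365)                           # daily epochs, one year
load = 5.0 * np.sin(2 * np.pi * t / 365)     # mm, annual loading signal
gps = load + rng.normal(0.0, 3.0, t.size)    # mm, signal plus observation noise

print(f"explained variance: {explained_variance(gps, load):.0%}")
```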
Abstract:
We present three measurements of the top-quark mass in the lepton plus jets channel with approximately 1.9 fb^-1 of integrated luminosity collected with the CDF II detector, using quantities with minimal dependence on the jet energy scale. One measurement exploits the transverse decay length of b-tagged jets to determine a top-quark mass of 166.9 +9.5/-8.5 (stat) +/- 2.9 (syst) GeV/c^2, and another the transverse momentum of electrons and muons from W-boson decays to determine a top-quark mass of 173.5 +8.8/-8.9 (stat) +/- 3.8 (syst) GeV/c^2. These quantities are combined in a third, simultaneous mass measurement to determine a top-quark mass of 170.7 +/- 6.3 (stat) +/- 2.6 (syst) GeV/c^2.
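As a back-of-the-envelope check only (the published combination is a simultaneous measurement, not a simple average), a naive inverse-variance weighted combination of the two inputs, with statistical errors symmetrized and systematics added in quadrature, lands close to the quoted combined value:

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = float(np.sum(w * np.asarray(values)) / np.sum(w))
    return mean, float(np.sqrt(1.0 / np.sum(w)))

# Total errors: symmetrized stat and syst added in quadrature (approximation).
masses = [166.9, 173.5]                             # GeV/c^2
errors = [np.hypot(9.0, 2.9), np.hypot(8.85, 3.8)]  # GeV/c^2
print(weighted_average(masses, errors))  # ~(170.1, 6.7); cf. 170.7, total ~6.8
```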
Abstract:
ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE Inner Tracking System (ITS) consists of six cylindrical layers of silicon detectors built with three different technologies; in the outward direction: two layers of pixel detectors, followed by two layers each of drift and strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 microns in some cases (pixels). The sources of alignment information include survey measurements and reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach; an iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off.
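As a toy illustration of what a local, iterative alignment step does (the actual Millepede approach solves globally for all ~13,000 parameters at once), the hypothetical sketch below fits one module's small in-plane translation (dx, dy) and rotation dphi from track-to-hit residuals, using the linearized small-angle model r_x = dx - dphi*y, r_y = dy + dphi*x; the function and the simplified two-dimensional model are assumptions for illustration.

```python
import numpy as np

def fit_module_misalignment(x, y, res_x, res_y):
    """Least-squares (dx, dy, dphi) for one module from residuals
    (res_x, res_y) observed at local hit positions (x, y)."""
    n = len(x)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = 1.0             # d r_x / d dx
    A[0::2, 2] = -np.asarray(y)  # d r_x / d dphi
    A[1::2, 1] = 1.0             # d r_y / d dy
    A[1::2, 2] = np.asarray(x)   # d r_y / d dphi
    b = np.empty(2 * n)
    b[0::2], b[1::2] = res_x, res_y
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                # dx, dy, dphi (radians)
```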
Abstract:
We show that information sharing among banks may serve as a collusive device. An information sharing agreement is an a priori commitment to reduce informational asymmetries between banks in future lending. Hence, information sharing tends to increase the intensity of competition in future periods and thus reduces the value of informational rents in current competition. We contribute to the existing literature by emphasizing that a reduction in informational rents will also reduce the intensity of competition in the current period, thereby reducing competitive pressure in current credit markets. We provide a large class of economic environments where a ban on information sharing would be strictly welfare-enhancing.
Abstract:
Objectives: GPS technology enables the visualisation of a map reader's location on a mobile map. Earlier research on the cognitive aspects of map reading identified searching for map-environment points as an essential element of the process of determining one's location on a mobile map. Map-environment points refer to objects that are visualized on the map and are recognizable in the environment. However, because the GPS usually adds only one point to the map that has a relation to the environment, it does not provide a sufficient amount of information for self-location. The aim of the present thesis was to assess the effect of GPS on the cognitive processes involved in determining one's location on a map. Methods: The effect of GPS on self-location was studied in a field experiment. The subjects were shown a target on a mobile map and were asked to point in the direction of the target. In order to deduce the direction of the target, the map reader has to locate himself/herself on the map. During the pointing tasks, the subjects were asked to think aloud. The data from the experiment were used to analyze the effect of the GPS on the time needed to perform the task. The subjects' verbal data were used to assess the effect of the GPS on the number of landmark concepts mentioned during a task (landmark concepts are words referring to objects that can be recognized both on the map and in the environment). Results and conclusions: The results from the experiment indicate that the GPS reduces the time needed to locate oneself on a map. The analysis of the verbal data revealed that the GPS reduces the number of landmark concepts in the protocols. The findings suggest that the GPS guides the subject's search for the map-environment points and narrows the area on the map that must be searched for self-location.
Abstract:
The world of mapping has changed. Earlier, only professional experts were responsible for map production, but today ordinary people without any training or experience can become map-makers. The number of online mapping sites and the number of volunteer mappers have increased significantly. Technological developments such as satellite navigation systems, Web 2.0, broadband Internet connections, and smartphones have played a key role in enabling the rise of volunteered geographic information (VGI). As opening governmental data to the public is a current topic in many countries, the opening of high-quality geographical data has a central role in this study. The aim of this study is to investigate the quality of spatial data produced by volunteers by comparing it with the map data produced by public authorities, to follow what occurs when spatial data are opened to users, and to become acquainted with the user profile of these volunteer mappers. A central part of this study is the OpenStreetMap project (OSM), whose aim is to create a map of the entire world through volunteer effort. Anyone can become an OpenStreetMap contributor, and the data created by the volunteers are free for anyone to use without restrictive copyrights or license charges. In this study OpenStreetMap is investigated from two viewpoints. In the first part of the study, the aim was to investigate the quality of volunteered geographic information. A pilot project was implemented by following what occurs when high-resolution aerial imagery is released freely to the OpenStreetMap contributors. The quality of VGI was investigated by comparing the OSM datasets with the map data of the National Land Survey of Finland (NLS). The quality of OpenStreetMap data was assessed by inspecting the positional accuracy and the completeness of the road datasets, as well as the differences in the attribute datasets between the studied datasets. The OSM community was also analysed, and the development of the OpenStreetMap map data was investigated by visual analysis. The aim of the second part of the study was to analyse the user profile of OpenStreetMap contributors, and to investigate how the contributors act when collecting data and editing OpenStreetMap. The aim was also to investigate what motivates users to map and how they perceive the quality of volunteered geographic information. The second part of the study was implemented by conducting a web survey of the OpenStreetMap contributors. The results of the study show that the quality of OpenStreetMap data, compared with the data of the National Land Survey of Finland, can be defined as good. OpenStreetMap differs from the NLS map especially in its degree of uncertainty, for example because the completeness and uniformity of the map are not known. The results reveal that opening spatial data notably increased the amount of data in the study area, and both the positional accuracy and completeness improved significantly. The study confirms earlier findings that only a few contributors have created the majority of the data in OpenStreetMap. The survey of OpenStreetMap users revealed that the data are most often collected on foot or by bicycle using a GPS device, or by editing the map with the help of aerial imagery.
According to the responses, the users take part in the OpenStreetMap project because they want to make maps better and to produce maps containing up-to-date information that cannot be found on any other maps. Almost all of the users use the maps themselves, the most popular methods being downloading the map into a navigator or a mobile device. The users regard the quality of OpenStreetMap as good, especially because of the up-to-dateness and the accuracy of the map.
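A buffer-based comparison is a standard way to operationalize positional accuracy in VGI quality studies; the sketch below illustrates that general technique with the shapely library, not the thesis's exact workflow, and the tolerance value is an assumption.

```python
from shapely.geometry import LineString

def within_buffer_ratio(osm_road: LineString,
                        reference_road: LineString,
                        tolerance_m: float = 5.0) -> float:
    """Fraction of the OSM line's length lying within `tolerance_m`
    of the reference (NLS) line; coordinates assumed in metres."""
    buffered = reference_road.buffer(tolerance_m)
    return osm_road.intersection(buffered).length / osm_road.length

# Toy example: an OSM trace wobbling around a straight reference road.
osm = LineString([(0, 0), (100, 3), (200, 1)])
nls = LineString([(0, 0), (100, 0), (200, 0)])
print(f"{within_buffer_ratio(osm, nls):.0%} of the OSM road is within 5 m")
```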
Abstract:
In cancer diagnostics and treatment, nanoparticles can serve as carriers for drugs, diagnostic agents, or nucleic acid sequences. Targeting molecules can be attached to the carrier for passive or active targeting of the particles, or a radiolabel for imaging or radiotherapy. Carriers can improve the physico-chemical properties and bioavailability of a drug, reduce systemic side effects, prolong the drug's half-life and thereby lengthen the dosing interval, and improve the drug's access to the target tissue. In this way the efficacy of chemo- and radiotherapy, and the probability of treatment success, can be improved. The literature review examines the role of nanocarriers in cancer treatment. Despite decades of research, only two (Europe) or three (United States) nanoparticle formulations have been approved for the market in cancer treatment. The problems are insufficient accumulation in the target tissue, immunogenicity, and the lability of the nanoparticles. The experimental part investigates, in vitro and in vivo in mice, the two-step targeting of 99mTc-labelled, PEG-coated biotin liposomes to human ovarian adenocarcinoma cells. Targeting uses a biotinylated cetuximab (Erbitux®) antibody, which binds to the EGF receptors overexpressed by the cells. Two-step targeting is compared with direct and/or passive targeting. The development of more efficient imaging methods has accelerated research on targeted nanoparticles. Nuclear imaging makes it possible to follow the distribution of a radiolabel in the body and to image phenomena occurring at the cellular level. The literature review examines SPECT and PET imaging in cancer treatment, and their use in drug development for nanoparticle imaging. These imaging methods stand out from other methods in terms of high resolution, sensitivity, and ease of use. In the experimental part, the distribution of the 99mTc-labelled liposomes in mice was studied with a SPECT-CT device. The activity in the tumour, spleen and liver was quantified with the InVivoScope software and a gamma counter, and the results were compared with each other. In the in vitro experiment, two-step targeting achieved 2.7- to 3.5-fold (depending on the cell line) uptake into the cells compared with control liposomes. However, direct targeting performed better than two-step targeting in vitro. In the in vivo experiments, the liposomes distributed to the tumour more efficiently with i.p. than with i.v. administration. Two-step targeting achieved a 1.24-fold distribution to the tumour (% ID/g tissue) compared with passively targeted liposomes. The %ID/organ was 5.9% for the targeted liposomes and 5.4% for the passively targeted ones, so the actual difference was small. The InVivoScope and gamma counter results did not correlate with each other. Further studies and optimization of the method are required for targeting liposomes to tumours.
Abstract:
Research on reading has been successful in revealing how attention guides eye movements when people read single sentences or text paragraphs in simplified and strictly controlled experimental conditions. However, less is known about reading processes in more naturalistic and applied settings, such as reading Web pages. This thesis investigates online reading processes by recording participants' eye movements. The thesis consists of four experimental studies that examine how the location of stimuli presented outside the currently fixated region (Studies I and III), text format (Study II), animation and abrupt onset of online advertisements (Study III), and the phase of an online information search task (Study IV) affect written language processing. Furthermore, the studies investigate how the goal of the reading task affects attention allocation during reading, by comparing reading for comprehension with free browsing and by varying the difficulty of an information search task. The results show that text format affects the reading process: vertical text (one word per line) is read at a slower rate than standard horizontal text, and mean fixation durations are longer for vertical than for horizontal text. Furthermore, animated online ads and abrupt ad onsets capture online readers' attention, direct their gaze toward the ads, and disrupt the reading process. Compared to a reading-for-comprehension task, online ads are attended to more in a free browsing task. Moreover, in both tasks abrupt ad onsets result in rather immediate fixations toward the ads; this effect is enhanced when the ad is presented in the proximity of the text being read. In addition, reading processes vary as Web users proceed through online information search tasks, for example when they are searching for a specific keyword, looking for an answer to a question, or trying to find the subjectively most interesting topic. A scanning type of behavior is typical at the beginning of the tasks, after which participants tend to switch to a more careful reading state before finishing the tasks in what are referred to as decision states. Finally, the results provided evidence that left-to-right readers extract more parafoveal information to the right of the fixated word than to the left, suggesting that learning biases attentional orienting towards the reading direction.