13 results for "format de particule tridimensionnelle"

in Helda - Digital Repository of University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

Research on reading has been successful in revealing how attention guides eye movements when people read single sentences or text paragraphs under simplified and strictly controlled experimental conditions. However, less is known about reading processes in more naturalistic and applied settings, such as reading Web pages. This thesis investigates online reading processes by recording participants' eye movements. The thesis consists of four experimental studies that examine how the location of stimuli presented outside the currently fixated region (Studies I and III), text format (Study II), animation and abrupt onset of online advertisements (Study III), and the phase of an online information search task (Study IV) affect written language processing. Furthermore, the studies investigate how the goal of the reading task affects attention allocation during reading, by comparing reading for comprehension with free browsing and by varying the difficulty of an information search task. The results show that text format affects the reading process: vertical text (one word per line) is read at a slower rate than standard horizontal text, and mean fixation durations are longer for vertical than for horizontal text. Furthermore, animated online ads and abrupt ad onsets capture online readers' attention, direct their gaze toward the ads, and disrupt the reading process. Compared to a reading-for-comprehension task, online ads are attended to more in a free browsing task. Moreover, in both tasks abrupt ad onsets result in rather immediate fixations toward the ads. This effect is enhanced when the ad is presented in the proximity of the text being read. In addition, reading processes vary as Web users proceed through online information search tasks, for example when they are searching for a specific keyword, looking for an answer to a question, or trying to find the subjectively most interesting topic. A scanning type of behavior is typical at the beginning of the tasks, after which participants tend to switch to a more careful reading state before finishing the tasks in what are referred to as decision states. Furthermore, the results also provide evidence that left-to-right readers extract more parafoveal information to the right of the fixated word than to the left, suggesting that learning biases attentional orienting towards the reading direction.

Relevance:

20.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

The study investigates the formal integration of English loanwords into the Swedish language system. The aim has been to analyse and describe the morphological/morphosyntactic and orthographical integration of the loanwords. I have studied how foreign-language elements are accommodated to Swedish and which factors are relevant in the integration. The material for the study consists of Swedish newspapers published in Sweden and Finland in paper format (with a focus on the years 1975 and 2000) and newspapers in digital format on the net. The theoretical frame for the study is contact linguistics. The study is based on a sociolinguistic, structural and language-political perspective on what language is and what language contact is. The method used is usage-based linguistic analysis. In the morphological study of the loanwords, I have carried out both a quantitative and a qualitative analysis. I have analysed the extent to which loanwords show some indication of integration into Swedish, and to what extent they show no signs of integration at all. I have also analysed integration in relation to word classes, i.e., how nouns, adjectives and verbs integrate and which factors are relevant for the result of the integration. The results show that most loanwords (36%) do not show any signs of being formally integrated into Swedish: they undergo neither inflectional nor derivational changes. One fifth of the loanwords are inflected according to the rules of Swedish grammar. Nouns are, more often than verbs, placed in positions in the sentence where no formal adaptation is needed. Almost all of the verbs in the material are inflected according to Swedish rules of grammar. Only 3% of the loanwords are inflected according to English rules or are placed in an ungrammatical position in the sentence. The orthographical study shows that English loanwords are very seldom adapted to Swedish orthography. Some English vowel and consonant graphemes are replaced with Swedish ones; for example, a, ay and ai are replaced with aj or ej (mail → mejl). The study also indicates that morphological integration is related to orthographical integration: loanwords that are inflected according to Swedish grammar are more likely to be orthographically integrated than loanwords that are inflected according to English grammar. The results also show that the integration of loanwords is affected mostly by language-structural and language-political factors.

Relevance:

10.00%

Publisher:

Abstract:

This study seeks to answer the question of what the language of administrative press releases is like, and how and why it has changed over the past few decades. The theoretical basis of the study is provided by critical text analysis, supplemented with, e.g., the metafunction theory of Systemic Functional Grammar, the theory of the poetic function, and Finnish research into syntax. The data includes 83 press releases by the City of Helsinki Public Works Department, 14 of which were written between 1979 and 1980 (old press releases) and 69 of which were written between 1998 and 1999 (new press releases). The analysis focuses on the linguistic characteristics of the releases, their changes and variation, their relation to other texts and the extralinguistic context, as well as their genre. The core research method is linguistic text analysis, supplemented with an analysis of the communicative environment based on interviews with the authors and on written documents. The results can be applied to the improvement of texts produced by the authorities, and even by other organizations. The linguistic analysis focuses on the features that make the texts in the data guiding, detailed, and poetic. The releases guide the residents of the city using modal verbal expressions and performative verbs, which enable the mass media to publish the guiding expressions as such on their own behalf. The guiding is more persuasive in the new press releases than in the old ones, and the new releases also include imperative clauses and verbless directives that construct direct interaction. The language of the releases is made concrete and structurally detailed by, e.g., concrete vocabulary, proper nouns and terms, as well as definitions, adverbials and comparisons, which are used specifically to present places and administrative organizations in detail. The rhetorical features in the releases include alliteration and metaphors, which in the new releases are found especially in the titles. These emphasized features are used to draw the readers' attention and to highlight the core contents of the texts. The new releases also include words that are colloquial in style, making the communicative situations less official. Structurally, the releases have changed from a letter-like to a more newsflash-like format. The changes in the releases can be explained by the development towards more professional communications and the more market-oriented ideology adopted in the communicative environment. Key words: change in administrative language, press releases, critical text analysis, linguistic text analysis

Relevance:

10.00%

Publisher:

Abstract:

The dissertation presents a functional model for analysis of song translation. The model is developed on the basis of an examination of theatrical songs and a comparison of three translations: the songs of the Broadway musical My Fair Lady (Lerner and Loewe, 1956), made for the premiere productions (1959–1960) in Swedish, Danish, and Norwegian. The analysis explores the three challenges of a song translator: the fitting of a text to existing music, the consideration of a prospective sung performance, and the verbal approximation of the content of the source lyric. The theoretical foundation is based on a functional approach to translation studies (Christiane Nord) and a structuralist/semiotic analysis of a theatrical message (Ivo Osolsobě, building on Roman Jakobson). Thus, three functional levels in the fitting of a text to music are explored: first, a prosodic/phonetic format; secondly, a poetic/rhetoric format; and thirdly, semantic/reflexive values (verbalizing musical expression). Similarly, three functional levels in the textual connections to a prospective performance are explored: first, a presentational goal; secondly, the theatrical potential; and thirdly, dramaturgic values (for example dramatic information and linguistic register). The functionality of Broadway musical theatre songs is analyzed, and the song score of My Fair Lady, source and target lyrics, is studied, with an in-depth analysis of seven of the songs. The three translations were all considered very well-made and are used in productions of the musical to this day. The study finds that the song translators appear to have worked from an understanding of the presentational goal, designed their target texts on the prosodic and poetic shape of the music, and pursued the theatrical functionality of the song, not by copying, but by recreating connections to relevant contexts, partly independently of the source lyrics, using the resources of the target languages. 
Besides metaphrases (the closest possible transfer), paraphrases and additions seem normally to be expected in song translation, but song translators may also follow highly individual strategies; for example, the Norwegian translator is consistently more verbally faithful than the Danish and Swedish translators. As a conclusion, it is suggested that although linguistic and cultural differences play a significant role, a translator's solution must nevertheless be arrived at, and assessed, in relation to the song as a multimedial piece of material. Insofar as a song can be considered a theatrical message (singers representing the voice, person, and situation of the song), the descriptive model presented in the study is also applicable to the translation of other types of song.

Relevance:

10.00%

Publisher:

Abstract:

Today, information and communication technology allows us to use multimedia in e-learning materials more than ever before. Multimedia, however, can increase the cognitive load of the learning process, so the kind of learning materials that should be produced cannot be taken for granted. This study examines the diversity of e-learning materials and the factors related to cognitive load; its main purpose was to study the multimodality of multimedia learning materials. The subject of this study is the learning material on the Kansalaisen ABC web site published by YLE. The learning material on the site was approached from three perspectives, with the following specific questions: (1) What kinds of form features are used in the representations of the learning material? Are certain form features preferred over others? (2) How do the cognitive load factors take shape in the learning materials and between the forms? (3) How does the multimodality phenomenon appear in the learning materials, and in what ways are form features and cognitive load factors related to multimodality? In this case study a qualitative approach was used: the analysis of the form features and the cognitive load factors in the learning materials was based on content analysis. The form features included the specification of a format, the structure, the interactivity type and the type of learning material. The results showed that the web site includes various representations of both verbal and visual forms. Cognitive load factors were related more to visual than to verbal material. Material presented according to the principles of the cognitive theory of multimedia learning did not cause cognitive overload in the informants. Cognitive load increased when students needed to split their attention between the multimedia forms in time and place. The results also indicated how different individual characteristics are reflected in the cognitive load factors.

Relevance:

10.00%

Publisher:

Abstract:

In this study the researcher wanted to show the observed connection between mathematics and textile work. To carry this out, the researcher designed a textbook for the upper secondary school in the Tietoteollisuuden Naiset (TiNA) project at Helsinki University of Technology (URL: http://tina.tkk.fi/). The assignments were designed as additional teaching material to enhance and reinforce female students' confidence in mathematics and in the management of their textile work. The research strategy was action research, of which two cycles have been carried out. The first cycle consisted of producing the textbook, and in the second cycle its usability was investigated; the third cycle is not included in this report. In the second cycle of the action research, data was collected from 15 teachers: five textile teachers, four mathematics teachers and six teachers of both subjects. They all familiarized themselves with the textbook assignments and answered a questionnaire on the basis of their own teaching experience. The questionnaire was constructed by applying theories of usability and of teaching-material assessment. The data consisted of qualitative and quantitative information, which was analysed by content analysis, with the assistance of a spreadsheet program, into either qualitative or statistical descriptions. According to the results, the textbook assignments seemed to suit mathematics lessons better than textile work lessons. The assignments nevertheless pointed out the clear interconnectedness of textile work and mathematics. Most of the assignments could be used as such, or as applications, in upper secondary school textile work and mathematics lessons. The textbook assignments were also applicable at different stages of the teaching process, e.g. as introduction, as repetition, to support individual work, or as group projects. In principle the textbook assignments were well placed and designed at the correct level of difficulty. Negative findings concerned assignments that were too difficult, lack of pupil motivation, and task formats unfamiliar to the teachers. More clarity was wished for in some assignments, and a need was expressed especially for easy tasks and for assignments in geometry. Assignments leading pupils to independent thinking were also asked for. Two important improvements to the textbook's accessibility would be to make the assignments available in HTML format over the Internet and to add a handicraft reference book.

Relevance:

10.00%

Publisher:

Abstract:

Microarrays have a wide range of applications in the biomedical field. From the beginning, arrays have mostly been utilized in cancer research, including the classification of tumors into different subgroups and the identification of clinical associations. In the microarray format, a collection of small features, such as different oligonucleotides, is attached to a solid support. The advantage of microarray technology is the ability to simultaneously measure changes in the levels of multiple biomolecules. Because many diseases, including cancer, are complex, involving an interplay between various genes and environmental factors, the detection of only a single marker molecule is usually insufficient for determining disease status. Thus, a technique that simultaneously collects information on multiple molecules allows better insights into a complex disease. Since microarrays can be custom-manufactured or obtained from a number of commercial providers, understanding data quality and comparability between different platforms is important for extending the use of the technology to areas beyond basic research. When standardized, integrated array data could ultimately help to offer a complete profile of the disease, illuminating the mechanisms and genes behind disorders as well as facilitating disease diagnostics. In the first part of this work, we aimed to elucidate the comparability of gene expression measurements from different oligonucleotide and cDNA microarray platforms. We compared three different gene expression microarrays: one was a commercial oligonucleotide microarray and the others were commercial and custom-made cDNA microarrays. The filtered gene expression data from the commercial platforms correlated better across experiments (r=0.78-0.86) than the expression data between the custom-made platform and either of the two commercial platforms (r=0.62-0.76). Although the results from the different platforms correlated reasonably well, combining and comparing the measurements was not straightforward. Clone errors on the custom-made array, together with annotation and technical differences between the platforms, introduced variability in the data. In conclusion, the different gene expression microarray platforms provided results sufficiently concordant for the research setting, but the variability represents a challenge for developing diagnostic applications for the microarrays. In the second part of the work, we performed an integrated high-resolution microarray analysis of gene copy number and expression in 38 laryngeal and oral tongue squamous cell carcinoma cell lines and primary tumors. Our aim was to pinpoint genes whose expression was impacted by changes in copy number. The data revealed that amplifications in particular had a clear impact on gene expression. Across the genome, 14-32% of genes in the highly amplified regions (copy number ratio >2.5) showed associated overexpression. The impact of decreased copy number on gene underexpression was less clear. Using statistical analysis across the samples, we systematically identified hundreds of genes for which an increased copy number was associated with increased expression. For example, our data implied that FADD and PPFIA1 are frequently overexpressed at the 11q13 amplicon in HNSCC. The 11q13 amplicon, which includes known oncogenes such as CCND1 and CTTN, is well characterized in different types of cancer, but the roles of FADD and PPFIA1 remain obscure. Taken together, the integrated microarray analysis revealed a number of known as well as novel target genes in altered regions in HNSCC. The identified genes provide a basis for functional validation and may eventually lead to the identification of novel candidates for targeted therapy in HNSCC.
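The two kinds of comparison described above, cross-platform correlation of expression values and the flagging of amplified, overexpressed genes, can be sketched in a few lines. All numbers below are invented for illustration; the gene names are taken from the abstract, but the values attached to them here are not measurements.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented log-ratio values for the same six genes on two platforms.
platform_a = [0.1, 1.2, -0.8, 2.0, 0.4, -1.1]
platform_b = [0.3, 1.0, -0.5, 1.8, 0.6, -0.9]
print(f"cross-platform r = {pearson(platform_a, platform_b):.2f}")

# Flag genes whose high copy number ratio (>2.5) coincides with
# overexpression (log-ratio > 1.0); values are illustrative only.
copy_ratio = {"FADD": 3.1, "PPFIA1": 2.8, "CCND1": 3.5, "GENE_X": 1.0}
expression = {"FADD": 1.4, "PPFIA1": 1.1, "CCND1": 2.0, "GENE_X": 0.1}
amplified_overexpressed = sorted(
    g for g in copy_ratio if copy_ratio[g] > 2.5 and expression[g] > 1.0
)
print(amplified_overexpressed)
```

In practice such an analysis would of course run over thousands of genes and include significance testing, but the thresholding step has this shape.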

Relevance:

10.00%

Publisher:

Abstract:

In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to try to reconcile these two developments. XML as a highly redundant text-based format is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. 
The experimentation is extensive and, due to using several different devices, also provides a glimpse of what the performance of these systems may look like in the future.
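The redundancy of textual XML referred to above is easy to demonstrate: repeated tag names compress extremely well, so even a generic compressor shrinks a message substantially. This is only a baseline illustration with an invented message; the dissertation pursues a dedicated serialization format rather than general-purpose compression.

```python
import zlib

# An invented, repetitive SOAP-style message.
items = "".join(
    f"<item><id>{i}</id><name>sensor-{i}</name></item>" for i in range(20)
)
message = (
    '<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">'
    f"<env:Body>{items}</env:Body></env:Envelope>"
)

raw = message.encode("utf-8")
compressed = zlib.compress(raw, level=9)
print(f"{len(raw)} bytes raw, {len(compressed)} bytes compressed")
```

On a battery-powered device, however, the CPU cost of compressing and of re-parsing the text afterwards also matters, which is one motivation for replacing the serialization format itself.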

Relevance:

10.00%

Publisher:

Abstract:

XML documents are becoming more and more common in various environments. In particular, enterprise-scale document management is commonly centred around XML, and desktop applications as well as online document collections are soon to follow. The growing number of XML documents increases the importance of appropriate indexing methods and search tools in keeping the information accessible; we therefore focus on content stored in XML format as we develop such indexing methods. Because XML is used for different kinds of content, ranging all the way from records of data fields to narrative full texts, the methods of Information Retrieval face a new challenge in identifying which content is subject to data queries and which should be indexed for full-text search. In response to this challenge, we analyse the relation between character content and XML tags in XML documents in order to separate the full text from the data. As a result, we are able both to reduce the size of the index by 5-6% and to improve retrieval precision as we select the XML fragments to be indexed. Besides being challenging, XML comes with many unexplored opportunities which have not received much attention in the literature. For example, authors often tag the content they want to emphasise by using a typeface that stands out. The tagged content constitutes phrases that are descriptive of the content and useful for full-text search. Such phrases are simple to detect in XML documents, but also possible to confuse with other inline-level text. Nonetheless, the search results seem to improve when the detected phrases are given additional weight in the index. Similar improvements are reported when related content, including titles, captions, and references, is associated with the indexed full text. Experimental results show that, at least for certain types of document collections, the proposed methods help us find the relevant answers.
Even when we know nothing about the document structure but the XML syntax, we are able to take advantage of the XML structure when the content is indexed for full-text search.
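A minimal sketch of the full-text/data separation, assuming a simple word-count heuristic over an element's direct character content; the analysis in the thesis is more elaborate, and the sample document and threshold below are invented.

```python
import xml.etree.ElementTree as ET

DOC = """<article>
  <meta><id>4711</id><date>2003-05-01</date><price>12.50</price></meta>
  <title>On the indexing of structured documents</title>
  <p>Full-text passages tend to contain long runs of natural language,
     with whole sentences and punctuation, unlike short data fields.</p>
</article>"""

def is_fulltext(elem, min_words=5):
    """Treat an element as full-text if its direct character content
    is a reasonably long run of words; otherwise index it as data."""
    text = (elem.text or "").strip()
    return len(text.split()) >= min_words

root = ET.fromstring(DOC)
for elem in root.iter():
    kind = "full-text" if is_fulltext(elem) else "data"
    print(f"<{elem.tag}>: {kind}")
```

Here the `title` and `p` elements would be routed to the full-text index, while `id`, `date` and `price` would be indexed as data fields for data queries.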

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we present and evaluate two pattern-matching-based methods for answer extraction in textual question answering systems. A textual question answering system is a system that seeks answers to natural language questions from unstructured text. Textual question answering systems are an important research problem because, as the amount of natural language text in digital format grows all the time, the need for novel methods for pinpointing important knowledge in the vast textual databases becomes more and more urgent. We concentrate on developing methods for the automatic creation of answer extraction patterns; a new type of extraction pattern is also developed. The pattern matching based approach chosen is interesting because of its language and application independence. The answer extraction methods are developed in the framework of our own question answering system. Publicly available datasets in English are used as training and evaluation data for the methods. The techniques developed are based on the well-known methods of sequence alignment and hierarchical clustering, with a similarity metric based on edit distance. The main conclusion of the research is that answer extraction patterns consisting of the most important words of the question, together with the following information extracted from the answer context: plain words, part-of-speech tags, punctuation marks and capitalization patterns, can be used in the answer extraction module of a question answering system. This type of pattern, and the two new methods for generating answer extraction patterns, provide average results when compared to those produced by other systems using the same dataset. However, most answer extraction methods in the question answering systems tested with the same dataset are both hand-crafted and based on a system-specific and fine-grained question classification. The new methods developed in this thesis require no manual creation of answer extraction patterns. As a source of knowledge, they require a dataset of sample questions and answers, as well as a set of text documents that contain answers to most of the questions. The question classification used in the training data is a standard one and is already provided in the publicly available data.
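The edit-distance metric underlying the alignment and clustering can be sketched as the standard dynamic-programming computation, here over token sequences; the two answer contexts below are invented examples, not data from the thesis.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences,
    computed with the usual dynamic-programming table."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Two invented answer contexts, tokenized; a small distance suggests
# the contexts could be clustered and generalized into one pattern.
ctx1 = "was born in PLACE in 1879 .".split()
ctx2 = "was born at PLACE in 1875 .".split()
print(edit_distance(ctx1, ctx2))
```

Hierarchical clustering would then merge contexts with small pairwise distances, and an aligned cluster could be generalized into an extraction pattern by keeping the shared tokens and abstracting the differing positions.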

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurements of the expression of tens of thousands of genes simultaneously. In a single study, this data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, this data has been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a large new ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text mining and decision tree based method for the automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors that are characteristic of each laboratory in this large cross-laboratory integrated dataset, was ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and the help of another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression is given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
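The conversion of free-text annotations into a categorised format can be illustrated with a toy rule table. The real method described above used a large curated ontology together with text mining and decision trees, so the three keyword rules and the sample annotations below are purely hypothetical.

```python
# Toy stand-in for an ontology-driven annotation categoriser.
RULES = [
    ("cell line", ["hela", "k562", "cell line"]),
    ("disease state", ["melanoma", "carcinoma", "tumor", "tumour"]),
    ("organism part", ["liver", "brain", "kidney"]),
]

def categorise(annotation):
    """Return the first category whose keywords match the annotation."""
    text = annotation.lower()
    for category, keywords in RULES:
        if any(k in text for k in keywords):
            return category
    return "unknown"

samples = [
    "HeLa cells, passage 12",
    "malignant melanoma biopsy",
    "normal adult liver tissue",
    "pooled reference RNA",
]
for s in samples:
    print(f"{s!r} -> {categorise(s)}")
```

Annotations that no rule matches fall through to "unknown", which in a real pipeline would be queued for manual curation or excluded from the integrated dataset.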

Relevance:

10.00%

Publisher:

Abstract:

In recent years, XML has been accepted as the format of messages for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This XML usage is understandable, as the format itself is a well-accepted standard for structured data, and it has excellent support in many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other. XML, as a fully text-based format, takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, both in processing and in transmission, be utilized efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service and detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications; a binary serialization format for the data model of the API; and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components. As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, the wireless world should do the same to provide a better-integrated networking infrastructure. However, the problems raised by XML adoption touch all of the higher layers of application programming, so instead of concentrating simply on the serialization format, we conclude that improvements need to be made in an integrated fashion across all of these layers.
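The gain available from a binary serialization can be illustrated with a toy string-table encoding in which each distinct token is transmitted once and later repeats become short back-references. This is not the serialization format developed in the thesis, only a sketch of the general dictionary idea, with an invented tag stream.

```python
import struct

def encode(tokens):
    """Serialize a token stream: literals carry their UTF-8 bytes,
    repeated tokens become 3-byte back-references into a table."""
    table, out = {}, bytearray()
    for tok in tokens:
        if tok in table:
            out += struct.pack(">BH", 0, table[tok])        # reference
        else:
            data = tok.encode("utf-8")
            out += struct.pack(">BH", 1, len(data)) + data  # literal
            table[tok] = len(table)
    return bytes(out)

def decode(blob):
    """Inverse of encode(): rebuild the token stream and the table."""
    table, tokens, i = [], [], 0
    while i < len(blob):
        kind, val = struct.unpack_from(">BH", blob, i)
        i += 3
        if kind == 0:
            tokens.append(table[val])
        else:
            tok = blob[i:i + val].decode("utf-8")
            i += val
            tokens.append(tok)
            table.append(tok)
    return tokens

# An invented message flattened into a tag stream with repetition.
msg = (["soap:Envelope", "soap:Body"]
       + ["inv:lineItem", "inv:quantity"] * 5
       + ["soap:Body", "soap:Envelope"])
blob = encode(msg)
assert decode(blob) == msg
text_bytes = sum(len(t.encode("utf-8")) for t in msg)
print(f"{len(blob)} bytes encoded vs {text_bytes} bytes of raw tag text")
```

The more a message repeats its element names, as protocol messages typically do, the larger the saving; a production format would also need typed values, namespaces and a framing layer on top of this core idea.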