47 results for software, translation, validation tool, VMNET, Wikipedia, XML


Abstract:

Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in the prevention and provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test, special interest was paid to analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized textbooks for reading in grade one. Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good. The items also discriminated well. Principal component analysis revealed one powerful latent dimension (initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. The K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having been in nursery school was also of importance.
Based on the findings of the study, a short version of the group test was created. It is suggested for use in grade one screening processes aimed at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research as well as for actions to improve the literacy skills of Tanzanian children are presented.
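To make the two multivariate steps above concrete, here is a minimal Python sketch, with simulated placeholder data rather than the study's, of a principal component analysis of seven subscale scores followed by a K-Means clustering into four reader groups:

```python
# Minimal sketch of the abstract's analysis steps; the scores are simulated,
# not the study's data (the real study had 337 children and 7 subscales).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.integers(0, 21, size=(337, 7)).astype(float)  # 7 subscale scores per child

pca = PCA().fit(scores)
print("Variance explained by first component:", pca.explained_variance_ratio_[0])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
print("Cluster sizes:", np.bincount(kmeans.labels_))  # e.g. at-risk, strugglers, readers, good readers
```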

Abstract:

This Master's thesis investigates the performance of the Olkiluoto 1 and 2 APROS model in fast transients. The thesis includes a general description of the Olkiluoto 1 and 2 nuclear power plants and of the most important safety systems. The theoretical background of the APROS code as well as the scope and content of the Olkiluoto 1 and 2 APROS model are also described. The event sequences of the anticipated operational transients considered in the thesis are presented in detail, as they form the basis for the analysis of the APROS calculation results. The calculated fast operational transients comprise loss-of-load cases and two cases related to an inadvertent closure of one main steam isolation valve. As part of the thesis work, the inaccurate initial data values found in the original 1-D reactor core model were corrected. The input data needed for the creation of a more accurate 3-D core model were defined. The analysis of the APROS calculation results showed that although the main results were in good agreement with the measured plant data, differences were also detected. These differences were found to be caused by deficiencies and uncertainties related to the calculation model. According to the results, the reactor core and the feedwater systems cause most of the differences between the calculated and measured values. Based on these findings, it will be possible to develop the APROS model further to make it a reliable and accurate tool for the analysis of operational transients and possible plant modifications.
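As a hypothetical illustration of the code-to-plant comparison such validation rests on, the following Python sketch quantifies the deviation between a calculated variable and the corresponding measured signal over a transient; the signal shapes, the variable and the units are invented placeholders, not APROS output:

```python
# Toy comparison of a simulated and a measured transient signal via RMSE.
# Both arrays are placeholders standing in for real plant and APROS data.
import numpy as np

t = np.linspace(0.0, 60.0, 601)               # time [s]
measured = 7.0 + 0.5 * np.exp(-t / 10.0)      # e.g. a measured pressure [MPa] (invented)
calculated = 7.0 + 0.48 * np.exp(-t / 11.0)   # corresponding calculated result (invented)

rmse = np.sqrt(np.mean((calculated - measured) ** 2))
print(f"RMSE over the transient: {rmse:.4f} MPa")
```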

Abstract:

The aims of this study were to validate an international Health-Related Quality of Life (HRQL) instrument, to describe child self- and parent-proxy-assessed HRQL at child ages 10 to 12, and to compare child self-assessments with parent-proxy assessments and school nursing documentation. The study is part of the Schools on the Move research project. In phase one, a cross-cultural translation and validation process was performed to develop a Finnish version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™ 4.0). The process included a two-way translation, cognitive interviews (children n=7, parents n=5) and a survey (children n=1097, parents n=999). In phase two, baseline and follow-up surveys (children n=986, parents n=710) were conducted to describe and compare the child self- and parent-proxy-assessed HRQL in school children between the ages of 10 and 12. Phase three included two separate data sets, school nurse documented patient records (children n=270) and a survey (children n=986). The relation between child self-assessed HRQL and school nursing documentation was evaluated. Validity and reliability of the Finnish version of PedsQL™ 4.0 were good (Child Self Report α=0.91, Parent-Proxy Report α=0.88). Children reported lower HRQL scores in the emotional (mean 76/80) than in the physical (mean 85/89) health domains, and significantly lower scores at the age of 10 than at 12 (dMean=4, p<0.001). Agreement between child self- and parent-proxy assessments was weak (r=0.4, p<0.001) but increased as the child grew from age 10 to 12 years. At health check-ups, school nurses frequently documented children's physical health, such as growth (97%) and posture (98/99%), but seldom emotional issues, such as mood (2/7%). The PedsQL™ 4.0 is a valid instrument for assessing HRQL in Finnish school children, although further research is recommended. Children's emotional wellbeing needs future attention. HRQL scores increase between childhood and adolescence. Concordance between child self- and parent-proxy-assessed HRQL is low. School nursing documentation related to child health check-ups is not in line with child self-assessed HRQL, and emotional issues need more attention.
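The reliability figures above are Cronbach's alpha values. Here is a minimal Python sketch of how that statistic is computed from an item-score matrix; the data are simulated, and the 23-item width is an assumption for illustration, not the survey's data:

```python
# Cronbach's alpha from a respondents-by-items score matrix (simulated data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(1097, 1))                   # shared trait per respondent
items = trait + 0.5 * rng.normal(size=(1097, 23))    # 23 correlated items (assumed count)
print(f"alpha = {cronbach_alpha(items):.2f}")
```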

Abstract:

The aim of this study was to develop a more accurate and reliable tool for evaluating the steady-state operation of the 50 line of the secondary circuit of the first reactor unit at the Loviisa nuclear power plant. A second aim was to survey and report on possible improvements to the SOLVO software used for the modeling. Based on the research, a tested, working process model was developed, documented and validated in detail. To support future development work and use of the model, an Excel-based tool was also developed, with which the integration between SOLVO and Excel can be taken further in a forthcoming SOLVO release. Alongside the modeling work, the most essential special characteristics and strengths of SOLVO were identified, and the development needs related to its use were surveyed. The most important development proposal that emerged was improving the transparency of the calculations performed in individual components. In the next development phase, it would also be advisable to open the component-specific calculation equations to user-specific modifications. Other significant results were also achieved during the work, mainly related to the connections between the parallel 10 and 50 lines. When the interactions between the lines were analyzed, they were found to play an essential role especially during series operation. If the model is to describe both series and parallel operation, it must cover both lines and all the components related to them. In addition, the model-based examination produced observations on the basis of which the current process can be developed further. The most important of these was that the optimal control temperature of the seawater pumps was found to settle between 4.5 and 4.6 °C. Another observation concerned the exhaust losses of the low-pressure turbines, in which, depending on the turbine flow, on average about 10 kJ/kg more enthalpy is lost than in the best possible case. Despite the small deviations observed during validation, the developed model corresponds well to the measurement results obtained from the plant and satisfies the other reliability criteria used in the same context.

Abstract:

Cells of epithelial origin, e.g. from breast and prostate cancers, effectively differentiate into complex multicellular structures when cultured in three dimensions (3D) instead of on conventional two-dimensional (2D) adherent surfaces. The spectrum of different organotypic morphologies is highly dependent on the culture environment, which can be either non-adherent or scaffold-based. When embedded in physiological extracellular matrices (ECMs), such as laminin-rich basement membrane extracts, normal epithelial cells differentiate into acinar spheroids reminiscent of glandular ductal structures. Transformed cancer cells, in contrast, typically fail to undergo acinar morphogenic patterns, forming poorly differentiated or invasive multicellular structures. The 3D cancer spheroids are widely accepted to better recapitulate various tumorigenic processes and drug responses. So far, however, 3D models have been employed predominantly in academia, whereas the pharmaceutical industry has yet to adopt them for wider and more routine use. This is mainly due to the poor characterisation of cell models, the lack of standardised workflows and high-throughput cell culture platforms, and the limited availability of proper readout and quantification tools. In this thesis, a complete workflow has been established entailing well-characterised 3D cell culture models for prostate cancer, a standardised 3D cell culture routine based on a high-throughput-ready platform, automated image acquisition with concomitant morphometric image analysis, and data visualisation, in order to enable large-scale high-content screens. Our integrated suite of software and statistical analysis tools was optimised and validated using a comprehensive panel of prostate cancer cell lines and 3D models. The tools quantify multiple key cancer-relevant morphological features, ranging from cancer cell invasion through multicellular differentiation to growth, and detect dynamic changes both in morphology and function, such as cell death and apoptosis, in response to experimental perturbations including RNA interference and small molecule inhibitors. Our panel of cell lines included many non-transformed and most currently available classic prostate cancer cell lines, which were characterised for their morphogenetic properties in 3D laminin-rich ECM. The phenotypes and gene expression profiles were evaluated concerning their relevance for pre-clinical drug discovery, disease modelling and basic research. In addition, a spontaneous model for invasive transformation was discovered, displaying a high degree of epithelial plasticity. This plasticity is mediated by an abundant bioactive serum lipid, lysophosphatidic acid (LPA), and its receptor LPAR1. The invasive transformation was caused by abrupt cytoskeletal rearrangement through impaired G protein alpha 12/13 and RhoA/ROCK signalling, and mediated by upregulated adenylyl cyclase/cyclic AMP (cAMP)/protein kinase A and Rac/PAK pathways. The spontaneous invasion model tangibly exemplifies the biological relevance of organotypic cell culture models. Overall, this thesis work underlines the power of novel morphometric screening tools in drug discovery.
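As an illustration of the morphometric readout idea, here is a minimal Python sketch, using a synthetic mask instead of real microscope images, that extracts simple per-structure shape features of the kind such a pipeline quantifies:

```python
# Extract basic shape features from a segmented structure with scikit-image.
# The mask is a synthetic ellipse standing in for a segmented spheroid.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(100, 100, 40, 60)   # one synthetic "spheroid"
mask[rr, cc] = 1

for region in regionprops(label(mask)):
    print(f"area={region.area}, eccentricity={region.eccentricity:.2f}, "
          f"solidity={region.solidity:.2f}")
```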

Abstract:

Formal software development processes and well-defined development methodologies are nowadays seen as the definite way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineers. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way which allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis. The dissertation analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of different research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work.
However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are proven feasible in several case studies where the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
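To make the underlying process-model idea concrete, here is a minimal Python sketch, an assumed simplification rather than actual SPEM notation: work is described as tasks performed by roles that consume and produce work products, so that two teams' modeled practices can be compared programmatically:

```python
# Simplified, SPEM-inspired process description (not SPEM itself): roles
# perform tasks that consume and produce work products. Plain data like this
# makes lightweight comparison of two teams' practices straightforward.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    performed_by: str                                  # role name
    inputs: list[str] = field(default_factory=list)    # work products consumed
    outputs: list[str] = field(default_factory=list)   # work products produced

team_a = [Task("Write unit tests", "Developer", ["Design notes"], ["Test suite"]),
          Task("Code review", "Reviewer", ["Change set"], ["Review report"])]
team_b = [Task("Code review", "Reviewer", ["Change set"], ["Review report"])]

# Compare the modeled practices in a systematic way:
only_a = {t.name for t in team_a} - {t.name for t in team_b}
print("Tasks modeled only by team A:", only_a)
```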

Abstract:

This thesis presents an approach for formulating and validating a space averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with the Eulerian description of phases. Such a description requires the use of fine meshes and small time steps for the proper prediction of the hydrodynamics. This constraint on the mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results. A coarse mesh simulation fails to resolve the mesoscale structures and predicts uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations which can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space averaging modeling approach in the formulation of closure models for coarse mesh simulations of gas-solid flow in fluidized beds with Geldart group B particles. In the analysis leading to the closure correlation for the space averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of solids mass flux. Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
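As a worked illustration of the form such a closure takes, the following is a schematic equation under assumed notation, not the thesis's actual correlation: the microscopic drag coefficient is scaled by a correction factor depending on the three parameters identified above.

```latex
% Schematic space averaged drag closure (assumed notation): the microscopic
% drag coefficient \beta is scaled by a correction factor H depending on the
% averaging (filter) size \Delta, the solid volume fraction \alpha_s, and the
% dimensionless wall distance y/D.
\[
  \bar{F}_{\mathrm{drag}}
    = H\!\left(\Delta,\ \alpha_s,\ y/D\right)\,
      \beta \left(\bar{u}_g - \bar{u}_s\right),
  \qquad 0 < H \le 1 ,
\]
% where \bar{u}_g and \bar{u}_s are the space averaged gas and solid
% velocities; H < 1 expresses that unresolved mesoscale structures reduce
% the effective gas-solid drag on a coarse mesh.
```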

Abstract:

Services are getting more complex and difficult to manage, but much less attention and fewer resources are directed towards service development than product development, both in the literature and in business life. The paper sheds light on how productization, together with modularization and service blueprinting, would help make consultancy services more manageable, scalable and efficient while retaining their customer focus. The research was qualitative and based on action research and participant observation. A theoretical framework was constructed on the basis of relevant literature and was then evaluated in two steps: first, the overall framework was evaluated by mirroring it against a real-life case at QPR Software. Then a service blueprint was created of a selected service, and its benefits and challenges were evaluated. The framework reflected the case company's situation well. Service blueprinting proved to be a valuable tool for facilitating discussion and knowledge sharing. The characteristics of consultancy services pose many challenges for productization. They are highly heterogeneous and people-centric, whereas productization is based on standardizing the offering and the delivery processes and on managing the service's tangible properties. The research indicated that by modularizing services, both customer focus and standardization can be achieved, since variety can be created from standardized modules.

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs using a programming language into a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but in the case of large systems, the need arises for mechanisms that analyze the designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use the logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
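As a hypothetical illustration of the translation idea, the following Python sketch emits a tiny design as logic-style facts and performs one toy inference, detecting a generalization cycle; the fact syntax and the check are illustrative stand-ins, not the thesis's actual tool chain:

```python
# Toy translation of UML elements into Prolog-style facts, plus a simple
# consequence check: a cyclic generalization makes the design inconsistent.
classes = ["Order", "Invoice"]
generalizations = [("Invoice", "Order"), ("Order", "Invoice")]  # (child, parent)

facts = [f"class({c.lower()})." for c in classes]
facts += [f"generalization({a.lower()}, {b.lower()})." for a, b in generalizations]
print("\n".join(facts))

def has_cycle(edges):
    """Transitive closure of the generalization relation, then self-loop test."""
    reach = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(reach):
            for (c, d) in list(reach):
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return any(a == b for (a, b) in reach)

print("Inconsistent (generalization cycle):", has_cycle(generalizations))
```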

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Abstract:

The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put both into understanding the minimum features needed to satisfy the scheduling requirements in the industry and into the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus the gaps on the market. It becomes clear that there is no such system on the marketplace today, and that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides a better overall understanding of the different processes in this particular industry for the case company.
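As an illustration of the optimization-based scheduling idea, here is a toy linear program in Python using PuLP; the grades, margins, capacities and demand figures are invented placeholders, not the product's actual model:

```python
# Toy production scheduling LP: choose production rates for two pulp grades
# over three shifts to maximize margin under a shared capacity and a demand
# constraint. All numbers are placeholders.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

grades, shifts = ["softwood", "hardwood"], [0, 1, 2]
margin = {"softwood": 42.0, "hardwood": 55.0}   # EUR per tonne (invented)
capacity = 120.0                                 # tonnes per shift (invented)

prob = LpProblem("pulp_schedule", LpMaximize)
x = {(g, s): LpVariable(f"x_{g}_{s}", lowBound=0) for g in grades for s in shifts}

prob += lpSum(margin[g] * x[g, s] for g in grades for s in shifts)  # objective
for s in shifts:
    prob += lpSum(x[g, s] for g in grades) <= capacity              # shared capacity
prob += lpSum(x["hardwood", s] for s in shifts) >= 100              # demand to satisfy

prob.solve()
for (g, s), var in x.items():
    print(g, s, var.value())
```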

Abstract:

The target of this thesis is to evaluate a bid, project and resource management IT tool for the service delivery process via a proof-of-concept (POC) project, to assess whether the tested software is an appropriate tool for the Case Company's business requirements. The literature suggests that IT project implementation is still a grey area in scientific research. IT projects also have a notably high rate of failure, one significant reason being insufficient planning. To tackle this risk, the Case Company decided to perform a POC project, which involved a hands-on testing period of the assessed system. End users from the business side feel that the current, highly tailored project management tool is inflexible and difficult to use, and that it sets unnecessary limitations for the business. Semi-structured interviews and a survey form were used to collect information about current business practices and the business requirements related to the IT tool. For the POC project, a project group involving members from each of the Case Company's four business divisions was established to perform the hands-on testing. Based on the data acquired during the interviews and the hands-on testing period, a target state was defined and a gap analysis was carried out by comparing the features provided by the current tool and the tested tool to the target state; together with the current state description, these are the most important results of the thesis.

Abstract:

The emerging technologies have recently challenged libraries to reconsider their role as mere mediators between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I will be drawing a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, and in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, during the genesis and consolidation period of these literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language. This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication, and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating morphological analyzers and online dictionaries at the Helsinki and Tromsø universities, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools for supporting linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology.
The mission is to improve the usage and usability of the digitized content. During the project, we have advanced methods that refine the raw data for further use, especially in linguistic research. How does the library meet these objectives, which appear to be beyond its traditional playground? The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR) often contains too many mistakes to be used as such in research. The mistakes in OCRed texts must be corrected. For enhancing the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary to implement, since these rare and peripheral prints often include characters that had already fallen out of use, which are sadly neglected by modern OCR software developers but belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor for the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. In traditional crowdsourcing, the targets have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view there is a danger that the needs of linguists are not necessarily met. A remarkable downside is also the lack of a shared goal or social affinity; there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012). There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or longing for language revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches for completing the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill suits the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation.
These communities can correspond to research needs more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to produce qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in those fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay where the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available to the public, so the society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving 'two masters': research and society.
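As a minimal illustration of what the editor described above does to the data, the following Python sketch corrects the CONTENT attribute of one OCR-recognized word in an ALTO XML String element; the element layout is simplified and assumed, not the project's actual files:

```python
# Correct one OCR-recognized word in a (simplified, assumed) ALTO XML file:
# in ALTO, each recognized word is a <String> element whose CONTENT attribute
# holds the text, which is what a human editor fixes.
import xml.etree.ElementTree as ET

NS = "http://www.loc.gov/standards/alto/ns-v2#"
alto = f"""<alto xmlns="{NS}">
  <Layout><Page><PrintSpace><TextBlock><TextLine>
    <String CONTENT="wrod"/>
  </TextLine></TextBlock></PrintSpace></Page></Layout>
</alto>"""

root = ET.fromstring(alto)
for s in root.iter(f"{{{NS}}}String"):
    if s.get("CONTENT") == "wrod":
        s.set("CONTENT", "word")       # the human correction
print(ET.tostring(root, encoding="unicode"))
```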

Abstract:

The topic of this thesis is professional translators' information seeking when only online sources are available. The study examines where and how professional translators search for information on the internet when translating a source text from English into Finnish. In addition, the purpose of the study is to show that information-seeking skills and source criticism are translation competences that should be both maintained and taught as part of translator training. The material was collected empirically using three methods. The translation process and the information seeking that took place during it were recorded using the Camtasia screen recording software and the Translog-II keystroke logging program. In addition, the translators who participated in the study filled in two questionnaires, the first of which contained background questions and the second retrospective questions about the process itself. The questionnaires were implemented with the Webropol survey tool. Material was collected from a total of five test sessions. The study examined the information-seeking actions of three professional translators more closely by isolating from the translation processes those pauses during which the translators searched for information on the internet. Regarding the online sources used, the study yielded results similar to those of earlier studies: the most used were Google, Wikipedia and various online dictionaries. However, this study revealed that professional translators' information-seeking patterns vary depending on both the translator's field of specialization and the level of their information-seeking skills. When forced to work outside their familiar working environment and their own field of specialization, some professional translators also resort to the more rudimentary information-seeking techniques that translation students have commonly been observed to use. The results also revealed that information seeking can take up to 70 per cent of the time spent on the whole translation process, depending on the translator's prior knowledge of the subject of the source text and the efficiency of the information seeking. Based on the results of the study, it can be said that professional translators, too, should develop their information-seeking skills to keep their translation process efficient. In addition, translators should remember to critically evaluate the information sources they use: source criticism is needed especially when using online sources. For this reason, information-seeking skills and source criticism should be taught and practiced already as part of translator training. Translators should also not leave information seeking to online sources alone, but should continue to make use of printed sources as well as human sources.
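As a toy illustration of the pause-based measure used in the study design, the following Python sketch sums logged search intervals and relates them to the total translation time; the numbers are invented placeholders, not the study's data:

```python
# Share of translation time spent on web searches, computed from logged
# (start, end) pause intervals. All values are placeholders.
search_intervals = [(12.0, 95.0), (160.0, 400.0), (520.0, 900.0)]  # seconds
total_time = 1200.0                                                # whole process, seconds

search_time = sum(end - start for start, end in search_intervals)
print(f"Share of time spent on information seeking: {search_time / total_time:.0%}")
```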

Abstract:

Wood-based bioprocesses are among the fields of interest with the most potential in the circular economy. Expanding the use of wood raw material in sustainable industrial processes is acknowledged on both a global and a regional scale. This thesis concerns the application of a capillary zone electrophoresis (CZE) method with the aim of monitoring wood-based bioprocesses. The range of detectable carbohydrate-related compounds is expanded to furfural and polydatin in aqueous matrices. The experimental portion was conducted on a laboratory scale with samples imitating process samples. The thesis presents a novel strategy for uncertainty evaluation via in-house validation. The focus of the work is on the uncertainty factors of the CZE method. CZE equipment is sensitive to ambient conditions; therefore, proper validation is essential for robust application. The thesis introduces a tool for process monitoring of modern bioprocesses. As a result, it is concluded that the applied CZE method provides additional results for the analysed samples and that the profiling approach is suitable for detecting changes in process samples. The CZE method shows significant potential in process monitoring because of its capability of simultaneously detecting clusters of carbohydrate-related compounds. The clusters can be used as summary terms indicating process variation and drift.
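As a generic illustration of combining in-house validation uncertainty components, here is a Python sketch using the standard root-sum-of-squares rule of measurement-uncertainty practice; the component names and values are invented placeholders, not the thesis's data:

```python
# Combine independent relative standard uncertainty components by
# root-sum-of-squares and report an expanded uncertainty (k = 2).
# Component names and values are placeholders.
import math

components = {
    "repeatability": 0.031,
    "intermediate precision": 0.042,
    "calibration/trueness": 0.025,
}
u_c = math.sqrt(sum(u * u for u in components.values()))
U = 2.0 * u_c    # expanded uncertainty, coverage factor k = 2
print(f"combined u_c = {u_c:.3f}, expanded U (k=2) = {U:.3f}")
```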