Abstract:
A capillary electrophoresis method originally designed for the analysis of monosaccharides was validated using reference solutions of polydatin. The validation was conducted by determining the limit of detection (LOD), the limit of quantification (LOQ) and the range of linearity, and by determining the levels of uncertainty with respect to repeatability and reproducibility. The reliability of the results obtained is also discussed. A guide with recommendations concerning the validation and overall design of CE analysis sequences is also produced as a result of this study.
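The abstract does not state how LOD and LOQ were calculated; a common approach (following the ICH Q2-style formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S is the calibration slope) can be sketched as follows. The calibration data below are invented for illustration only.

```python
# Illustrative sketch (not from the study): estimating LOD and LOQ from a
# calibration curve using LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S,
# where sigma is the residual standard deviation and S is the slope.

def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def lod_loq(conc, response):
    a, b = fit_line(conc, response)
    residuals = [yi - (a + b * xi) for xi, yi in zip(conc, response)]
    # Residual standard deviation with n - 2 degrees of freedom.
    sigma = (sum(r * r for r in residuals) / (len(conc) - 2)) ** 0.5
    return 3.3 * sigma / b, 10 * sigma / b

# Hypothetical calibration data: concentration (mg/L) vs. detector response.
conc = [1.0, 2.0, 4.0, 8.0, 16.0]
resp = [10.2, 19.8, 41.1, 79.5, 160.3]
lod, loq = lod_loq(conc, resp)
print(f"LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")
```

By construction LOQ is always 10/3.3 times LOD under these formulas; real validations would also check the estimates against measured low-concentration standards.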
Abstract:
We introduce a new tool for correcting OCR errors in a repository of cultural materials. The poster is aimed at all who are interested in digital humanities and who might find our tool useful; it will focus on the OCR correction tool and on the background processes. We have started a project on materials published in Finno-Ugric languages in the Soviet Union in the 1920s and 1930s. The materials are digitised in Russia. As they arrive, we publish them in DSpace (fennougrica.kansalliskirjasto.fi). For research purposes, the results of the OCR must be corrected manually, and for this we have built a new tool. Although similar tools exist, we found in-house development necessary in order to serve the researchers' needs. The tool enables exporting the corrected text as required by the researchers, and it makes it possible to distribute the correction tasks and their supervision. After a supervisor has approved a text as finalised, the new version of the work replaces the old one in DSpace. The project has benefitted the small language communities, opened channels for cooperation in Russia, and increased our capabilities in digital humanities. The OCR correction tool will be available to others.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which can help in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the Web Ontology Language (OWL 2) and can thus be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
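To make the idea of a generated skeleton with pre- and post-conditions concrete, the sketch below shows what such a skeleton for the hotel-booking example might look like. All names (BookingResource, the state values, the exception types) are hypothetical illustrations, not output of the thesis's actual tool.

```python
# Illustrative sketch only: a stateful REST resource skeleton whose methods
# enforce pre- and post-conditions around a simple state machine
# CREATED -> CONFIRMED -> CANCELLED. Names are hypothetical.

class PreconditionError(Exception):
    """Raised when a method is invoked in the wrong service state."""

class PostconditionError(Exception):
    """Raised when a method's implementation violates its contract."""

class BookingResource:
    def __init__(self):
        self.state = "CREATED"

    def put_confirm(self):
        # Precondition: a booking can only be confirmed from CREATED.
        if self.state != "CREATED":
            raise PreconditionError("PUT /confirm requires state CREATED")
        self.state = "CONFIRMED"  # developer fills in the real logic here
        # Postcondition: the resource must now be CONFIRMED.
        if self.state != "CONFIRMED":
            raise PostconditionError("confirm must yield state CONFIRMED")

    def delete(self):
        # Precondition: only a confirmed booking can be cancelled.
        if self.state != "CONFIRMED":
            raise PreconditionError("DELETE requires state CONFIRMED")
        self.state = "CANCELLED"

booking = BookingResource()
booking.put_confirm()
booking.delete()
print(booking.state)  # prints "CANCELLED"
```

The precondition checks constrain the client to follow the allowed request sequence, while the postcondition check guards the developer's hand-written body, mirroring the division of responsibility described above.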
Abstract:
Can crowdsourcing solutions serve many masters? Can they benefit both the laymen and native speakers of minority languages on the one hand, and serious linguistic research on the other? How did an infrastructure that was designed to support linguistics turn out to be a solution for raising awareness of native languages? Since 2012 the National Library of Finland has been developing the Digitisation Project for Kindred Languages, whose key objective is to support a culture of openness and interaction in linguistic research, but also to promote crowdsourcing as a tool for participation of the language community in research. In the course of the project, over 1,200 monographs and nearly 111,000 pages of newspapers in Finno-Ugric languages will be digitised and made available in the Fenno-Ugrica digital collection. This material was published in the Soviet Union in the 1920s and 1930s, and users have had only sporadic access to it. The publication of open-access, searchable materials from this period is a goldmine for researchers. Historians, social scientists and laymen with an interest in specific local publications can now find text materials pertinent to their studies. The linguistically oriented population can also find writings to delight them: (1) lexical items specific to a given publication, and (2) orthographically documented specifics of phonetics. In addition to the open-access collection, we developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary because these rare and peripheral prints often include archaic characters that are neglected by modern OCR software developers but belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage.
When designing the OCR editor, it was essential to consider both the needs of researchers and the capabilities of lay citizens, and to have them participate in the planning and execution of the project from the very beginning. By implementing the feedback from both groups iteratively, it was possible to turn the requested changes into tools for research that not only supported the work of linguists but also encouraged the citizen scientists to take up the challenge and work with the crowdsourcing tools for the benefit of research. This presentation will not only deal with the technical aspects, developments and achievements of the infrastructure, but will also highlight the way in which the user groups, researchers and lay citizens, were engaged as an active and communicative group of users, and how their contributions were turned to mutual benefit.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at the 12th Bibliotheca Baltica Symposium at Södertörn University Library
Abstract:
The objective of the present study was to validate the transit-time technique for long-term measurements of iliac and renal blood flow in rats. Flow measured with ultrasonic probes was confirmed ex vivo using excised arteries perfused at varying flow rates. An implanted 1-mm probe reproduced with accuracy different patterns of flow relative to pressure in freely moving rats and accurately quantitated the resting iliac flow value (on average 10.43 ± 0.99 ml/min, or 2.78 ± 0.3 ml min⁻¹ (100 g body weight)⁻¹). The measurements were stable over an experimental period of one week but were affected by probe size (resting flows were underestimated by 57% with a 2-mm probe when compared with a 1-mm probe) and by anesthesia (in the same rats, iliac flow was reduced by 50-60% when compared to the conscious state). Instantaneous changes of iliac and renal flow during exercise and recovery were accurately measured by the transit-time technique. Iliac flow increased instantaneously at the beginning of mild exercise (from 12.03 ± 1.06 to 25.55 ± 3.89 ml/min at 15 s) and showed a smaller increase when exercise intensity increased further, reaching a plateau of 38.43 ± 1.92 ml/min at the 4th min of moderate exercise intensity. In contrast, exercise-induced reduction of renal flow was smaller and slower, with 18% and 25% decreases at mild and moderate exercise intensities. Our data indicate that transit-time flowmetry is a reliable method for long-term and continuous measurements of regional blood flow at rest and can be used to quantitate the dynamic flow changes that characterize exercise and recovery.
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major difficulty here lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 peculiarity, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
The purpose of the present study was to translate the Roland-Morris (RM) questionnaire into Brazilian Portuguese and to adapt and validate it. First, 3 English teachers independently translated the original questionnaire into Brazilian Portuguese and a consensus version was generated. Later, 3 other translators, blind to the original questionnaire, performed a back-translation. This version was then compared with the original English questionnaire. Discrepancies were discussed and resolved by a panel of 3 rheumatologists, and the final Brazilian version was established (Brazil-RM). This version was then pretested on 30 chronic low back pain patients consecutively selected from the spine disorders outpatient clinic. In addition to the traditional clinical outcome measures, the Brazil-RM, a 6-point pain scale (from no pain to unbearable pain) with its numerical pain rating scale (PS) (0 to 5), and a visual analog scale (VAS) (0 to 10) were administered twice by one interviewer (1 week apart) and once by an independent interviewer. Spearman's correlation coefficient (SCC) and the intraclass correlation coefficient (ICC) were computed to assess test-retest and interobserver reliability. Cross-sectional construct validity was evaluated using the SCC. In the pretesting session, all questions were well understood by the patients. The mean time of questionnaire administration was 4 min and 53 s. The SCC and ICC were 0.88 (P < 0.01) and 0.94, respectively, for test-retest reliability and 0.86 (P < 0.01) and 0.95, respectively, for interobserver reliability. The correlation coefficient was 0.80 (P < 0.01) between the PS and the Brazil-RM score and 0.79 (P < 0.01) between the VAS and the Brazil-RM score. We conclude that the Brazil-RM was successfully translated and adapted for application to Brazilian patients, with satisfactory reliability and cross-sectional construct validity.
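The test-retest analysis above rests on Spearman's rank correlation; a minimal sketch of that computation (rho as the Pearson correlation of rank vectors, with average ranks for ties) is shown below. The patient scores are invented, not data from the study.

```python
# Illustrative sketch (not the study's analysis code): Spearman's rank
# correlation between two administrations of a questionnaire, as used for
# test-retest reliability. The scores below are made up.

def ranks(values):
    """1-based average ranks, assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical RM scores for 8 patients: test vs. one-week retest.
test1 = [4, 10, 7, 15, 9, 20, 3, 12]
test2 = [5, 11, 6, 14, 10, 19, 4, 9]
print(f"rho = {spearman(test1, test2):.3f}")
```

In practice one would use a library routine (e.g. scipy.stats.spearmanr) and pair this with an ICC for agreement, since rank correlation alone ignores systematic shifts between administrations.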
Abstract:
The phonological loop is a component of the working memory system specifically involved in the processing and manipulation of limited amounts of information of a sound-based phonological nature. Phonological memory can be assessed by the Children's Test of Nonword Repetition (CNRep) in English speakers but not in Portuguese speakers, due to phonotactic differences between the two languages. The objectives of the present study were: 1) to develop the Brazilian Children's Test of Pseudoword Repetition (BCPR), a Portuguese version of the CNRep, and 2) to validate the BCPR by correlation with the Auditory Digit Span Test from the Stanford-Binet Intelligence Scale. The BCPR and Digit Span were assessed in 182 children aged 4-10 years, 84 from Minas Gerais State (42 from a rural region) and 98 from the city of São Paulo. There were subject age and word length effects, with repetition accuracy declining as a function of the number of syllables in the pseudowords. Correlations between the BCPR and Digit Span forward (r = 0.50; P <= 0.01) and backward (r = 0.43; P <= 0.01) were found, and partial correlation indicated that higher BCPR scores were associated with higher Digit Span scores. The BCPR appeared to depend more on schooling, while Digit Span was more related to development. The results demonstrate that the BCPR is a reliable measure of phonological working memory, similar to the CNRep.
Abstract:
A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative to quantify the extent of brain infarct after occlusion of the medial cerebral artery in rats. In ether-anesthetized rats, the left medial cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0°C until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size measured by the weighing method (mg) and reported as a percent (%) of the affected (left) hemisphere correlated closely with volume (mm³, also reported as %) estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmography (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive for detecting the effect of different ischemia durations on infarct size (P < 0.005; N = 23) and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.
Abstract:
The objective of the present study was to translate the Kidney Disease Quality of Life - Short Form (KDQOL-SF™ 1.3) questionnaire into Portuguese, to adapt it culturally and to validate it for the Brazilian population. The KDQOL-SF was translated into Portuguese and back-translated twice into English. Patient difficulties in understanding the questionnaire were evaluated by a panel of experts and resolved. Measurement properties such as reliability and validity were determined by applying the questionnaire to 94 end-stage renal disease patients on chronic dialysis. The Nottingham Health Profile Questionnaire, the Karnofsky Performance Scale and the Kidney Disease Questionnaire were administered to test validity. Some activities included in the original instrument were considered incompatible with the activities usually performed by the Brazilian population and were replaced. The mean scores for the 19 components of the KDQOL-SF questionnaire in Portuguese ranged from 22 to 91. The components "Social support" and "Dialysis staff encouragement" had the highest scores (86.7 and 90.8, respectively). The test-retest reliability and the inter-observer reliability of the instrument were evaluated by the intraclass correlation coefficient. The coefficients for both reliability tests were statistically significant for all scales of the KDQOL-SF (P < 0.001), ranging from 0.492 to 0.936 for test-retest reliability and from 0.337 to 0.994 for inter-observer reliability. The Cronbach's alpha coefficient was higher than 0.80 for most of the components. The Portuguese version of the KDQOL-SF questionnaire proved to be valid and reliable for the evaluation of quality of life of Brazilian patients with end-stage renal disease on chronic dialysis.
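The internal-consistency check above uses Cronbach's alpha, defined as α = k/(k−1)·(1 − Σ item variances / variance of totals) for k items. A minimal sketch, with invented item scores rather than data from the study:

```python
# Illustrative sketch (not from the study): Cronbach's alpha for the
# internal consistency of a multi-item scale. Scores below are made up.

def variance(xs):
    """Sample variance with n - 1 degrees of freedom."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(items):
    """items: one score list per item, each with one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical scale: 3 items answered by 5 respondents.
items = [
    [3, 4, 3, 5, 2],
    [3, 5, 3, 4, 2],
    [2, 4, 3, 5, 3],
]
print(f"alpha = {cronbach_alpha(items):.3f}")
```

Values above roughly 0.80, as reported for most KDQOL-SF components, are conventionally taken to indicate good internal consistency.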