922 results for methods and measurement


Relevance:

90.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies

Relevance:

90.00%

Publisher:

Abstract:

ABSTRACT OBJECTIVE To assess the internal consistency of the measurements of the Self-Reporting Questionnaire (SRQ-20) in different occupational groups. METHODS A validation study was conducted with data from four surveys of groups of workers that used similar methods; a total of 9,959 workers were studied. In all surveys, common mental disorders were assessed with the SRQ-20. Internal consistency was evaluated for the items belonging to the dimensions extracted by tetrachoric factor analysis in each study. Item homogeneity was assessed by comparing estimates of Cronbach’s alpha (KR-20), alpha applied to a tetrachoric correlation matrix, and stratified Cronbach’s alpha. RESULTS The SRQ-20 dimensions showed adequate values relative to the reference parameters. The internal consistency of the instrument’s items, assessed by stratified Cronbach’s alpha, was high (> 0.80) in all four studies. CONCLUSIONS The SRQ-20 showed good internal consistency in the occupational categories evaluated. However, studies using alternative methods and additional information are still needed to refine the accuracy of instruments that measure latent variables, as in the case of common mental disorders.
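
The internal-consistency statistic at the core of this abstract is straightforward to compute. Below is a minimal Python sketch (not from the study; the simulated data are purely illustrative) of Cronbach's alpha, which for binary 0/1 items such as the SRQ-20's coincides with KR-20.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.
    For binary 0/1 items, as in the SRQ-20, this equals KR-20."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated binary responses driven by one shared latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
responses = (latent + rng.normal(size=(500, 20)) > 0).astype(int)
print(f"alpha = {cronbach_alpha(responses):.3f}")  # high, since items share a trait
```

Stratified alpha, used in the study, additionally combines per-dimension alphas weighted by each dimension's variance contribution; the sketch above covers only the plain coefficient.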

Relevance:

90.00%

Publisher:

Abstract:

ABSTRACT This study aimed to describe digital disease detection and participatory surveillance in different countries. Systems and platforms consolidated in the scientific field were analyzed by describing their strategy, type of data source, main objectives, and manner of interaction with users. Eleven systems or platforms, developed from 1996 to 2016, were analyzed. There was a higher frequency of data mining on the web and active crowdsourcing, as well as a trend toward the use of mobile applications. It is important to provoke debate in academia and the health services in order to advance methods and insights into participatory surveillance in the digital age.

Relevance:

90.00%

Publisher:

Abstract:

Journal of Human Evolution, vol. 55, pp. 148–163

Relevance:

90.00%

Publisher:

Abstract:

Since the first in vivo studies of cerebral function with radionuclides by Ingvar and Lassen, nuclear medicine (NM) brain applications have evolved dramatically, with marked improvements in both methods and tracers. Consequently, it is now possible to assess not only cerebral blood flow and energy metabolism but also neurotransmission. Planar functional imaging was soon superseded by single-photon emission computed tomography (SPECT) and positron emission tomography (PET); planar imaging now has limited application in brain imaging, being reserved for the assessment of brain death.

Relevance:

90.00%

Publisher:

Abstract:

One of the main difficulties in detecting the Hepatitis Delta Virus (HDV) antigen and antibody has been the source of the HD antigen, since HDV-containing human and animal livers are very difficult to obtain and the yield is low. This prompted us to try the serum of patients in the acute phase of HDV infection as a source of HDAg, and to turn to enzyme immunoassays (EIA) instead of RIA for the sake of ease and economy in the amount of HDAg needed. The antigen for EIA was obtained from patients during the acute phase of HDV infection, and the antibody from patients who had been carriers for many years. For the detection of the antigen a sandwich-type method was employed, whereas for the antibody a competition assay was developed. To assess the relative specificity and sensitivity of the test, the antibody assay was compared with a commercial RIA (C. RIA, Abbott) and with a non-commercial RIA (NC RIA). Forty-two sera were tested by the two methods, and discrepant results were obtained in only two cases. It is concluded that: 1) sera from patients in the acute and chronic phases of HDV infection can be used as sources of antigen and antibody, respectively, for immunoassays; 2) EIA and RIA have comparable relative specificity and sensitivity; and 3) EIA is easier to perform, cheaper, non-hazardous, has a longer shelf-life and saves scarce HDAg.
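
A natural way to quantify the reported agreement between EIA and the RIAs is percent agreement together with a chance-corrected index. The sketch below is not from the paper; the 2x2 cell counts are hypothetical, chosen only to be consistent with 42 sera and 2 discrepant results.

```python
def agreement_stats(a, b, c, d):
    """Percent agreement and Cohen's kappa for a 2x2 cross-classification:
    a = both tests positive, d = both negative, b and c = discrepant cells."""
    n = a + b + c + d
    p_obs = (a + d) / n                                      # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# Hypothetical split of the 42 sera with 2 discrepancies
p_obs, kappa = agreement_stats(a=18, b=1, c=1, d=22)
print(f"agreement = {p_obs:.3f}, kappa = {kappa:.3f}")  # ~0.952, ~0.90
```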

Relevance:

90.00%

Publisher:

Abstract:

Purpose: This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose for pelvis imaging, using automatic exposure control (AEC) and non-AEC modes in a computed radiography (CR) system. Methods and Materials: The effects of using AEC and non-AEC were determined by applying the 10 kVp rule in two experiments with an anthropomorphic pelvis phantom. Images were acquired in 10 kVp increments (60-120 kVp) in both experiments. The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs from each kVp increment was used as a baseline for the second experiment, which produced 35 images. A total of 84 images were produced, and a panel of five experienced observers scored the images using two-alternative forced choice (2-AFC) visual grading software. PCXMC software was used to estimate the effective dose. Results: A decrease in perceptual image quality as kVp increases was observed in both the non-AEC and AEC experiments; however, no statistically significant differences (p > 0.05) were found. Image quality scores from all observers at 10 kVp increments, for all mAs values in non-AEC mode, were better up to 90 kVp. Effective dose results show a statistically significant decrease (p < 0.001) in the 75th percentile, from 0.3 mSv at 60 kVp to 0.1 mSv at 120 kVp, when applying the 10 kVp rule in non-AEC mode. Conclusion: No significant reduction in perceptual image quality is observed when increasing kVp, whilst a marked and significant effective dose reduction is observed.
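
The 10 kVp rule invoked here is the rule of thumb that raising tube potential by 10 kVp while halving the mAs keeps detector exposure roughly constant. A minimal sketch of that bookkeeping follows (illustrative only; the baseline technique values are not taken from the study).

```python
def mas_for_kvp(base_kvp, base_mas, target_kvp):
    """mAs under the 10 kVp rule: each +10 kVp halves the mAs needed for
    roughly the same detector exposure (and each -10 kVp doubles it)."""
    steps = (target_kvp - base_kvp) / 10.0
    return base_mas / (2.0 ** steps)

# Stepping a hypothetical 60 kVp / 40 mAs technique up to 120 kVp
for kvp in range(60, 130, 10):
    print(f"{kvp} kVp -> {mas_for_kvp(60, 40, kvp):.1f} mAs")
```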

Relevance:

90.00%

Publisher:

Abstract:

Final Master's project for obtaining the degree of Master in Mechanical Engineering, Maintenance and Production profile

Relevance:

90.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and they have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly take place in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, for instance through the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts from which to extract terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences (a minimal illustration of this family of measures follows this overview). The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) were used, so that the similarity measures could be verified against objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the fourth paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms extracting specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
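
As a concrete illustration of the family of feature-based similarity measures discussed by Kano (representative only; not necessarily one of the four measures compared in the paper), here is Tversky's ratio model over feature sets, with toy, hypothetical feature values.

```python
def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky's ratio model: feature-set similarity of two concepts.
    alpha and beta weight each concept's distinctive features; with
    alpha = beta = 0.5 this reduces to the Dice coefficient."""
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Toy feature sets for two education-domain concepts (hypothetical values)
concept_a = {"education", "primary", "compulsory", "publicly-funded"}
concept_b = {"education", "primary", "fee-based"}
print(f"similarity = {tversky(concept_a, concept_b):.3f}")
```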

Relevance:

90.00%

Publisher:

Abstract:

Computerized scheduling methods and computerized scheduling systems according to exemplary embodiments. A computerized scheduling method may be stored in a memory and executed on one or more processors. The method may include defining a main multi-machine scheduling problem as a plurality of single machine scheduling problems; independently solving the plurality of single machine scheduling problems, thereby calculating a plurality of near-optimal single machine scheduling problem solutions; integrating the plurality of near-optimal single machine scheduling problem solutions into a main multi-machine scheduling problem solution; and outputting the main multi-machine scheduling problem solution.
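
The claimed method is a decompose-solve-integrate pattern. The sketch below is only a minimal illustration of that pattern, not the patented algorithm: jobs are assumed to be pre-assigned to machines, each single-machine subproblem is solved with shortest-processing-time ordering (a stand-in for whatever near-optimal sub-solver the embodiments use), and the sub-schedules are integrated into one output.

```python
from typing import Dict, List, Tuple

Job = Tuple[str, int]  # (job id, processing time)

def solve_single_machine(jobs: List[Job]) -> List[Job]:
    """Near-optimal single-machine sub-solution: shortest processing time
    first, which minimizes mean flow time on a single machine."""
    return sorted(jobs, key=lambda job: job[1])

def solve_multi_machine(problem: Dict[str, List[Job]]) -> Dict[str, List[Job]]:
    """Define the multi-machine problem as independent single-machine
    problems, solve each one, and integrate the sub-solutions."""
    return {m: solve_single_machine(jobs) for m, jobs in problem.items()}

# Jobs pre-assigned to two machines (the assumed decomposition step)
problem = {"M1": [("J1", 4), ("J2", 2), ("J3", 7)],
           "M2": [("J4", 5), ("J5", 1)]}
print(solve_multi_machine(problem))
```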

Relevance:

90.00%

Publisher:

Abstract:

The reaction of the Schiff base (3,5-di-tert-butyl-2-hydroxybenzylidene)-2-hydroxybenzohydrazide (H₃L) with copper(II) nitrate, acetate or metaborate has led to the isomeric complexes [Cu₃(L)₂(MeOH)₄] (1), [Cu₃(L)₂(MeOH)₂]·2MeOH (2) and [Cu₃(L)₂(MeOH)₄] (3), respectively, in which the ligand L exhibits dianionic (HL²⁻, in 1) or trianionic (L³⁻, in 2 and 3) pentadentate 1O,O,N:2N,O chelation modes. Complexes 1-3 were characterized by elemental analysis, IR spectroscopy, single-crystal X-ray crystallography, electrochemical methods and variable-temperature magnetic susceptibility measurements, which indicated that the intratrimer antiferromagnetic coupling is strong in all three complexes and that there exist very weak ferromagnetic intermolecular interactions in 1 but weak antiferromagnetic intermolecular interactions in both 2 and 3. Electrochemical experiments showed that in complexes 1-3 the Cu(II) ions can be reduced, in distinct steps, to Cu(I) and Cu(0). All the complexes act as efficient catalyst precursors under mild conditions for the peroxidative oxidation of cyclohexane to cyclohexyl hydroperoxide, cyclohexanol and cyclohexanone, leading to overall yields (based on the alkane) of up to 31% (TON = 1.55×10³) after 6 h in the presence of pyrazinecarboxylic acid.

Relevance:

90.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to the two special cases of BPC in which the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
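
The "Branch and Prune" idea can be illustrated on a deliberately simplified interference model (this is not the authors' algorithm or NoC model; the feasibility constraint and all parameters below are hypothetical): branch on whether each interfering flow blocks the packet, and prune any branch whose optimistic remaining delay cannot beat the best bound found so far.

```python
from typing import List

def wctt_upper_bound(base_latency: int, blocking: List[int],
                     max_concurrent: int) -> int:
    """Toy branch-and-prune bound on a packet's worst-case traversal time.
    blocking[i] is the worst-case delay interferer i can add; at most
    max_concurrent interferers can block the packet (a stand-in for the
    task-level feasibility constraints of the extended model)."""
    blocking = sorted(blocking, reverse=True)
    best = 0

    def branch(i: int, used: int, delay: int, remaining: int) -> None:
        nonlocal best
        if delay + remaining <= best:          # prune: cannot improve the bound
            return
        if i == len(blocking) or used == max_concurrent:
            best = max(best, delay)
            return
        rest = remaining - blocking[i]
        branch(i + 1, used + 1, delay + blocking[i], rest)  # flow blocks
        branch(i + 1, used, delay, rest)                    # flow does not

    branch(0, 0, 0, sum(blocking))
    return base_latency + best

print(wctt_upper_bound(base_latency=12, blocking=[5, 3, 3, 2],
                       max_concurrent=2))  # 12 + 5 + 3 = 20
```

In this toy the maximum is simply the largest feasible set of blockers, but the skeleton shows where richer feasibility checks (the prune step) and the collapsing of sub-branches (BPC's trade-off parameter) would plug in.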

Relevance:

90.00%

Publisher:

Abstract:

Work presented within the scope of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering

Relevance:

90.00%

Publisher:

Abstract:

Scientific dissertation prepared at the Laboratório Nacional de Engenharia Civil (LNEC) for obtaining the Master's degree in Civil Engineering, specialization in Hydraulics, under the cooperation protocol between ISEL and LNEC

Relevance:

90.00%

Publisher:

Abstract:

Dissertation for obtaining the Master's degree in Civil Engineering, specialization in Buildings