922 results for Computer arithmetic and logic units.
Abstract:
The development of electrophoretic computer models and their use for the simulation of electrophoretic processes have increased significantly during the last few years. Recently, GENTRANS and SIMUL5 were extended with algorithms that describe chemical equilibria between solutes and a buffer additive in a fast 1:1 interaction process, an approach that enables simulation of the electrophoretic separation of enantiomers. For acidic cationic systems with sodium and H3O+ as leading and terminating components, respectively, acetic acid as counter component, charged weak bases as samples, and a neutral cyclodextrin (CD) as chiral selector, the new codes were used to investigate the dynamics of isotachophoretic adjustment of enantiomers, enantiomer separation, boundaries between enantiomers and between an enantiomer and a buffer constituent of like charge, and zone stability. The impact of leader pH, selector concentration, free mobility of the weak base, mobilities of the formed complexes, and complexation constants could thereby be elucidated. For selected examples with methadone enantiomers as analytes and (2-hydroxypropyl)-β-CD as selector, simulated zone patterns were found to compare well with those monitored experimentally in capillary setups with two conductivity detectors or with an absorbance and a conductivity detector. Simulation represents an elegant way to gain insight into the formation of isotachophoretic boundaries and into zone stability in the presence of complexation equilibria, in a hitherto inaccessible way.
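The separation effect described above rests on the standard fast 1:1 complexation mobility model, in which an analyte's effective mobility is a weighted average of its free mobility and the mobility of its selector complex. The following is a minimal sketch of that model; all numerical values are invented placeholders, not parameters from the study.

```python
# Minimal sketch of the fast 1:1 complexation mobility model that
# underlies chiral CE/ITP simulations of this kind. All numerical
# values below are invented placeholders.

def effective_mobility(mu_free, mu_complex, K, selector_conc):
    """Effective mobility of an analyte undergoing fast 1:1
    complexation with a neutral selector at concentration
    selector_conc (M), with binding constant K (1/M)."""
    return (mu_free + mu_complex * K * selector_conc) / (1 + K * selector_conc)

# Two enantiomers: same free mobility, different binding constants.
mu_free = 20e-9            # m^2/(V*s), hypothetical weak base
mu_complex = 5e-9          # m^2/(V*s), hypothetical analyte-CD complex
K_R, K_S = 800.0, 1200.0   # 1/M, hypothetical constants

for c in (0.0, 0.005, 0.01, 0.02):   # selector concentration, M
    mu_R = effective_mobility(mu_free, mu_complex, K_R, c)
    mu_S = effective_mobility(mu_free, mu_complex, K_S, c)
    print(f"[C]={c:.3f} M  mu_R={mu_R:.3e}  mu_S={mu_S:.3e}  delta={mu_R - mu_S:.2e}")
```

Because the two enantiomers bind the selector with different constants, their effective mobilities diverge as the selector concentration increases, which is what makes the separation possible.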
Abstract:
By forcing, we give a direct interpretation of [system] into Avigad's [system]. To the best of the author's knowledge, this is one of the simplest applications of forcing to “real problems”.
Abstract:
The results of shore-based three-axis resistivity and X-ray computed tomography (CT) measurements on cube-shaped samples recovered during Leg 185 are presented along with moisture and density, P-wave velocity, resistivity, and X-ray CT measurements on whole-round samples of representative lithologies from Site 1149. These measurements augment the standard suite of physical properties obtained during Leg 185 from the cube samples and samples obtained adjacent to the cut cubes. Both shipboard and shore-based measurements of physical properties provide information that assists in characterizing lithologic units, correlating cored material with downhole logging data, understanding the nature of consolidation, and interpreting seismic reflection profiles.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is by now a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its best-known applications are the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are other types of linguistic tools that are perhaps less well known, but on which most other applications of Computational Linguistics are built. These include POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
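As an illustration of such a module, the sketch below tags a sentence with NLTK's off-the-shelf tokenizer and POS tagger. This is a generic example, not one of the tools discussed in this work, and the NLTK resource names may vary across library versions.

```python
# Minimal sketch of a POS-tagging module using NLTK; a generic
# illustration, not one of the annotation tools discussed here.
import nltk

# One-time downloads of the tokenizer and tagger models
# (resource names may differ in newer NLTK releases).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_tag_text(text):
    """Tokenize a sentence and attach a Penn Treebank POS tag
    to each token."""
    return nltk.pos_tag(nltk.word_tokenize(text))

print(pos_tag_text("Linguistic annotation tools are important assets."))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ...]
```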
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). In the latter case, however, all these tools would have to produce annotations for a common level, which would then be combined in order to correct their corresponding errors and inaccuracies (see the sketch below). Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
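The combination step mentioned above can be as simple as majority voting over aligned annotations produced by several tools for the same level. A minimal sketch follows, with invented tag sequences standing in for the outputs of three hypothetical POS taggers:

```python
# Minimal sketch of combining same-level annotations from several
# taggers by majority voting, the error-correction idea described
# above. The three tag sequences are invented examples.
from collections import Counter

def combine_by_voting(*tag_sequences):
    """Given aligned tag sequences from different taggers, return
    the majority tag for each token (ties go to the first tagger,
    a simple stand-in for a smarter tie-breaker)."""
    combined = []
    for tags in zip(*tag_sequences):
        winner, _ = Counter(tags).most_common(1)[0]
        combined.append(winner)
    return combined

# Hypothetical outputs of three POS taggers on the same five tokens.
tagger_a = ["DT", "NN", "VBZ", "JJ", "NNS"]
tagger_b = ["DT", "NN", "VBZ", "JJ", "NN"]   # errs on the last token
tagger_c = ["DT", "JJ", "VBZ", "JJ", "NNS"]  # errs on the second token
print(combine_by_voting(tagger_a, tagger_b, tagger_c))
# ['DT', 'NN', 'VBZ', 'JJ', 'NNS'] (individual errors voted out)
```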
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
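To make the idea of ontology-based, Semantic Web-ready annotations concrete, the sketch below expresses a single linguistic annotation as RDF triples with rdflib. The namespace and property names are invented for illustration; they are not OntoTag's actual vocabulary.

```python
# Minimal sketch of an ontology-based linguistic annotation expressed
# as RDF triples with rdflib. The namespace and property names are
# invented placeholders, not OntoTag's actual vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("ex", EX)

# Annotate one token of a web page with a POS category drawn from
# a (hypothetical) linguistic ontology.
token = EX.token_17
g.add((token, RDF.type, EX.Token))
g.add((token, EX.surfaceForm, Literal("taggers")))
g.add((token, EX.hasPOSCategory, EX.CommonNoun))

print(g.serialize(format="turtle"))
```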
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Paper submitted to the IFIP International Conference on Very Large Scale Integration (VLSI-SOC), Darmstadt, Germany, 2003.
Abstract:
Objectives: To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Study Design and Setting: Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, the receiver operating characteristic curve, and the cutoff point. Test-retest repeatability was assessed using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). Results: The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and for the CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). Conclusion: The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to monitor the visual health of computer workers, and it can potentially be used in clinical trials and outcome research.
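For readers who want to reproduce the criterion-validity and agreement statistics named above on their own data, a minimal sketch using scikit-learn follows; the classifications are invented examples, not the study's data.

```python
# Minimal sketch of the validity and agreement statistics named above
# (sensitivity, specificity, Cohen's kappa), computed with scikit-learn
# on invented data; these are not the study's figures.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical binary CVS classifications: clinical reference vs. CVS-Q.
reference = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
cvs_q     = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(reference, cvs_q).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")

# Test-retest agreement of the binary classification.
test   = [1, 0, 1, 1, 0, 0, 1, 0]
retest = [1, 0, 1, 0, 0, 0, 1, 1]
print(f"kappa={cohen_kappa_score(test, retest):.2f}")
```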
Abstract:
The evidence suggests that emotional intelligence and personality traits are important qualities that workers need in order to exercise a profession successfully. This article assumes that the main purpose of universities is to promote employment by providing an education that facilitates the acquisition of abilities, skills, competencies and values. In this study, the emotional intelligence and personality profiles of two groups of Spanish students studying degrees in two different academic disciplines, computer engineering and teacher training, were analysed and compared. In addition, the skills forming part of the emotional intelligence and personality traits required by professionals (computer engineers and teachers) in their work were studied, and the profiles obtained for the students were compared with those identified by the professionals in each field. Results revealed significant differences between the profiles of the two groups of students, with the teacher training students scoring higher on interpersonal skills; differences were also found between professionals and students for most competencies, with professionals in both fields demanding more competencies than those evidenced by graduates. The implications of these results for the incorporation of generic social, emotional and personal competencies into the university curriculum are discussed.
Abstract:
Mutual recognition is one of the most appreciated innovations of the EU. The idea is that one can pursue market integration, indeed 'deep' market integration, while respecting 'diversity' amongst the participating countries. Put differently, in pursuing 'free movement' for goods, mutual recognition facilitates free movement by disciplining the nature and scope of 'regulatory barriers', whilst allowing some degree of regulatory discretion for EU Member States. This BEER paper attempts to explain the rationale and logic of mutual recognition in the EU internal goods market and its working in actual practice for about three decades now, culminating in a qualitative cost/benefit analysis and in its recent improvement in terms of 'governance' under the so-called New Legislative Framework (first denoted as the 2008 Goods package), which ameliorated the benefit/cost ratio. For new (in contrast to existing) national regulation, the intrusive EU procedure to impose mutual recognition is presented as well, with basic data showing its critical importance in keeping the internal goods market free. All this is complemented by a short summary of the scant economic literature on mutual recognition. Subsequently, the analysis is extended to the internal market for services. This is done in two steps: first by recalling the debate on the origin principle (which goes further than EU-style mutual recognition) and explaining how mutual recognition works under the horizontal services directive; this is followed by a short section on how mutual recognition works in vertical (i.e. sectoral) services markets.
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
"C00-2118-0048."
Abstract:
Mode of access: Internet.
Abstract:
Issued by the division's Lexicographic and Terminology Section under the former name of the division: Air Information Division.
Abstract:
Includes bibliographical references.