909 results for Computer arithmetic and logic units.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations like Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain more insight from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
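For concreteness, the following is a minimal sketch, in PyTorch, of the kind of small supervised CNN image classifier referred to above; the framework, architecture and layer sizes are illustrative assumptions, not those used in the dissertation.

```python
# Minimal illustrative CNN classifier (assumed architecture, not the thesis's model).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 RGB inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```

A real object-recognition CNN would be deeper and trained with a cross-entropy loss on a large labelled dataset, which is precisely the supervised, data-hungry setup the thesis contrasts with the mainly unsupervised HTM approach.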
Abstract:
BACKGROUND: Painful invasive procedures are frequently performed on preterm infants admitted to a neonatal intensive care unit (NICU). The aim of the present study was to investigate current pain management in Austrian, German and Swiss NICUs and to identify factors associated with improved pain management in preterm infants. METHODS: A questionnaire was sent to all Austrian, German and Swiss pediatric hospitals with an NICU (n = 370). Pain assessment and documentation, the use of analgesics for 13 painful procedures, the presence of written guidelines for pain management and the use of 12 analgesics and sedatives were examined. RESULTS: A total of 225 units responded (61%). Pain assessment and documentation and frequent analgesic therapy for painful procedures were performed more often in units using written guidelines for pain management and in those treating >50 preterm infants at <32 weeks of gestation per year. This was also the case for the use of opioid analgesics and sucrose solution. Non-opioid analgesics were used more often in smaller units and in units with written guidelines. There was broad variation in the dosage of analgesics and sedatives within all groups. CONCLUSION: Pain assessment, documentation of pain and analgesic therapy are more frequently performed in NICUs with written guidelines for pain management and in larger units treating more than 50 preterm infants at <32 weeks of gestation per year.
Abstract:
Among proficient daily computer users, some are flexible at accomplishing unfamiliar tasks on their own while others have difficulty. Software designers and evaluators involved with Human Computer Interaction (HCI) should account for any group of proficient daily users who are shown to stumble over unfamiliar tasks. We define "Just Enough" (JE) users as proficient daily computer users with a predominantly extrinsic motivation style who know just enough to get what they want or need from the computer. We hypothesize that JE users have difficulty with unfamiliar computer tasks and skill transfer, whereas intrinsically motivated daily users accomplish unfamiliar tasks readily. Intrinsic motivation can be characterized by interest, enjoyment, and choice, whereas extrinsic motivation is externally regulated. In our study we identified users by motivation style and then conducted ethnographic observations. Our results confirm that JE users had difficulty accomplishing unfamiliar tasks on their own but had fewer problems with near skill transfer. In contrast, intrinsically motivated users had no trouble with either unfamiliar tasks or near skill transfer. This supports our assertion that JE users know enough to get routine tasks done and can transfer that knowledge, but become unproductive when faced with unfamiliar tasks. This study combines quantitative and qualitative methods. We identified 66 daily users by motivation style using an inventory adapted from Deci and Ryan (Ryan and Deci 2000) and from Guay, Vallerand, and Blanchard (Guay et al. 2000). We used qualitative ethnographic methods with a think-aloud protocol to observe nine extrinsic users and seven intrinsic users. Observation sessions had three customized phases in which the researcher directed the participant in order to: 1) confirm the participant's proficiency; 2) test the participant accomplishing unfamiliar tasks; and 3) test transfer of existing skills to unfamiliar software.
Abstract:
The development of electrophoretic computer models and their use for the simulation of electrophoretic processes has increased significantly during the last few years. Recently, GENTRANS and SIMUL5 were extended with algorithms that describe chemical equilibria between solutes and a buffer additive in a fast 1:1 interaction process, an approach that enables simulation of the electrophoretic separation of enantiomers. For acidic cationic systems with sodium and H3O(+) as leading and terminating components, respectively, acetic acid as counter component, charged weak bases as samples, and a neutral CD as chiral selector, the new codes were used to investigate the dynamics of isotachophoretic adjustment of enantiomers, enantiomer separation, boundaries between enantiomers and between an enantiomer and a buffer constituent of like charge, and zone stability. The impact of leader pH, selector concentration, free mobility of the weak base, mobilities of the formed complexes and complexation constants could thereby be elucidated. For selected examples with methadone enantiomers as analytes and (2-hydroxypropyl)-β-CD as selector, simulated zone patterns were found to compare well with those monitored experimentally in capillary setups with two conductivity detectors or an absorbance and a conductivity detector. Simulation represents an elegant way to provide insight into the formation of isotachophoretic boundaries and zone stability in the presence of complexation equilibria in a hitherto inaccessible way.
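As a rough illustration of the fast 1:1 complexation model underlying such simulations, the sketch below evaluates the standard effective-mobility expression mu_eff = (mu_free + mu_cplx*K*[C]) / (1 + K*[C]) for two enantiomers with different complexation constants; the function name and all numerical values are hypothetical and are not taken from GENTRANS or SIMUL5.

```python
# Hedged sketch (not code from GENTRANS or SIMUL5): effective mobility of a weak-base
# enantiomer under fast 1:1 complexation with a neutral chiral selector.
def effective_mobility(mu_free, mu_complex, K, selector_conc):
    """Mobilities in 1e-9 m^2/(V*s); K in L/mol; selector_conc in mol/L (all illustrative units)."""
    return (mu_free + mu_complex * K * selector_conc) / (1.0 + K * selector_conc)

# Hypothetical parameters for the two enantiomers of a basic drug:
for label, K in [("R", 180.0), ("S", 250.0)]:
    mu = effective_mobility(mu_free=20.0, mu_complex=5.0, K=K, selector_conc=0.010)
    print(label, round(mu, 2))  # the two enantiomers now migrate at different effective mobilities
```

Because the two enantiomers bind the selector with different constants K, their effective mobilities diverge as the selector concentration increases, which is the basis of the enantiomer separations simulated and measured in the study above.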
Abstract:
By forcing, we give a direct interpretation of inline image into Avigad's inline image. To the best of the author's knowledge, this is one of the simplest applications of forcing to “real problems”.
Abstract:
The results of shore-based three-axis resistivity and X-ray computed tomography (CT) measurements on cube-shaped samples recovered during Leg 185 are presented along with moisture and density, P-wave velocity, resistivity, and X-ray CT measurements on whole-round samples of representative lithologies from Site 1149. These measurements augment the standard suite of physical properties obtained during Leg 185 from the cube samples and samples obtained adjacent to the cut cubes. Both shipboard and shore-based measurements of physical properties provide information that assists in characterizing lithologic units, correlating cored material with downhole logging data, understanding the nature of consolidation, and interpreting seismic reflection profiles.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
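As a simple illustration of what such a POS-tagging module produces, the snippet below tags a short sentence with NLTK; the choice of NLTK is an assumption made purely for the sake of example, and the resource names can differ across NLTK versions.

```python
# Illustrative only: a basic POS-tagging step using NLTK (assumed toolkit, not necessarily
# one of the tools discussed in this work).
import nltk

# Classic resource names; they may vary in newer NLTK releases.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("Linguistic annotation tools are important assets.")
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ('are', 'VBP'), ...]
```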
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
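As a rough sketch of how annotations produced for a common level could be combined to correct individual errors (limitation 2), the hypothetical example below merges the outputs of three POS taggers by majority vote; the tag set, tagger outputs and function name are illustrative and are not part of any tool discussed here.

```python
# Hedged illustration (not part of the model itself): combining the POS annotations of
# several taggers over the same token sequence by majority vote to reduce error rates.
from collections import Counter

def combine_annotations(*tagger_outputs):
    """Each argument is a list of (token, tag) pairs over the same token sequence."""
    combined = []
    for annotations in zip(*tagger_outputs):
        token = annotations[0][0]
        tags = [tag for _, tag in annotations]
        combined.append((token, Counter(tags).most_common(1)[0][0]))  # majority tag
    return combined

tagger_a = [("They", "PRON"), ("process", "VERB"), ("texts", "NOUN")]
tagger_b = [("They", "PRON"), ("process", "NOUN"), ("texts", "NOUN")]
tagger_c = [("They", "PRON"), ("process", "VERB"), ("texts", "NOUN")]
print(combine_annotations(tagger_a, tagger_b, tagger_c))
# [('They', 'PRON'), ('process', 'VERB'), ('texts', 'NOUN')]
```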
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own annotations. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
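By way of illustration, the sketch below expresses a single morphosyntactic annotation as ontology-based (RDF) triples using rdflib; the namespaces, class and property names are hypothetical placeholders, not the actual ontological vocabulary of the hybrid model described here.

```python
# Minimal sketch (hypothetical vocabulary): one token's annotation as RDF triples,
# so that the annotation itself becomes usable in a Semantic Web setting.
from rdflib import Graph, Literal, Namespace, RDF

LING = Namespace("http://example.org/linguistic-annotation#")  # hypothetical ontology
DOC = Namespace("http://example.org/document/")                # hypothetical document URI space

g = Graph()
token = DOC["token-1"]
g.add((token, RDF.type, LING.Token))
g.add((token, LING.hasForm, Literal("annotations")))
g.add((token, LING.hasPOS, LING.CommonNoun))  # the tag is itself an ontological term

print(g.serialize(format="turtle"))
```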
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Paper submitted to the IFIP International Conference on Very Large Scale Integration (VLSI-SOC), Darmstadt, Germany, 2003.
Abstract:
Objectives: To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Study Design and Setting: Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, the receiver operating characteristic curve, and the cutoff point. Test-retest repeatability was assessed using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). Results: The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and for CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). Conclusion: The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to monitor the visual health of computer workers, and can potentially be used in clinical trials and outcome research.
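As an illustration of the kind of agreement and criterion-validity statistics reported above, the sketch below computes Cohen's kappa, sensitivity and specificity with scikit-learn on made-up data; all figures are hypothetical and unrelated to the CVS-Q study, and the ICC would require a separate mixed-model computation not shown here.

```python
# Hedged sketch with invented data (not the CVS-Q study data).
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical test and retest CVS classifications (1 = symptomatic, 0 = not)
test =   [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
retest = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print("kappa:", round(cohen_kappa_score(test, retest), 3))

# Hypothetical questionnaire classification vs. a clinical reference standard
questionnaire = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
reference     = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(reference, questionnaire).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```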
Abstract:
The evidence suggests that emotional intelligence and personality traits are important qualities that workers need in order to successfully exercise a profession. This article assumes that the main purpose of universities is to promote employment by providing an education that facilitates the acquisition of abilities, skills, competencies and values. In this study, the emotional intelligence and personality profiles of two groups of Spanish students studying degrees in two different academic disciplines – computer engineering and teacher training – were analysed and compared. In addition, the skills forming part of the emotional intelligence and personality traits required by professionals (computer engineers and teachers) in their work were studied, and the profiles obtained for the students were compared with those identified by the professionals in each field. Results revealed significant differences between the profiles of the two groups of students, with the teacher training students scoring higher on interpersonal skills; differences were also found between professionals and students for most competencies, with professionals in both fields demanding more competencies than those evidenced by graduates. The implications of these results for the incorporation of generic social, emotional and personal competencies into the university curriculum are discussed.
Abstract:
Mutual recognition is one of the most appreciated innovations of the EU. The idea is that one can pursue market integration, indeed 'deep' market integration, while respecting 'diversity' amongst the participating countries. Put differently, in pursuing 'free movement' for goods, mutual recognition facilitates free movement by disciplining the nature and scope of 'regulatory barriers', whilst allowing some degree of regulatory discretion for EU Member States. This BEER paper attempts to explain the rationale and logic of mutual recognition in the EU internal goods market and its working in actual practice for about three decades now, culminating in a qualitative cost/benefit analysis and its recent improvement in terms of 'governance' in the so-called New Legislative Framework (first denoted as the 2008 Goods package), thereby improving the benefit/cost ratio. For new (in contrast to existing) national regulation, the intrusive EU procedure to impose mutual recognition is presented as well, with basic data to show its critical importance in keeping the internal goods market free. All this is complemented by a short summary of the scant economic literature on mutual recognition. Subsequently, the analysis is extended to the internal market for services. This is done in two steps: first by recalling the debate on the origin principle (which goes further than mutual recognition EU-style) and explaining how mutual recognition works under the horizontal services directive. This is followed by a short section on how mutual recognition works in vertical (i.e. sectoral) services markets.
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.