946 results for Syntactic And Semantic Comprehension Tasks


Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes the characteristics of job offers in the Information-Documentation sector. The sample was obtained from online advertisements posted by Brazilian private companies and institutions during 2010. The work was carried out within a larger research project whose general objective is to build a taxonomy that classifies and ranks the competencies and knowledge required by companies, as well as the characteristics of the positions offered. Information was extracted from 200 job offers collected from the portals Catho and Portal do bibliotecário. Each entry was kept in its context of occurrence and in its original wording, and the extracted terms then underwent a normalization process. The analysis seeks to classify and rate the most requested activities, knowledge and tasks, as well as the aptitudes, competencies and skills demanded, in addition to the occurrences of information units and information professionals.

Relevance:

100.00%

Publisher:

Abstract:

Abstract Background The multi-relational approach has emerged as an alternative for analyzing structured data such as relational databases, since it allows data mining to be applied to multiple tables directly, avoiding expensive join operations and semantic losses. This work proposes an algorithm that follows this approach. Methods To compare the performance of the traditional and multi-relational approaches for mining association rules, this paper presents an empirical study between PatriciaMine, a traditional algorithm, and its proposed multi-relational counterpart, MR-Radix. Results This work showed the performance advantages of the multi-relational approach over several tables, which avoids the high cost of joining multiple tables and the associated semantic losses. MR-Radix runs faster than PatriciaMine, despite handling complex multi-relational patterns. Memory usage follows a more conservative growth curve for MR-Radix than for PatriciaMine: an increase in the number of frequent items does not cause the significant growth in memory consumption observed with PatriciaMine. Conclusion The comparative study between PatriciaMine and MR-Radix confirmed the efficacy of the multi-relational approach to data mining, both in execution time and in memory usage. In addition, the proposed multi-relational algorithm, unlike other algorithms of this approach, is efficient for use in large relational databases.
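The semantic loss from joining tables, which the abstract cites as the motivation for the multi-relational approach, can be sketched in a few lines of Python. The toy tables and names below are hypothetical illustrations, not MR-Radix's data structures: counting pattern support on a materialized join counts one row per matching order, while a multi-relational count stays at the level of the primary entity.

```python
# Toy illustration of join-induced support inflation
# (hypothetical tables; not the MR-Radix algorithm itself).
customers = {1: "BR", 2: "BR", 3: "AR"}           # customer_id -> country
orders = [(1, "book"), (1, "book"), (2, "book"), (3, "pen")]

def joined_support(country, item):
    """Support counted on the joined table: a customer with many
    matching orders is counted once per row (semantic loss)."""
    rows = [(customers[cid], it) for cid, it in orders]
    return sum(1 for row in rows if row == (country, item))

def mr_support(country, item):
    """Multi-relational support: count distinct primary entities,
    so each customer contributes at most once."""
    return len({cid for cid, it in orders
                if it == item and customers[cid] == country})

print(joined_support("BR", "book"))  # 3 rows in the join
print(mr_support("BR", "book"))      # but only 2 distinct customers
```

Customer 1 ordered two books, so the joined table counts the pattern ("BR", "book") three times even though only two Brazilian customers bought books.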

Relevance:

100.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:

• Portable mobile devices have modest sizes and weights, and therefore limited resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.

• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management approach with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard, so there is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, by developing novel mapping algorithms that deterministically find optimal solutions, and by implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, limiting the perceived quality degradation by means of pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, which allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis Overview

The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
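The NP-hard allocation problem and the "optimality gap of unknown size" introduced by heuristics can be illustrated with a classic textbook list-scheduling heuristic. This is a generic sketch, not one of the thesis's own algorithms: Longest-Processing-Time-first (LPT) assigns each task to the currently least-loaded processor and can end up measurably worse than the optimum.

```python
import heapq

def lpt_schedule(task_times, n_procs):
    """Longest-Processing-Time-first list scheduling, a classic
    heuristic for the NP-hard task-allocation problem. Returns
    (makespan, assignment) where assignment[i] is the processor
    chosen for task i."""
    # Min-heap of (current load, processor id)
    procs = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(procs)
    assignment = [None] * len(task_times)
    # Place the longest tasks first, each on the least-loaded processor
    for i in sorted(range(len(task_times)), key=lambda i: -task_times[i]):
        load, p = heapq.heappop(procs)
        assignment[i] = p
        heapq.heappush(procs, (load + task_times[i], p))
    makespan = max(load for load, _ in procs)
    return makespan, assignment

# For tasks [3, 3, 2, 2, 2] on 2 processors, LPT yields makespan 7,
# while the optimal schedule ({3, 3} vs {2, 2, 2}) achieves 6:
# the heuristic alone gives no hint of this gap.
makespan, assignment = lpt_schedule([3, 3, 2, 2, 2], 2)
print(makespan)  # 7
```

Complete methods such as the decomposition-based search described above close exactly this gap, at the cost of a much harder search.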

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Humans process numbers in a way similar to animals, and countless studies report similar performance between animals and humans (adults and/or children). Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line is associated with three particular effects: the distance, magnitude and SNARC effects. The SNARC effect reveals a spatial association between number and space representations: small numbers are related to left space, while large numbers are related to right space. Recently a vertical SNARC effect has also been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a space-related bottom-to-top representation of numbers. Horizontal and vertical magnitude representations could influence subjects' performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations, in within- and between-subjects designs, in parity and magnitude comparison tasks, presenting positive or negative Arabic digits (1-9 without 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C two groups of subjects performed both tasks under two different instruction-responding hand assignments with positive numbers. The results did not show any significant difference between the two assignments, even if the vertical number line seemed to be more flexible than the horizontal one. On the whole, Experiment 1 seemed to demonstrate a contextual (i.e. task set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A the participants were asked to bisect physical lines and number (2 or 9) lines horizontally and vertically. The findings demonstrated that digit-9 strings tended to generate a more rightward bias than digit-2 strings horizontally. In the vertical condition, however, digit-2 strings generated a more upward bias than digit-9 strings, suggesting a top-to-bottom number line. In Experiment 2B the participants were asked to bisect lines flanked by numbers (i.e. 1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal and left-diagonal lines. Four number conditions were created according to congruent or incongruent number line representations: 1-1, 1-7, 7-1 and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) determined a significant bias towards the bottom of the line compared with the congruent condition (7-1). Experiment 2 suggested a more rigid horizontal number line, while in the vertical condition the number representation may be more flexible. In Experiment 3 we adopted the materials of Experiment 2B in order to find a number line effect on temporal (motor) performance. The participants were presented with horizontal, vertical, right-diagonal and left-diagonal lines flanked by the same digits (i.e. 1-1 or 7-7) or by different digits (i.e. 1-7 or 7-1). The digits were spatially congruent or incongruent with their respective hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, close to the small digit, or to bisect them. Number processing influenced movement execution more than movement planning. Number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. The participants were asked to perform a number-matching and an addition verification task. The findings showed an interference effect between sum nodes and neutral nodes only with a horizontal presentation of digit cues in the number-matching task. In the addition verification task, performance was similar for horizontal and vertical presentations of arithmetic problems. In conclusion, the data seem to show an automatic activation of a horizontal number line, which is also used to retrieve arithmetic facts. The horizontal number line seems to be more rigid and the preferred way to order numbers from left to right; a possible explanation could be the left-to-right direction of reading and writing. The vertical number line seems to be more flexible and more task-dependent, perhaps reflecting the many examples in the environment that represent numbers either from bottom to top or from top to bottom. However, the bottom-to-top number line seems to be activated only by explicit task demands.
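The SNARC effect described above is conventionally quantified by regressing, for each digit, the right-hand minus left-hand response-time difference (dRT) on digit magnitude; a negative slope means small numbers favour left responses and large numbers right ones. A minimal sketch with toy reaction times (the dRT values below are invented for illustration, not the thesis data):

```python
# Least-squares slope of dRT (right-hand RT minus left-hand RT)
# on digit magnitude: a negative slope indicates a SNARC effect.
def snarc_slope(digits, drt):
    n = len(digits)
    mx = sum(digits) / n
    my = sum(drt) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(digits, drt))
    var = sum((x - mx) ** 2 for x in digits)
    return cov / var

digits = [1, 2, 3, 4, 6, 7, 8, 9]          # 1-9 without 5, as in the task
drt = [40, 30, 22, 10, -8, -21, -30, -42]  # toy dRT values in ms

print(snarc_slope(digits, drt) < 0)  # True: a SNARC-like pattern
```

The same regression can be run on vertical response arrangements to probe the bottom-to-top number line discussed above.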

Relevance:

100.00%

Publisher:

Abstract:

In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing and space industries. A robot is therefore a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, namely the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object to grasp but also the effect they wish to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning. Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as more tasks and object features may be introduced in the future, and a complex network designed only on the basis of human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
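Score-based structure learning of the kind the thesis investigates can be sketched compactly. The fragment below is a generic greedy search over binary data, with hypothetical variables, not the expert network or the specific algorithm evaluated in the thesis: each candidate edge parent → child is scored by the BIC-penalized log-likelihood of the child given its parents, and the search repeatedly adds the single best acyclic edge until no edge improves the score.

```python
import math

def local_score(data, child, parents):
    """BIC-penalized log-likelihood of binary `child` given `parents`.
    `data` is a list of dicts mapping variable name -> 0/1."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for k in (n0, n1):
            if k:
                ll += k * math.log(k / n)
    # One Bernoulli parameter per parent configuration
    return ll - 0.5 * (2 ** len(parents)) * math.log(len(data))

def ancestors(parents, v):
    """All ancestors of v under the parent map (for cycle checking)."""
    seen, stack = set(), list(parents[v])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return seen

def greedy_structure(data, variables, max_parents=2):
    parents = {v: [] for v in variables}
    while True:
        best = None
        for child in variables:
            if len(parents[child]) >= max_parents:
                continue
            base = local_score(data, child, parents[child])
            for p in variables:
                if p == child or p in parents[child]:
                    continue
                if child in ancestors(parents, p):  # edge would close a cycle
                    continue
                gain = local_score(data, child, parents[child] + [p]) - base
                if gain > 1e-9 and (best is None or gain > best[0]):
                    best = (gain, child, p)
        if best is None:
            return parents
        parents[best[1]].append(best[2])
```

On data where two variables are perfectly correlated, the search links them with a single edge (the direction is not identifiable from the score alone, which is one of the weaknesses such algorithms present).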

Relevance:

100.00%

Publisher:

Abstract:

In the mammalian central nervous system, N-methyl-D-aspartate (NMDA) receptors control many neuronal processes, particularly during ontogenesis and in learning and memory. In the present work, the role of these receptors during cortical development and in learning was investigated in the mouse using electrophysiological, molecular-biological, pharmacological, histological, genetic and behavioral methods. Oscillatory network activity is essential for healthy cortical development. Paired patch-clamp experiments on neonatal subplate cells showed that these neurons are electrically coupled; they could thereby make an important contribution to the generation and amplification of network oscillations. Subplate cells receive afferent input from the thalamus as well as from neighboring subplate cells, and the functional and molecular properties of these synapses differed in an input-specific manner: connections within the subplate showed integration and summation capabilities, little synaptic depression, paired-pulse facilitation, and an increased NR2D content in their NMDA receptors. CA1 pyramidal cells of the adult hippocampus showed an input-specific distribution of NMDA receptor subunits comparable to that of subplate cells: synapses of the Schaffer collaterals had a higher NR2B content than temporo-ammonic connections. Activation of dopamine receptors potentiated NR2B-mediated synaptic currents in CA1 neurons. In complex learning processes, such as the extinction of a traumatic memory, NMDA receptors of hippocampal CA1 cells played a decisive role: CA1 NMDA receptor knockout mice showed substantial extinction deficits after fear conditioning. In addition, these mice developed increased anxiety and hyperactivity, which in humans are symptoms of psychiatric anxiety disorders. CA1 NMDA receptor knockout mice could therefore serve as a new animal model for disorders triggered by a traumatic experience, such as post-traumatic stress disorder (PTSD).

Relevance:

100.00%

Publisher:

Abstract:

Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures, and other biological and mechanical applications. Guided-wave (GW) based inspections are an attractive means for structural health monitoring. This thesis presents the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures. In guided-wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted: this operator maps the frequency axis through a function derived from the group velocity of the test material, and is used to remove the dependence on the travelled distance from the acquired signals. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on compressive sensing was developed and applied for data reduction; it also has a beneficial effect in enhancing the accuracy of structural defect localization. This algorithm uses the convolutive model of ultrasonic guided-wave propagation, which takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
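The core of the frequency-warping operator described above is a resampling of the signal spectrum along a remapped frequency axis. The sketch below illustrates only this mechanism; the warping map is left as a placeholder argument, since the actual map in the thesis is derived from the group-velocity curve of the test material.

```python
import numpy as np

def warp_spectrum(signal, fs, warp_map):
    """Resample the one-sided spectrum of `signal` (sampled at `fs` Hz)
    at the warped frequencies warp_map(f), then transform back.
    `warp_map` stands in for the group-velocity-derived map used in
    dispersion compensation."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    wf = warp_map(freqs)
    # Interpolate real and imaginary parts of the spectrum separately
    warped = np.interp(wf, freqs, spectrum.real) \
        + 1j * np.interp(wf, freqs, spectrum.imag)
    return np.fft.irfft(warped, n=len(signal))

# Sanity check: the identity map must leave the signal unchanged.
t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * 5 * t)
y = warp_spectrum(x, 256, lambda f: f)
```

With a nonlinear map derived from the material's dispersion curve, the same operation linearizes the frequency-distance dependence that smears GW echoes in time.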

Relevance:

100.00%

Publisher:

Abstract:

This study explores listeners' perception of disfluencies, i.e. ungrammatical pauses, filled pauses, repairs, false starts and repetitions, which can irritate listeners and impede comprehension. As professional communicators, conference interpreters should be competent public speakers: their speech should be easily understood by listeners and should not contain elements that may be considered irritating. The aim of this study was to understand to what extent listeners notice disfluencies and consider them irritating, and to examine whether there are differences between interpreters and non-interpreters and between different age groups. A survey was therefore carried out among professional interpreters, students of interpreting, and people who regularly attend conferences. The respondents were asked to answer a questionnaire after listening to three speeches: three consecutive interpretations delivered during the final exams held at the Advanced School of Languages, Literature, Translation and Interpretation (SSLLTI) in Forlì. Since conference interpreters' public speaking skills should be at least as good as those of the speakers at a conference, the speeches were presented to the listeners as speeches delivered during a conference, with no mention of interpreting. The study is divided into five chapters. Chapter I outlines the characteristics of the interpreter as a professional communicator. The quality criterion of "user-friendliness" is explored, with a focus on features that make a speech more user-friendly: fluency, intonation, coherence and cohesion. The chapter also examines listeners' quality expectations and evaluations. Chapter II describes the methodology of the study. Chapter III contains a detailed analysis of the texts used for the study, focusing on those elements that may irritate listeners or impede comprehension, namely disfluencies, the wrong use of intonation, and a lack of coherence or cohesion. Chapter IV outlines the results of the survey, while Chapter V presents our conclusions.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. The objective of the present study was therefore to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. METHODS: The TULIA consists of 48 items covering the imitation and pantomime domains for non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos of 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). RESULTS: The clinimetric findings demonstrated mostly good to excellent internal consistency and inter- and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. CONCLUSION: These results show that the TULIA is both a reliable and valid test for systematically assessing gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.
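The scoring scheme described above (48 items, each on a 0-5 scale, total range 0-240) can be sketched as a small validating helper. The function name is hypothetical, for illustration only:

```python
def tulia_total(item_scores):
    """Total TULIA score: 48 items, each scored 0-5, so the total
    ranges from 0 to 240 as stated in the abstract."""
    if len(item_scores) != 48:
        raise ValueError("the TULIA has 48 items")
    if any(not (0 <= s <= 5) for s in item_scores):
        raise ValueError("each item is scored on a 0-5 scale")
    return sum(item_scores)

print(tulia_total([5] * 48))  # maximum score: 240
```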

Relevance:

100.00%

Publisher:

Abstract:

While functional changes linked to second language learning have been subject to extensive investigation, the issue of learning-dependent structural plasticity in the fields of bilingualism and language comprehension has so far received less notice. In the present study we used voxel-based morphometry to monitor structural changes occurring within five months of second language learning. Native English-speaking exchange students learning German in Switzerland were examined once at the beginning of their stay and once about five months later, when their German language skills had significantly increased. We show that structural changes in the left inferior frontal gyrus correlate with the increase in second language proficiency as measured by a paper-and-pencil language test. In contrast to these correlated increases, the absolute values of grey matter density and of second language proficiency did not correlate (at either the first or the second measurement). This indicates that the individual amount of learning is reflected in brain structure changes, regardless of absolute proficiency.

Relevance:

100.00%

Publisher:

Abstract:

Composers commonly use major or minor scales to create different moods in music. Nonmusicians show poor discrimination and classification of this musical dimension; however, they can perform these tasks if the decision is phrased as happy vs. sad. We created pairs of melodies identical except for mode; the first major or minor third or sixth was the critical note that distinguished major from minor mode. Musicians and nonmusicians judged each melody as major vs. minor or happy vs. sad. We collected ERP waveforms, triggered to the onset of the critical note. Musicians showed a late positive component (P3) to the critical note only for the minor melodies, and in both tasks. Nonmusicians could adequately classify the melodies as happy or sad but showed little evidence of processing the critical information. Major appears to be the default mode in music, and musicians and nonmusicians apparently process mode differently.

Relevance:

100.00%

Publisher:

Abstract:

Auditory imagery is more than just mental “replaying” of tunes in one’s head. I will review several studies that capture characteristics of complex and active imagery tasks, using both behavioral and neuroscience approaches. I use behavioral methods to capture people’s ability to make emotion judgments about both heard and imagined music in real time. My neuroimaging studies look at the neural correlates of encoding an imagined melody, anticipating an upcoming tune, and also imagining tunes backwards. Several studies show voxel-by-voxel correlates of neural activity with self-report of imagery vividness. These studies speak to the ways in which musical imagery allows us not just to remember music, but also how we use those memories to judge temporally changing aspects of the musical experience.