819 results for Task-based information access
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, achieved merely by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main competitor of GPRM for solving three well-known problems on both platforms: LU Factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM's model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for LU Factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM's task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List Processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
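To make the granularity point concrete, here is a minimal Python sketch (not GPRM's API; the per-row work function is an assumed placeholder) of the task-agglomeration idea: merging many tiny per-row tasks into fewer, larger chunks so that task creation and scheduling overhead is amortised.

```python
# Task-agglomeration sketch; this is NOT GPRM's API. The work function
# below is a placeholder standing in for, e.g., convolving one image row.
from concurrent.futures import ThreadPoolExecutor

def process_rows(rows):
    return [sum(row) for row in rows]    # placeholder per-row work

def run_chunked(image_rows, n_workers=4, rows_per_task=64):
    # One task per chunk of rows instead of one task per row: fewer,
    # coarser tasks amortise creation and scheduling overhead.
    chunks = [image_rows[i:i + rows_per_task]
              for i in range(0, len(image_rows), rows_per_task)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return [r for chunk in pool.map(process_rows, chunks) for r in chunk]
```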
Abstract:
The following paper is an action research study conducted with a group of six teenagers aged 11-13 from the city of Cali. The project was carried out at a non-formal education institution; the paper describes the process of a teaching intervention in which the concepts of Critical Pedagogy and Task-Based Learning played the central roles. The results show, first, that students really need to feel motivated in order to accept a critical approach; second, that the role of the teacher in achieving the objectives is extremely relevant: teachers need to have a critical perspective before working in this field, as well as to constantly seek information in order to innovate in their classes; third, that Task-Based Learning (TBL) and critical pedagogy are processes that need time in order to bear fruit; and fourth, that a needs analysis is essential for the quality of the intervention.
Abstract:
The objective of this Final Graduation Project was to formulate a proposal for the design and construction of an Intranet that would allow the Centro de Documentación "Alvaro Castro Jenkins" of the Banco Central de Costa Rica to manage information resources, services, and products with confidential, discretionary, and restricted access for the Bank's staff, over a private communications network based on the TCP/IP protocols and World Wide Web (WWW) technologies. The implementation of the OVSICORI Information Unit was proposed on the basis of a diagnosis of the infrastructure, the technological, documentary, financial, and human resources, and the needs of actual and potential users. This Final Graduation Project consisted of a descriptive study in which a diagnosis was made of the existing information services in the joint library of the Corte Interamericana de Derechos Humanos and the Instituto Interamericano de Derechos Humanos, in order to analyse their relevance and usefulness and to propose a continuous improvement programme for the existing services and products, as well as the design of new information services and products.
Abstract:
Every day, humans and animals navigate complex acoustic environments where multiple sound sources overlap. Somehow, they effortlessly perform an acoustic scene analysis and extract relevant signals from background noise. Constantly updating the behavioral relevance of ambient sounds requires the representation and integration of incoming acoustical information with internal representations such as behavioral goals, expectations, and memories of previous sound-meaning associations. Rapid plasticity of auditory representations may contribute to our ability to attend and focus on relevant sounds. In order to better understand how auditory representations are transformed in the brain to incorporate behavioral contextual information, we explored task-dependent plasticity in neural responses recorded at four levels of the auditory cortical processing hierarchy of ferrets: the primary auditory cortex (A1), two higher-order auditory areas (dorsal PEG and ventral-anterior PEG), and dorso-lateral frontal cortex. In one study, we explored the laminar profile of rapid task-related plasticity in A1 and found that plasticity occurred at all depths but was greatest in supragranular layers. This result suggests that rapid task-related plasticity in A1 derives primarily from intracortical modulation of neural selectivity. In two other studies, we explored task-dependent plasticity in two higher-order areas of the ferret auditory cortex that may correspond to belt (secondary) and parabelt (tertiary) auditory areas. We found that representations of behaviorally relevant sounds are progressively enhanced during the performance of auditory tasks, and that these selective enhancement effects become progressively larger ascending the auditory cortical hierarchy. We also observed neuronal responses to non-auditory, task-related information (reward timing, expectations) in the parabelt area that were very similar to responses previously described in frontal cortex. These results suggest that auditory representations in the brain are transformed from the more veridical spectrotemporal information encoded in earlier auditory stages to a more abstract representation encoding the behavioral meaning of sounds in higher-order auditory areas and dorso-lateral frontal cortex.
Abstract:
Industrial robots are both versatile and high-performing, enabling the flexible automation typical of modern Smart Factories. For safety reasons, however, they must be confined inside closed fences and/or virtual safety barriers to keep them strictly separated from human operators. This can be a limitation in scenarios in which it is useful to combine human cognitive skills with the accuracy and repeatability of a robot, or simply to allow a safe coexistence in a shared workspace. Collaborative robots (cobots), on the other hand, are intrinsically limited in speed and power in order to share workspace and tasks with human operators, and feature the very intuitive hand-guiding programming method. Cobots, however, cannot compete with industrial robots in terms of performance, and are thus useful only in a limited niche, where they can actually bring an improvement in productivity and/or in the quality of the work thanks to their synergy with human operators. The limitations of both the purely industrial and the collaborative paradigms can be overcome by combining industrial robots with artificial vision. In particular, vision can be exploited for real-time adjustment of the pre-programmed task-based robot trajectory, by means of the visual tracking of dynamic obstacles (e.g. human operators). This strategy allows the robot to modify its motion only when necessary, thus maintaining a high level of productivity while at the same time increasing its versatility. In addition, vision offers the possibility of more intuitive programming paradigms for industrial robots as well, such as programming by demonstration. These possibilities offered by artificial vision enable an effective and promising way of achieving human-robot collaboration, which overcomes the limitations of both previous paradigms while keeping their strengths.
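As a concrete illustration of the distance-based motion adjustment mentioned above, the following Python sketch scales a robot's pre-programmed speed according to the distance to the nearest visually tracked obstacle; the function name and thresholds are hypothetical, not taken from the thesis.

```python
# Minimal sketch of speed scaling driven by obstacle tracking.
# NOTE: speed_scale, d_stop and d_full are illustrative assumptions,
# not the thesis's actual method or parameters.
def speed_scale(d_obstacle, d_stop=0.3, d_full=1.5):
    """Factor in [0, 1] applied to the pre-programmed velocity,
    given the distance (in metres) to the nearest tracked obstacle."""
    if d_obstacle <= d_stop:
        return 0.0                                    # too close: halt
    if d_obstacle >= d_full:
        return 1.0                                    # clear: nominal speed
    return (d_obstacle - d_stop) / (d_full - d_stop)  # linear ramp

# Example: an operator tracked 0.9 m away halves the programmed speed.
print(speed_scale(0.9))  # -> 0.5
```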
Abstract:
In next-generation Internet-of-Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest to support massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial reliability requirement. To address these challenges, we suggest the joint use of grant-free access protocols, massive MIMO at the base station, framed schemes that let the contention start and end within a frame, and successive interference cancellation techniques at the base station. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both of these layers, examining topics ranging from symbol-level signal processing to successive interference cancellation-based scheduling strategies. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY and MAC layer design based on density evolution on sparse graphs.
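As an illustration of the coded random access concept summarised above, the following Python sketch implements a generic peeling-style successive interference cancellation decoder for a framed slotted scheme (in the spirit of irregular repetition slotted ALOHA); the population, frame, and replica parameters are illustrative assumptions, not values from the thesis.

```python
# Generic peeling (SIC) decoder for framed coded random access.
# All parameters below are illustrative, not taken from the thesis.
import random

def sic_decode(n_users=50, n_slots=100, replicas=2, seed=0):
    rng = random.Random(seed)
    # Each user transmits `replicas` copies of its packet in distinct slots.
    placements = {u: rng.sample(range(n_slots), replicas) for u in range(n_users)}
    slots = [set() for _ in range(n_slots)]
    for u, chosen in placements.items():
        for s in chosen:
            slots[s].add(u)
    decoded, progress = set(), True
    while progress:                      # iterate until no singleton remains
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:       # singleton slot: packet decodable
                u = slots[s].pop()
                decoded.add(u)
                for s2 in placements[u]: # cancel u's replicas elsewhere
                    slots[s2].discard(u)
                progress = True
    return decoded

print(len(sic_decode()))  # number of users resolved within the frame
```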
Abstract:
Objective: While in many Western affluent countries there is widespread awareness of chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis (ME), little is known about the awareness of CFS/ME in low- and middle-income countries. We compared the awareness of CFS in Brazil and the United Kingdom. Methods: Recognition and knowledge of CFS were assessed among 120 Brazilian specialist doctors in two major university hospitals using a typical case vignette of CFS. We also surveyed 3914 and 2435 consecutive attenders in Brazilian and British primary care clinics, respectively, concerning their awareness of CFS. Results: When given a typical case vignette of CFS, only 30.8% [95% confidence interval (CI), 22.7-39.9%] of Brazilian specialist doctors mentioned chronic fatigue or CFS as a possible diagnosis, a proportion substantially lower than that observed in Western affluent countries. Similarly, only 16.2% (95% CI, 15.1-17.4%) of Brazilian primary care attenders were aware of CFS, in contrast to 55.1% (95% CI, 53.1-57.1%) of their British counterparts (P < .001). This difference remained highly significant after controlling for patients' sociodemographic and socioeconomic characteristics (P < .001). Conclusions: The awareness of CFS was substantially lower in Brazil than in the United Kingdom. The observed difference may influence patients' help-seeking behavior and both doctors' and patients' beliefs and attitudes in relation to fatigue-related syndromes. Attempts to promote the awareness of CFS should be considered in Brazil, but careful plans are required to ensure the delivery of sound evidence-based information. (c) 2008 Elsevier Inc. All rights reserved.
Abstract:
This article studies the transparency and accountability instruments and mechanisms of Brazilian regulatory agencies. Through a characterization of the control, participation, and information-access processes of the Agência Nacional de Vigilância Sanitária (Anvisa), it analyses how the agencies have used such instruments and mechanisms to accommodate and process the various interests involved in the regulatory process, promote the stability of the rules of the game, and reinforce their legitimacy in the political and social environment in which they operate. Data were used concerning the various transparency and accountability instruments, as well as the instances and mechanisms of societal participation in Anvisa's regulatory process. The article concludes that agency accountability runs counter to the tendency towards insulation, while at the same time it may constitute an effort towards society's recognition of the arrival of a new institutional apparatus in the Brazilian State: the regulatory agencies.
Abstract:
OBJECTIVES: To estimate the frequency of online searches on the topic of smoking and to analyze the quality of online resources available to smokers interested in giving up smoking. METHODS: Search engines were used to review searches and online resources related to stopping smoking in Brazil in 2010. The number of searches was determined using the analytical tools available in Google Ads; the number and type of sites were determined by replicating the search patterns of internet users. The sites were classified according to content (advertising, library of articles, and other). The quality of the sites was analyzed using the Smoking Treatment Scale - Content (STS-C) and the Smoking Treatment Scale - Rating (STS-R). RESULTS: A total of 642,446 searches was carried out. Around a third of the 113 sites encountered were of the 'library' type, i.e. they only contained articles, followed by sites containing clinical advertising (18.6%) and professional education (10.6%). Thirteen of the sites offered advice on quitting directed at smokers. The majority of the sites did not contain evidence-based information, were not interactive, and offered no means of communicating with users after the first contact. Other limitations included a lack of financial disclosure, no guarantee of privacy for the information obtained, and no distinction between editorial content and advertisements. CONCLUSIONS: There is a disparity between the high demand for online support in giving up smoking and the scarcity of quality online resources for smokers. It is necessary to develop interactive, customized online resources, based on evidence and evaluated in randomized clinical trials, in order to improve the support available to Brazilian smokers.
Abstract:
Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends Particle Swarm Optimization with natural selection to enhance the ability to escape from sub-optimal solutions. An extension of DPSO to multi-robot applications has recently been proposed and denoted Robotic Darwinian PSO (RDPSO); it benefits from dynamically partitioning the whole population of robots, thereby decreasing the amount of information exchange required among robots. This paper further extends the previously proposed algorithm by adapting the behavior of robots based on a set of context-based evaluation metrics. These metrics are used as inputs of a fuzzy system that systematically adjusts the RDPSO parameters (i.e., the outputs of the fuzzy system), thus improving its convergence rate and its robustness to obstacles and to communication constraints. The adapted RDPSO is evaluated in groups of physical robots and further explored using larger populations of simulated mobile robots in a larger scenario.
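To illustrate the principle of context-driven parameter adaptation described above, here is a minimal Python sketch in which a single normalised context metric drives the inertia weight of a standard PSO velocity update; the two linear rules merely stand in for the paper's fuzzy system, whose actual metrics and rule base are not reproduced here.

```python
# Minimal sketch: one context metric (normalised progress in [0, 1])
# adapts the inertia weight. The rules are illustrative placeholders
# for the paper's fuzzy system, not its actual design.
import random

def adapt_inertia(progress, w_min=0.4, w_max=0.9):
    # "progress is low" -> explore (high inertia);
    # "progress is high" -> exploit (low inertia). Linear memberships.
    low, high = 1.0 - progress, progress
    return (low * w_max + high * w_min) / (low + high)

def velocity_update(v, x, p_best, g_best, progress, c1=1.5, c2=1.5):
    # Standard PSO velocity update using the adapted inertia weight.
    w = adapt_inertia(progress)
    return [w * vi + c1 * random.random() * (pb - xi)
                   + c2 * random.random() * (gb - xi)
            for vi, xi, pb, gb in zip(v, x, p_best, g_best)]
```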
Abstract:
Wireless sensor networks (WSNs) emerge as underlying infrastructures for new classes of large-scale networked embedded systems. However, WSN system designers must fulfill the quality-of-service (QoS) requirements imposed by the applications (and users). Very harsh and dynamic physical environments and extremely limited energy/computing/memory/communication node resources are major obstacles for satisfying QoS metrics such as reliability, timeliness, and system lifetime. The limited communication range of WSN nodes, link asymmetry, and the characteristics of the physical environment lead to a major source of QoS degradation in WSNs: the “hidden-node problem”. In wireless contention-based medium access control (MAC) protocols, when two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision, called a hidden-node or blind collision. This problem greatly impacts network throughput, energy-efficiency, and message transfer delays, and it dramatically increases with the number of nodes. This paper proposes H-NAMe, a very simple yet extremely efficient hidden-node avoidance mechanism for WSNs. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes, and scales to multiple clusters via a cluster grouping strategy that guarantees no interference between overlapping clusters. Importantly, H-NAMe is instantiated in IEEE 802.15.4/ZigBee, currently the most widespread communication technologies for WSNs, with only minor add-ons and ensuring backward compatibility with the protocol standards. H-NAMe was implemented and exhaustively tested using an experimental test-bed based on “off-the-shelf” technology, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. H-NAMe's effectiveness was also demonstrated in a target tracking application with mobile robots over a WSN deployment.
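As an illustration of the grouping strategy described above, the following Python sketch splits a cluster into disjoint groups of pairwise-visible (non-hidden) nodes by greedily colouring a graph whose edges connect mutually hidden node pairs; the actual H-NAMe mechanism negotiates these groups through in-protocol message exchange, which the sketch omits.

```python
# Greedy grouping sketch: each group contains only nodes that can all
# hear one another. `hidden_pairs` lists node pairs hidden from each
# other (an assumption of this illustration; H-NAMe discovers this
# relation via its own message exchange).
def group_non_hidden(nodes, hidden_pairs):
    hidden = {n: set() for n in nodes}
    for u, v in hidden_pairs:
        hidden[u].add(v)
        hidden[v].add(u)
    groups = []                          # disjoint groups of non-hidden nodes
    for n in nodes:
        for g in groups:
            if hidden[n].isdisjoint(g):  # n is visible to every member of g
                g.add(n)
                break
        else:
            groups.append({n})           # start a new group for n
    return groups

# Example: nodes 0 and 2 cannot hear each other, so they are separated.
print(group_non_hidden([0, 1, 2], [(0, 2)]))  # -> [{0, 1}, {2}]
```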
Abstract:
The hidden-node problem has been shown to be a major source of Quality-of-Service (QoS) degradation in Wireless Sensor Networks (WSNs) due to factors such as the limited communication range of sensor nodes, link asymmetry, and the characteristics of the physical environment. In wireless contention-based Medium Access Control protocols, if two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision – usually called a hidden-node or blind collision. This problem greatly affects network throughput, energy-efficiency, and message transfer delays, which might be particularly dramatic in large-scale WSNs. This technical report tackles the hidden-node problem in WSNs and proposes H-NAMe, a simple yet efficient distributed mechanism to overcome it. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes and then scales to multiple clusters via a cluster grouping strategy that guarantees no transmission interference between overlapping clusters. We also show that the H-NAMe mechanism can be easily applied to the IEEE 802.15.4/ZigBee protocols with only minor add-ons, ensuring backward compatibility with the standard specifications. We demonstrate the feasibility of H-NAMe via an experimental test-bed, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. We believe that the results in this technical report will be quite useful in efficiently enabling IEEE 802.15.4/ZigBee as a WSN protocol.
Abstract:
The hidden-node problem has been shown to be a major source of Quality-of-Service (QoS) degradation in Wireless Sensor Networks (WSNs) due to factors such as the limited communication range of sensor nodes, link asymmetry, and the characteristics of the physical environment. In wireless contention-based Medium Access Control protocols, if two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision – usually called a hidden-node or blind collision. This problem greatly affects network throughput, energy-efficiency, and message transfer delays, which might be particularly dramatic in large-scale WSNs. This paper tackles the hidden-node problem in WSNs and proposes H-NAMe, a simple yet efficient distributed mechanism to overcome it. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes and then scales to multiple clusters via a cluster grouping strategy that guarantees no transmission interference between overlapping clusters. We also show that the H-NAMe mechanism can be easily applied to the IEEE 802.15.4/ZigBee protocols with only minor add-ons, ensuring backward compatibility with the standard specifications. We demonstrate the feasibility of H-NAMe via an experimental test-bed, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. We believe that the results in this paper will be quite useful in efficiently enabling IEEE 802.15.4/ZigBee as a WSN protocol.
Abstract:
In recent years, human society has evolved from the “industrial society age” into the “knowledge society age”. This means that the media supporting knowledge have migrated from “pen and paper” to computer-based Information Systems. As a consequence, Ergonomics has assumed increasing importance as a science/technology that deals with the problem of adapting work to humans, namely in terms of Usability. This paper presents some relevant Ergonomics, Usability, and User-centred design concepts regarding Information Systems.
Abstract:
In contrast to fungi, exposure to mycotoxins is not usually identified as a risk factor in occupational settings. This is probably due to the absence of limits on the concentration of airborne mycotoxins, and also to the fact that these compounds are rarely monitored in occupational environments. Aflatoxin B1 (AFB1) is the most prevalent aflatoxin and is associated with carcinogenicity, teratogenicity, genotoxicity, and immunotoxicity, but only a few studies have examined exposure in occupational settings. Workers can be exposed to high airborne levels during certain operations in specific occupational settings. Aim of the study: to assess exposure to AFB1 in three settings: poultry production, swine production, and waste management.