819 results for "Task-based information access"


Relevance: 100.00%

Abstract:

Master's degree in Early English Teaching (Mestrado em Ensino Precoce do Inglês)

Relevance: 100.00%

Abstract:

In recent years, technological advances and the miniaturization of many components have enabled new concepts, ideas, and projects that until now would have belonged in science-fiction films. With current technology, small wearable devices can be developed with multiple interfaces, multiple connectivity options, processing power, and battery autonomy, thereby answering the growing need to interact with the everyday electronic equipment that surrounds us and improving how information is accessed and delivered. The main goal of this work is therefore to demonstrate and implement a concept that brings the user closer to, and eases interaction with, the surrounding world, in both domestic and industrial environments. To that end, a wrist-worn wearable device was designed and implemented on a hardware and software architecture capable of running different applications, such as extending smartphone alerts, crowdsourcing weather information, industrial maintenance and inspection, and remote monitoring of security forces. The results show that the concept is viable from both a technical and a functional standpoint, and suggest that these concepts, methods, and technologies can be integrated into robotic platforms developed within projects of the Laboratório de Sistemas Autónomos (LSA), as well as in industrial and leisure contexts.

Relevance: 100.00%

Abstract:

The vision of the Internet of Things (IoT) includes large and dense deployments of interconnected smart sensing and monitoring devices. Such vast deployments necessitate the collection and processing of large volumes of measurement data. However, collecting all the measured data from individual devices at such a scale may be impractical and time consuming. Moreover, processing these measurements requires complex algorithms to extract useful information. Thus, it becomes imperative to devise distributed information-processing mechanisms that identify application-specific features in a timely manner and with low overhead. In this article, we present a feature extraction mechanism for dense networks that takes advantage of dominance-based medium access control (MAC) protocols to (i) efficiently obtain global extrema of the sensed quantities, (ii) extract local extrema, and (iii) detect the boundaries of events, using simple transforms that nodes apply to their local data. We extend our results to a large dense network with multiple broadcast domains (MBD). We discuss and compare two approaches for addressing the challenges of MBD, and we show through extensive evaluations that our proposed distributed MBD approach is fast and efficient at retrieving the most valuable measurements, independent of the number of sensor nodes in the network.
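The dominance-based MAC idea behind point (i) can be sketched as a bitwise tournament (a simplified single-broadcast-domain illustration; the function name and encoding are ours, not the article's): nodes transmit their value bit by bit from the most significant bit, a '1' dominates the shared channel, and any node that sent '0' while hearing '1' withdraws. The global maximum emerges after as many rounds as there are bits, independent of the number of nodes.

```python
def dominance_arbitration(values, bits=16):
    """Simulate one dominance-based MAC tournament (CAN-style):
    all nodes transmit bit by bit from the MSB; '1' is dominant.
    A node that sends '0' while the channel carries '1' withdraws.
    The surviving value is the global maximum, found in `bits`
    rounds regardless of how many nodes participate."""
    active = list(values)
    for b in range(bits - 1, -1, -1):
        channel = any((v >> b) & 1 for v in active)  # wired-OR bus
        if channel:  # dominant bit seen: nodes holding '0' drop out
            active = [v for v in active if (v >> b) & 1]
    return active[0]
```

Obtaining the minimum works the same way with an inverted encoding; ties are harmless because all surviving nodes hold the same value.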

Relevance: 100.00%

Abstract:

African elections often reveal low levels of political accountability. We assess different forms of voter education during an election in Mozambique. Three interventions providing information to voters and calling for their electoral participation were randomized: an SMS-based information campaign, an SMS hotline for electoral misconduct, and the distribution of a free newspaper. To measure impact, we look at official electoral results, reports by electoral observers, and behavioral and survey data. We find positive effects of all treatments on voter turnout. We observe that the distribution of the newspaper led to more accountability-based participation and to a decrease in electoral problems.

Relevance: 100.00%

Abstract:

ABSTRACT: In order to evaluate the one-year evolution of web-based information on alcohol dependence, we re-assessed alcohol-related sites in July 2007 with the same evaluating tool that had been used to assess these sites in June 2006. Websites were assessed with a standardized form designed to rate sites on the basis of accountability, presentation, interactivity, readability, and content quality. The DISCERN scale was also used, which aims to assist persons without content expertise in assessing the quality of written health publications. Scores were highly stable for all components of the form one year later (r = .77 to .95, p < .01). Analysis of variance for repeated measures showed no time effect, no interaction between time and scale, no interaction between time and group (affiliation categories), and no interaction between time, group, and scale. The study highlights the lack of change in alcohol-dependence-related web pages across one year.

Relevance: 100.00%

Abstract:

In the Morris water maze (MWM) task, proprioceptive information is likely to have poor accuracy due to movement inertia. Hence, in this condition, dynamic visual information providing cues about linear and angular acceleration would play a critical role in spatial navigation. To investigate this assumption, we compared rats' spatial performance in the MWM and in the homing hole board (HB) task using a 1.5 Hz stroboscopic illumination. In the MWM, rats trained in the stroboscopic condition needed more time than those trained in a continuous-light condition to reach the hidden platform. They were also less accurate during the probe trial. In the HB task, in contrast, place learning remained unaffected by the stroboscopic light condition. The deficit in the MWM was thus complete, affecting both escape latency and discrimination of the reinforced area, and it was task specific. This dissociation confirms that dynamic visual information is crucial to spatial navigation in the MWM, whereas spatial navigation on solid ground is mediated by multisensory integration and is thus less dependent on visual information.

Relevance: 100.00%

Abstract:

BACKGROUND: Pharmacy-based case mix measures are an alternative source of information to the relatively scarce outpatient diagnosis data, but most published tools use national drug nomenclatures and offer no head-to-head comparison between drug-related and diagnosis-based categories. The objective of the study was to test the accuracy of drugs-based morbidity groups derived from the World Health Organization Anatomical Therapeutic Chemical Classification of drugs by checking them against diagnoses-based groups. METHODS: We compared drugs-based categories with their diagnoses-based analogues using anonymous data on 108,915 individuals insured with one of four companies. They were followed throughout 2005 and 2006 and hospitalized at least once during this period. The agreement between the two approaches was measured by weighted kappa coefficients. The reproducibility of the drugs-based morbidity measure over the 2 years was assessed for all enrollees. RESULTS: Eighty percent used a drug associated with at least one of the 60 morbidity categories derived from drug dispensation. After accounting for inpatient under-coding, fifteen conditions agreed sufficiently with their diagnoses-based counterparts to be considered alternatives to diagnoses. In addition, they exhibited good reproducibility and yielded prevalence estimates in accordance with national estimates. For 22 conditions, drugs-based information accurately identified a subset of the population defined by diagnoses. CONCLUSIONS: Most categories provide insurers with health-status information that could be exploited for healthcare expenditure prediction or ambulatory cost control, especially when ambulatory diagnoses are not available. However, due to insufficient concordance with their diagnoses-based analogues, their use as morbidity indicators is limited.
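The weighted kappa statistic used above corrects observed agreement for chance agreement while penalizing larger disagreements more heavily. A minimal sketch (our own helper, assuming ordinal categories coded 0..k-1 and linear weights):

```python
import numpy as np

def weighted_kappa(r1, r2, k, weights="linear"):
    """Chance-corrected agreement between two ratings of the same
    subjects on an ordinal scale with k categories (0..k-1).
    Linear weights w_ij = |i-j|/(k-1); kappa = 1 - sum(wO)/sum(wE)."""
    O = np.zeros((k, k))
    for a, b in zip(r1, r2):
        O[a, b] += 1                      # observed joint frequencies
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # expected under chance
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)           # disagreement weights
    if weights == "quadratic":
        w = w ** 2
    return 1 - (w * O).sum() / (w * E).sum()
```

Perfect agreement gives kappa = 1; agreement no better than chance gives kappa near 0.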

Relevance: 100.00%

Abstract:

In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks, such as segmentation, regularization, incorporation of prior knowledge, and registration, into a single framework. The active contour model is at the core of our framework, even if it is used differently than in standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve that minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we get accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based, or shape-based information. Our registration technique profits from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor that substantially deforms surrounding structures, which constitutes a highly challenging problem.
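As a rough illustration of the optical-flow side of such a framework (a generic sketch, not the authors' actual model), the classic Horn-Schunck iteration estimates a dense deformation field by alternating a brightness-constancy data term with a smoothness regularizer:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
    """Classic Horn-Schunck dense optical flow (Jacobi iteration):
    the brightness-constancy data term Ix*u + Iy*v + It = 0 is
    balanced against a smoothness term (neighbourhood averaging)
    that regularizes the flow/deformation field (u, v)."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)              # spatial derivatives
    It = I2 - I1                          # temporal derivative
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    den = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)           # smoothness: local averages
        num = Ix * ub + Iy * vb + It
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

Larger `alpha` yields a smoother field; the paper's contribution is to replace such a generic regularizer with active-contour-driven terms.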

Relevance: 100.00%

Abstract:

The pace of development of new healthcare technologies and related knowledge is very fast. Implementation of high-quality evidence-based knowledge is thus mandatory to ensure an effective healthcare system and patient safety. However, even though only a small fraction of the approximately 2500 scientific publications indexed daily in Medline is actually useful to clinical practice, the amount of new information is much too large to allow busy healthcare professionals to stay aware of possibly important evidence-based information.

Relevance: 100.00%

Abstract:

With the dramatic increase in the volume of experimental results in every domain of the life sciences, assembling pertinent data and combining information from different fields has become a challenge. Information is dispersed over numerous specialized databases and is presented in many different formats. Rapid access to experiment-based information about well-characterized proteins helps predict the function of uncharacterized proteins identified by large-scale sequencing. In this context, universal knowledgebases play essential roles in providing access to data from complementary types of experiments and serving as hubs with cross-references to many specialized databases. This review outlines how the value of experimental data is optimized by combining high-quality protein sequences with complementary experimental results, including information derived from protein 3D structures, using as an example the UniProt knowledgebase (UniProtKB) and the tools and links provided on its website (http://www.uniprot.org/). It also notes precautions necessary for successful predictions and extrapolations.

Relevance: 100.00%

Abstract:

In recent years, Semantic Web (SW) research has produced significant outcomes. Various industries have adopted SW technologies, while the 'deep web' has yet to reach the critical transformation point at which the majority of its data will be exploited through SW value layers. In this article we analyse SW applications from a 'market' perspective. We set out the key requirements for real-world, SW-enabled information systems and discuss the major difficulties that have delayed SW uptake. This article contributes to the SW and knowledge-management literature by providing a context for discourse towards best practices for SW-based information systems.

Relevance: 100.00%

Abstract:

We evaluated the performance of an optical-camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field of view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2×2×3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution, comparing PMC on versus PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the temporal signal-to-noise ratio of the time series by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p<0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies.
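The temporal signal-to-noise ratio reported above is simply the voxel-wise mean over time divided by the standard deviation over time; a minimal sketch (our own helper):

```python
import numpy as np

def temporal_snr(ts):
    """Voxel-wise temporal SNR of an fMRI time series:
    mean over time divided by standard deviation over time.
    `ts` has shape (n_volumes, ...voxel_dims); a 30-40% tSNR gain
    means the values of this map rise by that factor."""
    return ts.mean(axis=0) / ts.std(axis=0, ddof=1)
```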

Relevance: 100.00%

Abstract:

Activity decreases, or deactivations, of midline and parietal cortical brain regions are routinely observed in human functional neuroimaging studies that compare periods of task-based cognitive performance with passive states, such as rest. It is now widely held that such task-induced deactivations index a highly organized "default-mode network" (DMN): a large-scale brain system whose discovery has had broad implications in the study of human brain function and behavior. In this work, we show that common task-induced deactivations from rest also occur outside of the DMN as a function of increased task demand. Fifty healthy adult subjects performed two distinct functional magnetic resonance imaging tasks that were designed to reliably map deactivations from a resting baseline. As primary findings, increases in task demand consistently modulated the regional anatomy of DMN deactivation. At high levels of task demand, robust deactivation was observed in non-DMN regions, most notably, the posterior insular cortex. Deactivation of this region was directly implicated in a performance-based analysis of experienced task difficulty. Together, these findings suggest that task-induced deactivations from rest are not limited to the DMN and extend to brain regions typically associated with integrative sensory and interoceptive processes.

Relevance: 100.00%

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted tools that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to ranking problems in general, than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
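The kind of shortcut a fast cross-validation algorithm for regularized least squares exploits can be illustrated with the standard closed-form leave-one-out identity (a generic ridge-regression version, not the thesis's exact algorithm): one matrix factorization yields all n held-out residuals, instead of refitting n separate models.

```python
import numpy as np

def rls_loocv_residuals(X, y, lam):
    """Exact leave-one-out residuals for regularized least squares
    (ridge) via the hat matrix H = X (X'X + lam I)^-1 X':
    e_i = (y_i - yhat_i) / (1 - H_ii), with no refitting."""
    n, d = X.shape
    G = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(G, X.T)      # hat (smoother) matrix
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))
```

This turns n model fits of cost O(n d^2) each into a single fit plus O(n) divisions, which is what makes exhaustive cross-validation cheap for this model class.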

Relevance: 100.00%

Abstract:

X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
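A single-slice channelized Hotelling observer, the building block that the msCHO extends to multiple slices, can be sketched as follows (a generic illustration; the channel matrix and function names are ours, and anthropomorphic channels and internal noise are omitted):

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer (single slice): project images
    onto a few channels, build the Hotelling template from the
    channel-space mean difference and average covariance, and report
    the detectability index d'. `channels` is (n_pixels, n_channels);
    image arrays are (n_images, n_pixels)."""
    vs = signal_imgs @ channels          # channel responses, signal present
    vn = noise_imgs @ channels           # channel responses, signal absent
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, dmean)        # Hotelling template
    ts, tn = vs @ w, vn @ w              # decision variables
    return abs(ts.mean() - tn.mean()) / np.sqrt(
        0.5 * (ts.var(ddof=1) + tn.var(ddof=1)))
```

In a task-based protocol evaluation, d' (or the AUC derived from it) is computed per dose level and reconstruction algorithm to locate the lowest dose that still meets the detection requirement.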