47 results for Face processing research
Abstract:
Natural ecosystems are increasingly exposed to multiple anthropogenic stressors, including land-use change, deforestation, agricultural intensification, and urbanisation, all of which have led to widespread habitat fragmentation that is likely to be amplified further by predicted climate change. The potential interactive effects of these different stressors cannot be determined by studying each in isolation, yet such synergies have been largely ignored in ecological field studies to date. Here, we use a model system of naturally fragmented islands in a braided river network, which is exposed to periodic inundation, to investigate the interactive effects of habitat isolation and flood disturbance. Food web structure was similar across the islands during periods of hydrological stability, but several key properties were altered in the aftermath of flood disturbance, depending on the distance of the islands from the regional source pool of species: taxon richness and mean food chain length declined with habitat isolation after flooding, while the proportion of basal species increased. Greater species turnover through time reflected the slower process of re-colonisation on the more distant islands following disturbance. Increased variability of several food web properties over a 1-year period highlighted the reduced temporal stability of isolated habitat fragments. Many of these effects reflected the differential success of predator and prey species at re-colonising the islands: although larger, more mobile consumers may reach the more distant islands first, they cannot establish populations until the lower trophic levels have successfully reassembled. These results highlight the susceptibility of fragmented ecosystems to environmental perturbations. © 2013 Elsevier Ltd.
Abstract:
An overview of research on reconfigurable architectures for network processing applications within the Institute of Electronics, Communications and Information Technology (ECIT) is presented. Three key network processing topics, namely node throughput, Quality of Service (QoS) and security, are examined, where custom reconfigurability allows network nodes to adapt to fluctuating network traffic and customer demands. Various architectural possibilities have been investigated in order to explore the options and tradeoffs available when using reconfigurability for packet/frame processing, packet scheduling and data encryption/decryption. This research has shown that there is no common approach that can be applied. Rather, the methodologies used and the cost-benefit of incorporating reconfigurability depend on the function considered; reconfigurability is, for example, well suited to encryption/decryption but not to packet/frame processing. © 2005 IEEE.
Abstract:
Performance evaluation of parallel software and architectural exploration of innovative hardware support face a common challenge with emerging manycore platforms: they are limited by the slow running time and the low accuracy of software simulators. Manycore FPGA prototypes are difficult to build, but they offer great rewards. Software running on such prototypes runs orders of magnitude faster than on current simulators. Moreover, researchers gain significant architectural insight during the modeling process. We use the Formic FPGA prototyping board [1], which specifically targets scalable and cost-efficient multi-board prototyping, to build and test a 64-board model of a 512-core, MicroBlaze-based, non-coherent hardware prototype with a full network-on-chip in a 3D-mesh topology. We expand the hardware architecture to include the ARM Versatile Express platforms and build a 520-core heterogeneous prototype of 8 Cortex-A9 cores and 512 MicroBlaze cores. We then develop an MPI library for the prototype and evaluate it extensively using several bare-metal and MPI benchmarks. We find that our processor prototype is highly scalable, faithfully models single-chip multicore architectures, and is a very efficient platform for parallel programming research, being 50,000 times faster than software simulation.
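As an illustration of the kind of MPI micro-benchmark such an evaluation typically includes, the sketch below is a two-rank ping-pong latency test. The abstract does not name the benchmarks used, and this sketch assumes a hosted Python/mpi4py environment rather than the bare-metal MicroBlaze one, so it is illustrative only.

```python
# Minimal MPI ping-pong latency sketch (assumes mpi4py; not the study's benchmark suite).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = np.zeros(1024, dtype=np.uint8)  # 1 KiB payload, illustrative size
iters = 1000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    # One iteration is a full round trip, so half the per-iteration time is the one-way latency.
    print(f"avg one-way latency: {(t1 - t0) / (2 * iters) * 1e6:.2f} us")
```

Run with exactly two ranks, e.g. `mpirun -n 2 python pingpong.py`.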
Abstract:
Aim: To determine whether the use of an online or blended learning paradigm has the potential to enhance the teaching of clinical skills in undergraduate nursing.
Background: The need to adequately support and develop students' clinical skills is arguably more important than previously recognised, due to reductions in practice opportunities. Online and blended teaching methods are being developed to try to meet this requirement, but knowledge about their effectiveness in teaching clinical skills is limited.
Design: Mixed methods systematic review, which follows the Joanna Briggs Institute User guide version 5.
Data Sources: Computerized searches of five databases were undertaken for the period 1995-August 2013.
Review Methods: Critical appraisal and data extraction were undertaken using Joanna Briggs Institute tools for experimental/observational studies and interpretative and critical research. A narrative synthesis was used to report results.
Results: Nineteen published papers were identified. Seventeen papers reported on online approaches and only two papers reported on a blended approach. The synthesis of findings focused on the following four areas: performance/clinical skill, knowledge, self-efficacy/clinical confidence and user experience/satisfaction. The e-learning interventions used varied throughout all the studies.
Conclusion: The available evidence suggests that online learning for teaching clinical skills is no less effective than traditional means. This review also highlights the lack of evidence on implementing a blended learning approach to teaching clinical skills in undergraduate nurse education. Further research is required to assess the effectiveness of this teaching methodology.
Abstract:
‘Temporally urgent’ reactions are extremely rapid, spatially precise movements evoked by discrete stimuli. The involvement of primary motor cortex (M1) in such reactions, and its relationship to stimulus intensity, is not well understood. Continuous theta burst stimulation (cTBS) suppresses focal regions of the cortex and can be used to assess the involvement of motor cortex in speed of processing. The primary objective of this study was to explore the involvement of M1 in speed of processing with respect to stimulus intensity. Thirteen healthy young adults participated in this experiment. Behavioral testing consisted of a simple button press using the index finger following median nerve stimulation of the opposite limb, at either high or low stimulus intensity. Reaction time was measured by the onset of electromyographic activity from the first dorsal interosseous (FDI) muscle of each limb. Participants completed a 30 min bout of behavioral testing prior to, and 15 min following, the delivery of cTBS to the motor cortical representation of the right FDI. The effect of cTBS on motor cortex was measured by recording the average of 30 motor evoked potentials (MEPs) just prior to, and 5 min following, cTBS. Paired t-tests revealed that, of the thirteen participants, five demonstrated a significant attenuation, three a significant facilitation, and five no significant change in MEP amplitude following cTBS. In the group that demonstrated attenuated MEPs, there was a biologically significant interaction between stimulus intensity and the effect of cTBS on reaction time and the amplitude of muscle activation. This study demonstrates the variability of potential outcomes associated with the use of cTBS; further study of the mechanisms that underpin the methodology is required. Importantly, changes in motor cortical excitability may be an important determinant of speed of processing following high-intensity stimulation.
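As a hedged illustration of the participant-level analysis described above (a paired comparison of MEP amplitudes recorded just before and 5 min after cTBS), the sketch below uses simulated amplitudes, not study data.

```python
# Paired t-test on pre/post-cTBS MEP amplitudes for one participant.
# Values are simulated for illustration only; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mep_pre = rng.normal(1.0, 0.25, size=30)   # 30 MEP amplitudes (mV) just before cTBS
mep_post = rng.normal(0.8, 0.25, size=30)  # 30 MEP amplitudes 5 min after cTBS (simulated attenuation)

t_stat, p_value = stats.ttest_rel(mep_pre, mep_post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant decrease in mean amplitude would classify this participant
# as showing MEP attenuation following cTBS.
```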
Abstract:
Melt viscosity is one of the main factors affecting product quality in extrusion processes, particularly with regard to recycled polymers. However, due to the wide variability in the physical properties of recycled feedstock, it is difficult to maintain the melt viscosity during extrusion of polymer blends and obtain good quality product without generating scrap. This research investigates the application of ultrasound and temperature control in an automatic extruder controller, which has the ability to maintain constant melt viscosity from variable recycled polymer feedstock during extrusion processing. An ultrasonic modulation system has been developed and fitted to the extruder prior to the die to convey ultrasonic energy from a high-power ultrasonic generator to the polymer melt. Two separate control loops have been developed to run simultaneously in one controller: the first loop controls the ultrasonic energy or temperature to maintain constant die pressure, while the second loop controls extruder screw speed to maintain constant throughput at the extruder die. The time response and energy consumption of the control methods in real-time experiments are also investigated and reported in this paper.
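A minimal sketch of the two-loop arrangement described above: one loop trims ultrasonic power (or temperature) against a die-pressure setpoint while the other trims screw speed against a throughput setpoint. The PI structure, gains, setpoints and sensor readings are illustrative assumptions, not values from the study.

```python
# Two PI loops running in the same controller tick (illustrative gains and setpoints).
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

pressure_loop = PI(kp=0.8, ki=0.05, dt=0.1)    # output trims ultrasonic power / temperature
throughput_loop = PI(kp=1.2, ki=0.10, dt=0.1)  # output trims screw speed

def control_step(die_pressure, throughput):
    """One controller cycle: both loops are evaluated together, as in the single-controller scheme."""
    ultrasonic_cmd = pressure_loop.update(setpoint=150.0, measurement=die_pressure)   # bar, assumed
    screw_speed_cmd = throughput_loop.update(setpoint=25.0, measurement=throughput)   # kg/h, assumed
    return ultrasonic_cmd, screw_speed_cmd

print(control_step(die_pressure=146.0, throughput=24.2))
```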
Abstract:
Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole-slide scanning, advances in software and computer processing capacity, and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed, including nuclear morphometry and tissue architecture analysis, with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline, including biobanking, molecular pathology, tissue microarray analysis and molecular profiling of tissue, and these developments are reviewed. Underpinning all of these developments is the need for high-quality tissue samples, and the impact of pre-analytical variables on tissue research is discussed. This discussion is combined with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine.
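A minimal sketch of one routine immunohistochemistry analysis step of the kind reviewed here: colour deconvolution of an RGB tile to isolate the DAB (biomarker) channel and estimate the positively stained area fraction. The library choice (scikit-image), file name and threshold are assumptions for illustration, not details from the review.

```python
# DAB-positive area fraction from an RGB tile via colour deconvolution (illustrative only).
from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("ihc_tile.png")[..., :3]  # assumed tile exported from a whole-slide image
hed = rgb2hed(rgb)                        # haematoxylin / eosin / DAB channels
dab = hed[..., 2]

dab_mask = dab > 0.02                     # illustrative DAB-positive threshold
positive_fraction = dab_mask.mean()
print(f"DAB-positive area fraction: {positive_fraction:.1%}")
```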
Abstract:
Background: Men can be hard to reach with face-to-face health-related information, while research increasingly shows that they seek health information from online sources. Recognizing this trend, there is merit in developing innovative online knowledge translation (KT) strategies capable of translating research on men’s health into engaging health promotion materials. While the concept of KT has become a new mantra for researchers wishing to bridge the gap between research evidence and improved health outcomes, little is written about the process, necessary skills, and best practices by which researchers can develop online KT.
Objective: Our aim was to illustrate some of the processes and challenges involved in, and potential value of, developing research knowledge online to promote men’s health.
Methods: We present experiences of KT across two case studies of men’s health. First, we describe a study that uses interactive Web apps to translate knowledge relating to Canadian men’s depression. Through a range of mechanisms, study findings were repackaged with the explicit aim of raising awareness and reducing the stigma associated with men’s depression and/or help-seeking. Second, we describe an educational resource for teenage men about unintended pregnancy, developed for delivery in the formal Relationship and Sexuality Education school curricula of Ireland, Northern Ireland (United Kingdom), and South Australia. The intervention is based around a Web-based interactive film drama entitled “If I Were Jack”.
Results: For each case study, we describe the KT process and strategies that aided the development of credible and well-received online content focused on men’s health promotion. In both case studies, the original research generated the inspiration for the interactive online content, and the core development strategy was working with a multidisciplinary team to develop this material through arts-based approaches. In both cases, too, there is an acknowledgment of the need for gender- and culturally-sensitive information. Both aimed to engage men by disrupting stereotypes about men, while simultaneously addressing men through authentic voices and faces. Finally, in both case studies we draw attention to the need to think beyond simply placing content online and to plan delivery to target audiences from the outset.
Conclusions: The case studies highlight some of the new skills required by academics in the emerging paradigm of translational research and contribute to the nascent literature on KT. Our approach to online KT was to go beyond dissemination and diffusion and to actively repackage research knowledge, through arts-based approaches (videos and film scripts), into health promotion tools with optimal appeal to target male audiences. Our findings highlight the importance of a multidisciplinary team to inform the design of content; of adaptation to context, both in terms of the national implementation context and gender-specific needs; and of an integrated implementation and evaluation framework in all KT work.
Abstract:
Graphene, due to its outstanding properties, has become the topic of much research activity in recent years. Much of that work has been on a laboratory scale; however, if we are to introduce graphene into real product applications, it is necessary to examine how the material behaves under industrial processing conditions. In this paper the melt processing of polyamide 6/graphene nanoplatelet composites via twin-screw extrusion is investigated and structure–property relationships are examined for mechanical and electrical properties. Graphene nanoplatelets (GNPs) with two aspect ratios (700 and 1000) were used in order to examine the influence of particle dimensions on composite properties. It was found that the introduction of GNPs had a nucleating effect on polyamide 6 (PA6) crystallization and substantially increased crystallinity, by up to 120% for a 20% loading in PA6. A small increase in crystallinity was observed when extruder screw speed increased from 50 rpm to 200 rpm, which could be attributed to better dispersion and more nucleation sites for crystallization. A maximum enhancement of 412% in Young's modulus was achieved at 20 wt% loading of GNPs. This is the highest reported enhancement in modulus achieved to date for a melt-mixed thermoplastic/GNP composite. A further result of importance here is that the modulus continued to increase as the loading of GNPs increased, even at 20 wt% loading, and results are in excellent agreement with theoretical predictions for modulus enhancement. Electrical percolation was achieved between 10 and 15 wt% loading for both aspect ratios of GNPs, with an increase in conductivity of approximately six orders of magnitude compared to the unfilled PA6.
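The abstract refers to theoretical predictions for modulus enhancement without naming the model; a Halpin-Tsai-type estimate is a common choice for platelet-filled composites, so the sketch below uses it as an assumed stand-in. The moduli and volume fractions are illustrative, and perfect platelet alignment and exfoliation are assumed, making this an upper-bound-style estimate rather than the study's calculation.

```python
# Halpin-Tsai-type longitudinal modulus estimate for aligned platelet fillers (assumed model).
def halpin_tsai(E_m, E_f, aspect_ratio, v_f):
    """Composite modulus from matrix modulus E_m, filler modulus E_f, and filler volume fraction v_f."""
    zeta = 2.0 * aspect_ratio                          # shape factor for aligned platelets
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

E_pa6 = 2.7      # GPa, typical unfilled PA6 (assumed, not the study's value)
E_gnp = 1000.0   # GPa, often-quoted in-plane graphene modulus (assumed)

# Volume fractions are illustrative; weight fractions would need converting via densities.
for v_f in (0.01, 0.02, 0.05):
    E_c = halpin_tsai(E_pa6, E_gnp, aspect_ratio=700, v_f=v_f)
    print(f"v_f = {v_f:.2f}: predicted E = {E_c:.1f} GPa ({(E_c / E_pa6 - 1) * 100:.0f}% increase)")
```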
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
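The abstract does not describe the new representation or the modified cosine similarity, so the sketch below is a generic stand-in: per-modality L2 normalisation before concatenation, compared with a standard cosine similarity. It illustrates why normalisation matters when feature sizes differ greatly across modalities, but it is not the paper's method.

```python
# Generic bimodal comparison sketch (standard cosine similarity; not the paper's modified version).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse(speech_feat, face_feat):
    """L2-normalise each modality before concatenation so the larger feature vector does not dominate."""
    s = speech_feat / np.linalg.norm(speech_feat)
    f = face_feat / np.linalg.norm(face_feat)
    return np.concatenate([s, f])

# Toy enrolment and test vectors with very different sizes per modality (illustrative dimensions).
rng = np.random.default_rng(1)
enrol = fuse(rng.normal(size=600), rng.normal(size=100))
test = fuse(rng.normal(size=600), rng.normal(size=100))
print(f"bimodal similarity: {cosine(enrol, test):.3f}")
```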
Abstract:
This piece of writing is an excerpt from a keynote talk given at the Symposium on Artistic Research in Borås, Sweden, on 28 November 2014.
Abstract:
The increasing adoption of cloud computing, social networking, mobile and big data technologies provides challenges and opportunities for both research and practice. Researchers face a deluge of data generated by social network platforms, which is further exacerbated by the co-mingling of those platforms with the emerging Internet of Everything. While the topicality of big data and social media increases, the literature lacks conceptual tools to help researchers in diverse subject-matter domains, many of whom are from nontechnical disciplines, approach, structure and codify knowledge from social media big data. Researchers do not have a general-purpose scaffold to make sense of the data and the complex web of relationships between entities, social networks, social platforms and other third-party databases, systems and objects. This is further complicated when spatio-temporal data is introduced. Based on practical experience of working with social media datasets and on the existing literature, we propose a general research framework for social media research using big data. Such a framework assists researchers in placing their contributions in an overall context, focusing their research efforts and building the body of knowledge in a given discipline area using social media data in a consistent and coherent manner.