855 results for multimodal terminals
A multimodal perspective on the composition of cortical oscillations: Frontiers in Human Neuroscience
Abstract:
An expanding corpus of research details the relationship between functional magnetic resonance imaging (fMRI) measures and neuronal network oscillations. Typically, integrated electroencephalography (EEG) and fMRI, or parallel magnetoencephalography (MEG) and fMRI, are used to draw inference about the consanguinity of BOLD and electrical measurements. However, there is a relative dearth of information about the relationship between E/MEG and the focal networks from which these signals emanate. Consequently, the genesis and composition of E/MEG oscillations require further clarification. Here we aim to contribute to understanding through a series of parallel measurements of primary motor cortex (M1) oscillations, using human MEG and in-vitro rodent local field potentials. We compare spontaneous activity in the ~10 Hz mu and 15-30 Hz beta frequency ranges and compare MEG signals with independent and integrated layer III and V (LIII/LV) in-vitro recordings. We explore the mechanisms of oscillatory generation using specific pharmacological modulation with the GABA-A alpha-1 subunit modulator zolpidem. Finally, to determine the contribution of cortico-cortical connectivity, we recorded in-vitro M1 during an incision to sever lateral connections between M1 and S1 cortices. We demonstrate that the frequency distribution of MEG signals has closer statistical similarity to signals from integrated rather than independent LIII/LV laminae. GABAergic modulation in both modalities elicited comparable changes in the power of the beta band. Finally, cortico-cortical connectivity in sensorimotor cortex (SMC) appears to directly influence the power of the mu rhythm in LIII. These findings suggest that the MEG signal is an amalgam of outputs from LIII and LV, that multiple frequencies can arise from the same cortical area, and that in-vitro and MEG M1 oscillations are driven by comparable mechanisms. Finally, cortico-cortical connectivity is reflected in the power of the SMC mu rhythm. © 2013 Ronnqvist, Mcallister, Woodhall, Stanford and Hall.
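The abstract above compares spectral content in the ~10 Hz mu and 15-30 Hz beta bands across MEG and in-vitro recordings. As a rough illustration only (the original analysis pipeline is not described here), the sketch below shows one common way to estimate band power from a single recorded channel using Welch's method; the sampling rate and the synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch PSD of `signal` between f_lo and f_hi (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)   # 2 s windows
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

# Synthetic example: 60 s of noise with an embedded ~10 Hz (mu) component
fs = 1000                                    # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

mu_power = band_power(signal, fs, 8, 12)     # ~10 Hz mu band
beta_power = band_power(signal, fs, 15, 30)  # 15-30 Hz beta band
print(f"mu: {mu_power:.3f}, beta: {beta_power:.3f}")
```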
Abstract:
Generation of stable dual and/or multiple longitudinal modes emitted from a single quantum dot (QD) laser diode (LD) over a broad wavelength range by using volume Bragg gratings (VBGs) in an external cavity setup is reported. The LD operates in both the ground and excited states, and the gratings give a dual-mode separation of 5 nm around each emission peak, which is suitable as a continuous-wave (CW) optical pump signal for a terahertz (THz) photomixer device. The setup also generates dual modes around both 1180 nm and 1260 nm simultaneously, giving four simultaneous narrow-linewidth modes comprising two simultaneous difference-frequency pump signals. (C) 2011 American Institute of Physics.
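As a quick back-of-envelope check of why a 5 nm dual-mode separation suits THz photomixing, the difference frequency implied by a wavelength separation Δλ at centre wavelength λ is Δν ≈ cΔλ/λ². The snippet below evaluates this at the two emission peaks quoted in the abstract; it is an illustrative calculation, not taken from the paper.

```python
# Difference frequency implied by a 5 nm mode separation at each emission peak.
c = 2.998e8          # speed of light, m/s

def diff_freq_thz(center_nm, sep_nm):
    """Delta_nu = c * Delta_lambda / lambda^2, returned in THz."""
    lam = center_nm * 1e-9
    dlam = sep_nm * 1e-9
    return c * dlam / lam**2 / 1e12

for center in (1180, 1260):                  # nm, ground/excited-state peaks
    print(f"{center} nm: {diff_freq_thz(center, 5):.2f} THz")
# -> roughly 1.08 THz at 1180 nm and 0.94 THz at 1260 nm
```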
Abstract:
Mobile technology has not yet achieved widespread acceptance in the Architectural, Engineering, and Construction (AEC) industry. This paper presents work that is part of an ongoing research project focusing on the development of multimodal mobile applications for use in the AEC industry. This paper focuses specifically on a context-relevant lab-based evaluation of two input modalities – stylus and soft-keyboard v. speech-based input – for use with a mobile data collection application for concrete test technicians. The manner in which the evaluation was conducted as well as the results obtained are discussed in detail.
Abstract:
Mobile technologies have yet to be widely adopted by the Architectural, Engineering, and Construction (AEC) industry despite being one of the major growth areas in computing in recent years. This lack of uptake in the AEC industry is likely due, in large part, to the combination of small screen size and inappropriate interaction demands of current mobile technologies. This paper discusses the scope for multimodal interaction design, with a specific focus on speech-based interaction, to enhance the suitability of mobile technology use within the AEC industry by broadening the field data input capabilities of such technologies. To investigate the appropriateness of using multimodal technology for field data collection in the AEC industry, we have developed a prototype Multimodal Field Data Entry (MFDE) application. This application, which allows concrete testing technicians to record quality control data in the field, has been designed to support two different modalities of data input: speech-based data entry and stylus-based data entry. To compare the effectiveness of, usability of, and user preference for the different input options, we have designed a comprehensive lab-based evaluation of the application. To appropriately reflect the anticipated context of use within the study design, careful consideration had to be given to the key elements of a construction site that would potentially influence a test technician's ability to use the input techniques. These considerations and the resultant evaluation design are discussed in detail in this paper.
Abstract:
Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.
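As an illustration of how a head gesture might drive selection in a radial pie menu of the kind described above, the sketch below maps a head-yaw angle onto one of N audio-presented segments, with a downward nod as confirmation. The segment count, thresholds and gesture vocabulary are assumptions for illustration, not the authors' implementation.

```python
def select_pie_item(yaw_deg, pitch_deg, n_items=8, nod_threshold_deg=20):
    """Map a head orientation onto one of n_items radial menu segments.

    yaw_deg:   horizontal head rotation, 0 = straight ahead, positive = right
    pitch_deg: vertical head rotation, used here as a simple 'nod to confirm'
    Returns the selected item index, or None if the nod gesture is absent.
    (Illustrative sketch only; thresholds and layout are assumptions.)
    """
    segment = int(((yaw_deg % 360) / 360) * n_items) % n_items
    confirmed = pitch_deg <= -nod_threshold_deg   # nod down to confirm
    return segment if confirmed else None

print(select_pie_item(yaw_deg=50, pitch_deg=-25))  # -> segment 1 of 8
```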
Abstract:
In this paper we take seriously the call for strategy-as-practice research to address the material, spatial and bodily aspects of strategic work. Drawing on a video-ethnographic study of strategic episodes in a financial trading context, we develop a conceptual framework that elaborates on strategic work as socially accomplished within particular spaces that are constructed through different orchestrations of material, bodily and discursive resources. Building on the findings, our study identifies three types of strategic work - private work, collaborative work and negotiating work - that are accomplished within three distinct spaces that are constructed through multimodal constellations of semiotic resources. We show that these spaces, and the activities performed within them, are continuously shifting in ways that enable and constrain the particular outcomes of a strategic episode. Our framework contributes to the strategy-as-practice literature by identifying the importance of spaces in conducting strategic work and providing insight into the way that these spaces are constructed.
Abstract:
The results of research into an intelligent multimodal man-machine interface and virtual reality tools for assistive medical systems, including computers and mechatronic systems (robots), are discussed. Gesture translation for people with disabilities, learning-by-showing technology, and a virtual operating room with 3D visualization are presented in this report; they were announced at the international exhibition "Intelligent and Adaptive Robots–2005".
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
Due to copyright restrictions, this thesis is only available for consultation at Aston University Library and Information Services with prior arrangement. The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by "Pallet Networks" operating on a hub-and-spoke philosophy. Current literature relating to LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations, each consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, there is a shortcoming in the overall analysis when it comes to the "spoke-terminal" of hub-and-spoke freight distribution systems and its capability to handle the diverse and discrete customer profiles of freight that multi-user LTL hub-and-spoke networks typically handle over the "last mile" of the delivery, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by 'profile type' (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to improve customer delivery requirements and adapt hub-deployed policies. The study also leverages key operator experiences to highlight the main practical implementation challenges when integrating the observed simulation results into the real world. The study concludes that DES can be harnessed as an enabling device to develop a 'guide policy'. This policy needs to be flexible and should be applied in stages, taking into account the sector's growing retail exposure.
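The policy examined above (separating spoke-terminal delivery tours by customer profile) lends itself to a very small discrete-event simulation sketch. The example below uses SimPy purely to illustrate the modelling style; the fleet size, stop counts and service times are invented placeholders, not figures from the thesis.

```python
import random
import simpy

random.seed(1)

def delivery_tour(env, name, vehicles, stops, service_time):
    """One delivery tour: wait for a vehicle, then serve each stop in turn."""
    with vehicles.request() as req:
        yield req
        start = env.now
        for _ in range(stops):
            yield env.timeout(random.expovariate(1.0 / service_time))
        print(f"{name}: {stops} stops, tour time {env.now - start:.1f} min")

env = simpy.Environment()
vehicles = simpy.Resource(env, capacity=2)   # assumed spoke-terminal fleet

# Tours separated by profile type, per the policy examined in the thesis;
# stop counts and mean service times (minutes) are illustrative assumptions.
env.process(delivery_tour(env, "retail tour", vehicles, stops=12, service_time=18))
env.process(delivery_tour(env, "non-retail tour", vehicles, stops=7, service_time=25))
env.run(until=24 * 60)                       # one working day, in minutes
```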
Abstract:
This dissertation introduces the design of a multimodal, adaptive, real-time assistive system as an alternative human-computer interface that can be used by individuals with severe motor disabilities. The proposed design is based on the integration of a remote eye-gaze tracking system, voice recognition software, and a virtual keyboard. The methodology relies on a user profile that customizes eye-gaze tracking using neural networks. The user profiling feature facilitates the notion of universal access to computing resources for a wide range of applications such as web browsing, email, word processing and editing. The study is significant in terms of the integration of key algorithms to yield an adaptable and multimodal interface. The contributions of this dissertation stem from the following accomplishments: (a) establishment of the data transport mechanism between the eye-gaze system and the host computer, yielding a significantly low failure rate of 0.9%; (b) accurate translation of eye data into cursor movement through congregate steps which conclude with calibrated cursor coordinates using an improved conversion function, resulting in an average reduction of 70% in the disparity between the point of gaze and the actual position of the mouse cursor, compared with initial findings; (c) use of both a moving average and a trained neural network to minimize the jitter of the mouse cursor, which yields an average jitter reduction of 35%; (d) introduction of a new mathematical methodology to measure the degree of jitter of the mouse trajectory; (e) embedding an on-screen keyboard to facilitate text entry, and a graphical interface that is used to generate user profiles for system adaptability. The adaptive nature of the interface is achieved through the establishment of user profiles, which may contain the jitter and voice characteristics of a particular user as well as a customized list of the most commonly used words, ordered according to the user's preferences: in alphabetical or statistical order. This allows the system to successfully provide the capability of interacting with a computer. Every time any of the sub-systems is retrained, the accuracy of the interface response improves further.
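Contribution (c) above combines a moving average with a trained neural network to damp cursor jitter. The sketch below illustrates only the moving-average stage over recent gaze samples; the window size and sample values are assumptions, and the neural-network stage is omitted.

```python
from collections import deque

class GazeSmoother:
    """Moving-average filter over recent gaze samples to damp cursor jitter.

    A minimal sketch of the moving-average stage mentioned in the abstract;
    the window size is an assumption and the neural-network stage is omitted.
    """
    def __init__(self, window=5):
        self.xs = deque(maxlen=window)
        self.ys = deque(maxlen=window)

    def update(self, x, y):
        self.xs.append(x)
        self.ys.append(y)
        return sum(self.xs) / len(self.xs), sum(self.ys) / len(self.ys)

smoother = GazeSmoother(window=5)
for raw in [(100, 200), (104, 198), (97, 203), (101, 199)]:  # noisy gaze samples
    print(smoother.update(*raw))                             # smoothed cursor position
```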
Abstract:
Given the multiplicity of languages and media present in contemporary texts, work with digital genres is essential for the teaching of reading and writing. Virtual media is already present in many daily activities that require the use of language, which shows that the globalized world brings new demands of literacy and varied reading practices. From this perspective, we propose to work on the multiliteracies present in these new texts on the basis of Bakhtinian enunciative-discursive assumptions. For this, we chose as the object of research/intervention the multimodal discursive genre of horror flash fiction, because it is a multisemiotic digital utterance of virtual circulation. In this context, this study aimed to understand how the teaching of this genre can contribute to the development of knowledge related to reading and text production, required by multiliteracies, by carrying out a Didactic Sequence in the classroom, specifically in two elementary school classes, the 7th and 8th grades of a public school. The research was based on the theory of Bakhtin and the Circle (2009, 2011) on genre in a dialogic perspective and on the proposal of Dolz and Schneuwly (2004) for teaching texts through Didactic Sequences. We also draw on the precepts of multiliteracies discussed in Rojo (2012, 2013). The methodology used was based on a qualitative approach. We analyse the flash fiction pieces produced by the students, considering the hybridism characteristic of multiliteracies, discursive characteristics such as composition, style and thematic content, and the relations of dialogicity present in these utterances. At the end of the study, we found that our intervention contributed to expanding the knowledge of the subjects involved related to reading and multimodal genre production.
Abstract:
Peer reviewed
Abstract:
Through a fine-grained reading of a London-French blog, this article aims to shed light on the lived experience of the French community in London. The ethnosemiotic conceptual framework brings together ethnographic and semiotic schools of thought, focusing in particular on Pierre Bourdieu’s concept of habitus and Gunther Kress’s multimodal social semiotic analytical model. Habitus is broken down into its material manifestations of habitat, habit and habituation, all displayed in the blog and revealing of the blogger’s identity and positioning within the migration setting. As all modes are considered to be of equal semiotic potential, equivalent emphasis is placed on the multiple modes of meaning-making present in the blog, such as layout, colour, typography and language. By examining the dynamic relationships between blogger and audience, subjectivity and objectivity, on-line and on-land habitus, and intermodal dynamics themselves, through the prism of multimodality, hidden facets of the blogger’s cultural identity and sense of community belonging within the diasporic context begin to materialise.
Abstract:
Invited Plenary Speaker
Abstract:
In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results on datasets that span the range from high-resolution human-robot interaction data (close-up faces plus depth information) to challenging low-resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment. Using this probabilistic model, we show that many higher-level scene-understanding tasks, such as human-human/scene interaction detection, can be achieved. Our solution runs in real-time on commercial hardware.
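As a hedged illustration of the classification-plus-regression scheme described above, the sketch below defines a small CNN backbone with a coarse gaze-direction classifier and a pose regressor sharing the same features, using the classifier's softmax confidence as an approximate confidence for the regressed angles. The architecture, input size and 8-way direction discretisation are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class HeadPoseNet(nn.Module):
    """Shared CNN backbone with a gaze-direction classifier and a pose
    regressor fine-tuned on the same features; an illustrative sketch only,
    with layer sizes and the 8-way discretisation chosen arbitrarily."""
    def __init__(self, n_directions=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # RGB-D input
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_directions)   # coarse gazing direction
        self.regressor = nn.Linear(64, 3)               # yaw, pitch, roll

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.classifier(feats)
        angles = self.regressor(feats)
        # One possible combination: use the classifier's softmax confidence
        # as an approximate confidence for the regressed angles.
        confidence = torch.softmax(logits, dim=1).max(dim=1).values
        return logits, angles, confidence

model = HeadPoseNet()
rgbd = torch.randn(2, 4, 64, 64)            # low-resolution RGB-D crops
logits, angles, conf = model(rgbd)
print(angles.shape, conf.shape)             # torch.Size([2, 3]) torch.Size([2])
```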