Abstract:
A computer simulation study describing the electrophoretic separation and migration of methadone enantiomers in the presence of free and immobilized (2-hydroxypropyl)-β-CD is presented. The 1:1 interaction of methadone with the neutral CD was simulated using experimentally determined mobilities and complexation constants for the complexes in a low-pH BGE comprising phosphoric acid and KOH. Using the measured complex mobilities represents free-solution conditions, with the chiral selector acting as a buffer additive, whereas setting the complex mobilities to zero mimics migration and separation with an immobilized chiral selector, that is, CEC conditions in the absence of nonspecific interaction between the analytes and the chiral stationary phase. The simulation data reveal that, compared with free-solution conditions, separations in CEC with on-column detection are quicker, electrophoretic displacement rates are reduced, and sensitivity is enhanced. Simulation is used to study electrophoretic analyte behavior at the interface between the sample and the CEC column containing the chiral selector (analyte stacking) and at the rear end, where analytes leave the complexation environment (analyte destacking). The latter aspect is relevant for off-column analyte detection in CEC and is described here for the first time via the dynamics of the migrating analyte zones. Simulation provides insight into how analyte dilution at the column end can be counteracted by using a BGE of higher conductivity. Furthermore, the impact of EOF on analyte migration, separation, and detection is simulated for configurations in which the selector zone is displaced by, or remains immobilized under, buffer flow. In all cases, the data reveal that detection should occur within or immediately after the selector zone.
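For context, the standard 1:1 complexation model that such simulations build on (a textbook relation, not quoted from this abstract) gives the effective mobility of each enantiomer as a function of the selector concentration $[C]$:

$$\mu_{\mathrm{eff}} = \frac{\mu_{\mathrm{free}} + \mu_{\mathrm{cplx}}\,K\,[C]}{1 + K\,[C]}$$

where $K$ is the complexation constant and $\mu_{\mathrm{cplx}}$ the mobility of the analyte-selector complex. Setting $\mu_{\mathrm{cplx}} = 0$, as in the immobilized-selector case above, leaves only the free fraction migrating electrophoretically, which is what produces the CEC-like behavior.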
Cerebellar mechanisms for motor learning: Testing predictions from a large-scale computer simulation
Abstract:
The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning: Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and on the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum and, further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning: response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer resistance to extinction on the mf → nuc synapses. Thus, while extinction processes erase plasticity at the first site, residual plasticity at the mf → nuc synapses remains. This residual plasticity gives the cerebellum the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.
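As a toy illustration of the gating logic described above (all rates, thresholds, and update rules are hypothetical simplifications, not the published large-scale model), consider:

```python
# Toy two-site plasticity model: Purkinje (Pkj) activity gates plasticity
# at the mossy fiber -> deep nuclei (mf->nuc) site. Rates and thresholds
# are illustrative assumptions only.

PKJ_BASELINE = 1.0   # tonic Purkinje output with naive gr->Pkj weights

def trial(w_cortex, w_nuc, paired, lr_c=0.2, lr_n=0.1):
    """One trial; paired=True means CS paired with US (acquisition),
    paired=False means CS alone (extinction)."""
    pkj = PKJ_BASELINE * w_cortex            # Pkj drive scales with gr->Pkj weight
    if paired:
        w_cortex -= lr_c * w_cortex          # LTD at the cortical (gr->Pkj) site
    else:
        w_cortex += lr_c * (1.0 - w_cortex)  # extinction reverses cortical LTD
    # mf->nuc plasticity grows only while Purkinje output is suppressed,
    # and is protected (neither grows nor decays) once Pkj recovers.
    if pkj < 0.5:
        w_nuc += lr_n * (1.0 - w_nuc)
    return w_cortex, w_nuc

w_c, w_n = 1.0, 0.0
for _ in range(30):                          # acquisition
    w_c, w_n = trial(w_c, w_n, paired=True)
for _ in range(30):                          # extinction
    w_c, w_n = trial(w_c, w_n, paired=False)
print(f"after extinction: cortical={w_c:.2f}, nuclear={w_n:.2f}")
```

After extinction the cortical weight has recovered while the nuclear weight retains most of its learned value, which is the residual plasticity invoked above to explain rapid reacquisition.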
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information on diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Likewise, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations (two fibers crossing at 90° and at 45°, and a third phantom with three fibers crossing at 60°) were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing regions in the 90°, 45° and 60° phantoms resulted in successful detection of angular information with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6° and 60.03° ± 2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking in known crossing-fiber regions of normal human subjects were shown, and an in-house MATLAB software package that streamlines DSI data reconstruction and post-processing, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validating the reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
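As an illustration of the angular quantification reported above (the peak vectors here are made-up examples; peak extraction from the ODF is assumed already done):

```python
# Given two detected ODF peak directions, compute the crossing angle and
# compare it with the phantom's ground-truth geometry.
import numpy as np

def crossing_angle(u, v):
    """Angle in degrees between two peak directions, ignoring sign
    (diffusion ODF peaks are antipodally symmetric)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0)))

peak1 = np.array([1.0, 0.02, 0.0])   # hypothetical detected peaks
peak2 = np.array([0.03, 1.0, 0.0])
angle = crossing_angle(peak1, peak2)
print(f"measured {angle:.2f} deg vs. 90 deg ground truth, "
      f"error {abs(angle - 90):.2f} deg")
```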
Abstract:
The association between increases in cerebral glucose metabolism and the development of acidosis is largely inferential, based on reports linking hyperglycemia with poor neurological outcome, lactate accumulation, and the severity of acidosis. We measured local cerebral metabolic rate for glucose (lCMRglc) and an index of brain pH, the acid-base index (ABI), concurrently, and characterized their interaction in a model of focal cerebral ischemia in rats in a double-label autoradiographic study using [$^{14}$C]2-deoxyglucose and [$^{14}$C]dimethyloxazolidinedione. Computer-assisted digitization and analysis permitted simultaneous quantification of the two variables on a pixel-by-pixel basis in the same brain slices. Hemispheres ipsilateral to tamponade-induced middle cerebral artery occlusion showed areas of normal, depressed, and elevated glucose metabolic rate (as defined by an interhemispheric asymmetry index) after two hours of ischemia. Regions of normal glucose metabolic rate showed normal ABI (pH ± SD = 6.97 ± 0.09), regions of depressed lCMRglc showed severe acidosis (6.69 ± 0.14), and regions of elevated lCMRglc showed moderate acidosis (6.88 ± 0.10), all significantly different at the 0.00125 level as shown by analysis of variance. Moderate acidosis in regions of increased lCMRglc suggests that anaerobic glycolysis causes excess protons to be generated by the uncoupling of ATP synthesis and hydrolysis.
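The abstract does not define its interhemispheric asymmetry index explicitly; a common form, given here as an assumption for concreteness, compares each ipsilateral pixel with its contralateral homologue:

$$AI = \frac{lCMRglc_{\mathrm{ipsi}} - lCMRglc_{\mathrm{contra}}}{\tfrac{1}{2}\left(lCMRglc_{\mathrm{ipsi}} + lCMRglc_{\mathrm{contra}}\right)}$$

with pixels then classed as depressed, normal, or elevated according to thresholds on $AI$; the exact definition and cutoffs used in the study are not given in the abstract.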
Unimanual and Bimanual Weight Perception of Virtual Objects with a New Multi-finger Haptic Interface
Abstract:
Accurate weight perception is particularly important in tasks where the user has to apply vertical forces to ensure safe landing of a fragile object or precise penetration of a surface with a probe. Moreover, depending on physical properties of objects such as weight and size, we may switch between unimanual and bimanual manipulation during a task. Research has shown that bimanual manipulation of real objects results in a misperception of their weight: they tend to feel lighter than similarly heavy objects handled with one hand only [8]. Effective simulation of bimanual manipulation with desktop haptic interfaces should therefore replicate this effect of bimanual manipulation on weight perception. Here, we present the MasterFinger-2, a new multi-finger haptic interface allowing bimanual manipulation of virtual objects with a precision grip, and we conduct weight discrimination experiments to evaluate its capacity to simulate unimanual and bimanual weight. We found that the bimanual ‘lighter’ bias is also observed with the MasterFinger-2, but that sensitivity to changes in virtual weight deteriorated.
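As an illustration of how such weight-discrimination data are typically analyzed (a hypothetical sketch, not the paper's analysis code), a cumulative Gaussian psychometric function can be fitted to the responses and a Weber fraction derived:

```python
# Fit a psychometric function to "comparison felt heavier" proportions and
# derive the just-noticeable difference (JND) and Weber fraction.
# Stimulus levels and response proportions below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

comparison_g = np.array([300, 330, 360, 390, 420, 450])        # stimuli (g)
p_heavier    = np.array([0.05, 0.18, 0.40, 0.62, 0.86, 0.97])  # fake data
reference_g  = 375.0

def psychometric(x, pse, sigma):
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, comparison_g, p_heavier,
                            p0=[reference_g, 30.0])
jnd = sigma * norm.ppf(0.75)     # 75% discrimination threshold from the PSE
print(f"PSE={pse:.1f} g, JND={jnd:.1f} g, "
      f"Weber fraction={jnd / reference_g:.3f}")
```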
Abstract:
Advances in solid-state lighting have overcome common limitations of optical wireless, such as the power needed to compensate for light dispersion. It has recently been proposed to modify lamp drivers so as to exploit their switching behaviour for data links while preserving the illumination control they provide. In this paper, a remote access application using visible light communications is presented that provides wireless access to a remote computer using a touchscreen as the user interface.
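One common way to embed a data link in a lamp driver's switching without disturbing the delivered illumination is Manchester coding, whose 50% duty cycle is independent of the transmitted bits. The abstract does not state which line code the system uses, so the sketch below is an illustrative assumption:

```python
# Manchester-encode a byte stream for an LED driver: each bit becomes a
# low->high or high->low chip pair, so the average light output (duty
# cycle) stays at 50% regardless of the data.

def manchester_encode(data: bytes) -> list[int]:
    chips = []
    for byte in data:
        for i in range(7, -1, -1):              # MSB first
            bit = (byte >> i) & 1
            chips += [0, 1] if bit else [1, 0]  # IEEE 802.3 convention
    return chips

print(manchester_encode(b"A")[:8])  # chips for the first 4 bits of 'A' (0100)
```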
Abstract:
The Universidad Politécnica de Madrid (UPM) comprises schools and faculties of engineering, architecture, and computer science that are undergoing a rapid transformation into the degree, master, and doctorate structures of the EHEA Bologna Plan. These programs are oriented toward action on machines, constructions, and enterprises, all of which are exposed to risks created by machines, humans, and the environment: service loads, wind, snow, waves, flows, earthquakes, forces and effects in machines, vehicle behavior, chemical effects, and other environmental factors, including effects on crops, cattle, and forests, as well as various significant economic and social disturbances. In this session the authors emphasize risks of natural origin, such as hail, wind, snow, or waves, which are not exactly known a priori but are commonly described with statistical distributions yielding extreme values for convenient return periods. These distributions are derived from measurements over time, the statistics of extremes, and models of hazard scenarios and of the responses of man-made constructions or devices. Each engineering field has built theories about hazard scenarios and how to cover the important risks. Engineers must ensure that the systems they handle, such as vehicles, machines, firms, agricultural land, or forests, remain productive with adequate safety for persons and acceptable economic results in spite of those risks. Risks must therefore be considered in planning, realization, and operation, and safety margins must be adopted, but at a reasonable cost: a small residual risk will often remain, owing to cost limits or to rare hazards, and it may be covered by insurance, as in transport with cars, ships, or aircraft, in agriculture for hail, or for fire in houses or forests. These and other decisions about quality, human safety, or business financial risk are sometimes addressed with decision-theory models, often using tools from statistics or operations research. The authors have carried out, and are continuing, field surveys of how risk is treated in UPM curricula, analyzing the new degree structures of the EHEA Bologna Plan in depth and reviewing the risk frameworks offered by several schools of decision theory. This gives a picture of needs and current practice, and it leads to recommendations for improving the teaching of risk, possibly through dedicated subjects tailored to each career, school, or faculty and recommended for inclusion in the curricula, with an elaboration and presentation format based on a multi-criteria decision model.
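To make the notion of "extreme values for convenient return periods" concrete: if annual maxima (of wind speed, snow load, wave height, etc.) are modeled with a Gumbel distribution of location $\mu$ and scale $\beta$ (one standard choice; the abstract does not fix a distribution family), the design value exceeded on average once every $T$ years is

$$x_T = \mu - \beta \,\ln\!\left(-\ln\left(1 - \frac{1}{T}\right)\right),$$

obtained by solving $F(x_T) = 1 - 1/T$ for the Gumbel CDF $F(x) = \exp\!\left(-e^{-(x-\mu)/\beta}\right)$.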
Abstract:
This work proposes an encapsulation scheme aimed at simplifying the reuse of hardware cores. The approach has been conceived with a twofold objective. First, we seek to improve the reuse interface associated with the hardware core description; this is carried out in a first encapsulation level by enriching the limited types and configuration options available in conventional HDL interfaces and by providing information related to the implementation itself. Second, we have devised a more generic interface focused on describing the function while avoiding details of a particular implementation, which corresponds to a second encapsulation level. This encapsulation allows the designer to define how to configure and use the design to implement a given functionality. The proposed schemes increase the amount of information that can be supplied with a design and also make it possible to automate the process of searching for, configuring, and implementing diverse alternatives.
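As an illustration of the two-level idea (the record format and every field name below are hypothetical, not a format defined in the paper), a reuse tool might pair an implementation-level description with a function-level one:

```python
# Hypothetical two-level encapsulation record for a reusable FIR-filter
# core. Level 1 enriches the raw HDL interface; level 2 describes the
# function independently of any implementation.
core = {
    "level1_implementation": {
        "hdl_entity": "fir_core",
        "generics": {
            "DATA_WIDTH": {"type": "int", "range": [8, 32], "default": 16},
            "N_TAPS":     {"type": "int", "range": [4, 128], "default": 32},
        },
        "target": {"family": "FPGA", "max_freq_mhz": 150, "dsp_blocks": 32},
    },
    "level2_function": {
        "function": "FIR filter",
        "parameters": {"sample_rate_hz": 48_000, "taps": "coefficient list"},
        "constraints": ["linear phase", "fixed-point arithmetic"],
    },
}

# A reuse tool could search on level 2, then configure level 1:
if core["level2_function"]["function"] == "FIR filter":
    generics = {name: spec["default"]
                for name, spec in core["level1_implementation"]["generics"].items()}
    print("instantiate", core["level1_implementation"]["hdl_entity"],
          "with", generics)
```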
Abstract:
Sitting at the interface between Engineering, Computer Science, and Biology, Computational Neuron Mechanics appears as a new interdisciplinary field potentially able to tackle clinical problems from a new perspective. This field is multiscale by nature, ranging from the nanoscale (e.g., tubulin dimers) to the macroscale (e.g., brain tissue), and aims at tackling problems that are complex, and sometimes impossible, to study through experimental means. Computational modeling has been widely used in Neuroscience applications as diverse as neuronal growth or compound action potential propagation. However, in the majority of the modeling approaches in this field to date, the interactions between the cell and its surrounding media/stimulus have rarely been explored. Despite the tremendous importance of such a relationship in several medical challenges (e.g., traumatic brain injury (TBI), cancer, Alzheimer's disease (AD)), a bridge between the electrophysiological-chemical and mechanical properties of neurons from the molecular scale to the cell level is still lacking. To this end, this research proposes a multiscale computational framework particularized for two representative scenarios: axon growth and the electrophysiological-mechanical coupling of neurites. In the former case, the relation between the molecular constituents of the axon during its growth and its resulting mechanical properties is explored, whereas in the latter, a mechanical stimulus provokes functional deficits at the cell level as a consequence of its electrophysiological-chemical alterations. The computational approach chosen in this work is the finite difference method (FDM), implemented in a new program called Neurite. Although the finite element method (FEM) is also explored as part of this research, the FDM provides the flexibility and versatility needed to implement biological models, as well as the mathematical simplicity to extend them to large-scale simulations at a low computational cost.
Focusing first on the effect of electrophysiological-chemical properties on mechanical properties, an adaptation of Neurite was developed to simulate microtubule polymerization during axonal growth and provide the axon's mechanical properties as a function of microtubule occupancy. After calibrating the axon growth model against experimental results available in the literature, the mechanical characteristics can be tracked during the simulation. The axon's mechanical properties show dramatic variations at its tip, where the growth cone supports the chemical and mechanical signaling. Based on the knowledge gained from the FDM scheme, and in order to go from 1D to 3D, this preliminary yet novel scheme paves the road for future studies with FEM. Focusing then on the effect of mechanical properties on electrophysiological-chemical properties, Neurite was used to relate macroscopic mechanical loading to microscopic strains and strain rates, and to simulate electrical signal propagation along neurites under mechanical loading. The simulations were calibrated against experimental results published in the literature, thus providing a model able to predict the alteration of neuronal electrophysiological function under external damaging loads and linking mechanical injuries to the subsequent acute functional deficits. To undertake large-scale simulations, the explicit and implicit solvers were implemented for central processing units (CPUs) and graphics processing units (GPUs), although other state-of-the-art architectures based on many integrated cores (MICs) were also considered. Scalability studies were carried out for both implementations, showing promising results for extremely large simulations on GPUs. This thesis opens the avenue for future mechanical modeling approaches aimed at linking electrophysiological-chemical properties to mechanical properties. Its overarching goal is to enhance the bioengineering and medical communities' knowledge of neuronal mechanics and of the functional deficits arising from damage produced by direct mechanical insults, such as TBI, or by neurodegenerative illnesses, such as AD.
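The abstract does not reproduce Neurite's equations. As a minimal sketch of the kind of explicit finite-difference solver it describes, the passive cable equation for the membrane potential can be stepped as below; the parameters, stimulus, and boundary conditions are illustrative assumptions, and Neurite's actual model (active channels, mechanical coupling) is far richer:

```python
# Explicit FDM step for the passive cable equation
#   tau * dV/dt = lambda^2 * d^2V/dx^2 - V
# along a 1D neurite.
import numpy as np

nx, dx = 200, 1e-5               # 200 nodes, 10 um spacing (m)
tau, lam = 1e-3, 1e-4            # membrane time constant (s), space constant (m)
dt = 0.2 * tau * dx**2 / lam**2  # well under the explicit stability limit
V = np.zeros(nx)                 # membrane potential deviation (mV)

for step in range(5000):
    lap = (np.roll(V, -1) - 2 * V + np.roll(V, 1)) / dx**2
    V = V + dt / tau * (lam**2 * lap - V)
    V[0] = 10.0                  # clamped stimulus at the proximal end (mV)
    V[-1] = V[-2]                # sealed-end (zero-flux) distal boundary

print(f"potential 0.5 mm from the stimulus: {V[50]:.3f} mV")
```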
Abstract:
This project presents a technical report on the Leap Motion camera and its corresponding Software Development Kit; the Leap Motion is a device with a depth camera oriented to human-machine interfaces. The purpose is to develop a human-machine interface based on a hand-gesture recognition system. After an exhaustive study of the Leap Motion camera, several example programs were written to verify the capabilities described in the technical report, exercising the Application Programming Interface and evaluating the precision of the different measurements obtained from the camera data. Finally, a prototype gesture recognition system is developed: fingertip position and orientation data obtained from the Leap Motion are used to describe each gesture with a descriptor vector, which is fed to a Support Vector Machine used as a multi-class classifier.
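As a minimal sketch of that final classification step (the 12-dimensional descriptor layout and the training data are hypothetical, and scikit-learn stands in for whatever SVM implementation the project used):

```python
# Classify fingertip-descriptor vectors with a multi-class SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 40, 12          # e.g., 4 fingertips x (x, y, z)
gestures = ["open_hand", "fist", "point"]

# Synthetic training descriptors: one Gaussian cluster per gesture class.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(n_per_class, n_features))
               for i in range(len(gestures))])
y = np.repeat(gestures, n_per_class)

clf = SVC(kernel="rbf", C=10.0)           # multi-class handled one-vs-one
clf.fit(X, y)

new_descriptor = rng.normal(loc=1, scale=0.3, size=(1, n_features))
print("predicted gesture:", clf.predict(new_descriptor)[0])
```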
Abstract:
The concept of the IoT (Internet of Things) is accepted in academia and industry as the future direction of the Internet. It will enable people and things to be connected at any time and any place, with anything and anyone. The IoT has been proposed for application in many areas, such as healthcare, transportation, logistics, and smart environments. This thesis, however, focuses on home healthcare, as it is a healthcare model with the potential to solve problems the traditional model cannot, such as limited medical resources and the increasing demand for healthcare from elderly and chronic patients. A remarkable feature of the semantic-oriented vision of the IoT is the vast number of sensors and devices involved, which can generate enormous amounts of data; methods to manage these data, including acquiring, interpreting, processing, and storing them, need to be implemented. Beyond data management, other capabilities that the IoT still lacks are identified, namely interoperation, context awareness, and security and privacy. Context awareness is an emerging technology for managing and exploiting context so that any type of system can provide personalized services. The aim of this thesis is to explore ways to facilitate context awareness in the IoT. To realize this objective, a preliminary study was carried out. The most basic premise of context awareness is to collect, model, understand, reason over, and make use of context, so a complete literature review of existing context modelling and context reasoning techniques was conducted. The conclusion is that ontology-based context modelling and ontology-based context reasoning are the most promising and efficient techniques for managing context. To bring ontologies into the IoT, a specific ontology-based context-awareness framework is proposed for IoT applications. The framework is composed of eight components: hardware, UI (User Interface), context modelling, context fusion, context reasoning, context repository, a security unit, and context dissemination. Moreover, on the basis of TOVE (Toronto Virtual Enterprise), a formal ontology development methodology is proposed and illustrated, consisting of four stages: Specification & Conceptualization, Competency Formulation, Implementation, and Validation & Documentation. In addition, a home healthcare scenario is elaborated by listing its well-defined functionalities. To represent this specific scenario, the proposed methodology is applied and the ontology-based model is developed in Protégé, a free and open-source ontology editor. Finally, the accuracy and completeness of the proposed ontology are validated, showing that it can accurately represent the scenario of interest.
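As a minimal sketch of what ontology-based context modelling looks like in practice (the namespace, class names, property names, and the alert rule below are all hypothetical, and rdflib stands in for the thesis's Protégé/OWL tooling):

```python
# Ontology-style context modelling for a home-healthcare scenario using
# RDF triples.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import XSD

HH = Namespace("http://example.org/home-healthcare#")  # hypothetical IRI
g = Graph()
g.bind("hh", HH)

# TBox: a tiny class hierarchy for context entities.
g.add((HH.Sensor, RDF.type, RDFS.Class))
g.add((HH.HeartRateSensor, RDFS.subClassOf, HH.Sensor))

# ABox: one observation that a context-reasoning layer could act on.
g.add((HH.sensor42, RDF.type, HH.HeartRateSensor))
g.add((HH.sensor42, HH.hasPatient, HH.patient7))
g.add((HH.sensor42, HH.lastReadingBPM, Literal(118, datatype=XSD.integer)))

# A rule engine would flag tachycardia; here, a plain SPARQL query does it:
q = """SELECT ?p ?bpm WHERE { ?s hh:hasPatient ?p ; hh:lastReadingBPM ?bpm .
                              FILTER(?bpm > 100) }"""
for patient, bpm in g.query(q):
    print(f"alert: {patient} heart rate {bpm}")
```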
Abstract:
A more natural, intuitive, user-friendly, and less intrusive human-computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier with Local Binary Patterns as feature vectors. These detections are employed as input to a tracker that generates a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor and one of the most important contributions of the paper: it provides much richer spatio-temporal information than other existing approaches in the state of the art at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
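As a simplified sketch of the detection stage (LBP parameters, patch size, and training data are placeholders, not the paper's settings):

```python
# Binary hand / background detection: a Local Binary Pattern histogram
# serves as the feature vector for an SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                                   # LBP neighbours and radius

def lbp_histogram(patch):
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(100, 32, 32)).astype(np.uint8)
labels = rng.integers(0, 2, size=100)         # fake hand / background labels

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="linear").fit(X, labels)

candidate = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
print("hand detected" if clf.predict([lbp_histogram(candidate)])[0]
      else "background")
```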
Abstract:
The aim of this Master's Thesis is the analysis, design, and development of a robust and reliable Human-Computer Interaction interface based on visual hand-gesture recognition. The implementation of the required functions is oriented to simulating a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking, and recognition. The system is based on machine learning methods and pattern recognition techniques, integrated together with other image processing approaches to achieve high recognition accuracy at a low computational cost. Regarding pattern recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify them in a robust and reliable way. Finally, a visual database containing the necessary vocabulary of gestures for interacting with the computer has been created.
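As a sketch of the mouse-emulation layer such a system needs (the gesture labels and event model below are hypothetical; the abstract does not enumerate them):

```python
# Map recognized gestures to mouse events, with the tracked hand position
# driving the cursor.
from dataclasses import dataclass

@dataclass
class MouseEvent:
    kind: str            # "move", "down", "up", or "click"
    x: int = 0
    y: int = 0

GESTURE_TO_EVENT = {"pinch": "down", "release": "up", "fist": "click"}

def to_mouse_events(frames):
    """frames: iterable of (gesture_label, hand_x, hand_y) per video frame."""
    for gesture, x, y in frames:
        yield MouseEvent("move", x, y)                 # cursor follows hand
        if gesture in GESTURE_TO_EVENT:
            yield MouseEvent(GESTURE_TO_EVENT[gesture], x, y)

demo = [("open", 100, 120), ("pinch", 102, 121), ("release", 110, 130)]
for ev in to_mouse_events(demo):
    print(ev)
```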
Abstract:
Speech interface technology, which includes automatic speech recognition, synthetic speech, and natural language processing, is beginning to have a significant impact on business and personal computer use. Today, powerful and inexpensive microprocessors and improved algorithms are driving commercial applications in computer command, consumer products, data entry, speech-to-text, telephone, and voice verification. Robust speaker-independent recognition systems for command and navigation in personal computers are now available; telephone-based transaction and database inquiry systems using both speech synthesis and recognition are coming into use. Large-vocabulary speech interface systems for document creation and read-aloud proofing are expanding beyond niche markets. Today's applications represent a small preview of a rich future for speech interface technology that will eventually replace keyboards with microphones and loudspeakers to give easy accessibility to increasingly intelligent machines.