3 results for language model
at Duke University
Abstract:
Brain-computer interfaces (BCIs) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals and to translate this information into executable commands for controlling external devices. However, the BCI decision-making process is error-prone because the electrophysiological data are noisy, making it an instance of the classic problem of efficiently transmitting and receiving information over a noisy communication channel.
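As a brief illustration of this noisy-channel framing (a textbook abstraction used here for illustration only, not a model taken from the dissertation), the sketch below computes the capacity of a binary symmetric channel: as the decoding error rate rises, the achievable information rate per selection falls, which is the trade-off the abstract describes.

```python
import numpy as np

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel with error rate p."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

# A noisier channel (more decoding errors) supports a lower information rate.
print(bsc_capacity(0.05), bsc_capacity(0.20))  # ~0.71 and ~0.28 bits per use
```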
This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERPs) elicited as a function of a user's uncertainty regarding stimulus events in either an acoustic or a visual oddball recognition task. A P300-based BCI enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. Because the elicited ERPs embedded in the electroencephalography (EEG) data have a low signal-to-noise ratio, repeated data measurements are required to improve the accuracy of the target character estimation process. As a result, BCIs are slower than other commercial assistive communication devices, which limits their adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
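The repeated-measurement idea can be pictured with a minimal sketch: averaging stimulus-locked EEG epochs attenuates zero-mean noise roughly as 1/sqrt(n), which is why P300 spellers repeat flashes before committing to a selection. The array shapes and synthetic signal below are assumptions for illustration, not data or code from the dissertation.

```python
import numpy as np

def average_erp(epochs):
    """Average repeated stimulus-locked EEG epochs to raise the ERP SNR.

    epochs: array of shape (n_trials, n_channels, n_samples).
    Averaging n independent trials reduces zero-mean noise by about sqrt(n).
    """
    return epochs.mean(axis=0)

# Synthetic illustration: a small P300-like bump buried in noise (assumed shapes).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 30, 8, 200
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 75) / 15) ** 2)  # peak near sample 75
epochs = 0.5 * p300 + rng.normal(scale=2.0, size=(n_trials, n_channels, n_samples))
erp = average_erp(epochs)
print(erp.shape)  # (8, 200); residual noise std is roughly 2 / sqrt(30)
```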
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can give the system the flexibility to improve performance by adjusting its parameters in response to changing user inputs. Three potential areas for improvement within the P300 speller framework are addressed: information optimisation, target character estimation, and error correction. The visual interface and its operation control how the ERPs are elicited through the presentation of stimulus events, and the parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed to maximise the information content presented to the user by tuning stimulus paradigm parameters so as to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method for predicting BCI performance is developed, an approach that is independent of the stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed adaptive strategies in these three areas can significantly improve BCI communication rates, and that the proposed performance-prediction method provides a reliable means of assessing BCI performance without extensive online testing.
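One way to picture the combination of language information and dynamic data collection is a Bayesian sketch: a language-model prior over characters is updated with classifier scores from each flash, and data collection stops as soon as the posterior is confident. The alphabet, score distributions, and stopping threshold below are illustrative assumptions, not the dissertation's actual methods or parameters.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical speller alphabet; "_" stands in for the space character.
ALPHABET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_")

def bayesian_speller(prior, flashes, scores, threshold=0.95):
    """Fuse a language-model prior with EEG classifier scores and stop early.

    prior:   language-model probabilities over ALPHABET (sums to 1).
    flashes: iterable of sets of characters illuminated on each flash.
    scores:  classifier output for the EEG epoch following each flash.
    Returns the selected character and the number of flashes consumed.
    """
    post = np.asarray(prior, dtype=float)
    # Assumed score model: scores are higher, on average, when the target flashed.
    target_pdf = lambda s: norm.pdf(s, loc=1.0, scale=1.0)
    nontarget_pdf = lambda s: norm.pdf(s, loc=0.0, scale=1.0)
    n = 0
    for n, (group, s) in enumerate(zip(flashes, scores), start=1):
        likelihood = np.array([target_pdf(s) if c in group else nontarget_pdf(s)
                               for c in ALPHABET])
        post = post * likelihood
        post /= post.sum()
        if post.max() >= threshold:  # dynamic stopping: enough evidence collected
            break
    return ALPHABET[int(post.argmax())], n

# Tiny made-up example: a uniform prior and three flashes that all include 'E'.
prior = np.full(len(ALPHABET), 1.0 / len(ALPHABET))
flashes = [set("ABCDE"), set("EJOTY"), set("ABCDE")]
scores = [1.2, 0.9, 1.1]
print(bayesian_speller(prior, flashes, scores))  # selects 'E' after the three flashes
```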
Abstract:
Angelman syndrome (AS) is a neurobehavioral disorder associated with mental retardation, absence of language development, characteristic electroencephalography (EEG) abnormalities and epilepsy, happy disposition, movement or balance disorders, and autistic behaviors. The molecular defects underlying AS are heterogeneous, including large maternal deletions of chromosome 15q11-q13 (70%), paternal uniparental disomy (UPD) of chromosome 15 (5%), imprinting mutations (rare), and mutations in the E6-AP ubiquitin ligase gene UBE3A (15%). Although patients with UBE3A mutations have a wide spectrum of neurological phenotypes, their features are usually milder than those of AS patients with deletions of 15q11-q13. Using a chromosomal engineering strategy, we generated mutant mice with a 1.6-Mb chromosomal deletion from Ube3a to Gabrb3, which inactivated the Ube3a and Gabrb3 genes and deleted the Atp10a gene. Homozygous deletion mutant mice died in the perinatal period due to a cleft palate resulting from the null mutation in the Gabrb3 gene. Mice with a maternal deletion (m-/p+) were viable and did not have any obvious developmental defects. Expression analysis of the maternal and paternal deletion mice confirmed that the Ube3a gene is maternally expressed in brain, and showed that the Atp10a and Gabrb3 genes are biallelically expressed in all brain sub-regions studied. Maternal (m-/p+), but not paternal (m+/p-), deletion mice had increased spontaneous seizure activity and abnormal EEG. Extensive behavioral analyses revealed significant impairment in motor function, learning and memory tasks, and anxiety-related measures assayed in the light-dark box in maternal deletion but not paternal deletion mice. Ultrasonic vocalization (USV) recording in newborns revealed that maternal deletion pups emitted significantly more USVs than wild-type littermates. The increased USV in maternal deletion mice suggests abnormal signaling behavior between mothers and pups that may reflect abnormal communication behaviors in human AS patients. Thus, mutant mice with a maternal deletion from Ube3a to Gabrb3 provide an AS mouse model that is molecularly more similar to the contiguous gene deletion form of AS in humans than mice with a Ube3a mutation alone. These mice will be valuable for future comparisons with mice carrying a maternal deficiency of Ube3a alone.
Abstract:
Behavior, neuropsychology, and neuroimaging suggest that episodic memories are constructed from interactions among the following basic systems: vision, audition, olfaction, other senses, spatial imagery, language, emotion, narrative, motor output, explicit memory, and search and retrieval. Each system has its own well-documented functions, neural substrates, processes, structures, and kinds of schemata. However, the systems have not been considered as interacting components of episodic memory, as is proposed here. Autobiographical memory and oral traditions are used to demonstrate the usefulness of the basic-systems model in accounting for existing data and predicting novel findings, and to argue that the model, or one similar to it, is the only way to understand episodic memory for complex stimuli routinely encountered outside the laboratory.