12 results for EEG signal classification in CaltechTHESIS
Abstract:
This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.
Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning: the networks adapt based only on the raw input data, with no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.
Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.
Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
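The noise-injection scheme summarized above is closely related to what is now called simultaneous perturbation stochastic approximation: perturb every parameter at once with random noise, observe the single global change in output, and correlate that change with each local noise sample. A minimal software sketch of that idea (a toy quadratic objective stands in for an analog adaptive system; all names here are illustrative, not from the thesis):

```python
import random

def spsa_step(f, w, c=0.1, lr=0.05):
    """One noisy gradient-descent step: perturb all parameters at once
    with +/-c noise and correlate the single output difference (the one
    'global signal') with each parameter's local noise sample."""
    delta = [random.choice((-1.0, 1.0)) for _ in w]
    f_plus = f([wi + c * di for wi, di in zip(w, delta)])
    f_minus = f([wi - c * di for wi, di in zip(w, delta)])
    grad = [(f_plus - f_minus) / (2.0 * c * di) for di in delta]
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# toy objective: squared distance to a target parameter vector
target = (1.0, -2.0, 3.0)
f = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))

w = [0.0, 0.0, 0.0]
for _ in range(2000):
    w = spsa_step(f, w)
# w is now close to target without any explicit gradient calculation
```

Note how the number of operations per step grows linearly with the number of parameters, echoing the linear device-count scaling claimed in the abstract.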
Abstract:
The development of the vulva of the nematode Caenorhabditis elegans is induced by a signal from the anchor cell of the somatic gonad. Activity of the gene lin-3 is required for the Vulval Precursor Cells (VPCs) to assume vulval fates. It is shown here that lin-3 encodes the vulval-inducing signal.
lin-3 was molecularly cloned by transposon tagging and shown to encode a nematode member of the Epidermal Growth Factor (EGF) family. Genetic epistasis experiments indicate that lin-3 acts upstream of let-23, which encodes a homologue of the EGF receptor.
lin-3 transgenes that contain multiple copies of wild-type lin-3 genomic DNA clones confer a dominant multivulva phenotype in which up to all six of the VPCs assume vulval fates. The properties of these transgenes suggest that lin-3 can act in the anchor cell to induce vulval fates. Ablation of the gonadal precursors, which prevents the development of the anchor cell, strongly reduces the ability of lin-3 transgenes to stimulate vulval development. A lin-3 reporter transgene that retains the ability to stimulate vulval development is expressed specifically in the anchor cell at the time of vulval induction.
Expression of an obligately secreted form of the EGF domain of LIN-3 from a heterologous promoter is sufficient to induce vulval fates in the absence of the normal source of the inductive signal. This result suggests that LIN-3 may act as a secreted factor, and that LIN-3 may be the sole vulval-inducing signal made by the anchor cell.
lin-3 transgenes can cause adjacent VPCs to assume the 1° vulval fate and thus can override the action of the lateral signal mediated by lin-12 that normally prevents adjacent 1° fates. This indicates that the production of LIN-3 by the anchor cell must be limited to allow the VPCs to assume the proper pattern of fates, 3° 3° 2° 1° 2° 3°.
Abstract:
The roles of the folate receptor and an anion carrier in the uptake of 5-methyltetrahydrofolate (5-MeH_4folate) were studied in cultured human (KB) cells using radioactive 5-MeH_4folate. Binding of the 5-MeH_4folate was inhibited by folic acid, but not by probenecid, an anion carrier inhibitor. The internalization of 5-MeH_4folate was inhibited by low temperature, folic acid, probenecid and methotrexate. Prolonged incubation of cells in the presence of high concentrations of probenecid appeared to inhibit endocytosis of folate receptors as well as the anion carrier. The V_(max) and K_M values for the carrier were 8.65 ± 0.55 pmol/min/mg cell protein and 3.74 ± 0.54 µM, respectively. The transport of 5-MeH_4folate was competitively inhibited by folic acid, probenecid and methotrexate. The carrier dissociation constants for folic acid, probenecid and methotrexate were 641 µM, 2.23 mM and 13.8 µM, respectively. Kinetic analysis suggests that 5-MeH_4folate at physiological concentration is transported through an anion carrier with the characteristics of the reduced-folate carrier after 5-MeH_4folate is endocytosed by folate receptors in KB cells. Our data with KB cells suggest that folate receptors and probenecid-sensitive carriers work in tandem to transport 5-MeH_4folate to the cytoplasm of cells, based upon the assumption that 1 mM probenecid does not interfere with the acidification of the vesicle where the folate receptors are endocytosed.
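For readers who want to sanity-check the reported kinetics, the standard competitive-inhibition form of the Michaelis-Menten equation, v = V_max·S / (K_M·(1 + I/K_i) + S), can be evaluated with the constants quoted above. The following sketch uses those constants but is otherwise a generic textbook model, not code from the thesis:

```python
def transport_rate(s, vmax=8.65, km=3.74, inhibitor=0.0, ki=float("inf")):
    """Michaelis-Menten rate with competitive inhibition:
    v = Vmax*S / (Km*(1 + I/Ki) + S); concentrations in uM,
    v in pmol/min/mg cell protein (constants from the abstract)."""
    km_app = km * (1.0 + inhibitor / ki)  # apparent Km raised by the inhibitor
    return vmax * s / (km_app + s)

# uninhibited rate at S = Km = 3.74 uM is exactly half of Vmax
v_half = transport_rate(3.74)
# 1 mM probenecid (Ki = 2.23 mM = 2230 uM) raises the apparent Km and slows transport
v_prob = transport_rate(3.74, inhibitor=1000.0, ki=2230.0)
```

The competitive form reflects the abstract's observation that inhibitors raise the apparent K_M while leaving V_max unchanged.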
Oligodeoxynucleotides designed to hybridize to specific mRNA sequences (antisense oligonucleotides) or double-stranded DNA sequences have been used to inhibit the synthesis of a number of cellular and viral proteins (Crooke, S. T. (1993) FASEB J. 7, 533-539; Carter, G. and Lemoine, N. R. (1993) Br. J. Cancer 67, 869-876; Stein, C. A. and Cohen, J. S. (1988) Cancer Res. 48, 2659-2668). However, the distribution of the delivered oligonucleotides in the cell, i.e., in the cytoplasm or in the nucleus, has not been clearly defined. We studied the kinetics of oligonucleotide transport into the cell nucleus using reconstituted cell nuclei as a model system. We present evidence here that oligonucleotides can freely diffuse into reconstituted nuclei. Our results are consistent with the reports by Leonetti et al. (Proc. Natl. Acad. Sci. USA, Vol. 88, pp. 2702-2706, April 1991), which were published while we were carrying out this research independently. We also investigated whether a synthetic nuclear localization signal (NLS) peptide of the SV40 T antigen could be used for the nuclear targeting of oligonucleotides. We synthesized an NLS peptide-conjugated oligonucleotide to see whether the peptide can enhance the uptake of oligonucleotides into reconstituted Xenopus nuclei. Uptake of the NLS peptide-conjugated oligonucleotide was comparable to that of the control oligonucleotide at similar concentrations, suggesting that the NLS peptide does not significantly enhance the nuclear accumulation of oligonucleotides. This result is probably due to the small size of the oligonucleotide.
Abstract:
Humans are capable of distinguishing more than 5000 visual categories even in complex environments, using a variety of different visual systems all working in tandem. We seem to be capable of distinguishing thousands of different odors as well. In the machine learning community, many commonly used multi-class classifiers do not scale well to such large numbers of categories. This thesis demonstrates a method of automatically creating application-specific taxonomies to aid in scaling classification algorithms to more than 100 categories, using both visual and olfactory data. The visual data consist of images collected online and pollen slides scanned under a microscope. The olfactory data were acquired by constructing a small portable sniffing apparatus which draws air over 10 carbon black polymer composite sensors. We investigate performance when classifying 256 visual categories, 8 or more species of pollen, and 130 olfactory categories sampled from common household items and a standardized scratch-and-sniff test. Taxonomies are employed in a divide-and-conquer classification framework which improves classification time while allowing the end user to trade performance for specificity as needed. Before classification can even take place, the pollen counter and electronic nose must filter out a high volume of background “clutter” to detect the categories of interest. In the case of pollen this is done with an efficient cascade of classifiers that rule out most non-pollen before invoking slower multi-class classifiers. In the case of the electronic nose, much of the extraneous noise encountered in outdoor environments can be filtered out using a sniffing strategy which preferentially samples the sensor response at frequencies that are relatively immune to background contributions from ambient water vapor.
This combination of efficient background rejection with scalable classification algorithms is tested in detail for three separate projects: 1) the Caltech-256 Image Dataset, 2) the Caltech Automated Pollen Identification and Counting System (CAPICS) and 3) a portable electronic nose specially constructed for outdoor use.
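The divide-and-conquer framework described above can be pictured as routing a sample down a taxonomy, invoking one cheap classifier per node instead of a single flat classifier over 100+ categories. A toy sketch with hypothetical category names and nearest-centroid stand-ins for the real per-node classifiers:

```python
# Hypothetical taxonomy: each internal node holds centroids for its
# branches; nearest-centroid stands in for the real per-node classifiers.
def nearest(sample, centroids):
    """Name of the centroid closest (squared distance) to the sample."""
    return min(centroids, key=lambda name: sum(
        (s - c) ** 2 for s, c in zip(sample, centroids[name])))

taxonomy = {
    "root": {"animal": (0.0, 0.0), "vehicle": (10.0, 10.0)},
    "animal": {"cat": (0.0, 1.0), "dog": (1.0, 0.0)},
    "vehicle": {"car": (9.0, 10.0), "plane": (10.0, 9.0)},
}

def classify(sample, node="root"):
    """Route the sample down the taxonomy until reaching a leaf label."""
    branch = nearest(sample, taxonomy[node])
    return classify(sample, branch) if branch in taxonomy else branch

label = classify((0.2, 0.9))  # routed root -> animal -> cat
```

Stopping the recursion early at an internal node is what lets the end user trade specificity ("animal") for performance, as the abstract describes.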
Abstract:
Some of the most exciting developments in the field of nucleic acid engineering include the utilization of synthetic nucleic acid molecular devices as gene regulators, as disease marker detectors, and most recently, as therapeutic agents. The common thread between these technologies is their reliance on the detection of specific nucleic acid input markers to generate some desirable output, such as a change in the copy number of an mRNA (for gene regulation), a change in the emitted light intensity (for some diagnostics), and a change in cell state within an organism (for therapeutics). The research presented in this thesis likewise focuses on engineering molecular tools that detect specific nucleic acid inputs, and respond with useful outputs.
Four contributions to the field of nucleic acid engineering are presented: (1) the construction of a single nucleotide polymorphism (SNP) detector based on the mechanism of hybridization chain reaction (HCR); (2) the utilization of a single-stranded oligonucleotide molecular scavenger as a means of enhancing HCR selectivity; (3) the implementation of Quenched HCR, a technique that facilitates transduction of a nucleic acid chemical input into an optical (light) output; and (4) the engineering of conditional probes that function as sequence transducers, receiving a target signal as input and providing a sequence of choice as output. These programmable molecular systems are conceptually well suited for performing wash-free, highly selective rapid genotyping and expression profiling in vitro, in situ, and potentially in living cells.
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that limits the motion between two floors. The method proves robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology.
Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those previously recorded. One undamaged pulse and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
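The pulse-counting step of the second method can be illustrated with a simple amplitude-threshold detector; the actual method classifies full pulse signatures, so this is only a sketch of the first stage, run here on synthetic data:

```python
def detect_pulses(accel, threshold, min_gap=5):
    """Indices where short-duration pulses begin: samples whose absolute
    amplitude exceeds `threshold`, with hits closer than `min_gap`
    samples merged so one physical pulse is not counted twice."""
    pulses, last = [], None
    for i, a in enumerate(accel):
        if abs(a) > threshold:
            if last is None or i - last >= min_gap:
                pulses.append(i)
            last = i
    return pulses

# synthetic records: an undamaged baseline vs. the same record with
# two injected damage-like spikes
quiet = [0.01, -0.02, 0.015, 0.0, -0.01] * 20
damaged = list(quiet)
damaged[30], damaged[70] = 0.9, -0.8
counts = (len(detect_pulses(quiet, 0.5)), len(detect_pulses(damaged, 0.5)))
```

Comparing pulse counts (and arrival indices) between the baseline and the later record mirrors, in miniature, the before/after comparison of pulse signatures described above.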
Abstract:
RTK-mediated signaling systems and the pathways with which they interact (e.g., those initiated by G protein-mediated signaling) form a highly cooperative network that senses a large number of cellular inputs and then integrates, amplifies, and processes this information to orchestrate an appropriate set of cellular responses. The responses include virtually all aspects of cell function, from the most fundamental (proliferation, differentiation) to the most specialized (movement, metabolism, chemosensation). The basic tenets of the RTK signaling system seem rather well established. Yet new pathways and even new molecular players continue to be discovered. Although we believe that many of the essential modules of the RTK signaling system are rather well understood, we have relatively little knowledge of the extent of interaction among these modules and their overall quantitative importance.
My research has encompassed the study of both positive and negative signaling by RTKs in C. elegans. I identified the C. elegans SOS-1 gene and showed that it is necessary for multiple RAS-mediated developmental signals. In addition, I demonstrated that there is SOS-1-independent signaling during RAS-mediated vulval differentiation. By assessing signal outputs from various triple mutants, I have concluded that this SOS-1-independent signaling is not mediated by PTP-2/SHP-2 or the removal of inhibition by GAP-1/RasGAP, and that it is not under regulation by SLI-1/Cbl. I speculate that there is either another exchange factor for RAS or an as yet unidentified signaling pathway operating during RAS-mediated vulval induction in C. elegans.
In an attempt to uncover the molecular mechanisms of negative regulation of EGFR signaling by SLI-1/Cbl, two colleagues and I codiscovered that the RING finger domain of SLI-1 is partially dispensable for activity. This structure-function analysis shows that there is a ubiquitin-protein-ligase-independent activity for SLI-1 in regulating EGFR signaling. Further, we identified an inhibitory tyrosine of LET-23/EGFR requiring sli-1(+) for its effects: removal of this tyrosine closely mimics loss of sli-1 but not loss of other negative-regulator function.
By comparative analysis of two RTK pathways with similar signaling mechanisms, I have found that clr-1, a previously identified negative regulator of egl-15-mediated FGFR signaling, is also involved in let-23 EGFR signaling. The success of this approach invites a similar reciprocal test and could potentially extend to the study of other signaling pathways with similar signaling logic.
Finally, by correlating the developmental expression of lin-3 EGF to let-23 EGFR signaling activity, I demonstrated the existence of reciprocal EGF signaling in coordinating the morphogenesis of epithelia. This developmental logic of EGF signaling could provide a basis to understand a universal mechanism for organogenesis.
Abstract:
The signal recognition particle (SRP) targets membrane and secretory proteins to their correct cellular destination with remarkably high fidelity. Previous studies have shown that multiple checkpoints exist within this targeting pathway that allow ‘correct cargo’ to be quickly and efficiently targeted and ‘incorrect cargo’ to be promptly rejected. In this work, we delved further into the mechanisms by which substrates are selected or discarded by the SRP. First, we discovered the role of the SRP fingerloop and how it activates the SRP and SRP receptor (SR) GTPases to target and unload cargo in response to signal sequence binding. Second, we learned how an ‘avoidance signal’ found in the bacterial autotransporter EspP allows this protein to escape the SRP pathway by causing the SRP and SR to form a ‘distorted’ complex that is inefficient in delivering the cargo to the membrane. Lastly, we determined how Trigger Factor, a co-translational chaperone, helps SRP discriminate against ‘incorrect cargo’ at three distinct stages: SRP binding to the RNC; targeting of the RNC to the membrane via SRP-FtsY assembly; and stronger antagonism of SRP targeting of ribosomes bearing nascent polypeptides that exceed a critical length. Overall, these results delineate the rich underlying mechanisms by which SRP recognizes its substrates, which in turn activate the targeting pathway, and provide a conceptual foundation for understanding how timely and accurate selection of substrates is achieved by this protein-targeting machinery.
Abstract:
The LIGO and Virgo gravitational-wave observatories are complex and extremely sensitive strain detectors that can be used to search for a wide variety of gravitational waves from astrophysical and cosmological sources. In this thesis, I motivate the search for the gravitational wave signals from coalescing black hole binary systems with total mass between 25 and 100 solar masses. The mechanisms for formation of such systems are not well-understood, and we do not have many observational constraints on the parameters that guide the formation scenarios. Detection of gravitational waves from such systems — or, in the absence of detection, the tightening of upper limits on the rate of such coalescences — will provide valuable information that can inform the astrophysics of the formation of these systems. I review the search for these systems and place upper limits on the rate of black hole binary coalescences with total mass between 25 and 100 solar masses. I then show how the sensitivity of this search can be improved by up to 40% by the application of the multivariate statistical classifier known as a random forest of bagged decision trees to more effectively discriminate between signal and non-Gaussian instrumental noise. I also discuss the use of this classifier in the search for the ringdown signal from the merger of two black holes with total mass between 50 and 450 solar masses and present upper limits. I also apply multivariate statistical classifiers to the problem of quantifying the non-Gaussianity of LIGO data. Despite these improvements, no gravitational-wave signals have been detected in LIGO data so far. However, the use of multivariate statistical classification can significantly improve the sensitivity of the Advanced LIGO detectors to such signals.
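The random forest of bagged decision trees mentioned above can be illustrated in miniature: bootstrap-resample the training events, fit one weak learner per resample, and classify by majority vote. The sketch below uses single-threshold decision stumps and synthetic two-feature "events" (the real search uses many more features and full decision trees):

```python
import random
random.seed(0)

def fit_stump(data):
    """Exhaustively choose (feature, threshold, polarity) minimizing
    training error; data is a list of ((f0, f1), label) pairs."""
    best = (len(data) + 1, 0, 0.0, 1)
    for f in range(len(data[0][0])):
        for x0, _ in data:
            for pol in (1, -1):
                err = sum(((x[f] - x0[f]) * pol > 0) != (y == 1)
                          for x, y in data)
                if err < best[0]:
                    best = (err, f, x0[f], pol)
    _, f, t, pol = best
    return lambda x: 1 if (x[f] - t) * pol > 0 else 0

def bagged_forest(data, n_stumps=15):
    """Bootstrap-resample the data, fit one stump per resample,
    and classify by majority vote."""
    stumps = [fit_stump([random.choice(data) for _ in data])
              for _ in range(n_stumps)]
    return lambda x: int(2 * sum(s(x) for s in stumps) > len(stumps))

# synthetic events: "signal" near (1, 1) labeled 1, "glitches" near (-1, -1) labeled 0
signal = [((1 + 0.3 * random.random(), 1 + 0.3 * random.random()), 1)
          for _ in range(20)]
glitch = [((-1 - 0.3 * random.random(), -1 - 0.3 * random.random()), 0)
          for _ in range(20)]
forest = bagged_forest(signal + glitch)
pred_signal, pred_glitch = forest((1.5, 1.5)), forest((-1.5, -1.5))
```

Bagging reduces the variance of the individual weak learners; in the search it is the ensemble vote that separates true coalescence candidates from non-Gaussian glitches.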
Abstract:
The problem of representing the signal envelope is treated, motivated by the classical Hilbert representation, in which the envelope is expressed in terms of the received signal and its Hilbert transform. It is shown that the Hilbert representation is the proper one if the received signal is strictly bandlimited, but that some other filter is more appropriate in the band-unlimited case. A specific alternative filter, the conjugate filter, is proposed, and the overall envelope estimation error is evaluated to show that for a specific received-signal power spectral density the proposed filter yields a lower envelope error than the Hilbert filter.
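The classical Hilbert representation referred to above computes the envelope as the magnitude of the analytic signal, obtained by suppressing negative frequencies. A numerical sketch of that baseline (the thesis's conjugate filter is not reproduced here):

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope via the analytic signal: zero the negative frequencies,
    double the positive ones, inverse-transform, take the magnitude."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return np.abs(analytic)

# an amplitude-modulated tone: the envelope should recover 1 + 0.5*cos(2*pi*4*t)
t = np.arange(1024) / 1024.0
carrier = np.cos(2 * np.pi * 64 * t)
am = (1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)) * carrier
env = hilbert_envelope(am)
```

Because this test signal is strictly bandlimited, the Hilbert envelope is essentially exact here; the thesis's point is that a different filter does better when the signal is not bandlimited.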
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides us with the underlying ground truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better than expected: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); then the image is handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
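The classify-then-regress pipeline described above amounts to a dispatcher: a structure classifier selects which structure-specific regressor interprets the image. A toy sketch with trivial stand-in models (the real models are trained on simulated (image, truth) pairs; everything below is illustrative):

```python
# Hypothetical committee of experts: a classifier picks the structure
# type, then a regressor trained for that structure predicts layer fractions.
def classify_structure(image):
    # stand-in classifier: "two_layer" if mean intensity is high, else "three_layer"
    return "two_layer" if sum(image) / len(image) > 0.5 else "three_layer"

def regress_two_layer(image):
    # stand-in regressor: split depth proportionally to intensity in each half
    frac = sum(image[: len(image) // 2]) / max(sum(image), 1e-9)
    return [frac, 1.0 - frac]

def regress_three_layer(image):
    third = len(image) // 3
    total = max(sum(image), 1e-9)
    return [sum(image[:third]) / total,
            sum(image[third:2 * third]) / total,
            sum(image[2 * third:]) / total]

EXPERTS = {"two_layer": regress_two_layer, "three_layer": regress_three_layer}

def reconstruct(image):
    """Dispatch: classify the structure, then apply that structure's expert."""
    structure = classify_structure(image)
    return structure, EXPERTS[structure](image)

structure, depths = reconstruct([0.9, 0.8, 0.7, 0.9])
```

The design point is that each regressor only ever sees images of one structure type, which is what makes the per-structure regression tractable.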
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even to attempt such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
In the first section of this thesis, two-dimensional properties of the human eye movement control system were studied. The vertical-horizontal interaction was investigated using a two-dimensional target motion consisting of a sinusoid in one direction (vertical or horizontal) and low-pass-filtered Gaussian random motion of variable bandwidth (and hence information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking. However, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.
These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.
In the second section, time delays, prediction and power spectra were examined. Time delays were calculated in response to various periodic signals, various bandwidths of narrow-band Gaussian random motions and sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.
For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this I proposed a model explaining the time delays for both random and periodic motions. My experiments did not prove that the system is a sampled-data system, or that it is continuous. However, the model can be interpreted as representing a sampled-data system whose sampling interval is a function of the target motion.
It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.