930 results for Spatial Query Processing And Optimization
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two concurrent sounds; and stimuli from the sensory organs (i.e., the ears) combined with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses: (1) the brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism; or (2) the brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: nearly all auditory signals pass through it before reaching the forebrain. The inferior colliculus is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
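A natural way to quantify the judgment bias described above is to fit a psychometric function to the choice data with and without stimulation and compare the midpoints. Below is a minimal sketch of that analysis; the offsets, response proportions, and starting values are invented for illustration and are not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(freq_offset, bias, slope):
    """P(report 'higher') as a function of probe-minus-reference offset (octaves)."""
    return 1.0 / (1.0 + np.exp(-slope * (freq_offset - bias)))

# Invented proportions of 'higher' reports (not the study's data):
offsets   = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])
p_control = np.array([0.05, 0.20, 0.50, 0.80, 0.95])  # sound-only trials
p_stim    = np.array([0.10, 0.35, 0.70, 0.90, 0.97])  # sound + microstimulation

(b_ctrl, _), _ = curve_fit(psychometric, offsets, p_control, p0=[0.0, 5.0])
(b_stim, _), _ = curve_fit(psychometric, offsets, p_stim, p0=[0.0, 5.0])
print(f"midpoint shift from stimulation: {b_ctrl - b_stim:.2f} octaves")
```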
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how these neurons will respond when more than one sound is present. I use multiple amplitude-modulated (AM) stimuli with different modulation frequencies, which the inferior colliculus represents using a spike-timing code. This allows me to use spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining their spikes to stimuli from a smaller region of space. I will examine the possibility of a conceptual linkage between this finding and the receptive-field shifts reported in the visual system.
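The degree of entrainment to each AM tag can be quantified with vector strength, the resultant length of the spike phases at the tag's modulation frequency. The sketch below uses a toy spike train and two hypothetical tag frequencies; it illustrates the analysis, not the experiment's actual parameters.

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Resultant length of spike phases at one AM tag frequency
    (1 = perfect phase locking, 0 = no locking)."""
    phases = 2 * np.pi * mod_freq_hz * spike_times_s
    return np.abs(np.mean(np.exp(1j * phases)))

# Toy spike train locked to a 20 Hz tag: one jittered spike per cycle for 2 s.
rng = np.random.default_rng(0)
spikes = (np.arange(40) + 0.05 * rng.standard_normal(40)) / 20.0

for f_tag in (20.0, 28.0):  # hypothetical tag frequencies of two sources
    print(f"{f_tag:.0f} Hz tag: vector strength = {vector_strength(spikes, f_tag):.2f}")
```

Spikes driven by the 20 Hz source yield a vector strength near 1 at that tag and near 0 at the other, which is what lets each spike be attributed to one source in a multi-sound scene.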
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
The coupling of mechanical stress fields in polymers to covalent chemistry (polymer mechanochemistry) has provided access to previously unattainable chemical reactions and polymer transformations. In the bulk, mechanochemical activation has been used as the basis for new classes of stress-responsive polymers that demonstrate stress/strain sensing, shear-induced intermolecular reactivity for molecular level remodeling and self-strengthening, and the release of acids and other small molecules that are potentially capable of triggering further chemical response. The potential utility of polymer mechanochemistry in functional materials is limited, however, by the fact that to date, all reported covalent activation in the bulk occurs in concert with plastic yield and deformation, so that the structure of the activated object is vastly different from its nascent form. Mechanochemically activated materials have thus been limited to “single use” demonstrations, rather than as multi-functional materials for structural and/or device applications. Here, we report that filled polydimethylsiloxane (PDMS) elastomers provide a robust elastic substrate into which mechanophores can be embedded and activated under conditions from which the sample regains its original shape and properties. Fabrication is straightforward and easily accessible, providing access for the first time to objects and devices that either release or reversibly activate chemical functionality over hundreds of loading cycles.
While the mechanically accelerated ring-opening reaction of spiropyran to merocyanine and its associated color change provide a useful method by which to image the molecular-scale stress/strain distribution within a polymer, the magnitude of the forces necessary for activation had yet to be quantified. Here, we report single molecule force spectroscopy studies of two spiropyran isomers. Ring opening on the timescale of tens of milliseconds is found to require forces of ~240 pN, well below that of previously characterized covalent mechanophores. The lower threshold force is a combination of a low force-free activation energy and the fact that the change in rate with force (activation length) of each isomer is greater than that inferred in other systems. Importantly, quantifying the magnitude of forces required to activate individual spiropyran-based force-probes enables the probe to behave as a “scout” of molecular forces in materials, whose observed behavior can be extrapolated to predict the reactivity of potential mechanophores within a given material and deformation.
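The combination of a low force-free barrier and a large activation length described here is conventionally modeled with the Bell relation k(F) = k0·exp(F·Δx/kBT). The sketch below inverts that relation to estimate the force at which ring opening reaches a tens-of-milliseconds timescale; k0 and Δx are assumed, illustrative values rather than the parameters fitted in this work.

```python
import numpy as np

kBT = 4.114  # thermal energy at 298 K, in pN*nm

def bell_rate(force_pN, k0_per_s, dx_nm):
    """Bell model: first-order rate constant accelerated by tensile force."""
    return k0_per_s * np.exp(force_pN * dx_nm / kBT)

# Assumed, illustrative parameters (not fitted values from this work):
k0 = 1e-3  # force-free ring-opening rate, 1/s
dx = 0.2   # activation length, nm

# Force at which opening reaches a ~10 ms timescale (rate ~ 100 /s):
target_rate = 100.0
F = kBT / dx * np.log(target_rate / k0)
print(f"force for ~10 ms opening: {F:.0f} pN")  # ~237 pN with these inputs
print(f"rate at that force: {bell_rate(F, k0, dx):.0f} /s")
```

With these assumed inputs the required force lands near the ~240 pN reported above, showing how a larger activation length pulls the threshold force down.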
We subsequently translated the design platform to existing dynamic soft technologies to fabricate the first mechanochemically responsive devices: first, by remotely inducing dielectric patterning of an elastic substrate to produce assorted fluorescent patterns in concert with topological changes; and second, by adopting a soft robotic platform to produce a color change from the strains inherent to pneumatically actuated robotic motion. As shown herein, covalent polymer mechanochemistry provides a viable mechanism to convert the same mechanical potential energy used for actuation into value-added, constructive covalent chemical responses. The color change associated with actuation suggests opportunities for not only new color-changing or camouflaging strategies, but also the possibility of simultaneous activation of latent chemistry (e.g., release of small molecules, change in mechanical properties, activation of catalysts, etc.) in soft robots. In addition, mechanochromic stress mapping in a functional actuating device might provide a useful design and optimization tool, revealing spatial and temporal force evolution within the actuator in a way that might also be coupled to feedback loops that allow autonomous self-regulation of activity.
In the future, both the specific material and the general approach should be useful in enriching the responsive functionality of soft elastomeric materials and devices. We anticipate the development of new mechanophores that, like the materials, are reversibly and repeatedly activated, expanding the capabilities of soft, active devices and further permitting dynamic control over chemical reactivity that is otherwise inaccessible, each in response to a single remote signal.
Abstract:
Currently there is no consensus as to the specific cognitive impairments that characterize mathematical disabilities (MD) or specific subtypes such as an arithmetic disability (AD). The present study sought to address this concern by examining cognitive processes that might undergird AD in children. The present study utilized archival data to conduct two investigations. The first investigation examined the executive functioning and working memory of children with AD. An age-matched, achievement-matched design was employed to explore whether children with AD exhibit developmental lags or deficits in these cognitive domains. While children with AD did not exhibit impairments in verbal working memory or colour word inhibition, they did demonstrate impairments in shifting attention, visual-spatial working memory, and quantity inhibition. As children with AD did not perform more poorly than their younger achievement-matched peers on any of these tasks, impairments in specific areas of executive functioning and working memory appeared to reflect a developmental lag rather than a cognitive deficit. The second study examined the phonological processing performance of children with AD compared to children with comorbid disabilities in arithmetic and word recognition (AD/WRD) and to typically achieving (TA) children. Results indicated that, while children with AD did demonstrate impairments on all isolated naming speed tasks, trail making digits, and memory for digits, they did not demonstrate impairments on measures of phonological awareness, nonword repetition, serial processing speed, or serial naming speed. In contrast, children with AD/WRD demonstrated impairments on measures of phonological awareness, phonological short-term memory, isolated naming speed, serial processing speed, and the alphabet a-z task. Overall, results suggested that phonological processing impairments are more prominent in children with a WRD than in children with an AD. Together, these studies further our understanding of the cognitive processes that underlie AD by focusing upon rarely used methods (i.e., the age-matched, achievement-matched design) and under-examined cognitive domains (i.e., phonological processing).
Abstract:
This book brings together experts in the fields of spatial planning, land-use and infrastructure management to explore the emerging agenda of spatially-oriented integrated evaluation. It weaves together the latest theories, case studies, methods, policy and practice to examine and assess the values, impacts, benefits and the overall success of integrated land-use management. In doing so, the book clarifies the nature and roles of evaluation and puts forward guidance for future policy and practice.
Abstract:
AIMS: Mutation detection accuracy has been described extensively; however, it is surprising that pre-PCR processing of formalin-fixed paraffin-embedded (FFPE) samples has not been systematically assessed in a clinical context. We designed a RING trial to (i) investigate pre-PCR variability, (ii) correlate pre-PCR variation with EGFR/BRAF mutation testing accuracy and (iii) investigate causes for observed variation. METHODS: 13 molecular pathology laboratories were recruited. 104 blinded FFPE curls, including engineered FFPE curls, cell-negative FFPE curls and control FFPE tissue samples, were distributed to participants for pre-PCR processing and mutation detection. Follow-up analysis was performed to assess sample purity, DNA integrity and DNA quantitation. RESULTS: The rate of mutation detection failure was 11.9%. Of these failures, 80% were attributed to pre-PCR error. Significant differences in DNA yields across all samples were seen using analysis of variance (p
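As a sketch of the kind of across-laboratory yield comparison described, a one-way analysis of variance might look like the following; the yields and laboratory groupings are hypothetical placeholders, not the trial's data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical DNA yields (ng) from matched FFPE curls processed by three labs:
lab_a = np.array([52.1, 48.3, 60.2, 55.7])
lab_b = np.array([31.0, 29.4, 35.8, 27.9])
lab_c = np.array([44.5, 51.2, 47.8, 49.9])

f_stat, p_value = f_oneway(lab_a, lab_b, lab_c)
print(f"one-way ANOVA across labs: F = {f_stat:.2f}, p = {p_value:.4f}")
```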
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes and the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing an application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools were also included to help developers design, implement, optimize, and test WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, where a sensor network is used to monitor the temperature within an area.
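To illustrate the separation the proposed architecture aims for, here is a minimal sketch in which application-level logic is written against an abstract driver interface and bound to a concrete OS/hardware target at deployment time. The class names and stub sensor reading are hypothetical; this stands in for, rather than reproduces, the thesis's MDA-generated code.

```python
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Platform-specific layer: one subclass per OS/hardware target."""
    @abstractmethod
    def read_temperature(self) -> float: ...

class TinyOSDriver(SensorDriver):  # hypothetical target binding
    def read_temperature(self) -> float:
        return 21.5  # stub standing in for a real ADC read

class SamplingTask:
    """Platform-independent application logic, bound to a driver at deploy time."""
    def __init__(self, driver: SensorDriver, threshold: float):
        self.driver, self.threshold = driver, threshold

    def step(self) -> None:
        t = self.driver.read_temperature()
        if t > self.threshold:
            print(f"alert: {t:.1f} C")  # stand-in for a radio transmission

SamplingTask(TinyOSDriver(), threshold=20.0).step()
```

Retargeting the application to new hardware then means supplying a new driver subclass, while the task logic is untouched, which is the portability property the case studies measure.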
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is applied as a set of forces to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our "plastic-CML" in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very large scale integration (VLSI) implementation.
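As a toy illustration of the segmentation idea (not the paper's actual plastic-CML dynamics or parameter settings), the following one-dimensional lattice uses diffusive coupling that "yields" across large jumps, so noise is smoothed away while the object boundary survives; the coupling strength and yield threshold are assumed values.

```python
import numpy as np

# Toy 1-D "plastic" lattice: diffusive coupling that yields (breaks) across
# large jumps, so noise is smoothed while the object boundary survives.
rng = np.random.default_rng(1)
image_row = np.r_[np.zeros(50), np.ones(50)] + 0.2 * rng.standard_normal(100)

x = image_row.copy()
eps, yield_gap = 0.4, 0.5  # coupling strength and yield threshold (assumed)
for _ in range(200):
    left, right = np.roll(x, 1), np.roll(x, -1)
    # Couple only to neighbours within the yield threshold (plastic decoupling):
    cl = np.where(np.abs(left - x) < yield_gap, left - x, 0.0)
    cr = np.where(np.abs(right - x) < yield_gap, right - x, 0.0)
    x += eps * 0.5 * (cl + cr)

print("segment means:", x[:50].mean().round(2), x[50:].mean().round(2))
```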
Abstract:
Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, this research developed a model via the constructive research method, relying on literature review. Several approaches have revealed improvements in the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether the person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustical processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustical processing, although the model itself is of low complexity, placing a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. This thesis concludes by highlighting challenges for future research.
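One element of the model, beamforming steered by the decoded attention direction, can be sketched as a basic delay-and-sum beamformer. The array geometry, signals, and attended angle below are placeholders, with the angle standing in for information the model would derive from EEG.

```python
import numpy as np

def delay_and_sum(mic_signals, fs, mic_positions_m, angle_rad, c=343.0):
    """Align and average a linear array toward the attended direction."""
    delays = mic_positions_m * np.sin(angle_rad) / c  # per-mic delay, seconds
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros_like(mic_signals[0])
    for sig, s in zip(mic_signals, shifts):
        out += np.roll(sig, -s)  # integer-sample alignment (coarse but simple)
    return out / len(mic_signals)

fs = 16000
t = np.arange(fs) / fs
positions = np.array([0.0, 0.05, 0.10, 0.15])  # 4 mics, 5 cm spacing
mics = [np.sin(2 * np.pi * 440 * t) for _ in positions]  # placeholder recordings
attended = np.deg2rad(30)  # in the full model this would come from EEG decoding
enhanced = delay_and_sum(mics, fs, positions, attended)
print(enhanced.shape)
```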
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.
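The progress semantics described for NOW! can be illustrated with deterministic, nested sampling: one fixed permutation defines every progressive sample, so each pass reuses all rows of the previous pass and repeated runs give identical early results. The sketch below is a minimal single-machine analogue, not the system's implementation.

```python
import numpy as np

def progressive_mean(values, sample_sizes, seed=42):
    """Deterministic progressive aggregation: one fixed permutation defines
    nested samples, so each pass reuses all rows of the previous pass."""
    order = np.random.default_rng(seed).permutation(len(values))
    for n in sample_sizes:
        yield n, values[order[:n]].mean()  # early result over the first n rows

data = np.random.default_rng(7).exponential(10.0, 1_000_000)
for n, estimate in progressive_mean(data, [1_000, 10_000, 100_000, 1_000_000]):
    print(f"n = {n:>9,}: mean ~ {estimate:.3f}")
```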
Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
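The neighborhood-centric style can be sketched on a single machine, with networkx standing in for NSCALE's distributed runtime: the user function receives an extracted ego subgraph rather than the state of one vertex. The graph, radius, and per-neighborhood task below are illustrative.

```python
import networkx as nx

def local_clustering(ego_subgraph, center):
    """Example per-neighborhood task: clustering coefficient of the center node."""
    return nx.clustering(ego_subgraph, center)

G = nx.karate_club_graph()
for v in list(G)[:5]:
    sub = nx.ego_graph(G, v, radius=1)  # declaratively specified 1-hop subgraph
    print(v, round(local_clustering(sub, v), 3))
```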
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
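A minimal sketch of a read/write-frequency-driven replication decision of the kind described follows; the thresholds and the three-way outcome are invented for illustration and do not reproduce the dissertation's actual policy.

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    reads: int
    writes: int

def replication_decision(s: NodeStats, replicate_ratio=5.0, eager_ratio=20.0):
    """Toy hybrid policy: replicate read-hot nodes; push updates eagerly only
    when reads dominate writes heavily, otherwise propagate them lazily."""
    ratio = float("inf") if s.writes == 0 else s.reads / s.writes
    if ratio >= eager_ratio:
        return "replicate, eager updates"
    if ratio >= replicate_ratio:
        return "replicate, lazy updates"
    return "no replica"  # write-hot: remote reads beat constant update traffic

for stats in (NodeStats(1000, 10), NodeStats(300, 50), NodeStats(20, 80)):
    print(stats, "->", replication_decision(stats))
```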
Abstract:
Two-phase flow heat exchangers have been shown to have very high efficiencies, but the lack of a dependable model and data precludes their use in many cases. Herein, a new method for the measurement of local convective heat transfer coefficients from the outside of a heat-transferring wall has been developed, which yields accurate local measurements of heat flux during two-phase flow. This novel technique uses a chevron-pattern corrugated plate heat exchanger (PHE) consisting of a specially machined calcium fluoride plate and the refrigerant HFE7100, with heat flux values up to 1 W cm⁻² and flow rates up to 300 kg m⁻² s⁻¹. As calcium fluoride is largely transparent to infra-red radiation, the surface temperature of the PHE face in direct contact with the liquid is measured with a mid-range (3.0-5.1 µm) infra-red camera. The objective of this study is to develop, validate, and use a unique infrared thermometry method to quantify the heat transfer characteristics of flow boiling within different plate heat exchanger geometries. This new method allows measurements of high spatial and temporal resolution. Furthermore, quasi-local pressure measurements enable us to characterize the performance of each geometry. Validation of this technique is demonstrated by comparison to accepted single- and two-phase data. The results can be used to develop new heat transfer correlations and optimization tools for heat exchanger designers. The scientific contribution of this thesis is to give PHE developers further tools to identify the heat transfer and pressure drop performance of any corrugated plate pattern directly, without the need to account for typical error sources due to inlet and outlet distribution systems. Furthermore, designers will now gain information on the local heat transfer distribution within one plate heat exchanger cell, which will help in choosing the correct corrugation geometry for a given task.
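The quantity ultimately recovered from such measurements is the local convective heat transfer coefficient h = q'' / (T_wall - T_sat). A minimal sketch, assuming a uniform imposed heat flux and an IR-derived wall-temperature map; the temperatures and saturation point below are placeholders, not measured values.

```python
import numpy as np

def local_htc(q_flux_w_per_cm2, T_wall_C, T_sat_C):
    """Local convective heat transfer coefficient h = q'' / (T_wall - T_sat)."""
    q = q_flux_w_per_cm2 * 1e4       # W/cm^2 -> W/m^2
    return q / (T_wall_C - T_sat_C)  # W/(m^2 K)

# Hypothetical IR-derived wall temperatures (deg C) over a small patch:
T_wall = np.array([[62.1, 61.8],
                   [63.0, 62.4]])
T_sat = 61.0  # assumed local saturation temperature of HFE7100

h = local_htc(1.0, T_wall, T_sat)  # per-pixel h at 1 W/cm^2
print(np.round(h))  # W/(m^2 K)
```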
Abstract:
This dissertation investigates the acquisition of oblique relative clauses in L2 Spanish by English and Moroccan Arabic speakers in order to understand the role of previous linguistic knowledge and its interaction with Universal Grammar on the one hand, and the relationship between grammatical knowledge and its use in real time on the other. Three types of tasks were employed: an oral production task, an on-line self-paced grammaticality judgment task, and an on-line self-paced reading comprehension task. Results indicated that the acquisition of oblique relative clauses in Spanish is a problematic area for second language learners of intermediate proficiency in the language, regardless of their native language. In particular, this study has shown that, even when the learners’ native language shares the main properties of the L2, i.e., fronting of the obligatory preposition (Pied-Piping), there is still room for divergence, especially in production and timed grammatical intuitions. On the other hand, reaction time data have shown that L2 learners can and do converge at the level of sentence processing, showing exactly the same real-time effects for oblique relative clauses that native speakers had. Processing results demonstrated that native and non-native speakers alike are able to apply universal processing principles such as the Minimal Chain Principle (De Vincenzi, 1991) even when the L2 learners still have incomplete grammatical representations, a result that contradicts some of the predictions of the Shallow Structure Hypothesis (Clahsen & Felser, 2006). Results further suggest that the L2 processing and comprehension domains may be able to access some type of information that is not yet available to other grammatical modules, probably because transfer of certain L1 properties occurs asymmetrically across linguistic domains. In addition, this study also explored the Null-Prep phenomenon in L2 Spanish, and proposed that Null-Prep is an interlanguage stage, fully available and accounted for within UG, which intermediate L2 as well as first language learners go through in the development of pied-piping oblique relative clauses. It is hypothesized that this intermediate stage is the result of optionality of the obligatory preposition in the derivation, when it is not crucial for the meaning of the sentence, and when the DP is going to be in an A-bar position, so it can get default case. This optionality can be predicted by the Bottleneck Hypothesis (Slabakova, 2009c) if we consider that these prepositions are some sort of functional morphology. This study contributes to the field of SLA and L2 processing in various ways. First, it demonstrates that grammatical representations may be dissociated from grammatical processing in the sense that L2 learners, unlike native speakers, can present unexpected asymmetries such as convergent processing but divergent grammatical intuitions or production. This conclusion is only possible under the assumption of a modular language system. Finally, it contributes to the general debate in generative SLA since it argues for a fully UG-constrained interlanguage grammar.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background into Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also discussion of the differences with RPCA that make theoretical guarantees difficult.
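The scaling problem discussed here can be stated concretely: given frame vectors f_i, find nonnegative weights w_i (the squares of the scaling coefficients) with Σ_i w_i f_i f_iᵀ = I. A minimal sketch via nonnegative least squares, using the Mercedes-Benz frame in R², which is known to be scalable.

```python
import numpy as np
from scipy.optimize import nnls

def scale_frame(F):
    """Find w >= 0 with sum_i w_i f_i f_i^T = I, posed as nonnegative least
    squares over the vectorized outer products of the frame columns."""
    d, n = F.shape
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(n)])
    w, residual = nnls(A, np.eye(d).ravel())
    return w, residual  # scaling coefficients are sqrt(w)

# Mercedes-Benz frame in R^2: three unit vectors 120 degrees apart.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
w, res = scale_frame(F)
print("weights:", np.round(w, 4), " residual:", round(res, 6))
```

A zero residual certifies scalability (here w_i = 2/3 for all three vectors); a strictly positive residual indicates the frame is not scalable, which connects to the optimization formulations above.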