8 results for objectivity without objects
in CaltechTHESIS
Abstract:
This thesis presents a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world. The model relies on a set of control neurons to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex, thus enabling attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant. The representation in V1 is modeled as a multiscale stack of sample nodes with progressively lower resolution at higher eccentricities. Large changes in the size of the attentional window are accomplished by switching between different levels of the multiscale stack, while positional shifts and small changes in scale are accomplished by translating and rescaling the window within a single level of the stack. The control signals for setting the position and size of the attentional window are hypothesized to originate from neurons in the pulvinar and in the deep layers of visual cortex. The dynamics of these control neurons are governed by simple differential equations that can be realized by neurobiologically plausible circuits. In pre-attentive mode, the control neurons receive their input from a low-level "saliency map" representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions.
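The routing idea can be illustrated with a small numerical sketch (hypothetical helper names, not the thesis's model): a multiscale stack is built from the V1 map, large scale changes select a stack level, and smaller shifts translate the window within that level, so the attended patch lands in a fixed, object-centered output frame with its local topography preserved.

    # Illustrative sketch only: routing an attended window from a multiscale
    # stack into a fixed-size, object-centered output frame.
    import numpy as np

    def build_stack(v1_map, levels=4):
        """Hypothetical multiscale stack: progressively coarser copies of the V1 map."""
        stack = [v1_map]
        for _ in range(levels - 1):
            stack.append(stack[-1][::2, ::2])      # halve resolution at each level
        return stack

    def route_window(stack, center, size, out_size=16):
        """Pick the stack level matching the window size, then translate and
        rescale the window into a fixed output frame, preserving topography."""
        level = int(np.clip(np.log2(size / out_size), 0, len(stack) - 1))
        img = stack[level]
        cy, cx = (c // 2**level for c in center)   # window center at that level
        half = max(out_size // 2, 1)
        return img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]

    stack = build_stack(np.random.rand(256, 256))
    attended = route_window(stack, center=(120, 180), size=64)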
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; their intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby bringing them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identify the two elastic loss angles, clarify the different components of the coating Brownian noise, and obtain their cross spectral densities.
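The starting point for such analyses, stated here only in its general form (not the thesis's specific two-loss-angle expressions), is the fluctuation-dissipation theorem, which ties the displacement noise read out by the laser beam to the mechanical dissipation of the mirror and its coating:

    S_x(\omega) = \frac{4 k_B T}{\omega}\,\mathrm{Im}\,\chi_{xx}(\omega),

where \chi_{xx}(\omega) is the susceptibility of the beam-weighted mirror-surface displacement to an applied pressure with the beam's intensity profile, and its imaginary part is set by the loss angles of the substrate and coating materials.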
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
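In schematic terms (a generic open-system picture, not the thesis's specific formalism), the plant couples to a bath operator \hat F through H = H_S + H_B + \hat L \otimes \hat F, and the bath's character is encoded in its correlation function C(t,t') = \langle \hat F(t)\hat F(t')\rangle_B: a Markovian bath has C(t,t') \propto \delta(t-t'), i.e., no memory, whereas a non-Markovian bath has a correlation function of finite temporal extent, so information that leaks into the bath can later act back on the plant.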
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes, a type of object predicted by general relativity whose properties depend strongly on the strong-field regime of the theory. Although black holes have been inferred to exist at the centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting around a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 10 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With the tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation for the scalar Green function in Kerr spacetime.
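The flavor of this geometric correspondence is easiest to state for the non-spinning (Schwarzschild) case, quoted here for orientation rather than as the thesis's Kerr result: in the eikonal limit the weakly-damped QNM frequencies are approximately

    \omega_{ln} \approx \left(l + \tfrac{1}{2}\right)\Omega_c - i\left(n + \tfrac{1}{2}\right)\lambda,

where \Omega_c is the orbital angular frequency of the circular photon orbit and \lambda is the Lyapunov exponent of its unstable perturbations. Chapter 9 generalizes this picture to Kerr, where the real part is built from the orbital and precessional frequencies of spherical photon orbits.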
Abstract:
Optical microscopy has become an indispensable tool for biological research since its invention, mostly owing to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears to be. This is because, in the acquired images, information from out-of-focus regions is spatially blurred and mixed with the in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional spatial information, or volumetric information, about the object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.
The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section were being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.
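A toy numerical illustration of this contrast (an assumed idealization, not any of the instruments discussed in the thesis): in a wide-field image every plane contributes, blurred in proportion to its defocus, whereas an ideal optical section keeps only the in-focus plane, so stacking sections over depth recovers the volume directly.

    # Toy model: wide-field imaging mixes blurred out-of-focus planes into each
    # frame; ideal optical sectioning detects only the in-focus plane.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def widefield_image(sample, z_focus, blur_per_plane=2.0):
        """Every plane contributes, blurred in proportion to its defocus."""
        img = np.zeros(sample.shape[1:])
        for z in range(sample.shape[0]):
            img += gaussian_filter(sample[z], sigma=blur_per_plane * abs(z - z_focus))
        return img

    def sectioned_image(sample, z_focus):
        """Ideal optical sectioning: only the in-focus plane is detected."""
        return sample[z_focus]

    sample = np.random.rand(20, 128, 128)                  # toy (z, y, x) object
    wf = widefield_image(sample, z_focus=10)               # blurred, mixed depths
    volume = np.stack([sectioned_image(sample, z) for z in range(sample.shape[0])])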
Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful one in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photodamage because of the high focal intensity at the scanning point. In order to overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.
In order to develop wide-field optical-sectioning techniques whose optical performance is equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations that aim to overcome these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. These advantages together show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.
Abstract:
My thesis studies how people pay attention to other people and the environment. How does the brain figure out what is important and what are the neural mechanisms underlying attention? What is special about salient social cues compared to salient non-social cues? In Chapter I, I review social cues that attract attention, with an emphasis on the neurobiology of these social cues. I also review neurological and psychiatric links: the relationship between saliency, the amygdala and autism. The first empirical chapter then begins by noting that people constantly move in the environment. In Chapter II, I study the spatial cues that attract attention during locomotion using a cued speeded discrimination task. I found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The more ecologically valid the motion features became (e.g., temporal expansion of each object, spatial depth structure implied by distribution of the size of the objects), the stronger the attentional effects. However, compared to inanimate objects and cues, people preferentially attend to animals and faces, a process in which the amygdala is thought to play an important role. To directly compare social cues and non-social cues in the same experiment and investigate the neural structures processing social cues, in Chapter III, I employ a change detection task and test four rare patients with bilateral amygdala lesions. All four amygdala patients showed a normal pattern of reliably faster and more accurate detection of animate stimuli, suggesting that advantageous processing of social cues can be preserved even without the amygdala, a key structure of the “social brain”. People not only attend to faces, but also pay attention to others’ facial emotions and analyze faces in great detail. Humans have a dedicated system for processing faces and the amygdala has long been associated with a key role in recognizing facial emotions. In Chapter IV, I study the neural mechanisms of emotion perception and find that single neurons in the human amygdala are selective for subjective judgment of others’ emotions. Lastly, people typically pay special attention to faces and people, but people with autism spectrum disorders (ASD) might not. To further study social attention and explore possible deficits of social attention in autism, in Chapter V, I employ a visual search task and show that people with ASD have reduced attention, especially social attention, to target-congruent objects in the search array. This deficit cannot be explained by low-level visual properties of the stimuli and is independent of the amygdala, but it is dependent on task demands. Overall, through visual psychophysics with concurrent eye-tracking, my thesis found and analyzed socially salient cues and compared social vs. non-social cues and healthy vs. clinical populations. Neural mechanisms underlying social saliency were elucidated through electrophysiology and lesion studies. I finally propose further research questions based on the findings in my thesis and introduce my follow-up studies and preliminary results beyond the scope of this thesis in the very last section, Future Directions.
Abstract:
Being able to detect a single molecule without the use of labels has been a long-standing goal of bioengineers and physicists. This would simplify applications ranging from single-molecule binding studies to those involving public health and security, improved drug screening, medical diagnostics, and genome sequencing. One promising technique that has the potential to detect single molecules is the microtoroid optical resonator. The main obstacle to detecting single molecules, however, is decreasing the noise level of the measurements such that a single molecule can be distinguished from background. We have used laser frequency locking in combination with balanced detection and data processing techniques to reduce the noise level of these devices, and we report the detection of a wide range of nanoscale objects ranging from nanoparticles with radii from 100 to 2.5 nm, to exosomes, ribosomes, and single protein molecules (mouse immunoglobulin G and human interleukin-2). We further extend the exosome results towards creating a non-invasive tumor biopsy assay. Our results, covering several orders of magnitude of particle radius (100 nm to 2 nm), agree with the "reactive" model prediction for the frequency shift of the resonator upon particle binding. In addition, we demonstrate that molecular weight may be estimated from the frequency shift through a simple formula, thus providing a basis for an "optical mass spectrometer" in solution. We anticipate that our results will enable many applications, including more sensitive medical diagnostics and fundamental studies of single receptor-ligand and protein-protein interactions in real time. The thesis summarizes what we have achieved thus far and shows that the goal of detecting a single molecule without the use of labels can now be realized.
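A commonly quoted form of the reactive model (stated here for illustration; the thesis's exact expressions may differ) gives the fractional resonance shift when a particle of excess polarizability \alpha binds at position \mathbf{r}_p on the resonator surface:

    \frac{\Delta\omega}{\omega} \simeq -\,\frac{\alpha\,|\mathbf{E}(\mathbf{r}_p)|^{2}}{2\int \epsilon(\mathbf{r})\,|\mathbf{E}(\mathbf{r})|^{2}\,dV}.

Because \alpha scales with the particle's volume, and hence (at roughly constant density) with its mass, a measured frequency shift can be converted into a molecular-weight estimate, which is the basis of the "optical mass spectrometer" idea above.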
Abstract:
Light has long been used for the precise measurement of moving bodies, but the burgeoning field of optomechanics is concerned with the interaction of light and matter in a regime where the typically weak radiation pressure force of light is able to push back on the moving object. This field began with the realization in the late 1960s that the momentum imparted by a recoiling photon on a mirror would place fundamental limits on the smallest measurable displacement of that mirror. This coupling between the frequency of light and the motion of a mechanical object does much more than simply add noise, however. It has been used to cool objects to their quantum ground state, demonstrate electromagnetically induced transparency, and modify the damping and spring constant of the resonator. Amazingly, these radiation pressure effects have now been demonstrated in systems spanning 18 orders of magnitude in mass (kg to fg).
In this work we will focus on three diverse experiments in three different optomechanical devices which span the fields of inertial sensors, closed-loop feedback, and nonlinear dynamics. The mechanical elements presented cover 6 orders of magnitude in mass (ng to fg), but they all employ nano-scale photonic crystals to trap light and resonantly enhance the light-matter interaction. In the first experiment we take advantage of the sub-femtometer displacement resolution of our photonic crystals to demonstrate a sensitive chip-scale optical accelerometer with a kHz-frequency mechanical resonator. This sensor has a noise density of approximately 10 micro-g/rt-Hz over a usable bandwidth of approximately 20 kHz, and we demonstrate at least 50 dB of linear dynamic sensor range. We also discuss methods to further improve the performance of this device by a factor of 10.
In the second experiment, we use a closed-loop measurement and feedback system to damp and cool a room-temperature MHz-frequency mechanical oscillator from a phonon occupation of 6.5 million down to just 66. At the time of the experiment, this represented a world-record result for the laser cooling of a macroscopic mechanical element without the aid of cryogenic pre-cooling. Furthermore, this closed-loop damping yields a high-resolution force sensor with a practical bandwidth of 200 kHz, and the method has applications to other optomechanical sensors.
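As a rough consistency check (assuming a mechanical frequency near 1 MHz, in line with the "MHz-frequency" description above), the starting occupation is simply the room-temperature thermal occupation

    \bar{n}_{\mathrm{th}} \simeq \frac{k_B T}{\hbar\,\omega_m} = \frac{(1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})}{(1.05\times10^{-34}\,\mathrm{J\,s})(2\pi\times10^{6}\,\mathrm{s^{-1}})} \approx 6\times10^{6},

consistent with the quoted 6.5 million phonons before feedback cooling is applied.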
The final experiment contains results from a GHz-frequency mechanical resonator in a regime where the nonlinearity of the radiation-pressure interaction dominates the system dynamics. In this device we show self-oscillations of the mechanical element that are driven by multi-photon-phonon scattering. Control of the system allows us to initialize the mechanical oscillator into a stable high-amplitude attractor which would otherwise be inaccessible. To provide context, we begin this work by first presenting an intuitive overview of optomechanical systems and then providing an extended discussion of the principles underlying the design and fabrication of our optomechanical devices.
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
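A minimal numerical sketch of this contact-space picture (illustrative only; the helper names are hypothetical and this is not the thesis's algorithm) parameterizes each finger by an arc-length coordinate along the polygon boundary and samples the inter-finger distance function, whose critical points organize the caging regions described above.

    # Sample the inter-finger distance d(s1, s2) over contact space for a polygon;
    # critical points of this map correspond to critical points of the cage.
    import numpy as np

    def boundary_point(vertices, s):
        """Point at arc length s along the closed polygon boundary."""
        edges = np.diff(np.vstack([vertices, vertices[:1]]), axis=0)
        lengths = np.linalg.norm(edges, axis=1)
        s = s % lengths.sum()
        for v, e, L in zip(vertices, edges, lengths):
            if s <= L:
                return v + (s / L) * e
            s -= L
        return vertices[0]

    def inter_finger_distance(vertices, s1, s2):
        return np.linalg.norm(boundary_point(vertices, s1) - boundary_point(vertices, s2))

    square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])   # toy object
    grid = np.linspace(0.0, 4.0, 80)                               # perimeter coords
    D = np.array([[inter_finger_distance(square, a, b) for b in grid] for a in grid])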
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower-dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
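For orientation, the elementary step underlying photon-transport Monte Carlo simulations of this kind (quoted here in its generic textbook form, not the thesis's optimized implementation) samples a free path from the Beer-Lambert law and a scattering angle from the Henyey-Greenstein phase function:

    # Generic photon-transport Monte Carlo step (textbook form, illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_free_path(mu_t):
        """Path length to the next interaction; mu_t is the total attenuation [1/mm]."""
        return -np.log(1.0 - rng.uniform()) / mu_t

    def sample_hg_cos_theta(g):
        """Cosine of the scattering angle for anisotropy factor g."""
        if abs(g) < 1e-6:
            return 2.0 * rng.uniform() - 1.0                  # isotropic limit
        frac = (1.0 - g**2) / (1.0 - g + 2.0 * g * rng.uniform())
        return (1.0 + g**2 - frac**2) / (2.0 * g)

    # One scattering event in a tissue-like medium (assumed optical properties).
    mu_t, g = 10.0, 0.9
    step = sample_free_path(mu_t)
    cos_theta = sample_hg_cos_theta(g)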
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); the image is then handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers, and by doing so reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
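A hedged sketch of this two-stage "committee of experts" idea (the data, features, and model choices below are hypothetical stand-ins, not the thesis's actual pipeline): a classifier first predicts an image's structure type, then a regressor trained for that structure predicts its layer parameters.

    # Stage 1: classify the structure type; Stage 2: structure-specific regression.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    n_images, n_features, n_structures = 2000, 256, 3
    X = np.random.rand(n_images, n_features)                    # stand-in image features
    structure = np.random.randint(n_structures, size=n_images)  # ground-truth type
    layers = np.random.rand(n_images, 4)                        # ground-truth layer sizes

    classifier = RandomForestClassifier().fit(X, structure)
    regressors = {s: RandomForestRegressor().fit(X[structure == s], layers[structure == s])
                  for s in range(n_structures)}

    def reconstruct(image_features):
        s = classifier.predict(image_features[None])[0]              # 1) which structure?
        return regressors[s].predict(image_features[None])[0]        # 2) its layer sizes

    prediction = reconstruct(X[0])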
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.