951 results for single system image
Abstract:
The effect of alcohol solution on single human red blood cells (RBCs) was investigated using near-infrared laser tweezers Raman spectroscopy (LTRS). In our system, a low-power diode laser at 785 nm was applied for the trapping of a living cell and the excitation of its Raman spectrum. Such a design could simultaneously reduce photo-damage to the cell and suppress fluorescence interference with the Raman signal. The denaturation process of single RBCs in 20% alcohol solution was investigated by detecting the time evolution of the Raman spectra at the single-cell level. The vitality of RBCs was characterized by the Raman band at 752 cm(-1), which corresponds to the porphyrin breathing mode. We found that the intensity of this band decreased by 34.1% over a period of 25 min after the administration of alcohol. In a further study of the dependence of denaturation on alcohol concentration, we discovered that the decrease in the intensity of the 752 cm(-1) band became more rapid and more prominent as the alcohol concentration increased. The present LTRS technique may have several potential applications in cell biology and medicine, including probing dynamic cellular processes at the single-cell level and diagnosing cell disorders in real time. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
A single-cell diagnostic technique for epithelial cancers is developed by utilizing laser trapping and Raman spectroscopy to differentiate cancerous and normal epithelial cells. Single-cell suspensions were prepared from surgically removed human colorectal tissues following standard primary culture protocols and examined in a near-infrared laser-trapping Raman spectroscopy system, where living epithelial cells were investigated one by one. A diagnostic model was built on the spectral data obtained from 8 patients and validated by the data from 2 new patients. Our technique has potential applications from epithelial cancer diagnosis to the study of cell dynamics of carcinogenesis. (c) 2006 Optical Society of America.
Abstract:
Estuaries are highly dynamic environments, and most of the world's population is concentrated around them. They are complex environments that require a broad range of studies. In this context, this work aims to contribute to the understanding of lagoonal estuaries, its objective being to compare two acoustic geophysical tools for mapping a submerged portion of the Mar de Cananéia, which lies within the Cananéia-Iguape Estuarine-Lagoonal System (SP). The equipment used in this research was the side-scan sonar and the RoxAnn acoustic seabed classification system, parameterized with bottom samples. Comparing the acoustic pattern of the side-scan sonar with bottom samples from the region allowed the recognition of 6 distinct acoustic patterns, and the positive relationship with mean grain diameter was 50%. The comparison of the acoustic response of the RoxAnn system with mean grain diameter was likewise 50%. This is because the values produced by echo 1 and echo 2 of this equipment show that, being a single-beam system that analyzes acoustic return intensity values, it may respond to environmental factors other than mean grain diameter alone. Comparing the acoustic response of the side-scan sonar with the RoxAnn seabed classification system yielded a positive result of 93%. This can be explained by the fact that the side-scan sonar generates an acoustic image of the bottom. At locations where samples exist and the echo 1 and echo 2 values of the RoxAnn system are high, the influence of fine-sediment compaction can be attributed to those locations through analysis of the side-scan sonar images. By comparing these two methods it was possible to establish a range of echo 1 values that can be associated with mean grain diameter. Thus, values between 0.170 and 0.484 millivolts can be associated with fine sediments up to fine sand; values between 0.364 and 0.733 millivolts can be associated with sediments from fine to medium sand; values from 0.805 to 1.585 millivolts can be associated with coarser sediments such as biodetrital carbonates or coarse sands; and, finally, values above 2.790 millivolts can be associated with rocky outcrops.
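As a purely illustrative aid, the echo 1 intervals quoted above can be read as a simple lookup. The sketch below is hypothetical: the function name, the handling of the overlapping 0.364-0.484 mV interval, and the "unclassified" fallback are assumptions, not part of the original work.

```python
def classify_seabed(echo1_mv: float) -> str:
    """Map a RoxAnn echo-1 value (millivolts) to a sediment class.

    Thresholds are the intervals quoted in the abstract; the reported
    ranges overlap between 0.364 and 0.484 mV, so the order of the
    checks below is an assumption, not part of the original study.
    """
    if 0.170 <= echo1_mv <= 0.484:
        return "fine sediments up to fine sand"
    if 0.364 <= echo1_mv <= 0.733:
        return "fine to medium sand"
    if 0.805 <= echo1_mv <= 1.585:
        return "coarser sediments (biodetrital carbonates or coarse sand)"
    if echo1_mv > 2.790:
        return "rocky outcrop"
    return "unclassified"

print(classify_seabed(0.42))  # -> "fine sediments up to fine sand"
```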
Abstract:
A transmit-receive telescope with a bidirectional loop structure for synthetic aperture imaging ladar is proposed; the bidirectional loop comprises a transmitting 4-f image-relay system, a receiving 4-f image-relay system, and a separate telescope. A defocus and a phase-modulation plate bias are placed in the transmitting channel, and a defocus and a phase plate bias are placed in the receiving channel. By controlling the transmitting defocus amount, the transmitting phase-modulation function, the receiving defocus amount, and the receiving phase-modulation function, the same telescope can simultaneously realize laser transmission with an additional spatial quadratic phase bias and defocused optical reception that cancels the wavefront aberration of the echo scattered back from target points, while producing a suitable and controllable quadratic phase history along the radar's direction of motion, thereby achieving aperture-synthesis imaging. The system design is presented in detail, and the transfer equations for the complete process from transmission to optoelectronic heterodyne reception are given.
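For reference, the quadratic phase history on which aperture synthesis relies is conventionally written, in the standard synthetic-aperture approximation (a textbook statement, not a formula quoted from the abstract), as

\[
\varphi(t)\;\approx\;-\frac{2\pi}{\lambda}\,\frac{(v t)^{2}}{R},
\]

where \(v\) is the platform velocity along the motion direction, \(R\) the target range, and \(\lambda\) the optical wavelength; the bidirectional loop described above is arranged so that transmission and reception jointly produce a controllable term of this form.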
Abstract:
Computation technology has dramatically changed the world around us; you can hardly find an area where cell phones have not saturated the market, yet there is a significant lack of breakthroughs in integrating the computer with biological environments. This is largely the result of the incompatibility of the materials used in the two environments; biological environments and experiments tend to require aqueous conditions. To aid in this development, chemists, engineers, physicists, and biologists have begun to develop microfluidics to help bridge this divide. Unfortunately, microfluidic devices have required large external support equipment to run. This thesis presents a series of microfluidic methods that can help integrate engineering and biology by exploiting nanotechnology, pushing the field of microfluidics back toward its intended purpose: small, integrated biological and electrical devices. I demonstrate this goal by developing different methods and devices to (1) separate membrane-bound proteins with the use of microfluidics, (2) use optical technology to make fiber optic cables into protein sensors, (3) generate new fluidic devices using semiconductor material to manipulate single cells, and (4) develop a new genetic microfluidic-based diagnostic assay that works with current PCR methodology to provide faster and cheaper results. All of these methods and systems can be used as components to build a self-contained biomedical device.
Abstract:
Intrinsically fuzzy morphological erosion and dilation are extended to a total of eight operations that have been formulated in terms of a single morphological operation--biased dilation. Based on the spatial coding of a fuzzy variable, a bidirectional projection concept is proposed. Thus, fuzzy logic operations, arithmetic operations, gray-scale dilation, and erosion for the extended intrinsically fuzzy morphological operations can be included in a unified algorithm with only biased dilation and fuzzy logic operations. To execute this image algebra approach we present a cellular two-layer processing architecture that consists of a biased dilation processor and a fuzzy logic processor. (C) 1996 Optical Society of America
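As a hedged illustration of the building blocks involved, the sketch below uses the textbook 1-D gray-scale definitions and reads "biased dilation" as ordinary gray-scale dilation followed by subtraction of a constant bias; this is an illustrative interpretation, not the paper's exact formulation or its cellular processing architecture.

```python
import numpy as np

def gray_dilation(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Textbook 1-D gray-scale dilation: (f (+) g)(x) = max_s [ f(x - s) + g(s) ]."""
    n, m = len(f), len(g)
    out = np.full(n, -np.inf)
    for x in range(n):
        for s in range(m):
            if 0 <= x - s < n:
                out[x] = max(out[x], f[x - s] + g[s])
    return out

def biased_dilation(f: np.ndarray, g: np.ndarray, bias: float) -> np.ndarray:
    """Illustrative reading of 'biased dilation': dilation minus a constant bias."""
    return gray_dilation(f, g) - bias

def gray_erosion(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Erosion expressed through the dilation engine via the standard duality
    f (-) g = -((-f) (+) g_reflected), up to a support shift for finite g;
    this reduction to a single operation is the kind the abstract describes."""
    return -gray_dilation(-f, g[::-1])
```

Pointwise fuzzy logic operations on membership values (min/max) would then complete a unified operator set of the sort the abstract describes, with all morphological work routed through the one dilation primitive.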
Abstract:
An ordered gray-scale erosion is suggested according to the definition of hit-miss transform. Instead of using three operations, two images, and two structuring elements, the developed operation requires only one operation and one structuring element, but with three gray-scale levels. Therefore, a union of the ordered gray-scale erosions with different structuring elements can constitute a simple image algebra to program any combined image processing function. An optical parallel ordered gray-scale erosion processor is developed based on the incoherent correlation in a single channel. Experimental results are also given for an edge detection and a pattern recognition. (C) 1998 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(98)00306-7].
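For orientation, the classical hit-miss transform that the ordered erosion generalizes is (a textbook definition, not the paper's single-structuring-element reformulation)

\[
A \circledast (B_{1}, B_{2}) \;=\; (A \ominus B_{1}) \,\cap\, (A^{c} \ominus B_{2}),
\]

i.e., three operations (two erosions and an intersection) acting on two images (\(A\) and its complement) with two structuring elements; the abstract's contribution is to fold this into one erosion with a single three-level structuring element.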
Abstract:
This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.
Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work is addressed to the questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions, it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if recording points are to be chosen independently. Accordingly, such techniques were developed and are described.
One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed and an approach to computer-aided data analysis developed. A description is given of members of a collection of application programs developed for analysis of neuro-physiological data and operated in the environment of and with support from an appropriate computer system. In particular, techniques developed for classification of multiple units recorded on the same electrode are illustrated as are methods for convenient graphical manipulation of data via a computer-driven display.
By means of multiple-electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship. This relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is of an excitatory nature; for those responding to motion in opposed directions, it is inhibitory.
Experience gained from use of the computer system is discussed and a critical review of the current system is given. The most useful features of the system were found to be the fast response, the ability to go from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier: the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.
Abstract:
PART I
The energy spectrum of heavily-doped molecular crystals was treated in the Green's function formulation. The mixed-crystal Green's function was obtained by averaging over all possible impurity distributions. The resulting Green's function, which takes the form of an infinite perturbation expansion, was further approximated by a closed form suitable for numerical calculations. The density-of-states functions and optical spectra for binary mixtures of normal naphthalene and deuterated naphthalene were calculated using the pure-crystal density-of-states functions. The results showed that when the trap depth is large, two separate energy bands persist, but when the trap depth is small only a single band exists. Furthermore, in the former case it was found that the intensities of the outer Davydov bands are enhanced whereas the inner bands are weakened. Comparisons with previous theoretical calculations and experimental results are also made.
PART II
The energy states and optical spectra of heavily-doped mixed crystals are investigated. Studies are made for the following binary systems: (1) naphthalene-h8 and -d8, (2) naphthalene-h8 and -αd4, and (3) naphthalene-h8 and -βd1, corresponding to strong, medium and weak perturbations. In addition to ordinary absorption spectra at 4°K, band-to-band transitions at both 4°K and 77°K are also analyzed with emphasis on their relations to cooperative excitation and overall density-of-states functions for mixed crystals. It is found that the theoretical calculations presented in a previous paper agree generally with experiments except for cluster states observed in system (1) at lower guest concentrations. These features are discussed semi-quantitatively. As to the intermolecular interaction parameters, it is found that experimental results compare favorably with calculations based on experimental density-of-states functions but not with those based on octopole interactions or charge-transfer interactions. Previous experimental results of Sheka and the theoretical model of Broude and Rashba are also compared with present investigations.
PART III
The phosphorescence, fluorescence and absorption spectra of pyrazine-h4 and d4 have been obtained at 4°K in a benzene matrix. For comparison, those of the isotopically mixed crystal pyrazine-h4 in d4 were also taken. All these spectra show extremely sharp and well-resolved lines and reveal detailed vibronic structure.
The analysis of the weak fluorescence spectrum resolves the long-disputed question of whether one or two transitions are involved in the near-ultraviolet absorption of pyrazine. The “mirror-image relationship” between absorption and emission shows that the lowest singlet state is an allowed transition, properly designated as 1B3u ← 1A1g. The forbidden component 1B2g, predicted by both “exciton” and MO theories to be below the allowed component, must lie higher. Its exact location still remains uncertain.
The phosphorescence spectrum, when compared with the excitation phosphorescence spectra, indicates that the lowest triplet state is also symmetry allowed, showing a strong 0-0 band and a "mirror-image relationship" between absorption and emission. In accordance with previous work, the triplet state is designated as 3B3u.
The vibronic structure of the phosphorescence spectrum is very complicated. Previous work on the analysis of this spectrum all concluded that a long progression of v6a exists. Under the high resolution attainable in our work, the supposed v6a progression proves to have a composite triplet structure, starting from the second member of the progression. Not only is the v9a hydrogen-bending mode present, as shown by the appearance of the C-D bending mode in the d4 spectrum, but a band at 1207 cm-1 in the pyrazine-in-benzene system and at 1231 cm-1 in the mixed crystal system is also observed. This band is assigned as 2v6b and of a1g symmetry. Its anomalously strong intensity in the phosphorescence spectrum is interpreted as due to Fermi resonance with the 2v6a and v9a bands.
To help resolve the present controversy over the crystal phosphorescence spectrum of pyrazine, detailed vibrational analyses of the emission spectra were made. The fluorescence spectrum has essentially the same vibronic structure as the phosphorescence spectrum.
Abstract:
A scheme is proposed to transform an optical pulse into a millimeter-wave frequency-modulation pulse by using a weak fiber Bragg grating (FBG) in a fiber-optic system. The Fourier transformation method is used to obtain the required spectral response function of the FBG for the Gaussian pulse, soliton pulse, and Lorentzian-shaped pulse. Under the first-order Born approximation for the weak fiber grating, the refractive-index distribution and the spectral response function of the FBG are related by a Fourier transform, and the corresponding refractive-index distribution forms are obtained for single-frequency-modulation and linear-frequency-modulation millimeter-wave pulse generation. The performance of the designed fiber gratings is also studied by numerical simulation for ultrashort pulse transmission. (c) 2007 Optical Society of America.
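In standard coupled-mode notation, the weak-grating (first-order Born) relation that the abstract invokes is conventionally written (a textbook statement, not a formula quoted from the paper, with sign and phase conventions varying between sources) as

\[
r(\delta) \;\approx\; \int_{0}^{L} \kappa(z)\, e^{-2 i \delta z}\, dz,
\qquad
\kappa(z) \;\propto\; \Delta n(z),
\]

so that the target spectral response \(r(\delta)\) and the index-modulation profile \(\Delta n(z)\) form a Fourier-transform pair; inverting this relation yields the index distributions mentioned above for single-frequency and linearly chirped millimeter-wave pulse generation.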
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
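A heavily simplified sketch of the photon-splitting idea (with Russian roulette as its usual companion step) is given below. The split counts, weight threshold, and survival probability are placeholder assumptions for illustration, not values or code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_photon(weight: float, n_split: int) -> list[float]:
    """Photon splitting: replace one photon of a given weight with n_split
    copies of weight/n_split, so the expected contribution is unchanged
    while more paths sample the rare back-scattering events that form
    the OCT signal."""
    return [weight / n_split] * n_split

def russian_roulette(weight: float, survive_p: float = 0.1) -> float:
    """Complementary variance-reduction step: terminate low-weight photons
    with probability 1 - survive_p, boosting survivors by 1/survive_p so
    the overall estimator stays unbiased."""
    if weight > 1e-4:          # placeholder threshold
        return weight
    return weight / survive_p if rng.random() < survive_p else 0.0
```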
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
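A minimal sketch of the classify-then-regress hierarchy described above is shown below, using generic scikit-learn components on NumPy arrays. The flattened-pixel featurization, the random-forest model choices, and the per-structure dictionary are placeholder assumptions, not the thesis's actual committee-of-experts implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Stage 1: a classifier predicts the discrete structure of the image
# (e.g., how many layers are present and of which types).
structure_clf = RandomForestClassifier(n_estimators=100)

# Stage 2: one regressor per structure class, trained only on images of
# that structure, predicts the continuous layer parameters.
per_structure_reg = {}

def train(images, structures, layer_params):
    X = images.reshape(len(images), -1)           # placeholder featurization
    structure_clf.fit(X, structures)
    for s in np.unique(structures):
        reg = RandomForestRegressor(n_estimators=100)
        mask = structures == s
        reg.fit(X[mask], layer_params[mask])      # multi-output regression
        per_structure_reg[s] = reg

def predict(image):
    x = image.reshape(1, -1)
    s = structure_clf.predict(x)[0]               # which structure?
    return s, per_structure_reg[s].predict(x)[0]  # structure-specific regression
```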
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which could previously hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even to attempt this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Coupling a single-mode laser diode emitting 200 mW to a single-mode fiber (SMF) through an orthonormal aspherical cylindrical lens and a GRIN lens for an intersatellite optical communication system is proposed and demonstrated. We experimentally studied how the coupling efficiency changes with the SMF's position displacement and axial angle variation, and obtained 80 mW output power at the end of the SMF, which shows that the coupling unit satisfies the design requirements. (c) 2007 Elsevier GmbH. All rights reserved.
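For context, the quoted powers correspond to an overall fiber-coupling efficiency of

\[
\eta \;=\; \frac{P_{\text{SMF}}}{P_{\text{LD}}} \;=\; \frac{80\ \text{mW}}{200\ \text{mW}} \;=\; 40\%.
\]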
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
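In the standard notation of that literature (a textbook statement of the condition, not text from the thesis), a controller constraint set \(S\) is quadratically invariant with respect to a plant \(G\) when

\[
K G K \in S \qquad \text{for all } K \in S,
\]

and under this condition the map \(K \mapsto K(I - GK)^{-1}\) maps \(S\) onto itself, so the structural constraint can be imposed directly on that closed-loop parameter and the distributed optimal control problem becomes convex.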
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.
Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.
Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
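Regarding the nuclear norm formulation mentioned above, one generic way such a low-order/low-rank separation is often posed (the exact objective, data-fit constraint, and parameterization of the low-order local component below are assumptions for illustration, not the thesis's formulation) is

\[
\min_{T_{\mathrm{loc}},\,T_{\mathrm{glob}}}\;
\sum_{k}\big\|T_{\mathrm{glob}}(e^{j\omega_{k}})\big\|_{*}
\qquad \text{s.t.} \qquad
T_{\mathrm{loc}}(e^{j\omega_{k}}) + T_{\mathrm{glob}}(e^{j\omega_{k}}) = \widehat{T}(e^{j\omega_{k}}),
\quad T_{\mathrm{loc}} \ \text{low order},
\]

where \(\widehat{T}\) is the measured frequency response and the nuclear norm \(\|\cdot\|_{*}\) acts as the convex surrogate for the rank of the global component, reflecting the high-order, low-rank versus low-order, full-rank dichotomy described in the abstract.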
Abstract:
A general class of single degree of freedom systems possessing rate-independent hysteresis is defined. The hysteretic behavior in a system belonging to this class is depicted as a sequence of single-valued functions; at any given time, the current function is determined by some set of mathematical rules concerning the entire previous response of the system. Existence and uniqueness of solutions are established and boundedness of solutions is examined.
An asymptotic solution procedure is used to derive an approximation to the response of viscously damped systems with a small hysteretic nonlinearity and trigonometric excitation. Two properties of the hysteresis loops associated with any given system completely determine this approximation to the response: the area enclosed by each loop, and the average of the ascending and descending branches of each loop.
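Read through the lens of standard equivalent linearization (the notation and normalization here are a textbook rendering of that statement, not the thesis's own formulas), the two loop properties enter roughly as

\[
c_{\mathrm{eq}}(A) \;=\; \frac{S(A)}{\pi \omega A^{2}},
\qquad
k_{\mathrm{eq}}(A) \;=\; \frac{1}{\pi A}\int_{0}^{2\pi} \bar f\!\left(A\cos\theta\right)\cos\theta\, d\theta,
\]

where, for a response of amplitude \(A\) at frequency \(\omega\), \(S(A)\) is the area enclosed by the hysteresis loop (setting the equivalent damping) and \(\bar f\) is the average of the ascending and descending branches of the loop (setting the equivalent stiffness).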
The approximation, supplemented by numerical calculations, is applied to investigate the steady-state response of a system with limited slip. Such features as disconnected response curves and jumps in response exist for a certain range of system parameters for any finite amount of slip.
To further understand the response of this system, solutions of the initial-value problem are examined. The boundedness of solutions is investigated first. Then the relationship between initial conditions and resulting steady-state solution is examined when multiple steady-state solutions exist. Using the approximate analysis and numerical calculations, it is found that significant regions of initial conditions in the initial condition plane lead to the different asymptotically stable steady-state solutions.
Abstract:
In the first section of this thesis, two-dimensional properties of the human eye movement control system were studied. The vertical-horizontal interaction was investigated by using a two-dimensional target motion consisting of a sinusoid in one of the directions, vertical or horizontal, and low-pass filtered Gaussian random motion of variable bandwidth (and hence information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking. However, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.
These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.
In the second section, time delays, prediction and power spectra were examined. Time delays were calculated in response to various periodic signals, various bandwidths of narrow-band Gaussian random motions and sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.
For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this, I proposed a model explaining the time delays for both random and periodic motions. My experiments did not prove that the system is a sampled-data system, or that it is continuous. However, the model can be interpreted as representative of a sampled-data system whose sampling interval is a function of the target motion.
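In symbols, the observed linearity amounts to a relation of the form (the coefficients are not specified in the abstract; only the functional form is)

\[
\tau \;=\; \tau_{0} + k\,\bar v,
\]

where \(\tau\) is the tracking time delay, \(\bar v\) the average retinal-image speed, and \(\tau_{0}\), \(k\) empirically fitted constants.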
It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.