13 results for image motion analysis
in CaltechTHESIS
Abstract:
The temporal structure of neuronal spike trains in the visual cortex can provide detailed information about the stimulus and about the neuronal implementation of visual processing. Spike trains recorded from the macaque motion area MT in previous studies (Newsome et al., 1989a; Britten et al., 1992; Zohary et al., 1994) are analyzed here in the context of the dynamic random dot stimulus which was used to evoke them. If the stimulus is incoherent, the spike trains can be highly modulated and precisely locked in time to the stimulus. In contrast, the coherent motion stimulus creates little or no temporal modulation and allows us to study patterns in the spike train that may be intrinsic to the cortical circuitry in area MT. Long gaps in the spike train evoked by the preferred direction motion stimulus are found, and they appear to be symmetrical to bursts in the response to the anti-preferred direction of motion. A novel cross-correlation technique is used to establish that the gaps are correlated between pairs of neurons. Temporal modulation is also found in psychophysical experiments using a modified stimulus. A model is made that can account for the temporal modulation in terms of the computational theory of biological image motion processing. A frequency domain analysis of the stimulus reveals that it contains a repeated power spectrum that may account for psychophysical and electrophysiological observations.
Some neurons tend to fire bursts of action potentials while others avoid burst firing. Using numerical and analytical models of spike trains as Poisson processes with the addition of refractory periods and bursting, we are able to account for peaks in the power spectrum near 40 Hz without assuming the existence of an underlying oscillatory signal. A preliminary examination of the local field potential reveals that a stimulus-locked oscillation appears briefly at the beginning of the trial.
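To make the argument concrete, the sketch below simulates a Poisson spike train with an absolute refractory period and estimates its power spectrum; the firing rate, refractory period, and bin width are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def refractory_poisson(rate_hz=80.0, refrac_s=0.005, duration_s=200.0, rng=None):
    """Spike times from a Poisson process with an absolute refractory period."""
    rng = np.random.default_rng() if rng is None else rng
    t, spikes = 0.0, []
    while t < duration_s:
        t += rng.exponential(1.0 / rate_hz) + (refrac_s if spikes else 0.0)
        spikes.append(t)
    return np.array(spikes[:-1])  # drop the spike that overshoots the duration

def spike_power_spectrum(spike_times, duration_s, dt=0.001):
    """Bin the spike train and return its one-sided power spectrum."""
    counts, _ = np.histogram(spike_times, bins=int(duration_s / dt), range=(0, duration_s))
    counts = counts - counts.mean()                   # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2 / len(counts)
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs, power

spikes = refractory_poisson()
freqs, power = spike_power_spectrum(spikes, 200.0)
# The spectrum is suppressed at low frequencies and shows a broad peak near the
# effective firing rate, even though no oscillatory signal was simulated.
```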
Abstract:
Optical microscopy is an essential tool in biological science and one of the gold standards for medical examinations. Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective, and portable platforms for biomedical research and healthcare. This thesis reports on implementations of bright-field and fluorescence chip-scale microscopes for a variety of biological imaging applications. The term “chip-scale microscopy” refers to lensless imaging techniques realized in the form of mass-producible semiconductor devices, which transform the fundamental design of optical microscopes.
Our strategy for chip-scale microscopy involves the use of low-cost Complementary Metal Oxide Semiconductor (CMOS) image sensors, computational image processing, and micro-fabricated structural components. First, the sub-pixel resolving optofluidic microscope (SROFM) is presented, which combines microfluidics and pixel super-resolution image reconstruction to perform high-throughput imaging of fluidic samples, such as blood cells. We discuss the design parameters and construction of the device, as well as the resulting images and the resolution of the device, which reached 0.66 µm at the highest acuity. The potential applications of SROFM for clinical diagnosis of malaria in resource-limited settings are discussed.
Next, implementations of ePetri, a self-imaging Petri dish platform with microscopy resolution, are presented. Here, we simply place the sample of interest on the surface of the image sensor and capture direct shadow images under illumination. By taking advantage of the inherent motion of the microorganisms, we achieve high-resolution (~1 µm) imaging and long-term culture of motile microorganisms over an ultra-large field of view (5.7 mm × 4.4 mm) in a specialized ePetri platform. We apply pixel super-resolution reconstruction to a set of low-resolution shadow images of the microorganisms as they move across the sensing area of an image sensor chip and render an improved-resolution image. We perform a longitudinal study of Euglena gracilis cultured in an ePetri platform and image-based analysis of the motion and morphology of the cells. An ePetri device for imaging non-motile cells is also demonstrated, using the sweeping illumination of a light-emitting diode (LED) matrix for pixel super-resolution reconstruction of sub-pixel-shifted shadow images. Using this prototype device, we demonstrate the detection of waterborne parasites for the effective diagnosis of enteric parasite infection in resource-limited settings.
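As a rough illustration of the pixel super-resolution idea used in the SROFM and ePetri work, the shift-and-add sketch below assumes the sub-pixel shifts of the low-resolution frames are already known; it is a simplified stand-in, not the reconstruction algorithm implemented in the thesis.

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, factor=4):
    """Naive pixel super-resolution: place sub-pixel-shifted low-resolution
    frames onto an upsampled grid and average overlapping contributions.

    low_res_frames : list of 2-D arrays of identical shape
    shifts         : list of (dy, dx) sub-pixel shifts in low-resolution pixels
    factor         : upsampling factor of the high-resolution grid
    """
    h, w = low_res_frames[0].shape
    accum = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(accum)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # integer location of each low-resolution pixel on the high-resolution grid
        ry = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        rx = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        accum[ry, rx] += frame
        weight[ry, rx] += 1.0
    return accum / np.maximum(weight, 1.0)  # unfilled pixels stay zero
```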
Then, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope, which uses ambient illumination as its light source and does not require a dedicated light source. The method is also based on image reconstruction with sweeping illumination, where a sequence of images is captured while the user manually tilts the device around an ambient light source, such as the sun or a lamp. Image acquisition and reconstruction are performed on the device using a custom-built Android application, yielding a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
Finally, we report on the implementation of a fluorescence chip-scale microscope based on a silo-filter structure fabricated on the pixel array of a CMOS image sensor. The extruded pixel design with metal walls between neighboring pixels successfully guides fluorescence emission through the thick absorptive filter to the photodiode layer of a pixel. Our silo-filter CMOS image sensor prototype achieves 13-µm resolution for fluorescence imaging over a wide field of view (4.8 mm × 4.4 mm). Here, we demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Abstract:
This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.
Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work addresses two questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions, it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if recording points are to be chosen independently. Accordingly, such techniques were developed and are described.
One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed and an approach to computer-aided data analysis is developed. A description is given of members of a collection of application programs developed for analysis of neurophysiological data and operated in the environment of, and with support from, an appropriate computer system. In particular, techniques developed for classification of multiple units recorded on the same electrode are illustrated, as are methods for convenient graphical manipulation of data via a computer-driven display.
By means of multiple-electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship. This relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is of an excitatory nature; for those responding to motion in opposed directions, it is inhibitory.
Experience gained from use of the computer system is discussed and a critical review of the current system is given. The most useful features of the system were found to be its fast response, the ability to move from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier: the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.
Abstract:
Current earthquake early warning systems usually make magnitude and location predictions and send out a warning to users based on those predictions. We describe an algorithm that assesses the validity of the predictions in real time. Our algorithm monitors the envelopes of horizontal and vertical acceleration, velocity, and displacement. We compare the observed envelopes with those predicted by Cua & Heaton's envelope ground motion prediction equations (Cua 2005). We define a "test function" as the logarithm of the ratio between observed and predicted envelopes at every second in real time. Once the envelopes deviate beyond an acceptable threshold, we declare a misfit. Kurtosis and skewness of the time-evolving test function are used to rapidly identify a misfit. Real-time kurtosis and skewness calculations are also inputs to both probabilistic (Logistic Regression and Bayesian Logistic Regression) and nonprobabilistic (Least Squares and Linear Discriminant Analysis) models that ultimately decide whether there is an unacceptable level of misfit. The algorithm is designed to work across a wide range of amplitude scales. When tested with synthetic and actual seismic signals from past events, it works for both small and large events.
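A minimal sketch of the misfit test is given below; the envelope prediction itself comes from the Cua & Heaton relations and is not reproduced here, and the window length and kurtosis/skewness thresholds are placeholder values.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def test_function(observed_env, predicted_env, eps=1e-12):
    """Logarithm of the ratio of observed to predicted envelope, once per second."""
    return np.log10((observed_env + eps) / (predicted_env + eps))

def misfit_flags(f, window=30, kurt_thresh=3.0, skew_thresh=1.0):
    """Flag seconds at which the running kurtosis or skewness of the test
    function exceeds (placeholder) thresholds."""
    flags = np.zeros_like(f, dtype=bool)
    for i in range(window, len(f)):
        w = f[i - window:i]
        flags[i] = kurtosis(w) > kurt_thresh or abs(skew(w)) > skew_thresh
    return flags
```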
Abstract:
This thesis covers four different problems in the understanding of vortex sheets, and these are presented in four chapters.
In Chapter 1, free streamline theory is used to determine the steady solutions of an array of identical, hollow or stagnant core vortices in an inviscid, incompressible fluid. Assuming the array is symmetric to rotation through π radians about an axis through any vortex centre, there are two solutions or no solutions depending on whether A^(1/2)/L is less than or greater than 0.38 where A is the area of the vortex and L is the separation distance. Stability analysis shows that the more deformed shape is unstable to infinitesimal symmetric disturbances which leave the centres of the vortices undisplaced.
Chapter 2 is concerned with the roll-up of vortex sheets in homogeneous fluid. The flow over conventional and ring wings is used to test the method of Fink and Soh (1974). Despite modifications which improve the accuracy of the method, unphysical results occur. A possible explanation for this is that small scales are important and an alternate method based on "Cloud-in-Cell" techniques is introduced. The results show small scale growth and amalgamation into larger structures.
The motion of a buoyant pair of line vortices of opposite circulation is considered in Chapter 3. The density difference between the fluid carried by the vortices and the fluid outside is considered small, so that the Boussinesq approximation may be used. A macroscopic model is developed which shows the formation of a detrainment filament and this is included as a modification to the model. The results agree well with the numerical solution as developed by Hill (1975b) and show that after an initial slowdown, the vortices begin to accelerate downwards.
Chapter 4 reproduces completely a paper that has already been published (Baker, Barker, Bofah and Saffman (1974)) on the effect of "vortex wandering" on the measurement of velocity profiles of the trailing vortices behind a wing.
Abstract:
In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.
For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (potentially reinforced concrete shear wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.
To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building’s natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building’s capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
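A schematic version of this filtering-and-comparison step is sketched below; the filter order, cutoff frequency, and capacity are placeholder values, whereas the thesis derives them from the ground motion type and from the building's natural frequency and ductility.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_filtered_acceleration(acc, dt, order=2, cutoff_hz=0.5):
    """Low-pass filter a ground acceleration record and return the peak of the
    filtered time history, i.e., the PFA intensity measure."""
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    filtered = filtfilt(b, a, acc)
    return np.max(np.abs(filtered)), filtered

def predict_collapse(acc, dt, capacity, **filter_kwargs):
    """Two-dimensional case: predict collapse if the filtered acceleration
    exceeds the building's (constant) lateral capacity, in the same units."""
    pfa, _ = peak_filtered_acceleration(acc, dt, **filter_kwargs)
    return pfa > capacity
```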
The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.
The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.
Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.
We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.
Abstract:
The Northridge earthquake of January 17, 1994, highlighted the two previously known problems of premature fracturing of connections and the damaging capabilities of near-source ground motion pulses. Large ground motions had not been experienced in a city with tall steel moment-frame buildings before. Some steel buildings exhibited fracture of welded connections or other types of structural degradation.
A sophisticated three-dimensional nonlinear inelastic program is developed that can accurately model many nonlinear properties commonly ignored or approximated in other programs. The program can assess and predict severely inelastic response of steel buildings due to strong ground motions, including collapse.
Three-dimensional fiber and segment discretization of elements is presented in this work. This element and its two-dimensional counterpart are capable of modeling various geometric and material nonlinearities such as moment amplification, spread of plasticity and connection fracture. In addition to introducing a three-dimensional element discretization, this work presents three-dimensional constraints that limit the number of equations required to solve various three-dimensional problems consisting of intersecting planar frames.
Two buildings damaged in the Northridge earthquake are investigated to verify the ability of the program to match the level of response and the extent and location of the damage measured. The program is then used to predict the response to larger near-source ground motions using the properties determined from the matched response.
A third building is studied to assess three-dimensional effects on a realistic irregular building in the inelastic range of response considering earthquake directivity. Damage levels are observed to be significantly affected by directivity and torsional response.
Several strong recorded ground motions clearly exceed code-based levels. Properly designed buildings can have drifts exceeding code-specified levels due to these ground motions. The strongest ground motions caused collapse if fracture was included in the model. Near-source ground displacement pulses can cause columns to yield prior to weaker-designed beams. Damage in tall buildings correlates better with peak-to-peak displacements than with peak-to-peak accelerations.
Dynamic response of tall buildings shows that higher mode response can cause more damage than first mode response. Leaking of energy between modes in conjunction with damage can cause torsional behavior that is not anticipated.
Various response parameters are used for all three buildings to determine what correlations can be made for inelastic building response. Damage levels can be dramatically different based on the inelastic model used. Damage does not correlate well with several common response parameters.
Realistic modeling of material properties and structural behavior is of great value for understanding the performance of tall buildings due to earthquake excitations.
Abstract:
High-resolution orbital and in situ observations of the Martian surface acquired during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, application of stratigraphic and sedimentologic statistical methods, and use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Imaging Science Experiment (HiRISE) camera on board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.
Abstract:
Computation technology has dramatically changed the world around us; it is hard to find an area where cell phones have not saturated the market, yet breakthroughs in integrating computers with biological environments remain scarce. This is largely the result of the incompatibility of the materials used in the two settings, since biological environments and experiments tend to require aqueous conditions. To help bridge this divide, chemists, engineers, physicists, and biologists have begun to develop microfluidics. Unfortunately, microfluidic devices have typically required large external support equipment to run. This thesis presents several microfluidic methods that can help integrate engineering and biology by exploiting nanotechnology to push the field of microfluidics back toward its intended purpose: small, integrated biological and electrical devices. I demonstrate this goal by developing methods and devices to (1) separate membrane-bound proteins with the use of microfluidics, (2) use optical technology to make fiber optic cables into protein sensors, (3) generate new fluidic devices using semiconductor material to manipulate single cells, and (4) develop a new microfluidic-based genetic diagnostic assay that works with current PCR methodology to provide faster and cheaper results. All of these methods and systems can be used as components to build a self-contained biomedical device.
Abstract:
In this study, the dynamics of flow over the blades of vertical-axis wind turbines was investigated using a simplified periodic motion to uncover the fundamental flow physics and provide insight into the design of more efficient turbines. Time-resolved, two-dimensional velocity measurements were made with particle image velocimetry on a wing undergoing pitching and surging motion to mimic the flow on a turbine blade in a non-rotating frame. Dynamic stall prior to maximum angle of attack and the development of a leading-edge vortex were identified in the phase-averaged flow field and captured by a simple model with five modes, including the first two harmonics of the pitch/surge frequency, identified using dynamic mode decomposition. Analysis of these modes identified vortical structures corresponding to both frequencies that led the separation and reattachment processes, while their phase relationship determined the evolution of the flow.
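For readers unfamiliar with the technique, a minimal exact dynamic mode decomposition of a snapshot sequence can be written as below; the snapshot layout and truncation rank are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def dmd(X, rank=5):
    """Exact dynamic mode decomposition.

    X    : (n_points, n_snapshots) array of velocity-field snapshots
    rank : truncation rank (e.g., five modes, as in the reduced-order model)
    Returns (modes, eigenvalues); each column of `modes` is a DMD mode.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V / s      # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (X2 @ V / s) @ W               # exact DMD modes
    return modes, eigvals
```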
Detailed analysis of the leading edge vortex found multiple regimes of vortex development coupled to the time-varying flow field on the airfoil. The vortex was shown to grow on the airfoil for four convection times, before shedding and causing dynamic stall in agreement with 'optimal' vortex formation theory. Vortex shedding from the trailing edge was identified from instantaneous velocity fields prior to separation. This shedding was found to be in agreement with classical Strouhal frequency scaling and was removed by phase averaging, which indicates that it is not exactly coupled to the phase of the airfoil motion.
The flow field over an airfoil undergoing solely pitching motion was shown to develop similarly to the pitch/surge case; however, flow separation took place earlier, corresponding to the earlier formation of the leading-edge vortex. A reduced-order model similar to that of the pitch/surge case was developed, with similar vortical structures leading separation and reattachment; however, the relative phase lead of the separation mode, corresponding to earlier separation, necessitated that a third frequency be incorporated into the reattachment mode to provide a relative lag in reattachment.
Finally, the results are returned to the rotating frame and the effects of each flow phenomenon on the turbine are estimated, suggesting kinematic criteria for the design of improved turbines.
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
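For context, the response spectrum calculation whose accuracy is examined here can be sketched as a linear single-degree-of-freedom time-stepping computation; the Newmark average-acceleration scheme and 5% damping below are illustrative choices, not necessarily the procedure assessed in the thesis.

```python
import numpy as np

def response_spectrum(acc, dt, periods, damping=0.05):
    """Pseudo-acceleration response spectrum of a ground acceleration record,
    using unit-mass linear SDOF oscillators integrated with the Newmark
    average-acceleration method."""
    Sa = np.zeros(len(periods))
    for j, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k, c = wn ** 2, 2.0 * damping * wn                 # unit mass
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2            # effective stiffness
        u, v = 0.0, 0.0
        a = -acc[0] - c * v - k * u
        umax = abs(u)
        for ag in acc[1:]:
            peff = -ag + (4.0 * u / dt ** 2 + 4.0 * v / dt + a) + c * (2.0 * u / dt + v)
            u_new = peff / keff
            v_new = 2.0 * (u_new - u) / dt - v
            a_new = 4.0 * (u_new - u) / dt ** 2 - 4.0 * v / dt - a
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        Sa[j] = umax * wn ** 2                             # pseudo-acceleration
    return Sa
```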
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
Abstract:
The pattern of energy release during the Imperial Valley, California, earthquake of 1940 is studied by analysing the El Centro strong motion seismograph record and records from the Tinemaha seismograph station, 546 km from the epicenter. The earthquake was a multiple event sequence with at least 4 events recorded at El Centro in the first 25 seconds, followed by 9 events recorded in the next 5 minutes. Clear P, S and surface waves were observed on the strong motion record. Although the main part of the earthquake energy was released during the first 15 seconds, some of the later events were as large as M = 5.8 and thus are important for earthquake engineering studies. The moment calculated using Fourier analysis of surface waves agrees with the moment estimated from field measurements of fault offset after the earthquake. The earthquake engineering significance of the complex pattern of energy release is discussed. It is concluded that a cumulative increase in amplitudes of building vibration resulting from the present sequence of shocks would be significant only for structures with a relatively long natural period of vibration. However, progressive weakening effects may also lead to greater damage for multiple event earthquakes.
The model with surface Love waves propagating through a single layer as a surface wave guide is studied. It is expected that the derived properties for this simple model illustrate well several phenomena associated with strong earthquake ground motion. First, it is shown that a surface layer, or several layers, will cause the main part of the high frequency energy, radiated from the nearby earthquake, to be confined to the layer as a wave guide. The existence of the surface layer will thus increase the rate of the energy transfer into the man-made structures on or near the surface of the layer. Secondly, the surface amplitude of the guided SH waves will decrease if the energy of the wave is essentially confined to the layer and if the wave propagates towards an increasing layer thickness. It is also shown that the constructive interference of SH waves will cause the zeroes and the peaks in the Fourier amplitude spectrum of the surface ground motion to be continuously displaced towards the longer periods as the distance from the source of the energy release increases.
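For reference, the standard dispersion relation for Love waves in a single layer of thickness $H$ (shear velocity $\beta_1$, rigidity $\mu_1$) over a half-space ($\beta_2$, $\mu_2$), valid for phase velocities $\beta_1 < c < \beta_2$, is the textbook result

$$\tan\!\left(\omega H\sqrt{\frac{1}{\beta_1^{2}}-\frac{1}{c^{2}}}\right)=\frac{\mu_2\sqrt{\dfrac{1}{c^{2}}-\dfrac{1}{\beta_2^{2}}}}{\mu_1\sqrt{\dfrac{1}{\beta_1^{2}}-\dfrac{1}{c^{2}}}},$$

stated here in generic notation rather than the thesis's own; each root $c(\omega)$ corresponds to one propagating Love mode guided by the layer.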
Abstract:
As a simplified approach for estimating theoretically the influence of local subsoils upon the ground motion during an earthquake, the problem of an idealized layered system subjected to vertically incident plane body waves was studied. Both the technique of steady-state analysis and the technique of transient analysis have been used to analyze the problem.
In the steady-state analysis, a recursion formula has been derived for obtaining the response of a layered system to sinusoidally steady-state input. Several conclusions are drawn concerning the nature of the amplification spectrum of a nonviscous layered system having its layer stiffnesses increasing with depth. Numerical examples are given to demonstrate the effect of layer parameters on the amplification spectrum of a layered system.
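As a concrete example of such an amplification spectrum, the classical closed-form result for a single uniform elastic layer over an elastic half-space under vertically incident SH waves can be evaluated as below; the layer properties are placeholders, and this closed form is a textbook special case rather than the recursion formula derived in the thesis.

```python
import numpy as np

def single_layer_amplification(freq_hz, H, vs_layer, rho_layer, vs_base, rho_base):
    """Surface amplification of a uniform elastic layer on an elastic half-space
    for vertically incident SH waves (undamped, classical closed form)."""
    alpha = (rho_layer * vs_layer) / (rho_base * vs_base)   # impedance ratio
    kH = 2.0 * np.pi * np.asarray(freq_hz) * H / vs_layer
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + (alpha * np.sin(kH)) ** 2)

# Example: 30 m of soft soil (Vs = 200 m/s) over stiffer material (Vs = 1000 m/s).
freqs = np.linspace(0.1, 10.0, 200)
amp = single_layer_amplification(freqs, H=30.0, vs_layer=200.0,
                                 rho_layer=1800.0, vs_base=1000.0, rho_base=2400.0)
# Peaks fall near the layer's natural frequencies f_n = (2n - 1) * vs_layer / (4 H).
```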
In the transient analysis, two modified shear beam models have been established for obtaining approximately the response of a layered system to earthquake-like excitation. The method of continuous modal analysis was adopted for approximate analysis of the models, with energy dissipation in the layers, if any, taken into account. Numerical examples are given to demonstrate the accuracy of the models and the effect of a layered system in modifying the input motion.
Conditions are established under which the theory is applicable for predicting the influence of local subsoils on the ground motion during an earthquake. To demonstrate the applicability of the models to actual cases, three examples of actually recorded earthquake events are examined. It is concluded that significant modification of the incoming seismic waves, as predicted by the theory, is likely to occur in well-defined soft subsoils during an earthquake, provided that certain conditions concerning the nature of the incoming seismic waves are satisfied.