5 results for Classes of flow correction
at Massachusetts Institute of Technology
Abstract:
Electroosmotic flow is a convenient mechanism for transporting polar fluid in a microfluidic device. The flow is generated through the application of an external electric field that acts on the free charges that exist in a thin Debye layer at the channel walls. The charge on the wall is due to the chemistry of the solid-fluid interface, and it can vary along the channel, e.g. due to modification of the wall. This investigation focuses on the simulation of the electroosmotic flow (EOF) profile in a cylindrical microchannel with a step change in zeta potential. The modified Navier-Stokes equation governing the velocity field and a non-linear two-dimensional Poisson-Boltzmann equation governing the electrical double-layer (EDL) field distribution are solved numerically using a finite control-volume method. Continuity of flow rate and electric current is enforced, resulting in a non-uniform electric field and pressure-gradient distribution along the channel. At the junction of the step change in zeta potential, the resulting velocity distribution is parabolic, a profile more typical of pressure-driven flow.
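In standard dimensional form (the symbols below, such as bulk ion concentration \(n_0\), valence \(z\), elementary charge \(e\), permittivity \(\varepsilon\), and viscosity \(\mu\), are conventional assumptions, not notation taken from the abstract), the two coupled equations referred to above are usually written, for an axisymmetric channel, as

\[ \nabla^2\psi = \frac{2 n_0 z e}{\varepsilon}\,\sinh\!\left(\frac{z e \psi}{k_B T}\right), \qquad \rho_e = -\varepsilon\,\nabla^2\psi, \]

\[ \mu\,\nabla^2 u = \frac{dp}{dz} - \rho_e E_z, \qquad \nabla^2 = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial}{\partial r}\right) + \frac{\partial^2}{\partial z^2}, \]

where \(\psi\) is the EDL potential, \(\rho_e\) the net charge density, and \(u\) the axial velocity; the axial field \(E_z\) and pressure gradient \(dp/dz\) vary along the channel so as to satisfy the flow-rate and current continuity constraints described above.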
Abstract:
Different approaches to visual object recognition can be divided into two general classes: model-based vs. non-model-based schemes. In this paper we establish some limitations on the class of non-model-based recognition schemes. We show that every function that is invariant to the viewing position of all objects is the trivial (constant) function. It follows that every consistent recognition scheme for recognizing all 3-D objects must in general be model-based. The result is extended to recognition schemes that are imperfect (allowed to make mistakes) or restricted to certain classes of objects.
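One way to state the central claim formally (this notation is ours, not the paper's): write \(I(O,v)\) for the image of object \(O\) seen from viewing position \(v\), and let \(f\) be any recognition function computed from images alone. Requiring viewpoint invariance for every object means

\[ f\big(I(O,v)\big) = f\big(I(O,v')\big) \quad \text{for all objects } O \text{ and viewing positions } v, v', \]

and the result above says that the only such \(f\) defined over all 3-D objects is a constant; hence any non-trivial, consistent recognizer must bring object models into play rather than rely on image-level invariants alone.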
Abstract:
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly for low signal-to-noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
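As a minimal sketch of the Monte Carlo procedure mentioned in contribution (1), assume a binary signal on a one-dimensional lattice with an Ising-style Gibbsian prior and Gaussian observation noise; the function name and parameters below are illustrative, not taken from the thesis:

    import numpy as np

    def gibbs_posterior_mean(y, beta=1.0, sigma=0.5, n_sweeps=200, rng=None):
        """Approximate the posterior-mean estimator of a signal x in {-1,+1}^N
        observed as y = x + Gaussian noise, under an Ising-style Gibbs prior,
        by averaging Gibbs-sampler sweeps.  Illustrative sketch only."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(y)
        x = np.where(np.asarray(y) > 0, 1, -1)    # start from a thresholded guess
        acc = np.zeros(n)                         # running sum of sampled states
        for _ in range(n_sweeps):
            for i in range(n):
                nb = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0)
                # log-odds of x_i = +1 given its neighbours and the observation y_i
                log_odds = 2.0 * beta * nb + 2.0 * y[i] / sigma**2
                p_plus = 1.0 / (1.0 + np.exp(-log_odds))
                x[i] = 1 if rng.random() < p_plus else -1
            acc += x
        return acc / n_sweeps                     # approximate E[x_i | y] per site

Thresholding the returned per-site averages at zero gives the estimate that, under this assumed model, minimizes the expected number of misclassified lattice sites, which is the kind of task-specific error criterion the thesis argues should drive the choice of estimator.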
Abstract:
A computer may gather a lot of information from its environment in an optical or graphical manner. A scene, as seen for instance from a TV camera or a picture, can be transformed into a symbolic description of points and lines or surfaces. This thesis describes several programs, written in the language CONVERT, for the analysis of such descriptions in order to recognize, differentiate and identify desired objects or classes of objects in the scene. Examples are given in each case. Although the recognition may be in terms of projections of 2-dim and 3-dim objects, we do not deal with stereoscopic information. One of our programs (Polybrick) identifies parallelepipeds in a scene which may contain partially hidden bodies and non-parallelepipedic objects. The program TD works mainly with 2-dimensional figures, although under certain conditions it successfully identifies 3-dim objects. Overlapping objects are identified when they are transparent. A third program, DT, works with 3-dim and 2-dim objects, and does not identify objects which are not completely seen. Important restrictions and suppositions are: (a) the input is assumed perfect (noiseless) and in a symbolic format; (b) no perspective deformation is considered. A portion of this thesis is devoted to the study of models (symbolic representations) of the objects we want to identify; different schemes, some of them already in use, are discussed. Focusing our attention on the more general problem of identifying general objects when they substantially overlap, we propose some schemes for their recognition, and also analyze some of the problems that arise.
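As a toy illustration of the kind of symbolic point-and-line scene description such programs operate on (the class and function names below are ours, and the sketch is in Python rather than CONVERT):

    from dataclasses import dataclass, field

    @dataclass
    class Scene:
        """Symbolic scene description: named points and the lines joining them."""
        points: dict                               # name -> (x, y) image coordinates
        lines: set = field(default_factory=set)    # each line is a frozenset({p, q})

        def degree(self, p):
            """Number of lines meeting at point p."""
            return sum(1 for line in self.lines if p in line)

    def find_y_junctions(scene):
        """Points where exactly three lines meet: a cue early scene-analysis
        programs used to hypothesise the visible corner of a solid body."""
        return [p for p in scene.points if scene.degree(p) == 3]

    # A cube corner seen head-on projects to a Y-junction at point A.
    corner = Scene(
        points={"A": (0, 0), "B": (1.0, 0.5), "C": (-1.0, 0.5), "D": (0, -1.0)},
        lines={frozenset({"A", "B"}), frozenset({"A", "C"}), frozenset({"A", "D"})},
    )
    print(find_y_junctions(corner))                # -> ['A']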
Abstract:
As part of a larger research project in musical structure, a program has been written which "reads" scores encoded in an input language isomorphic to music notation. The program is believed to be the first of its kind. From a small number of parsing rules the program derives complex configurations, each of which is associated with a set of reference points in a numerical representation of a time-continuum. The logical structure of the program is such that all and only the defined classes of events are represented in the output. Because the basis of the program is syntactic (in the sense that parsing operations are performed on formal structures in the input string), many extensions and refinements can be made without excessive difficulty. The program can be applied to any music which can be represented in the input language. At present, however, it constitutes the first stage in the development of a set of analytic tools for the study of so-called atonal music, the revolutionary and little-understood music which has exerted a decisive influence upon contemporary practice of the art. The program and the approach to automatic data-structuring may be of interest to linguists and scholars in other fields concerned with basic studies of complex structures produced by human beings.
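As a toy sketch of the score-reading idea (the pitch:duration token encoding used here is entirely hypothetical; the thesis defines its own input language isomorphic to music notation), events can be anchored on a numerical time-continuum like so:

    from fractions import Fraction

    def read_score(encoded):
        """Parse a toy encoded score into events, each anchored at a reference
        point on a numerical time-continuum.  The token format 'pitch:duration'
        is a made-up stand-in for the thesis's input language."""
        t = Fraction(0)
        events = []
        for token in encoded.split():
            pitch, dur = token.split(":")
            duration = Fraction(dur)
            events.append({"onset": t, "pitch": pitch, "duration": duration})
            t += duration                          # advance along the time-continuum
        return events

    for e in read_score("C4:1/4 E4:1/4 G4:1/2"):
        print(e["onset"], e["pitch"], e["duration"])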