9 results for value coherence
in CaltechTHESIS
Abstract:
A theory of two-point boundary value problems analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes is developed. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.
It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
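For orientation, a generic scalar Fokker-Planck (forward Kolmogorov) equation has the form below; the pair of equations referred to above would be of this type, one written in each parameter direction. The notation is standard textbook notation, not the thesis's specific equations.

```latex
% Generic forward Kolmogorov / Fokker-Planck equation for a diffusion with
% drift a(x) and diffusion coefficient b(x); p(t, x) is the transition density.
\[
  \frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a(x)\,p\bigr]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x)\,p\bigr].
\]
```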
This theory is then applied to the problem of a vibrating string with stochastic density.
Abstract:
The theory of bifurcation of solutions to two-point boundary value problems is developed for a system of nonlinear first order ordinary differential equations in which the bifurcation parameter is allowed to appear nonlinearly. An iteration method is used to establish necessary and sufficient conditions for bifurcation and to construct a unique bifurcated branch in a neighborhood of a bifurcation point which is a simple eigenvalue of the linearized problem. The problem of bifurcation at a degenerate eigenvalue of the linearized problem is reduced to that of solving a system of algebraic equations. Cases with no bifurcation and with multiple bifurcation at a degenerate eigenvalue are considered.
The iteration method employed is shown to generate approximate solutions which contain those obtained by formal perturbation theory. Thus the formal perturbation solutions are rigorously justified. A theory of continuation of a solution branch out of the neighborhood of its bifurcation point is presented. Several generalizations and extensions of the theory to other types of problems, such as systems of partial differential equations, are described.
The theory is applied to the problem of the axisymmetric buckling of thin spherical shells. Results are obtained which confirm recent numerical computations.
Abstract:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε) D_x y(x) - A(x,ε) y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)
By ≡ L(ε) y(0) + R(ε) y(1) = g(ε),   ε → 0^+   (1b)
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m diagonal elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower right m × m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
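As a sketch of how the reduced problem arises, partition y into its first m and last m components and A into corresponding blocks (this block notation is our own assumption, not the thesis's). Formally setting ε = 0 in (1a) removes the derivatives from the last m rows and turns them into algebraic constraints:

```latex
% Block form of (1a) at eps = 0, with y = (y_1, y_2) and A = [A_11 A_12; A_21 A_22]:
\[
  D_x y_1 = A_{11}(x,0)\,y_1 + A_{12}(x,0)\,y_2 + f_1(x,0), \qquad
  0 = A_{21}(x,0)\,y_1 + A_{22}(x,0)\,y_2 + f_2(x,0),
\]
% so, wherever A_22(x,0) is invertible (guaranteed here by the assumption that it
% has no eigenvalues with zero real part),
\[
  y_2 = -A_{22}^{-1}\bigl(A_{21}\,y_1 + f_2\bigr),
\]
```

leaving a first-order system for y_1 alone, to be paired with initial conditions drawn from (1b).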
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b) it is shown that, for stepsizes much larger than ε, the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
Furthermore, the existence of a similarity transformation which block-diagonalizes a matrix is established, along with exponential bounds on certain fundamental solution matrices associated with problem (1).
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory, among many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
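As a rough illustration of the "parallel subchannels" idea, the sketch below uses plain SVD-based diagonalization of a randomly generated channel; it is not the GGMD/GMD DFE transceiver itself, and all dimensions and values are made up.

```python
import numpy as np

# A generic SVD-based linear transceiver: the unitary precoder V and
# equalizer U^H turn the flat MIMO channel y = Hx + n into parallel,
# interference-free scalar subchannels.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)           # H = U @ diag(s) @ Vh
F = Vh.conj().T                       # precoder applied at the transmitter
G = U.conj().T                        # equalizer applied at the receiver

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)           # data symbols
n = 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))  # noise
y = G @ (H @ (F @ x) + n)             # after precoding, channel, and equalization

# Each output is now y_k ≈ s_k * x_k: a set of parallel scalar subchannels.
print(np.round(np.abs(y / x), 3), np.round(s, 3))
```

Decompositions such as GMD/GGMD play an analogous role but additionally equalize the subchannel gains, which is what removes the need for bit allocation.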
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both the case of known LTV channels and the case of unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT so that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
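The difference co-array idea can be sketched in a few lines: M pilot positions give up to M^2 pairwise index differences, which act as virtual "co-pilots". The pilot placement below is purely hypothetical, not the alternating placement used in the thesis.

```python
import numpy as np

# Toy illustration of a difference co-array built from pilot tone indices.
pilots = np.array([0, 1, 4, 9, 11])            # M = 5 physical pilot tones
diffs = pilots[:, None] - pilots[None, :]      # all M^2 pairwise differences
co_array = np.unique(diffs)                    # distinct lags of the co-array
print(pilots.size ** 2, co_array.size)         # 25 pairs, number of distinct lags
```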
Abstract:
The relentlessly increasing demand for network bandwidth, driven primarily by Internet-based services such as mobile computing, cloud storage and video-on-demand, calls for more efficient utilization of the available communication spectrum, such as that afforded by the resurgence of DSP-powered coherent optical communications. Encoding information in the phase of the optical carrier, using multilevel phase modulation formats, and employing coherent detection at the receiver allow for enhanced spectral efficiency and thus enable increased network capacity. The distributed feedback semiconductor laser (DFB) has served as the near-exclusive light source powering the fiber-optic, long-haul network for over 30 years. The transition to coherent communication systems is pushing the DFB laser to the limits of its abilities. This is due to its limited temporal coherence, which directly limits the number of different phases that can be imparted to a single optical pulse and thus the data capacity. Temporal coherence, most commonly quantified by the spectral linewidth Δν, is limited by phase noise, a result of quantum-mandated spontaneous emission of photons due to random recombination of carriers in the active region of the laser.
In this work we develop a fundamentally new type of semiconductor laser with the requisite coherence properties. We demonstrate electrically driven lasers characterized by a quantum-noise-limited spectral linewidth as low as 18 kHz. This narrow linewidth is the result of a new laser design philosophy that separates the functions of photon generation and photon storage and is enabled by a hybrid Si/III-V integration platform. Photons generated in the active region of the III-V material are readily stored in the low-loss Si that hosts the bulk of the laser field, thereby enabling high-Q photon storage. The storage of a large number of coherent quanta acts as an optical flywheel, which by its inertia reduces the effect of the spontaneous-emission-mandated phase perturbations on the laser field, while the enhanced photon lifetime effectively reduces the emission rate of incoherent quanta into the lasing mode. Narrow linewidths are obtained over a wavelength bandwidth spanning the entire optical communication C-band (1530-1575 nm) at only a fraction of the input power required by conventional DFB lasers. The results presented in this thesis hold great promise for the large-scale integration of lithographically tuned, high-coherence laser arrays for use in coherent communications, enabling Tb/s-scale data capacities.
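For context, one common textbook form of the modified Schawlow-Townes linewidth makes the role of the cavity Q explicit: since the passive-cavity linewidth is Δν_c = ν/Q, the laser linewidth falls as 1/Q^2, which is the leverage that high-Q photon storage provides. The symbols and prefactors below are standard textbook conventions, not the thesis's analysis.

```latex
% h: Planck constant, nu: optical frequency, Delta-nu_c = nu/Q: passive cavity
% linewidth, n_sp: spontaneous-emission factor, alpha: linewidth-enhancement
% factor, P_out: output power.
\[
  \Delta\nu \;\approx\; \frac{\pi h \nu\,(\Delta\nu_c)^{2}\,
  n_{\mathrm{sp}}\,(1+\alpha^{2})}{P_{\mathrm{out}}},
  \qquad \Delta\nu_c = \frac{\nu}{Q}
  \;\;\Rightarrow\;\; \Delta\nu \propto \frac{1}{Q^{2}}.
\]
```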
Abstract:
The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?
In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
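A minimal sketch of an attention-weighted drift-diffusion-style simulation of the kind such models build on is shown below; the functional form, parameter names, and values are illustrative assumptions, not the specific model of Chapter 1.

```python
import numpy as np

# While the decision maker looks at option A, the value of the unattended
# option B is discounted by a factor theta; the relative-value signal
# accumulates with noise until it hits a decision boundary.
def simulate_trial(vA, vB, theta=0.3, drift=0.002, noise=0.02,
                   bound=1.0, dt=1.0, rng=np.random.default_rng()):
    x, t, look_A = 0.0, 0.0, True
    while abs(x) < bound:
        mu = drift * ((vA - theta * vB) if look_A else (theta * vA - vB))
        x += mu * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if rng.random() < 0.01:          # occasionally switch gaze
            look_A = not look_A
    return ("A" if x > 0 else "B"), t    # choice and response time

print(simulate_trial(vA=3.0, vB=2.0))
```

Simulating many such trials yields joint predictions for choices, response times, and their correlation with gaze, which is the kind of prediction the chapter tests against eye-tracking data.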
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, the use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points for the Zaremba problem. The resulting algorithms enjoy high-order convergence, and they can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems on non-separable physical domains with mixed boundary conditions.
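Schematically, a Zaremba-type eigenvalue problem on a domain Ω whose boundary is split into a Dirichlet part Γ_D and a Neumann part Γ_N reads as follows; the notation is generic, not taken from the thesis.

```latex
% Laplace eigenvalue problem with mixed (Zaremba) boundary conditions.
\[
  \Delta u + \lambda u = 0 \ \text{in } \Omega, \qquad
  u = 0 \ \text{on } \Gamma_D, \qquad
  \frac{\partial u}{\partial n} = 0 \ \text{on } \Gamma_N, \qquad
  \partial\Omega = \overline{\Gamma_D} \cup \overline{\Gamma_N}.
\]
```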
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
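Two of the ingredients mentioned above, free-path sampling and photon splitting, can be sketched in a few lines. This is a toy illustration with made-up parameter values, not the thesis's implementation.

```python
import numpy as np

# Sample a photon's free path length from the Beer-Lambert law,
# s = -ln(xi) / mu_t, and split a photon packet into lower-weight copies
# (a toy version of importance sampling / photon splitting).
def sample_free_path(mu_t, rng):
    return -np.log(rng.random()) / mu_t        # exponentially distributed path

def split_packet(weight, n_copies):
    return [weight / n_copies] * n_copies      # conserve total statistical weight

rng = np.random.default_rng(1)
paths = [sample_free_path(mu_t=10.0, rng=rng) for _ in range(5)]  # mu_t in 1/mm
print(np.round(paths, 4), split_packet(weight=1.0, n_copies=4))
```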
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure to predict the thickness of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
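The classification-then-regression pipeline described above can be sketched as follows, using synthetic stand-in data and off-the-shelf scikit-learn models; this is purely illustrative of the architecture, not the trained system from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# "Committee of experts": a classifier first predicts the structure type of an
# image, then a regressor trained for that structure predicts layer thicknesses.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))                 # stand-in for image features
structure = rng.integers(0, 3, size=200)           # e.g. number/type of layers
thickness = rng.random((200, 4))                   # per-layer thickness targets

clf = RandomForestClassifier().fit(X, structure)
experts = {s: RandomForestRegressor().fit(X[structure == s], thickness[structure == s])
           for s in np.unique(structure)}

x_new = X[:1]
s_hat = clf.predict(x_new)[0]                      # step 1: which structure?
print(s_hat, experts[s_hat].predict(x_new))        # step 2: that expert's thicknesses
```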
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of not necessarily homogeneous nor isotropic elastic solids. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
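For reference, the classical (unweakened) energy identity alluded to above has the familiar form below, written in standard continuum-mechanics notation; it is not the generalized identity proved in the thesis.

```latex
% rho: mass density, u: displacement, epsilon: infinitesimal strain,
% C: elasticity tensor (sigma = C:epsilon), n: outward normal, f: body force.
\[
  \frac{d}{dt}\int_{V}\Bigl(\tfrac{1}{2}\rho\,\dot{u}\cdot\dot{u}
  + \tfrac{1}{2}\,\varepsilon : \mathsf{C} : \varepsilon\Bigr)\,dV
  = \int_{\partial V} (\sigma n)\cdot\dot{u}\,dS
  + \int_{V} f\cdot\dot{u}\,dV .
\]
```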
Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions'' to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress-discontinuities.