916 results for "multiple discrepancies theory"
Abstract:
I. Existence and Structure of Bifurcation Branches
The problem of bifurcation is formulated as an operator equation in a Banach space, depending on relevant control parameters, say of the form G(u, λ) = 0. If dim N(G_u(u_0, λ_0)) = m, the method of Lyapunov-Schmidt reduces the problem to the solution of m algebraic equations. The possible structure of these equations and the various types of solution behaviour are discussed. The equations are normally derived under the assumption that G_λ^0 ∈ R(G_u^0). It is shown, however, that if G_λ^0 ∉ R(G_u^0), then bifurcation still may occur, and the local structure of such branches is determined. A new and compact proof of the existence of multiple bifurcation is derived. The linearized stability near simple bifurcation and "normal" limit points is then indicated.
II. Constructive Techniques for the Generation of Solution Branches
A method is described in which the dependence of the solution arc on a naturally occurring parameter is replaced by the dependence on a form of pseudo-arclength. This results in continuation procedures through regular and "normal" limit points. In the neighborhood of bifurcation points, however, the associated linear operator is nearly singular causing difficulty in the convergence of continuation methods. A study of the approach to singularity of this operator yields convergence proofs for an iterative method for determining the solution arc in the neighborhood of a simple bifurcation point. As a result of these considerations, a new constructive proof of bifurcation is determined.
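The pseudo-arclength idea above can be sketched in a few lines: the natural parameter λ is treated as an extra unknown, and the corrector iterates are constrained to a hyperplane orthogonal to the arc's tangent, so "normal" limit points pose no difficulty. The following minimal predictor-corrector step uses finite-difference Jacobians and a scalar fold problem as the usage example; all names and numerical choices are illustrative, not taken from the thesis.

```python
import numpy as np

def pseudo_arclength_step(G, u, lam, du, dlam, ds, tol=1e-8, max_iter=50):
    """One predictor-corrector step of pseudo-arclength continuation.

    G(u, lam) returns the residual vector; (du, dlam) is the unit tangent
    to the solution arc at (u, lam); ds is the arclength step.  Jacobians
    are approximated by finite differences here -- an illustrative choice,
    not the construction used in the thesis.
    """
    n = u.size
    u1, lam1 = u + ds * du, lam + ds * dlam   # predictor: step along the tangent
    for _ in range(max_iter):
        r = G(u1, lam1)
        # Arclength constraint: corrector iterates stay in the hyperplane
        # orthogonal to the tangent, so limit points are not singular here.
        N = du @ (u1 - u) + dlam * (lam1 - lam) - ds
        if np.linalg.norm(r) < tol and abs(N) < tol:
            return u1, lam1
        eps = 1e-7
        J = np.zeros((n + 1, n + 1))
        for j in range(n):                     # finite-difference Jacobian of G
            e = np.zeros(n); e[j] = eps
            J[:n, j] = (G(u1 + e, lam1) - r) / eps
        J[:n, n] = (G(u1, lam1 + eps) - r) / eps
        J[n, :n], J[n, n] = du, dlam           # bordering row: the constraint
        step = np.linalg.solve(J, -np.concatenate([r, [N]]))
        u1, lam1 = u1 + step[:n], lam1 + step[n]
    raise RuntimeError("corrector did not converge")
```

For a scalar fold G(u, λ) = u² + λ, starting on the branch at (1, −1), a step of ds = 0.1 continues smoothly toward the limit point at λ = 0 without reparametrization.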
Abstract:
The theory of bifurcation of solutions to two-point boundary value problems is developed for a system of nonlinear first order ordinary differential equations in which the bifurcation parameter is allowed to appear nonlinearly. An iteration method is used to establish necessary and sufficient conditions for bifurcation and to construct a unique bifurcated branch in a neighborhood of a bifurcation point which is a simple eigenvalue of the linearized problem. The problem of bifurcation at a degenerate eigenvalue of the linearized problem is reduced to that of solving a system of algebraic equations. Cases with no bifurcation and with multiple bifurcation at a degenerate eigenvalue are considered.
The iteration method employed is shown to generate approximate solutions which contain those obtained by formal perturbation theory. Thus the formal perturbation solutions are rigorously justified. A theory of continuation of a solution branch out of the neighborhood of its bifurcation point is presented. Several generalizations and extensions of the theory to other types of problems, such as systems of partial differential equations, are described.
The theory is applied to the problem of the axisymmetric buckling of thin spherical shells. Results are obtained which confirm recent numerical computations.
Abstract:
In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields which it may encompass, yet management practices often revert to the primary field of the manager. There is a lack of a common set of "coastal" theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend include: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which itself is a synthesis of concepts from multiple fields, contains ideas generally valuable to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary set of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers. These include "wicked" problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at subchannels.
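As background for the decomposition-based designs discussed above, the following minimal NumPy sketch shows the classical SVD transceiver, which diagonalizes a MIMO channel into parallel subchannels with unequal gains (the GMD/GGMD-style designs of the thesis equalize the subchannel SINRs instead, avoiding bit allocation). The channel here is a random toy example, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4x4 complex MIMO channel.
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T          # linear precoder: right singular vectors
W = U.conj().T           # linear equalizer: left singular vectors

# The equivalent channel W @ H @ F is diagonal: independent parallel
# subchannels whose gains are the singular values.  Because the gains
# differ, per-subchannel bit allocation would be needed to minimize BER.
Heq = W @ H @ F
assert np.allclose(Heq, np.diag(s), atol=1e-10)
```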
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the super-imposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for the orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, a MMSE estimator for frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
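The co-array idea above can be illustrated directly: from M physical pilot tone indices, the difference co-array is the set of all pairwise index differences, and a well-chosen sparse placement yields O(M^2) distinct lags. The pilot pattern below is a hypothetical illustration, not the paper's placement.

```python
import numpy as np

# Hypothetical sparse pilot tone indices (M = 8).  The difference co-array
# is the set of all pairwise differences p_i - p_j; with a good pattern the
# number of distinct lags approaches M*(M-1) + 1, which is what lets
# subspace methods (MUSIC, ESPRIT) resolve more multipath delays than there
# are physical pilots.
pilots = np.array([0, 1, 4, 9, 15, 22, 32, 34])
lags = np.unique(pilots[:, None] - pilots[None, :])
print(f"{pilots.size} pilots -> {lags.size} distinct co-array lags")
```

For this pattern every pairwise difference is distinct, so 8 pilots generate 8·7 + 1 = 57 co-array lags.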
Abstract:
The wave-theoretical analysis of acoustic and elastic waves refracted by a spherical boundary across which both velocity and density increase abruptly and thence either increase or decrease continuously with depth is formulated in terms of the general problem of waves generated at a steady point source and scattered by a radially heterogeneous spherical body. A displacement potential representation is used for the elastic problem that results in high frequency decoupling of P-SV motion in a spherically symmetric, radially heterogeneous medium. Through the application of an earth-flattening transformation on the radial solution and the Watson transform on the sum over eigenfunctions, the solution to the spherical problem for high frequencies is expressed as a Weyl integral for the corresponding half-space problem in which the effect of boundary curvature maps into an effective positive velocity gradient. The results of both analytical and numerical evaluation of this integral can be summarized as follows for body waves in the crust and upper mantle:
1) In the special case of a critical velocity gradient (a gradient equal and opposite to the effective curvature gradient), the critically refracted wave reduces to the classical head wave for flat, homogeneous layers.
2) For gradients more negative than critical, the amplitude of the critically refracted wave decays more rapidly with distance than the classical head wave.
3) For positive, null, and gradients less negative than critical, the amplitude of the critically refracted wave decays less rapidly with distance than the classical head wave, and at sufficiently large distances, the refracted wave can be adequately described in terms of ray-theoretical diving waves. At intermediate distances from the critical point, the spectral amplitude of the refracted wave is scalloped due to multiple diving wave interference.
These theoretical results applied to published amplitude data for P-waves refracted by the major crustal and upper mantle horizons (the Pg, P*, and Pn travel-time branches) suggest that the 'granitic' upper crust, the 'basaltic' lower crust, and the mantle lid all have negative or near-critical velocity gradients in the tectonically active western United States. On the other hand, the corresponding horizons in the stable eastern United States appear to have null or slightly positive velocity gradients. The distribution of negative and positive velocity gradients correlates closely with high heat flow in tectonic regions and normal heat flow in stable regions. The velocity gradients inferred from the amplitude data are generally consistent with those inferred from ultrasonic measurements of the effects of temperature and pressure on crustal and mantle rocks and probable geothermal gradients. A notable exception is the strong positive velocity gradient in the mantle lid beneath the eastern United States (2 × 10^-3 sec^-1), which appears to require a compositional gradient to counter the effect of even a small geothermal gradient.
New seismic-refraction data were recorded along an 800-km profile extending due south from the Canadian border across the Columbia Plateau into eastern Oregon. The source for the seismic waves was a series of 20 high-energy chemical explosions detonated by the Canadian government in Greenbush Lake, British Columbia. The first arrivals recorded along this profile are on the Pn travel-time branch. In northern Washington and central Oregon their travel time is described by T = Δ/8.0 + 7.7 sec, but in the Columbia Plateau the Pn arrivals are as much as 0.9 sec early with respect to this line. An interpretation of these Pn arrivals together with later crustal arrivals suggests that the crust under the Columbia Plateau is thinner by about 10 km and has a higher average P-wave velocity than the 35-km-thick, 6.2-km/sec crust under the granitic-metamorphic terrain of northern Washington. A tentative interpretation of later arrivals recorded beyond 500 km from the shots suggests that a thin 8.4-km/sec horizon may be present in the upper mantle beneath the Columbia Plateau and that this horizon may form the lid to a pronounced low-velocity zone extending to a depth of about 140 km.
Abstract:
Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.
In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.
The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
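As a rough illustration of a net-asset-value criterion with uncertain future earthquake losses, the sketch below assumes Poisson event arrivals, a fixed mean loss per event, and continuous discounting. This is a simplified stand-in for the thesis' formulation (which uses assembly-based vulnerability analysis for the loss term), not a reproduction of it; all parameter names and numbers are illustrative.

```python
import math

def net_asset_value(benefit_rate, initial_cost, event_rate, mean_loss,
                    discount_rate, horizon_years):
    """Discounted net asset value of a design with uncertain future losses.

    benefit_rate: annual benefit stream of the facility
    event_rate:   Poisson rate of damaging earthquakes (per year)
    mean_loss:    expected loss per event
    Continuous discounting over [0, horizon_years] is assumed; the present
    value of a constant stream c is c * (1 - e^{-r T}) / r.
    """
    pv_factor = (1 - math.exp(-discount_rate * horizon_years)) / discount_rate
    pv_benefit = benefit_rate * pv_factor
    pv_expected_loss = event_rate * mean_loss * pv_factor
    return pv_benefit - pv_expected_loss - initial_cost

# Comparing two hypothetical designs: the stronger one costs more up front
# but suffers smaller expected losses per event.
common = dict(benefit_rate=0.3, event_rate=0.1,
              discount_rate=0.05, horizon_years=50)
weak = net_asset_value(initial_cost=1.0, mean_loss=0.5, **common)
strong = net_asset_value(initial_cost=1.2, mean_loss=0.1, **common)
```

Under these (made-up) numbers the stronger design has the higher net asset value, which is the kind of trade-off the economic stage of the framework is meant to expose.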
The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is a flexible one that readily allows incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.
Abstract:
In this paper, a new type of resonant Brewster filter (RBF) with a surface relief structure for multiple channels is presented for the first time, analyzed using rigorous coupled-wave analysis and the S-matrix method. By tuning the depth of the homogeneous layer beneath the surface relief structure, the multiple-channel behaviour is obtained. Long-range, extremely low sidebands and multiple channels are found when the RBF with surface relief structure is illuminated with transverse magnetic (TM) polarized light near the Brewster angle calculated with the effective medium theory of the subwavelength grating. Moreover, the wavelengths of the RBF with surface relief structure can easily be shifted by changing the depth of the homogeneous layer, while its optical properties, such as low sideband reflection and narrow bandwidth, are not spoiled as the depth is changed. Furthermore, variation of the grating thickness does not effectively change the resonant wavelength of the RBF, but has a remarkable effect on its linewidth, which is very useful for designing such filters with different linewidths at a desired wavelength.
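The Brewster-angle condition invoked above can be sketched with the zeroth-order effective-medium approximation for TM polarization, in which a subwavelength grating behaves like a homogeneous layer of effective index given by 1/n² = f/n₁² + (1−f)/n₂². The fill factor and refractive indices below are illustrative assumptions, not values from the paper.

```python
import math

def tm_effective_index(f, n_ridge, n_groove):
    """Zeroth-order effective-medium index of a subwavelength grating for
    TM polarization (E-field across the grooves):
    1/n_eff^2 = f/n_ridge^2 + (1 - f)/n_groove^2."""
    return (f / n_ridge**2 + (1 - f) / n_groove**2) ** -0.5

def brewster_angle_deg(n_inc, n_sub):
    """Brewster angle for TM incidence from medium n_inc onto n_sub:
    theta_B = arctan(n_sub / n_inc)."""
    return math.degrees(math.atan(n_sub / n_inc))

# Illustrative numbers: 50% fill factor, ridge index 2.1, air grooves,
# incidence from air.
n_eff = tm_effective_index(0.5, 2.1, 1.0)
theta_b = brewster_angle_deg(1.0, n_eff)
```

At the angle theta_b the TM Fresnel reflection from the effective layer vanishes, which is what gives the filter its extremely low sidebands away from resonance.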
Abstract:
Level II reliability theory provides an approximate method whereby the reliability of a complex engineering structure which has multiple strength and loading variables may be estimated. This technique has been applied previously to both civil and offshore structures with considerable success. The aim of the present work is to assess the applicability of the method for aircraft structures, and to this end landing gear design is considered in detail. It is found that the technique yields useful information regarding the structural reliability, and further it enables the critical design parameters to be identified.
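The simplest instance of a Level II (second-moment) calculation is the reliability index for a linear limit state g = R − S with independent normal strength R and load S; real landing-gear cases involve many correlated variables and a first-order transformation, which this sketch omits. The numbers in the usage lines are illustrative, not aircraft data.

```python
import math

def cornell_beta(mu_R, sigma_R, mu_S, sigma_S):
    """Second-moment reliability index for the limit state g = R - S with
    independent normal strength R and load S: beta = mu_g / sigma_g."""
    return (mu_R - mu_S) / math.hypot(sigma_R, sigma_S)

def failure_probability(beta):
    """P_f = Phi(-beta) when g is normal (Phi = standard normal CDF)."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

# Illustrative numbers: strength 300 +/- 30, load 180 +/- 40 (same units).
beta = cornell_beta(300.0, 30.0, 180.0, 40.0)
pf = failure_probability(beta)   # beta = 2.4 -> P_f of order 1e-2 to 1e-3
```

The index beta also shows directly which variable dominates the risk: the sensitivity of beta to each sigma identifies the critical design parameters, as noted in the abstract.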
Abstract:
Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a normatively justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that a synapse solves the inverse problem of estimating the pre-synaptic membrane potential from the spikes it receives, acting as a recursive filter. We show that the dynamics of short-term synaptic depression closely resemble those required for optimal filtering, and that they indeed support high quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of spiking non-linearity should be related to the properties of short-term plasticity in any particular cell type.
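The depression dynamics referred to above can be sketched with a Tsodyks-Markram-style resource model, in which a resource variable recovers between spikes and a fixed fraction is consumed per spike. This is an illustrative stand-in for standard short-term depression, not the paper's optimal estimator; all parameter values are assumptions.

```python
import numpy as np

def depressing_synapse(spikes, dt=1e-3, tau_rec=0.5, U=0.4):
    """Simulate a depressing synapse (Tsodyks-Markram style).

    A resource variable x recovers toward 1 with time constant tau_rec,
    and each spike consumes a fraction U of the available resources.
    Returns the effective strength U * x at each spike: recent activity
    lowers it, which is the adaptive behaviour the abstract relates to
    recursive estimation of the presynaptic potential.
    """
    x = 1.0
    eff = []
    for s in spikes:
        x += (1.0 - x) * dt / tau_rec    # recovery toward full resources
        if s:                            # spike arrives
            eff.append(U * x)            # effective strength at this spike
            x *= (1.0 - U)               # a fraction U is consumed
    return np.array(eff)

# A regular 50 Hz spike train: successive effective strengths decrease
# toward a steady state, i.e. the synapse depresses.
spikes = np.zeros(200, dtype=bool)
spikes[::20] = True
w = depressing_synapse(spikes)
```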
Abstract:
We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
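A minimal one-dimensional illustration of the smoothness prior is plain GP regression with a squared-exponential kernel. The fusion of histograms and free energy gradients used in the paper is omitted here, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def gp_regress(x_train, y_train, x_test, ell=0.2, sigma_f=1.0, sigma_n=0.05):
    """Gaussian process regression with a squared-exponential kernel.

    The kernel lengthscale ell encodes the smoothness prior; sigma_n is the
    assumed observation noise.  Returns the predictive mean and variance,
    the latter giving error bars at no extra sampling cost.
    """
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    # Predictive variance: k(x*,x*) - k_* K^{-1} k_*^T (diagonal only).
    var = sigma_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Reconstruct a smooth 1-D "profile" from noisy-free samples.
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x)
mean, var = gp_regress(x, y, x)
```

Extending this to gradient observations amounts to augmenting K with kernel-derivative blocks, which is the mechanism that lets the paper's method consume both histograms and mean forces.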
Abstract:
The residual tensile strength of glass fibre reinforced composites with randomly distributed holes and fragment impact damage has been investigated. Experiments have been performed on large-scale panels and small-scale specimens. A finite element model has been developed to predict the strength of multi-axial panels with randomly distributed holes. Further, an effective analytical model has been developed using percolation theory. The model gives an estimate of the residual strength as a function of the surface area removed by the holes. It is found that if 8% of the area is removed, the residual strength is approximately 50% of the undamaged strength.
Abstract:
Significant progress has been made towards understanding the global stability of slowly developing shear flows. The WKBJ theory developed by Patrick Huerre and his co-authors has proved absolutely central, with the result that both the linear and the nonlinear stability of a wide range of flows can now be understood in terms of their local absolute/convective instability properties. In many situations, the local absolute frequency possesses a single dominant saddle point in complex X-space (where X is the slow streamwise coordinate of the base flow), which then acts as a single wavemaker driving the entire global linear dynamics. In this paper we consider the more complicated case in which multiple saddles may act as the wavemaker for different values of some control parameter. We derive a frequency selection criterion in the general case, which is then validated against numerical results for the linearized third-order Ginzburg-Landau equation (which possesses two saddle points). We believe that this theory may be relevant to a number of flows, including the boundary layer on a rotating disk and the eccentric Taylor-Couette-Poiseuille flow.
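The saddle points in question satisfy dω₀/dX = 0 for the complex-valued local absolute frequency ω₀(X), and can be located by a complex Newton iteration. The ω₀(X) below is a hypothetical example with two saddles, chosen only to illustrate the computation; deciding which saddle acts as the wavemaker is precisely what the paper's selection criterion addresses.

```python
# Hypothetical local absolute frequency omega0(X) of a slowly developing
# flow, analytically continued to complex X.  Its derivative
# d(omega0)/dX = X^2 - 2i X vanishes at X = 0 and X = 2i: two saddles.
def omega0(X):
    return 1j * (1 - X**2) + X**3 / 3

def domega0(X):
    return X**2 - 2j * X

def find_saddle(X0, tol=1e-12, max_iter=50):
    """Complex Newton iteration on d(omega0)/dX = 0."""
    X = X0
    for _ in range(max_iter):
        d2 = 2 * X - 2j              # second derivative of omega0
        step = domega0(X) / d2
        X -= step
        if abs(step) < tol:
            return X
    raise RuntimeError("Newton iteration did not converge")

# Starting near each saddle converges to it; the global frequency is then
# omega0 evaluated at whichever saddle the selection criterion picks.
saddles = [find_saddle(0.1 + 0.1j), find_saddle(1 + 2j)]
```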
Abstract:
Division of labour is a marked feature of multicellular organisms. Margulis proposed that the ancestors of metazoans had only one microtubule organizing center (MTOC), so they could not move and divide simultaneously. Selection for simultaneous movement and cell division had driven the division of labour between cells. However, no evidence or explanation for this assumption was provided. Why could the unicellular ancestors not have multiple MTOCs? The gains and losses of three possible strategies are discussed. It was found that the advantage of one or two MTOCs per cell is environment-dependent. Unicellular organisms with only one MTOC per cell are favored only in resource-limited environments without strong predatory pressure. If the division of labour in a bicellular organism merely makes simultaneous movement and cell division possible, its chance of fixation by natural selection is very low, because a somatic cell that serves only as an MTOC is obviously wasting resources. Evolutionary biologists should search for other selective forces for the division of labour between cells.