948 results for Asymptotic Mean Squared Errors


Relevance: 20.00%

Abstract:

We consider the radially symmetric nonlinear von Kármán plate equations for circular or annular plates in the limit of small thickness. The loads on the plate consist of a radially symmetric pressure load and a uniform edge load. The dependence of the steady states on the edge load and thickness is studied using asymptotics as well as numerical calculations. The von Kármán plate equations are a singular perturbation of the Föppl membrane equation in the asymptotic limit of small thickness. We study the role of compressive membrane solutions in the small thickness asymptotic behavior of the plate solutions.

We give evidence for the existence of a singular compressive solution for the circular membrane and show by a singular perturbation expansion that the nonsingular compressive solutions approach this singular solution as the radial stress at the center of the plate vanishes. In this limit, an infinite number of folds occur with respect to the edge load. Similar behavior is observed for the annular membrane with zero edge load at the inner radius in the limit as the circumferential stress vanishes.

We develop multiscale expansions, which are asymptotic to members of this family for plates with edges that are elastically supported against rotation. At some thicknesses this approximation breaks down and a boundary layer appears at the center of the plate. In the limit of small normal load, the points of breakdown approach the bifurcation points corresponding to buckling of the nondeflected state. A uniform asymptotic expansion for small thickness, combining the boundary layer with a multiscale approximation of the outer solution, is developed for this case. These approximations complement the well-known boundary layer expansions based on tensile membrane solutions in describing the bending and stretching of thin plates. The approximation becomes inconsistent as the clamped state is approached by increasing the resistance against rotation at the edge. We prove that such an expansion for the clamped circular plate cannot exist unless the pressure load is self-equilibrating.

Abstract:

The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:

i) the mean exit time

ii) the phase-space distribution of exit locations.

When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.

Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.

The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
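The Monte Carlo check described above can be sketched in a few lines. The snippet below is an illustrative stand-in, not the thesis code: it uses an Euler-Maruyama discretization of the hypothetical one-dimensional system dX = -X dt + sqrt(2*eps) dW and estimates the mean time for paths started at the origin to exit the interval (-1, 1) against the confining flow.

```python
import numpy as np

def mean_exit_time(drift, eps, x0, a, b, dt=1e-3, n_paths=500, seed=0):
    """Euler-Maruyama estimate of the mean time for paths started at x0 to
    exit the interval (a, b) under  dX = drift(X) dt + sqrt(2*eps) dW."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    t = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        idx = np.flatnonzero(alive)
        dw = rng.standard_normal(idx.size)
        x[idx] += drift(x[idx]) * dt + np.sqrt(2.0 * eps * dt) * dw
        t[idx] += dt
        alive[idx] = (x[idx] > a) & (x[idx] < b)
    return t.mean()

# Hypothetical example: linear drift toward the origin, so reaching either
# boundary means exiting "against the flow"; eps sets the noise strength.
tau = mean_exit_time(lambda x: -x, eps=0.25, x0=0.0, a=-1.0, b=1.0)
```

For small eps the estimate grows exponentially in the barrier height, which is what makes the singular perturbation analysis of the boundary value problem necessary in the first place.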

Abstract:

We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_(MS-bar) = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z′ = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
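The overrelaxation update mentioned above has a compact form for O(3) spins: each unit spin is reflected about its local field, which changes the configuration while leaving the energy exactly invariant, so no accept/reject step is needed. The sketch below (a small 8x8 lattice with a sequential site loop, for clarity; not the production T-series/Connection Machine code) illustrates the move and its energy conservation.

```python
import numpy as np

def energy(spins):
    """Standard-action energy  -sum_<xy> s_x . s_y  with periodic
    boundaries; each bond is counted once per lattice direction."""
    e = 0.0
    for axis in (0, 1):
        e -= np.sum(spins * np.roll(spins, -1, axis=axis))
    return e

def overrelax_sweep(spins):
    """One microcanonical overrelaxation sweep: reflect each spin s about
    its local field h (sum of the four neighbours):
    s -> 2 (s.h) h / |h|^2 - s.  Since s'.h = s.h, the energy is exactly
    preserved and the spin stays on the unit sphere."""
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            h2 = h @ h
            if h2 > 1e-12:
                s = spins[i, j]
                spins[i, j] = 2.0 * (s @ h) / h2 * h - s

# Random unit-spin configuration on a hypothetical 8x8 lattice.
rng = np.random.default_rng(1)
s = rng.standard_normal((8, 8, 3))
s /= np.linalg.norm(s, axis=-1, keepdims=True)
e_before = energy(s)
overrelax_sweep(s)
e_after = energy(s)
```

Because the move is deterministic and energy-conserving, it must be interleaved with ergodic (e.g. Metropolis) updates, exactly as the abstract describes.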

We also use the cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two dimensional Ising spin model.
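Connected-component labeling is the core computational step of a cluster update. The sketch below uses a sequential union-find with path compression, the standard serial approach; it is not the SIMD algorithm devised in the thesis, just a minimal illustration of identifying clusters from bond arrays on an open-boundary lattice.

```python
import numpy as np

def find(parent, x):
    # Path-compressing find for the union-find forest.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def label_clusters(bond_right, bond_down):
    """Label connected components of an L x L lattice. bond_right[i,j]
    connects sites (i,j)-(i,j+1); bond_down[i,j] connects (i,j)-(i+1,j);
    open boundaries. Returns an L x L array of cluster labels (root ids)."""
    L = bond_right.shape[0]
    parent = list(range(L * L))
    for i in range(L):
        for j in range(L):
            site = i * L + j
            if j + 1 < L and bond_right[i, j]:
                ra, rb = find(parent, site), find(parent, site + 1)
                parent[ra] = rb
            if i + 1 < L and bond_down[i, j]:
                ra, rb = find(parent, site), find(parent, site + L)
                parent[ra] = rb
    return np.array([find(parent, s) for s in range(L * L)]).reshape(L, L)

# Hypothetical 3x3 example: bonds (0,0)-(0,1) and (0,1)-(1,1) form one cluster.
br = np.zeros((3, 3), dtype=bool); br[0, 0] = True
bd = np.zeros((3, 3), dtype=bool); bd[0, 1] = True
labels = label_clusters(br, bd)
```

In a Swendsen-Wang style update the bond arrays would be drawn stochastically from the spin configuration, and each labeled cluster then flipped as a unit.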

Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it is much closer to the Standard Action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for m/Λ_(MS-bar) at β = 2.14, 2.26, 2.38 and 2.50. To three loops, m/Λ_(MS-bar) = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_(MS-bar) = 2.943. Our last point, at β = 2.62, however, disagrees with this estimate.

Abstract:

This thesis consists of three separate studies of roles that black holes might play in our universe.

In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.

In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.

In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
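As a point of comparison for the GGMD transceiver above, the simplest matrix-decomposition transceiver uses the SVD: precoding with the right singular vectors and equalizing with the left ones converts the MIMO channel into parallel scalar subchannels (with unequal gains, unlike the equal-SINR subchannels of the GMD/GGMD designs). A minimal numerical sketch with a hypothetical random 4x4 channel:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = Nr = 4
# Hypothetical flat MIMO channel: i.i.d. complex Gaussian entries.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T          # precoder: transmit along the right singular vectors
G = U.conj().T           # equalizer: project onto the left singular vectors

# The equivalent channel G H F is diagonal: independent scalar subchannels
# whose gains are the singular values of H.
Heq = G @ H @ F
```

Because the subchannel gains differ, an SVD transceiver generally needs bit allocation across subchannels, which is precisely the overhead the equal-SINR GMD/GGMD designs avoid.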

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderate high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the super-imposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases, with known LTV channels and with unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator for the channel frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
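The difference co-array idea can be illustrated directly: the set of pairwise differences of M pilot positions supplies up to M(M-1)+1 distinct lags, which is the O(M^2) gain mentioned above. A minimal sketch with a hypothetical pilot placement on an OFDM tone grid:

```python
import numpy as np

def difference_coarray(pilot_positions):
    """All distinct pairwise differences (lags) of the pilot tone indices.
    For M physical pilots the co-array has up to M*(M-1) + 1 distinct
    lags, i.e. O(M^2) virtual pilots from M physical ones."""
    p = np.asarray(pilot_positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

# Hypothetical sparse placement of M = 5 pilots on a 64-tone grid,
# chosen so that all pairwise differences are distinct.
lags = difference_coarray([0, 1, 4, 9, 23])
```

With 5 physical pilots this placement yields 21 distinct lags (0 plus 10 positive/negative pairs), which is what lets a subspace method resolve far more delays than the physical pilot count alone would suggest.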

Abstract:

A novel method for gene enrichment has been developed and applied to mapping the rRNA genes of two eucaryotic organisms. The method makes use of antibodies to DNA/RNA hybrids prepared by injecting rabbits with the synthetic hybrid poly(rA)•poly(dT). Antibodies which cross-react with non-hybrid nucleic acids were removed from the purified IgG fraction by adsorption on columns of DNA-Sepharose, oligo(dT)-cellulose, and poly(rA)-Sepharose. Subsequent purification of the specific DNA/RNA hybrid antibody was carried out on a column of oligo(dT)-cellulose to which poly(rA) was hybridized. Attachment of these antibodies to CNBr-activated Sepharose produced an affinity resin which specifically binds DNA/RNA hybrids.

In order to map the rDNA of the slime mold Dictyostelium discoideum, R-loops were formed using unsheared nuclear DNA and the 17S and 26S rRNAs of this organism. This mixture was passed through a column containing the affinity resin, and bound molecules containing R-loops were eluted by high salt. This purified rDNA was observed directly in the electron microscope. Evidence was obtained that there is a physical end to Dictyostelium rDNA molecules approximately 10 kilobase pairs (kbp) from the region which codes for the 26S rRNA. This finding is consistent with reports of other investigators that the rRNA genes exist as inverted repeats on extra-chromosomal molecules of DNA unattached to the remainder of the nuclear DNA in this organism.

The same general procedure was used to map the rRNA genes of the rat. Molecules of DNA which contained R-loops formed with the 18S and 28S rRNAs were enriched approximately 150-fold from total genomic rat DNA by two cycles of purification on the affinity column. Electron microscopic measurements of these molecules enabled the construction of an R-loop map of rat rDNA. Eleven of the observed molecules contained three or four R-loops, or else two R-loops separated by a long spacer. These observations indicated that the rat rRNA genes are arranged as tandem repeats. The mean length of the repeating units was 37.2 kbp with a standard deviation of 1.3 kbp. These eleven molecules may represent repeating units of exactly the same length within the errors of the measurements, although a certain degree of length heterogeneity cannot be ruled out. If significantly shorter or longer repeating units exist, they are probably much less common than the 37.2 kbp unit.

The last section of the thesis describes the production of antibodies to non-histone chromosomal proteins which have been exposed to the ionic detergent sodium dodecyl sulfate (SDS). The presence of low concentrations of SDS did not seem to affect either production of antibodies or their general specificity. Also, a technique is described for the in situ immunofluorescent detection of protein antigens in polyacrylamide gels.

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
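The Ornstein-Uhlenbeck process used above as the model for the random inputs is easy to simulate exactly, since its one-step transition is Gaussian. The sketch below (with illustrative parameters, not those of the guidance problem) draws Monte Carlo paths of dX = -theta*X dt + sigma dW and checks that the terminal ensemble approaches the stationary variance sigma^2/(2*theta).

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n_steps, n_paths, seed=0):
    """Exact-discretization Monte Carlo of the Ornstein-Uhlenbeck process
    dX = -theta X dt + sigma dW: over one step of size dt the process is
    Gaussian with decay factor a = exp(-theta dt) and variance q below."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)                      # one-step decay factor
    q = sigma**2 / (2 * theta) * (1 - a**2)      # one-step variance
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        x = a * x + np.sqrt(q) * rng.standard_normal(n_paths)
    return x

# Hypothetical parameters; after t = 20 time units the ensemble variance
# should be near the stationary value sigma^2 / (2 theta) = 0.125.
xT = simulate_ou(theta=1.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=2000, n_paths=5000)
```

Because the transition is exact, the discretization step introduces no bias here, unlike an Euler scheme; that makes the process a convenient driving-noise model for simulation studies of the kind described.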

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
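The two-step testing idea above, using a closed-form acceleration whose exact integrals are known, can be illustrated with a simple sinusoidal test record. The snippet below (a hypothetical test signal, not the thesis's randomized expression) integrates the analytical acceleration numerically by the trapezoidal rule and measures the error against the exact velocity and displacement.

```python
import numpy as np

# Closed-form test acceleration: a(t) = w cos(w t), so that with zero
# initial conditions v(t) = sin(w t) and d(t) = (1 - cos(w t)) / w exactly.
w = 2 * np.pi            # 1 Hz test signal
dt = 0.01                # sampling interval, 100 samples/s
t = np.arange(0.0, 10.0 + dt / 2, dt)
a = w * np.cos(w * t)

# Trapezoidal integration, as an accelerogram processing routine would do.
v_num = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2) * dt))
d_num = np.concatenate(([0.0], np.cumsum((v_num[1:] + v_num[:-1]) / 2) * dt))

# Maximum error against the exact time histories.
v_err = np.max(np.abs(v_num - np.sin(w * t)))
d_err = np.max(np.abs(d_num - (1 - np.cos(w * t)) / w))
```

With a clean signal the integration error stays tiny; the point of the testing procedure is that noise and truncation can then be added to the analytical record to see how each processing step degrades.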

Abstract:

The object of this study was the preparation and administration of medications through feeding tubes by nursing staff in patients receiving enteral nutrition. The general objective was to investigate the pattern of preparation and administration of medications through tubes in patients concurrently receiving enteral nutrition. The specific objectives were to profile the medications prepared and administered according to their suitability for administration through an enteral tube, and to assess the type and frequency of errors occurring in the preparation and administration of medications through tubes. This was a cross-sectional, observational study without an intervention model, carried out in a hospital in Rio de Janeiro, where nursing technicians were observed preparing and administering medications through tubes in the Intensive Care Unit. A total of 350 medication doses were observed being prepared and administered. The most prevalent medication groups were those acting on the cardiovascular-renal system, with 164 doses (46.80%), followed by those acting on the respiratory system and on the blood, with 12.85% and 12.56% respectively. Nineteen different medications were found in the first group, two in the second and five in the third. The categories of preparation error were crushing, dilution and mixing. A mean error rate of 67.71% was found in medication preparation. Plain tablets were prepared incorrectly in 72.54% of doses, and all coated and extended-release tablets were improperly crushed. Among solid forms, the most prevalent error category was crushing, at 45.47%, and preparing medications mixed together was an error found in almost 40% of the doses of solid medications. Insufficient crushing occurred in 73.33% of doses of folic acid, 58.97% of amiodarone hydrochloride and 50.00% of bromopride.

Mixing with other medications occurred in 66.66% of doses of bromopride, 53.33% of amlodipine besylate, 43.47% of bamifylline, 40.00% of folic acid and 33.33% of acetylsalicylic acid. The administration errors were the absence of a pause and improper handling of the tube. The mean administration error rate was 32.64%, distributed as 17.14% for pauses and 48.14% for tube handling. Failure to flush the tube before administration was the most common error; failure to flush it afterwards was the least common. The medications most often involved in administration errors were amiodarone hydrochloride (n=39), captopril (n=33), hydralazine hydrochloride (n=7) and levothyroxine sodium (n=7). Flushing of the tube beforehand did not occur for 330 medication doses. Improper preparation and administration of medications can lead to losses in bioavailability, reduced serum levels and risks of intoxication for the patient. Preparing and administering medications are routine procedures, yet they showed high error rates, which may reflect these professionals' limited knowledge of good medication-therapy practice. There is a need for greater commitment from all the professionals involved (physicians, nurses and pharmacists) to medication-safety issues, as well as a need to rethink the nursing work process.

Abstract:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.

Abstract:

This work presents a theoretical and numerical study of the errors that occur in gradient calculations on unstructured meshes formed by the Voronoi diagram, meshes which are also obtained from the Delaunay triangulation. The meshes adopted in this work were Cartesian meshes and triangular meshes, the latter generated by dividing a square into two or four equal triangles. For this analysis, three distinct methodologies for computing the gradients were chosen: the Green-Gauss method, the minimum squared residual (least-squares) method and the corrected projected-gradient average method. The text rests on two main points: showing that the error equations given by the gradients can be similar, but with opposite signs, for computation points in neighboring volumes; and showing that the order of the error of the analytical equations can be improved on uniform meshes compared with non-uniform ones in the one-dimensional cases, and when analyzed at the face of such neighboring volumes in the two-dimensional cases.
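A minimal sketch of the Green-Gauss gradient on a uniform Cartesian mesh: taking face values as the average of the two adjacent cell values, the face-flux sum reduces to central differences, which are exact for a linear field; this is one way of seeing why the error order improves on uniform meshes. The mesh and field below are hypothetical examples, not those of the study.

```python
import numpy as np

def green_gauss_gradient(phi, dx, dy):
    """Green-Gauss cell-centred gradient on a uniform Cartesian mesh.
    With face values taken as the average of the two adjacent cell values,
    the flux sum (1/V) * sum_f phi_f n_f A_f collapses to central
    differences; gradients are returned for interior cells only."""
    gx = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * dx)
    gy = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * dy)
    return gx, gy

# Linear test field phi = 2x + 3y: the Green-Gauss gradient is exact here.
dx = dy = 0.1
x = np.arange(10) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi = 2.0 * X + 3.0 * Y
gx, gy = green_gauss_gradient(phi, dx, dy)
```

On a non-uniform or skewed mesh the same face-averaging introduces a first-order error term, which is the kind of neighbor-to-neighbor sign-alternating error the study analyzes.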

Abstract:

Mean velocity profiles were measured in the 5" x 60" wind channel of the turbulence laboratory at the GALCIT, by the use of a hot-wire anemometer. The repeatability of the results was established, and the accuracy of the instrumentation estimated. The scatter of the experimental results is little, if at all, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in flow conditions will also be responsible for some scatter.

Irregular behavior of a hot-wire in close proximity to a solid boundary at low speeds was observed, as has already been reported by others.

That Kármán's logarithmic law holds reasonably well over the main part of a fully developed turbulent flow was checked, the equation u/u_τ = 6.0 + 6.25 log10(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants in this law giving the best overall agreement were determined and compared with those obtained by others.
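Determining the constants of the logarithmic law from profile data amounts to a straight-line fit of u/u_τ against log10(y u_τ/ν). The sketch below generates hypothetical synthetic data from the law quoted above, with a little scatter, and recovers the constants by least squares; it illustrates the procedure only and is not the original data reduction.

```python
import numpy as np

# Hypothetical samples of (y u_tau / nu, u / u_tau) generated from the law
# u/u_tau = A + B log10(y u_tau / nu) with A = 6.0, B = 6.25 plus scatter.
rng = np.random.default_rng(2)
y_plus = np.logspace(1.5, 3.0, 40)                    # log-law region
u_plus = 6.0 + 6.25 * np.log10(y_plus) + 0.05 * rng.standard_normal(40)

# Least-squares straight-line fit in semi-log coordinates:
# polyfit returns [slope, intercept] = [B, A].
B, A = np.polyfit(np.log10(y_plus), u_plus, 1)
```

Restricting the fit to the region where viscous effects are small, as the text notes, is what keeps the points close to a single straight line.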

The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.