962 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method


Relevance:

100.00%

Publisher:

Abstract:

The probability distribution of the instantaneous incremental yield of an inelastic system is characterized in terms of a conditional probability and average rate of crossing. The detailed yield statistics of a single degree-of-freedom elasto-plastic system under a Gaussian white noise are obtained for both nonstationary and stationary response. The present analysis indicates that the yield damage is sensitive to viscous damping. The spectra of mean and mean square damage rate are presented.
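
A minimal Monte Carlo sketch (not the paper's analytical derivation) of the kind of system described above: an elastic-perfectly-plastic single-degree-of-freedom oscillator driven by Gaussian white noise, with the incremental plastic (yield) deformation accumulated at each step to estimate mean and mean-square damage rates. All parameter values and the noise-discretization convention are illustrative assumptions.

```python
import numpy as np

# Elastic-perfectly-plastic SDOF oscillator under Gaussian white noise (illustrative parameters).
m, c, k, fy = 1.0, 0.05, 1.0, 1.0        # mass, viscous damping, stiffness, yield force
S0, dt, T = 0.1, 1e-3, 200.0             # white-noise intensity, time step, duration
rng = np.random.default_rng(0)

x = v = xp = 0.0                          # displacement, velocity, plastic displacement
yield_increments = []

for _ in range(int(T / dt)):
    f_spring = k * (x - xp)
    if abs(f_spring) > fy:                # plastic flow: cap restoring force at +/- fy
        dxp = (abs(f_spring) - fy) / k * np.sign(f_spring)
        xp += dxp
        yield_increments.append(abs(dxp))
        f_spring = fy * np.sign(f_spring)
    else:
        yield_increments.append(0.0)
    w = rng.normal(0.0, np.sqrt(2 * np.pi * S0 / dt))   # discretized white-noise force
    a = (w - c * v - f_spring) / m
    v += a * dt
    x += v * dt

dy = np.array(yield_increments)
print("mean damage rate       :", dy.sum() / T)
print("mean-square damage rate:", (dy ** 2).sum() / T)
```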

Relevance:

100.00%

Publisher:

Abstract:

Molecular dynamics (MD) studies have been carried out on the Hoogsteen hydrogen-bonded parallel and the reverse Hoogsteen hydrogen-bonded antiparallel C.G*G triplexes. Earlier molecular mechanics studies had shown that the parallel structure was energetically more favourable than the antiparallel structure. To characterize the structural stability of the two triplexes and to investigate whether the antiparallel structure can transit to an energetically more favourable structure through local fluctuations during the MD simulation, the two structures were subjected to 200 ps of constant-temperature vacuum MD simulations at 300 K. Initially no constraints were applied to the structures, and it was observed that the antiparallel triplex showed a large root mean square deviation from the starting structure within the first 12 ps, and that the N4-H41-O6 hydrogen bond in the WC duplex became distorted due to a high propeller twist and a moderate increase in the opening angle of the base pairs. Starting from an initial value of 30 degrees, the helical twist of the average structure from this simulation had a value of 36 degrees, while the parallel structure stabilized at a twist of 33 degrees. In spite of the hydrogen bond distortions in the antiparallel triplex, it was energetically comparable to the parallel triplex. To examine the structural characteristics of an undistorted structure, another MD simulation was performed on the antiparallel triplex with all the hydrogen bonds constrained. This structure stabilized at an average twist of 33 degrees. Although the energy of the molecule improved over the course of the dynamics compared with the initial structure, it did not become comparable to that of the parallel structure. Energy minimization studies performed in the presence of explicit water and counterions also showed the two structures to be equally favourable energetically. Together, these results indicate that the parallel C.G*G triplex with Hoogsteen hydrogen bonds also represents a stereochemically and energetically favourable structure for this class of triplexes.
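
As a small illustration of the root mean square deviation used above to track structural drift, the sketch below computes the RMSD of each snapshot against the starting coordinates (no least-squares superposition step is included, although in practice frames would first be aligned). The trajectory array and its shape are placeholder assumptions.

```python
import numpy as np

def rmsd(coords, ref):
    """Root-mean-square deviation between two (N_atoms, 3) coordinate arrays."""
    diff = coords - ref
    return np.sqrt((diff * diff).sum(axis=1).mean())

# Hypothetical trajectory: n_frames snapshots of an N-atom structure drifting from the start.
rng = np.random.default_rng(1)
ref = rng.normal(size=(500, 3))
traj = ref + 0.1 * rng.normal(size=(200, 500, 3)).cumsum(axis=0)

rmsd_series = np.array([rmsd(frame, ref) for frame in traj])
print("RMSD of last frame from starting structure: %.2f" % rmsd_series[-1])
```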

Relevance:

100.00%

Publisher:

Abstract:

Beta-lactamase, which catalyzes the hydrolysis of beta-lactam antibiotics, is prototypical of large alpha/beta proteins with a scaffolding formed by strong noncovalent interactions. Experimentally, the enzyme is well characterized, and intermediates that are slightly less compact and have nearly the same content of secondary structure have been identified in the folding pathway. In the present study, high-temperature molecular dynamics simulations have been carried out on the native enzyme in solution. Analysis of these results in terms of root mean square fluctuations in Cartesian and [phi, psi] space, backbone dihedral angles, and secondary-structure hydrogen bonds forms the basis for an investigation of the topology of partially unfolded states of beta-lactamase. A differential stability has been observed for alpha-helices and beta-sheets upon thermal denaturation to putative unfolding intermediates. These observations contribute to an understanding of the folding/unfolding processes of beta-lactamases in particular, and of other alpha/beta proteins in general.

Relevance:

100.00%

Publisher:

Abstract:

Normal mode sound propagation in an isovelocity ocean with random narrow-band surface waves is considered, assuming the root-mean-square wave height to be small compared to the acoustic wavelength. Nonresonant interaction among the normal modes is studied using a straightforward perturbation technique. The more interesting case of resonant interaction is investigated using the method of multiple scales to obtain a pair of stochastic coupled amplitude equations, which are solved using the Peano-Baker expansion technique. Equations for the spatial evolution of the first and second moments of the mode amplitudes are also derived and solved. It is shown that, irrespective of the initial conditions, the mean values of the mode amplitudes tend to zero asymptotically with increasing range, the mean-square amplitudes tend towards a state of equipartition of energy, and the total energy of the modes is conserved.
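
A toy Monte Carlo sketch of two resonantly coupled mode amplitudes with a random, range-dependent coupling coefficient, integrated over many realizations to illustrate the qualitative behaviour stated above: ensemble-mean amplitudes decay, mean-square amplitudes approach equipartition, and total modal energy is conserved along range. The coupling model and all parameters are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_real, n_steps, dz = 2000, 4000, 0.05    # realizations, range steps, range step size
kappa_rms, corr_len = 0.2, 1.0            # RMS coupling strength, coupling correlation scale

a = np.zeros((n_real, 2), dtype=complex)  # mode amplitudes; all energy initially in mode 1
a[:, 0] = 1.0

kappa = np.zeros(n_real)
rho = np.exp(-dz / corr_len)
for _ in range(n_steps):
    # Random coupling coefficient, updated as a simple AR(1) (colored) process.
    kappa = rho * kappa + np.sqrt(1 - rho ** 2) * kappa_rms * rng.normal(size=n_real)
    # Energy-conserving coupled equations da1/dz = i*k*a2, da2/dz = i*k*a1, advanced
    # with the exact 2x2 rotation over one step (unitary, so |a1|^2 + |a2|^2 is preserved).
    phi = kappa * dz
    c, s = np.cos(phi), 1j * np.sin(phi)
    a1, a2 = a[:, 0].copy(), a[:, 1].copy()
    a[:, 0] = c * a1 + s * a2
    a[:, 1] = s * a1 + c * a2

print("|mean amplitudes|     :", np.abs(a.mean(axis=0)))            # tend toward 0
print("mean-square amplitudes:", (np.abs(a) ** 2).mean(axis=0))     # tend toward (0.5, 0.5)
print("total modal energy    :", (np.abs(a) ** 2).sum(axis=1).mean())  # conserved at 1.0
```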

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the design and performance analysis of a detector based on suprathreshold stochastic resonance (SSR) for the detection of deterministic signals in heavy-tailed non-Gaussian noise. The detector consists of a matched filter preceded by an SSR system which acts as a preprocessor. The SSR system is composed of an array of 2-level quantizers with independent and identically distributed (i.i.d.) noise added to the input of each quantizer. The standard deviation sigma of the quantizer noise is chosen to maximize the detection probability for a given false alarm probability. In the case of a weak signal, the optimum sigma also minimizes the mean-square difference between the output of the quantizer array and the output of the nonlinear transformation of the locally optimum detector. The optimum sigma depends only on the probability density functions (pdfs) of the input noise and quantizer noise for weak signals, and also on the signal amplitude and the false alarm probability for non-weak signals. Improvement in detector performance stems primarily from quantization and, to a lesser extent, from the optimization of the quantizer noise. For most input noise pdfs, the performance of the SSR detector is very close to that of the optimum detector. (C) 2012 Elsevier B.V. All rights reserved.
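
A schematic sketch of the detector structure described above: a known weak signal in heavy-tailed noise passes through an array of two-level quantizers with independent Gaussian quantizer noise of standard deviation sigma, the quantizer outputs are averaged, and the result is matched-filtered against the signal template. The parameter values, the fixed sigma, and the choice of Laplacian input noise are assumptions for illustration; in the paper sigma is chosen by maximizing the detection probability.

```python
import numpy as np

rng = np.random.default_rng(3)

def ssr_matched_filter_statistic(x, template, n_quant=63, sigma=1.0):
    """Average of an array of 2-level quantizers (SSR preprocessor), then matched filter."""
    q_noise = sigma * rng.normal(size=(n_quant, x.size))
    y = np.sign(x[None, :] + q_noise).mean(axis=0)   # SSR output for each sample
    return np.dot(y, template)                        # matched-filter test statistic

N = 256
template = 0.1 * np.sin(2 * np.pi * 4 * np.arange(N) / N)   # weak deterministic signal
noise = rng.laplace(scale=1.0, size=N)                        # heavy-tailed input noise

print("statistic with signal   :", ssr_matched_filter_statistic(template + noise, template))
print("statistic without signal:", ssr_matched_filter_statistic(noise, template))
```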

Relevance:

100.00%

Publisher:

Abstract:

The enzyme SAICAR synthetase ligates aspartate with CAIR (5'-phosphoribosyl-4-carboxy-5-aminoimidazole), forming SAICAR (5-amino-4-imidazole-N-succinocarboxamide ribonucleotide) in the presence of ATP. In continuation of our previous study on the thermostability of this enzyme in hyper-/thermophiles based on structural aspects, here we present the dynamic aspects that differentiate the mesophilic (E. coli, E. chaffeensis), thermophilic (G. kaustophilus), and hyperthermophilic (M. jannaschii, P. horikoshii) SAICAR synthetases, by carrying out a total of 11 simulations. The five functional dimers from the above organisms were simulated using molecular dynamics for a period of 50 ns each at 300 K and 363 K, with an additional simulation at 333 K for the thermophilic protein. Basic features such as root-mean-square deviations, root-mean-square fluctuations, surface accessibility, and radius of gyration revealed the instability of the mesophiles at 363 K. Mean square displacements establish the reduced flexibility of hyper-/thermophiles at all temperatures. At the simulation time scale considered here, the long-distance networks are considerably affected in the mesophilic structures at 363 K. In the mesophiles, a comparatively higher number of short-lived (lower percentage existence time) C-alpha contacts, hydrogen bonds, and hydrophobic interactions are formed, and long-lived (higher percentage existence time) contacts are lost. The number of time-averaged salt bridges is at least 2-fold higher in the hyperthermophiles at 363 K. The change in surface accessibility of salt bridges from 300 K to 363 K is nearly double in the mesophilic protein compared to the proteins from the other temperature classes.
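
For readers unfamiliar with one of the basic descriptors listed above, the snippet below shows a mass-weighted radius of gyration computed from a single coordinate frame; the coordinates and masses are placeholder arrays, not data from these simulations.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of an (N, 3) coordinate frame."""
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = ((coords - com) ** 2).sum(axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

rng = np.random.default_rng(4)
coords = rng.normal(scale=15.0, size=(1200, 3))   # placeholder protein-sized coordinates (Angstrom)
masses = rng.uniform(12.0, 16.0, size=1200)       # placeholder atomic masses
print("Rg = %.2f A" % radius_of_gyration(coords, masses))
```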

Relevance:

100.00%

Publisher:

Abstract:

A new numerical method for solving the axisymmetric unsteady incompressible Navier-Stokes equations using vorticity-velocity variables and a staggered grid is presented. The solution is advanced in time with an explicit two-stage Runge-Kutta method. At each stage a vector Poisson equation for velocity is solved. Some important aspects of the staggering of the variable locations, the divergence-free correction to the velocity field by means of a suitably chosen scalar potential, and the numerical treatment of the vorticity boundary condition are examined. The axisymmetric spherical Couette flow between two concentric, differentially rotating spheres is computed as an initial value problem. Comparison of the computational results using a staggered grid with those using a non-staggered grid shows that the staggered grid is superior. The computed scenario of the transition from zero-vortex to two-vortex flow at moderate Reynolds number agrees with that simulated using a pseudospectral method, thus validating the temporal accuracy of our method.
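
The time-marching scheme mentioned above can be illustrated in isolation: a generic explicit two-stage (second-order) Runge-Kutta step applied to a semi-discrete system du/dt = f(t, u). In the paper each stage additionally solves a vector Poisson equation for velocity; that part is omitted here, and the right-hand side is a placeholder.

```python
import numpy as np

def rk2_step(u, t, dt, f):
    """One explicit two-stage (Heun) Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt, u + dt * k1)
    return u + 0.5 * dt * (k1 + k2)

# Placeholder right-hand side (a discretized vorticity transport operator would go here).
def rhs(t, u):
    return -u + np.sin(t)

u, t, dt = np.zeros(10), 0.0, 0.01
for _ in range(1000):
    u = rk2_step(u, t, dt, rhs)
    t += dt
print(u[0])
```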

Relevance:

100.00%

Publisher:

Abstract:

We present a method of image-speckle contrast for the measurement, without pre-calibration, of the root-mean-square roughness and the lateral correlation length of random surfaces with Gaussian correlation. A simplified model of the speckle fields produced by a weak scattering object is used in the theoretical analysis. The explicit mathematical relation shows that the saturation value of the image-speckle contrast at a large aperture radius determines the roughness, while the variation of the contrast with the aperture radius determines the lateral correlation length. For the experimental verification, we fabricate random surface samples with Gaussian correlation. The square of the image-speckle contrast is measured versus the radius of the aperture in the 4f system, and the roughness and the lateral correlation length are extracted by fitting the theoretical result to the experimental data. Comparison of the measurement with that obtained by an atomic force microscope shows that our method has satisfactory accuracy. (C) 2002 Optical Society of America.
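
A numerical sketch of the measurement principle: a weakly scattering random phase screen with Gaussian correlation is imaged through a 4f system whose circular pupil radius is varied, and the square of the image-speckle contrast (variance over squared mean of the intensity) is recorded for each radius. Grid size, surface parameters, and the pupil model are illustrative assumptions, and no fit to the paper's analytic relation is attempted.

```python
import numpy as np

rng = np.random.default_rng(5)
N, sigma_h, xi = 512, 0.3, 4.0            # grid size, RMS phase (weak scatterer), correlation length (pixels)

# Gaussian-correlated random phase screen: filter white noise with a Gaussian kernel in Fourier space.
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
gauss_filt = np.exp(-(np.pi * xi) ** 2 * (FX ** 2 + FY ** 2))
phase = np.fft.ifft2(np.fft.fft2(rng.normal(size=(N, N))) * gauss_filt).real
phase *= sigma_h / phase.std()

field = np.exp(1j * phase)                # weak scatterer: unit amplitude, random phase

for radius in (8, 16, 32, 64, 128):       # pupil (aperture) radius in Fourier pixels
    F = np.fft.fftshift(np.fft.fft2(field))
    Y, X = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
    pupil = (X ** 2 + Y ** 2) <= radius ** 2
    image = np.fft.ifft2(np.fft.ifftshift(F * pupil))
    I = np.abs(image) ** 2
    contrast_sq = I.var() / I.mean() ** 2
    print(f"pupil radius {radius:4d}: squared image-speckle contrast = {contrast_sq:.4f}")
```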

Relevance:

100.00%

Publisher:

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system-implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD) and the generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of these matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix, in an iterative manner, as a product of several sets of semi-unitary matrices and upper triangular matrices. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance with the ST-GMD DFE transceiver, is also proposed.

The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases, with known LTV channels and with unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
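
A small sketch of the co-array idea used in the last paragraph: from M physical pilot tone positions, the difference co-array is the set of all pairwise index differences, which can contain on the order of M^2 distinct lags. The pilot placement below is an arbitrary example, not the alternating placement proposed in the thesis.

```python
import numpy as np

pilots = np.array([0, 1, 4, 9, 11])                   # example pilot tone indices (M = 5)
diffs = (pilots[:, None] - pilots[None, :]).ravel()   # all pairwise differences
co_array = np.unique(diffs)

print("physical pilots:", len(pilots))
print("co-array lags  :", len(co_array), "->", co_array)
```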

Relevance:

100.00%

Publisher:

Abstract:

Using (CH3)2Si(OC2H5)2 as the precursor, a stable coating solution was prepared by combining the sol-gel process with organic synthesis. Moisture-proof films were deposited by spin coating onto the end faces of Nd-doped phosphate laser glass rods; after curing, the film transmittance reached 96.5%. The film surface quality was excellent, with a root-mean-square (RMS) surface roughness of 1.659 nm and an average roughness (RA) of 1.321 nm. At a laser wavelength of 1053 nm and a pulse width of 1 ns, the laser damage threshold of the film reached 10-14 J/cm^2. Through physics experiment runs on the Shenguang-II (SG-II) high-power laser facility, the film has shown a service life of five years, and it has already been trialled on China's Shenguang-III (SG-III) prototype facility.

Relevance:

100.00%

Publisher:

Abstract:

The use of techniques based on the Tikhonov functional in image processing has become widespread in recent years. The basic idea is to modify an initial image via a convolution equation and to find a parameter that minimizes this functional in order to obtain an approximation of the original image. A typical problem with this method, however, is the selection of a regularization parameter that adequately balances the accuracy and the stability of the solution. A method developed by researchers at IPRJ and UFRJ working in the field of inverse problems consists of minimizing a residual functional with respect to the Tikhonov regularization parameter. A strategy that searches iteratively for this parameter, seeking a minimum value of the functional at the next iteration, was recently adopted in a serial restoration algorithm. However, the computational cost is a problem when this iterative search is employed. With this in mind, this work presents a C++ implementation that applies parallel computing techniques using MPI (Message Passing Interface) to the functional-minimization strategy with the iterative search method, thereby reducing the execution time required by the algorithm. A modified version of the Jacobi method is considered in two versions of the algorithm, one serial and one parallel. This algorithm is well suited to parallel implementation because it has no data dependencies, unlike Gauss-Seidel, which is also shown to converge. As a performance indicator for evaluating the restoration algorithm, in addition to the traditional measures, a new metric based on subjective criteria, called IWMSE (Information Weighted Mean Square Error), is employed. These metrics were introduced into the serial image-processing program and allow the restoration to be analysed at each iteration step. The results obtained with the two versions made it possible to verify the speedup and the efficiency of the parallel implementation. The parallel approach produced satisfactory results in less processing time and with acceptable performance.
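
A serial, frequency-domain sketch of Tikhonov-regularized restoration with a simple sweep over the regularization parameter; it is only meant to convey the structure of the problem (convolution model, regularized inverse, residual criterion) and is not the IPRJ/UFRJ algorithm, the MPI-parallel version, or the modified Jacobi solver described above. The blur kernel, image, and the discrepancy-style selection rule used here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic degraded image: Gaussian blur plus noise (illustrative degradation model).
N, noise_std = 128, 0.01
x_true = np.zeros((N, N)); x_true[40:90, 40:90] = 1.0
fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
H = np.exp(-2.0 * (np.pi * 3.0) ** 2 * (fx ** 2 + fy ** 2))            # blur transfer function
g = np.fft.ifft2(H * np.fft.fft2(x_true)).real + noise_std * rng.normal(size=(N, N))

G = np.fft.fft2(g)
target = noise_std * N                                                  # expected residual norm
best = None
for alpha in np.logspace(-6, 0, 30):
    X_hat = np.conj(H) * G / (np.abs(H) ** 2 + alpha)                   # Tikhonov-regularized inverse
    residual = np.linalg.norm(np.fft.ifft2(H * X_hat).real - g)
    score = abs(residual - target)                                       # discrepancy-style criterion
    if best is None or score < best[0]:
        best = (score, alpha, X_hat)

print("selected regularization parameter:", best[1])
restored = np.fft.ifft2(best[2]).real
```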

Relevance:

100.00%

Publisher:

Abstract:

Quality of cardiopulmonary resuscitation (CPR) improves through the use of CPR feedback devices. Most feedback devices integrate the acceleration twice to estimate compression depth; however, they use additional sensors or processing techniques to compensate for the large displacement drifts caused by integration. This study introduces an accelerometer-based method that avoids integration by applying spectral techniques to short-duration acceleration intervals. We used a manikin placed on a hard surface, a sternal triaxial accelerometer, and a photoelectric distance sensor (gold standard). Twenty volunteers provided 60 s of continuous compressions to test various rates (80-140 min(-1)), depths (3-5 cm), and accelerometer misalignment conditions. A total of 320 records with 35312 compressions were analysed. The global root-mean-square errors in rate and depth were below 1.5 min(-1) and 2 mm for analysis intervals between 2 and 5 s. For 3 s analysis intervals, the 95% levels of agreement between the method and the gold standard were within -1.64-1.67 min(-1) and -1.69-1.72 mm, respectively. Accurate feedback on chest compression rate and depth is feasible by applying spectral techniques to the acceleration. The method avoids the additional techniques needed to compensate for the integration displacement drift, improving accuracy and simplifying current accelerometer-based devices.
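
A minimal sketch of the spectral idea described above, applied to a synthetic, roughly sinusoidal compression signal: the dominant frequency of a short acceleration window gives the compression rate, and for near-sinusoidal motion the displacement amplitude follows from the acceleration amplitude divided by (2*pi*f)^2, so no double integration (and no drift compensation) is needed. The signal model, sampling rate, and window length are assumptions; the published method is more elaborate.

```python
import numpy as np

fs = 250.0                                 # accelerometer sampling rate (Hz), assumed
rate_true, depth_true = 110.0, 0.045       # compressions per minute; depth in metres (4.5 cm)
f0 = rate_true / 60.0
t = np.arange(0, 3.0, 1 / fs)              # 3 s analysis window
acc = (depth_true / 2) * (2 * np.pi * f0) ** 2 * np.cos(2 * np.pi * f0 * t)  # sinusoidal chest motion
acc += 0.5 * np.random.default_rng(7).normal(size=t.size)                     # sensor noise

# Spectral estimate: the dominant peak gives the rate; its amplitude gives the depth without integrating.
win = np.hanning(acc.size)
nfft = 8 * acc.size                        # zero-padding to refine the peak location
spec = np.fft.rfft(acc * win, n=nfft)
freqs = np.fft.rfftfreq(nfft, 1 / fs)
band = (freqs > 1.0) & (freqs < 3.0)       # plausible compression band (60-180 min^-1)
k = np.argmax(np.abs(spec[band]))
f_est = freqs[band][k]
a_amp = 2 * np.abs(spec[band][k]) / win.sum()        # acceleration amplitude, window-corrected
depth_est = 2 * a_amp / (2 * np.pi * f_est) ** 2     # peak-to-peak displacement

print("rate  estimate: %.1f min^-1" % (60 * f_est))
print("depth estimate: %.1f mm" % (1000 * depth_est))
```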

Relevance:

100.00%

Publisher:

Abstract:

In a wide range of physical problems governed by differential equations, it is often of interest to obtain solutions for the transient regime, and therefore time-integration techniques must be employed. A first option would be to apply explicit methods, because of their simplicity and computational efficiency. However, these methods are often only conditionally stable and are subject to severe restrictions on the choice of the time step. For advective problems, governed by hyperbolic equations, this restriction is known as the Courant-Friedrichs-Lewy (CFL) condition. When numerical solutions are needed over long time intervals, or when the computational cost of each step is high, this condition becomes an obstacle. To circumvent this restriction, implicit methods, which are generally unconditionally stable, are used. In this work, implicit formulations for time integration were applied within the Smoothed Particle Hydrodynamics (SPH) method, so as to allow larger time increments and strong stability in the time-marching process. Because of the high computational cost of the particle search at each time step, this implementation is only viable if efficient algorithms are used for the type of matrix structure considered, such as Krylov subspace methods. Therefore, a study was carried out to choose the methods best suited to this problem, and the Bi-Conjugate Gradient (BiCG), Bi-Conjugate Gradient Stabilized (BiCGSTAB), and Quasi-Minimal Residual (QMR) methods were selected. Several test problems were used to validate the numerical solutions obtained with the implicit version of the SPH method.
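
A minimal sketch, outside of any SPH code, of the step that motivates the Krylov solvers named above: a backward-Euler (implicit) time step for a semi-discrete problem du/dt = L u leads to the sparse system (I - dt*L) u^{n+1} = u^n, which is then solved with BiCGSTAB. The operator and sizes are placeholders; the SPH-derived matrices of the dissertation would take their place.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Placeholder semi-discrete operator: 1D Laplacian on n points (stands in for the SPH-derived matrix).
n, dt = 1000, 0.1
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
L = sp.diags([off, main, off], [-1, 0, 1], format="csr")

A = sp.identity(n, format="csr") - dt * L          # backward-Euler system matrix
u = np.exp(-np.linspace(-5, 5, n) ** 2)            # initial condition

for _ in range(100):                               # implicit time marching, unconditionally stable
    u, info = bicgstab(A, u)                       # Krylov solve of (I - dt*L) u_new = u_old
    if info != 0:
        raise RuntimeError("BiCGSTAB did not converge")
```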

Relevance:

100.00%

Publisher:

Abstract:

Discriminating phases that are practically indistinguishable under a reflected-light optical microscope or a scanning electron microscope (SEM) is one of the classical problems of ore microscopy. To address this problem, the technique of co-localized microscopy has recently been employed; it combines two microscopy modalities, optical microscopy and scanning electron microscopy. The goal of the technique is to provide a multimodal microscopy image, making it possible to identify, in mineral samples, phases that would not be distinguishable with a single modality, thereby overcoming the individual limitations of the two systems. The registration method previously available in the literature for fusing optical and SEM images is a laborious procedure that is highly dependent on operator interaction, since it involves calibrating the system with a standard grid at every image-acquisition routine; for this reason, the existing technique is not practical. This work proposes a methodology to automate the registration of optical and SEM images so as to improve and simplify the use of co-localized microscopy. The proposed method can be divided into two procedures: obtaining the transformation, and registering the images using this transformation. Obtaining the transformation first involves pre-processing the image pairs so as to perform a coarse registration between the images of each pair. Homologous points are then obtained in the optical and SEM images. For this, two methods were used: the first based on the SIFT algorithm, and the second defined by scanning for the maximum value of the correlation coefficient. In the next stage the transformation is computed. Two distinct approaches were employed: the local weighted mean (LWM) and weighted least squares with orthogonal polynomials (MQPPO). The LWM takes as input so-called pseudo-homologous points, points that are forcibly distributed in a regular pattern over the reference image and that reveal, in the image to be registered, the local relative displacements between the images. These pseudo-homologous points can be obtained either by SIFT or by the correlation-coefficient method. The MQPPO, in contrast, receives a set of points with their natural distribution. The analysis of the resulting registrations used the correlation between the registered images as the metric. The proposed SIFT-LWM and SIFT-Correlation variants yielded results slightly better than those of the method using the standard grid and LWM. The proposal therefore not only drastically reduces operator intervention but also produces more accurate results. The method based on the transformation provided by weighted least squares with orthogonal polynomials, on the other hand, showed results inferior to those produced by the method that uses the standard grid.
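
The correlation-based variant of the homologous-point search can be summarized with a small normalized cross-correlation routine: a patch around a point in the reference (optical) image is swept over a search window in the image to be registered (SEM), and the position of maximum correlation coefficient is taken as the corresponding point. Patch and window sizes are arbitrary here, and the SIFT variant and the LWM/weighted-least-squares transformations are not reproduced.

```python
import numpy as np

def ncc_best_match(patch, window):
    """Position of maximum normalized cross-correlation of a patch inside a larger window."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    best, best_pos = -2.0, (0, 0)
    for i in range(window.shape[0] - ph + 1):
        for j in range(window.shape[1] - pw + 1):
            w = window[i:i + ph, j:j + pw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = (p * wn).mean()            # correlation coefficient
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Hypothetical data: the "SEM" image is a shifted, noisier copy of the "optical" image.
rng = np.random.default_rng(8)
optical = rng.normal(size=(200, 200))
sem = np.roll(optical, (3, -5), axis=(0, 1)) + 0.2 * rng.normal(size=(200, 200))

patch = optical[80:112, 80:112]                # 32x32 patch around a candidate point
window = sem[60:140, 60:140]                   # search window in the image to be registered
pos, score = ncc_best_match(patch, window)
print("best match offset in window:", pos, "correlation:", round(score, 3))
```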

Relevance:

100.00%

Publisher:

Abstract:

In the field of motor control, two hypotheses have been controversial: whether the brain acquires internal models that generate accurate motor commands, or whether the brain avoids this by using the viscoelasticity of the musculoskeletal system. Recent observations of relatively low stiffness during trained movements support the existence of internal models. However, no study has revealed the decrease in viscoelasticity associated with learning that would imply improvement of internal models as well as synergy between the two hypothetical mechanisms. Previously observed decreases in electromyogram (EMG) might have other explanations, such as trajectory modifications that reduce joint torques. To circumvent such complications, we required strict trajectory control and examined only successful trials having identical trajectory and torque profiles. Subjects were asked to perform a hand movement in unison with a target moving along a specified and unusual trajectory, with shoulder and elbow in the horizontal plane at shoulder level. To evaluate joint viscoelasticity during the learning of this movement, we proposed an index of muscle co-contraction around the joint (IMCJ). The IMCJ was defined as the summation of the absolute values of antagonistic muscle torques around the joint and computed from the linear relation between surface EMG and joint torque. The IMCJ during isometric contraction, as well as during movements, was confirmed to correlate well with joint stiffness estimated using the conventional method, i.e., applying mechanical perturbations. Accordingly, the IMCJ during the learning of the movement was computed for each joint of each trial using the estimated EMG-torque relationship. At the same time, the performance error for each trial was specified as the root mean square of the distance between the target and the hand at each time step over the entire trajectory. The time-series data of IMCJ and performance error were decomposed into long-term components, which showed decreases in IMCJ with learning and little change in the trajectory, and short-term interactions between the IMCJ and the performance error. A cross-correlation analysis and impulse responses both suggested that higher IMCJs follow poor performances, and lower IMCJs follow good performances, within a few successive trials. Our results support the hypothesis that viscoelasticity contributes more when internal models are inaccurate, while internal models contribute more after the completion of learning. It is demonstrated that the CNS regulates viscoelasticity on a short- and long-term basis depending on performance error and finally acquires smooth and accurate movements while maintaining stability during the entire learning process.
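
The two quantities tracked across trials above can be written compactly; the sketch below computes a per-trial IMCJ as the time-averaged sum of absolute agonist and antagonist torques obtained from an assumed linear EMG-to-torque mapping, and the performance error as the root mean square of the target-hand distance over the trajectory. The EMG gains and signals are placeholders, not the calibrated relation estimated in the study.

```python
import numpy as np

def imcj(emg_agonist, emg_antagonist, gain_ag, gain_ant):
    """Index of muscle co-contraction around a joint: time-averaged sum of the absolute values of
    the antagonistic muscle torques, under an assumed linear EMG-torque relation."""
    torque_ag = gain_ag * emg_agonist
    torque_ant = -gain_ant * emg_antagonist
    return (np.abs(torque_ag) + np.abs(torque_ant)).mean()

def performance_error(hand_xy, target_xy):
    """Root mean square of the hand-target distance over the entire trajectory."""
    d = np.linalg.norm(hand_xy - target_xy, axis=1)
    return np.sqrt((d ** 2).mean())

rng = np.random.default_rng(9)
T = 500                                                      # time steps in one trial
emg_flexor, emg_extensor = rng.random(T), rng.random(T)      # placeholder rectified EMG envelopes
hand = rng.normal(scale=0.01, size=(T, 2))                   # placeholder hand trajectory (m)
target = np.zeros((T, 2))                                    # placeholder target trajectory (m)

print("IMCJ (a.u.)  :", imcj(emg_flexor, emg_extensor, gain_ag=8.0, gain_ant=7.0))
print("RMS error (m):", performance_error(hand, target))
```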