971 results for Numerical Evaluation
Abstract:
We consider the classical problem of sequential detection of change in a distribution (from hypothesis 0 to hypothesis 1), where the fusion centre receives vectors of periodic measurements, with the measurements being i.i.d. over time and across the vector components, under each of the two hypotheses. In our problem, the sensor devices ("motes") that generate the measurements constitute an ad hoc wireless network. The motes contend using a random access protocol (such as CSMA/CA) to transmit their measurement packets to the fusion centre. The fusion centre waits for vectors of measurements to accumulate before taking decisions. We formulate the optimal detection problem, taking into account the network delay experienced by the vectors of measurements, and find that, under periodic sampling, the detection delay decouples into network delay and decision delay. We obtain a lower bound on the network delay, and propose a censoring scheme, where lagging sensors drop their delayed observations in order to mitigate network delay. We show that this scheme can achieve the lower bound. This approach is explored via simulation. We also use numerical evaluation and simulation to study issues such as the optimal sampling rate for a given number of sensors, and the optimal number of sensors for a given measurement rate.
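For illustration only (the paper's decision rule comes from its optimal stopping formulation and is not reproduced here), the sketch below implements a generic CUSUM-type fusion-centre test applied to successive measurement vectors, assuming Gaussian pre- and post-change distributions with hypothetical parameters.

```python
import numpy as np

def cusum_fusion(vectors, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Sequential change detection over vectors of i.i.d. measurements.

    vectors   : iterable of 1-D arrays, one vector per batch delivered to
                the fusion centre (network delay is ignored in this sketch).
    mu0, mu1  : assumed pre-/post-change means (hypothetical values).
    sigma     : common standard deviation (assumed known).
    threshold : alarm threshold on the CUSUM statistic.
    Returns the index of the first batch at which an alarm is raised, or None.
    """
    s = 0.0
    for k, x in enumerate(vectors):
        x = np.asarray(x)
        # Log-likelihood ratio of the whole vector under H1 vs H0 (Gaussian).
        llr = np.sum((mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2)
        s = max(0.0, s + llr)          # CUSUM recursion
        if s > threshold:
            return k                   # decision delay measured in batches
    return None

# Toy usage: change occurs after 50 batches of 10 measurements each.
rng = np.random.default_rng(0)
pre  = [rng.normal(0.0, 1.0, size=10) for _ in range(50)]
post = [rng.normal(1.0, 1.0, size=10) for _ in range(50)]
print(cusum_fusion(pre + post))
```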
Abstract:
This paper considers antenna selection (AS) at a receiver equipped with multiple antenna elements but only a single radio frequency chain for packet reception. As information about the channel state is acquired using training symbols (pilots), the receiver makes its AS decisions based on noisy channel estimates. Additional information that can be exploited for AS includes the time-correlation of the wireless channel and the results of the link-layer error checks upon receiving the data packets. In this scenario, the task of the receiver is to sequentially select (a) the pilot symbol allocation, i.e., how to distribute the available pilot symbols among the antenna elements, for channel estimation on each of the receive antennas; and (b) the antenna to be used for data packet reception. The goal is to maximize the expected throughput, based on the past history of allocation and selection decisions, and the corresponding noisy channel estimates and error check results. Since the channel state is only partially observed through the noisy pilots and the error checks, the joint problem of pilot allocation and AS is modeled as a partially observed Markov decision process (POMDP). The solution to the POMDP yields the policy that maximizes the long-term expected throughput. Using the Finite State Markov Chain (FSMC) model for the wireless channel, the performance of the POMDP solution is compared with that of other existing schemes, and it is illustrated through numerical evaluation that the POMDP solution significantly outperforms them.
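To make the belief-state idea concrete, the sketch below (Python, not taken from the paper) tracks a per-antenna belief over a hypothetical two-state channel and makes a myopic selection; the paper's FSMC, pilot/ACK observation model and the optimal POMDP policy are considerably richer, so all numbers here are assumptions.

```python
import numpy as np

# Hypothetical two-state channel per antenna: state 0 = "bad", 1 = "good".
P = np.array([[0.9, 0.1],          # assumed state transition matrix
              [0.2, 0.8]])

def update_belief(belief, likelihood):
    """One POMDP belief update: Bayes step with the observation likelihood of
    the current pilot/error-check outcome, then propagation through the
    Markov chain to the next slot."""
    posterior = belief * likelihood
    posterior /= posterior.sum()
    return posterior @ P

def greedy_antenna(beliefs):
    """Myopic (non-optimal) selection: pick the antenna whose belief puts
    the most mass on the 'good' state."""
    return int(np.argmax([b[1] for b in beliefs]))

# Example: two antennas, a pilot observed only on antenna 0 this slot.
beliefs = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
pilot_likelihood = np.array([0.2, 0.9])        # assumed P(observation | state)
beliefs[0] = update_belief(beliefs[0], pilot_likelihood)
beliefs[1] = beliefs[1] @ P                    # unobserved antenna: predict only
print(greedy_antenna(beliefs))
```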
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
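For context (well-established background rather than a result of this work), the lowest-order Ore-Powell rate to which the sought first-order correction applies is, in natural units with m the electron mass,

\[
\Gamma_0(\mathrm{o\text{-}Ps}\to 3\gamma) \;=\; \frac{2(\pi^2-9)}{9\pi}\, m\,\alpha^6,
\qquad
\Gamma \;=\; \Gamma_0\left[\,1 + A\,\frac{\alpha}{\pi} + \cdots\right],
\]

where the dimensionless coefficient A is precisely the quantity that the first-order computation described here aims to determine.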
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
The wave-theoretical analysis of acoustic and elastic waves refracted by a spherical boundary across which both velocity and density increase abruptly and thence either increase or decrease continuously with depth is formulated in terms of the general problem of waves generated at a steady point source and scattered by a radially heterogeneous spherical body. A displacement potential representation is used for the elastic problem that results in high frequency decoupling of P-SV motion in a spherically symmetric, radially heterogeneous medium. Through the application of an earth-flattening transformation on the radial solution and the Watson transform on the sum over eigenfunctions, the solution to the spherical problem for high frequencies is expressed as a Weyl integral for the corresponding half-space problem in which the effect of boundary curvature maps into an effective positive velocity gradient. The results of both analytical and numerical evaluation of this integral can be summarized as follows for body waves in the crust and upper mantle:
1) In the special case of a critical velocity gradient (a gradient equal and opposite to the effective curvature gradient), the critically refracted wave reduces to the classical head wave for flat, homogeneous layers.
2) For gradients more negative than critical, the amplitude of the critically refracted wave decays more rapidly with distance than the classical head wave.
3) For positive, null, and gradients less negative than critical, the amplitude of the critically refracted wave decays less rapidly with distance than the classical head wave, and at sufficiently large distances, the refracted wave can be adequately described in terms of ray-theoretical diving waves. At intermediate distances from the critical point, the spectral amplitude of the refracted wave is scalloped due to multiple diving wave interference.
These theoretical results applied to published amplitude data for P-waves refracted by the major crustal and upper mantle horizons (the Pg, P*, and Pn travel-time branches) suggest that the 'granitic' upper crust, the 'basaltic' lower crust, and the mantle lid all have negative or near-critical velocity gradients in the tectonically active western United States. On the other hand, the corresponding horizons in the stable eastern United States appear to have null or slightly positive velocity gradients. The distribution of negative and positive velocity gradients correlates closely with high heat flow in tectonic regions and normal heat flow in stable regions. The velocity gradients inferred from the amplitude data are generally consistent with those inferred from ultrasonic measurements of the effects of temperature and pressure on crustal and mantle rocks and probable geothermal gradients. A notable exception is the strong positive velocity gradient in the mantle lid beneath the eastern United States (2 × 10⁻³ sec⁻¹), which appears to require a compositional gradient to counter the effect of even a small geothermal gradient.
New seismic-refraction data were recorded along an 800-km profile extending due south from the Canadian border across the Columbia Plateau into eastern Oregon. The source for the seismic waves was a series of 20 high-energy chemical explosions detonated by the Canadian government in Greenbush Lake, British Columbia. The first arrivals recorded along this profile are on the Pn travel-time branch. In northern Washington and central Oregon their travel time is described by T = Δ/8.0 + 7.7 sec, but in the Columbia Plateau the Pn arrivals are as much as 0.9 sec early with respect to this line. An interpretation of these Pn arrivals together with later crustal arrivals suggests that the crust under the Columbia Plateau is thinner by about 10 km and has a higher average P-wave velocity than the 35-km-thick, 6.2-km/sec crust under the granitic-metamorphic terrain of northern Washington. A tentative interpretation of later arrivals recorded beyond 500 km from the shots suggests that a thin 8.4-km/sec horizon may be present in the upper mantle beneath the Columbia Plateau and that this horizon may form the lid to a pronounced low-velocity zone extending to a depth of about 140 km.
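As a worked illustration of the quoted travel-time relation (the distance is chosen arbitrarily; the constants are those given above):

\[
T(\Delta) = \frac{\Delta}{8.0} + 7.7\ \mathrm{s}
\quad\Rightarrow\quad
T(400\ \mathrm{km}) = 50.0 + 7.7 = 57.7\ \mathrm{s},
\]

so a Pn arrival 0.9 s early at that range would be observed at roughly 56.8 s.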
Abstract:
Recently, the use of composite steel-concrete structures has been gaining ground in construction, mainly because of the time savings in execution. Moreover, in this solution the combined use of steel and concrete optimizes the structural characteristics of the elements: the tensile strength of steel and the compressive strength of concrete. The transfer of forces between the two materials strongly influences the performance of a composite structure, and shear connectors are commonly used at the interface between these two materials. Eurocode 4 defines an experimental test, the push-out test, to determine the strength and ductility of shear connectors. Their performance is influenced by the compressive strength of the concrete, the dimensions and reinforcement ratio of the concrete slab, the dimensions of the steel profile, the arrangement and geometry of the connectors, and the characteristics of the steels used in the connector, the profile and the reinforcing bars. A large number of variables therefore influence the test. This work presents the development of a finite element model, based on the ANSYS program, for the simulation of push-out tests. The numerical results presented here were calibrated against experimental push-out results available in the literature for headed stud connectors and Perfobond connectors. The latter exhibit high strength and are influenced by numerous factors, such as the number and diameter of the holes in the connector and whether or not extra reinforcing bars are placed in these holes.
Abstract:
Space and height restrictions are frequently imposed on residential, commercial and industrial buildings, warehouses and sheds with one or several storeys, owing to regional regulations and to technical, economic or aesthetic considerations. In order to allow large-diameter pipes and ducts to pass under steel beams, large beam depths are normally required, at times demanding unfeasible floor-to-floor heights. Several structural solutions can be used to address these obstacles, among them beams with variable inertia, stub-girders, composite trusses, haunched beams, and beams with one or multiple web openings of various geometries. For castellated beams, the structural solution addressed in this study, stability is always a concern, typically during construction, when the lateral bracing is not yet installed. In any case, the unbraced lengths typically reached by the spans of these beams are long enough for instability to occur. Nevertheless, the substantial increase in the flexural strength of such members, owing to the increase in depth produced by the fabrication process relative to the parent section, combined with material savings and serviceability, makes them attractive to designers for long spans. However, this proportional increase in span length makes lateral instability especially important. In this context, the present work aims to develop a numerical model that, once calibrated against experimental results, allows a parametric evaluation, the analysis of the behaviour of castellated beams, and the verification of their failure mechanisms, considering elasto-plastic behaviour as well as geometric nonlinearities. It is also an objective of this work to evaluate, quantify and determine the influence of the geometric differences that characterize castellated beams relative to solid-web beams of the same dimensions, analysing and describing the structural behaviour of these steel beams for various span lengths. The methodology adopted for this study was based on a parametric analysis using the finite element method.
Analytical approximations for the modal acoustic impedances of simply supported, rectangular plates.
Abstract:
Coupling of the in vacuo modes of a fluid-loaded, vibrating structure by the resulting acoustic field, while known to be negligible for sufficiently light fluids, is still only partially understood. A particularly useful structural geometry for the study of this problem is the simply supported, rectangular flat plate, since it exhibits all the relevant physical features while still admitting an analytical description of the modes. Here the influence of the fluid can be expressed in terms of a set of doubly infinite integrals over wave number: the modal acoustic impedances. Closed-form solutions for these impedances do not exist and, while their numerical evaluation is possible, it greatly increases the computational cost of solving the coupled system of modal equations. There is thus a need for accurate analytical approximations. In this work, such approximations are sought in the limit where the modal wavelength is small in comparison with the acoustic wavelength and the plate dimensions. It is shown that contour integration techniques can be used to derive analytical formulas for this regime and that these formulas agree closely with the results of numerical evaluations. Previous approximations [Davies, J. Sound Vib. 15(1), 107-126 (1971)] are assessed in the light of the new results and are shown to give a satisfactory description of real impedance components, but (in general) erroneous expressions for imaginary parts.
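For orientation, and up to normalization conventions that vary between authors, the impedances in question have the wavenumber-integral form

\[
Z_{mn} \;\propto\; \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
\frac{\bigl|\tilde{\psi}_{mn}(k_x,k_y)\bigr|^{2}}
{\sqrt{k^{2}-k_x^{2}-k_y^{2}}}\; dk_x\, dk_y ,
\]

where \(\tilde{\psi}_{mn}\) is the two-dimensional wavenumber transform of the (m,n) mode shape and k is the acoustic wavenumber; the supersonic region \(k_x^2+k_y^2<k^2\) contributes to the resistive (real) part and the evanescent region to the reactive (imaginary) part.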
Abstract:
Wavefront coding is a powerful technique that can be used to extend the depth of field of an incoherent imaging system. By adding a suitable phase mask to the aperture plane, the optical transfer function of a conventional imaging system can be made defocus invariant. Since 1995, when a cubic phase mask was first suggested, many kinds of phase masks have been proposed to achieve the goal of depth extension. In this Letter, a phase mask based on a sinusoidal function is designed to enrich the family of phase masks. Numerical evaluation demonstrates that the proposed mask is not only less sensitive to focus errors than cubic, exponential, and modified logarithmic masks are, but also has a smaller point-spread-function shifting effect. (C) 2010 Optical Society of America
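A minimal numerical-evaluation sketch in Python/NumPy, using a cubic mask (the abstract's comparison baseline) purely for illustration; the proposed sinusoidal mask's exact functional form is not reproduced here, and the mask and defocus strengths below are assumed values.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2) <= 1.0          # circular pupil support

def psf(defocus, mask_strength=20.0):
    """Incoherent PSF of a pupil carrying a cubic phase mask plus defocus.
    defocus and mask_strength are given in waves at the pupil edge (assumed units)."""
    phase = mask_strength * (X**3 + Y**3) + defocus * (X**2 + Y**2)
    pupil = aperture * np.exp(1j * 2 * np.pi * phase)
    field = np.fft.fftshift(np.fft.fft2(pupil))
    return np.abs(field)**2

def mtf(defocus):
    """Modulation transfer function: |FFT of the PSF|, normalized at DC."""
    otf = np.fft.fft2(psf(defocus))
    m = np.abs(otf)
    return m / m.ravel()[0]

# Defocus insensitivity check: compare MTFs for in-focus and defocused cases.
diff = np.abs(mtf(0.0) - mtf(3.0)).mean()
print(f"mean |MTF(0) - MTF(3 waves)| = {diff:.4f}")
```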
Abstract:
A numerical and experimental investigation on the mode-I intralaminar toughness of a hybrid plain weave composite laminate manufactured using resin infusion under flexible tooling (RIFT) process is presented in this paper. The pre-cracked geometries consisted of overheight compact tension (OCT), double edge notch (DEN) and centrally cracked four-point-bending (4PBT) test specimens. The position as well as the strain field ahead of the crack tip during the loading stage was determined using a digital speckle photogrammetry system. The limitation on the applicability of the standard data reduction schemes for the determination of intralaminar toughness of composite materials is presented and discussed. A methodology based on the numerical evaluation of the strain energy release rate using the J-integral method is proposed to derive new geometric correction functions for the determination of the stress intensity factor for composites. The method accounts for material anisotropy and finite specimen dimension effects regardless of the geometry. The approach has been validated for alternative non-standard specimen geometries. A comparison between different methods currently available for computing the intralaminar fracture toughness in composite laminates is presented and a good agreement between numerical and experimental results using the proposed methodology was obtained.
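For reference, the contour integral used for the numerical evaluation of the energy release rate has the standard form (crack aligned with x₁, strain energy density W, traction T_i = σ_ij n_j on the path Γ):

\[
J \;=\; \int_{\Gamma}\left( W\, dx_2 \;-\; T_i\,\frac{\partial u_i}{\partial x_1}\, ds \right),
\]

with J equal to the energy release rate G for elastic behaviour; the stress intensity factor then follows through the appropriate anisotropic elastic constants.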
Abstract:
This paper presents an experimental and numerical study focused on the tensile fibre fracture toughness characterisation of hybrid plain weave composite laminates using non-standardized Overheight Compact Tension (OCT) specimens. The position as well as the strain field ahead of the crack tip in the specimens was determined using a digital speckle photogrammetry system. The limitation on the applicability of standard data reduction schemes for the determination of the intralaminar fibre fracture toughness of composites is presented and discussed. A methodology based on the numerical evaluation of the strain energy release rate using the J-integral method is proposed to derive new geometric correction functions for the determination of stress intensity factor for alternative composite specimen geometries. A comparison between different methods currently available to compute the intralaminar fracture toughness in composites is also presented and discussed. Good agreement between numerical and experimental results using the proposed methodology was obtained.
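Schematically, the geometric correction functions referred to above enter through the usual relation for a specimen with crack length a, characteristic width W and applied stress σ,

\[
K_I \;=\; Y\!\left(\tfrac{a}{W}\right)\,\sigma\,\sqrt{\pi a},
\]

and it is the dimensionless function Y(a/W) that the J-integral-based finite element analysis recalibrates for the non-standard OCT geometry and the material anisotropy.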
Abstract:
The Maxwell equations play a fundamental role in the electromagnetic theory and lead to models useful in physics and engineering. This formalism involves integer-order differential calculus, but the electromagnetic diffusion points towards the adoption of a fractional calculus approach. This study addresses the skin effect and develops a new method for implementing fractional-order inductive elements. Two genetic algorithms are adopted, one for the system numerical evaluation and another for the parameter identification, both with good results.
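One compact way to state the connection, under the usual high-frequency skin-effect assumption (a sketch, not the paper's specific model): the internal impedance of a conductor grows like the square root of frequency, which motivates a fractional-order inductive element

\[
Z(j\omega) \;\approx\; K\,(j\omega)^{\alpha}, \qquad 0<\alpha<1,\quad \alpha\approx\tfrac12 \text{ in the skin-effect regime},
\]

whose parameters K and α are the kind of quantities a genetic algorithm can be asked to identify from measured or simulated frequency responses.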
Abstract:
A general derivation of the anharmonic coefficients for a periodic lattice, invoking the special case of the central-force interaction, is presented. All of the contributions to the mean square displacement (MSD) to order λ⁴ in perturbation theory are enumerated. A direct correspondence is found between the high-temperature-limit MSD and the high-temperature-limit free energy contributions up to and including O(λ⁴). This correspondence follows from the detailed derivation of some of the contributions to the MSD. Numerical results are obtained for all the MSD contributions to O(λ⁴) using the Lennard-Jones potential, for the lattice constants and temperatures for which the Monte Carlo results were calculated by Heiser, Shukla and Cowley. The Peierls approximation is also employed in order to simplify the numerical evaluation of the MSD contributions. The numerical results indicate convergence of the perturbation expansion up to 75% of the melting temperature of the solid (T_M) for the exact calculation; however, better agreement with the Monte Carlo results is not obtained when the total of all λ⁴ contributions is added to the λ² perturbation theory results. Using the Peierls approximation, the expansion converges up to 45% of T_M. The MSD contributions arising in the Green's function method of Shukla and Hubschle are derived and enumerated up to and including O(λ⁸). The total MSD from these selected contributions is in excellent agreement with their results at all temperatures. Theoretical values of the recoilless fraction for krypton are calculated from the MSD contributions for both the Lennard-Jones and Aziz potentials. The agreement with experimental values is quite good.
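For reference, the pair potential used in the numerical work is the standard Lennard-Jones form,

\[
\phi(r) \;=\; 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right],
\]

with well depth ε and length parameter σ fitted to the rare-gas solid under study.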
Abstract:
In the present thesis we have formulated the Dalgarno-Lewis procedure for two- and three-photon processes, and elegant alternative expressions are derived. Starting from a brief review of various multiphoton processes, we discuss the difficulties arising in their perturbative treatment. A short discussion of the various available methods for studying multiphoton processes is presented in chapter 2. These theoretical treatments mainly concentrate on the evaluation of the higher-order matrix elements arising in perturbation theory. In chapter 3 we describe the Dalgarno-Lewis procedure and its implementation for second-order matrix elements. Analytical expressions for the two-photon transition amplitude, the two-photon ionization cross section, the dipole dynamic polarizability and the Kramers-Heisenberg relation are obtained in a unified manner. Chapter 4 is an extension of the implicit summation technique presented in chapter 3. We have clearly stated the advantage of our method, especially the analytical continuation of the relevant expressions to various values of the radiation frequency, which is also used for efficient numerical analysis. A possible extension of the work is to study various multiphoton processes from the Stark-shifted first excited states of the hydrogen atom. We can also extend this procedure to study multiphoton processes in alkali atoms as well as Rydberg atoms. Also, instead of deriving analytical expressions, one can attempt a complete numerical evaluation of the higher-order matrix elements using this procedure.
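The essential step of the procedure can be stated compactly (standard textbook form, with D the dipole operator; notation here is generic rather than the thesis's own). The second-order (two-photon) amplitude contains an infinite sum over intermediate states,

\[
M^{(2)}_{fi} \;=\; \sum_{n}\frac{\langle f| D |n\rangle \langle n| D |i\rangle}{E_i + \hbar\omega - E_n},
\]

which the Dalgarno-Lewis method replaces by a single inhomogeneous equation for an auxiliary state,

\[
\left(E_i + \hbar\omega - H\right)|\psi\rangle \;=\; D\,|i\rangle,
\qquad
M^{(2)}_{fi} \;=\; \langle f| D |\psi\rangle,
\]

so that no explicit summation over the spectrum (including the continuum) is required.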
Abstract:
In the present work it was shown how total energies and also excitation energies of atoms and ions can be calculated with the aid of atomic many-body perturbation theory. For this purpose it was first necessary to derive the perturbation series using computer-algebraic methods. With the Maple program package APEX developed for this purpose, this was carried out for closed-shell systems and for systems with one active electron or hole up to fourth order; owing to their large number, the corresponding terms could not be reproduced here. The next step was the analytical angular reduction, carried out with the Maple program package RACAH, which was adapted and further developed for this purpose. Only at this stage was the spherical symmetry of the atomic reference state exploited, which led to a considerable simplification of the perturbation terms. The second part of this work deals with the numerical evaluation of the perturbation series treated purely analytically up to this point. To this end, building on the Fortran program package Ratip, a Dirac-Fock program for closed-shell systems was developed, based on the matrix Dirac-Fock method presented in Chapter 3. Within this environment it was then possible to evaluate the perturbation terms numerically. It quickly became apparent that this can only be done in a reasonable time frame if the corresponding radial integrals are kept in the computer's main memory. Because of the very large number of these integrals, this also placed high demands on the hardware used, which was the main reason why the third-order corrections could only be computed in part and the fourth-order corrections not at all. Finally, the correlation energies of He-like systems and of neon, argon and mercury were calculated and compared with literature values. In addition, Li-like systems, sodium, potassium and thallium were investigated, considering the lowest states of the valence electron. The ionization energies of the superheavy elements 113 and 119 conclude this work.
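As one concrete example of the kind of term that must be evaluated numerically, the second-order (lowest) correlation correction to the energy of a closed-shell reference state has the familiar form, with a, b occupied and r, s virtual orbitals, ε the orbital energies, and antisymmetrized two-electron integrals:

\[
E^{(2)} \;=\; \frac{1}{4}\sum_{ab}\sum_{rs}
\frac{\bigl|\langle ab \,\|\, rs \rangle\bigr|^{2}}
{\varepsilon_a + \varepsilon_b - \varepsilon_r - \varepsilon_s}.
\]

The third- and fourth-order terms derived with APEX involve correspondingly larger numbers of such two-electron radial integrals, which is why they had to be held in main memory.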
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms which perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values which produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and accept when the difference between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms. These include recent dimension-reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
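A minimal sketch (in Python rather than R, and independent of abctools) of the rejection-ABC scheme described above; the prior, simulator, summary statistics, tolerance and number of draws are placeholders.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, summary,
                  n_draws=10_000, tolerance=0.1):
    """Plain rejection ABC.

    observed     : observed dataset (array).
    simulate     : function theta -> simulated dataset.
    prior_sample : function () -> one draw of theta from the prior.
    summary      : function dataset -> vector of summary statistics.
    Returns the accepted parameter values (an approximate posterior sample).
    """
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        # Accept when the summary-statistic distance is below the threshold.
        if np.linalg.norm(s_sim - s_obs) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: infer the mean of a normal model from its sample mean.
rng = np.random.default_rng(1)
obs = rng.normal(0.7, 1.0, size=100)
posterior = abc_rejection(
    obs,
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    prior_sample=lambda: rng.uniform(-2, 2),
    summary=lambda d: np.array([d.mean()]),
)
print(len(posterior), posterior.mean() if len(posterior) else None)
```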