988 results for Analytical procedure



Abstract:

This thesis considers in detail the dynamics of two oscillators with weak nonlinear coupling. There are three classes of such problems: non-resonant, where the Poincaré procedure is valid to the order considered; weakly resonant, where the Poincaré procedure breaks down because small divisors appear (but do not affect the O(1) term); and strongly resonant, where small divisors appear and lead to O(1) corrections. A perturbation method based on Cole's two-timing procedure is introduced. It avoids the small divisor problem in a straightforward manner, gives accurate answers which are valid for long times, and appears capable of handling all three types of problems with no change in the basic approach.

One example of each type is studied with the aid of this procedure: for the non-resonant case the answer is equivalent to the Poincaré result; for the weakly resonant case the analytic form of the answer is found to depend (smoothly) on the difference between the initial energies of the two oscillators; for the strongly resonant case we find that the amplitudes of the two oscillators vary slowly with time as elliptic functions of ϵt, where ϵ is the (small) coupling parameter.
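As a rough numerical illustration of this slow amplitude exchange, the sketch below integrates two weakly coupled oscillators in a 2:1 frequency ratio with a quadratic coupling. The coupling, frequency ratio, and parameter values are assumptions chosen because they form a classic strongly resonant configuration; they are not the specific system treated in the thesis.

```python
# Illustrative only: a strongly resonant (2:1) pair with a weak quadratic
# coupling; energy drifts between the oscillators on the slow scale eps*t.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05  # small coupling parameter (assumed value)

def rhs(t, y):
    x1, v1, x2, v2 = y
    # Hamilton's equations for H = (v1^2 + x1^2)/2 + (v2^2 + 4*x2^2)/2 + eps*x1^2*x2
    return [v1, -x1 - 2.0 * eps * x1 * x2,
            v2, -4.0 * x2 - eps * x1 ** 2]

sol = solve_ivp(rhs, (0.0, 600.0), [1.0, 0.0, 0.2, 0.0], max_step=0.05)

# Oscillator energies; their ranges show the slow, O(1) exchange.
E1 = 0.5 * (sol.y[1] ** 2 + sol.y[0] ** 2)
E2 = 0.5 * (sol.y[3] ** 2 + 4.0 * sol.y[2] ** 2)
print("oscillator 1 energy range:", E1.min(), E1.max())
print("oscillator 2 energy range:", E2.min(), E2.max())
```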

Our results suggest that, as one might expect, the dynamical behavior of such systems varies smoothly with changes in the ratio of the fundamental frequencies of the two oscillators. Thus the pathological behavior of Whittaker's adelphic integrals as the frequency ratio is varied appears to be due to the fact that Whittaker ignored the small divisor problem. The energy sharing properties of these systems appear to depend strongly on the initial conditions, so that the systems are not ergodic.

The perturbation procedure appears to be applicable to a wide variety of other problems in addition to those considered here.


Abstract:

Deference to committees in Congress has been a much studied phenomenon for close to 100 years. This deference can be characterized as the unwillingness of a potentially winning coalition on the House floor to impose its will on a small minority, a standing committee. The congressional scholar is then faced with two problems: observing such deference to committees, and explaining it. Shepsle and Weingast have proposed the existence of an ex-post veto for standing committees as an explanation of committee deference. They claim that because conference reports in the House and Senate are considered under a rule that does not allow amendments, the conferees enjoy agenda-setting power. In this paper I describe a test of this hypothesis (along with competing hypotheses regarding the effects of the conference procedure). A random-utility model is utilized to estimate legislators' ideal points on appropriations bills from 1973 through 1980. I show two things: 1) that committee deference cannot be said to be a result of the conference procedure; and moreover 2) that committee deference does not appear to exist at all.
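To make the estimation strategy concrete, here is a toy one-dimensional random-utility (logit) sketch that recovers a single legislator's ideal point from simulated roll-call votes. The functional form, the bill parameters, and treating those parameters as known are simplifying assumptions for illustration only; they are not the specification used in the paper, which estimates ideal points from actual appropriations votes.

```python
# Toy ideal-point estimation: P(yea) is a logistic function of alpha_j + beta_j * x_i,
# where x_i is the legislator's ideal point and (alpha_j, beta_j) are bill parameters.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

n_bills = 200
alpha = rng.normal(0.0, 1.0, n_bills)   # hypothetical bill "difficulty" parameters
beta = rng.normal(0.0, 1.5, n_bills)    # hypothetical bill "discrimination" parameters
true_x = 0.8                            # the legislator's true ideal point

p_yea = 1.0 / (1.0 + np.exp(-(alpha + beta * true_x)))
votes = rng.random(n_bills) < p_yea     # simulated roll-call record

def neg_log_lik(x):
    """Negative log-likelihood of the observed votes given ideal point x."""
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(np.where(votes, np.log(p), np.log(1.0 - p)))

est = minimize_scalar(neg_log_lik, bounds=(-5.0, 5.0), method="bounded")
print(f"true ideal point = {true_x}, estimated = {est.x:.2f}")
```

In a full application the bill parameters and all legislators' ideal points would be estimated jointly rather than assumed known.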


Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
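The following sketch illustrates the testing idea in miniature: a closed-form acceleration with known analytic velocity and displacement is sampled and run through a candidate processing step, and the result is compared against the exact answer. The waveform, sampling interval, and use of plain trapezoidal integration are illustrative assumptions, not the Volume II routine or the thesis test signal.

```python
# Minimal test of an accelerogram processing step against an exact analytic answer.
import numpy as np

# Closed-form acceleration a(t) = A*sin(w*t); with zero initial conditions its
# exact integrals are v(t) = A*(1 - cos(w*t))/w and d(t) = A*(t - sin(w*t)/w)/w.
A, w, dt = 1.0, 2.0 * np.pi * 1.5, 0.02
t = np.arange(0.0, 10.0 + dt, dt)
acc = A * np.sin(w * t)
vel_exact = A * (1.0 - np.cos(w * t)) / w
disp_exact = A * (t - np.sin(w * t) / w) / w

# Candidate processing method: cumulative trapezoidal integration of the samples.
vel_num = np.concatenate([[0.0], np.cumsum(0.5 * dt * (acc[1:] + acc[:-1]))])
disp_num = np.concatenate([[0.0], np.cumsum(0.5 * dt * (vel_num[1:] + vel_num[:-1]))])

print("max velocity error:    ", np.max(np.abs(vel_num - vel_exact)))
print("max displacement error:", np.max(np.abs(disp_num - disp_exact)))
```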


Abstract:

In this paper, we apply an analytical model [V.V. Kulagin et al., Phys. Plasmas 14, 113101 (2007)] to describe the acceleration of an ultra-thin electron layer by a schematic single-cycle laser pulse and compare with one-dimensional particle-in-cell (1D-PIC) simulations. This is in the context of creating a relativistic mirror for coherent backscattering and supplements two related papers in this EPJD volume. The model is shown to reproduce the 1D-PIC results almost quantitatively for the short time of a few laser periods sufficient for the backscattering of ultra-short probe pulses.
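As a much-simplified companion to the comparison above, the sketch below integrates the relativistic equations of motion of a single free electron driven by a schematic single-cycle plane-wave pulse in normalized units. It neglects space charge and the ion fields of the actual thin-layer problem, so it only illustrates the driving kinematics; it is not the model of Kulagin et al. nor the 1D-PIC setup, and the field amplitude is an assumed value.

```python
# Single electron in a single-cycle plane-wave pulse, normalized units
# (time in 1/omega, momentum in m_e*c, fields in m_e*c*omega/e).
import numpy as np
from scipy.integrate import solve_ivp

a0 = 3.0  # assumed normalized field amplitude

def field(phi):
    """E_y = B_z = a0*sin(phi) over one cycle, zero outside."""
    return a0 * np.sin(phi) if 0.0 <= phi <= 2.0 * np.pi else 0.0

def rhs(t, y):
    x, ux, uy = y                              # longitudinal position, momenta
    gamma = np.sqrt(1.0 + ux ** 2 + uy ** 2)
    vx, vy = ux / gamma, uy / gamma
    f = field(t - x)                           # plane wave: fields depend on t - x
    # Electron equation of motion: du/dt = -(E + v x B)
    return [vx, -vy * f, -(1.0 - vx) * f]

sol = solve_ivp(rhs, (0.0, 80.0), [0.0, 0.0, 0.0], max_step=0.01, rtol=1e-8)
gamma = np.sqrt(1.0 + sol.y[1] ** 2 + sol.y[2] ** 2)
print(f"peak Lorentz factor reached during the pulse: {gamma.max():.2f}")
```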


Abstract:

This thesis describes a series of experimental, numerical, and analytical studies involving the Caltech magnetohydrodynamically (MHD)-driven plasma jet experiment. The plasma jet is created via a capacitor discharge that powers a magnetized coaxial planar-electrode system. The jet is collimated and accelerated by the MHD forces.

We present three-dimensional ideal MHD finite-volume simulations of the plasma jet experiment using an astrophysical magnetic tower as the baseline model. A compact magnetic energy/helicity injection is exploited in the simulation, analogous to both the experiment and astrophysical situations. Detailed analysis provides a comprehensive description of the interplay of magnetic force, pressure, and flow effects. We delineate both the jet structure and the transition process that converts the injected magnetic energy to other forms.

When the experimental jet is sufficiently long, it undergoes a global kink instability and then a secondary local Rayleigh-Taylor instability caused by the lateral acceleration of the kink instability. We present an MHD theory of the Rayleigh-Taylor instability on the cylindrical surface of a plasma flux rope in the presence of a lateral external gravity. The Rayleigh-Taylor instability is found to couple to the classic current-driven instability, resulting in a new type of hybrid instability. The coupled instability, produced by the combination of helical magnetic field, curvature of the cylindrical geometry, and lateral gravity, is fundamentally different from the classic magnetic Rayleigh-Taylor instability occurring at a two-dimensional planar interface.

In the experiment, this instability cascade from macro-scale to micro-scale eventually leads to the failure of MHD. When the Rayleigh-Taylor instability becomes nonlinear, it compresses and pinches the plasma jet to a scale smaller than the ion skin depth and triggers a fast magnetic reconnection. We built a specially designed high-speed 3D magnetic probe and successfully detected the high frequency magnetic fluctuations of broadband whistler waves associated with the fast reconnection. The magnetic fluctuations exhibit power-law spectra. The magnetic components of single-frequency whistler waves are found to be circularly polarized regardless of the angle between the wave propagation direction and the background magnetic field.
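A minimal sketch of the kind of fluctuation analysis mentioned above: estimate the power spectral density of a magnetic-probe signal and fit a power law over a chosen frequency band. The synthetic test signal, sampling rate, and fitting band below are placeholders, not the experimental probe data.

```python
# Power-spectrum estimate plus power-law fit for a fluctuating signal.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(42)
fs = 100e6                       # assumed probe sampling rate, Hz
n = 2 ** 16
# Synthetic broadband test signal with a falling spectrum (integrated white noise).
signal = np.cumsum(rng.standard_normal(n))

freqs, psd = welch(signal, fs=fs, nperseg=4096)

# Fit log10(PSD) = p*log10(f) + c over a band to get the spectral index p.
band = (freqs > 1e6) & (freqs < 2e7)
p, c = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"fitted spectral index: {p:.2f}  (the Brownian-noise test signal gives about -2)")
```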


Abstract:

Equipment cleaning validation is a regulatory requirement intended to ensure that cleaning procedures remove product and cleaning-agent residues down to a predetermined acceptance level, guaranteeing that no cross-contamination occurs. The analytical methodology chosen to monitor cross-contamination was the determination of total organic carbon (TOC), a non-specific technique that therefore allows residues to be quantified before and after the cleaning procedure. For this validation the worst case with respect to the contaminant was selected. The Hib vaccine was chosen as the worst case because it adheres more strongly to 316L stainless steel, showing a lower recovery percentage than the Meningitis A and C vaccine: 93.0% and 98.4%, respectively, for a 30-second extraction time, and 67.8% and 72.6% for a 10-second extraction time. The acceptable product residue was 0.0007 µg/mL of polysaccharide (0.49 µg/mL of TOC) in rinse water and 0.006 µg/mL of polysaccharide (3.49 µg/mL of TOC) on swabs. The samples taken for the determination of product residue were analyzed and corrected by the recovery factor of this residue, which is 98.5% for rinse-water samples and 98.4% for swab samples. The acceptable residue for the cleaning agent (NaOH) was 3.5 µg/mL, which corresponds to a pH of 9.94; however, there is no evidence that the calculated NaOH residue concentration would not interfere chemically on coming into contact with the vaccine. The criterion adopted was therefore the same as that for water for injection which, according to the USP, is a pH between 5 and 7. The samples taken for the determination of cleaning-agent residue were not corrected by the recovery factor, since the criterion used is much more critical than the calculated one. All analyses gave results within the acceptable parameters, supporting the conclusion that the cleaning procedure for the 316L stainless steel tank is efficient, removing residues to acceptable levels and thus preventing cross-contamination.
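As a quick arithmetic check of the cleaning-agent criterion quoted above, the sketch below converts a 3.5 µg/mL NaOH residue into a pH, assuming complete dissociation in otherwise pure water at 25 °C; it reproduces the stated value of about 9.94.

```python
# Check: 3.5 µg/mL NaOH in water corresponds to a pH of about 9.94.
# Assumes pure water at 25 °C and complete dissociation of NaOH.
import math

MW_NAOH = 40.0           # g/mol, molar mass of NaOH
residue_ug_per_ml = 3.5  # acceptable cleaning-agent residue

# Convert µg/mL to mol/L: 3.5 µg/mL = 3.5 mg/L = 3.5e-3 g/L
conc_mol_per_l = (residue_ug_per_ml * 1e-3) / MW_NAOH

pOH = -math.log10(conc_mol_per_l)   # [OH-] from complete dissociation
pH = 14.0 - pOH

print(f"[OH-] = {conc_mol_per_l:.2e} mol/L -> pH = {pH:.2f}")  # ~9.94
```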


Abstract:

A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.

Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.

Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
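The sketch below shows the kind of smoothing step argued for above: the Fourier amplitude spectra of two records are smoothed with a simple running mean before their ratio is formed, since direct ratios of raw spectra are dominated by near-zero troughs. The synthetic records, sampling interval, and window width are illustrative assumptions, not data or parameters from the study.

```python
# Fourier amplitude spectra, simple smoothing, and a smoothed spectral ratio.
import numpy as np

def fourier_amplitude(acc, dt):
    """One-sided Fourier amplitude spectrum of an acceleration record."""
    freqs = np.fft.rfftfreq(len(acc), d=dt)
    amp = np.abs(np.fft.rfft(acc)) * dt
    return freqs, amp

def smooth(spectrum, half_width=5):
    """Running-mean smoothing; a crude stand-in for any smoothing procedure."""
    kernel = np.ones(2 * half_width + 1)
    kernel /= kernel.sum()
    return np.convolve(spectrum, kernel, mode="same")

# Synthetic stand-ins for two records digitized at 50 samples per second.
dt = 0.02
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(0)
site_a = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.10 * t) + 0.05 * rng.standard_normal(t.size)
site_b = np.sin(2 * np.pi * 1.5 * t + 0.3) * np.exp(-0.12 * t) + 0.05 * rng.standard_normal(t.size)

f, amp_a = fourier_amplitude(site_a, dt)
_, amp_b = fourier_amplitude(site_b, dt)

# Smooth each spectrum before comparing, then form the site-to-site ratio.
ratio = smooth(amp_a) / smooth(amp_b)
print(ratio[:10])
```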


Abstract:

Theoretical and experimental studies were conducted to investigate the wave-induced oscillations in an arbitrary shaped harbor of constant depth which is connected to the open sea.

A theory termed the “arbitrary shaped harbor” theory is developed. The solution of the Helmholtz equation, ∇²f + k²f = 0, is formulated as an integral equation; an approximate method is employed to solve the integral equation by converting it to a matrix equation. The final solution is obtained by equating, at the harbor entrance, the wave amplitude and its normal derivative obtained from the solutions for the regions outside and inside the harbor.

Two special theories called the circular harbor theory and the rectangular harbor theory are also developed. The coordinates inside a circular and a rectangular harbor are separable; therefore, the solution for the region inside these harbors is obtained by the method of separation of variables. For the solution in the open-sea region, the same method is used as that employed for the arbitrary shaped harbor theory. The final solution is also obtained by a matching procedure similar to that used for the arbitrary shaped harbor theory. These two special theories provide a useful analytical check on the arbitrary shaped harbor theory.
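For a sense of the separable geometries, the sketch below uses the closed-basin limit of the rectangular case: separation of variables gives wavenumbers k_mn = π√((m/a)² + (n/b)²), and the shallow-water relation ω = k√(gh) converts them to resonant periods. A harbor open to the sea shifts these modes, so this is only a rough guide, and the basin dimensions are hypothetical.

```python
# Resonant periods of a closed rectangular basin of uniform depth.
import math

def rectangular_basin_periods(a, b, depth, n_modes=3):
    """Return resonant periods T_mn (seconds) for mode numbers up to n_modes."""
    g = 9.81                      # m/s^2
    c = math.sqrt(g * depth)      # shallow-water wave speed
    periods = {}
    for m in range(n_modes + 1):
        for n in range(n_modes + 1):
            if m == 0 and n == 0:
                continue          # (0,0) is not an oscillatory mode
            k = math.pi * math.sqrt((m / a) ** 2 + (n / b) ** 2)
            periods[(m, n)] = 2.0 * math.pi / (k * c)
    return periods

# Hypothetical basin: 200 m by 120 m, 6 m deep.
for mode, T in sorted(rectangular_basin_periods(200.0, 120.0, 6.0).items()):
    print(f"mode {mode}: T = {T:.1f} s")
```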

Experiments were conducted to verify the theories in a wave basin 15 ft wide by 31 ft long with an effective system of wave energy dissipators mounted along the boundary to simulate the open-sea condition.

Four harbors were investigated theoretically and experimentally: circular harbors with a 10° opening and a 60° opening, a rectangular harbor, and a model of the East and West Basins of Long Beach Harbor located in Long Beach, California.

Theoretical solutions for these four harbors using the arbitrary shaped harbor theory were obtained. In addition, the theoretical solutions for the circular harbors and the rectangular harbor using the two special theories were also obtained. In each case, the theories have proven to agree well with the experimental data.

It is found that: (1) the resonant frequencies for a specific harbor are predicted correctly by the theory, although the amplification factors at resonance are somewhat larger than those found experimentally; (2) for the circular harbors, as the width of the harbor entrance increases, the amplification at resonance decreases, but the wave number bandwidth at resonance increases; (3) each peak in the curve of entrance velocity vs incident wave period corresponds to a distinct mode of resonant oscillation inside the harbor, thus the velocity at the harbor entrance appears to be a good indicator for resonance in harbors of complicated shape; (4) the results show that the present theory can be applied with confidence to prototype harbors with relatively uniform depth and reflective interior boundaries.


Abstract:

Two general, numerically exact, quantum mechanical methods have been developed for the calculation of energy transfer in molecular collisions. The methods do not treat electronic transitions because of the exchange symmetry of the electrons. All interactions between the atoms in the system are written as potential energies.

The first method is a matrix generalization of the invariant imbedding procedure [17, 20], adapted for multi-channel collision processes. The second method is based on a direct integration of the matrix Schrödinger equation, with a re-orthogonalization transform applied during the integration.
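The following toy sketch shows what a direct integration of the matrix Schrödinger equation looks like for a two-channel model with a repulsive exponential coupling. The potential, energies, and coupling strengths are invented for illustration only, and the stabilizing re-orthogonalization transform mentioned above is omitted, so this is a schematic propagation rather than a working scattering code.

```python
# Direct integration of psi''(x) = W(x) psi(x) for a 2x2 solution matrix,
# in dimensionless units; illustrative parameters only.
import numpy as np
from scipy.integrate import solve_ivp

E = 6.0                       # total energy
eps = np.array([0.5, 1.5])    # channel (vibrational) threshold energies
k2 = E - eps                  # channel wavenumbers squared

def W(x):
    u = 50.0 * np.exp(-x)                        # repulsive exponential interaction
    U = np.array([[u, 0.3 * u], [0.3 * u, u]])   # off-diagonal coupling
    return U - np.diag(k2)

def rhs(x, y):
    """y packs the 2x2 solution matrix Psi and its derivative dPsi/dx."""
    psi, dpsi = y[:4].reshape(2, 2), y[4:].reshape(2, 2)
    return np.concatenate([dpsi.ravel(), (W(x) @ psi).ravel()])

# Start deep in the classically forbidden region with linearly independent columns.
psi0 = 1e-6 * np.eye(2)
dpsi0 = 1e-5 * np.eye(2)
sol = solve_ivp(rhs, (0.0, 12.0),
                np.concatenate([psi0.ravel(), dpsi0.ravel()]),
                rtol=1e-8, atol=1e-10)

print("Psi at the end of the integration:\n", sol.y[:4, -1].reshape(2, 2))
```

Extracting transition probabilities would additionally require matching the propagated matrix to free-channel solutions at large x.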

Both methods have been applied to a collinear collision model for two diatoms interacting via a repulsive exponential potential. Two major studies were performed. The first was to determine the energy dependence of the transition probabilities for an H2 on H2 model system. Transitions are possible between translational energy and vibrational energy, and from vibrational modes of one H2 to the other H2. The second study was to determine the variation of vibrational energy transfer probability with differences in the natural frequencies of two diatoms similar to N2.

Comparisons were made to previous approximate analytical solutions of this same problem. For translational to vibrational energy transfer, the previous approximations were not adequate. For vibrational to vibrational energy transfer of one vibrational quantum, the approximations were quite good.


Abstract:

Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these types of materials, suffer important limitations when it comes to understanding the physics behind the grain-to-grain interactions that cause the material to behave macroscopically in a given way when subjected to certain boundary conditions.

The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers to gain a deeper insight into some of the most fundamental mechanisms operating at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies; instead, simplified geometries such as discs, spheres, and polyhedra have typically been used. Fortunately, in the last fifteen years there has been an increasing development of new computational as well as experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray Computed Tomography (3DXRCT), which are contributing to new tools that enable the inclusion of complex grain morphologies in DEM schemes.

Yet, as the scientific community is still developing these new tools, there remains a gap in thoroughly understanding the physical relations connecting the grain and continuum scales, as well as in the development of discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, directly unraveling instead the micro-mechanical origin of macroscopic behavior.

In order to contribute towards closing the aforementioned gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, where typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their corresponding grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.

In the same way, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called level set (LS) to enable the inclusion of real grain shapes into a classical discrete element method. After calibrating LS-DEM with respect to real experimental results, we exploit part of its potential to study the dependency of critical state (CS) parameters such as the critical state line (CSL) slope, CSL intercept, and CS friction angle on the grain's morphology, i.e., sphericity, roundness, and regularity.
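The sketch below illustrates the level-set idea in two dimensions: one grain is stored as a discretized signed-distance field, and contact is detected by interpolating that field at the boundary nodes of a neighboring grain. It is a simplified stand-in for LS-DEM, with a circle in place of a real imaged grain shape and no force or motion computation.

```python
# Node-to-level-set contact check, the geometric core of a level-set DEM scheme.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Signed-distance field of grain A on a regular grid (a unit circle here,
# standing in for an arbitrary grain shape captured by 3DXRCT).
x = np.linspace(-2.0, 2.0, 201)
y = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
phi_A = np.sqrt(X**2 + Y**2) - 1.0          # phi < 0 inside grain A
phi_interp = RegularGridInterpolator((x, y), phi_A)

# Boundary nodes of grain B (another unit circle), placed so the grains overlap.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
center_B = np.array([1.8, 0.0])
nodes_B = center_B + np.column_stack([np.cos(theta), np.sin(theta)])

# A node of B is in contact with A wherever the signed distance is negative;
# its magnitude is the penetration depth a contact law would act on.
phi_at_nodes = phi_interp(nodes_B)
contacts = phi_at_nodes < 0.0
print(f"{contacts.sum()} boundary nodes of grain B penetrate grain A")
print(f"max penetration depth: {-phi_at_nodes.min():.3f}")
```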

Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that exhibit the same morphological features (e.g., roundness and aspect ratio) displayed by their real parents and can be included in a DEM simulation of a given mechanical phenomenon. In turn, this will help with the development of discrete techniques that can directly predict the engineering-scale behavior of granular media without resorting to phenomenology.