955 results for High frequency inversion
Experimental, Numerical and Analytical Studies of the MHD-driven plasma jet, instabilities and waves
Abstract:
This thesis describes a series of experimental, numerical, and analytical studies involving the Caltech magnetohydrodynamically (MHD)-driven plasma jet experiment. The plasma jet is created by a capacitor discharge that powers a magnetized coaxial planar-electrode system; the jet is collimated and accelerated by MHD forces.
We present three-dimensional ideal MHD finite-volume simulations of the plasma jet experiment using an astrophysical magnetic tower as the baseline model. A compact magnetic energy/helicity injection is used in the simulation, analogous both to the experiment and to astrophysical situations. Detailed analysis provides a comprehensive description of the interplay of magnetic force, pressure, and flow effects. We delineate both the jet structure and the transition process that converts the injected magnetic energy to other forms.
When the experimental jet is sufficiently long, it undergoes a global kink instability and then a secondary local Rayleigh-Taylor instability caused by lateral acceleration of the kink instability. We present an MHD theory of the Rayleigh-Taylor instability on the cylindrical surface of a plasma flux rope in the presence of a lateral external gravity. The Rayleigh-Taylor instability is found to couple to the classic current-driven instability, resulting in a new type of hybrid instability. The coupled instability, produced by combination of helical magnetic field, curvature of the cylindrical geometry, and lateral gravity, is fundamentally different from the classic magnetic Rayleigh-Taylor instability occurring at a two-dimensional planar interface.
In the experiment, this instability cascade from macro-scale to micro-scale eventually leads to the failure of MHD. When the Rayleigh-Taylor instability becomes nonlinear, it compresses and pinches the plasma jet to a scale smaller than the ion skin depth and triggers a fast magnetic reconnection. We built a specially designed high-speed 3D magnetic probe and successfully detected the high frequency magnetic fluctuations of broadband whistler waves associated with the fast reconnection. The magnetic fluctuations exhibit power-law spectra. The magnetic components of single-frequency whistler waves are found to be circularly polarized regardless of the angle between the wave propagation direction and the background magnetic field.
Abstract:
In four chapters, various aspects of the earthquake source are studied.
Chapter I
Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.
Chapter II
The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, MO, was found to be related to local magnitude, ML, by log MO = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log MO = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
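As a rough illustration of how these empirical relations combine, the sketch below evaluates them for a hypothetical ML = 5.0 event. The units are assumptions (dyne·cm for MO and centimetres for L, typical of relations of this vintage), since the abstract does not state them.

```python
def seismic_moment(ml):
    """Seismic moment from local magnitude via log MO = 1.7*ML + 15.1.
    Units assumed (not stated in the abstract) to be dyne*cm."""
    return 10 ** (1.7 * ml + 15.1)

def source_length(ml):
    """Invert ML = 1.9*log L - 6.7 for the source length L.
    Centimetres assumed; this yields tens of km for ML ~ 5-6."""
    return 10 ** ((ml + 6.7) / 1.9)

def surface_wave_magnitude(ml):
    """Surface-wave magnitude from MS = 1.7*ML - 4.1."""
    return 1.7 * ml - 4.1

if __name__ == "__main__":
    ml = 5.0  # hypothetical local magnitude
    print(f"MO ~ {seismic_moment(ml):.1e} dyne*cm")
    print(f"L  ~ {source_length(ml) / 1e5:.0f} km")  # convert cm to km
    print(f"MS ~ {surface_wave_magnitude(ml):.1f}")
```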
Chapter III
Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."
Chapter IV
The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
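As a worked example of the final step (simple arithmetic implied by the abstract rather than quoted from it): with a seismic efficiency of $\eta \approx 0.1$, the true stress at intermediate depths follows as

$$\sigma_\text{true} \approx \frac{\sigma_\text{apparent}}{\eta} = \frac{380\ \text{bars}}{0.1} \approx 3.8\ \text{kbar},$$

compared with roughly $20/0.1 = 200$ bars for the shallow and the deepest events.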
Abstract:
Sufficient stability criteria for classes of parametrically excited differential equations are developed and applied to example problems of a dynamical nature.
Stability requirements are presented in terms of 1) the modulus of the amplitude of the parametric terms, 2) the modulus of the integral of the parametric terms and 3) the modulus of the derivative of the parametric terms.
The methods employed to show stability are Liapunov’s Direct Method and the Gronwall Lemma. The type of stability is generally referred to as asymptotic stability in the sense of Liapunov.
The results indicate that if the equation of the system with the parametric terms set equal to zero exhibits stability and possesses bounded operators, then the system will be stable under sufficiently small modulus of the parametric terms or sufficiently small modulus of the integral of the parametric terms (high frequency). On the other hand, if the equation of the system exhibits individual stability for all values that the parameter assumes in the time interval, then the actual system will be stable under sufficiently small modulus of the derivative of the parametric terms (slowly varying).
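A standard textbook illustration of this dichotomy (not taken from the thesis, added only for concreteness) is the damped Mathieu-type equation

$$\ddot{x} + c\,\dot{x} + \bigl(k + \varepsilon\cos\omega t\bigr)x = 0, \qquad c, k > 0.$$

The unperturbed equation ($\varepsilon = 0$) is asymptotically stable, so the first class of criteria gives stability for sufficiently small $|\varepsilon|$; because the integral of the parametric term is $(\varepsilon/\omega)\sin\omega t$, the second class gives stability for sufficiently small $\varepsilon/\omega$, i.e. at sufficiently high frequency; and, provided $k > |\varepsilon|$ so that every frozen-coefficient equation is stable, the third class gives stability for sufficiently small $\varepsilon\omega$, i.e. for a slowly varying coefficient.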
Abstract:
The pattern of energy release during the Imperial Valley, California, earthquake of 1940 is studied by analysing the El Centro strong motion seismograph record and records from the Tinemaha seismograph station, 546 km from the epicenter. The earthquake was a multiple-event sequence, with at least 4 events recorded at El Centro in the first 25 seconds, followed by 9 events recorded in the next 5 minutes. Clear P, S and surface waves were observed on the strong motion record. Although the main part of the earthquake energy was released during the first 15 seconds, some of the later events were as large as M = 5.8 and thus are important for earthquake engineering studies. The moment calculated using Fourier analysis of surface waves agrees with the moment estimated from field measurements of fault offset after the earthquake. The earthquake engineering significance of the complex pattern of energy release is discussed. It is concluded that a cumulative increase in amplitudes of building vibration resulting from the present sequence of shocks would be significant only for structures with relatively long natural periods of vibration. However, progressive weakening effects may also lead to greater damage for multiple-event earthquakes.
A model in which surface Love waves propagate through a single layer acting as a surface wave guide is studied. The derived properties of this simple model are expected to illustrate well several phenomena associated with strong earthquake ground motion. First, it is shown that a surface layer, or several layers, will cause the main part of the high frequency energy radiated from a nearby earthquake to be confined to the layer as a wave guide. The existence of the surface layer will thus increase the rate of energy transfer into man-made structures on or near the surface of the layer. Secondly, the surface amplitude of the guided SH waves will decrease if the energy of the wave is essentially confined to the layer and the wave propagates towards increasing layer thickness. It is also shown that the constructive interference of SH waves causes the zeros and peaks in the Fourier amplitude spectrum of the surface ground motion to be continuously displaced towards longer periods as the distance from the source of the energy release increases.
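For reference (the relation itself is standard and not quoted in the abstract): for a single layer of thickness $H$, shear velocity $\beta_1$ and rigidity $\mu_1$ over a half-space with $\beta_2 > \beta_1$ and rigidity $\mu_2$, a guided SH (Love) mode with phase velocity $\beta_1 < c < \beta_2$ satisfies

$$\tan\!\left(\omega H\sqrt{\frac{1}{\beta_1^{2}}-\frac{1}{c^{2}}}\right) = \frac{\mu_2\sqrt{\dfrac{1}{c^{2}}-\dfrac{1}{\beta_2^{2}}}}{\mu_1\sqrt{\dfrac{1}{\beta_1^{2}}-\dfrac{1}{c^{2}}}},$$

and it is the interference among these modes that produces the spectral zeros and peaks, and their migration toward longer periods with distance, described above.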
Abstract:
The microwave response of the superconducting state in equilibrium and non-equilibrium configurations was examined experimentally and analytically. Thin-film superconductors were studied for the most part, in order to explore spatial effects. The response parameter measured was the surface impedance.
For small microwave intensity the surface impedance at 10 GHz was measured for a variety of samples (mostly Sn) over a wide range of sample thickness and temperature. A detailed analysis based on the BCS theory was developed for calculating the surface impedance for general thickness and other experimental parameters. Experiment and theory agreed with each other to within the experimental accuracy. Thus it was established that the samples, thin films as well as bulk, were well characterised at low microwave powers (near equilibrium).
Thin films were perturbed by a small dc supercurrent, and the effect on the superconducting order parameter and the quasiparticle response was determined by measuring changes in the surface resistance (still at low microwave intensity and independent of it) due to the induced current. The use of fully superconducting resonators enabled the measurement of very small changes in the surface resistance (< 10^-9 Ω/sq.). These experiments yield information regarding the dynamics of the order parameter and quasiparticle systems. For all the films studied, the results at temperatures near Tc could be described by the thermodynamic depression of the order parameter due to the static current, leading to a quadratic increase of the surface resistance with current.
For the thinnest films the low temperature results were surprising in that the surface resistance decreased with increasing current. An explanation is proposed according to which this decrease occurs due to an additional high frequency quasiparticle current caused by the combined presence of both static and high frequency fields. For frequencies larger than the inverse of the quasiparticle relaxation time this additional current is out of phase (by π) with the microwave electric field and is observed as a decrease of surface resistance. Calculations agree quantitatively with experimental results. This is the first observation and explanation of this non-equilibrium quasiparticle effect.
For thicker films of Sn, the low temperature surface resistance was found to increase with applied static current. It is proposed that due to the spatial non-uniformity of the induced current distribution across the thicker films, the above purely temporal analysis of the local quasiparticle response needs to be generalised to include space and time non-equilibrium effects.
The nonlinear interaction of microwaves and superconducting films was also examined in a third set of experiments. The surface impedance of thin films was measured as a function of the incident microwave magnetic field. The experiments exploit the ability to measure the absorbed microwave power and applied microwave magnetic field absolutely. It was found that the applied surface microwave field could not be raised above a certain threshold level at which the absorption increased abruptly. This critical field level represents a dynamic critical field and was found to be associated with the penetration of the applied field into the film at values well below the thermodynamic critical field for the configuration of a field applied to one side of the film. The penetration occurs despite the thermal stability of the film which was unequivocally demonstrated by experiment. A new mechanism for such penetration via the formation of a vortex-antivortex pair is proposed. The experimental results for the thinnest films agreed with the calculated values of this pair generation field. The observations of increased transmission at the critical field level and suppression of the process by a metallic ground plane further support the proposed model.
Abstract:
To meet the requirement in laser inertial confinement fusion that the non-uniformity of the laser irradiation on the target surface be below 5%, a scheme combining smoothing by spectral dispersion (SSD) with the lens array currently in use is proposed; numerical calculations were carried out and its smoothing effect and practical feasibility were analyzed. The results show that the focal-spot non-uniformity is reduced from 14% with the lens array alone to 3% when combined with SSD. Analysis of the focal-spot power spectrum shows that SSD achieves its smoothing effect by suppressing the mid- and high-frequency components of the spectrum. The scheme can further improve the smoothing of the focal spot, and the calculated results provide an important reference for practical applications.
Abstract:
The principle of Brillouin distributed optical fiber sensing is analyzed. Using a home-made single-longitudinal-mode distributed-feedback (DFB) fiber laser combined with electro-optic modulation, the weak Brillouin backscattered signal is detected by coherent detection. By improving the filtering and amplification, the weak backscattered optical signal is effectively amplified, and with polarization scrambling and signal averaging, the Brillouin backscattered signal of the fiber sensor is directly acquired and displayed in the 11 GHz high-frequency band. The results show that the peak of the detected Brillouin scattering signal reaches 50 mV, which effectively reduces the difficulty of signal detection in the demodulation system and improves the signal-to-noise ratio (SNR). Preliminary experimental results demonstrate the feasibility of the scheme.
Abstract:
We propose a new method to increase the resolution of an optical system by modifying part of the spatial-frequency spectrum, viz., displacing the lower-frequency light to a high-frequency band, which makes the central maximum in the diffraction pattern narrower and increases the depth of focus. Simulation results show that this kind of apodizer (the term apodization was originally used to describe ways to reduce the sidelobes of the PSF, but in this paper, we use it in a wider sense) is superior to the phase-shifting ones. (C) 2001 Society of Photo-Optical Instrumentation Engineers.
Abstract:
The effective refractive index of a kind of granular composite, consisting of granular metallic and magnetic inclusions of different radii embedded in a host medium, is theoretically investigated. Results show that, for certain volume fractions of these two inclusions, the negative permittivity peak shifts to lower frequency and its value increases with increasing radius ratio (the ratio of the radius of the magnetic granules to that of the metallic granules). Simultaneously, the peak value of the permeability decreases with the radius ratio, and the peak shifts to higher frequency with increasing volume fraction of the magnetic inclusion. Therefore, the radius ratio can affect the effective refractive index considerably, and it is found that by adjusting the radius ratio the refractive index may change between negative and positive values for certain volume fractions of the two inclusions. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
We report experimental results on a pulse train with a repetition rate of 320 MHz obtained from a laser-diode end-pumped Nd:YVO4 laser by acousto-optic amplitude-modulation mode locking. The experiment used a plane-plane cavity with a cavity length of 452 mm and an output-coupler transmission of 3.6%. The acousto-optic medium was fused silica with a lithium niobate transducer; at a drive power of 4.5 W the diffraction efficiency at 1064 nm was 50%, corresponding to a modulation depth of 0.31. Under optimum mode locking, with a laser-diode pump power of 3.5 W, the average output power was 15 mW, the pulse width recorded on an oscilloscope was 680 ps, and the measured beam quality factor M^2 was less than 1.5. On the basis of the experiments, the stability of the laser operation was also analyzed.
Abstract:
Several insectivorous bats have included fish in their diet, yet little is known about the processes underlying this trophic shift. We performed three field experiments with wild fishing bats to address how they manage to discern fish from insects and adapt their hunting technique to capture fish. We show that bats react only to targets protruding above the water and discern fish from insects based on prey disappearance patterns. Stationary fish trigger short and shallow dips and a terminal echolocation pattern with an important component of narrowband, low-frequency calls. When the fish disappears during the attack, bats regulate their attack by increasing the number of broadband, high-frequency calls in the last phase of the echolocation sequence, as well as by lengthening and deepening their dips. These adjustments may allow bats to obtain more valuable sensory information and to perform dips adjusted to the level of uncertainty about the location of the submerged prey. The observed ultrafast regulation may be essential for making fishing cost-effective in bats, and demonstrates their ability to rapidly modify and synchronise sensory and motor features in response to last-minute stimulus variations.
Abstract:
Hyperventilation, which is common in both in-hospital and out-of-hospital cardiac arrest, decreases coronary and cerebral perfusion, contributing to poorer survival rates in both animals and humans. Current resuscitation guidelines recommend continuous monitoring of exhaled carbon dioxide (CO2) during cardiopulmonary resuscitation (CPR) and emphasize good quality of CPR, including ventilations at 8-10 min^-1. Most commercial monitors/defibrillators incorporate methods to compute the respiratory rate based on capnography, since it shows fluctuations caused by ventilations. Chest compressions may induce artifacts in this signal, making the calculation of the respiratory rate difficult. Nevertheless, the accuracy of these methods during CPR has not been documented yet. The aim of this project is to analyze whether the capnogram is reliable for computing the ventilation rate during CPR. A total of 91 episodes, 63 out-of-hospital cardiac arrest episodes (first database) and 28 in-hospital cardiac arrest episodes (second database), were used to develop an algorithm to detect ventilations in the capnogram, the final aim being to provide an accurate ventilation rate for feedback purposes during CPR. Two graphical user interfaces were developed to make the analysis easier and another two were adapted to carry out this project. The use of these interfaces facilitates the management of the databases and the calculation of the algorithm's accuracy. In the first database, as the gold standard, every ventilation was marked by visual inspection of both the impedance signal, which shows fluctuations with every ventilation, and the capnography signal. In the second database, the volume of the respiratory flow signal was used as the gold standard to mark ventilation instants, since it is not affected by chest compressions. The capnogram was preprocessed to remove high frequency noise, and the first difference was computed to define the onsets of inspiration and expiration. Then, morphological features were extracted and a decision algorithm was built based on the extracted features to detect ventilation instants. Finally, the ventilation rate was calculated using the detected instants of ventilation. According to the results obtained in this project, the capnogram can be reliably used to provide feedback on ventilation rate, and therefore on hyperventilation, in a resuscitation scenario.
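A minimal sketch of the detection pipeline described above, assuming a capnogram sampled at fs Hz; the filter cut-off, slope threshold, and refractory period are illustrative assumptions, not the values used in the project.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_ventilations(co2, fs, cutoff_hz=1.0, min_separation_s=1.5):
    """Return sample indices of detected ventilation onsets in a capnogram."""
    # 1) Preprocess: low-pass filter to suppress high-frequency
    #    (chest-compression) noise.
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    smooth = filtfilt(b, a, co2)

    # 2) First difference: steep rises/falls mark the onsets of expiration
    #    and inspiration in the CO2 waveform.
    diff = np.diff(smooth)
    slope_thr = 0.3 * np.max(np.abs(diff))  # crude adaptive threshold

    # 3) Simple decision rule: a new ventilation starts at a steep rise,
    #    provided a refractory period has elapsed since the last detection.
    onsets, last = [], -np.inf
    for i, d in enumerate(diff):
        if d > slope_thr and (i - last) > min_separation_s * fs:
            onsets.append(i)
            last = i
    return np.asarray(onsets)

def ventilation_rate_per_min(onsets, fs):
    """Average ventilation rate (min^-1) from detected onset indices."""
    if len(onsets) < 2:
        return 0.0
    return 60.0 / (np.mean(np.diff(onsets)) / fs)
```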
Abstract:
A miniature optical sensor, the core component of a portable laser dust particle counter, has been successfully developed. The sensor adopts a right-angle scattered-light collection geometry, with a high-power semiconductor laser as the light source and a high-performance PIN photodiode as the photodetector. The scattered-light collection system is a single spherical mirror with a large numerical aperture, which collects the light scattered by particles over angles from 20° to 160°. The particle-scattering signal is a pulse signal whose spectral content lies mainly in the high-frequency band, so a band-pass preamplifier is placed after the PIN photodiode to eliminate external low-frequency noise. The light-scattering response of the optical sensor was calculated from Mie scattering theory, and its performance was measured with polystyrene standard particles. The results show that the system has a high signal-to-noise ratio.
Abstract:
The high frequency of deaths from ill-defined causes and from incomplete diagnoses compromises the validity of cause-specific mortality indicators and is an obstacle to the rational allocation of health resources based on epidemiological profiles. This study evaluates the quality of the information on the underlying cause of death in the Médio Paraíba region, state of Rio de Janeiro, Brazil, for the whole population in the years 2005 to 2009. The data came from the Mortality Information System (SIM) made available by DATASUS/MS. The analysis was based on two proportional-mortality indicators: ill-defined causes (CMD — all deaths whose underlying cause falls within Chapter XVIII of ICD-10) and incomplete diagnoses (DI), according to the classification presented in the Burden of Disease in Brazil Project, 2002. The associations between information quality and demographic, socioeconomic, and death-related variables were investigated by calculating the odds ratios of death from CMD and from DI relative to all other causes of death. In the Médio Paraíba region, the proportion of CMD was 4.54% over the period 2005 to 2009, while the proportion of incomplete diagnoses in the same period was high (20.59%). Adding CMD and DI deaths in the region over the five-year period yields a proportion of inadequately defined causes (25.13%) well above the median value of 12% estimated for the world population. The odds of CMD and DI decrease with increasing educational level. With respect to race, deaths of black individuals had higher odds of being recorded as CMD, while deaths of white individuals had higher odds of having a DI recorded as the underlying cause. For deaths without medical care, the odds of CMD and DI were higher than for deaths with care. Deaths occurring in hospital had lower odds of CMD and higher odds of DI. Missing or unreported values of these variables were associated with higher odds of CMD and DI. The results suggest that in the Médio Paraíba region the quality of mortality data with respect to CMD is much better than the national level, approaching the values of developed countries. Even so, the proportion of residual causes remains quite high, showing that, despite the marked improvement of the SIM, limitations persist that restrict broader use of the system and prevent greater advances in health policies and programs.
Abstract:
English for Specific Purposes is the teaching approach that underpins the work done at the federal institute where I work. Its basic premise is teaching focused on a needs analysis, that is, everything that is taught/learned is aligned with what learners need in their academic and professional environment. Because of myths and dogmas that have taken root over the decades, however, the approach has turned exclusively into the teaching of reading and has produced distortions of the original proposal. The study starts from the research question: what does the student participating in this research see as a priority in learning English at school? Other questions arose after engaging with the collected data: why do the participating students want one kind of teaching while the school provides another? What is missing in the classroom to narrow the gap between what the participating students want and what the school offers? And what can we, teachers, do with respect to pedagogical proposals to fill this gap, in light of the collected data and the literature review? I therefore investigate the views of 65 third-year secondary-school learners at the school where I work, through questionnaires and a focus-group interview with 5 participants, about what they see as a priority in learning English at school, and I reflect on pedagogical practices that could meet the needs identified by the participants. Within the (in)disciplinary spirit of Applied Linguistics and within a qualitative paradigm, the analysis of the answers was based on the recurrence of themes in the responses to the questionnaire and in the focus-group participation. Themes that recurred with high frequency formed categories that served as the basis for the discussion. I selected two questionnaire questions for analysis, because of their direct relation to the aim of the research: question (9), how do you expect English teaching to be in secondary school?, and question (10), which skills would you like to develop? The category that emerged from the most recurrent mentions in the students' statements was oral communication, with a total of 62 mentions. Eight other categories emerged from the corpus. I considered 17 mentions to be a high frequency and therefore category-forming. Based on these results, I argue in this work that the student's voice (which cries out for the urgency of learning oral communication) should be heard in this process, and that a new post-method pedagogy should be implemented in language teaching and learning in secondary and technical education.