994 results for ray trajectory equation
Abstract:
A method for fabricating planar crossed microlens arrays by a photolithographic ion-exchange process is described. Using the integral form of the ray equation, the paraxial optical properties of the planar crossed microlens are discussed, and the ray trajectory equation and several important paraxial imaging characteristics of the microlens are studied. Using the ABCD theorem, mathematical expressions are obtained for the image distance, focal length, image height, transverse magnification, and principal-plane positions of the planar crossed microlens. The theoretical focal lengths agree well with the experimental data.
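As a hedged illustration of the ABCD approach mentioned above, the sketch below builds the paraxial ray matrix of a generic GRIN-type microlens and extracts the image distance, transverse magnification, and focal length from it. The lens parameters (n0, g, L) and the object distance are hypothetical values, not those of the fabricated planar crossed microlenses.

```python
import numpy as np

def grin_abcd(n0, g, L):
    """Paraxial ABCD matrix of a GRIN rod with n(r) = n0*(1 - (g*r)**2 / 2)."""
    return np.array([[np.cos(g*L),        np.sin(g*L)/(n0*g)],
                     [-n0*g*np.sin(g*L),  np.cos(g*L)]])

def translation(d):
    """Free-space propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def paraxial_imaging(M, s_obj):
    """Image distance, transverse magnification and focal length from an ABCD matrix."""
    (A, B), (C, D) = M
    s_img = -(A*s_obj + B) / (C*s_obj + D)   # imaging condition: total B element = 0
    mag = (translation(s_img) @ M @ translation(s_obj))[0, 0]
    f = -1.0 / C                             # effective focal length
    return s_img, mag, f

# hypothetical microlens parameters (units: mm and 1/mm), not the fabricated devices
M = grin_abcd(n0=1.6, g=0.25, L=2.0)
print(paraxial_imaging(M, s_obj=10.0))
```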
Abstract:
Refractive losses in laser-produced plasmas used as gain media are caused by electron density gradients and limit the energy transport range. The pump pulse is thus deflected from the high-gain region, and the short-wavelength laser signal also steers away, causing loss of collimation. A Hohlraum used as a target makes the plasma homogeneous and can mitigate refractive losses by means of wave guiding. A computational study combining a hydrodynamics code and an atomic physics code is presented, which includes a ray-tracing model based on the eikonal theory of the ray trajectory equation. The study presents gain calculations based on population inversion produced by free-electron collisions exciting bound electrons into metastable levels of the 3d⁹4d¹ (J = 0) → 3d⁹4p¹ (J = 1) transition of Ni-like Sn. Further, the Hohlraum results suggest a dramatic enhancement of the conversion efficiency of collisionally excited x-ray lasing in Ni-like Sn.
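To illustrate the eikonal ray-tracing idea, the sketch below integrates the ray trajectory equation d/ds(n dr/ds) = grad n in its reduced form r'' = grad(n^2)/2 through a plasma channel whose electron density dips on axis, so a ray is guided rather than refracted away. The channel profile and launch conditions are hypothetical stand-ins for the Hohlraum plasma computed by the hydrodynamics code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def n_squared(y):
    """Hypothetical squared refractive index n^2 = 1 - ne/nc of a plasma channel
    whose electron density has a minimum on axis (y = 0)."""
    ne_over_nc = 0.5 - 0.3*np.exp(-(y/20e-6)**2)
    return 1.0 - ne_over_nc

def grad_n_squared(y, h=1e-8):
    return (n_squared(y + h) - n_squared(y - h)) / (2*h)

def rhs(tau, u):
    # eikonal ray equation d/ds(n dr/ds) = grad n, reduced form r'' = grad(n^2)/2
    x, y, px, py = u
    return [px, py, 0.0, 0.5*grad_n_squared(y)]

# launch a ray 5 um off axis, initially parallel to the channel; |p| = n at the start
y0 = 5e-6
p0 = np.sqrt(n_squared(y0))
sol = solve_ivp(rhs, [0.0, 2e-3], [0.0, y0, p0, 0.0], max_step=1e-6)
print(sol.y[1, -1])   # transverse position stays bounded: the ray is guided, not refracted out
```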
Abstract:
Elastic anisotropy is a very common phenomenon in the Earth's interior, especially for sedimentary rocks that form important gas and oil reservoirs. In the processing and interpretation of seismic data, however, the media in the Earth's interior are usually assumed to be completely elastic and isotropic, and methods based on isotropy are applied to anisotropic seismic data; this lowers seismic resolution and introduces imaging errors. Research on seismic wave simulation improves our understanding of how seismic waves propagate in anisotropic media and helps resolve the problems that anisotropy causes in processing and interpretation. Focusing on weakly anisotropic media with a rotated axis of symmetry, we systematically study the rules of seismic wave propagation in this kind of media, simulate the process numerically, and obtain good results. The first-order ray tracing (FORT) formulas derived for qP waves apply to anisotropic media of arbitrary symmetry. The equations are considerably simpler than the exact ray tracing equations. They allow qP waves to be treated independently from qS waves, just as in isotropic media, and they simplify considerably in media with higher-symmetry anisotropy. In isotropic media, they reduce to the exact ray tracing equations. In contrast to other perturbation techniques used to trace rays in weakly anisotropic media, our approach does not require calculation of reference rays in a reference isotropic medium; the FORT rays are obtained directly, which makes them computationally more efficient than standard ray tracing. Moreover, the derived second-order travel time correction formula effectively reduces the travel time error and improves the accuracy of travel time calculation. The tensor transformation equations of the weak-anisotropy parameters in media with a rotated axis of symmetry, derived from the Bond transformation equations, effectively resolve the coordinate transformation problems caused by the difference between the global and local coordinate systems. The calculated weak-anisotropy parameters are fully suitable for the first-order ray tracing used in this paper, and their forms are simpler than those obtained from the Bond transformation. In the numerical simulation of ray tracing, we use a travel time table calculation method in which the grid points inside the ray beam are located and their travel times are then obtained by inverse-distance interpolation, which gives good efficiency and accuracy. Finally, we verify the validity and adaptability of the method with numerical simulations for a rotated TI model with anisotropy of about 8% and a rotated ORTHO model with anisotropy of about 20%. The results indicate that the method is accurate for media of different types and different anisotropic strengths. Keywords: weak anisotropy, numerical simulation, ray tracing equation, travel time, inhomogeneity
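The FORT equations reduce to the exact kinematic ray-tracing equations in the isotropic limit; as a minimal sketch of that limit, the code below integrates Hamilton's ray equations for an assumed isotropic velocity model with a linear depth gradient. The velocity model and take-off angle are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def velocity(x, z):
    """Hypothetical isotropic reference model: velocity increasing linearly with depth (m/s)."""
    return 2000.0 + 0.5*z

def grad_velocity(x, z, h=1e-3):
    return np.array([(velocity(x + h, z) - velocity(x - h, z)) / (2*h),
                     (velocity(x, z + h) - velocity(x, z - h)) / (2*h)])

def rhs(t, u):
    # Hamilton's ray equations for H = (v^2 |p|^2 - 1)/2; the parameter is traveltime
    x, z, px, pz = u
    v = velocity(x, z)
    gv = grad_velocity(x, z)
    p2 = px*px + pz*pz
    return [v*v*px, v*v*pz, -p2*v*gv[0], -p2*v*gv[1]]

# shoot a ray from the origin at 30 degrees from vertical; slowness magnitude is 1/v
v0 = velocity(0.0, 0.0)
theta = np.deg2rad(30.0)
u0 = [0.0, 0.0, np.sin(theta)/v0, np.cos(theta)/v0]
sol = solve_ivp(rhs, [0.0, 2.0], u0, max_step=0.01)   # trace 2 s of traveltime
print(sol.y[0, -1], sol.y[1, -1])                      # ray end point (x, z) in metres
```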
Abstract:
The processes of seismic wave propagation in phase space and one-way wave extrapolation in the frequency-space domain, in the absence of dissipation, are essentially transformations under the action of one-parameter Lie groups. Consequently, the numerical methods for computing the propagation ought to be Lie group transformations as well, known as Lie group methods. After a fruitful study of fast methods for matrix inversion, some Lie group methods for seismic numerical modeling and depth migration are presented here. First, the Lie group description and method for seismic wave propagation in phase space is proposed; in other words, a symplectic group description and method for seismic wave propagation, since the symplectic group is a Lie subgroup and the symplectic method is a special Lie group method. In the Hamiltonian framework, the propagation of seismic waves is a one-parameter symplectic group transformation, and consequently the numerical methods for the propagation ought to be symplectic methods. After discretizing the wave field in time and phase space, many explicit, implicit, and leap-frog symplectic schemes are derived for numerical modeling. Compared with symplectic schemes, the finite-difference (FD) method is an approximation of the symplectic method. Consequently, explicit, implicit, and leap-frog symplectic schemes and the FD method are applied under the same conditions to compute wave fields for a constant-velocity model, a synthetic model, and the Marmousi model. The results illustrate the potential power of the symplectic methods. As an application, the symplectic method is employed to produce a synthetic seismic record of the Qinghai foothills model. Another application is the development of a ray + symplectic reverse-time migration method. To strike a reasonable balance between computational efficiency and accuracy, we combine the multi-valued wave field and Green function algorithm with symplectic reverse-time migration and thus develop a new ray + wave-equation prestack depth migration method. Marmousi model data and Qinghai foothills model data are processed here. The results show that our method is a better alternative to ray migration for imaging complex structures. Similarly, the extrapolation of one-way waves in the frequency-space domain is a Lie group transformation with one parameter Z, and consequently the numerical methods for the extrapolation ought to be Lie group methods. After discretizing the wave field in depth and space, the Lie group transformation takes the form of a matrix exponential, and each approximation of it gives a Lie group algorithm. Although the Padé symmetric series approximation of the matrix exponential gives an extrapolation method traditionally regarded as implicit FD migration, it benefits both the theoretical and applied study of seismic imaging because it represents the depth extrapolation and migration method in an entirely different way. Meanwhile, the technique of coordinates of the second kind for approximating the matrix exponential opens a new way to develop migration operators. Matrix inversion plays a vital role in the numerical migration method given by the Padé symmetric series approximation. The matrix has a Toeplitz structure with a helical boundary condition and is easy to invert with LU decomposition. An efficient LU decomposition method is spectral factorization.
That is, after the minimum-phase correlative function of each array of the matrix has been obtained by a spectral factorization method, all of the functions are arranged according to their original locations to form a lower triangular matrix. The major merit of LU decomposition with spectral factorization (SF decomposition) is its efficiency in dealing with a large number of matrices. After a table of the spectral factorization results for each array of the matrix has been set up, the SF decomposition can give the lower triangular matrix simply by reading the table. However, the relationship among the arrays is ignored in this method, which introduces decomposition errors; for numerical calculations in complex models, these errors can be fatal. The direct elimination method gives the exact LU decomposition, but even when simplified for our case, the large number of decompositions costs prohibitive computer time. A hybrid method is proposed here that combines spectral factorization with direct elimination. Its decomposition error is about ten times smaller than that of spectral factorization, and its decomposition speed is considerably faster than that of direct elimination, especially when dealing with a large number of matrices. With the hybrid method, 3D implicit migration can be expected to be applied to real seismic data. Finally, the impulse response of the 3D implicit migration operator is presented.
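As a minimal sketch of the symplectic time stepping described above, the code below advances the 1-D acoustic wave equation in Hamiltonian form with an explicit leap-frog (Störmer-Verlet) scheme, which is symplectic in time. The constant-velocity model, grid, and initial pulse are hypothetical and only illustrate the structure of such a scheme, not the authors' 2-D/3-D implementations.

```python
import numpy as np

def laplacian(q, dx):
    """Second-order centred spatial Laplacian with fixed (Dirichlet) ends."""
    lap = np.zeros_like(q)
    lap[1:-1] = (q[2:] - 2*q[1:-1] + q[:-2]) / dx**2
    return lap

# hypothetical constant-velocity model, chosen so that CFL = c*dt/dx = 0.4
c, dx, dt, nx, nt = 2000.0, 10.0, 0.002, 401, 400
x = np.arange(nx) * dx
q = np.exp(-((x - 2000.0) / 100.0)**2)   # initial displacement: Gaussian pulse
p = np.zeros(nx)                         # initial particle-velocity field

for _ in range(nt):
    # explicit leap-frog (Stormer-Verlet) step: a symplectic one-parameter map in time
    p += 0.5*dt * c**2 * laplacian(q, dx)
    q += dt * p
    p += 0.5*dt * c**2 * laplacian(q, dx)

print(q.max())   # the pulse splits into two halves that propagate without blowing up
```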
Abstract:
After the removal of Selective Availability in 2000, the ionosphere became the dominant error source for Global Navigation Satellite Systems (GNSS), especially for high-accuracy (cm-mm) applications such as Precise Point Positioning (PPP) and Real Time Kinematic (RTK) positioning. The common practice of eliminating the ionospheric error, e.g. by the ionosphere-free (IF) observable, which is a linear combination of observables on two frequencies such as GPS L1 and L2, accounts for about 99% of the total ionospheric effect, known as the first-order ionospheric effect (Ion1). The remaining 1% residual range errors (RREs) in the IF observable are due to the higher-order, i.e. second- and third-order, ionospheric effects, Ion2 and Ion3, respectively. Both terms are related to the electron content along the signal path; moreover, the Ion2 term is associated with the influence of the geomagnetic field on the ionospheric refractive index, and Ion3 with the ray bending effect of the ionosphere, which can cause significant deviation of the ray trajectory (due to strong electron density gradients in the ionosphere) such that the error contribution of Ion3 can exceed that of Ion2 (Kim and Tinin, 2007). The higher-order error terms do not cancel out in the (first-order) ionospherically corrected observable and, as such, when not accounted for they can degrade the accuracy of GNSS positioning, depending on the level of solar activity and the geomagnetic and ionospheric conditions (Hoque and Jakowski, 2007). Simulation results from the early 1990s show that Ion2 and Ion3 would contribute to the ionospheric error budget by less than 1% of the Ion1 term at GPS frequencies (Datta-Barua et al., 2008). Although the IF observable may provide sufficient accuracy for most GNSS applications, Ion2 and Ion3 need to be considered for applications demanding higher accuracy, especially at times of higher solar activity. This paper investigates the higher-order ionospheric effects (Ion2 and Ion3, excluding the ray bending effects associated with Ion3) on GNSS positioning in the European region using the precise point positioning (PPP) method. For this purpose, observations from four European stations were considered. These observations were taken in four time intervals corresponding to various geophysical conditions: the active and quiet periods of the solar cycle, 2001 and 2006, respectively, excluding the effects of disturbances in the geomagnetic field (i.e. geomagnetic storms), as well as the years 2001 and 2003, this time including the impact of geomagnetic disturbances. The program RINEX_HO (Marques et al., 2011) was used to calculate the magnitudes of Ion2 and Ion3 on the range measurements as well as the total electron content (TEC) observed on each receiver-satellite link. The program also corrects the GPS observation files for Ion2 and Ion3; thereafter it is possible to perform PPP with both the original and the corrected GPS observation files to analyze the impact of the higher-order ionospheric error terms, excluding the ray bending effect, which may become significant especially at low elevation angles (Ioannides and Strangeways, 2002), on the estimated station coordinates.
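For orientation, the sketch below evaluates the first-order ionospheric group delay Ion1 = 40.3·TEC/f² and shows that the dual-frequency ionosphere-free combination removes it exactly, leaving only the higher-order terms discussed above. The slant TEC and geometric range are hypothetical; the code does not implement RINEX_HO or the Ion2/Ion3 corrections themselves.

```python
import numpy as np

# GPS L1/L2 frequencies (Hz) and a hypothetical slant TEC of 50 TECU
f1, f2 = 1575.42e6, 1227.60e6
stec = 50.0 * 1e16                        # electrons / m^2

def ion1_delay(f, stec):
    """First-order ionospheric group delay in metres: Ion1 = 40.3 * TEC / f^2."""
    return 40.3 * stec / f**2

rho = 22.0e6                              # hypothetical geometric range (m)
P1 = rho + ion1_delay(f1, stec)           # pseudoranges carrying only the Ion1 term
P2 = rho + ion1_delay(f2, stec)

# ionosphere-free linear combination: removes Ion1 exactly, higher-order terms remain
P_IF = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)

print(ion1_delay(f1, stec))               # ~8.1 m of L1 range delay at 50 TECU
print(P_IF - rho)                         # ~0 m: Ion1 has been eliminated
```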
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
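As a minimal sketch of the non-prewhitening matched-filter observer mentioned above, the code below computes a detectability index d' from simulated signal-present and signal-absent ROIs. The Gaussian lesion, noise level, and white-noise assumption are hypothetical stand-ins for the measured CT images.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 2-D Gaussian lesion signal (5 HU peak) on a 64x64 ROI
n = 64
y, x = np.mgrid[:n, :n]
signal = 5.0 * np.exp(-((x - n/2)**2 + (y - n/2)**2) / (2 * 4.0**2))

def npw_dprime(signal, noise_sigma=10.0, n_trials=2000):
    """Non-prewhitening matched-filter observer: template equals the expected signal."""
    w = signal.ravel()
    scores_present, scores_absent = [], []
    for _ in range(n_trials):
        noise = rng.normal(0.0, noise_sigma, size=w.shape)
        scores_absent.append(w @ noise)            # signal-absent decision variable
        scores_present.append(w @ (noise + w))     # signal-present decision variable
    scores_present = np.array(scores_present)
    scores_absent = np.array(scores_absent)
    return (scores_present.mean() - scores_absent.mean()) / np.sqrt(
        0.5 * (scores_present.var() + scores_absent.var()))

print(npw_dprime(signal))   # for white noise this approaches |signal| / sigma
```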
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
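A hedged sketch of ensemble NPS estimation is given below: the 2-D noise power spectrum is estimated from repeated-scan noise ROIs as the ensemble-averaged squared DFT, scaled by the pixel area. The ROIs here are synthetic white noise rather than scanner data, and the rectangular-ROI form shown does not include the irregular-ROI extension developed in the dissertation.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """Ensemble 2-D noise power spectrum:
    NPS(fx, fy) = (dx*dy / (Nx*Ny)) * <|DFT(roi - mean(roi))|^2>."""
    noise_rois = np.asarray(noise_rois, dtype=float)
    _, ny, nx = noise_rois.shape
    spectra = [np.abs(np.fft.fft2(roi - roi.mean()))**2 for roi in noise_rois]
    nps = np.mean(spectra, axis=0) * pixel_mm**2 / (nx * ny)
    freqs = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))   # cycles/mm
    return np.fft.fftshift(nps), freqs

# hypothetical white-noise ROIs (sigma = 20 HU) standing in for 50 repeated scans
rng = np.random.default_rng(1)
rois = rng.normal(0.0, 20.0, size=(50, 64, 64))
nps, f = nps_2d(rois)
print(nps.mean())   # ~ sigma^2 * pixel area = 100 HU^2 mm^2 for white noise
```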
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Stochastic models for competing clonotypes of T cells by multivariate, continuous-time, discrete-state Markov processes have been proposed in the literature by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach, based on Monte Carlo simulation. This is partly because the complementary probability density function (PDF) approaches can be expensive, but here we describe some efficient PDF approaches by directly solving the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when evaluating the action of the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
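As a minimal sketch of the Finite State Projection / matrix-exponential approach (for a single birth-death population rather than the multivariate competing-clonotype model, and with made-up rates), the code below truncates the state space, assembles the Master Equation generator, and applies the action of the matrix exponential to the initial distribution; a Krylov-subspace routine would be used in the same place.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# simple birth-death population: birth (influx) rate b, per-cell death rate d
# (illustrative rates, not those of the competing-clonotype model)
b, d, N = 5.0, 0.5, 200                   # Finite State Projection: truncate at N copies

n = np.arange(N + 1)
birth = np.full(N + 1, b)
birth[-1] = 0.0                           # close the box at the truncation boundary
death = d * n

# Master Equation generator A, so that dp/dt = A p on the truncated state space
A = diags([death[1:], -(birth + death), birth[:-1]],
          offsets=[1, 0, -1], format="csc")

p0 = np.zeros(N + 1)
p0[0] = 1.0                               # start with zero cells
p_t = expm_multiply(A * 10.0, p0)         # action of exp(A t) on p0 at t = 10
print(p_t.sum(), n @ p_t)                 # probability retained, mean copy number (~ b/d = 10)
```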
Abstract:
We consider the problem of estimating the optimal parameter trajectory over a finite time interval in a parameterized stochastic differential equation (SDE), and propose a simulation-based algorithm for this purpose. Towards this end, we consider a discretization of the SDE over finite time instants and reformulate the problem as one of finding an optimal parameter at each of these instants. A stochastic approximation algorithm based on the smoothed functional technique is adapted to this setting for finding the optimal parameter trajectory. A proof of convergence of the algorithm is presented and results of numerical experiments over two different settings are shown. The algorithm is seen to exhibit good performance. We also present extensions of our framework to the case of finding optimal parameterized feedback policies for controlled SDE and present numerical results in this scenario as well.
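A hedged sketch of the smoothed functional technique is shown below for a static parameter (rather than a full trajectory over discretized time instants): a two-sided Gaussian-perturbation gradient estimate drives a stochastic approximation update with decreasing step sizes. The noisy quadratic cost is a hypothetical stand-in for the simulation-based objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_cost(theta):
    """Stand-in for a simulation-based cost estimate (hypothetical noisy quadratic)."""
    target = np.array([1.0, -2.0, 0.5])
    return np.sum((theta - target)**2) + rng.normal(0.0, 0.1)

def smoothed_functional_sa(theta0, iters=5000, beta=0.1, a0=0.5):
    """Stochastic approximation with a two-sided Gaussian smoothed-functional gradient."""
    theta = np.array(theta0, dtype=float)
    for k in range(1, iters + 1):
        eta = rng.standard_normal(theta.shape)          # Gaussian perturbation direction
        grad = eta * (noisy_cost(theta + beta*eta)
                      - noisy_cost(theta - beta*eta)) / (2.0*beta)
        theta -= (a0 / k) * grad                        # decreasing step-size sequence
    return theta

print(smoothed_functional_sa([0.0, 0.0, 0.0]))   # should move towards [1, -2, 0.5]
```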
Abstract:
Extended self-similarity (ESS), a procedure that remarkably extends the range of scaling for structure functions in Navier-Stokes turbulence and thus allows improved determination of intermittency exponents, has never been fully explained. We show that ESS applies to Burgers turbulence at high Reynolds numbers and we give the theoretical explanation of the numerically observed improved scaling at both the IR and UV end, in total a gain of about three quarters of a decade: there is a reduction of subdominant contributions to scaling when going from the standard structure function representation to the ESS representation. We conjecture that a similar situation holds for three-dimensional incompressible turbulence and suggest ways of capturing subdominant contributions to scaling.
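To make the ESS idea concrete, the sketch below computes second- and third-order structure functions of a synthetic power-law signal and extracts the relative exponent from the slope of log S2 versus log S3, which is the ESS representation. The random-phase signal is only a stand-in for Burgers data and carries no intermittency, so the relative exponent comes out close to 2/3.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic 1-D signal with a k^(-5/3) spectrum (random phases); a stand-in for
# Burgers data, with Gaussian statistics and therefore no intermittency
N = 2**16
k = np.fft.rfftfreq(N, d=1.0/N)
amp = np.zeros_like(k)
amp[1:] = k[1:]**(-5.0/6.0)                       # |u_k| ~ k^(-5/6) gives E(k) ~ k^(-5/3)
u = np.fft.irfft(amp * np.exp(2j*np.pi*rng.random(k.size)), n=N)

def structure_function(u, p, seps):
    """S_p(r) = <|u(x+r) - u(x)|^p> for separations r given in samples."""
    return np.array([np.mean(np.abs(np.roll(u, -r) - u)**p) for r in seps])

seps = np.unique(np.logspace(0, 3, 30).astype(int))
S2 = structure_function(u, 2, seps)
S3 = structure_function(u, 3, seps)

# ESS representation: slope of log S2 versus log S3 is the relative exponent zeta_2/zeta_3
zeta_rel = np.polyfit(np.log(S3), np.log(S2), 1)[0]
print(zeta_rel)   # ~2/3 for this non-intermittent (K41-like) field
```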
Abstract:
Systems of kinematical conservation laws (KCL) govern the evolution of a curve in a plane or a surface in space, even if the curve or the surface has singularities on it. In our recent publication [K. R. Arun, P. Prasad, 3-D kinematical conservation laws (KCL): evolution of a surface in R^3, in particular propagation of a nonlinear wavefront, Wave Motion 46 (2009) 293-311] we developed a mathematical theory to study the successive positions and geometry of a 3-D weakly nonlinear wavefront by adding an energy transport equation to the KCL. The 7 x 7 system of equations of this KCL-based 3-D weakly nonlinear ray theory (WNLRT) is quite complex, and explicit expressions for its two nonzero eigenvalues could not be obtained before. In this short note, we use two different methods: (i) the equivalence of the KCL and the ray equations and (ii) the transformation of surface coordinates, to derive the same exact expressions for these eigenvalues. The explicit expressions for the nonzero eigenvalues are also important for checking the stability of any numerical scheme to solve the 3-D WNLRT. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
We have carried out synchrotron based high-pressure x-ray diffraction study of orthorhombic EuMnO3, GdMnO3, TbMnO3 and DyMnO3 up to 54.4, 41.6, 47.0 and 50.2 GPa, respectively. The diffraction peaks of all the four manganites shift monotonically to higher diffraction angles and the crystals retain the orthorhombic structure till the highest pressure. We have fitted the observed volume versus pressure data with the Birch-Murnaghan equation of state and determined the bulk modulus to be 185 +/- 6 GPa, 190 +/- 16 GPa, 188 +/- 9 GPa and 192 +/- 8 GPa for EuMnO3, GdMnO3, TbMnO3 and DyMnO3, respectively. The bulk modulus of EuMnO3 is comparable to other manganites, in contrast to theoretical predictions.
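A hedged sketch of the volume-pressure analysis is given below: synthetic V(P) data (with parameters only loosely resembling the reported bulk moduli, not the actual diffraction data) are fitted with the third-order Birch-Murnaghan equation of state to recover V0, K0, and K0'.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def birch_murnaghan(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan equation of state, P(V) in GPa."""
    x = (V0 / V)**(2.0/3.0)
    return 1.5*K0*(x**3.5 - x**2.5) * (1.0 + 0.75*(K0p - 4.0)*(x - 1.0))

# hypothetical volume-pressure data standing in for the measured unit-cell volumes
rng = np.random.default_rng(3)
V0_t, K0_t, K0p_t = 230.0, 188.0, 4.0                 # A^3, GPa, dimensionless
P_obs = np.linspace(0.0, 50.0, 26)
V_obs = np.array([brentq(lambda v: birch_murnaghan(v, V0_t, K0_t, K0p_t) - p, 140.0, 240.0)
                  for p in P_obs])
V_obs += rng.normal(0.0, 0.2, V_obs.shape)            # add measurement scatter

# fit P(V) to recover V0, the bulk modulus K0 and its pressure derivative K0'
popt, pcov = curve_fit(birch_murnaghan, V_obs, P_obs, p0=[235.0, 150.0, 4.0])
print(popt)
print(np.sqrt(np.diag(pcov)))                         # 1-sigma parameter uncertainties
```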
Abstract:
Because the Earth’s upper mantle is inaccessible to us, in order to understand the chemical and physical processes that occur in the Earth’s interior we must rely on both experimental work and computational modeling. This thesis addresses both of these geochemical methods. In the first chapter, I develop an internally consistent comprehensive molar volume model for spinels in the oxide system FeO-MgO-Fe2O3-Cr2O3-Al2O3-TiO2. The model is compared to the current MELTS spinel model with a demonstration of the impact of the model difference on the estimated spinel-garnet lherzolite transition pressure. In the second chapter, I calibrate a molar volume model for cubic garnets in the system SiO2-Al2O3-TiO2-Fe2O3-Cr2O3-FeO-MnO-MgO-CaO-Na2O. I use the method of singular value analysis to calibrate excess volume of mixing parameters for the garnet model. The implications the model has for the density of the lithospheric mantle are explored. In the third chapter, I discuss the nuclear resonant inelastic X-ray scattering (NRIXS) method, and present analysis of three orthopyroxene samples with different Fe contents. Longitudinal and shear wave velocities, elastic parameters, and other thermodynamic information are extracted from the raw NRIXS data.
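As a minimal sketch of calibration by singular value analysis, the code below solves a linear least-squares problem for excess-volume-like parameters via the SVD, truncating small singular values to stabilize the fit. The design matrix, parameter values, and noise level are entirely hypothetical.

```python
import numpy as np

def svd_calibrate(A, b, rel_cutoff=1e-3):
    """Least-squares calibration via singular value analysis, truncating singular
    values below rel_cutoff * s_max to stabilise the solution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_cutoff * s[0]
    w = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
    return w, int(keep.sum())

# hypothetical calibration problem: recover three excess-volume-like parameters
rng = np.random.default_rng(4)
w_true = np.array([0.15, -0.05, 0.30])
A = rng.random((40, 3))                      # design matrix built from compositions
b = A @ w_true + rng.normal(0.0, 0.01, 40)   # noisy "observed" excess volumes
w_fit, rank = svd_calibrate(A, b)
print(w_fit, rank)
```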
Abstract:
Nonlinear Thomson backscattering of an intense Gaussian laser pulse by a counterpropagating energetic electron is investigated by numerically solving the electron equation of motion taking into account the radiative damping force. The backscattered radiation characteristics are different for linearly and circularly polarized lasers because of a difference in their ponderomotive forces acting on the electron. The radiative electron energy loss weakens the backscattered power, breaks the symmetry of the backscattered-pulse profile, and prolongs the duration of the backscattered radiation. With the circularly polarized laser, an adjustable double-peaked backscattered pulse can be obtained. Such a profile has potential applications as a subfemtosecond x-ray pump and probe with adjustable time delay and power ratio. (c) 2006 American Institute of Physics.
Abstract:
In this report, we start from the Lagrangian and analyze theoretically the electron dynamics in an electromagnetic field. By solving the relativistic governing equations of the electron, the trajectories of an electron in a plane laser pulse and in a focused laser pulse are given for different initial conditions. The electron trajectory is determined by its initial momentum and by the amplitude, spot size, and polarization of the laser pulse. The optimum initial momentum of the electron for an LSS (laser synchrotron source) is obtained. A linearly polarized laser is more advantageous than a circularly polarized laser for generating harmonic radiation.
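A minimal sketch of the governing equations is given below: the relativistic equations of motion for an electron in a linearly polarized plane wave are integrated numerically in normalized units (no focusing, no radiation damping). The amplitude a0 and initial conditions are hypothetical; the printout checks the plane-wave invariant gamma - p_z and the familiar forward drift p_z <= a0^2/2 for an electron starting at rest.

```python
import numpy as np
from scipy.integrate import solve_ivp

a0 = 2.0   # hypothetical normalised amplitude a0 = eE0 / (m_e c w)

def rhs(tau, u):
    """Relativistic electron in a linearly polarised plane wave, normalised units:
    time in 1/w, length in c/w, momentum in m_e c, fields in m_e c w / e."""
    x, z, px, pz = u
    gamma = np.sqrt(1.0 + px*px + pz*pz)
    vx, vz = px/gamma, pz/gamma
    Ex = a0 * np.cos(tau - z)       # plane wave propagating along +z
    By = Ex                         # |E| = |B| for a vacuum plane wave
    # dp/dtau = -(E + v x B) for the electron (charge -1 in these units)
    return [vx, vz, -Ex + vz*By, -vx*By]

sol = solve_ivp(rhs, [0.0, 60.0], [0.0, 0.0, 0.0, 0.0], max_step=0.01, rtol=1e-8)
px, pz = sol.y[2], sol.y[3]
gamma = np.sqrt(1.0 + px**2 + pz**2)
print(np.ptp(gamma - pz))   # ~0: gamma - p_z is an exact invariant in a plane wave
print(pz.max())             # forward drift momentum, bounded by a0^2/2 when starting at rest
```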
Abstract:
The theoretical model of direct diffraction phase-contrast imaging with a partially coherent x-ray source is expressed by a multiple-integral operator. It is shown that this integral operator is linear, and the phase-retrieval problem is formulated as solving a multiple-integral operator equation. It is demonstrated that the solution of the phase retrieval is unstable. A numerical simulation is performed, and the result confirms that the phase-retrieval solution is unstable.
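To illustrate why inverting such a smoothing integral operator is unstable, the sketch below discretizes a generic Fredholm integral equation of the first kind with a Gaussian kernel (a stand-in for the imaging operator, not the actual partially coherent propagator), prints its condition number, and shows how a tiny perturbation of the data destroys a naive inversion.

```python
import numpy as np

# discretised Fredholm integral equation of the first kind, g = K f, with a
# smoothing Gaussian kernel; a generic stand-in for the multiple-integral operator
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :])**2) / (2 * 0.05**2)) * dx

f_true = np.sin(2*np.pi*x) + 0.5*np.sin(6*np.pi*x)    # quantity to be retrieved
g = K @ f_true                                        # noise-free "measured" data

print(np.linalg.cond(K))                              # enormous: the problem is ill-posed

g_noisy = g + np.random.default_rng(5).normal(0.0, 1e-6, n)
f_naive = np.linalg.solve(K, g_noisy)                 # unregularised inversion
print(np.max(np.abs(f_naive - f_true)))               # tiny data noise, huge solution error
```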