980 results for nonlinear error
Abstract:
A detailed study of surface laser damage performed on a nonlinear optical crystal, urea L-malic acid, using 7 ns laser pulses at 10 Hz repetition rate from a Q-switched Nd:YAG laser at wavelengths of 532 and 1064 nm is reported. The single shot and multiple shot surface laser damage threshold values are determined to be 26.64±0.19 and 20.60±0.36 GW cm−2 at 1064 nm and 18.44±0.31 and 7.52±0.22 GW cm−2 at 532 nm laser radiation, respectively. The laser damage anisotropy is consistent with the Vickers mechanical hardness measurement performed along three crystallographic directions. The Knoop polar plot also reflects the damage morphology. Our investigation reveals a direct correlation between the laser damage profile and hardness anisotropy. Thermal breakdown of the crystal is identified as the possible mechanism of laser induced surface damage.
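The GW cm−2 threshold figures quoted above come from converting pulse energy, pulse width, and focal spot size into a power density. A minimal sketch of that conversion, assuming a flat-top beam profile; the pulse energy and spot radius used below are hypothetical illustrative inputs, not values from the study:

```python
import math

def power_density_gw_per_cm2(pulse_energy_j, pulse_width_s, spot_radius_cm):
    """Average power density over the pulse, assuming a flat-top beam."""
    peak_power_w = pulse_energy_j / pulse_width_s          # W
    spot_area_cm2 = math.pi * spot_radius_cm ** 2          # cm^2
    return peak_power_w / spot_area_cm2 / 1e9              # GW/cm^2

# Example: a (hypothetical) 2 mJ, 7 ns pulse focused to a 20 um radius spot
print(round(power_density_gw_per_cm2(2e-3, 7e-9, 20e-4), 2))  # → 22.74
```

The 7 ns pulse width matches the abstract; whether the reported thresholds assumed a flat-top or Gaussian spatial profile is not stated there, so the area factor here is an assumption.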
Abstract:
Three-component ferroelectric superlattices consisting of alternating layers of SrTiO3, BaTiO3, and CaTiO3 (SBC) with variable interlayer thickness were fabricated on Pt(111)/TiO2/SiO2/Si (100) substrates by pulsed laser deposition. The presence of satellite reflections in x-ray-diffraction analysis and a periodic concentration of Sr, Ba, and Ca throughout the film in the depth profile of secondary ion mass spectrometry analysis confirm the fabrication of superlattice structures. The Pr (remanent polarization) and Ps (saturation polarization) of the SBC superlattice with 16.4-nm individual layer thickness (SBC16.4) were found to be around 4.96 and 34 μC/cm2, respectively. The dependence of polarization on individual layer thickness and lattice strain was studied in order to investigate the size dependence of the dielectric properties. The dielectric constant of these superlattices was found to be much higher than that of the individual component layers present in the superlattice configuration. The relatively high tunability (∼55%) obtained around 300 K indicates that the superlattice is a potential electrically tunable material for microwave applications at room temperature. The enhanced dielectric properties are thus discussed in terms of the interfacial-strain-driven polar regions due to high lattice mismatch and the electrostatic coupling due to polarization mismatch between individual layers.
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives the upper and lower bounds on the probability of a single error and multiple error patterns. The bounds are fairly tight. The bounds can also be used to select proper tap gains of the equalizer.
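The paper's bounds are specific to decision feedback equalizers and their feedback error propagation; as a generic sketch of the underlying idea of bracketing a Gaussian-tail error probability between computable upper and lower bounds, the classical Q-function bounds can be used (this is an illustrative stand-in, not the paper's derivation):

```python
import math

def q_function(x):
    """Exact Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_upper(x):
    """Chernoff-type upper bound: Q(x) <= 0.5*exp(-x^2/2)."""
    return 0.5 * math.exp(-x * x / 2.0)

def q_lower(x):
    """Classical lower bound: Q(x) >= x/(1+x^2) * phi(x)."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return x / (1.0 + x * x) * phi

snr = 3.0  # hypothetical decision-point SNR (signal amplitude over noise sigma)
assert q_lower(snr) <= q_function(snr) <= q_upper(snr)
```

As in the paper, the practical value of such brackets is that the true error probability need never be evaluated exactly to compare equalizer settings.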
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
Use of some new planes, such as the R–x and R²–x planes (where R denotes the radius vector in the n-dimensional phase space from the origin to any point on the trajectory described by the system), is suggested for the analysis of nonlinear systems of any kind. The stability conditions in these planes are given. For easy understanding of the method, the transformation from the phase plane to the R–x and R²–x planes is worked out for second-order systems. In general, while these planes are as useful as the phase plane, they have proved simpler for quickly determining the general behavior of certain classes of second-order nonlinear systems. A chart and a simple formula are suggested to evaluate time easily from the R–x and R²–x trajectories, respectively. A means of solving higher-order nonlinear systems is also illustrated. Finally, a comparative study of the trajectories near singular points on the phase plane and on the new planes is made.
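The transformation described can be sketched numerically: for a second-order system the radius vector is R = sqrt(x² + ẋ²), and for a stable system the R–x trajectory should collapse toward the origin. A minimal sketch using a hypothetical damped oscillator ẍ + ẋ + x = 0 (an illustrative system and step size, not from the paper):

```python
def r_x_trajectory(x0, v0, dt=1e-3, steps=20000):
    """Integrate xddot = -xdot - x by forward Euler; return (R, x) samples."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = -v - x          # acceleration of the damped oscillator
        x += v * dt
        v += a * dt
        traj.append(((x * x + v * v) ** 0.5, x))
    return traj

traj = r_x_trajectory(1.0, 0.0)
print(traj[0][0] > traj[-1][0])  # → True: R decays for a stable system
```

Plotting the R coordinate against x for this run gives a curve spiraling toward R = 0, the R–x analogue of a stable focus in the phase plane.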
Abstract:
Analysis of certain second-order nonlinear systems, not easily amenable to the phase-plane methods, and described by either of the following differential equations

ẋ^(n−2) ẍ + f(x) ẋ^(2n) + g(x) ẋ^n + h(x) = 0
ẍ + f(x) ẋ^n + h(x) = 0,  n > 0

can be effected easily by drawing the entire portrait of trajectories on a new plane, that is, on one of the ẋⁿ–x planes. Simple equations are given to evaluate time from a trajectory on any of these n planes. Poincaré's fundamental phase plane, the ẋ–x plane, is conceived of as the simplest case of the general ẋⁿ–x plane.
Abstract:
This paper suggests the use of simple transformations like ẋ = kx and ẋ = kx² for second-order nonlinear differential equations to effect rapid plotting of the phase-plane trajectories. The method is particularly helpful in quickly determining the trajectory slopes along simple curves in any desired region of the phase plane. New planes such as the tẋ–x and tẋ²–x planes are considered for the study of some groups of nonlinear time-varying systems. Suggestions for solving certain higher-order nonlinear systems are also made.
Abstract:
In normal materials, nonlinear optical effects arise from nonlinearities in the polarisabilities of the constituent atoms or molecules. In liquid crystals, on the other hand, nonlinear optical effects arise from entirely different processes, and they occur at relatively low laser intensities. In a laser field a liquid crystal exhibits many novel and interesting nonlinear optical effects. In addition, there are laser-field-induced effects peculiar to liquid crystals, such as structural transformations, orientational transitions, modulated structures and phase transitions, to name a few. Here we dwell upon a few of these interesting and important nonlinear optical phenomena in nematic liquid crystals.
Abstract:
The DMS-FEM, which enables functional approximations with C¹ or still higher inter-element continuity within an FEM-based meshing of the domain, has recently been proposed by Sunilkumar and Roy [39,40]. Through numerical explorations on linear elasto-static problems, the method was found to have conspicuously superior convergence characteristics as well as higher numerical stability against locking. These observations motivate the present study, which aims at extending the DMS-FEM to (geometrically) nonlinear elasto-static problems of interest in solid mechanics and assessing its numerical performance vis-à-vis the FEM. In particular, the DMS-FEM is shown to vastly outperform the FEM (presently implemented through the commercial software ANSYS®), as the former requires fewer linearization and load steps to achieve convergence. In addition, in the context of nearly incompressible nonlinear systems prone to volumetric locking, and with no special numerical artefacts (e.g. stabilized or mixed weak forms) employed to arrest locking, the DMS-FEM is shown to approach the incompressibility limit much more closely and with significantly fewer iterations than the FEM. The numerical findings are suggestive of the important role that higher-order (uniform) continuity of the approximated field variables plays in overcoming volumetric locking, and of the great promise that the method holds for a range of other numerically ill-conditioned problems of interest in computational structural mechanics.
Abstract:
Lead telluride (PbTe) nanorods have been uniformly grown on silicon substrates using the thermal evaporation technique under high-vacuum conditions. The structural and morphological studies were carried out using X-ray diffraction and scanning electron microscopy. Optical nonlinearity studies using the open-aperture z-scan, employing 5 ns and 100 fs laser pulses, reveal a three-photon-type absorption. For nanosecond excitation the nonlinear absorption coefficients (γ) are of the order of 10−22 m3 W−2, and for femtosecond excitation they are of the order of 10−29 m3 W−2. The role of free carriers and excitons in causing the nonlinearity in both excitation time domains is discussed. Results indicate that PbTe nanorods are good optical limiters with potential device applications.
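For a thin sample with pure three-photon absorption, dI/dz′ = −γI³ integrates to an on-axis transmittance T = (1 + 2γI²L)^(−1/2), which is the kind of model typically fitted to open-aperture z-scan traces. A sketch with hypothetical parameter values (γ is chosen near the 10−22 m3 W−2 order quoted above for nanosecond pulses; I0, L, and z0 are illustrative, not from the paper):

```python
def transmittance_3pa(z, gamma, i0, length, z0):
    """Normalized open-aperture transmittance for a three-photon absorber."""
    intensity = i0 / (1.0 + (z / z0) ** 2)   # on-axis intensity at position z (W/m^2)
    return (1.0 + 2.0 * gamma * intensity ** 2 * length) ** -0.5

# Dip at focus (z = 0), nearly full transmission far from focus
t_focus = transmittance_3pa(0.0, gamma=1e-22, i0=1e13, length=1e-3, z0=5e-3)
t_far = transmittance_3pa(0.1, gamma=1e-22, i0=1e13, length=1e-3, z0=5e-3)
print(t_focus < t_far)  # → True: absorption is strongest at the focal point
```

Scanning z and fitting the resulting dip depth is how γ is extracted in practice; the square dependence on intensity inside the model is what distinguishes a three-photon fit from a two-photon one.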
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in getting the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error-bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of the natural problems we may introduce inconsistency or near-inconsistency due to human error, or due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.