27 results for second-order inelastic analysis


Relevance: 100.00%

Publisher:

Abstract:

Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 x 2) structure of oxygen on Ni{111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV curves collected at normal incidence, and an analysis using only integer-order IV curves collected at three different angles of incidence. The latter approach, like the first, achieves a clear discrimination between different adsorption sites, and the best-fit structures of the two analyses agree within each other's error bars (all less than 0.1 angstrom). The conventional analysis is more sensitive to the adsorbate coordinates and to the lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results show, therefore, that for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, accuracy and reliability similar to those for ordered adsorbate layers can be achieved, provided the data set is large enough.
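Agreement between measured and calculated IV curves in such analyses is typically quantified by a reliability (R) factor. As an illustration only, a minimal sketch of a simple least-squares R-factor in Python; the metric and the synthetic curve data below are assumptions, not the criterion used in the study:

```python
import numpy as np

def r2_factor(i_exp, i_theo):
    """Simple least-squares R-factor between experimental and calculated
    IV curves, after scaling the calculation onto the experiment."""
    i_exp, i_theo = np.asarray(i_exp, float), np.asarray(i_theo, float)
    c = np.dot(i_exp, i_theo) / np.dot(i_theo, i_theo)  # best scale factor
    return np.sum((i_exp - c * i_theo) ** 2) / np.sum(i_exp ** 2)

# synthetic IV curve: two peaks over a 50-300 eV energy range
energies = np.linspace(50.0, 300.0, 251)
iv = np.exp(-((energies - 120.0) / 30.0) ** 2) \
     + 0.5 * np.exp(-((energies - 220.0) / 25.0) ** 2)
print(r2_factor(iv, 2.0 * iv))  # scaling is divided out -> 0.0
```

Identical curves give zero regardless of an overall intensity scale, which is why such factors compare curve shapes rather than absolute intensities.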

Relevance: 100.00%

Publisher:

Abstract:

A new sparse kernel density estimator is introduced, based on the minimum integrated square error criterion combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization (EM) algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
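The starting point described above, a Parzen window estimator with a common kernel covariance, can be sketched in a few lines. This is a simplification that assumes an isotropic covariance sigma^2 * I rather than the full common covariance the paper finds by local component analysis:

```python
import numpy as np

def parzen_common_cov(data, sigma):
    """Parzen-window density estimate whose Gaussian kernels share one
    covariance; simplified here to an isotropic sigma^2 * I rather than
    the full common covariance found by local component analysis."""
    data = np.asarray(data, dtype=float)
    n, d = data.shape
    norm = (2.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    def pdf(x):
        sq = np.sum((data - np.asarray(x, dtype=float)) ** 2, axis=1)
        return norm * np.mean(np.exp(-0.5 * sq / sigma ** 2))
    return pdf

rng = np.random.default_rng(0)
sample = rng.normal(size=(500, 2))
p = parzen_common_cov(sample, sigma=0.5)
print(p([0.0, 0.0]) > p([4.0, 4.0]))  # density is higher near the data mass
```

The sparse estimator then replaces the uniform 1/n weights with mixing coefficients, most of which are driven to zero on the multinomial manifold.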

Relevance: 100.00%

Publisher:

Abstract:

The structure of turbulent flow over large roughness consisting of regular arrays of cubical obstacles is investigated numerically under constant pressure gradient conditions. Results are analysed in terms of first- and second-order statistics, by visualization of instantaneous flow fields and by conditional averaging. The accuracy of the simulations is established by detailed comparisons of first- and second-order statistics with wind-tunnel measurements. Coherent structures in the log region are investigated. Structure angles are computed from two-point correlations, and quadrant analysis is performed to determine the relative importance of Q2 and Q4 events (ejections and sweeps) as a function of height above the roughness. Flow visualization shows the existence of low-momentum regions (LMRs) as well as vortical structures throughout the log layer. Filtering techniques are used to reveal instantaneous examples of the association of the vortices with the LMRs, and linear stochastic estimation and conditional averaging are employed to deduce their statistical properties. The conditional averaging results reveal the presence of LMRs and regions of Q2 and Q4 events that appear to be associated with hairpin-like vortices, but a quantitative correspondence between the sizes of the vortices and those of the LMRs is difficult to establish; a simple estimate of the ratio of the vortex width to the LMR width gives a value that is several times larger than the corresponding ratio over smooth walls. The shape and inclination of the vortices and their spatial organization are compared to recent findings over smooth walls. Characteristic length scales are shown to scale linearly with height in the log region. 
Whilst there are striking qualitative similarities with smooth walls, there are also important differences in detail regarding: (i) structure angles and sizes and their dependence on distance from the rough surface; (ii) the flow structure close to the roughness; (iii) the roles of inflows into and outflows from cavities within the roughness; (iv) larger vortices on the rough wall compared to the smooth wall; (v) the effect of the different generation mechanism at the wall in setting the scales of structures.
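The quadrant analysis mentioned above can be sketched compactly. This is a generic illustration with synthetic negatively correlated fluctuations standing in for u' and w', not the paper's simulation data:

```python
import numpy as np

def quadrant_fractions(u, w):
    """Fractions of the Reynolds shear stress carried by Q2 events
    (ejections: u' < 0, w' > 0) and Q4 events (sweeps: u' > 0, w' < 0)."""
    up, wp = u - np.mean(u), w - np.mean(w)
    uw = up * wp
    total = np.sum(uw)
    q2 = np.sum(uw[(up < 0) & (wp > 0)]) / total
    q4 = np.sum(uw[(up > 0) & (wp < 0)]) / total
    return q2, q4

# synthetic negatively correlated fluctuations, as in a shear flow
rng = np.random.default_rng(1)
u = rng.normal(size=10000)
w = -0.3 * u + rng.normal(size=10000)
q2, q4 = quadrant_fractions(u, w)
print(q2, q4)  # both positive: ejections and sweeps carry the stress
```

Comparing these fractions as a function of height above the roughness is what determines the relative importance of ejections and sweeps.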

Relevance: 100.00%

Publisher:

Abstract:

A new method of clear-air turbulence (CAT) forecasting based on the Lighthill–Ford theory of spontaneous imbalance and emission of inertia–gravity waves has been derived and applied on episodic and seasonal time scales. A scale analysis of this shallow-water theory for midlatitude synoptic-scale flows identifies advection of relative vorticity as the leading-order source term. Examination of the leading- and second-order terms elucidates previous, more empirically inspired CAT forecast diagnostics. Application of the Lighthill–Ford theory to the Upper Mississippi and Ohio Valleys CAT outbreak of 9 March 2006 results in good agreement with pilot reports of turbulence. Application of Lighthill–Ford theory to CAT forecasting for the period 3 November 2005–26 March 2006, using 1-h forecasts from the 1500 UTC run of the Rapid Update Cycle 2 (RUC-2) model, leads to forecasts superior to those of the current operational version of the Graphical Turbulence Guidance (GTG1) algorithm, the most skillful operational CAT forecasting method in existence. The results suggest that major improvements in CAT forecasting could result if the methods presented herein become operational.
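The leading-order source term identified by the scale analysis, advection of relative vorticity, can be evaluated on any gridded wind field. A minimal sketch with generic centred differences, not the operational implementation:

```python
import numpy as np

def vorticity_advection(u, v, dx, dy):
    """Advection of relative vorticity, u*dzeta/dx + v*dzeta/dy, the
    leading-order source term of the scale analysis, using centred
    differences on a uniform grid (arrays indexed [y, x])."""
    zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    return u * np.gradient(zeta, dx, axis=1) + v * np.gradient(zeta, dy, axis=0)

# solid-body rotation has uniform vorticity, so its advection vanishes
y, x = np.mgrid[0:32, 0:32] * 1.0
u, v = -(y - 16.0), x - 16.0
adv = vorticity_advection(u, v, 1.0, 1.0)
print(abs(adv).max())  # 0.0 up to floating point
```

A CAT diagnostic built on this term would flag regions where the advection is large, i.e. where vorticity is being rearranged rapidly by the flow.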

Relevance: 100.00%

Publisher:

Abstract:

The structures of trimethylchlorogermane ((CH3)3GeCl) and trimethylbromogermane ((CH3)3GeBr) have been determined by gas-phase electron diffraction (GED), augmented by the results of ab initio calculations at the second-order Møller-Plesset (MP2) level of theory with the 6-311+G(d) basis set. All electrons were included in the correlation calculation. The ab initio calculations indicated that these molecules have C3v symmetry, and models with this symmetry were used in the electron diffraction analysis. The principal distances (r_g) and angles (∠_α) from the combined GED/ab initio study of trimethylchlorogermane (with estimated 2σ uncertainties) are: r(Ge-C) = 1.950(4) Å, r(Ge-Cl) = 2.173(4) Å, r(C-H) = 1.090(9) Å, ∠CGeC = 112.7(7)°, ∠CGeCl = 106.0(8)°, ∠GeCH = 107.8(12)°. The corresponding results for trimethylbromogermane are: r(Ge-C) = 1.952(7) Å, r(Ge-Br) = 2.325(4) Å, r(C-H) = 1.140(28) Å, ∠CGeC = 114.2(11)°, ∠CGeBr = 104.2(13)°, ∠GeCH = 106.9(43)°. Local C3v symmetry and a staggered conformation were assumed for the methyl groups.
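As an illustrative use of the reported parameters (not part of the paper's refinement), the nonbonded Cl···C distance implied by r(Ge-C), r(Ge-Cl) and ∠CGeCl follows directly from the law of cosines:

```python
import math

# Nonbonded Cl...C distance implied by the reported r(Ge-C), r(Ge-Cl) and
# angle CGeCl for trimethylchlorogermane (an illustrative check only,
# not part of the paper's refinement)
r_ge_c, r_ge_cl = 1.950, 2.173            # Angstroms
angle_c_ge_cl = math.radians(106.0)
r_cl_c = math.sqrt(r_ge_c ** 2 + r_ge_cl ** 2
                   - 2.0 * r_ge_c * r_ge_cl * math.cos(angle_c_ge_cl))
print(round(r_cl_c, 3))  # about 3.3 Angstroms
```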

Relevance: 100.00%

Publisher:

Abstract:

Higher-order cumulant analysis is applied to the blind equalization of linear time-invariant (LTI) nonminimum-phase channels. The channel model is moving-average based. To identify the moving-average parameters of channels, a higher-order cumulant fitting approach is adopted, in which a novel relay algorithm is proposed to obtain the global solution. In addition, the technique incorporates model-order determination. The transmitted data are considered to be independent and identically distributed random variables over some discrete finite set (e.g., the set {±1, ±3}). A transformation scheme is suggested so that third-order cumulant analysis can be applied to this type of data. Simulation examples verify the feasibility and potential of the algorithm. Performance is compared with that of the noncumulant-based Sato scheme in terms of steady-state MSE and convergence rate.
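A minimal sketch of a sample third-order cumulant estimator illustrates why a transformation is needed for a symmetric alphabet such as {±1, ±3}: odd-order moments of symmetric data vanish, so the third-order cumulant carries no information until the data are transformed. The sequences below are synthetic stand-ins, not the paper's simulation setup:

```python
import numpy as np

def third_order_cumulant(x, t1, t2):
    """Biased sample estimate of the third-order cumulant
    C3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] of a mean-corrected sequence."""
    x = np.asarray(x, dtype=float)
    x = x - np.mean(x)
    n = len(x)
    m = n - max(t1, t2)
    return np.sum(x[:m] * x[t1:t1 + m] * x[t2:t2 + m]) / n

rng = np.random.default_rng(2)
symmetric = rng.choice([-3.0, -1.0, 1.0, 3.0], size=50000)  # symmetric alphabet
skewed = rng.exponential(size=50000)                        # asymmetric stand-in
c3_sym = third_order_cumulant(symmetric, 0, 0)   # near zero for symmetric data
c3_skw = third_order_cumulant(skewed, 0, 0)      # clearly nonzero (about 2)
print(c3_sym, c3_skw)
```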

Relevance: 100.00%

Publisher:

Abstract:

The aim of this work was to study the survival of Lactobacillus plantarum NCIMB 8826 in model solutions and to develop a mathematical model describing its dependence on pH, citric acid and ascorbic acid. A Central Composite Design (CCD) was developed, studying each of the three factors at five levels within the following ranges: pH (3.0-4.2), citric acid (6-40 g/L), and ascorbic acid (100-1000 mg/L). In total, 17 experimental runs were carried out. The initial cell concentration in the model solutions was approximately 1 × 10^8 CFU/mL; the solutions were stored at 4°C for 6 weeks. Analysis of variance (ANOVA) of the stepwise regression demonstrated that a second-order polynomial model fits the data well. The results demonstrated that high pH and citric acid concentration enhanced cell survival; on the other hand, ascorbic acid did not have an effect. Cell survival during storage was also investigated in various types of juice, including orange, grapefruit, blackcurrant, pineapple, pomegranate, cranberry and lemon juice. The model predicted cell survival well in orange, blackcurrant and pineapple juice; however, it failed to predict cell survival in grapefruit and pomegranate juice, indicating the influence of additional factors, besides pH and citric acid, on cell survival. Very good cell survival (less than a 0.4 log decrease) was observed after 6 weeks of storage in orange, blackcurrant and pineapple juice, all of which had a pH of about 3.8. Cell survival in cranberry and pomegranate juice decreased very quickly, whereas in lemon juice the cell concentration decreased by approximately 1.1 logs after 6 weeks of storage, despite the fact that lemon juice had the lowest pH (about 2.5) of all the juices tested.
Taking into account the results of the compositional analysis of the juices and the model, it was deduced that in certain juices other compounds, likely proteins and dietary fibre, seemed to protect the cells during storage. In contrast, in certain juices, such as pomegranate, cell survival was much lower than expected; this could be due to the presence of antimicrobial compounds, such as phenolic compounds.
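The second-order polynomial (response surface) fit underlying such a model can be sketched with ordinary least squares on quadratic features. The 17 runs below use coded factor levels standing in for pH, citric acid and ascorbic acid, and the response values are made up purely to exercise the fit, not the paper's survival data:

```python
import numpy as np

def quadratic_features(X):
    """Design matrix for a full second-order polynomial in the columns of X:
    intercept, linear, squared and pairwise-interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# 17 synthetic runs in coded factor levels (pH, citric acid, ascorbic acid)
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(17, 3))
# made-up quadratic response, used only to check that the fit recovers it
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 0] ** 2 \
    + 0.3 * X[:, 0] * X[:, 1]
F = quadratic_features(X)
beta, *_ = np.linalg.lstsq(F, y, rcond=None)
resid = y - F @ beta
print(abs(resid).max())  # an exact quadratic is recovered, so residuals are tiny
```

Stepwise regression, as in the paper, would then drop the terms whose coefficients are not significant.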

Relevance: 100.00%

Publisher:

Abstract:

Feedback design for a second-order control system leads to an eigenstructure assignment problem for a quadratic matrix polynomial. It is desirable that the feedback controller not only assign specified eigenvalues to the second-order closed-loop system but also render the system robust, or insensitive to perturbations. We derive here new sensitivity measures, or condition numbers, for the eigenvalues of the quadratic matrix polynomial and define a measure of the robustness of the corresponding system. We then show that robustness of the quadratic inverse eigenvalue problem can be achieved by solving a generalized linear eigenvalue assignment problem subject to structured perturbations. Numerically reliable methods for solving the structured generalized linear problem are developed that take advantage of the special properties of the system in order to minimize the computational work required. In this part of the work we treat the case where the leading coefficient matrix in the quadratic polynomial is nonsingular, which ensures that the polynomial is regular. In a second part, we will examine the case where the open-loop matrix polynomial is not necessarily regular.
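When the leading coefficient matrix is nonsingular, the eigenvalues of the quadratic matrix polynomial can be obtained from a standard companion linearization. A minimal sketch of that generic linearization, not the structured method developed in the paper:

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Eigenvalues of Q(lam) = lam^2 M + lam C + K via the standard
    companion linearization, valid when M is nonsingular (the case
    treated in this part of the work)."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    # generalized problem A v = lam B v; with M invertible, reduce to
    # a standard eigenvalue problem
    return np.linalg.eigvals(np.linalg.solve(B, A))

# scalar sanity check: lam^2 + 4 = 0 has roots +/- 2i
lams = quadratic_eigenvalues(np.array([[1.0]]), np.array([[0.0]]),
                             np.array([[4.0]]))
print(sorted(l.imag for l in lams))
```

Eigenvalue assignment then amounts to choosing the feedback so that this linearized pencil has the prescribed spectrum.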

Relevance: 100.00%

Publisher:

Abstract:

The temporal variability of the atmosphere through which radio waves pass in the technique of differential radar interferometry can seriously limit the accuracy with which the method measures surface motion. A forward, nested mesoscale model of the atmosphere can be used to simulate the variable water content along the radar path and the resultant phase delays. Using this approach, we demonstrate how to correct an interferogram of Mount Etna in Sicily associated with an eruption in 2004-2005. The regional mesoscale model (Unified Model) used to simulate the atmosphere at higher resolutions consists of four nested domains of increasing resolution (12, 4, 1 and 0.3 km), sitting within the analysis version of a global numerical model that is used to initialize the simulation. From the high-resolution 3D model output we compute the surface pressure, temperature, and the water vapour, liquid water and solid water contents, enabling the dominant hydrostatic and wet delays to be calculated at the specific times corresponding to the acquisition of the radar data. We can also simulate the second-order delay effects due to liquid water and ice.
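The dominant hydrostatic component of the zenith path delay can be computed from surface pressure alone. A sketch using the standard Saastamoinen formula, a common choice for this term rather than necessarily the one used in the study:

```python
import math

def zenith_hydrostatic_delay(p_hpa, lat_deg, h_m):
    """Saastamoinen zenith hydrostatic delay in metres from surface
    pressure (hPa), latitude (degrees) and height (m). A standard
    formula, shown only to illustrate the hydrostatic delay term."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) \
        - 0.00028e-3 * h_m
    return 0.0022768 * p_hpa / f

# sea-level standard pressure at roughly Mount Etna's latitude
print(round(zenith_hydrostatic_delay(1013.25, 37.75, 0.0), 3))  # ~2.3 m
```

The wet delay, by contrast, depends on the water vapour profile along the path, which is why the nested mesoscale simulation is needed.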

Relevance: 100.00%

Publisher:

Abstract:

The detection of long-range dependence in time series analysis is an important task, to which this paper contributes by showing that, whilst the theoretical definition of a long-memory (or long-range dependent) process is based on the autocorrelation function, long memory cannot be identified using the sum of the sample autocorrelations as usually defined. The reason is that the sample sum is a predetermined constant for any stationary time series, a result that is independent of the sample size. Diagnostic or estimation procedures, such as those in the frequency domain, that embed this sum are equally open to this criticism. We develop this result in the context of long memory, extending it to the implications for the spectral density function and the variance of partial sums of a stationary stochastic process. The results are further extended to higher-order sample autocorrelations and the bispectral density. The corresponding result is that the sum of the third-order sample (auto)bicorrelations at lags h, k ≥ 1 is also a predetermined constant, different from that in the second-order case, for any stationary time series of arbitrary length.
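The predetermined-constant result is easy to verify numerically: with the usual mean-corrected, divisor-n estimator, the sample autocorrelations of any series of length n sum to exactly -1/2, because the mean-corrected deviations sum to zero. A minimal sketch:

```python
import numpy as np

def sum_of_sample_autocorrelations(x):
    """Sum of the sample autocorrelations r(h), h = 1..n-1, using the
    usual mean-corrected estimator with divisor n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - np.mean(x)
    c0 = np.sum(d * d) / n
    return sum(np.sum(d[:n - h] * d[h:]) / n / c0 for h in range(1, n))

# the sum is -1/2 for ANY series, whatever its dependence structure
rng = np.random.default_rng(4)
s_white = sum_of_sample_autocorrelations(rng.normal(size=200))
s_walk = sum_of_sample_autocorrelations(np.cumsum(rng.normal(size=200)))
print(s_white, s_walk)  # both -0.5 (up to rounding)
```

White noise and a random walk, with completely different dependence structures, give the same sum, which is the paper's point.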

Relevance: 100.00%

Publisher:

Abstract:

The concept of slow vortical dynamics and its role in theoretical understanding is central to geophysical fluid dynamics. It leads, for example, to “potential vorticity thinking” (Hoskins et al. 1985). Mathematically, one imagines an invariant manifold within the phase space of solutions, called the slow manifold (Leith 1980; Lorenz 1980), to which the dynamics are constrained. Whether this slow manifold truly exists has been a major subject of inquiry over the past 20 years. It has become clear that an exact slow manifold is an exceptional case, restricted to steady or perhaps temporally periodic flows (Warn 1997). Thus the concept of a “fuzzy slow manifold” (Warn and Ménard 1986) has been suggested. The idea is that nearly slow dynamics will occur in a stochastic layer about the putative slow manifold. The natural question then is, how thick is this layer? In a recent paper, Ford et al. (2000) argue that Lighthill emission, the spontaneous emission of freely propagating acoustic waves by unsteady vortical flows, is applicable to the problem of balance, with the Mach number Ma replaced by the Froude number F, and that it is a fundamental mechanism for this fuzziness. They consider the rotating shallow-water equations and find emission of inertia–gravity waves at O(F²). This is rather surprising at first sight, because several studies of balanced dynamics with the rotating shallow-water equations have gone beyond second order in F and found only an exponentially small unbalanced component (Warn and Ménard 1986; Lorenz and Krishnamurthy 1987; Bokhove and Shepherd 1996; Wirosoetisno and Shepherd 2000). We have no technical objection to the analysis of Ford et al. (2000), but wish to point out that it depends crucially on R ≫ 1, where R is the Rossby number. This condition requires the ratio of the characteristic length scale of the flow L to the Rossby deformation radius LR to go to zero in the limit F → 0.
This is the low Froude number scaling of Charney (1963), which, while originally designed for the Tropics, has been argued to be also relevant to mesoscale dynamics (Riley et al. 1981). If L/LR is fixed, however, then F → 0 implies R → 0, which is the standard quasigeostrophic scaling of Charney (1948; see, e.g., Pedlosky 1987). In this limit there is reason to expect the fuzziness of the slow manifold to be “exponentially thin,” and balance to be much more accurate than is consistent with (algebraic) Lighthill emission.

Relevance: 100.00%

Publisher:

Abstract:

Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiarray or tensor datasets. In dealing with such datasets, standard practice is to apply subspace clustering to vectorized multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that the proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
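The multinomial-manifold step can be illustrated with a standard retraction on that manifold (strictly positive vectors summing to one). This sketch shows only the geometry such an optimizer moves along, not the paper's trust-region update:

```python
import numpy as np

def multinomial_retraction(x, v):
    """Retraction on the multinomial manifold (strictly positive vectors
    summing to one): step along tangent v, then renormalise. A sketch of
    the geometry only, not the paper's trust-region update."""
    y = x * np.exp(v / x)
    return y / np.sum(y)

x = np.array([0.2, 0.3, 0.5])      # a point in the simplex interior
v = np.array([0.1, -0.1, 0.0])     # a tangent direction (entries sum to 0)
y = multinomial_retraction(x, v)
print(y.sum(), (y > 0).all())      # the result stays on the simplex
```

A Riemannian trust-region method repeatedly solves a local model problem in the tangent space and applies such a retraction to map the step back onto the manifold.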