989 results for "Covariance matrix decomposition"
Abstract:
We present a novel approach to calculating Low-Energy Electron Diffraction (LEED) intensities for ordered molecular adsorbates. First, the intra-molecular multiple scattering is computed to obtain a non-diagonal molecular T-matrix. This is then used to represent the entire molecule as a single scattering object in a conventional LEED calculation, where the Layer Doubling technique is applied to assemble the different layers, including the molecular ones. A detailed comparison with conventional layer-type LEED calculations is provided to ascertain the accuracy of this scheme of calculation. Advantages of this scheme for problems involving ordered arrays of molecules adsorbed on surfaces are discussed.
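For context, a generic statement of the Layer Doubling recursion referred to above (standard scattering-matrix composition; the notation is ours, not the paper's): writing R_A, T_A for the plane-wave reflection and transmission matrices of a stack A for waves incident from the vacuum side, and R_A', T_A' for waves incident from the crystal side, two stacks A and B combine as

```latex
R_{AB} = R_A + T_A'\, R_B \bigl(I - R_A' R_B\bigr)^{-1} T_A, \qquad
T_{AB} = T_B \bigl(I - R_A' R_B\bigr)^{-1} T_A,
```

where the matrix inverse sums all multiple reflections between the two stacks (interlayer plane-wave propagators are assumed absorbed into the matrices). In the scheme above, the molecule enters this recursion as a single layer whose scattering is encoded in the non-diagonal molecular T-matrix.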
Abstract:
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
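For readers unfamiliar with the concept, the standard definition of an invariant pair (our notation, consistent with the literature rather than quoted from the paper) is: for a matrix polynomial of degree d,

```latex
P(\lambda) = \sum_{i=0}^{d} \lambda^{i} A_i , \qquad A_i \in \mathbb{C}^{n \times n},
```

a pair (X, S) with X in C^{n x k} and S in C^{k x k} is called invariant if

```latex
\sum_{i=0}^{d} A_i\, X S^{i} = 0 ,
```

together with a minimality (rank) condition that rules out degenerate choices of X. For k = 1 this reduces to an ordinary eigenpair P(λ)x = 0, which is the sense in which invariant pairs generalize eigenvectors.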
Abstract:
Flight necessitates that the feather rachis is extremely tough and light. Yet the crucial filamentous hierarchy of the rachis is unknown, its study hindered by the tight chemical bonding between the filaments and matrix. We used novel microbial biodegradation to delineate the fibres of the rachidial cortex in situ. This revealed the thickest keratin filaments known to date (by a factor of more than 10), approximately 6 µm thick, extending predominantly axially but with a small outer circumferential component. Near-periodic thickened nodes of the fibres are staggered with those in adjacent fibres in two- and three-dimensional planes, creating a fibre–matrix texture well suited to crack stopping and resistance to transverse cutting. Close association of the fibre layer with the underlying ‘spongy’ medulloid pith indicates the potential for higher buckling loads and greater elastic recoil. Strikingly, the fibres are similar in dimensions and form to the free filaments of the feather vane and of plumulaceous and embryonic down, the syncitial barbules, but are identified here, for the first time in more than 140 years of study, in a new location: as a major structural component of the rachis. Early in feather evolution, syncitial barbules were consolidated in a robust central rachis, definitively characterizing the avian lineage of keratin.
Abstract:
Objectives: Our objective was to test the performance of CA125 in classifying serum samples from a cohort of women with malignant and benign ovarian neoplasms and age-matched healthy controls, and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Materials and Methods: Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values, and their ability to classify blinded test samples was tested. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasms, and 2236 postmenopausal healthy controls. Results: Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were also obtained for early-stage cancers (93.5% sensitivity). Reasons for the high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications on the basis of CA125 alone. Conclusions: We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding contrasts with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
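To make the threshold classification concrete, a minimal sketch (the function name and the simulated CA125 values are illustrative only; the cutoffs of 30 and 65 U/mL are those quoted above):

```python
import numpy as np

def threshold_performance(ca125, is_cancer, cutoff):
    """Sensitivity and specificity of a simple CA125 threshold classifier.

    ca125     : serum CA125 values (U/mL)
    is_cancer : True for confirmed invasive ovarian cancer, False for controls
    cutoff    : classify as positive when CA125 >= cutoff
    """
    predicted = np.asarray(ca125) >= cutoff
    truth = np.asarray(is_cancer, dtype=bool)
    sensitivity = predicted[truth].mean()      # true positives / all cancers
    specificity = (~predicted[~truth]).mean()  # true negatives / all controls
    return sensitivity, specificity

# Simulated data for illustration only; the performance figures that matter
# are the ones reported in the abstract, not these.
rng = np.random.default_rng(0)
ca125 = np.concatenate([rng.lognormal(5.0, 1.0, 200), rng.lognormal(2.5, 0.6, 2000)])
labels = np.concatenate([np.ones(200, bool), np.zeros(2000, bool)])
print(threshold_performance(ca125, labels, cutoff=30))
print(threshold_performance(ca125, labels, cutoff=65))
```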
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, partition of unity, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
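A sketch of the basic building blocks (a univariate Bernstein basis plus conventional least-squares weight estimation; the additive decomposition, the bivariate construction, and the Delaunay partition of the input space are not shown, and the target function here is arbitrary):

```python
import numpy as np
from math import comb

def bernstein_basis(x, n):
    """Degree-n Bernstein basis evaluated at points x in [0, 1].

    The basis functions are nonnegative and sum to one at every x
    (partition of unity), which is what allows them to be read as
    fuzzy membership functions."""
    x = np.asarray(x, dtype=float)[:, None]
    i = np.arange(n + 1)
    coeff = np.array([comb(n, k) for k in i], dtype=float)
    return coeff * x**i * (1.0 - x)**(n - i)

# Least-squares estimation of the network weights for a scalar input.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)                        # arbitrary target
Phi = bernstein_basis(x, n=8)                      # design matrix, 200 x 9
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # conventional least squares
y_hat = Phi @ w                                    # model output
```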
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
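To illustrate in generic terms what iterating on "simplified problems in linear quadratic form" means (the notation below is ours and does not reproduce the paper's co-ordination scheme): at iteration i, the nonlinear problem with a discrete performance index and continuous dynamics is replaced by a subproblem with quadratic cost and linearized dynamics about the previous trajectory (x^{i-1}, u^{i-1}),

```latex
\min_{\delta u}\; \sum_{k} \Bigl[
  \tfrac{1}{2}\,\delta x_k^{\mathsf T} Q_k\, \delta x_k
+ \tfrac{1}{2}\,\delta u_k^{\mathsf T} R_k\, \delta u_k \Bigr]
\quad \text{s.t.} \quad
\delta\dot{x}(t) = A(t)\,\delta x(t) + B(t)\,\delta u(t),
```

with δx = x − x^{i−1}, δu = u − u^{i−1}, and A, B, Q_k, R_k evaluated along the previous iterate. The solution updates the trajectory and the cycle repeats, with the co-ordinator correcting the sub-system solutions so that the iterates converge to the optimum of the original nonlinear problem rather than of the simplified one.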
Abstract:
This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrix which is fundamental to the methodology; we call these matrices L matrices. They may be deterministic, parametric or data specific in nature. The target moments determine the L matrix; infinitely many random samples with the same exact moments may then be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed “ROM simulation”. Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are resolved using standard Monte Carlo methods, but no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
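A minimal sketch in the spirit of ROM simulation, matching the first two sample moments exactly (matching skewness and kurtosis as well requires the special L matrices introduced in the paper; the function name and construction here are illustrative, not the paper's algorithm):

```python
import numpy as np

def exact_mean_cov_sample(mu, cov, m, seed=None):
    """Draw an m x n sample whose sample mean and sample covariance equal
    mu and cov exactly, by combining a random orthonormal factor with a
    fixed square root of the target covariance."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    W = rng.standard_normal((m, n))
    W -= W.mean(axis=0)                        # columns orthogonal to the ones vector
    Q, _ = np.linalg.qr(W)                     # orthonormal columns, still zero-mean
    Q *= np.sqrt(m - 1)                        # so that Q.T @ Q = (m - 1) * I
    L = np.linalg.cholesky(np.asarray(cov))    # any factor with L @ L.T = cov will do
    return np.asarray(mu) + Q @ L.T

X = exact_mean_cov_sample([0.0, 0.1], [[1.0, 0.3], [0.3, 2.0]], m=500, seed=1)
# X.mean(axis=0) reproduces mu and np.cov(X, rowvar=False) reproduces cov
# exactly (up to rounding); each new seed gives a different sample with the
# same moments, which is the property exploited for value-at-risk estimation.
```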
Abstract:
The coadsorption of water with organic molecules under near-ambient pressure and temperature conditions opens up new reaction pathways on model catalyst surfaces that are not accessible in conventional ultrahigh-vacuum surface-science experiments. The surface chemistry of glycine and alanine at the water-exposed Cu{110} interface was studied in situ using ambient-pressure photoemission and X-ray absorption spectroscopy techniques. At water pressures above 10⁻⁵ Torr a significant pressure-dependent decrease in the temperature for dissociative desorption was observed for both amino acids, accompanied by the appearance of a new CN intermediate, which is not observed for lower pressures. The most likely reaction mechanisms involve dehydrogenation induced by O and/or OH surface species resulting from the dissociative adsorption of water. The linear relationship between the inverse decomposition temperature and the logarithm of water pressure enables determination of the activation energy for the surface reaction, between 213 and 232 kJ/mol, and a prediction of the decomposition temperature at the solid–liquid interface by extrapolating toward the equilibrium vapor pressure. Such experiments near the equilibrium vapor pressure provide important information about elementary surface processes at the solid–liquid interface, which can be retrieved neither under ultrahigh-vacuum conditions nor from interfaces immersed in a solution.
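The stated linear relationship can be read (our notation, stated as an assumption about the form of the analysis rather than quoted from it) as a fit

```latex
\ln p = a + \frac{b}{T_{\mathrm{dec}}}, \qquad E_{\mathrm{a}} \simeq R\,\lvert b \rvert ,
```

so that the activation energy follows from the slope of ln p against 1/T_dec, and the decomposition temperature at the solid–liquid interface is predicted by evaluating the fitted line at the equilibrium vapor pressure of water.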
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of allowing an estimate of the comparative violence of the explosion event to be made. The non-mechanical methods used were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over 1 metre. The mechanical methods used included monitoring piezo-electric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles the original Hopkinson Bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a “detonation” pressure.
Abstract:
A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time-to-explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol−1 and 146 kJ mol−1. There were, however, large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
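A commonly used relation in thermal-ignition analysis of time-to-explosion data (given here in generic form; the paper's exact expression may differ) is

```latex
t_{\mathrm{exp}} \;\propto\; \frac{1}{Z}\,\exp\!\left(\frac{E_{\mathrm{a}}}{R\,T}\right)
\quad\Longrightarrow\quad
\ln t_{\mathrm{exp}} = \mathrm{const} - \ln Z + \frac{E_{\mathrm{a}}}{R\,T},
```

so a plot of ln t_exp against 1/T yields the activation energy E_a from the slope and the pre-exponential factor Z from the intercept; with E_a confined to the narrow range 127 to 146 kJ mol−1 for all compositions, differences in Z dominate the differences in time to explosion, as stated above.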
Abstract:
Review and critical reflection on 'The Matrix', in relation to questions of genre, aesthetics, representation, and cultural and industrial contexts.