946 results for Fundamentals in linear algebra


Relevance: 100.00%

Publisher:

Abstract:

The tribology of linear tape storage systems, including Linear Tape Open (LTO) and Travan5, was investigated by combining X-ray Photoelectron Spectroscopy (XPS), Auger Electron Spectroscopy (AES), Optical Microscopy and Atomic Force Microscopy (AFM). The purpose of this study was to understand the tribological mechanisms of linear tape systems so that projected recording densities may be achieved in future systems. Water vapour pressure, expressed as Normalized Water Content (NWC), rather than the Relative Humidity (RH) values used almost universally in this field, determined the extent of pole tip recession (PTR) and stain (if produced) in linear heads. Saturated PTR was found to increase approximately linearly with normalized water content over the range studied for the same tape. Fe stain (if produced) formed preferentially on the head surfaces at the lower water contents. The stain formation mechanism was identified: adhesive bond formation is a chemical process governed by temperature, so the higher the contact pressure, the higher the contact temperature at the head-tape interface, and hence the higher the probability of adhesive bond formation and the greater the amount of transferred material (stain). Water molecules at the interface saturate the surface bonds and make adhesive junctions less likely. The tape's polymeric binder formulation also plays a significant role in stain formation, with the latest generation of binders producing less transfer of material, almost certainly because of stronger cohesive bonds within the body of the magnetic layer. TiC in the two-phase ceramic tape-bearing surface (AlTiC) was found to oxidise to TiO2, and the oxidation rate of TiC increased with increasing water content. The oxide was less dense than the underlying carbide, so the interface between the TiO2 oxide and the TiC was stressed. Removal of the oxide phase produces three-body abrasive particles that are swept across the tape head and give rise to three-body abrasive wear, particularly in the pole regions, and hence to PTR with subsequent signal loss and error growth. The lower contact pressure of the LTO system compared with the Travan5 system ensures that fewer and smaller three-body abrasive particles are swept across the poles and insulator regions. Hence lower contact pressure reduces stain and at the same time significantly reduces PTR in the LTO system.

Relevance: 100.00%

Publisher:

Abstract:

We propose two algorithms involving the relaxation of either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the case of the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed method.
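
As a rough sketch of the alternating structure described above (not the BEM implementation used in the paper), the Python outline below assumes two hypothetical placeholder solvers, solve_mixed_1 and solve_mixed_2, for the mixed boundary value problems; only the relaxation and stopping logic is spelled out.

```python
import numpy as np

def alternating_with_relaxation(solve_mixed_1, solve_mixed_2, t0,
                                omega=0.7, tol=1e-6, max_iter=200):
    """Hedged sketch of a Kozlov-type alternating iteration with relaxation
    of the reconstructed Neumann data on the under-specified boundary.

    solve_mixed_1(t_guess) -> u_gamma2 : solves the mixed problem using the
        known Dirichlet data on the over-specified boundary and the current
        Neumann guess on the under-specified boundary, returning the
        computed displacement there.
    solve_mixed_2(u_gamma2) -> t_new   : solves the complementary mixed
        problem and returns the updated traction on the under-specified
        boundary.
    Both solvers are placeholders (e.g. wrappers around a BEM code).
    """
    t_old = np.asarray(t0, dtype=float)
    for k in range(max_iter):
        u_gamma2 = solve_mixed_1(t_old)
        t_computed = solve_mixed_2(u_gamma2)
        # Relaxation step: blend the newly computed data with the previous
        # iterate; omega = 1 recovers the unrelaxed algorithm.
        t_new = omega * t_computed + (1.0 - omega) * t_old
        # Simple stopping criterion on successive iterates (the paper ties
        # its criterion to the noise level in the Cauchy data).
        if np.linalg.norm(t_new - t_old) <= tol * max(np.linalg.norm(t_new), 1.0):
            return t_new, k + 1
        t_old = t_new
    return t_old, max_iter
```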

Relevance: 100.00%

Publisher:

Abstract:

We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as the solution of integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with the resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We show theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare it to the complexity of the analogous Monte Carlo and deterministic algorithms.
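
A minimal illustration of the rate difference referred to above, using a toy integral in place of the integral-equation reformulations of the linear algebra problems; it assumes SciPy's Sobol generator and is not the authors' algorithm.

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand on [0,1]^5 with known integral (equal to 1), standing in for
# the kernels that arise when a linear-algebra problem is recast as an
# integral equation.
d = 5
f = lambda x: np.prod(3.0 * x**2, axis=1)

rng = np.random.default_rng(0)
for n in [2**10, 2**14]:
    mc_est = f(rng.random((n, d))).mean()                      # pseudo-random: O(N^(-1/2))
    sobol = qmc.Sobol(d=d, scramble=True, seed=0).random(n)    # quasirandom sequence
    qmc_est = f(sobol).mean()                                  # ~O((log N)^d / N)
    print(n, abs(mc_est - 1.0), abs(qmc_est - 1.0))
```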

Relevance: 100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 94B05, 94B15.

Relevance: 100.00%

Publisher:

Abstract:

MSC 2010: 46F30, 46F10

Relevance: 100.00%

Publisher:

Abstract:

Success in mathematics has been identified as a predictor of baccalaureate degree completion. Within the coursework of college mathematics, College Algebra has been identified as a high-risk course due to its low success rates.

Research in the field of attribution theory and academic achievement suggests a relationship between a student's attributional style and achievement. Theorists and researchers contend that attributions influence individual reactions to success and failure. They also report that individuals use attributions to explain and justify their performance. Studies in mathematics education identify attribution theory as the theoretical orientation most suited to explain academic performance in mathematics. This study focused on the relationship among a high-risk course, low success rates, and attribution by examining the difference in the attributions passing and failing students gave for their performance in College Algebra.

The methods for the study included a pilot administration of the Causal Dimension Scale (CDSII), which was used to conduct reliability and principal component analyses. Then, students (n = 410) self-reported their performance on an in-class test and attributed their performance along the dimensions of locus of causality, stability, personal controllability, and external controllability. They also provided open-ended attribution statements to explain the cause of their performance. The quantitative data compared the passing and failing groups and their attributions for performance on a test using one-way ANOVA and Pearson chi-square procedures. The open-ended attribution statements were coded in relation to ability, effort, task difficulty, and luck and compared using a Pearson chi-square procedure.

The results of the quantitative data comparing passing and failing groups and their attributions along the dimensions measured by the CDSII indicated statistical significance in locus of causality, stability, and personal controllability. The results comparing the open-ended attribution statements indicated statistical significance in the categories of effort and task difficulty.
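
For readers unfamiliar with the two procedures named above, here is a minimal sketch; the group sizes, scores and cell counts are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical attribution scores (e.g. a locus-of-causality dimension from
# the CDSII) for passing and failing students.
passing = rng.normal(loc=7.0, scale=1.5, size=250)
failing = rng.normal(loc=5.8, scale=1.7, size=160)

# One-way ANOVA comparing mean attribution scores across the two groups.
f_stat, p_anova = stats.f_oneway(passing, failing)

# Pearson chi-square on coded open-ended attributions
# (rows: pass/fail; columns: ability, effort, task difficulty, luck).
table = np.array([[60, 110, 55, 25],
                  [30,  45, 65, 20]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")
```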

Relevance: 100.00%

Publisher:

Abstract:

Includes index.

Relevance: 100.00%

Publisher:

Abstract:

The main topic of this thesis is confounding in linear regression models. It arises when a relationship between an observed process (the covariate) and an outcome process (the response) is influenced by an unmeasured process (the confounder) associated with both. Consequently, the estimators for the regression coefficients of the measured covariates might be severely biased, less efficient and prone to misleading interpretations. Confounding is an issue when the primary target of the work is the estimation of the regression parameters. The central point of the dissertation is the evaluation of the sampling properties of parameter estimators. This work aims to extend the spatial confounding framework to general structured settings and to understand the behaviour of confounding as a function of the structure parameters of the data-generating process in several scenarios, focusing on the joint covariate-confounder structure. In line with the spatial statistics literature, our purpose is to quantify the sampling properties of the regression coefficient estimators and, in turn, to identify the most prominent quantities of the generative mechanism impacting confounding. Once the sampling properties of the estimator, conditionally on the covariate process, are derived as ratios of dependent quadratic forms in Gaussian random variables, we provide an analytic expression for the marginal sampling properties of the estimator using Carlson's R function. Additionally, as a representative quantity for the magnitude of confounding, we propose a proxy for the bias: its first-order Laplace approximation. To conclude, we work under several frameworks considering spatial and temporal data, with specific assumptions regarding the covariance and cross-covariance functions used to generate the processes involved. This study allows us to claim that the variability of the confounder-covariate interaction and of the covariate plays the most relevant role in determining the principal marker of the magnitude of confounding.
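
A minimal simulation of the underlying mechanism, assuming a single scalar covariate and an omitted Gaussian confounder; it illustrates only classical omitted-variable bias, not the spatial/temporal framework or the Carlson R-function results developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 2000, 1.0

# Unmeasured confounder z drives both the covariate x and the response y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(scale=0.6, size=n)            # covariate-confounder association
y = beta * x + 1.5 * z + rng.normal(scale=1.0, size=n)

# OLS that omits z: the estimate of beta absorbs part of z's effect.
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Classical omitted-variable bias: gamma * Cov(x, z) / Var(x), gamma = 1.5.
bias_theory = 1.5 * np.cov(x, z)[0, 1] / np.var(x, ddof=1)
print(f"true beta = {beta}, OLS estimate = {beta_hat:.3f}, "
      f"predicted bias = {bias_theory:.3f}")
```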

Relevance: 100.00%

Publisher:

Abstract:

Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package Maple®.
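
The note works in Maple; as a rough open-source counterpart, the same two computations can be sketched with SymPy (assumed here purely for illustration, not the note's worksheet):

```python
import sympy as sp

# A non-diagonalizable matrix: eigenvalue 2 with algebraic multiplicity 3
# but geometric multiplicity 2, so a Jordan block of size 2 appears.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])

P, J = A.jordan_form()        # A = P * J * P**(-1)
sp.pprint(J)                  # Jordan normal form of A
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(3, 3)

# For comparison, a diagonalizable matrix:
B = sp.Matrix([[2, 0],
               [0, 3]])
Pd, D = B.diagonalize()       # B = Pd * D * Pd**(-1)
sp.pprint(D)
```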

Relevance: 100.00%

Publisher:

Abstract:

Many natural and technological applications generate time-ordered sequences of networks defined over a fixed set of nodes; for example, time-stamped information about ‘who phoned whom’ or ‘who came into contact with whom’ arises naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time’s arrow is captured naturally through the non-commutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
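
A minimal sketch of one such time-respecting centrality, a Katz-like product of matrix resolvents taken in snapshot order, applied to the three-node morning/afternoon example from the abstract (the downweighting parameter a = 0.3 is an arbitrary choice):

```python
import numpy as np

# Ordered sequence of adjacency matrices over nodes A, B, C:
# morning: A-B interact; afternoon: B-C interact.
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)

a = 0.3  # Katz-like parameter, chosen below 1 / spectral radius of each snapshot

# The product of resolvents respects time ordering: walks must use edges in
# the order the snapshots occur, so the matrix factors do not commute.
I = np.eye(3)
Q = np.linalg.inv(I - a * A1) @ np.linalg.inv(I - a * A2)

broadcast = Q.sum(axis=1)   # ability of each node to send information forward in time
receive = Q.sum(axis=0)     # ability of each node to receive information

print("broadcast:", np.round(broadcast, 3))
print("receive:  ", np.round(receive, 3))
# Q[0, 2] > 0 while Q[2, 0] == 0: A can reach C via B, but not vice versa,
# so time's arrow shows up in the asymmetry of Q.
print(Q[0, 2], Q[2, 0])
```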

Relevance: 100.00%

Publisher:

Abstract:

The concepts of rank, underdetermined systems and consistency in linear algebra are discussed in the context of a puzzle. The article begins with a specific example, moving on to a generalization of the example and then to the general n x n case. As well as providing a solution of the puzzle, the article aims to provide students with a greater understanding of these abstract ideas in linear algebra through the study of the puzzle.
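
A generic numerical illustration of these three notions (not the puzzle itself, which the abstract does not reproduce), assuming NumPy:

```python
import numpy as np

# An underdetermined system: 2 equations, 3 unknowns, with a redundant row.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # second row is a multiple of the first
b_consistent = np.array([6.0, 12.0])
b_inconsistent = np.array([6.0, 10.0])

def consistent(A, b):
    """Rouche-Capelli test: Ax = b is consistent exactly when rank(A) equals
    the rank of the augmented matrix [A | b]."""
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(np.linalg.matrix_rank(A))       # 1, fewer than 3 unknowns: underdetermined
print(consistent(A, b_consistent))    # True: infinitely many solutions
print(consistent(A, b_inconsistent))  # False: no solution
```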

Relevance: 100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Publisher:

Abstract:

The paper studies a generalisation of the dynamic Leontief input-output model. The standard dynamic Leontief model is extended with a balance equation for renewable resources: the renewable stocks increase through the regeneration of, and decrease through the exploitation of, primary natural resources. In this study the controllability of the extended model is examined, taking consumption as the control parameter. Assuming balanced growth for both consumption and production, we investigate the exhaustion of renewable resources in dependence on the balanced growth rate and on the rate of natural regeneration. In doing so, classic results from control theory and on eigenvalue problems in linear algebra are applied.
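
A rough sketch of the kind of controllability check involved, using the standard dynamic Leontief balance with consumption as the control and the Kalman rank condition; the two-sector coefficient matrices are made up, and the renewable-resource balance of the extended model is not included.

```python
import numpy as np

def kalman_rank(F, G):
    """Rank of the Kalman controllability matrix [G, FG, ..., F^(n-1)G];
    the pair (F, G) is controllable iff this rank equals n."""
    n = F.shape[0]
    blocks = [G]
    for _ in range(n - 1):
        blocks.append(F @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Illustrative 2-sector data (hypothetical coefficients, not the paper's model):
A = np.array([[0.2, 0.3],    # intermediate input coefficients
              [0.1, 0.4]])
B = np.array([[0.5, 0.1],    # capital (investment) coefficient matrix
              [0.2, 0.6]])

# Standard dynamic Leontief balance  x_t = A x_t + B (x_{t+1} - x_t) + c_t,
# rewritten as a discrete-time control system with consumption c_t as input:
#   x_{t+1} = F x_t + G c_t,  where  F = I + B^{-1}(I - A)  and  G = -B^{-1}.
Binv = np.linalg.inv(B)
F = np.eye(2) + Binv @ (np.eye(2) - A)
G = -Binv

print("controllable:", kalman_rank(F, G) == F.shape[0])
```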