941 results for Linear transformations


Relevance:

20.00%

Publisher:

Abstract:

Polydimethylsiloxane (PDMS) is the elastomer of choice for creating a variety of microfluidic devices by soft lithography techniques (e.g., [1], [2], [3], [4]). Accurate and reliable design, manufacture, and operation of microfluidic devices made from PDMS require a detailed characterization of the deformation and failure behavior of the material. This paper discusses progress in a recently initiated research project towards this goal. We have conducted large-deformation tension and compression experiments on traditional macroscale specimens, as well as microscale tension experiments on thin-film (≈ 50 µm thickness) specimens of PDMS with varying monomer:curing-agent ratios (5:1, 10:1, 20:1). We find that the stress-stretch response of these materials shows significant variability, even for nominally identically prepared specimens. A non-linear, large-deformation rubber-elasticity model [5], [6] is applied to represent the behavior of PDMS. The constitutive model has been implemented in a finite-element program [7] to aid the design of microfluidic devices made from this material. As a first step towards estimating the non-linear material parameters for PDMS from indentation experiments, we have conducted micro-indentation experiments using a spherical indenter tip and carried out corresponding numerical simulations to verify how well the numerically predicted P (load) versus h (depth of indentation) curves compare with the corresponding experimental measurements. The results are encouraging and show the possibility of estimating the material parameters for PDMS from relatively simple micro-indentation experiments and corresponding numerical simulations.
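As a rough illustration of the parameter-estimation step described above, the sketch below fits a simple incompressible rubber-elasticity model to uniaxial nominal stress-stretch data by least squares. The Gent model is used only as a stand-in for the (unspecified) constitutive model of refs. [5], [6]; the data arrays, initial guesses and function names are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch (not the authors' code): least-squares fit of a simple
# incompressible rubber-elasticity model to uniaxial nominal stress-stretch data.
# The Gent model stands in for the model of refs [5], [6]; data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def gent_nominal_stress(stretch, mu, Jm):
    """Nominal (1st Piola) stress for incompressible uniaxial tension, Gent model."""
    I1 = stretch**2 + 2.0 / stretch            # first invariant of B
    return mu * (stretch - stretch**-2) * Jm / (Jm - (I1 - 3.0))

# Hypothetical measured data: stretch ratios and nominal stresses (MPa)
stretch = np.array([1.05, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0])
stress  = np.array([0.04, 0.08, 0.15, 0.28, 0.40, 0.53, 0.68])

(mu_fit, Jm_fit), _ = curve_fit(gent_nominal_stress, stretch, stress,
                                p0=[0.5, 10.0])  # initial guesses: mu [MPa], Jm [-]
print(f"shear modulus mu ≈ {mu_fit:.3f} MPa, limiting-stretch parameter Jm ≈ {Jm_fit:.1f}")
```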

Relevance:

20.00%

Publisher:

Abstract:

Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed also follow logistic Normal distributions. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance, or a combination of the two. Practical situations include a river where the mixing proportions vary along its course, or across a lake, possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation of the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distance from the sources.
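A minimal sketch of the central construction, assuming a simple additive-logistic (softmax) link for the spatially varying mixing proportions; it is not the paper's logistic-Normal model, and all compositions, coefficients and names are hypothetical.

```python
# Minimal sketch (not the paper's model): a convex linear combination of three
# river compositions, where the mixing proportions vary with position via an
# additive-logistic (inverse-alr / softmax) link. All numbers are hypothetical.
import numpy as np

def mixing_proportions(distance, coefs):
    """Map a covariate (e.g. relative distance from each source) to mixing
    proportions on the simplex via a softmax of linear predictors."""
    eta = coefs[:, 0] + coefs[:, 1] * distance     # linear predictor per source
    w = np.exp(eta - eta.max())
    return w / w.sum()

# Hypothetical compositions of the three feeder rivers (parts sum to 1)
rivers = np.array([[0.60, 0.30, 0.10],
                   [0.20, 0.50, 0.30],
                   [0.10, 0.20, 0.70]])

coefs = np.array([[0.5, -2.0],   # intercept and distance slope for river 1
                  [0.2, -0.5],   # ... river 2
                  [0.0,  1.5]])  # ... river 3

for d in (0.0, 0.5, 1.0):                      # relative distance across the loch
    w = mixing_proportions(d, coefs)
    mixed = w @ rivers                         # convex combination of compositions
    print(d, np.round(w, 3), np.round(mixed, 3))
```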

Relevance:

20.00%

Publisher:

Abstract:

In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
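The following sketch illustrates the ilr-based estimator in the simplest 3-part case, assuming scipy's gaussian_kde (which uses a full, data-covariance-based bandwidth matrix) as a stand-in for the full-bandwidth estimator studied in the paper; the sample is simulated and the chosen basis is only one of many valid ilr bases.

```python
# Sketch under stated assumptions: kernel density estimation for 3-part
# compositions via the ilr transformation and a Gaussian kernel with a full
# bandwidth matrix. scipy's gaussian_kde stands in for the estimator of the
# paper; the data below are hypothetical.
import numpy as np
from scipy.stats import gaussian_kde

def ilr(x):
    """ilr coordinates of 3-part compositions (rows of x) w.r.t. one orthonormal basis."""
    x = np.asarray(x, dtype=float)
    z1 = np.sqrt(1.0 / 2.0) * np.log(x[:, 0] / x[:, 1])
    z2 = np.sqrt(2.0 / 3.0) * np.log(np.sqrt(x[:, 0] * x[:, 1]) / x[:, 2])
    return np.column_stack([z1, z2])

rng = np.random.default_rng(0)
# Hypothetical sample: compositions obtained by closing positive data
raw = rng.lognormal(mean=[0.0, 0.3, -0.2], sigma=0.4, size=(200, 3))
comp = raw / raw.sum(axis=1, keepdims=True)

coords = ilr(comp)                       # map the simplex into R^2
kde = gaussian_kde(coords.T)             # Gaussian kernel, full bandwidth matrix (Scott's rule)

new = np.array([[0.4, 0.35, 0.25]])
print("density in ilr coordinates:", kde(ilr(new).T))
```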

Relevance:

20.00%

Publisher:

Abstract:

Sediment composition is mainly controlled by the nature of the source rock(s), and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantification of the compositional changes induced by a single process is rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (φ grades < -1 to > 9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against φ is assumed to be explained by typical crystal-size ranges of the relevant mineral phases. These scatter plots and the biplot suggest splitting the full grain-size range into three groups: coarser than φ = 4 (comparatively rich in SiO2, Na2O, K2O and Al2O3, and dominated by "felsic" minerals such as quartz and feldspar), finer than φ = 8 (comparatively rich in TiO2, MnO, MgO and Fe2O3, mostly related to "mafic" sheet silicates such as biotite and chlorite), and intermediate grain sizes (4 ≤ φ < 8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the φ scale, a step function for φ ≥ 4, and another for φ ≥ 8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the largest characteristic size at its lower end. Results suggest that this assumption is reasonable for the step functions, but that, besides weathering, other factors (the different mechanical behavior of minerals) also contribute substantially to the trend. Key words: sediment, geochemistry, grain size, regression, step function
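A sketch of the regression design described above, under stated assumptions: compositions are expressed in clr coordinates and regressed on a linear trend in φ plus indicator (step) variables for φ ≥ 4 and φ ≥ 8 by ordinary least squares. The three-part oxide grouping and all numbers are hypothetical placeholders, not the study's data.

```python
# Sketch (not the study's code): regression of clr-transformed grain-size
# fraction compositions on a linear trend in phi plus two step functions
# (phi >= 4 and phi >= 8). The compositions below are hypothetical.
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions given as rows of x."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

phi = np.arange(-1, 10)                                  # 11 grain-size classes
# Hypothetical 3-part composition per class: (felsic oxides, CaO+P2O5, mafic oxides)
comp = np.array([[0.90, 0.06, 0.04]] * 5 +
                [[0.75, 0.18, 0.07]] * 4 +
                [[0.55, 0.15, 0.30]] * 2, dtype=float)

Y = clr(comp)                                            # responses in clr scale
X = np.column_stack([np.ones_like(phi, dtype=float),     # intercept
                     phi.astype(float),                  # trend in phi
                     (phi >= 4).astype(float),           # step at phi = 4
                     (phi >= 8).astype(float)])          # step at phi = 8

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)             # one OLS fit per clr coordinate
print("coefficient matrix (rows: intercept, trend, step4, step8):\n", np.round(beta, 3))
```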

Relevance:

20.00%

Publisher:

Abstract:

In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, a linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used together with the cosine-theta coefficient as a similarity measure. During the last decade, considerable progress has been made in compositional data analysis, and many case studies have been published using new tools for the exploratory analysis of such data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to the open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualizing the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
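An illustrative sketch of the proposed workflow, not the authors' code: clr-transform the core-sample compositions, extract two factors by singular value decomposition of the centred data (the construction underlying the compositional biplot), and inspect the factor scores against depth. The compositions and depths are simulated placeholders.

```python
# Illustrative sketch only: clr-transform core-sample compositions, extract two
# principal factors via SVD (basis of the compositional biplot), and list factor
# scores against depth. Data are hypothetical.
import numpy as np

def clr(x):
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 5.0, 30)                 # core depth in metres (proxy for time)
raw = rng.lognormal(sigma=0.3, size=(30, 5))      # 5-part geochemical compositions
comp = raw / raw.sum(axis=1, keepdims=True)

Z = clr(comp)
Zc = Z - Z.mean(axis=0)                           # column-centre before SVD
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U[:, :2] * s[:2]                         # factor scores on the first two axes
loadings = Vt[:2].T                               # clr-variable loadings (biplot rays)

for d, f in zip(depth, scores):
    print(f"depth {d:4.2f} m: factor1 = {f[0]:+.3f}, factor2 = {f[1]:+.3f}")
```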

Relevance:

20.00%

Publisher:

Abstract:

This paper shows the impact of the atomic-capabilities concept, which incorporates control-oriented knowledge of linear control systems into the decision-making structure of physical agents. These agents operate in a real environment, managing physical objects (e.g. their physical bodies) in coordinated tasks. The approach is presented using introspective reasoning and control theory, based on the specific tasks of passing a ball and executing the offside manoeuvre between physical agents in the robotic soccer testbed. Experimental results and conclusions are presented, emphasising the advantages of our approach, which improves multi-agent performance in cooperative systems.

Relevance:

20.00%

Publisher:

Abstract:

Lecture slides and notes for a PhD-level course on linear algebra for electrical engineers and computer scientists. This course is given in the framework of the School of Electronics and Computer Science Mathematics Training Courses, https://secure.ecs.soton.ac.uk/notes/pg_maths/ (ECS password required).

Relevance:

20.00%

Publisher:

Abstract:

Exercises and solutions in LaTeX

Relevance:

20.00%

Publisher:

Abstract:

Exercises and solutions in PDF

Relevance:

20.00%

Publisher:

Abstract:

In this lecture we go over the fundamentals of interactive game narratives, defining what we mean by narrative and placing games in context with other ergodic literature. We look at non-linear structures, agency, and the narrative paradox, concluding with a set of mechanisms that game designers use to manage agency in their narrative games.

Relevance:

20.00%

Publisher:

Abstract:

The aims are to analyse the log-linear model and its possibilities for application in educational research and, at the same time, to study a teaching resource, the textbook, analysing its relationship with the teacher's instructional role. The sample comprised 308 primary (EGB) teachers from the two Canary Island provinces; sampling was convenience-based and the design was ex post facto. A questionnaire was piloted on a small sample and reviewed by a panel of experts; after the appropriate modifications, it was administered in its final form with the help of several field interviewers for distribution and collection. The main variables were, for the teacher: years of experience, degree of dependence on the textbook, and teaching cycle; for teaching resources: frequency of use, reasons for use, didactic purpose, and the dimensions most valued for teaching. The instrument was the questionnaire 'Use of media in teaching'. Considered globally, the teaching staff shows no majority tendency towards either dependence on or independence from the textbook. Veteran teachers tend to be dependent on the textbook, while the remaining teachers lean towards neither dependence nor independence. The relationship between the textbook and the official curricula is valued only by middle-cycle teachers. Within the methodological dimension of the curriculum, the aspect given most importance is the set of activities proposed by the textbook, followed by the methodological approach set out in the teacher's guide. The most valued dimension overall is the suitability of the textbook for the pupils' level of knowledge, followed in importance by the language used and, finally, by the formal aspects of the text (colour, size, illustrations, etc.). The textbook emerges as a resource intended essentially for pupils' use; for the teacher, its use is limited to supporting explanations, whereas motivating and assessing learning appear incompatible with textbook use. Log-linear analysis is a powerful tool for analysing nominal variables, with a degree of statistical sophistication until now available only for continuous variables, and the abundance of nominal variables in educational research makes it especially appropriate for our field. The advantages of log-linear analysis depend on the nature of the variables, the minimum number of categories if continuous data are included, cut-off points, sampling strategies, and so on. Clear criteria are still lacking regarding sample size and the interpretation of the strength of the parameters, and no graphical representation system has yet been developed for this analysis technique.
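As a hedged illustration of the technique discussed above (not the thesis's actual analysis), the sketch below fits a log-linear independence model to a hypothetical two-way table of teacher experience by degree of textbook dependence, treating the cell counts as Poisson and using statsmodels; all counts and category labels are invented.

```python
# Hedged illustration: a log-linear (Poisson) model for a two-way contingency
# table of teacher experience by degree of textbook dependence. Counts are
# hypothetical, not data from the thesis.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

table = pd.DataFrame({
    "experience": ["novice", "novice", "novice", "veteran", "veteran", "veteran"],
    "dependence": ["low", "medium", "high", "low", "medium", "high"],
    "count":      [40, 55, 25, 20, 48, 60],
})

# Independence model: log(mu_ij) = lambda + lambda_i(experience) + lambda_j(dependence)
model = smf.glm("count ~ C(experience) + C(dependence)",
                data=table, family=sm.families.Poisson()).fit()

print(model.summary())
print("deviance (lack of fit of the independence model):", round(model.deviance, 2))
```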

Relevance:

20.00%

Publisher:

Abstract:

This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and jump-telegraph processes. The study in this first part includes the computation of the distribution of each process, their means and variances, and their moment-generating functions, among other properties. Using these properties, the second part studies option-pricing models based on jump-telegraph processes. This part describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model and, finally, computes the prices of European call and put options.
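The thesis derives these quantities analytically; the sketch below only illustrates the objects involved by simulating a jump-telegraph log-return (regime velocities, exponential switching times, a jump at each switch) and pricing a European call by Monte Carlo. All parameter values are hypothetical and are not the risk-neutral parameters obtained in the thesis.

```python
# Monte-Carlo sketch only (the thesis treats this analytically): simulate a
# jump-telegraph log-return X(T) and price a European call by discounting the
# expected payoff. All parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

def jump_telegraph_XT(T, c, lam, h, n_paths):
    """X(T) for a jump-telegraph process: velocity c[i] in regime i, exponential
    holding times with rate lam[i], jump of size h[i] at each regime switch."""
    X = np.empty(n_paths)
    for p in range(n_paths):
        t, x, i = 0.0, 0.0, 0
        while True:
            tau = rng.exponential(1.0 / lam[i])   # holding time in current regime
            if t + tau >= T:                      # maturity reached before next switch
                x += c[i] * (T - t)
                break
            x += c[i] * tau + h[i]                # drift over holding time plus jump
            t += tau
            i ^= 1                                # switch regime 0 <-> 1
        X[p] = x
    return X

# Hypothetical parameters (NOT the thesis's risk-neutral values)
c   = np.array([0.4, -0.3])        # regime velocities
lam = np.array([5.0, 4.0])         # switching intensities
h   = np.array([-0.02, 0.03])      # jump sizes at switches out of each regime
T, S0, K, r = 1.0, 100.0, 100.0, 0.03

XT = jump_telegraph_XT(T, c, lam, h, n_paths=20_000)
call = np.exp(-r * T) * np.maximum(S0 * np.exp(XT) - K, 0.0).mean()
print(f"Monte-Carlo European call price ≈ {call:.2f}")
```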

Relevance:

20.00%

Publisher:

Abstract:

This paper estimates linear and non-linear error-correction models for the spot prices of four types of coffee. In line with economic theory, we find evidence that when prices are above their equilibrium level they return to it more slowly than when they are below it. This may reflect the fact that, in the short run, it is easier for coffee-producing countries to restrict supply in order to raise prices than to increase supply in order to lower them. We also find evidence that the adjustment is faster when deviations from equilibrium are larger. The forecasts obtained from the non-linear and asymmetric error-correction models considered in this paper offer a slight improvement over those produced by a random-walk model.
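A rough sketch of an asymmetric error-correction model of the kind described, in which the speed of adjustment differs according to the sign of the lagged deviation from equilibrium; the specification and the simulated series are placeholders, not the paper's model or the coffee price data.

```python
# Rough sketch (not the paper's specification): an asymmetric error-correction
# model in which the adjustment speed depends on the sign of the lagged
# deviation from equilibrium. Series are simulated placeholders.
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Simulated log prices: p follows an equilibrium relation with q plus a deviation
q = np.cumsum(rng.normal(size=n))                 # reference price series
dev = np.zeros(n)
for t in range(1, n):
    speed = 0.10 if dev[t - 1] > 0 else 0.30      # slower correction from above
    dev[t] = (1 - speed) * dev[t - 1] + rng.normal(scale=0.2)
p = 0.5 + 1.0 * q + dev

# Step 1: long-run (equilibrium) regression and error-correction term
A = np.column_stack([np.ones(n), q])
beta, *_ = np.linalg.lstsq(A, p, rcond=None)
ect = p - A @ beta

# Step 2: asymmetric ECM for the price change
dp = np.diff(p)
pos = np.maximum(ect[:-1], 0.0)                   # lagged deviations above equilibrium
neg = np.minimum(ect[:-1], 0.0)                   # lagged deviations below equilibrium
X = np.column_stack([np.ones(n - 1), pos, neg, np.diff(q)])
gamma, *_ = np.linalg.lstsq(X, dp, rcond=None)
print("adjustment above equilibrium:", round(gamma[1], 3),
      "| below equilibrium:", round(gamma[2], 3))
```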

Relevance:

20.00%

Publisher:

Abstract:

We examine the long-run relationship between the parallel and the official exchange rate in Colombia over two regimes: a crawling-peg period and a more flexible crawling-band one. The short-run adjustment process of the parallel rate is examined in both a linear and a nonlinear context. We find that the change from the crawling-peg to the crawling-band regime did not affect the long-run relationship between the official and parallel exchange rates, but it altered the short-run dynamics. Non-linear adjustment seems appropriate for the first period, mainly due to strict foreign exchange controls that cause distortions in the transition back to equilibrium once disequilibrium occurs.
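A sketch, under stated assumptions, of the kind of two-step analysis the abstract describes: an Engle-Granger style long-run regression of the parallel on the official rate, a residual-based ADF check, and a comparison of adjustment speeds across the two sub-samples. The series are simulated placeholders, and the linear adjustment equation stands in for the paper's (possibly nonlinear) specification.

```python
# Sketch under assumptions (not the paper's estimation): long-run regression of
# the parallel on the official rate, residual-based cointegration check, and
# regime-by-regime adjustment speeds. Series are simulated placeholders.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
n = 400
official = np.cumsum(rng.normal(0.05, 0.2, size=n))          # random-walk official rate
gap = np.zeros(n)
for t in range(1, n):
    speed = 0.05 if t < n // 2 else 0.25                      # faster adjustment in regime 2
    gap[t] = (1 - speed) * gap[t - 1] + rng.normal(scale=0.3)
parallel = 0.2 + 1.0 * official + gap

# Step 1: long-run regression and residual-based cointegration test
A = np.column_stack([np.ones(n), official])
beta, *_ = np.linalg.lstsq(A, parallel, rcond=None)
resid = parallel - A @ beta
print("ADF statistic on residuals: %.2f (p = %.3f)" % adfuller(resid)[:2])

# Step 2: speed of adjustment of the parallel rate, regime by regime
def adjustment_speed(lo, hi):
    dp = np.diff(parallel[lo:hi])
    X = np.column_stack([np.ones(hi - lo - 1), resid[lo:hi - 1]])
    coef, *_ = np.linalg.lstsq(X, dp, rcond=None)
    return coef[1]

print("crawling peg: %.3f | crawling band: %.3f"
      % (adjustment_speed(0, n // 2), adjustment_speed(n // 2, n)))
```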