2 results for Non-linear optical behaviour

at Massachusetts Institute of Technology


Relevance:

100.00%

Publisher:

Abstract:

Polydimethylsiloxane (PDMS) is the elastomer of choice for creating a variety of microfluidic devices by soft lithography techniques (e.g., [1], [2], [3], [4]). Accurate and reliable design, manufacture, and operation of microfluidic devices made from PDMS require a detailed characterization of the deformation and failure behavior of the material. This paper discusses progress in a recently initiated research project towards this goal. We have conducted large-deformation tension and compression experiments on traditional macroscale specimens, as well as microscale tension experiments on thin-film (≈ 50 µm thickness) specimens of PDMS with varying monomer:curing-agent ratios (5:1, 10:1, 20:1). We find that the stress-stretch response of these materials shows significant variability, even for nominally identically prepared specimens. A non-linear, large-deformation rubber-elasticity model [5], [6] is applied to represent the behavior of PDMS. The constitutive model has been implemented in a finite-element program [7] to aid the design of microfluidic devices made from this material. As a first attempt towards the goal of estimating the non-linear material parameters for PDMS from indentation experiments, we have conducted micro-indentation experiments using a spherical indenter tip and carried out corresponding numerical simulations to verify how well the numerically predicted P-h (load versus depth-of-indentation) curves compare with the corresponding experimental measurements. The results are encouraging, and show the possibility of estimating the material parameters for PDMS from relatively simple micro-indentation experiments and corresponding numerical simulations.
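The abstract does not specify the fitting procedure, so the following is only a minimal sketch of how a material parameter might be estimated from a spherical-indentation P-h curve. It assumes small-strain Hertzian contact with a rigid spherical tip rather than the paper's non-linear, large-deformation rubber-elasticity model and finite-element simulations, and all numerical values (tip radius, modulus, noise level) are made up for illustration.

```python
# Hypothetical sketch: estimating a Young's modulus for PDMS from a
# spherical micro-indentation load-depth (P-h) curve via Hertzian contact.
# This is an illustrative simplification, not the paper's FE-based method.
import numpy as np
from scipy.optimize import curve_fit

R = 0.5e-3          # indenter-tip radius [m] (assumed)
NU = 0.5            # Poisson's ratio, nearly incompressible elastomer

def hertz_load(h, E):
    """Hertzian load for a rigid sphere on an elastic half-space:
    P = (4/3) * E / (1 - nu^2) * sqrt(R) * h^(3/2)."""
    return (4.0 / 3.0) * E / (1.0 - NU**2) * np.sqrt(R) * h**1.5

# Synthetic "measured" P-h data standing in for an experimental curve.
h_data = np.linspace(0.0, 20e-6, 50)                       # depth [m]
P_data = hertz_load(h_data, 1.5e6)                          # ~1.5 MPa modulus
P_data = P_data + 1e-5 * np.random.default_rng(0).normal(size=h_data.size)

# Least-squares estimate of the modulus from the P-h curve.
E_fit, _ = curve_fit(hertz_load, h_data, P_data, p0=[1e6])
print(f"Estimated Young's modulus: {E_fit[0] / 1e6:.2f} MPa")
```

In the paper's setting, the same idea would be carried out with the non-linear constitutive model embedded in the finite-element simulation, iterating on the material parameters until the simulated P-h curve matches the measured one.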

Relevance:

100.00%

Publisher:

Abstract:

We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformation of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised fashion (i.e., ignorant of the geometry of the situation). We stress that it learns a "parameterization" of the data, not just the parameter values. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing the non-linear color changes caused by lighting.
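The abstract does not describe the learning algorithm itself, so the following is only a crude linear analogue: it extracts dominant modes of variation from a toy image sequence by PCA on frame-to-frame differences and uses one learned mode to generate a one-parameter family of variants of a single image. The synthetic translating-square video and the PCA step are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: dominant modes of variation from a video sequence
# via PCA on frame-to-frame differences (a linear stand-in for the paper's
# learned "generators"). All data here are synthetic.
import numpy as np

def synthetic_sequence(n_frames=30, size=32):
    """A toy video: a bright square translating slowly across the frame."""
    frames = np.zeros((n_frames, size, size))
    for t in range(n_frames):
        x = 4 + t // 2
        frames[t, 10:18, x:x + 8] = 1.0
    return frames

frames = synthetic_sequence()
# Each row is one observed frame-to-frame change, flattened to a vector.
diffs = np.diff(frames, axis=0).reshape(frames.shape[0] - 1, -1)
diffs = diffs - diffs.mean(axis=0, keepdims=True)

# PCA via SVD: right singular vectors are the dominant modes of variation.
_, s, vt = np.linalg.svd(diffs, full_matrices=False)
modes = vt[:3]                                   # top-3 "generator" directions
explained = s[:3] ** 2 / np.sum(s ** 2)
print("Variance explained by top-3 modes:", np.round(explained, 3))

# Applying a learned mode to a single image yields a one-parameter family
# of variants, i.e. a slice of the generated manifold.
new_image = frames[0].ravel()
family = [new_image + alpha * modes[0] for alpha in (-2.0, 0.0, 2.0)]
```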