15 results for "Historicity of the cinematographic images of pirates"
in Cambridge University Engineering Department Publications Database
Abstract:
The effects of varying corona surface treatment on ink drop impact and spreading on a polymer substrate have been investigated. The surface energy of substrates treated with different levels of corona was determined from static contact angle measurements by the Owens and Wendt method. A drop-on-demand print-head was used to eject 38 μm diameter drops of UV-curable graphics ink travelling at 2.7 m/s onto a flat polymer substrate. The kinematic impact phase was imaged with a high-speed camera at 500k frames per second, while the spreading phase was imaged at 20k frames per second. The resultant images were analyzed to track the changes in drop diameter during the different phases of drop spreading. Further experiments were carried out with white-light interferometry to accurately measure the final diameter of drops which had been printed on different corona-treated substrates and UV cured. The results are correlated to characterize the effects of corona treatment on drop impact behavior and final print quality.
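For context, the Owens and Wendt method cited above recovers the polar and dispersive components of the substrate surface energy from contact angles measured with two or more probe liquids. The minimal sketch below uses the standard linearised least-squares form; the probe-liquid components are commonly quoted literature values and the contact angles are placeholders, not data from this study.

```python
import numpy as np

# Owens-Wendt estimate of substrate surface energy from static contact angles.
# Probe-liquid surface tension components (mN/m) are common literature values;
# the contact angles below are hypothetical placeholders.
liquids = {
    # name: (total, dispersive, polar) surface tension in mN/m
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
contact_angle_deg = {"water": 78.0, "diiodomethane": 42.0}  # hypothetical

# Linearised form: g_tot*(1+cos(theta))/(2*sqrt(g_d)) =
#   sqrt(gs_p)*sqrt(g_p/g_d) + sqrt(gs_d)
X, y = [], []
for name, (g_tot, g_d, g_p) in liquids.items():
    theta = np.radians(contact_angle_deg[name])
    y.append(g_tot * (1.0 + np.cos(theta)) / (2.0 * np.sqrt(g_d)))
    X.append([np.sqrt(g_p / g_d), 1.0])

slope, intercept = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
gs_polar, gs_dispersive = slope**2, intercept**2
print(f"polar = {gs_polar:.1f} mN/m, dispersive = {gs_dispersive:.1f} mN/m, "
      f"total = {gs_polar + gs_dispersive:.1f} mN/m")
```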
Abstract:
The three-dimensional structure of very large samples of monodisperse bead packs is studied by means of X-Ray Computed Tomography. We retrieve the coordinates of each bead in the pack and we calculate the average coordination number by using the tomographic images to single out the neighbors in contact. The results are compared with the average coordination number obtained in Aste et al. (2005) by using a deconvolution technique. We show that the coordination number increases with the packing fraction, varying between 6.9 and 8.2 for packing fractions between 0.59 and 0.64. © 2005 Taylor & Francis Group.
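As a rough illustration of the neighbour-counting step described above, the sketch below estimates the mean coordination number from recovered bead centres by counting centre pairs closer than a contact threshold. The tolerance value and the synthetic coordinates are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_coordination_number(centres, bead_diameter, tolerance=0.02):
    """Average number of touching neighbours per bead.

    centres: (N, 3) array of bead centre coordinates recovered from the
    tomograms.  Two beads count as 'in contact' when their centre distance is
    within (1 + tolerance) * bead_diameter; the tolerance is an illustrative
    choice, not the threshold used in the paper.
    """
    tree = cKDTree(centres)
    pairs = tree.query_pairs(r=(1.0 + tolerance) * bead_diameter)
    return 2.0 * len(pairs) / len(centres)   # each contact is shared by 2 beads

# Hypothetical usage with synthetic coordinates:
rng = np.random.default_rng(0)
centres = rng.uniform(0.0, 50.0, size=(1000, 3))
print(mean_coordination_number(centres, bead_diameter=1.0))
```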
Abstract:
This paper proposes a method for extracting reliable architectural characteristics from complex porous structures using micro-computed tomography (μCT) images. The work focuses on a highly porous material composed of a network of fibres bonded together. The segmentation process, allowing separation of the fibres from the remainder of the image, is the most critical step in constructing an accurate representation of the network architecture. Segmentation methods, based on local and global thresholding, were investigated and evaluated by a quantitative comparison of the architectural parameters they yielded, such as the fibre orientation and segment length (sections between joints) distributions and the number of inter-fibre crossings. To improve segmentation accuracy, a deconvolution algorithm was proposed to restore the original images. The efficacy of the proposed method was verified by comparing μCT network architectural characteristics with those obtained using high resolution CT scans (nanoCT). The results indicate that this approach resolves the architecture of these complex networks and produces results approaching the quality of nanoCT scans. The extracted architectural parameters were used in conjunction with an affine analytical model to predict the axial and transverse stiffnesses of the fibre network. Transverse stiffness predictions were compared with experimentally measured values obtained by vibration testing. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
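Since the abstract contrasts local and global thresholding for fibre segmentation, the following sketch shows one common way to compare the two on a single tomographic slice. The use of scikit-image and the window size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage import filters

def segment_fibres(slice_img, block_size=51):
    """Compare global (Otsu) and local (adaptive) thresholding of a uCT slice.

    slice_img: 2-D greyscale array.  block_size is an illustrative window for
    the local threshold, not a value taken from the paper.
    """
    global_mask = slice_img > filters.threshold_otsu(slice_img)
    local_mask = slice_img > filters.threshold_local(slice_img, block_size)
    return global_mask, local_mask

# Hypothetical usage on a synthetic noisy slice:
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (256, 256))
img[100:110, :] += 0.5          # a bright 'fibre'
g, l = segment_fibres(img)
print(g.mean(), l.mean())       # fraction of pixels labelled as fibre
```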
Abstract:
This paper introduces the Interlevel Product (ILP) which is a transform based upon the Dual-Tree Complex Wavelet. Coefficients of the ILP have complex values whose magnitudes indicate the amplitude of multilevel features, and whose phases indicate the nature of these features (e.g. ridges vs. edges). In particular, the phases of ILP coefficients are approximately invariant to small shifts in the original images. We accordingly introduce this transform as a solution to coarse scale template matching, where alignment concerns between decimation of a target and decimation of a larger search image can be mitigated, and computational efficiency can be maintained. Furthermore, template matching with ILP coefficients can provide several intuitive "near-matches" that may be of interest in image retrieval applications. © 2005 IEEE.
Abstract:
Fluid assessment methods, requiring small volumes and avoiding the need for jetting, are particularly useful in the design of functional fluids for inkjet printing applications. With the increasing use of complex (rather than Newtonian) fluids for manufacturing, single frequency fluid characterisation cannot reliably predict good jetting behaviour, owing to the range of shearing and extensional flow rates involved. However, the scope of inkjet fluid assessments (beyond achievement of a nominal viscosity within the print head design specification) is usually focused on the final application rather than the jetting processes. The experimental demonstration of the clear insufficiency of such approaches shows that fluid jetting can readily discriminate between fluids assessed as having similar LVE characterisation (within a factor of 2) for typical commercial rheometer measurements at shearing rates reaching 10^4 rad s^-1. Jetting behaviour of weakly elastic dilute linear polystyrene solutions, for molecular weights of 110-488 kDa, recorded using high speed video was compared with recent results from numerical modelling and capillary thinning studies of the same solutions. The jetting images show behaviour ranging from near-Newtonian to "beads-on-a-string". The inkjet printing behaviour does not correlate simply with the measured extensional relaxation times or Zimm times, but may be consistent with non-linear extensibility L and the production of fully extended polymer molecules in the thinning jet ligament. Fluid test methods allowing a more complete characterisation of NLVE parameters are needed to assess inkjet printing feasibility prior to directly jetting complex fluids. At the present time, directly jetting such fluids may prove to be the only alternative. © 2014 The Authors.
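Because the abstract compares jetting behaviour against Zimm times, a common order-of-magnitude estimate of the Zimm relaxation time of a dilute solution is sketched below. The relation is quoted only up to a numerical prefactor of order unity, and every numeric input is a hypothetical placeholder rather than a value from the paper.

```python
# Order-of-magnitude Zimm relaxation time of a dilute polymer solution,
# tau_Z ~ eta_s * [eta] * M_w / (N_A * k_B * T), up to a numerical prefactor
# of order unity.  All inputs below are placeholders, not measured values.
N_A = 6.022e23          # Avogadro constant, 1/mol
k_B = 1.381e-23         # Boltzmann constant, J/K

eta_s = 0.01            # solvent viscosity, Pa.s (hypothetical)
intrinsic_visc = 0.05   # intrinsic viscosity [eta], m^3/kg (hypothetical)
M_w = 110.0             # molecular weight, kg/mol (i.e. 110 kDa)
T = 298.0               # temperature, K

tau_zimm = eta_s * intrinsic_visc * M_w / (N_A * k_B * T)
print(f"Zimm time ~ {tau_zimm * 1e6:.1f} microseconds")
```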
Abstract:
The current study extends our earlier investigation of the real-time dynamics of print gap airflow around a single jetted drop over a moving substrate. In the present work, simulated web press printing was performed using a stationary grey-scale commercial inkjet print head to print full-width blocks of solid colour images onto a paper substrate with extended print gaps. The resultant printed images exhibit patterns or 'wood-graining' effects which become more prevalent as the relevant Reynolds number (Re) increases. High-resolution scans of the printed images revealed that the patterns are created by oscillation and coalescence of neighboring printed tracks across the web. The phenomenon could be a result of drop stream perturbations caused by unsteady print gap airflow of a type similar to that observed in the previous study. © 2013 Society for Imaging Science and Technology.
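For orientation, a print-gap Reynolds number of the kind referred to above can be estimated as Re = rho * U * L / mu. The sketch below assumes the web speed and the print-gap height as the characteristic velocity and length; the paper's exact definition may differ, and the numbers are placeholders.

```python
# Reynolds number for air flow in the print gap, Re = rho * U * L / mu.
# The choice of characteristic velocity (web speed) and length (print-gap
# height) is an assumption for illustration only.
rho_air = 1.2          # kg/m^3
mu_air = 1.8e-5        # Pa.s
web_speed = 1.0        # m/s   (hypothetical)
print_gap = 3.0e-3     # m     (hypothetical extended gap)

Re = rho_air * web_speed * print_gap / mu_air
print(f"Re = {Re:.0f}")
```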
Abstract:
Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. This approach employs the idea of initially learning the correspondences between landmarks in a frontal image and a set of training images with a face in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
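A very schematic view of the landmark-correspondence learner described above is sketched below: for a given target pose, a regressor maps frontal-face landmark coordinates to the landmark coordinates observed at that pose, after which the frontal texture would be warped onto the predicted points to synthesise a virtual training image. The use of ridge regression and the random training arrays are assumptions for illustration; the paper's actual learner may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

n_faces, n_landmarks = 200, 68
rng = np.random.default_rng(0)

# Placeholder landmark sets standing in for annotated training data:
frontal = rng.normal(size=(n_faces, 2 * n_landmarks))             # flattened (x, y)
posed = frontal + rng.normal(scale=0.1, size=frontal.shape)       # fake target pose

# Learn the frontal-to-posed landmark mapping for this pose.
learner = Ridge(alpha=1.0).fit(frontal, posed)

# Predict landmarks of an unseen frontal face at the trained pose; texture
# warping onto these points (to build the virtual image) is omitted here.
new_frontal = rng.normal(size=(1, 2 * n_landmarks))
predicted_pose_landmarks = learner.predict(new_frontal)
print(predicted_pose_landmarks.shape)
```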
Abstract:
This paper presents a novel method of using experimentally observed optical phenomena to reverse-engineer a model of the carbon nanofiber-addressed liquid crystal microlens array (C-MLA) using Zemax. It presents the first images of the optical profile of the C-MLA along the optic axis. The first working optical models of the C-MLA have been developed by matching the simulation results to the experimental results. This approach bypasses the need to know the exact carbon nanofiber-liquid crystal interaction and can be easily adapted to other systems where the nature of an optical device is unknown. Results show that the C-MLA behaves like a simple lensing system at 0.060-0.276 V/μm. In this lensing mode the C-MLA is successfully modeled as a reflective convex lens array intersecting with a flat reflective plane. The C-MLA at these field strengths exhibits characteristics of mostly spherical or low order aspheric arrays, with some aspects of high power aspherics. It also exhibits properties associated with varying lens apertures and strengths, which concur with previously theorized models based on E-field patterns. This work uniquely provides evidence demonstrating an apparent "rippling" of the liquid crystal texture at low field strengths, which was successfully reproduced using rippled Gaussian-like lens profiles. © 2014 Published by Elsevier B.V.
Abstract:
The dominant industrial approach for the reduction of NOx emissions in industrial gas turbines is the lean pre-mixed prevaporized concept. The main advantage of this concept is the lean operation of the combustion process; this decreases the heat release rate from the flame and results in a reduction in operating temperature. Heat release rates were measured directly via simultaneous planar laser induced fluorescence imaging of the OH and CH2O radicals. The product of the two images correlates with the forward production rate of the HCO radical, which in turn has been shown to correlate well with heat release rates from premixed hydrocarbon flames. The experimental methodology for measuring heat release rate and its application to different turbulent premixed flames are presented. This is an abstract of a paper presented at the 7th World Congress of Chemical Engineering (Glasgow, Scotland, 7/10-14/2005).
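The pixel-wise product of the two simultaneous PLIF images, used above as a heat release rate marker, can be computed straightforwardly; a minimal sketch follows. The per-image normalisation is an illustrative choice, not necessarily the one used in the study.

```python
import numpy as np

def heat_release_proxy(oh_plif, ch2o_plif):
    """Pixel-wise product of simultaneous OH and CH2O PLIF images.

    Both inputs are 2-D arrays on the same grid; each is normalised by its own
    maximum before multiplication (an illustrative choice).
    """
    oh = oh_plif / oh_plif.max()
    ch2o = ch2o_plif / ch2o_plif.max()
    return oh * ch2o   # correlates with the forward HCO production rate

# Hypothetical usage with synthetic images:
rng = np.random.default_rng(2)
proxy = heat_release_proxy(rng.random((128, 128)), rng.random((128, 128)))
print(proxy.mean())
```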
Abstract:
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum.We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologramplane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
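To make the occlusion-aware point-source accumulation concrete, the sketch below computes the field at one hologram-plane sample from a set of object points, dropping points whose line of sight is blocked by a nearer point. The crude visibility test and the numeric parameters are placeholders standing in for the per-sample occluder lists described in the abstract, and the Gaussian surface interpolation is omitted entirely.

```python
import numpy as np

def hologram_from_points(points, amplitudes, sample_xy, wavelength=633e-9):
    """Naive point-source accumulation for one hologram-plane sample.

    points: (N, 3) object points in metres, hologram plane at z = 0.
    sample_xy: (x, y) of the hologram sample.  A point is skipped if its ray
    to the sample passes within block_radius of a nearer visible point; this
    stands in for the paper's occluder lists, it is not the actual algorithm.
    """
    k = 2.0 * np.pi / wavelength
    block_radius = 1e-4
    x0, y0 = sample_xy
    order = np.argsort(points[:, 2])          # nearest points first
    field = 0.0 + 0.0j
    visible = []
    for idx in order:
        p = points[idx]
        blocked = False
        for q in visible:
            t = q[2] / p[2]                   # fractional depth along the ray
            ray_xy = np.array([x0, y0]) + t * (p[:2] - np.array([x0, y0]))
            if np.linalg.norm(ray_xy - q[:2]) < block_radius:
                blocked = True
                break
        if blocked:
            continue
        visible.append(p)
        r = np.sqrt((p[0] - x0) ** 2 + (p[1] - y0) ** 2 + p[2] ** 2)
        field += amplitudes[idx] * np.exp(1j * k * r) / r
    return field

# Hypothetical usage with a few synthetic object points:
rng = np.random.default_rng(5)
pts = rng.uniform([-0.01, -0.01, 0.05], [0.01, 0.01, 0.10], size=(50, 3))
print(abs(hologram_from_points(pts, np.ones(50), (0.0, 0.0))))
```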
Abstract:
In stereo displays, binocular disparity creates a striking impression of depth. However, such displays present focus cues (blur and accommodation) that specify a different depth than disparity, thereby causing a conflict. This conflict causes several problems including misperception of the 3D layout, difficulty fusing binocular images, and visual fatigue. To address these problems, we developed a display that preserves the advantages of conventional stereo displays while presenting correct or nearly correct focus cues. In our new stereo display, each eye views a display through a lens that switches between four focal distances at a very high rate. The switches are synchronized to the display, so focal distance and the distance being simulated on the display are consistent or nearly consistent with one another. Focus cues for points in-between the four focal planes are simulated by using a depth-weighted blending technique. We will describe the design of the new display, discuss the retinal images it forms under various conditions, and describe an experiment that illustrates the effectiveness of the display in maximizing visual performance while minimizing visual fatigue. © 2009 SPIE-IS&T.
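One common form of the depth-weighted blending mentioned above splits a point's intensity linearly, in dioptric distance, between the two nearest focal planes. The sketch below shows that rule; the exact weighting and the plane distances used in the paper may differ, and the numbers in the example are placeholders.

```python
def blending_weights(target_diopters, plane_diopters):
    """Linear depth-weighted blending between the two nearest focal planes.

    plane_diopters: focal distances of the display's planes, sorted ascending,
    in diopters.  Returns one weight per plane; the weights sum to 1.  Linear
    blending in dioptric distance is an assumed rule for illustration.
    """
    weights = [0.0] * len(plane_diopters)
    if target_diopters <= plane_diopters[0]:
        weights[0] = 1.0
        return weights
    if target_diopters >= plane_diopters[-1]:
        weights[-1] = 1.0
        return weights
    for i in range(len(plane_diopters) - 1):
        lo, hi = plane_diopters[i], plane_diopters[i + 1]
        if lo <= target_diopters <= hi:
            w_hi = (target_diopters - lo) / (hi - lo)
            weights[i], weights[i + 1] = 1.0 - w_hi, w_hi
            return weights

# Example: a point at 1.1 D on a display with planes at 0.2, 0.6, 1.0, 1.4 D.
print(blending_weights(1.1, [0.2, 0.6, 1.0, 1.4]))
```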
Abstract:
We describe a method for verifying seismic modelling parameters. It is equivalent to performing several iterations of unconstrained least-squares migration (LSM). The approach allows the comparison of modelling/imaging parameter configurations with greater confidence than simply viewing the migrated images. The method is best suited to determining discrete parameters but can be used for continuous parameters albeit with greater computational expense.
Abstract:
This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
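A highly simplified sketch of the Random Forest inference stage described above follows: silhouettes synthesised from the 3D shape priors across phenotypes, poses, and viewpoints are turned into feature vectors and used to train a classifier that labels a query silhouette. The feature choice (a down-sampled binary mask) and the random training data are placeholders, not the descriptors or renderings used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def silhouette_features(mask, size=32):
    """Flatten a binary silhouette after coarse down-sampling."""
    step_r = max(1, mask.shape[0] // size)
    step_c = max(1, mask.shape[1] // size)
    return mask[::step_r, ::step_c].astype(float).ravel()

rng = np.random.default_rng(3)
masks = rng.random((500, 128, 128)) > 0.5          # fake rendered silhouettes
labels = rng.integers(0, 5, size=500)              # fake phenotype classes

X = np.array([silhouette_features(m) for m in masks])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

query = silhouette_features(rng.random((128, 128)) > 0.5)
print("predicted phenotype:", clf.predict([query])[0])
```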
Abstract:
We present a model for early vision tasks such as denoising, super-resolution, deblurring, and demosaicing. The model provides a resolution-independent representation of discrete images which admits a truly rotationally invariant prior. The model generalizes several existing approaches: variational methods, finite element methods, and discrete random fields. The primary contribution is a novel energy functional, not previously written down, which combines the discrete measurements from pixels with a continuous-domain world viewed through continuous-domain point-spread functions. The value of the functional is that simple priors (such as total variation and generalizations) on the continuous-domain world become realistic priors on the sampled images. We show that despite its apparent complexity, optimization of this model depends on just a few computational primitives, which, although tedious to derive, can now be reused in many domains. We define a set of optimization algorithms which greatly overcome the apparent complexity of this model and make possible its practical application. New experimental results include infinite-resolution upsampling, and a method for obtaining subpixel superpixels. © 2012 IEEE.
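For readers unfamiliar with the "simple priors" referred to above, the sketch below applies an off-the-shelf total-variation denoiser to a noisy discrete image. It illustrates only the familiar discrete special case, not the paper's continuous-domain energy functional, and the regularisation weight is an arbitrary choice.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # a piecewise-constant scene
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

# Total-variation denoising: the TV prior favours piecewise-constant results.
denoised = denoise_tv_chambolle(noisy, weight=0.1)
print(f"noisy MSE    = {np.mean((noisy - clean) ** 2):.4f}")
print(f"denoised MSE = {np.mean((denoised - clean) ** 2):.4f}")
```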
Abstract:
We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
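To make the occlusion version of the bars test concrete, the sketch below generates one training image in which randomly present coloured bars are assigned random depths, with nearer bars overwriting farther ones where they overlap. The parameter values follow the usual bars-test conventions but are placeholders, not the exact settings of the paper's experiments.

```python
import numpy as np

def coloured_bars_image(size=8, n_bars=16, p_on=0.125, rng=None):
    """Generate one image for a coloured-bars occlusion test.

    Each of the n_bars possible horizontal/vertical bars is present with
    probability p_on and receives a random colour and a random depth; where
    bars overlap, the bar nearer the viewer occludes the others.
    """
    rng = rng or np.random.default_rng()
    image = np.zeros((size, size, 3))
    present = np.flatnonzero(rng.random(n_bars) < p_on)
    depths = rng.random(len(present))
    # paint bars from farthest to nearest so nearer bars overwrite (occlude)
    for idx in present[np.argsort(-depths)]:
        colour = rng.random(3)
        if idx < size:                 # horizontal bar
            image[idx % size, :, :] = colour
        else:                          # vertical bar
            image[:, idx % size, :] = colour
    return image

print(coloured_bars_image().shape)     # (8, 8, 3)
```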