954 results for Video-camera
Abstract:
The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially-available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially-available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an 80,000 programme to implement the system proposed by the author.
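The iso-lux diagram described above is built from a two-dimensional array of illuminance measurements. As a rough, hypothetical sketch (not the project's actual software, and with invented values), extracting the grid cells that lie on a given illuminance contour might look like this:

```python
def isolux_points(grid, level, tol=0.05):
    """Return (row, col) cells whose illuminance lies within tol*level of level."""
    pts = []
    for r, row in enumerate(grid):
        for c, lux in enumerate(row):
            if abs(lux - level) <= tol * level:
                pts.append((r, c))
    return pts

# Toy 2D beam distribution: illuminance falls off with distance from the hot spot.
beam = [[100.0 / (1 + (r - 2) ** 2 + (c - 2) ** 2) for c in range(5)]
        for r in range(5)]

# Cells on (or near) the 50-lux contour of the toy beam.
contour = isolux_points(beam, 50.0, tol=0.1)
```

Repeating this for several `level` values yields the nested contours of an iso-lux plot; the grid itself is what the proposed photodetector-array system acquires in seconds rather than minutes.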
Abstract:
FULL TEXT: Like many people, one of my favourite pastimes over the holiday season is to watch the great movies offered on the television channels and the new releases in the movie theatres, or to catch up on those DVDs I have been wanting to watch all year. Recently we had the new ‘Star Wars’ movie, ‘The Force Awakens’, which is reckoned to become the highest-grossing movie of all time, and the latest offering from James Bond, ‘Spectre’ (which included, for the car aficionados amongst you, the gorgeous new Aston Martin DB10). It is always amusing to see how vision correction or eye injury is dealt with by movie makers. Spy movies and science fiction movies have a free hand to design aliens with multiple eyes on stalks, retina-scanning door locks or goggles that can see through walls. Eye surgery is usually shown as some kind of simplified day-case laser treatment that gives instant results, apart from the great scene in the original ‘Terminator’ movie where Arnold Schwarzenegger's android character sustains an injury to one eye and then proceeds to remove the humanoid covering of this mechanical eye over a bathroom sink. I suppose it is much more difficult to include contact lenses in such movies. You may recall, though, the film ‘Charlie's Angels’, which had a scene where one of the Angels wore a contact lens with a retinal image imprinted on it so she could bypass a retinal-scan door lock, and the Eddie Murphy spy movie ‘I Spy’, in which he wore contact lenses with electronic gadgetry that allowed whatever he was looking at to be beamed back to someone else, a kind of remote video camera device. Maybe we aren't quite there in terms of available devices, but these things are probably no longer the preserve of science fiction, as the technology to put them together does exist. The technology to incorporate electronics into contact lenses is being developed and I am sure we will be reporting on it in the near future.
In the meantime we can continue to enjoy the unrealistic scenes of eye swapping, as in the film ‘Minority Report’ (with Tom Cruise). Much closer to home than a galaxy far, far away, in this issue you can find articles on topics from the nearer future. More and more optometrists in the UK are becoming registered for therapeutic work as independent prescribers, and the number is likely to rise in the near future. These practitioners will be interested in the review paper by Michael Doughty, who is a member of the CLAE editorial panel (soon to be renamed the Jedi Council!), on prescribing drugs as part of the management of chronic meibomian gland dysfunction. Contact lenses play an active role in myopia control, and orthokeratology has been used not only to provide refractive correction but also to retard myopia. In this issue there are three articles related to this topic. Firstly, an excellent paper looking at the link between higher spherical equivalent refractive errors and slower axial elongation. Secondly, a paper that discusses the effectiveness and safety of overnight orthokeratology with a high-permeability lens material. Finally, a paper that looks at the stabilisation of early adult-onset myopia. We are always eager for new and exciting developments in contact lenses and related instrumentation, and in this issue of CLAE there is a demonstration of a novel and practical use of a smartphone to assist anterior segment imaging, with suggestions of how this may be used in telemedicine. It is not hard to imagine someone taking an image remotely and transmitting it back to a central diagnostic centre, with the relevant expertise housed in one place, where the information can be interpreted and instruction given back to the remote site.
Back to ‘Star Wars’: you will recall in the film ‘The Phantom Menace’, when Qui-Gon Jinn first meets Anakin Skywalker on Tatooine, he takes a sample of his blood and sends a scan of it back to Obi-Wan Kenobi for analysis, and they find that the boy has the highest midichlorian count ever seen. On behalf of the CLAE Editorial Board (or Jedi Council) and the BCLA Council (the Senate of the Republic) we wish for you a great 2016 and ‘may the contact lens force be with you’. Or let me put that another way: ‘the CLAE Editorial Board and BCLA Council, on behalf of, a great 2016, we wish for you!’
Abstract:
A new mesoscale simulation model for solids dissolution based on a computationally efficient and versatile digital modelling approach (DigiDiss) is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, use of the model is explored for single crystals (sugars) and clusters. Single crystals and the cluster were first scanned using X-ray microtomography to obtain a digital version of their structures. The digitised particles and clusters were used as a structural input to the digital simulation. The same particles were then dissolved in water and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour, based on known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements to the modelling approach will need to include other important effects, such as complex disintegration effects (particle ejection) and uncertainties in chemical properties. The nature of the digital modelling approach is well suited to future implementation with high-speed computation using hybrid conventional (CPU) and graphical processor (GPU) systems.
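The dissolution behaviour such models reproduce can be sketched with a deliberately minimal shrinking-sphere toy model (an illustration only; DigiDiss itself is a mesoscale digital model handling irregular, multi-component structures, and all rate constants here are invented):

```python
def dissolve(radius0, k, c_sat, c_bulk, dt=0.01, steps=1000):
    """Shrinking-sphere dissolution: the radius recedes at a rate proportional
    to the undersaturation (c_sat - c_bulk); density is folded into k.
    Returns the radius history until the particle vanishes."""
    r = radius0
    history = [r]
    for _ in range(steps):
        r = max(0.0, r - k * (c_sat - c_bulk) * dt)
        history.append(r)
        if r == 0.0:
            break
    return history

# Toy run: unit-radius particle in pure solvent (c_bulk = 0).
hist = dissolve(1.0, k=0.5, c_sat=1.0, c_bulk=0.0)
```

The overall dissolution time is then simply the number of steps times `dt`, the quantity the video-camera experiments measure directly.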
Abstract:
A large series of laboratory ice crushing experiments was performed to investigate the effects of external boundary condition and indenter contact geometry on ice load magnitude under crushing conditions. Four boundary conditions were considered: dry cases, submerged cases, and cases with snow or granular ice material present on the indenter surface. The indenter geometries were a flat plate, a wedge-shaped indenter, a (reverse) conical indenter, and a spherical indenter. These were impacted with artificially produced conical ice specimens with 20° and 30° cone angles. All indenter–ice combinations were tested in dry and submerged environments at 1 mm/s and 100 mm/s indentation rates. Additional tests with the flat indentation plate were conducted at 10 mm/s impact velocity, and a subset of scenarios with snow and granular ice material was evaluated. The tests were performed using a material testing system (MTS) machine located inside a cold room at an ambient temperature of -7°C. Data acquisition comprised time, vertical force, and displacement. In several tests with the flat plate and wedge-shaped indenter, supplementary information on local pressure patterns and contact area was obtained using tactile pressure sensors. All tests were recorded with a high-speed video camera, and still photos were taken before and after each test. Thin sections were also taken of some specimens. Ice loads were found to depend strongly on the contact condition, interrelated with pre-existing confinement and indentation rate. Submergence yielded higher forces, especially at the high indentation rate. This was very evident for the flat indentation plate and the spherical indenter, and to a lesser extent for the wedge-shaped indenter; no such effect was found for the conical indenter. For the conical indenter it was concluded that the structural restriction due to the indenter geometry was dominant.
The working surface on which the water could act was not sufficient to influence the failure processes and associated ice loads. The presence of snow and granular ice significantly increased the forces at the low indentation rate (with the flat indentation plate), yielding loads higher than in the submerged cases and far above the dry contact condition. Contact area measurements revealed that the higher forces correlated with a concurrent increase in actual contact area that depended on the respective boundary condition. Under submergence, the constitution of the ice debris changed; ice extrusion, as well as crack development and propagation, were impeded. Snow and granular ice seemed to provide additional material sources for establishing larger contact areas. The dry contact condition generally had the smallest real contact area, as well as the lowest forces. The comparison of nominal and measured contact areas revealed distinct deviations. Incorporating those differences into the process pressure–area relationships indicated that the overall process pressure was not substantially affected by the increased loads.
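The process pressure referred to above is simply the global force divided by the contact area, so the effect of using measured rather than nominal contact areas is easy to illustrate (values below are invented for illustration, not taken from the experiments):

```python
def process_pressure(force_kN, area_m2):
    """Process pressure in MPa from global force (kN) and contact area (m^2):
    kN/m^2 is kPa, so divide by 1000 to get MPa."""
    if area_m2 <= 0:
        raise ValueError("contact area must be positive")
    return force_kN / (area_m2 * 1000.0)

# Hypothetical example: the same 500 kN load over a nominal vs. a smaller
# measured contact area gives different process pressures.
p_nominal = process_pressure(500.0, 0.25)   # nominal area
p_measured = process_pressure(500.0, 0.20)  # tactile-sensor measured area
```

If both force and actual contact area rise together, as observed under submergence, the ratio (and hence the process pressure) can remain roughly unchanged, which is the paper's concluding point.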
Abstract:
Within the context of the overall ecological working programme Dynamics of Antarctic Marine Shelf Ecosystems (DynAMo) of the PS96 (ANT-XXXI/2) cruise of RV "Polarstern" to the Weddell Sea (Dec 2015 to Feb 2016), seabed imaging surveys were carried out along drift profiles by means of the Ocean Floor Observation System (OFOS) of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) Bremerhaven. The setup and mode of deployment of the OFOS were similar to those described by Bergmann and Klages (2012, doi:10.1016/j.marpolbul.2012.09.018). OFOS is a surface-powered gear equipped with two downward-looking cameras installed side-by-side: one high-resolution, wide-angle still camera (CANON® EOS 5D Mark III; lens: Canon EF 24 f/1.4L II, f stop: 13, exposure time: 1/125 sec; in-air view angles: 74° (horizontal), 53° (vertical), 84° (diagonal); image size: 5760 × 3840 px = 21 MPix; front of pressure-resistant camera housing consisting of a plexiglass dome port) and one high-definition color video camera (SONY® FCB-H11). The system was vertically lowered over the stern of the ship with a broadband fibre-optic cable until it hovered approximately 1.5 m above the seabed. It was then towed after the slowly sailing ship at a speed of approximately 0.5 kn (0.25 m/s). The ship's Global Acoustic Positioning System (GAPS), combining Ultra Short Base Line (USBL), Inertial Navigation System (INS) and satellite-based Global Positioning System (GPS) technologies, was used to gain highly precise underwater position data for the OFOS. During the profile, OFOS was kept hovering at the preferred height above the seafloor by means of the live video feed and occasional minor cable-length adjustments with the winch to compensate for small-scale variations in seabed morphology. Information on water depth and height above the seafloor was continuously recorded by means of OFOS-mounted sensors (GAPS transponder, Tritech altimeter).
Three lasers, placed beside the still camera, emit parallel beams and project red light points, arranged as an equilateral triangle with a side length of 50 cm, into each photo, thus providing a scale that can be used to calculate the seabed area depicted in each image and/or measure the size of organisms or seabed features visible in the image. In addition, the seabed area depicted was estimated using the altimeter-derived height above the seafloor and the optical characteristics of the OFOS still camera. In automatic mode, a seabed photo, depicting an area of approximately 3.45 m² (2.3 m × 1.5 m, varying with the actual height above ground), was taken every 30 seconds to obtain series of "TIMER" stills distributed at regular distances along the profiles, which vary in length depending on the duration of the cast. At a ship speed of 0.5 kn, the average distance between seabed images was approximately 5 m. Additional "HOTKEY" photos were taken of interesting objects (organisms, seabed features such as putative iceberg scours) when they appeared in the live video feed (which was also recorded, in addition to the stills, for documentation and possible later analysis). If any image from this collection is used, please cite the reference as given above.
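The 50 cm laser triangle gives each image a pixel-to-centimetre scale, from which the depicted seabed area follows. A minimal sketch of that calculation (function names and the pixel values are illustrative, not from the OFOS software):

```python
import math

def seabed_scale(p1, p2, side_cm=50.0):
    """cm-per-pixel scale from two laser points a known 50 cm apart in the image."""
    d_px = math.dist(p1, p2)
    return side_cm / d_px

def image_area_m2(width_px, height_px, cm_per_px):
    """Seabed area depicted by the full frame, in square metres."""
    w_m = width_px * cm_per_px / 100.0
    h_m = height_px * cm_per_px / 100.0
    return w_m * h_m

# Hypothetical example: laser dots 1200 px apart in a 5760 x 3840 px frame.
scale = seabed_scale((0, 0), (1200, 0))
area = image_area_m2(5760, 3840, scale)
```

With these invented numbers the frame covers 2.4 m × 1.6 m, i.e. about 3.8 m², the same order as the ~3.45 m² quoted above; the scale also converts pixel measurements of organisms into real sizes.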
Abstract:
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
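The core remapping operation, shifting a retinotopic representation by the corollary-discharge copy of the saccade vector, can be illustrated on a one-dimensional activity map (a deliberately minimal sketch, not the sheet-based hierarchical network used in the study):

```python
def remap(activity, saccade_px):
    """Presaccadic remapping: shift a 1-D retinotopic activity map by the
    corollary-discharge copy of the saccade vector, dropping activity that
    would land outside the map."""
    n = len(activity)
    out = [0.0] * n
    for i, a in enumerate(activity):
        j = i - saccade_px  # the target's retinal position after the eye moves
        if 0 <= j < n:
            out[j] = a
    return out

# A target at retinal index 7; a 3-px rightward saccade shifts it to index 4,
# so the representation is updated before the new visual input arrives.
pre = [0.0] * 10
pre[7] = 1.0
post = remap(pre, 3)
```

In the model, the shifted map is further combined with eye position signals to yield a head-centered frame for pointing; this sketch shows only the shift itself.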
Abstract:
This presentation was both an illustrated lecture and a published paper presented at the IMPACT 9 Conference Printmaking in the Post-Print Age, Hangzhou, China, 2015. It was an extension of the exhibition catalogue essay for the Bluecoat Gallery exhibition of the same name. In 2014 I curated an exhibition, The Negligent Eye, at the Bluecoat Gallery in Liverpool as the result of a longstanding interest in scanning and 3D printing and the role of these in changing the field of Print within Fine Art Practice. In the aftermath of curating the show I have continued to reflect on this material with reference to the writings of Vilém Flusser and Hito Steyerl. The work in the exhibition came from a wide range of artists of all generations, most of whom are not explicitly located within Printmaking. Whilst some work did not use any scanning technology at all, a shared fascination with the particular translating device of the systematizing ‘eye’ of a scanning digital video camera, flatbed or medical scanner was expressed by all the work in the show. Through writing this paper I aim to extend my own understanding of questions which arose from the juxtapositions of work and the production of the accompanying catalogue. The show developed in dialogue with curators Bryan Biggs and Sarah-Jane Parsons of the Bluecoat Gallery, who sent a series of questions about scanning to participating artists. In reflecting upon their answers I will extend the discussions begun in the process of this research. A kind of created attention deficit disorder seems to operate on us all today, pushing us to make and distribute images and information at speed. What value do ways of making which require slow looking or intensive material explorations have in this accelerated system? What model of the world is being constructed by the drive of simulated realities toward ever-greater resolution, so-called high definition?
How are our perceptions of reality being altered by the world-view presented in the smooth, colourful, ever-morphing simulations that surround us? The limitations of digital technology are often a starting point for artists to reflect on our relationship to real-world fragility. I will be looking at practices where tactility or dimensionality in a form of hard copy engages with these questions, using examples from the exhibition. Artists included in the show were: Cory Arcangel, Christiane Baumgartner, Thomas Bewick, Jyll Bradley, Maurice Carlin, Helen Chadwick, Susan Collins, Conroy/Sanderson, Nicky Coutts, Elizabeth Gossling, Beatrice Haines, Juneau Projects, Laura Maloney, Bob Matthews, London Fieldworks (with the participation of Gustav Metzger), Marilène Oliver, Flora Parrott, South Atlantic Souvenirs, Imogen Stidworthy, Jo Stockham, Wolfgang Tillmans, Alessa Tinne, Michael Wegerer, Rachel Whiteread, Jane and Louise Wilson. Keywords: Scanning, Art, Technology, Copy, Materiality.
Abstract:
The aim of this study is to investigate differences in user-interface latency between natively developed apps (separate development for each platform) and so-called generated apps. Since the work aims to contribute performance data, an experimental method was considered the best choice. Loading times were measured with a video camera filming the execution of the experiments, which kept the method simple and close to what a user actually experiences. The scope was limited to the Android and iOS platforms, with Xamarin chosen as the framework among the technologies that create generated apps. Measurement data from experiments investigating loading times, experiments with users assessing the responsiveness of lists, and measurements of CPU and memory usage point to a recurring pattern. Xamarin Forms with XAML was the technology that performed worst in the experiments, followed by Xamarin Forms. Xamarin Android/iOS did not show as large performance losses compared with the natively developed counterparts. In general, Xamarin Forms manages the phone's resources less well than Xamarin Android/iOS and native do. The results of the study can be used as decision support when choosing a technology. The study also contributes data that can be used in further research in the area.
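Measuring load times from a video recording reduces to counting frames between the triggering tap and the fully rendered screen. A sketch of that conversion (the frame rate and frame indices are illustrative assumptions, not values from the study):

```python
def load_time_ms(start_frame, end_frame, fps=240.0):
    """Elapsed UI load time, in milliseconds, from frame indices in a
    video recording of the experiment."""
    if end_frame < start_frame:
        raise ValueError("end_frame before start_frame")
    return (end_frame - start_frame) / fps * 1000.0

# Hypothetical example: tap visible at frame 120, screen fully drawn at 240,
# recorded at 240 fps -> 500 ms perceived load time.
t = load_time_ms(120, 240)
```

The resolution of the method is one frame period (about 4 ms at 240 fps, 33 ms at 30 fps), which bounds how small a latency difference the camera approach can detect.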
Abstract:
Generally, the flow properties of rivers, estuaries and coastal seas are highly dependent on the bed morphology. These comprise mainly three flow parameters: bed shear stress, velocity profile and turbulent fluctuations. Here we investigate the effects of bed permeability on these flow properties, considering the effects of suction and injection (W0) on the bottom stress in particular. Four types of bottom permeability, with different sand sizes, were tested. The results indicate a substantial reduction and enhancement of the bed stress under injection and suction respectively, as has been observed by others for wave motion in shallow seas. We consider 5 waves approaching the shore within the range of wave steepness 0.015 < s0 < 0.05. The reflection coefficient Cr was calculated using the method of Mansard (1980). The streamlines of the flow near the bed were observed with a video camera, both near the surface and deep within the bed; the ratio V = W(in or su)/U(ru or rd) and the bottom stress were considered for the 6 periods of this study and compared with the studies of Conley and Inman (1994). All these results are shown as curves illustrating the effects of the permeable bed.
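The wave steepness range quoted above follows from the deep-water relation s0 = H / L0 with L0 = g·T²/(2π). A small sketch of that relation (the wave height and period used below are illustrative, not from the experiments):

```python
import math

def deep_water_steepness(height_m, period_s, g=9.81):
    """Deep-water wave steepness s0 = H / L0, with L0 = g*T^2 / (2*pi)."""
    L0 = g * period_s ** 2 / (2.0 * math.pi)
    return height_m / L0

# Hypothetical lab wave: H = 0.10 m, T = 1.5 s -> L0 ~ 3.5 m, s0 ~ 0.028,
# which falls inside the study's 0.015 < s0 < 0.05 range.
s0 = deep_water_steepness(0.10, 1.5)
```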
Abstract:
This thesis describes the development and architecture of the software that constitutes the Miradouro Virtual@, more specifically its interface component. The Miradouro Virtual@ is a device whose purpose, like that of traditional tourist binoculars, is to observe the landscape, but whose interaction is not limited to simple individual observation. It uses augmented reality to superimpose computer-generated images onto real images captured by an image acquisition device (typically a video camera), and shows them on a touchscreen, thereby combining virtual and multimedia elements with the real landscape. The final, composited image gives the user a new dimension of the surrounding space, allowing a previously invisible layer of information to be explored. Being sensitive to the orientation of the Miradouro Virtual@, the virtual and multimedia elements adapt to the movements of the device. The Miradouro Virtual@ is a product composed of several hardware and software elements. The focus of this thesis is only on the software components, more specifically the interface. It presents the limitations of the previous version of the software and the solutions found to overcome some of those limitations. ABSTRACT: This thesis focuses on the design and development of the Virtual Sightseeing™ software, more specifically on the interface component. The Virtual Sightseeing™ is a device similar to the traditional scenic viewers that takes advantage of their general familiarity and popularity to build an innovative system. It works by using augmented reality to superimpose, in real time, images generated by a computer onto a live stream captured by a video camera, displaying them on a touchscreen. It allows adding multimedia elements to the real scenery by composing them into the image that is presented to the user.
The multimedia information and virtual elements that are displayed are sensitive to the orientation and position of the device. They change as the user manually changes the orientation of the device. The Virtual Sightseeing™ is comprised of several hardware and software components. The focus of this thesis is on the software part, more specifically on the interface component. It intends to show the known limitations of the previous software version and how they were overcome in this new version.
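Orientation-sensitive overlay of virtual elements, as described above, amounts to mapping the angular offset between the device heading and a point of interest into screen coordinates. A simplified sketch of that mapping (field of view, resolution and function names are illustrative assumptions, not the Virtual Sightseeing™ implementation):

```python
def overlay_x(device_azimuth_deg, poi_azimuth_deg, fov_deg=60.0, screen_w=1280):
    """Horizontal screen position (px) at which to draw a point of interest,
    or None if it lies outside the camera's field of view."""
    # Signed angular offset, wrapped into (-180, 180].
    off = (poi_azimuth_deg - device_azimuth_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2.0:
        return None  # not visible at this orientation
    # Map [-fov/2, +fov/2] linearly onto [0, screen_w].
    return int(round((off / fov_deg + 0.5) * screen_w))
```

As the user rotates the device, re-evaluating this mapping each frame makes the overlays track the landscape, which is the behaviour the thesis describes.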
Abstract:
General simulated scenes These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is put within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples were not reproduced; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of the Argonauta to verify some operational condition. - Scene 1 (Obs.: Scene 1 of the "General simulated scenes" class): One of the scenes with two persons. Both wear light-colored clothes. Both remain still in front of "J9"; one goes to the computer and then comes back, and both go out. Video file labels: "20140326145315_IPCAM": recorded by the right camera.
Abstract:
General simulated scenes These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is put within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples were not reproduced; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of the Argonauta to verify some operational condition. - Scene 1 (Obs.: Scene 1 of the "General simulated scenes" class): One of the scenes with two persons. Both wear light-colored clothes. Both remain still in front of "J9"; one goes to the computer and then comes back, and both go out. Video file labels: "20140326145316_IPCAM": recorded by the left camera.
Abstract:
General simulated scenes These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is put within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples were not reproduced; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of the Argonauta to verify some operational condition. - Scene 2: One of the scenes with two persons. Both wear dark-colored clothes. Both go to the side of the Argonauta reactor, then come back and go out. Video file labels: "20140326154754_IPCAM": recorded by the right camera.
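The video file labels in these scene descriptions appear to encode the recording time as YYYYMMDDhhmmss followed by a source tag. Under that assumption, a label can be parsed as follows (a convenience sketch, not part of the dataset's own tooling):

```python
from datetime import datetime

def parse_label(label):
    """Split a video file label such as '20140326145315_IPCAM' into its
    recording timestamp and source tag (assumed YYYYMMDDhhmmss_TAG format)."""
    stamp, tag = label.split("_", 1)
    return datetime.strptime(stamp, "%Y%m%d%H%M%S"), tag

when, tag = parse_label("20140326145315_IPCAM")
```

Parsed this way, the right- and left-camera files of a scene (e.g. ...145315 and ...145316) differ by one second, which is consistent with near-simultaneous recordings of the same take.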
Abstract:
Scenes for Spectrography experiment Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel, the latter in open condition. These tasks may be executed by one or two persons. A single person can do the tasks, but must crouch in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then get up to verify data on a computer beside it; these movements are repeated until the right operational conditions are achieved. Two people may aid one another, one remaining crouched while the other remains still in front of the computer. They may also interchange tasks so as to divide the received doses. So far, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 1: One of the scenes with one person performing the spectrography experiment. Video file labels: "20140327181336_IPCAM": recorded by the left camera.
Abstract:
Scenes for Spectrography experiment Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel, the latter in open condition. These tasks may be executed by one or two persons. A single person can do the tasks, but must crouch in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then get up to verify data on a computer beside it; these movements are repeated until the right operational conditions are achieved. Two people may aid one another, one remaining crouched while the other remains still in front of the computer. They may also interchange tasks so as to divide the received doses. So far, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 2: Another take, similar to Scene 1. Video file labels: "20140327180749_IPCAM": recorded by the right camera.