9 results for Enunciation scene
in Aston University Research Archive
Abstract:
Grime Scene Investigation was an eight-part television series broadcast on BBC3 during Autumn 2006. In each episode, a team of scientists from Aston University would visit a member of the public in their mobile laboratory to reveal the hidden world of microbes living in, on, and around them. In this way microbiology was communicated in an informative and entertaining way. In this episode, Grime Scene Investigation dives head first into the murky world of Hooligan Swamp, a Bristolian band who pride themselves on living the rock'n'roll lifestyle to the full. The Swamp are facing an eviction notice, and the environmental health authorities are threatening to brand their home a health hazard.
Abstract:
Substance use has an effect on an individual's propensity to commit acquisitive crime, with recent studies showing substance users more likely to leave forensic material at a crime scene. An examination of acquisitive crime solved in Northamptonshire, U.K., during 2006 enabled 70 crime scene behavior characteristics to be analyzed for substance-use and nonsubstance-use offenders. Logistic regression analyses identified statistically significant crime scene behavior predictors that were found to be either present at or absent from the crime scene when the offender was a substance user. The most significant predictors present were indicative of a lack of preparation by the offender, irrational behavior, and a desire to steal high-value, easily disposed-of property. The most significant predictors absent from the crime scene were indicative of more planning, preparation, and execution by the offender. Consideration is given to how this crime scene behavior might be used by police investigators to identify offenders.
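The kind of logistic regression analysis this abstract describes can be sketched as below. The feature names, effect sizes, and data are invented for illustration; they are not the study's actual 70 behavior characteristics or its Northamptonshire data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary crime scene behavior indicators (1 = present, 0 = absent).
# Columns stand for, e.g., [no_tools_brought, careful_entry, high_value_goods_taken].
n = 400
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Synthetic ground truth: substance use (y = 1) is more likely when
# preparation is lacking and high-value goods are targeted.
true_w = np.array([1.5, -1.0, 2.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p).astype(float)

# Fit logistic regression by gradient descent on the log-loss.
w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (pred - y) / n)
    b -= lr * np.mean(pred - y)

# Larger-magnitude coefficients mark the more significant predictors;
# positive signs indicate behaviors present for substance users,
# negative signs behaviors absent.
print("coefficients:", np.round(w, 2))
```

In the study's terms, a positive coefficient corresponds to a behavior found present at substance users' crime scenes, a negative one to a behavior found absent.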
Abstract:
Over the last decade, television screens and display monitors have increased considerably in size, but has this improved our televisual experience? Our working hypothesis was that audiences adopt a general strategy that “bigger is better.” However, as our visual perceptions do not tap directly into basic retinal image properties such as retinal image size (C. A. Burbeck, 1987), we wondered whether object size itself might be an important factor. To test this, we needed a task that would tap into the subjective experiences of participants watching a movie on different-sized displays with the same retinal subtense. Our participants used a line bisection task to self-report their level of “presence” (i.e., their involvement with the movie) at several target locations probed during a 45-min section of the movie “The Good, The Bad, and The Ugly.” Measures of pupil dilation and reaction time to the probes were also obtained. In Experiment 1, we found that subjective ratings of presence increased with physical screen size, supporting our hypothesis. Face scenes also produced higher presence scores than landscape scenes for both screen sizes. In Experiment 2, reaction time and pupil dilation showed the same trends as the presence ratings, and pupil dilation correlated with the presence ratings, providing some validation of the method. Overall, the results suggest that real-time measures of subjective presence might be a valuable tool for measuring audience experience for different types of (i) display and (ii) audiovisual material.
Abstract:
A significant change of scene in a gradually changing scene is detected with the aid of at least one camera means for capturing digital images of the scene. A current image of the scene is formed together with a present weighted reference image, which is formed from a plurality of previous images of the scene. Cell data is established based on the current image and the present weighted reference image. The cell data is statistically analysed so as to identify at least one difference corresponding to a significant change of scene. When identified, an indication of such significant change of scene is provided.
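The scheme above can be sketched with an exponentially weighted reference image and a per-cell outlier test. The patent does not specify the statistical analysis; the mean-plus-k-sigma rule below is a simple illustrative stand-in, and the cell size and thresholds are arbitrary choices.

```python
import numpy as np

def update_reference(reference, current, alpha=0.05):
    """Weighted reference image formed from previous images of the scene."""
    return (1.0 - alpha) * reference + alpha * current

def significant_change(current, reference, cell=8, k=4.0):
    """Split the difference image into cells and flag any cell whose mean
    absolute difference is a statistical outlier relative to the others
    (an assumed stand-in for the patent's unspecified analysis)."""
    diff = np.abs(current - reference)
    h, w = diff.shape
    cells = diff[:h - h % cell, :w - w % cell]
    cells = cells.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    mu, sigma = cells.mean(), cells.std()
    return np.argwhere(cells > mu + k * sigma)  # cell coordinates of change

# Demo: a gradually drifting scene with one abrupt local change.
rng = np.random.default_rng(1)
reference = rng.random((64, 64))
frame = reference + 0.01 * rng.standard_normal((64, 64))  # gradual change
frame[16:24, 32:40] += 0.8                                # significant change
flagged = significant_change(frame, reference)
print("flagged cells:", flagged)
```

The gradual drift stays below the outlier threshold because it raises every cell's difference roughly equally, while the localized change stands out against the cell population.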
Abstract:
An entirely passive approach to locating gas and vapor plumes is proposed. By modulating the transmission waveband of a narrow-band filter, an intensity modulation is established that allows regions of an image to be identified as containing a specific gas whose absorption characteristics align with the filter. A system was built from readily available components to identify regions of NO. Initial results show that the technique was able to distinguish an absorption cell containing NO gas in a test scene. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
Abstract:
This paper examines a method for locating a distribution of an absorbing gas within a scene using a passive imaging technique. An oscillatory modulation of the angle of a narrowband dielectric filter, located in front of a camera imaging the scene, gives rise to an intensity modulation that differs in regions occupied by the absorbing gas. A preliminary low-cost system, constructed from readily available components, demonstrates how the location of gas within a scene can be determined. Modelling of the system has been carried out, in particular highlighting the transmission effects of the dielectric filter on different regions of the image.
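The detection step described in these two abstracts amounts to lock-in demodulation: pixels whose intensity varies in phase with the filter modulation are flagged as gas. The sketch below uses an idealised synthetic scene, not the papers' radiometric model; the modulation depth, noise level, and threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200                                  # number of captured frames
t = np.arange(T)
ref = np.sin(2 * np.pi * t / 20)         # filter-angle modulation waveform

# Hypothetical 32x32 scene: gas occupies one rectangular region. There the
# sweeping filter passband produces an in-phase intensity modulation;
# elsewhere only sensor noise remains.
frames = 0.5 + 0.02 * rng.standard_normal((T, 32, 32))
frames[:, 10:20, 5:15] += 0.1 * ref[:, None, None]

# Lock-in detection: project each pixel's DC-removed time series onto the
# reference waveform; gas pixels yield a large demodulated amplitude.
demod = np.tensordot(ref, frames - frames.mean(axis=0), axes=(0, 0)) / T
gas_mask = demod > 0.02                  # threshold on demodulated amplitude

print("gas pixels found:", gas_mask.sum())
```

Correlating against the known modulation suppresses uncorrelated sensor noise by roughly the square root of the number of frames, which is why a weak absorption signature can still be localised.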
Abstract:
As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally a CNN is trained by a supervised learning method with labeled data and used as a classifier by adding a classification layer at the end. Its capability of extracting image features is largely limited by the difficulty of setting up a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can train the CNN with unlabeled artificial images, which enables easy expansion of the training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of the artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first CNN convolutional layer, and the CNN model is then fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset that labels Chinese, English, and numeral texts separately. A detection precision gain of more than 10% is observed over two comparison CNN models.
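The pre-training idea, learning sparse patch-level features without labels and reusing the weights as first-layer filters, can be sketched as below. This is a minimal patch-wise sparse auto-encoder with an L1 penalty, not the paper's CSAE; random patches stand in for rendered Chinese-character images, and all sizes and rates are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unlabeled artificial training patches (random data as a stand-in for
# rendered character patches, so the sketch stays self-contained).
n, patch, n_hidden = 500, 8, 16
X = rng.random((n, patch * patch))
X -= X.mean(axis=1, keepdims=True)      # zero-mean patches

We = 0.1 * rng.standard_normal((patch * patch, n_hidden))  # encoder
Wd = 0.1 * rng.standard_normal((n_hidden, patch * patch))  # decoder
lam, lr = 1e-3, 0.01
losses = []

for _ in range(300):
    Z = X @ We
    H = np.maximum(Z, 0.0)              # sparse code (ReLU activation)
    R = H @ Wd                          # reconstruction of the patches
    err = R - X
    losses.append(0.5 * np.mean(err ** 2))
    dWd = H.T @ err / n                 # reconstruction gradient, decoder
    dH = err @ Wd.T + lam * np.sign(H)  # reconstruction + L1 sparsity
    dWe = X.T @ (dH * (Z > 0)) / n      # backprop through the ReLU
    We -= lr * dWe
    Wd -= lr * dWd

# The learned encoder weights, reshaped into patch-sized filters, would
# initialize the first convolutional layer before supervised fine-tuning.
filters = We.T.reshape(n_hidden, patch, patch)
print("loss: %.4f -> %.4f" % (losses[0], losses[-1]))
```

The L1 term drives most hidden activations to zero, so each filter ends up responding to a small set of patch structures; the paper's CSAE applies the same principle convolutionally to stroke-like character features.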
Abstract:
Inspired by the human visual cognition mechanism, this paper first presents a scene classification method based on an improved standard model feature. Compared with state-of-the-art efforts in scene classification, the newly proposed method is more robust, more selective, and of lower complexity. These advantages are demonstrated by two sets of experiments on both our own database and standard public ones. Furthermore, occlusion and disorder problems in scene classification for video surveillance are also studied for the first time in this paper. © 2010 IEEE.