855 results for video object segmentation
Abstract:
This research establishes the feasibility of using a network-centric technology, Jini, to provide a grid framework on which to perform parallel video encoding. A solution was implemented using Jini and achieved real-time, on-demand encoding of a 480 HD video stream. A projection is also made concerning the encoding of 1080 HD video in real time, as the current grid was not powerful enough to sustain this above 15 fps. The research found that Jini provides a number of tools and services highly applicable in a grid environment. It is also suitable in terms of performance and responds well to a varying number of grid nodes. The main performance limiter was found to be the network bandwidth allocation, which, when loaded with a large number of grid nodes, was unable to handle the traffic.
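The abstract does not show the Jini implementation. As a rough illustration of the farming pattern it describes, splitting a stream into chunks and encoding them in parallel across grid nodes, here is a minimal Python sketch; the thread pool stands in for grid nodes, and `encode_chunk` is a hypothetical placeholder for a real codec call:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_chunk(frames):
    # Hypothetical stand-in for a real codec invocation on one
    # group of pictures; here it just reports how many frames
    # the "node" processed.
    return len(frames)

def parallel_encode(frames, n_nodes=4, gop=30):
    # Split the stream into GOP-sized chunks and farm them out,
    # one chunk per task, across the pool of worker "nodes".
    chunks = [frames[i:i + gop] for i in range(0, len(frames), gop)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        encoded = list(pool.map(encode_chunk, chunks))
    return sum(encoded)
```

In a real grid the chunk boundaries would fall on group-of-pictures boundaries so each node can encode independently, and the results would be re-assembled in stream order, which `pool.map` preserves here.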
Abstract:
Treating algebraic symbols as objects (e.g. "'a' means 'apple'") is a means of introducing elementary simplification of algebra, but causes problems further on. This school-based research included an examination of texts still in use in the mathematics department, and interviews with mathematics teachers, year 7 pupils and then year 10 pupils, asking them how they would explain "3a + 2a = 5a" to year 7 pupils. Results included the finding that the 'algebra as object' analogy can be found in textbooks in current use, including those recently published. Teachers knew that they were not 'supposed' to use the analogy but were not always clear why, and nevertheless described methods of teaching consistent with an 'algebra as object' approach. Year 7 pupils did not explicitly refer to 'algebra as object', although some of their responses could be interpreted that way. In the main, year 10 pupils used 'algebra as object' to explain simplification of algebra, with some complicated attempts to get around its limitations. Further research would look to establish whether the appearance of 'algebra as object' in pupils' thinking between years 7 and 10 is consistent and, if so, where it arises. There are also implications for ongoing teacher training, with alternatives to this way of introducing simplification.
Abstract:
Our research investigates the impact that hearing has on the perception of digital video clips, with and without captions, by discussing how hearing loss, captions and deafness type affect user Quality of Perception (QoP). QoP encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also their ability to analyse, synthesise and assimilate its informational content. Results show that hearing has a significant effect on participants' ability to assimilate information, independent of video type and use of captions. It is shown that captions do not necessarily provide deaf users with a 'greater level of information' from video, but cause a change in user QoP, depending on deafness type, which provides a 'greater level of context' for the video. It is also shown that post-lingual mild and moderately deaf participants predict their level of information assimilation less accurately than post-lingual profoundly deaf participants, despite their residual hearing. A positive correlation was identified between level of enjoyment (LOE) and self-predicted level of information assimilation (PIA), independent of hearing level or hearing type. When this is considered in a QoP quality framework, it calls into question how the user perceives certain factors, such as 'informative' and 'quality'.
Abstract:
This paper provides a solution for predicting moving/moving and moving/static collisions of objects within a virtual environment. Feasible prediction in real-time virtual worlds can be obtained by encompassing moving objects within a sphere and static objects within a convex polygon. Fast solutions are then attainable by describing the movement of objects parametrically in time as a polynomial.
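The abstract gives no equations, but the approach it describes can be sketched: with the sphere's centre moving as a polynomial in time, its signed distance to any bounding plane of the static convex polygon is itself a polynomial, and the earliest non-negative root gives the predicted collision time. A minimal Python sketch under that reading (one plane of the convex boundary, motion of degree at most two; all names are hypothetical):

```python
import math

def sphere_plane_collision_time(center_coeffs, radius, normal, offset):
    """Earliest t >= 0 at which a sphere whose centre follows a
    polynomial path (degree <= 2 here) touches the plane n.x = offset.

    center_coeffs: list of (x, y, z) tuples; entry i holds the
    coefficient of t**i in the centre's position, constant term first.
    """
    # Normalise the plane normal so distances are in world units.
    norm = math.sqrt(sum(c * c for c in normal))
    n = [c / norm for c in normal]
    # Project each coefficient onto the normal: the signed distance
    # to the plane is then a scalar polynomial in t.
    poly = [sum(n[k] * coeff[k] for k in range(3)) for coeff in center_coeffs]
    poly[0] -= offset + radius           # contact when distance == radius
    poly += [0.0] * (3 - len(poly))      # pad to a0 + a1*t + a2*t**2
    a0, a1, a2 = poly[:3]
    if abs(a2) < 1e-12:                  # linear motion along the normal
        roots = [-a0 / a1] if abs(a1) > 1e-12 else []
    else:                                # constant acceleration, e.g. gravity
        disc = a1 * a1 - 4.0 * a2 * a0
        if disc < 0.0:
            roots = []                   # the sphere never reaches the plane
        else:
            s = math.sqrt(disc)
            roots = [(-a1 - s) / (2.0 * a2), (-a1 + s) / (2.0 * a2)]
    hits = [t for t in roots if t >= 0.0]
    return min(hits) if hits else None

# A unit sphere starting at height 5 and descending at unit speed
# towards the ground plane z = 0: centre z(t) = 5 - t, contact at t = 4.
t_hit = sphere_plane_collision_time(
    [(0.0, 0.0, 5.0), (0.0, 0.0, -1.0)], 1.0, (0.0, 0.0, 1.0), 0.0)
```

A full moving/static test would repeat this against every face plane of the convex polygon and intersect the results; moving/moving pairs reduce to one sphere of combined radius and the relative motion polynomial.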
Abstract:
The literature has identified issues around transitions among phases for all pupils (Cocklin, 1999) including pupils with special educational needs (SEN) (Morgan 1999, Maras and Aveling 2006). These issues include pupils’ uncertainties and worries about building size and spatial orientation, exposure to a range of teaching styles, relationships with peers and older pupils as well as parents’ difficulties in establishing effective communications with prospective secondary schools. Research has also identified that interventions to facilitate these educational transitions should consider managerial support, social and personal familiarisation with the new setting as well as personalised learning strategies (BECTA 2004). However, the role that digital technologies can play in supporting these strategies or facilitating the role of the professionals such as SENCos and heads of departments involved in supporting effective transitions for pupils with SEN has not been widely discussed. Uses of ICT include passing references of student-produced media presentations (Higgins 1993) and use of photographs of activities attached to a timetable to support familiarisation with the secondary curriculum for pupils with autism (Cumine et al. 1998).
Abstract:
The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects or, more recently, in the sensory guidance of object avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms of a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects, respectively.
Abstract:
We investigate the impact of captions on deaf and hearing perception of multimedia video clips. We measure perception using a parameter called Quality of Perception (QoP), which encompasses not only a user's satisfaction with multimedia clips, but also his/her ability to perceive, synthesise and analyse the informational content of such presentations. By studying perceptual diversity, it is our aim to identify trends that will help future implementation of adaptive multimedia technologies. Results show that although hearing level has a significant effect on information assimilation, the effect of captions on the objective level of information assimilated is not significant. Deaf participants predict that captions significantly improve their level of information assimilation, although no significant objective improvement was measured. The level of enjoyment is unaffected by a participant's level of hearing or use of captions.
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects' appearance and movements with respect to models of behaviour (and the intentions thereby inferred). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers over the surveilled.
Abstract:
Video surveillance is a part of our daily life, even though we may not necessarily realize it. We might be monitored on the street, on highways, at ATMs, in public transportation vehicles, inside private and public buildings, in elevators, in front of our television screens, next to our babies' cribs, and at any spot where one can set up a camera.
Abstract:
Perception and action are tightly linked: objects may be perceived not only in terms of visual features, but also in terms of possibilities for action. Previous studies showed that when a centrally located object has a salient graspable feature (e.g., a handle), it facilitates motor responses corresponding with the feature's position. However, such so-called affordance effects have been criticized as resulting from spatial compatibility effects, due to the visual asymmetry created by the graspable feature, irrespective of any affordances. In order to dissociate affordance from spatial compatibility effects, we asked participants to perform a simple reaction-time task on typically graspable and non-graspable objects with similar visual features (e.g., lollipop and stop sign). Responses were measured using either electromyography (EMG) on proximal arm muscles during reaching-like movements, or finger key-presses. In both EMG and button-press measurements, participants responded faster when the object was either presented in the same location as the responding hand, or was graspable, yielding significant and independent spatial compatibility and affordance effects, with no interaction. Furthermore, while the spatial compatibility effect was present from the earliest stages of movement preparation and throughout the different stages of movement execution, the affordance effect was restricted to the early stages of movement execution. Finally, we tested a small group of unilateral arm amputees using EMG, and found a residual spatial compatibility effect but no affordance effect, suggesting that spatial compatibility effects do not necessarily rely on individuals' available affordances. Our results show a dissociation between affordance and spatial compatibility effects, and suggest that rather than evoking the specific motor action most suitable for interaction with the viewed object, graspable objects prompt the motor system in a general, body-part-independent fashion.
Abstract:
Does language modulate perception and categorisation of everyday objects? Here, we approach this question from the perspective of grammatical gender in bilinguals. We tested Spanish-English bilinguals and control native speakers of English in a semantic categorisation task on triplets of pictures in an all-in-English context while measuring event-related brain potentials (ERPs). Participants were asked to press one button when the third picture of a triplet belonged to the same semantic category as the first two, and another button when it belonged to a different category. Unbeknownst to them, in half of the trials the Spanish name of the third picture had the same grammatical gender as those of the first two, and the opposite gender in the other half. We found no priming in behavioural results for either semantic relatedness or gender consistency. In contrast, ERPs revealed not only the expected semantic priming effect in both groups, but also a negative modulation by gender inconsistency in the Spanish-English bilinguals exclusively. These results provide evidence for spontaneous and unconscious access to grammatical gender in participants functioning in a context requiring no access to such information, thereby providing support for linguistic relativity effects in the grammatical domain.