924 results for Video recording
Abstract:
This study focuses on the development of a project for raising awareness of linguistic and cultural diversity, entitled “Por ritmos nunca dantes navegados”, in a Portuguese kindergarten classroom, with the aim of assessing and understanding the development of phonological awareness in a group of twelve children aged 3 to 4. To that end, we adopted a mixed-methods approach, combining quantitative and qualitative procedures for data collection and analysis: in the first case, phonological awareness and auditory discrimination tests were administered to an experimental group and to a control group of six children before and after the awareness sessions; in the second case, we used direct observation and video recording of the project's four sessions. The analysis of the data, carried out through statistical and content analysis, yielded relevant results: the experimental group that attended the awareness sessions showed significant development of auditory discrimination and of phonological (syllable- and word-level) awareness. The statistical results suggest that contact with different languages, through awareness-raising activities, contributes to the discovery of the segmental units of languages, which are essential for performing auditory discrimination and phonological awareness tasks. The content analysis corroborated the results obtained in the auditory discrimination and phonological awareness tests, with the children in the experimental group showing in their interactions that they were able to discriminate sounds and to segment and manipulate words and syllables. These results support the importance of pluralistic approaches in preschool education, since contact with different sounds, languages and words leads children to attend to language as an object, performing on it multiple and rich activities that develop metalinguistic as well as social skills.
Abstract:
This study used a large spatial scale approach in order to better quantify the relationships between maerl bed structure and a selection of potentially forcing physical factors. Data on maerl bed structure and morpho-sedimentary characteristics were obtained from recent oceanographic surveys using underwater video recording and grab sampling. Considering the difficulties in carrying out real-time monitoring of highly variable hydrodynamic and physicochemical factors, these were generated by three-dimensional numerical models with high spatial and temporal resolution. The BIOENV procedure indicated that variation in the percentage cover of thalli can best be explained (correlation = 0.76) by a combination of annual mean salinity, annual mean nitrate concentration and annual mean current velocity, while the variation in the proportion of living thalli can best be explained (correlation = 0.47) by a combination of depth and mud content. Linear relationships showed that the percentage cover of maerl thalli was positively correlated with nitrate concentration (R2 = 0.78, P < 0.01) and negatively correlated with salinity (R2 = 0.81, P < 0.01), suggesting a strong effect of estuarine discharge on maerl bed structure, and also negatively correlated with current velocity (R2 = 0.81, P < 0.01). When maerl beds were deeper than 10 m, the proportion of living thalli was always below 30% but when they were shallower than 10 m, it varied between 4 and 100%, and was negatively correlated with mud content (R2 = 0.53, P < 0.01). On the other hand, when mud content was below 10%, the proportion of living thalli showed a negative correlation with depth (R2 = 0.84, P < 0.01). This large spatial scale explanation of maerl bed heterogeneity provides a realistic physical characterization of these ecologically interesting benthic habitats and usable findings for their conservation and management.
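The abstract reports simple linear relationships (R² and P values) between maerl thalli cover and individual environmental predictors. As a rough illustration only, the sketch below shows how such per-predictor regressions could be computed with SciPy; the file name and column names are assumptions, not the authors' data.

```python
# Illustrative sketch (hypothetical data and column names) of per-predictor
# linear regressions like those reported in the abstract: percentage cover of
# maerl thalli against annual mean nitrate, salinity and current velocity.
import pandas as pd
from scipy import stats

# Hypothetical table: one row per sampled maerl bed station.
stations = pd.read_csv("maerl_stations.csv")  # assumed file and columns

for predictor in ["nitrate", "salinity", "current_velocity"]:
    res = stats.linregress(stations[predictor], stations["thalli_cover_pct"])
    print(f"{predictor:18s} R2={res.rvalue**2:.2f}  p={res.pvalue:.3f}  slope={res.slope:+.3f}")
```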
Abstract:
Preserving the cultural heritage of the performing arts raises difficult and sensitive issues, as each performance is unique by nature and the juxtaposition between the performers and the audience cannot be easily recorded. In this paper, we report on an experimental research project to preserve another aspect of the performing arts—the history of their rehearsals. We have specifically designed non-intrusive video recording and on-site documentation techniques to make this process transparent to the creative crew, and have developed a complete workflow to publish the recorded video data and their corresponding meta-data online as Open Data using state-of-the-art audio and video processing to maximize non-linear navigation and hypervideo linking. The resulting open archive is made publicly available to researchers and amateurs alike and offers a unique account of the inner workings of the worlds of theater and opera.
Abstract:
Video recording of the performance piece "Aguas (Homenaje a Las Aguas)" by Manuel Mendive.
Abstract:
This paper describes the work being conducted in the baseline rail level crossing project, supported by the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper discusses the limitations of near-miss data for analysis obtained using current level crossing occurrence reporting practices. The project is addressing these limitations through the development of a data collection and analysis system with an underlying level crossing accident causation model. An overview of the methodology and improved data recording process are described. The paper concludes with a brief discussion of benefits this project is expected to provide the Australian rail industry.
Abstract:
Visual recording devices such as video cameras, CCTVs, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. The background subtraction algorithm extracts motion pixels from an image sequence, the pixels are then grouped into regions to represent moving objects, and finally the regions are identified as a certain object using classifiers. For evaluating the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems of work zone safety or productivity.
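As an illustration of the general pipeline described (motion-pixel extraction, region grouping, then classification), here is a minimal OpenCV sketch. It is not the authors' implementation: it uses a MOG2 background subtractor and contour grouping, leaves the classification step as a placeholder, and the video file name is hypothetical.

```python
# Minimal sketch (OpenCV 4.x assumed): extract motion pixels with background
# subtraction, group them into regions, and hand each region to a classifier.
import cv2

cap = cv2.VideoCapture("site_camera.mp4")  # hypothetical site video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # motion pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,       # suppress small noise
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                    # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        # A trained classifier (e.g. HOG + SVM) would label this region as
        # "worker" or "equipment" here; only the bounding box is drawn.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) == 27:                            # Esc to quit
        break
cap.release()
```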
Abstract:
Video is commonly used as a method for recording embodied interaction for purposes of analysis and design and has been proposed as a useful ‘material’ for interaction designers to engage with. But video is not a straightforward reproduction of embodied activity – in themselves, video recordings ‘flatten’ the space of embodied interaction, they impose a perspective on unfolding action, and they remove the embodied spatial and social context within which embodied interaction unfolds. This does not mean that video is not a useful medium with which to engage as part of a process of investigating and designing for embodied interaction – but crucially, it requires that designers' own bodies and bodily understandings be engaged with and brought into play when working with video. This paper describes and reflects upon our experiences of engaging with video in two different activities as part of a larger research project investigating the design of gestural interfaces for a dental surgery context.
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand the causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing, publicly available Histograms of Oriented Gradients (HOG) based object detector is used to detect various types of vehicles in each video frame. As vehicles are usually seen side-on from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that detects the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird's-eye view. A Line Segment Detector is employed as the feature extractor, and a sliding-window approach is developed to track a pair of rails. Localisation of the vehicles is done by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between the vehicle and the rail to be calculated. The resultant vehicle positions and distances are logged to a database for further analysis. We present preliminary results on the performance of a prototype video analytics module on a data set of videos containing more than 30 different railway level crossings. The video data were captured from a journey of a train that passed through these level crossings.
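The localisation step described above (projective transformation to a bird's-eye view, rail-like line extraction, and vehicle-to-rail distance) can be approximated with standard OpenCV calls. The sketch below is an assumption-laden illustration, not the project's module: the homography points, file name and detection coordinates are placeholders, and a Hough transform stands in for the Line Segment Detector.

```python
# Rough sketch of the localisation idea: warp the forward-facing frame to a
# bird's-eye view, find rail-like segments, and measure the lateral offset of
# a detected vehicle's ground point from the nearest rail (all values assumed).
import cv2
import numpy as np

frame = cv2.imread("level_crossing_frame.png")  # hypothetical frame

# Four ground-plane points in the camera view and their bird's-eye targets
# (placeholder values; in practice they come from camera calibration).
src = np.float32([[420, 380], [860, 380], [1180, 700], [100, 700]])
dst = np.float32([[0, 0], [400, 0], [400, 800], [0, 800]])
H = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(frame, H, (400, 800))

# Rail-like segments in the warped view (Hough transform used here instead of
# the Line Segment Detector named in the abstract).
edges = cv2.Canny(cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY), 80, 160)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=200, maxLineGap=20)

# Project a detected vehicle's ground contact point into the same plane and
# report its lateral distance (in warped pixels) to the closest rail segment.
vehicle_ground_pt = np.float32([[[640, 690]]])          # hypothetical detection
warped_pt = cv2.perspectiveTransform(vehicle_ground_pt, H)[0, 0]
if lines is not None:
    rail_xs = [(x1 + x2) / 2 for x1, _, x2, _ in lines[:, 0]]
    print("lateral offset (px):", min(abs(warped_pt[0] - x) for x in rail_xs))
```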
Abstract:
This paper describes a safety data recording and analysis system that has been developed to capture safety occurrences, including precursors, using high-definition forward-facing video from train cabs and data from other train-borne systems. The paper describes the data processing model and how events detected through data analysis are related to an underlying socio-technical model of accident causation. The integrated approach to safety data recording and analysis ensures that systemic factors which condition, influence or potentially contribute to an occurrence are captured both for safety occurrences and for precursor events, providing a rich tapestry of antecedent causal factors that can significantly improve learning around accident causation. This can ultimately benefit railways through the development of targeted and more effective countermeasures, better risk models and more effective use and prioritization of safety funds. Level crossing occurrences are a key focus in this paper, with data analysis scenarios describing causal factors around near-miss occurrences. The paper concludes with a discussion of how the system can also be applied to other types of railway safety occurrences.
Abstract:
Introduction
Markerless motion capture systems are relatively new devices that can significantly speed up capturing full-body motion. The precision of finger-position assessment with this type of equipment was evaluated at 17.30 ± 9.56 mm when compared to an active marker system [1]. The Microsoft Kinect has been proposed to standardize and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large cohort research. However, the precision of such systems needs to be characterized.
Global objectives
• To assess the precision within the recording field of the markerless motion capture system OpenStage 2 (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system with active markers.
Specific objectives
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between both systems regarding body joint angle measurement.
Methods
Equipment
• OpenStage® 2 (Organic Motion, NY)
o Markerless motion capture system
o 16 video cameras (acquisition rate: 60 Hz)
o Recording zone: 4 m × 5 m × 2.4 m (depth × width × height)
o Provides position and angle of 23 different body segments
• Visualeyez™ VZ4000 (PhoeniX Technologies Incorporated, BC)
o Optoelectric motion capture system with active markers
o 4-tracker system (total of 12 cameras)
o Accuracy: 0.5–0.7 mm
Protocol & Analysis
• Static noise:
o Motion recording of a humanoid mannequin was done in 13 different locations.
o RMSE was calculated for each segment in each location.
• Smallest oscillation detected:
o Small oscillations were induced to the humanoid mannequin and motion was recorded until it stopped.
o The correlation between the displacement of the head recorded by both systems was measured. A corresponding magnitude was also measured.
• Body joint angles:
o Body motion was recorded simultaneously with both systems (left side only).
o 6 participants (3 females; 32.7 ± 9.4 years old)
o Tasks: Walk, Squat, Shoulder flexion & abduction, Elbow flexion, Wrist extension, Pronation/supination (not in results), Head flexion & rotation (not in results), Leg rotation (not in results), Trunk rotation (not in results)
o Several body joint angles were measured with both systems.
o RMSE was calculated between the signals of both systems.
Results
Conclusion
Results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• Precision of the OpenStage system varied within the recording field.
• Precision is not constant between limb segments.
• The error seems to be higher close to the range-of-motion extremities.
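The comparison metrics mentioned in the protocol (RMSE between joint-angle signals from the two systems, and the correlation used in the oscillation test) amount to a few lines of NumPy. The sketch below uses synthetic signals purely for illustration; it is not the poster's analysis code.

```python
# Minimal sketch with synthetic, time-aligned signals: RMSE and correlation
# between a markerless trace and an active-marker reference trace.
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

t = np.linspace(0, 5, 300)                             # 5 s at 60 Hz
markerless = 40 * np.sin(2 * np.pi * 0.5 * t)          # e.g. elbow flexion angle (deg)
reference = markerless + np.random.normal(0, 2, t.size)  # noisier reference signal

print("RMSE (deg):", round(rmse(markerless, reference), 2))
print("correlation:", round(float(np.corrcoef(markerless, reference)[0, 1]), 3))
```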
Abstract:
We describe the application of two types of stereo camera systems in fisheries research, including the design, calibration, analysis techniques, and precision of the data obtained with these systems. The first is a stereo video system deployed by using a quick-responding winch with a live feed to provide species- and size-composition data adequate to produce acoustically based biomass estimates of rockfish. This system was tested on the eastern Bering Sea slope where rockfish were measured. Rockfish sizes were similar to those sampled with a bottom trawl and the relative error in multiple measurements of the same rockfish in multiple still-frame images was small. Measurement errors of up to 5.5% were found on a calibration target of known size. The second system consisted of a pair of still-image digital cameras mounted inside a midwater trawl. Processing of the stereo images allowed fish length, fish orientation in relation to the camera platform, and relative distance of the fish to the trawl netting to be determined. The video system was useful for surveying fish in Alaska, but it could also be used broadly in other situations where it is difficult to obtain species-composition or size-composition information. Likewise, the still-image system could be used for fisheries research to obtain data on size, position, and orientation of fish.
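To illustrate how a calibrated stereo pair yields a fish length, the sketch below triangulates two matched image points (snout and tail) from a rectified pair and takes the 3D distance between them. All calibration values and pixel coordinates are made up for illustration; this is not the authors' processing chain.

```python
# Illustrative stereo length measurement under assumed calibration values:
# triangulate snout and tail from a rectified pair, then take the 3D distance.
import numpy as np

focal_px = 1400.0      # focal length in pixels (assumed calibration value)
baseline_m = 0.30      # camera separation in metres (assumed)
cx, cy = 960.0, 540.0  # principal point (assumed)

def triangulate(u_left, v_left, u_right):
    """Rectified-pair triangulation from left-image pixel coords and disparity."""
    disparity = u_left - u_right
    z = focal_px * baseline_m / disparity
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return np.array([x, y, z])

snout = triangulate(1000, 540, 900)    # hypothetical matched image points
tail = triangulate(1130, 545, 1032)
print("fish length (m):", round(float(np.linalg.norm(snout - tail)), 3))
```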
Abstract:
This paper examines the use of visual technologies by political activists in protest situations to monitor police conduct. Using interview data with Australian video activists, this paper seeks to understand the motivations, techniques and outcomes of video activism, and its relationship to counter-surveillance and police accountability. Our data also indicate that there have been significant transformations in the organization and deployment of counter-surveillance methods since 2000, when large-scale protests against the World Economic Forum meeting in Melbourne were accompanied by a coordinated campaign that sought to document police misconduct. The paper identifies and examines two inter-related aspects of this: the act of filming and the process of disseminating this footage. It is noted that technological changes over the last decade have led to a proliferation of visual recording technologies, particularly mobile phone cameras, which have stimulated a corresponding proliferation of images. Analogous innovations in internet communications have stimulated a coterminous proliferation of potential outlets for these images. Video footage provides activists with a valuable tool for safety and publicity. Nevertheless, we argue, video activism can have unintended consequences, including exposure to legal risks and the amplification of official surveillance. Activists are also often unable to control the political effects of their footage or the purposes for which it is used. We conclude by assessing the impact that transformations in both protest organization and media technologies might have on counter-surveillance techniques based on visual surveillance.
Abstract:
Introduction
The use of video capture of lectures in Higher Education is not a recent occurrence, with web-based learning technologies, including digital recording of live lectures, being increasingly commonly offered by universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade the increase in technical infrastructural provision, including the availability of high-speed broadband, has increased the potential and use of videoed lecture capture. This has led to a variety of lecture capture formats, including podcasting, live streaming, and delayed broadcasting of whole or part lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the simulated recording of lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of videoing lectures.
The clustered areas of research in video lecture capture have the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the body of the research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al, 2012).
Generally, the research has reviewed the benefits of lecture capture positively for both students and faculty. This perception, coupled with technical infrastructure improvements and student demand, may well mean that the use of video lecture capture will continue to increase in tertiary education over the coming years. However, there is a relatively limited amount of research on the effects of lecture capture specifically in the area of computer programming, with Watkins (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and iterative nature of the actual code development process (Watkins et al., 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and video of on-screen projected content were recorded) impacted on the delivery and learning of a programme of study in an MSc Software Development course at Queen's University Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates from non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. The study also analyses and reports on empirical data from attendance records and various video viewing statistics. In addition, qualitative data were collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 postgraduate students taking a compulsory module in Java programming in the first semester of a one-year MSc in Software Development. A pre-course survey of students found that 58% preferred to have available videos of “key moments” of lectures rather than whole lectures. A large-scale study carried out by Guo concluded that “shorter videos are much more engaging” (Guo, 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of the lecture in a quiet environment, directly before or after the actual physical delivery of the lecture, and then uploaded the videos directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by theory-related coding demonstration activities, and faithfully replicated the face-to-face lecture. Generally, each lecture was supported by two to three videos with durations ranging from 20 to 30 minutes.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one element. To assess the effect on attendance for the programming module, a control was established: a Database module taken by the same students and running in the same semester.
Access engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported on the following areas for all videos and for each individual video: views (hits), audience retention, viewing devices/operating systems used, and minutes watched.
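As a purely hypothetical illustration of how such a per-video analytics export could be summarised (the file and column names are assumptions, not YouTube's actual export schema), a short pandas sketch follows.

```python
# Hypothetical summary of an exported per-video analytics report: total views,
# total minutes watched, and mean retention per video (column names assumed).
import pandas as pd

report = pd.read_csv("channel_video_analytics.csv")   # assumed export file

summary = report.groupby("video_title").agg(
    views=("views", "sum"),
    minutes_watched=("minutes_watched", "sum"),
    avg_retention_pct=("avg_percentage_viewed", "mean"),
).sort_values("views", ascending=False)

print(summary.head(10))
```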
Student attitudes
Three surveys were conducted to investigate student attitudes towards the videoing of lectures: the first before the start of the programming module, the second at the mid-point, and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The midpoint survey gathered data on how students were individually using the system up to that point, including feedback on how many videos an individual had watched, viewing duration, primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the midpoint survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, illustrating that there is little or no effect on attendance at lectures. The use of the videos appears to help promote continual learning, but they are accessed particularly heavily during assessment periods. Students respond positively to the ability to access lectures digitally as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There are also significant benefits to recording parts of lectures rather than whole lectures. The viewing-behaviour analytics suggest that, despite the increase in the popularity of online learning via MOOCs and the promotion of video learning on mobile devices, the vast majority of students in this study accessed the online videos at home on laptops or desktops. However, this is likely due in part to the nature of the taught subject, namely programming.
The research involved pre-recording the lecture in smaller timed units and then uploading them for distribution, in order to counteract existing quality issues with recording entire live lectures. However, the advancement and consequent improvement in quality of in-situ lecture capture equipment may well negate the need to record elsewhere. The research has also highlighted an area of potentially very significant use for performance analysis and improvement that could have major implications for the quality of teaching. A study of the analytics of the video viewings could provide a quick-response formative feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or afterwards, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.
Abstract:
Video capture of university lectures enables learners to be more flexible in their learning behaviour, for instance choosing to attend lectures in person or to watch them later. However, attendance at lectures has been linked to academic success and is a concern for faculty staff contemplating the introduction of video lecture capture. This research study was devised to assess the impact on learning of recording lectures in computer programming courses. The study also considered the behavioural trends and attitudes of the students watching recorded lectures, such as when, where, how often, for how long, and on which devices they watched. The findings suggest there is no detrimental effect on attendance at lectures, with video materials being used to support continual and reinforced learning and most access occurring during assessment periods. The analysis of viewing behaviours provides a rich and accessible data source that could potentially be leveraged to improve lecture quality and enhance lecturer and learning performance.
Abstract:
Real operation scene. This scene was recorded during a real irradiation operation, more specifically during its final tasks (removing the irradiated sample). This scene was an extra recording, in addition to the scripted and planned ones. - Scene: involved a number of people: two operators, two members of the radiological protection service, and the "client" who requested the irradiation. Video file labels: "20140402150657_IPCAM": recorded by the right camera.