Abstract:
Our ability to project the impact of global change on marine ecosystems is limited by our poor understanding of how to predict species sensitivity. For example, the impact of ocean acidification is highly species-specific, even in closely related taxa. The aim of this study was to test the hypothesis that the tolerance range of a given species to decreased pH corresponds to its natural range of exposure. Larvae of the green sea urchin Strongylocentrotus droebachiensis were cultured from fertilization to metamorphic competence (29 days) under a wide range of pH (from pHT = 8.0/pCO2 ~ 480 µatm to pHT = 6.5/pCO2 ~ 20 000 µatm), covering present variability (from pHT 8.7 to 7.6), projected near-future variability (from pHT 8.3 to 7.2) and beyond. Decreasing pH impacted all tested parameters (mortality, symmetry, growth, morphometry and respiration). Development of normal swimming larvae, albeit with morphological plasticity, was possible at pHT >= 7.0. Within that range, decreasing pH increased mortality and asymmetry and decreased body length (BL) growth rate. Larvae raised at lowered pH with similar BL had shorter arms and a wider body. Relative to a given BL, respiration rates and stomach volume both increased with decreasing pH, suggesting changes in energy budget. At the lowest pHs (pHT <= 6.5), all tested parameters were strongly negatively affected and no larva survived past 13 days post-fertilization. In conclusion, sea urchin larvae appear to be highly plastic when exposed to decreased pH, down to a physiological tipping point at pHT = 7.0. However, this plasticity was associated with direct (increased mortality) and indirect (decreased growth) consequences for fitness.
Abstract:
Anxiety disorders are the most prevalent form of psychopathology among children and adolescents. Because demand for treatment far exceeds availability, there is a need for alternative approaches that are engaging, accessible, cost-effective, and incorporate practice so as to reach as many youth as possible. One novel approach is a video game intervention called MindLight that uses two evidence-based strategies to target childhood anxiety problems. Using neurofeedback mechanics to train players to (1) attend to positive rather than threatening stimuli and (2) down-regulate arousal during stressful situations, MindLight teaches children how to practice overcoming anxious thoughts and arousal in a fun and engaging context. The present study examined the effectiveness of MindLight, versus online cognitive-behavioural therapy (CBT) based psychoeducation sessions as a comparison, in reducing anxiety in a sample of 144 anxious children. Anxiety was measured in three ways: (1) anxiety symptoms, (2) state anxiety in response to stress, and (3) psychophysiological arousal in response to stress. Children aged 8.05–17.78 years (M = 13.61, SD = 1.79) were randomly assigned to play MindLight or complete psychoeducation for five hours over three weeks. State anxiety and psychophysiological arousal were assessed in response to two stress tasks before and after exposure to MindLight or psychoeducation; anxiety symptoms were also measured via a questionnaire. Overall, participants showed significant reductions in anxiety symptoms and state anxiety in response to stress, but not in psychophysiological arousal in response to stress. Moreover, the magnitude of reductions in anxiety did not differ between interventions, but did differ by age and sex. Specifically, older participants showed a greater decrease in severity of state anxiety in response to a social stressor than younger participants, and girls showed a greater decrease in severity of state anxiety in response to a cognitive stressor than boys. The present study suggests that playing MindLight results in similar reductions in anxiety as one of the more common means of delivering CBT principles to youth.
Abstract:
I present here a sequence of short videos, Scenes of Provincial Life, forming a unified, ongoing online work. In my written commentary I discuss the work's context, genesis, facture and presentation, and thereby demonstrate its claim to originality as an art work. I go on to suggest one possible interpretive framework for it. I then discuss the nature of art works as candidates for the generation of new knowledge and conclude that art works in general fulfil this function, in a very carefully defined way, as a necessary condition of being art works. I further connect the success of any work as an art work with the richness of its knowledge-generating capacity, inseparably allied to its aesthetic force. I conclude that if Scenes of Provincial Life is seen to have value as an art work, it will therefore by definition be a creator of new knowledge.
Abstract:
In global engineering enterprises, information and knowledge sharing are critical factors that can determine a project's success, a statement widely acknowledged in the published literature. However, according to some academics, tacit knowledge is derived from a person's lifetime of experience, practice, perception and learning, which makes it hard to capture and document in order to be shared. This project investigates whether social media tools can be used to improve and enable tacit knowledge sharing within a global engineering enterprise. This paper first provides a brief background of the subject area, followed by an explanation of the industrial investigation, from which the proposed knowledge framework to improve tacit knowledge sharing is presented. The project's main focus is on improving collaboration and knowledge sharing amongst product development engineers in order to improve the whole product development cycle.
Abstract:
Presentation on the objective and approach of the Viewbrics project (www.viewbrics.nl), which examines the effect of combining textual rubrics with video examples on the quality of feedback, mental models and the acquisition of a number of complex (21st-century) skills (collaboration, presenting, information literacy) in Dutch secondary education.
Abstract:
The performance objectives used for the formative assessment of complex skills are generally set through text-based analytic rubrics [1]. Moreover, video modeling examples are a widely applied method of observational learning, providing students with context-rich modeling examples of complex skills that act as an analogy for problem solving [1]. The purpose of this theoretical paper is to synthesize the components of video modeling and rubrics to support the formative assessment of complex skills. Based on theory, we argue that application of the developed Video Enhanced Rubrics (VER) fosters learners' development of mental models, the quality of feedback provided by various actors and, finally, learners' mastery of complex skills.
Abstract:
To learn complex skills, like collaboration, learners need to acquire a concrete and consistent mental model of what it means to master this skill. If learners know their current mastery level and their targeted mastery level, they can better determine their subsequent learning activities. Rubrics support learners in judging their skill performance, as they provide textual descriptions of skills' mastery levels with performance indicators for all constituent subskills. However, text-based rubrics have a limited capacity to support the formation of mental models with contextualized, time-related and observable behavioural aspects of a complex skill. This paper outlines the design of a study that intends to investigate the effect of rubrics with video modelling examples, compared to text-based rubrics, on skills acquisition and feedback provisioning. The hypothesis is that video-enhanced rubrics, compared to text-based rubrics, will improve mental model formation of a complex skill and improve the quality of feedback a learner receives (from e.g. teachers, peers) while practicing a skill, hence positively affecting final mastery of the skill.
Abstract:
Paper presentation at the TEA2016 conference, Tallinn, Estonia.
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are: 56% in sound source localisation computational cost over an audio-only system, 8% in speaker diarisation error rate over an audio-only speaker recognition unit, and 36% on the precision–recall metric over an audio–video dominant speaker recognition method.
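The multimodal fusion idea described above — combining position estimates from audio localisation and visual tracking so that one modality compensates when the other fails (e.g. under occlusion) — can be sketched as a confidence-weighted average. This is a minimal illustration under assumed inputs, not the paper's actual tracker; the function name, confidences and positions are hypothetical:

```python
import numpy as np

def fuse_estimates(audio_pos, audio_conf, video_pos, video_conf):
    """Fuse 2-D speaker position estimates from audio and video,
    weighting each modality by its confidence. When one modality is
    unreliable (confidence near 0, e.g. visual occlusion), the fused
    estimate falls back toward the other."""
    total = audio_conf + video_conf
    if total == 0:
        return None  # no reliable observation from either modality
    w_audio = audio_conf / total
    w_video = video_conf / total
    return w_audio * np.asarray(audio_pos, float) + w_video * np.asarray(video_pos, float)

# Video heavily occluded: the fused estimate leans on the audio localiser.
fused = fuse_estimates(audio_pos=(2.0, 3.0), audio_conf=0.9,
                       video_pos=(2.4, 3.4), video_conf=0.1)
```

A full tracker would of course filter these fused observations over time (e.g. with a Kalman or particle filter), but the weighting step captures why the multimodal system tolerates occlusions and cross-talk better than either modality alone.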
Abstract:
In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results on datasets that span the range from high-resolution human-robot interaction data (close-up faces plus depth information) to challenging low-resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment. Using this probabilistic model, we show that many higher-level scene understanding tasks, such as human-human/scene interaction detection, can be achieved. Our solution runs in real-time on commercial hardware.
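The classify-then-regress combination described above can be sketched as follows: a coarse classifier bins gazing direction into discrete sectors, a regressor refines the angle within the chosen sector, and the classifier's softmax probability serves as an approximate confidence for the regressed estimate. This is a hypothetical sketch of the general idea, not the paper's architecture; the bin layout, logits and offset are illustrative:

```python
import numpy as np

BIN_CENTERS = np.array([0.0, 90.0, 180.0, 270.0])  # coarse gaze sectors (degrees)

def estimate_pose(class_logits, regressed_offset):
    """Combine coarse classification with fine regression.
    Returns the refined gaze angle and an approximate confidence
    derived from the classifier's softmax probability."""
    probs = np.exp(class_logits - class_logits.max())
    probs /= probs.sum()                    # softmax over gaze sectors
    k = int(probs.argmax())                 # most likely coarse sector
    angle = (BIN_CENTERS[k] + regressed_offset) % 360.0
    confidence = float(probs[k])            # class probability as confidence proxy
    return angle, confidence

# Classifier strongly favours the 90° sector; regressor refines by +12.5°.
angle, conf = estimate_pose(np.array([0.2, 3.1, 0.1, -1.0]), regressed_offset=12.5)
```

In a real pipeline the logits and offset would come from the CNN classifier and the fine-tuned regression head respectively; the point is that the discrete posterior gives a cheap confidence signal for the continuous estimate.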
Abstract:
FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system. We use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W extra power in our testbed desktop system.
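The run-time mapping described above — profiling each implementation for power, latency and accuracy, then letting the scene's anomaly score pick the trade-off point — can be sketched as a simple policy. The profile names, numbers and thresholds below are illustrative assumptions, not measurements from the paper:

```python
# Hypothetical power/latency/accuracy profiles for each detector
# implementation (illustrative values only).
PROFILES = {
    "cpu_power_optimised": {"power_w": 5.0,  "latency_ms": 120.0, "accuracy": 0.78},
    "fpga_fast":           {"power_w": 20.0, "latency_ms": 10.0,  "accuracy": 0.85},
    "gpu_accurate":        {"power_w": 60.0, "latency_ms": 25.0,  "accuracy": 0.90},
}

def select_implementation(anomaly_score, low=0.3, high=0.7):
    """Map the current scene anomaly score to an implementation:
    routine/static scenes run the power-optimised version, highly
    anomalous scenes run a faster, more power-hungry one."""
    if anomaly_score < low:
        return "cpu_power_optimised"
    if anomaly_score < high:
        return "fpga_fast"
    return "gpu_accurate"

# A strongly anomalous trajectory triggers the most capable detector.
choice = select_implementation(0.85)
```

In the actual system the anomaly measure would be updated continuously from object trajectories against learned movement patterns, and the selector would also account for switching overheads between FPGA, GPU and CPU.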