804 results for video as a research tool
Abstract:
Egan, K. (2007). Trash or Treasure?: Censorship and the Changing Meanings of the Video Nasties. Inside Popular Film. Manchester: Manchester University Press.
Abstract:
The article deals with the use of case studies in the professional preparation of prospective teachers. Applying the case-study method is one suitable way to develop professional teaching competences. A case study here means a complex, creative solution to a given teaching situation under simulated teaching conditions; it is based on interactive, situational education and decision making. Case studies improve not only the professional and teaching competences of future teachers; they also develop students' self-evaluation and self-reflection skills. Increasing professional competences requires a thorough analysis of the video record of the implemented case study. Such analysis is the subject of a research project of the student grant agency at the University of West Bohemia in Pilsen.
Abstract:
A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second-order Markov model is used to predict the evolution of the skin-color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and predictions of the Markov model. The evolution of the skin-color distribution at each frame is parameterized by translation, scaling, and rotation in color space. Consequent changes in the geometric parameterization of the distribution are propagated by warping and resampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using maximum likelihood estimation and also evolve over time. The accuracy of the new dynamic skin-color segmentation algorithm is compared to that obtained with a static color model. Segmentation accuracy is evaluated using labeled ground-truth video sequences taken from staged experiments and popular movies. An overall increase in segmentation accuracy of up to 24% is observed in 17 of 21 test sequences. In all but one case the skin-color classification rates for our system were higher, with background classification rates comparable to those of the static segmentation.
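The dynamic skin-color model described above can be illustrated with a short sketch. This is a minimal approximation, not the authors' implementation: the learned second-order Markov prediction over histogram translation, scaling, and rotation is replaced here by a simple exponential blend between the model histogram and the histogram of the current segmentation, and all parameter values are assumptions.

```python
import cv2
import numpy as np

ALPHA = 0.1  # assumed blending weight; the paper predicts the update instead

def skin_histogram(hsv, mask):
    """Hue-saturation histogram of the pixels currently labeled as skin."""
    hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment(hsv, hist, thresh=50):
    """Back-project the skin-color model onto the frame and threshold it."""
    prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(prob, thresh, 255, cv2.THRESH_BINARY)
    return mask

def track(frames, init_mask):
    """Yield a skin mask per frame, adapting the histogram from feedback."""
    hsv = cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV)
    hist = skin_histogram(hsv, init_mask)
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = segment(hsv, hist)
        # Feedback: blend in the histogram of the current segmentation so
        # the model follows gradual illumination changes.
        hist = ((1 - ALPHA) * hist
                + ALPHA * skin_histogram(hsv, mask)).astype(np.float32)
        yield mask
```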
Abstract:
In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently appear in front of each other or in front of the face. Thus, hand location is often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters, and tend to drift and lose track in extended sequences. In this paper, a temporal filtering framework for hand tracking is proposed that can initialize and reset itself without human intervention. In each frame, simple features like color and motion residue are exploited to identify multiple candidate hand locations. The temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion, and detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences of several hundred frames in duration demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
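The core of the temporal filter, choosing one candidate hand location per frame with the Viterbi algorithm, can be sketched as follows. This is an illustrative reconstruction under assumed scoring: detection log-likelihoods per candidate and a Gaussian motion penalty between frames; the paper's actual features and costs may differ.

```python
import numpy as np

def viterbi_track(candidates, scores, sigma=20.0):
    """Select one candidate position per frame by maximizing the sum of
    detection scores and motion-smoothness terms along the sequence.

    candidates: list of (k_t, 2) arrays of (x, y) locations per frame
    scores:     list of (k_t,) arrays of detection log-likelihoods
    sigma:      assumed motion scale in pixels (illustrative value)
    """
    back = []
    prev = scores[0].astype(float)
    for t in range(1, len(candidates)):
        # Quadratic penalty on the jump between consecutive candidates.
        d = np.linalg.norm(candidates[t][:, None, :]
                           - candidates[t - 1][None, :, :], axis=2)
        total = prev[None, :] - d ** 2 / (2 * sigma ** 2)
        back.append(total.argmax(axis=1))
        prev = total.max(axis=1) + scores[t]
    # Backtrack the best path from the final frame.
    path = [int(prev.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    path.reverse()
    return [candidates[t][i] for t, i in enumerate(path)]
```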
Abstract:
Hand signals are commonly used in applications such as a pilot being guided through airplane takeoff or a crane operator being directed by a foreman on the ground. A new algorithm for recognizing hand signals from a single camera is proposed. Typically, tracked 2D feature positions of hand signals are matched to 2D training images. In contrast, our approach matches the 2D feature positions to an archive of 3D motion-capture sequences. The method avoids explicit reconstruction of the 3D articulated motion from 2D image features; instead, the matching between the 2D and 3D sequences is done by backprojecting the 3D motion-capture data onto 2D. Experiments demonstrate the effectiveness of the approach in an example application: recognizing six classes of basketball referee hand signals in video.
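The backprojection idea, matching observed 2D feature tracks against 3D motion-capture sequences projected into the image, can be sketched as below. The known camera matrix, equal sequence lengths, and nearest-mean-distance matching are simplifying assumptions, not the paper's stated procedure.

```python
import numpy as np

def project(points3d, P):
    """Project (n, 3) world points into the image with a 3x4 camera matrix."""
    homog = np.hstack([points3d, np.ones((len(points3d), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def match_score(track2d, mocap3d, P):
    """Mean distance between a 2D feature track and one backprojected
    motion-capture sequence (frames assumed aligned; a real system would
    need temporal alignment, e.g. dynamic time warping)."""
    return float(np.mean([np.linalg.norm(obs - project(ref, P), axis=1).mean()
                          for obs, ref in zip(track2d, mocap3d)]))

def classify(track2d, archive, P):
    """Label of the archived 3D sequence whose backprojection fits best."""
    return min(archive, key=lambda lbl: match_score(track2d, archive[lbl], P))
```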
Abstract:
We introduce a view-point invariant representation of moving object trajectories that can be used in video database applications. It is assumed that trajectories lie on a surface that can be locally approximated with a plane. Raw trajectory data is first locally approximated with a cubic spline via least squares fitting. For each sampled point of the obtained curve, a projective invariant feature is computed using a small number of points in its neighborhood. The resulting sequence of invariant features computed along the entire trajectory forms the view invariant descriptor of the trajectory itself. Time parametrization has been exploited to compute cross ratios without ambiguity due to point ordering. Similarity between descriptors of different trajectories is measured with a distance that takes into account the statistical properties of the cross ratio, and its symmetry with respect to the point at infinity. In experiments, an overall correct classification rate of about 95% has been obtained on a dataset of 58 trajectories of players in soccer video, and an overall correct classification rate of about 80% has been obtained on matching partial segments of trajectories collected from two overlapping views of outdoor scenes with moving people and cars.
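The descriptor rests on projective invariants computed from small neighborhoods of trajectory points. As a sketch, the code below computes a classical five-point projective invariant of coplanar points from ratios of triangle areas; the paper's exact cross-ratio construction and its spline-based sampling are not reproduced here, and the sliding-window spacing is an assumption.

```python
import numpy as np

def tri(a, b, c):
    """Determinant of three planar points in homogeneous form
    (twice the signed triangle area)."""
    return np.linalg.det(np.array([[*a, 1.0], [*b, 1.0], [*c, 1.0]]))

def five_point_invariant(p1, p2, p3, p4, p5):
    """A ratio of triangle areas that is unchanged by any homography of
    the plane, hence view-point invariant for coplanar points.
    (Degenerate for windows containing collinear points.)"""
    return (tri(p1, p2, p3) * tri(p1, p4, p5)) / (
            tri(p1, p2, p4) * tri(p1, p3, p5))

def descriptor(traj, step=1):
    """Slide a five-point window along a resampled (n, 2) trajectory and
    collect the invariant at each position."""
    return np.array([five_point_invariant(*traj[i:i + 5])
                     for i in range(0, len(traj) - 4, step)])
```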
Abstract:
The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example, a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker has performed reliably and shown its promise as a therapeutic device.
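The movement-to-music mapping at the heart of such an interface can be as simple as turning a tracked coordinate into a pitch. The sketch below assumes a vision front end already supplies an (x, y) position per frame; the note range and linear mapping are illustrative choices, not Music Maker's or EyesWeb's actual mapping.

```python
def position_to_note(y, frame_height, low=48, high=72):
    """Map a tracked vertical position to a MIDI note number
    (top of the frame gives the highest pitch)."""
    frac = 1.0 - min(max(y / frame_height, 0.0), 1.0)
    return int(round(low + frac * (high - low)))
```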
Abstract:
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution to the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement; no corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end-effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
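The key transformation in the abstract, from a spatial direction vector to the joint rotations that move the end effector in that direction, has a fixed-kinematics analogue in pseudoinverse-Jacobian control. The sketch below illustrates that analogue for a planar redundant arm; the DIRECT model instead learns these coordinate transformations during motor babbling, so this is a reference point, not the model itself.

```python
import numpy as np

def jacobian(thetas, lengths):
    """Jacobian of a planar arm's end-effector position w.r.t. joint
    angles; lengths is an array of link lengths."""
    cum = np.cumsum(thetas)
    J = np.zeros((2, len(thetas)))
    for i in range(len(thetas)):
        # Rotating joint i moves every link from i outward.
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

def reach_step(thetas, lengths, target, gain=0.1):
    """One small step of the end effector along the spatial direction
    vector toward the target, distributed over redundant joints."""
    cum = np.cumsum(thetas)
    tip = np.array([np.sum(lengths * np.cos(cum)),
                    np.sum(lengths * np.sin(cum))])
    direction = target - tip  # spatial direction vector to the target
    dtheta = np.linalg.pinv(jacobian(thetas, lengths)) @ direction
    return thetas + gain * dtheta
```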
Abstract:
Occupational therapists need to embrace the use of mainstream technology in their quest to ensure that therapy remains current and meaningful to their clients. Technology can be useful to improve both functional independence and occupational performance. This opinion piece introduces how occupational therapists can apply mainstream technologies, including information and communication technologies such as the internet, computer software, portable devices and computer games, in their everyday interventions.
Abstract:
An enduring challenge for the policy and political sciences is the valid and reliable depiction of policy designs. One emerging approach for dissecting policy designs is the application of Sue Crawford and Elinor Ostrom's institutional grammar tool. The grammar tool offers a method to systematically identify the core elements that constitute policies, including target audiences, expected patterns of behavior, and formal modes of sanctioning for noncompliance. This article provides three contributions to the study of policy designs by developing and applying the institutional grammar tool. First, we provide revised guidelines for applying the institutional grammar tool to the study of policy design. Second, an additional component of the grammar, called the oBject, is introduced. Third, we apply the modified grammar tool to four policies that shape Colorado State Aquaculture to demonstrate its effectiveness and utility in illuminating institutional linkages across levels of analysis. The conclusion summarizes the contributions of the article and points to future research and applications of the institutional grammar tool.
Abstract:
Gemstone Team ILL (Interactive Language Learning)
Abstract:
Nolan and Temple Lang argue that "the ability to express statistical computations is an essential skill." A key related capacity is the ability to conduct and present data analysis in a way that another person can understand and replicate. The copy-and-paste workflow that is an artifact of antiquated user-interface design makes reproducibility of statistical analysis more difficult, especially as data become increasingly complex and statistical methods become increasingly sophisticated. R Markdown is a new technology that makes creating fully reproducible statistical analyses simple and painless. It provides a solution suitable not only for cutting-edge research but also for use in an introductory statistics course. We present experiential and statistical evidence that R Markdown can be used effectively in introductory statistics courses, and discuss its role in the rapidly changing world of statistical computation.
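As a flavor of the workflow being advocated, here is a minimal R Markdown source file (the content is invented for illustration): the narrative and the computation live in one document, and the rendered report recomputes the result instead of relying on copy-and-paste.

````markdown
---
title: "A fully reproducible mini-analysis"
output: html_document
---

The summary statistic below is computed when the document is rendered,
never pasted in by hand:

```{r}
scores <- c(12, 15, 9, 11)
mean(scores)
```
````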
Abstract:
BACKGROUND: In academia, the scholarship of research may include, but is not limited to, peer-reviewed publications, presentations, and grant submissions. Programmatic research productivity is one of many measures of academic program reputation and ranking. Another measure of learning success among physical therapist education programs in the USA is a 100% three-year pass rate of graduates on the standardized National Physical Therapy Examination (NPTE). In this study, we sought to determine whether there was an association between research productivity, measured through artifacts, and 100% three-year pass rates on the NPTE. METHODS: This observational study used pre-approved database exploration covering all accredited programs in the USA that graduated physical therapists during 2009, 2010, and 2011. Descriptive variables captured included raw research-productivity artifacts such as peer-reviewed publications and books, number of professional presentations, number of scholarly submissions, total grant dollars, and number of grants submitted. Descriptive statistics and comparisons (using chi-square and t-tests) among program characteristics and research artifacts were calculated. Univariate logistic regression analyses, with appropriate control variables, were used to determine associations between research artifacts and 100% pass rates. RESULTS: Numbers of scholarly artifacts submitted, faculty with grants, and grant proposals submitted were significantly higher in programs with 100% three-year pass rates. However, after controlling for program characteristics such as grade point average, percentage diversity of the cohort, public/private institution, and number of faculty, there were no significant associations between scholarly artifacts and 100% three-year pass rates. CONCLUSIONS: Factors other than research artifacts are likely better predictors of passing the NPTE.
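The analysis design, univariate logistic regression of a binary pass-rate outcome on one research-artifact measure with program-level controls, can be sketched as follows. All column names are invented placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm

def artifact_association(df: pd.DataFrame, artifact_col: str):
    """Logistic regression of a 100% three-year pass-rate indicator on one
    artifact measure plus assumed program-level controls; returns the
    artifact coefficient and its p-value."""
    controls = ["gpa", "pct_diversity", "is_public", "n_faculty"]  # placeholders
    X = sm.add_constant(df[[artifact_col] + controls])
    fit = sm.Logit(df["pass_100"], X).fit(disp=False)
    return fit.params[artifact_col], fit.pvalues[artifact_col]
```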
Abstract:
BACKGROUND: The detection of latent tuberculosis infection (LTBI) is a major component of tuberculosis (TB) control strategies. In addition to the tuberculin skin test (TST), novel blood tests, based on in vitro release of IFN-gamma in response to the Mycobacterium tuberculosis-specific antigens ESAT-6 and CFP-10 (IGRAs), are used for TB diagnosis. However, neither IGRAs nor the TST can separate acute TB from LTBI, and there is concern that responses in IGRAs may decline with time after infection. We have therefore evaluated the potential of the novel antigen heparin-binding hemagglutinin (HBHA) for in vitro detection of LTBI. METHODOLOGY AND PRINCIPAL FINDINGS: HBHA was compared to purified protein derivative (PPD) and ESAT-6 in IGRAs on lymphocytes drawn from 205 individuals living in Belgium, a country with low TB prevalence where BCG vaccination is not routinely used. Among these subjects, 89 had active TB, 65 had LTBI based on well-standardized TST reactions, and 51 were negative controls. HBHA was significantly more sensitive than ESAT-6 and more specific than PPD for the detection of LTBI. PPD-based tests yielded 90.00% sensitivity and 70.00% specificity for the detection of LTBI, whereas the sensitivity and specificity of the ESAT-6-based tests were 40.74% and 90.91%, and those of the HBHA-based tests were 92.06% and 93.88%, respectively. The QuantiFERON-TB Gold In-Tube (QFT-IT) test applied to 20 LTBI subjects yielded 50% sensitivity. The HBHA IGRA was not influenced by prior BCG vaccination, and, in contrast to the QFT-IT test, remote (>2 years) infections were detected as well as recent (<2 years) infections by the HBHA-specific test. CONCLUSIONS: The use of ESAT-6- and CFP-10-based IGRAs may underestimate the incidence of LTBI, whereas the use of HBHA may combine the operational advantages of IGRAs with high sensitivity and specificity for latent infection.
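For reference, the reported accuracy figures are simple ratios over a confusion table. In the check below the counts are back-calculated to reproduce the HBHA percentages and are purely illustrative, not the paper's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 58/63 = 92.06% sensitivity and 46/49 = 93.88% specificity (illustrative)
sens, spec = sensitivity_specificity(tp=58, fn=5, tn=46, fp=3)
```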
Abstract:
We provide a select overview of tools supporting traditional Jewish learning, and then discuss our own HyperJoseph/HyperIsaac project in instructional hypermedia. Its application is to teaching, teacher training, and self-instruction in given Bible passages; the treatment of two narratives has been developed thus far. The tool enables analysis of the text in several respects: linguistic, narratological, and so on. Moreover, the Scriptures' focality throughout cultural history makes this domain of application particularly challenging, in that the tool is required to encompass the accretion of receptions in the cultural repertoire, i.e., several layers of textual traditions, either hermeneutic (interpretive) or appropriative, related to the given core passage. This includes "secondary" texts (texts that respond to or derive from the primary one) from realms as disparate as Roman-age and later homiletics, medieval and later commentaries or supercommentaries, literary appropriations, and references in the arts and modern scholarship. In particular, the Midrash (homiletic expansions) is adept at narrative gap-filling, so the narratives mushroom at the interstices where the primary text is silent. The genealogy of the project is rooted in Weiss's index of the novelist Agnon's writings, which was eventually upgraded into a hypertextual tool including Agnon's full texts and ancillary materials. As those early tools were intended primarily for reference and research support in literary studies, the Agnon hypertext system was initially emulated in the conception of HyperJoseph, which is applied to the Joseph story from Genesis. The subsequent transition from a reference tool to an instructional tool required a thorough reconception from an educational perspective, which led to HyperIsaac, on the sacrifice of Isaac, and to a redesign and upgrade of HyperJoseph patterned after HyperIsaac.