975 results for worked example videos


Relevance:

80.00%

Publisher:

Abstract:

We describe a novel method for human activity segmentation and interpretation in surveillance applications, based on Gabor filter-bank features. A complex human activity is modeled as a sequence of elementary human actions such as walking, running, jogging, boxing and hand-waving. Since the human silhouette can be modeled by a set of rectangles, each elementary action can be modeled as a sequence of sets of rectangles with different orientations and scales. Activity segmentation is based on Gabor filter-bank features and normalized spectral clustering. The feature trajectories of each action category are learnt from training example videos using Dynamic Time Warping. The combined segmentation and recognition processes are very efficient because both algorithms share the same framework, and the Gabor features computed for the former can be reused for the latter. We also propose a simple shadow-detection technique to extract a clean silhouette, which is necessary for good accuracy in action recognition. © 2008 IEEE.
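
The paper's pipeline cannot be reproduced from the abstract alone; as a hedged illustration of the Dynamic Time Warping step it describes, the sketch below compares two per-frame feature trajectories (for instance, Gabor filter-bank responses) and classifies a query action by its nearest learnt template. The function names and the nearest-template rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two feature trajectories.

    seq_a, seq_b: arrays of shape (n_frames, n_features), e.g. per-frame
    Gabor filter-bank responses. Illustrative only; the paper's exact
    feature construction and any path constraints are not reproduced.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Assign the query trajectory the label of its nearest learnt template."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```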

Relevance:

80.00%

Publisher:

Abstract:

Equations are proposed for determining the orientation, length and area of the shadow cast by trees planted in cattle pastures, taking into account the location, the time of year and the time of day. The equations cover trees with the following crown shapes: spherical, lens-shaped, cylindrical, ellipsoidal, conical and inverted-conical. A worked example is presented, with a discussion of its application to pasture shading.
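
The paper's crown-shape-specific equations are not given in the abstract. As a hedged sketch of the underlying geometry only, the following uses standard solar-position formulas (Cooper's declination approximation and the usual elevation equation) to compute the shadow length cast by a vertical object; the height, date and latitude in the example are illustrative, not from the paper.

```python
import math

def solar_elevation(latitude_deg, day_of_year, hour):
    """Approximate solar elevation angle (degrees) from standard formulas.

    Simplification: local clock time is taken as solar time.
    """
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (hour - 12.0)  # degrees, negative before solar noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_h = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_h))

def shadow_length(object_height, elevation_deg):
    """Length of the shadow cast on level ground by a vertical object."""
    return object_height / math.tan(math.radians(elevation_deg))

# Example: a 10 m tree at latitude -22 deg, around the June solstice, 3 pm.
h = solar_elevation(-22.0, day_of_year=172, hour=15.0)
print(f"elevation {h:.1f} deg, shadow {shadow_length(10.0, h):.1f} m")
```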

Relevance:

80.00%

Publisher:

Abstract:

This paper explores the benefits of using immersive and interactive virtual-reality environments to teach dentistry. We present a tool that lets educators manipulate and edit virtual models. One of the main contributions is that multimedia information can be semantically associated with parts of the model through an ontology, enriching the experience; for example, a video demonstrating how to extract a tooth can be linked to that tooth. The semantic information gives the models greater flexibility, since filters can be applied to create temporary models that show subsets of the original data in a human-friendly way. We also explain how the software was written to run in arbitrary multi-projection environments. © 2011 Springer-Verlag.
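
The paper's actual ontology is not described in the abstract; the toy sketch below only illustrates the general idea of tagging model parts with semantic labels, linking media to them, and applying a filter to build a temporary sub-model. All identifiers, labels and file names are invented for illustration.

```python
# A toy stand-in for ontology-based association: model parts (teeth,
# identified by hypothetical keys) carry semantic labels and links to
# multimedia resources.
model = {
    "tooth_18": {"labels": {"molar", "upper"}, "media": ["extract_18.mp4"]},
    "tooth_11": {"labels": {"incisor", "upper"}, "media": ["anatomy_11.mp4"]},
    "tooth_46": {"labels": {"molar", "lower"}, "media": ["extract_46.mp4"]},
}

def filter_model(model, required_label):
    """Build a temporary sub-model containing only parts with a given label,
    in the spirit of the paper's semantic filters."""
    return {part: info for part, info in model.items()
            if required_label in info["labels"]}

print(filter_model(model, "molar"))  # only the molars, with their linked videos
```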

Relevance:

80.00%

Publisher:

Abstract:

This paper explores the benefits of using immersive and interactive multi-projection environments (CAVEs) to visualize molecules, and how they improve users' understanding. We have proposed and implemented one tool for teachers to manipulate molecules and another to edit molecules and assist students at home. The contribution of this research is a set of tools for investigating the structure, properties and dynamics of molecular systems that are extremely complex and comprise millions of atoms. The experience is enriched by multimedia information associated with parts of the model; for example, videos and text can be linked to a specific molecule to demonstrate some detail. The solution is grounded in a teaching-learning process.

Relevance:

80.00%

Publisher:

Abstract:

This paper provides a brief but comprehensive guide to creating, preparing and dissecting a 'virtual' fossil, using a worked example to demonstrate some standard data-processing techniques. Computed tomography (CT) is a 3D imaging modality for producing 'virtual' models of an object on a computer. In the last decade, CT technology has greatly improved, allowing bigger and denser objects to be scanned increasingly rapidly. The technique has now reached a stage where systems can facilitate large-scale, non-destructive comparative studies of extinct organisms and their living relatives. Consequently, the main limiting factor in CT-based analyses is no longer scanning but data processing: the techniques required to convert a 3D CT volume (a stack of digital slices) into a virtual image of the fossil that can be prepared (separated) from the matrix and 'dissected' into its anatomical parts. These techniques can be applied to specimens, or parts of specimens, embedded in the rock matrix that have until now been impossible to visualise. This paper presents a suggested workflow explaining the steps required, using as an example a fossil tooth of Sphenacanthus hybodoides (Egerton), a shark from the Late Carboniferous of England. The original NHMUK copyrighted CT slice stack can be downloaded for practice of the described techniques, which include segmentation, rendering, movie animation, stereo-anaglyphy, data storage and dissemination. Fragile, rare specimens and type material in university and museum collections can therefore be virtually processed for a variety of purposes, including virtual loans, website illustrations, publications and digital collections. Micro-CT and other 3D imaging techniques are increasingly used to facilitate data sharing among scientists and in education and outreach projects. Hence there is the potential to usher in a new era of global scientific collaboration and public communication based on specimens in museum collections.
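
As a hedged sketch of the segmentation and rendering steps named above (not the paper's actual workflow or software), the following separates dense material from the surrounding matrix in a CT volume with a global Otsu threshold, keeps the largest connected component, and extracts a surface mesh with scikit-image; the file path is a placeholder.

```python
import numpy as np
from skimage import filters, measure

# volume: a 3-D NumPy array built by stacking the CT slices
# (e.g. loaded with imageio or tifffile). Placeholder path below.
volume = np.load("ct_stack.npy")

# Segmentation: separate dense tooth tissue from the rock matrix with a
# global threshold, then keep only the largest connected component.
threshold = filters.threshold_otsu(volume)
binary = volume > threshold
labels = measure.label(binary)
largest_label = np.argmax(np.bincount(labels.ravel())[1:]) + 1  # skip background
tooth = (labels == largest_label).astype(np.float32)

# Rendering: extract a triangle mesh of the tooth surface for visualisation.
verts, faces, normals, values = measure.marching_cubes(tooth, level=0.5)
print(f"mesh with {len(verts)} vertices and {len(faces)} faces")
```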

Relevance:

80.00%

Publisher:

Abstract:

Systematic reviews of well-designed trials constitute a high level of scientific evidence and are important for medical decision making. Meta-analysis facilitates integration of the evidence using a transparent and systematic approach, leading to a broader interpretation of treatment effectiveness and safety than can be attained from individual studies. Traditional meta-analyses are limited to comparing just 2 interventions concurrently and cannot combine evidence concerning multiple treatments. A relatively recent extension of the traditional meta-analytical approach is network meta-analysis, which allows, under certain assumptions, the quantitative synthesis of all evidence under a unified framework and across a network of all eligible trials. Network meta-analysis combines evidence from direct and indirect information via common comparators; interventions can therefore be ranked in terms of the analyzed outcome. In this article, the network meta-analysis approach is introduced in a nontechnical manner using a worked example on the treatment effectiveness of conventional and self-ligating appliances.
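
The basic building block of such indirect comparisons can be written down compactly. The sketch below implements a Bucher-style indirect estimate of treatment A versus treatment C via a common comparator B, under the usual transitivity assumption; the numbers at the end are hypothetical, not results from the article.

```python
import math

def indirect_comparison(d_AB, se_AB, d_CB, se_CB):
    """Bucher-style indirect estimate of A vs C via common comparator B.

    d_AB, d_CB: direct effect estimates (e.g. mean differences) of A vs B
    and C vs B; se_*: their standard errors. Assumes transitivity, i.e.
    the trials are similar enough for the comparison to be valid.
    """
    d_AC = d_AB - d_CB
    se_AC = math.sqrt(se_AB ** 2 + se_CB ** 2)
    ci = (d_AC - 1.96 * se_AC, d_AC + 1.96 * se_AC)
    return d_AC, se_AC, ci

# Hypothetical numbers: two appliance types, each compared against a
# common control arm in separate trials.
print(indirect_comparison(d_AB=-0.5, se_AB=0.2, d_CB=-0.3, se_CB=0.25))
```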

Relevance:

80.00%

Publisher:

Abstract:

Missing outcome data are common in clinical trials, and despite a well-designed study protocol, some randomized participants may leave the trial early without providing any or all of the data, or may be excluded after randomization. Premature discontinuation causes loss of information, potentially resulting in attrition bias and problems in interpreting trial findings. The causes of information loss in a trial, known as mechanisms of missingness, may influence the credibility of the trial results. Trials with missing outcome data should ideally be analysed on an intention-to-treat (ITT) rather than a per-protocol (PP) basis. However, true ITT analysis requires appropriate assumptions and imputation of missing data. Using a worked example from a published dental study, we highlight the key issues associated with missing outcome data in clinical trials, describe the most recognized approaches to handling missing outcome data, and explain the principles of ITT and PP analysis.
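
As a hedged toy illustration of the ITT-versus-PP distinction (not the published study's data or analysis), the sketch below contrasts a completers-only per-protocol mean with a crude intention-to-treat mean in which missing outcomes are filled with an explicit imputation value, as one might do in a simple sensitivity analysis; multiple imputation would be preferred in practice.

```python
import numpy as np

# Toy trial data: outcome is NaN for participants who dropped out.
# All numbers are illustrative, not from the cited dental study.
outcome = np.array([3.0, 2.5, np.nan, 4.0, np.nan, 3.5, 2.0, 5.0])
group = np.array(["T", "T", "T", "T", "C", "C", "C", "C"])

def per_protocol_mean(outcome, group, arm):
    """PP analysis: completers only; vulnerable to attrition bias."""
    mask = (group == arm) & ~np.isnan(outcome)
    return outcome[mask].mean()

def itt_mean(outcome, group, arm, impute_value):
    """Crude ITT analysis: every randomized participant is analysed in the
    arm to which they were assigned; missing outcomes are filled with an
    explicit value (here, a worst-case imputation for sensitivity)."""
    vals = outcome[group == arm].copy()
    vals[np.isnan(vals)] = impute_value
    return vals.mean()

worst = np.nanmin(outcome)  # assume lower = worse for this toy outcome
for arm in ("T", "C"):
    print(arm, per_protocol_mean(outcome, group, arm),
          itt_mean(outcome, group, arm, worst))
```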

Relevance:

80.00%

Publisher:

Abstract:

The judicial interest in 'scientific' evidence has driven recent work to quantify results for forensic-linguistic authorship analysis. Through a methodological discussion and a worked example, this paper examines the issues that complicate attempts to quantify results in this kind of work. The solution suggested for some of these difficulties is a sampling and testing strategy that helps identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that markers identified as generally valid and reliable are retested for use in specific authorship-analysis cases. The suggested approach for drawing quantified conclusions combines discriminant function analysis with Bayesian likelihood measures. The worked example starts with twenty comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to fifteen, ten, five and finally three texts per author. It demonstrates how reducing the amount of data affects the way conclusions can be drawn: with greater numbers of reference texts, quantified and safe attributions are shown to be possible, but as the number of reference texts falls, the analysis shows that the proper conclusion is that no attribution can be made. At no point does the testing process produce a misattribution.
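
The paper's markers and corpora are not available from the abstract; the sketch below only illustrates the general shape of the suggested approach: fitting a discriminant function to per-text style features and reading off posterior probabilities, whose ratios under equal priors act as crude Bayesian likelihood ratios between candidate authors. All data here are fabricated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in: rows are texts, columns are candidate style markers
# (e.g. relative frequencies of function words). Fabricated numbers.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(loc=mu, scale=0.1, size=(20, 4))
                     for mu in (0.2, 0.5, 0.8)])   # 20 texts per author
y_train = np.repeat(["A", "B", "C"], 20)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Posterior probabilities for a disputed text; with equal priors their
# ratios serve as a rough likelihood-ratio measure between authors.
disputed = rng.normal(loc=0.5, scale=0.1, size=(1, 4))
for author, p in zip(lda.classes_, lda.predict_proba(disputed)[0]):
    print(author, round(p, 3))
```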

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a method of uncertainty evaluation for axisymmetric measurement machines which is compliant with the GUM and PUMA methodologies. Specialized measuring machines for the inspection of axisymmetric components enable the measurement of properties such as roundness (radial runout), axial runout and coning. These machines typically consist of a rotary table and a number of contact measurement probes located on slideways. Sources of uncertainty include the probe calibration process, probe repeatability, probe alignment, geometric errors in the rotary table, the dimensional stability of the structure holding the probes, and form errors in the reference hemisphere used to calibrate the system. The generic method is described, and an evaluation of an industrial machine is presented as a worked example. Type A uncertainties were obtained from a repeatability study of the probe calibration process, a repeatability study of the actual measurement process, a system stability test and an elastic deformation test. Type B uncertainties were obtained from calibration certificates and estimates. Expanded uncertainties, at 95% confidence, were then calculated for the measurement of radial runout (1.2 µm with a plunger probe or 1.7 µm with a lever probe), axial runout (1.2 µm with a plunger probe or 1.5 µm with a lever probe), and coning/swash (0.44 arc-seconds with a plunger probe or 0.60 arc-seconds with a lever probe).
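
The paper's uncertainty budget is not reproduced in the abstract; the sketch below only shows the standard GUM-style combination such an evaluation rests on: Type A and Type B standard uncertainties added in quadrature and expanded with a coverage factor k = 2 for approximately 95% confidence. The budget entries are hypothetical.

```python
import math

def expanded_uncertainty(type_a, type_b, k=2.0):
    """Combine standard uncertainties in quadrature (GUM law of propagation
    with unit sensitivity coefficients) and expand with coverage factor k
    (k = 2 for ~95% confidence). Inputs are lists of standard uncertainties."""
    u_c = math.sqrt(sum(u ** 2 for u in type_a + type_b))
    return k * u_c

# Hypothetical radial-runout budget (micrometres), not the paper's values.
type_a = [0.3, 0.4]          # probe calibration and measurement repeatability
type_b = [0.2, 0.25, 0.15]   # certificates and estimates
print(f"U(95%) = {expanded_uncertainty(type_a, type_b):.2f} um")
```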

Relevance:

80.00%

Publisher:

Abstract:

Background: This review provides a worked example of 'best fit' framework synthesis using the Theoretical Domains Framework (TDF) of health psychology theories as an a priori framework in the synthesis of qualitative evidence. Framework synthesis works best with 'policy urgent' questions.

Objective: The review question selected was: what are patients' experiences of prevention programmes for cardiovascular disease (CVD) and diabetes? The significance of these conditions is clear: CVD claims more deaths worldwide than any other disease, and diabetes is a risk factor for CVD and a leading cause of death.

Method: A systematic review and framework synthesis were conducted. This novel method for synthesizing qualitative evidence aims to make health psychology theory accessible to implementation science and to advance the application of qualitative research findings in evidence-based healthcare.

Results: Findings from 14 original studies were coded deductively into the TDF, and subsequently an inductive thematic analysis was conducted. The synthesized findings produced six themes relating to knowledge, beliefs, cues to (in)action, social influences, role and identity, and context. A conceptual model was generated illustrating combinations of factors that produce cues to (in)action. This model demonstrated interrelationships between individual (beliefs and knowledge) and societal (social influences, role and identity, context) factors.

Conclusion: Several intervention points were highlighted where factors could be manipulated to produce favourable cues to action. However, a lack of transparency about the behavioural components of published interventions needs to be corrected, and further evaluations of acceptability in relation to patient experience are required. Further work is needed to test the comprehensiveness of the TDF as an a priori framework for 'policy urgent' questions using 'best fit' framework synthesis.

Relevance:

30.00%

Publisher:

Abstract:

In elite sports, nearly all performances are captured on video. Despite the massive amount of video captured in this domain over the last 10-15 years, most of it remains in an 'unstructured' or 'raw' form, meaning it can only be viewed, or manually annotated/tagged with higher-level event labels, which is time-consuming and subjective. As such, depending on the detail or depth of annotation, the value of the collected repositories of archived data is minimal, as the data do not lend themselves to large-scale analysis and retrieval. One such example is swimming, where each of a swimmer's races is captured on a camcorder and, in addition to the split times (i.e., the time taken for each lap), the stroke rate and stroke lengths are manually annotated. In this paper, we propose a vision-based system that effectively 'digitizes' a large collection of archived swimming races by estimating the location of the swimmer in each frame, as well as detecting the stroke rate. Although the videos are captured with moving hand-held cameras located at different positions and angles, we show that our hierarchical approach to tracking the swimmer and their body parts is robust to these issues and allows us to accurately estimate swimmer location and stroke rate.
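
The paper's detector is not specified in the abstract; as a hedged stand-in, the sketch below estimates a stroke rate from the periodic vertical motion of a tracked point by taking the dominant frequency of its spectrum. The sampling rate and signal in the example are synthetic.

```python
import numpy as np

def stroke_rate(y_positions, fps):
    """Estimate stroke rate (strokes/min) from the tracked vertical position
    of a swimmer's arm across frames, via the dominant frequency of its
    oscillation. A stand-in for the paper's detector, not a reproduction."""
    y = np.asarray(y_positions, dtype=float)
    y = y - y.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the zero-frequency bin
    return 60.0 * dominant

# Synthetic check: a 0.7 Hz oscillation at 30 fps gives ~42 strokes/min.
t = np.arange(300) / 30.0
print(stroke_rate(np.sin(2 * np.pi * 0.7 * t), fps=30))
```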

Relevance:

30.00%

Publisher:

Abstract:

FASTtrack: Pharmaceutical Compounding and Dispensing focuses on what you really need to know in order to pass exams: concise, bulleted information, key points, tips and an all-important self-assessment section including MCQs, case studies, sample essay questions and worked examples. Based on the successful textbook Pharmaceutical Compounding and Dispensing, this FASTtrack book has been designed to help the student compounder understand the key dosage forms encountered in extemporaneous dispensing. For this second edition, all references to modern texts (for example, the BNF) have been updated, as has the labelling, to reflect changes since the first edition. Some worked examples have been changed owing to changes in the availability of pharmaceutical ingredients. Free access to online videos demonstrating various dispensing procedures is included. Are your exams coming up? Are you drowning in textbooks and lecture notes and wondering where to begin? Take the FASTtrack route to successful study for your examinations. FASTtrack provides the ultimate lecture notes and is a must-have for all pharmacy students preparing for exams.

Relevance:

30.00%

Publisher:

Abstract:

Humans have a remarkable ability to extract information from visual data. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive: at a glance, one can describe a scene with reasonable precision, naming its main components. This is usually done by extracting low-level features such as edges, shapes and textures and associating them with high-level meanings, producing a semantic description of the scene. One example is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics likewise represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. Computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that extract high-level information from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and with supervised learning methods. The first approach seeks to group images by automatically learning a feature extraction, combining convolution techniques, evolutionary computation and clustering; the images used here contain faces and people. The second approach employs convolutional neural networks, which operate on raw images and learn both the feature extraction and the classification; here, images are classified according to gender and clothing, the latter divided into the upper and lower parts of the body. When tested on different image datasets, the first approach obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens possibilities for applications in areas such as content-based image and video retrieval and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
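
The thesis's architectures are not given in the abstract; the sketch below is only a minimal convolutional classifier in the spirit of the second (supervised) approach, learning feature extraction and classification end to end from raw images. The layer sizes, input resolution and two-class (gender) setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal convolutional classifier: the network learns feature
    extraction and classification jointly from raw images. Illustrative
    architecture, not the thesis's actual model."""
    def __init__(self, n_classes=2):  # e.g. gender: two classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):               # x: (batch, 3, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 64, 64))  # four random 64x64 RGB images
print(logits.shape)                        # torch.Size([4, 2])
```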