33 results for Computer vision teaching


Relevance:

80.00%

Publisher:

Abstract:

Graph-based representations have been used with considerable success in computer vision for the abstraction and recognition of object shape and scene structure. Despite this, the methodology available for learning structural representations from sets of training examples is relatively limited. In this paper we take a simple yet effective Bayesian approach to attributed graph learning. We present a naïve node-observation model, in which we make the important assumption that the observation of each node and each edge is independent of the others; we then propose an EM-like approach to learn a mixture of these models and a Minimum Message Length criterion for component selection. Moreover, to avoid the bias that could arise from a single estimate of the node correspondences, we estimate the sampling probability over all possible matches. Finally, we show the utility of the proposed approach on popular computer vision tasks such as 2D and 3D shape recognition. © 2011 Springer-Verlag.
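
As an illustration of the independence assumption at the heart of the naïve node-observation model, the following Python sketch (our own, not the authors' implementation; all names are hypothetical) computes the log-likelihood of an observed graph under a fixed node correspondence. In the EM-like approach described above, this quantity would be averaged over sampled correspondences rather than computed for a single match.

```python
# Hypothetical sketch of a naive node-observation model: each node and edge
# of the model graph is assumed to be observed independently, with Bernoulli
# probabilities, given a node correspondence `mapping`.
import numpy as np

def graph_log_likelihood(obs_adj, node_probs, edge_probs, mapping):
    """obs_adj: (n, n) 0/1 adjacency matrix of the observed graph.
    node_probs: (m,) observation probability of each model node.
    edge_probs: (m, m) observation probability of each model edge.
    mapping: dict observed-node -> model-node (a single correspondence)."""
    ll = 0.0
    matched = set(mapping.values())
    # Node terms: observed model nodes contribute log p, unobserved log(1-p).
    for j, p in enumerate(node_probs):
        ll += np.log(p) if j in matched else np.log(1.0 - p)
    # Edge terms: independent Bernoulli observation of each mapped edge.
    nodes = list(mapping)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            p = edge_probs[mapping[u], mapping[v]]
            ll += np.log(p) if obs_adj[u, v] else np.log(1.0 - p)
    return ll
```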

Relevance:

80.00%

Publisher:

Abstract:

The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured-light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a trade-off between the quality of the reconstructed 3D points (accuracy) and how much of an object's surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
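
To make the accuracy/completeness trade-off concrete, here is a hedged sketch of the style of distance-based evaluation used in Middlebury-type protocols (illustrative only, not the paper's exact implementation; the function name and threshold are ours):

```python
# Sketch of Middlebury-style MVS evaluation. Accuracy: how close the
# reconstructed points lie to the structured-light reference; completeness:
# how much of the reference surface is covered by the reconstruction.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon_pts, ref_pts, threshold=0.01):
    """recon_pts: (N, 3) reconstructed points; ref_pts: (M, 3) reference."""
    d_to_ref, _ = cKDTree(ref_pts).query(recon_pts)    # recon -> reference
    d_to_recon, _ = cKDTree(recon_pts).query(ref_pts)  # reference -> recon
    accuracy = float(np.median(d_to_ref))                  # lower is better
    completeness = float(np.mean(d_to_recon < threshold))  # higher is better
    return accuracy, completeness
```

Restricting a reconstruction to only its most confident points typically improves the accuracy figure while lowering completeness, which is exactly the trade-off the evaluation above is designed to expose.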

Relevance:

80.00%

Publisher:

Abstract:

Acquiring 3D shape from images is a classic problem in Computer Vision that has occupied researchers for at least 20 years. Only recently, however, have these ideas matured enough to provide highly accurate results. We present a complete algorithm to reconstruct 3D objects from images using the stereo correspondence cue. The technique can be described as a pipeline of four basic building blocks: camera calibration, image segmentation, photo-consistency estimation from images, and surface extraction from photo-consistency. In this Chapter we put more emphasis on the latter two: namely, how to extract geometric information from a set of photographs without explicit camera visibility, and how to combine different geometry estimates in an optimal way. © 2010 Springer-Verlag Berlin Heidelberg.
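
Of the four building blocks, photo-consistency estimation is the most self-contained; the sketch below (our illustration, with a hypothetical `project` helper standing in for a calibrated camera model) scores a candidate 3D point by the agreement of the image patches it projects to, here measured with normalized cross-correlation.

```python
# Minimal photo-consistency sketch: a 3D point is photo-consistent if the
# image patches around its projections agree across views. Agreement is
# measured with normalized cross-correlation (NCC). `project` is a
# hypothetical helper mapping a 3D point to pixel coordinates.
import numpy as np

def ncc(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def photo_consistency(point, images, cameras, project, half=3):
    """Mean pairwise NCC of patches around the projections of `point`.
    Assumes all projections fall inside the images."""
    patches = []
    for img, cam in zip(images, cameras):
        x, y = project(cam, point)   # assumed calibrated projection
        x, y = int(round(x)), int(round(y))
        patches.append(img[y - half:y + half + 1, x - half:x + half + 1])
    scores = [ncc(p, q) for i, p in enumerate(patches)
              for q in patches[i + 1:]]
    return float(np.mean(scores)) if scores else 0.0
```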

Relevance:

80.00%

Publisher:

Abstract:

Photometric Stereo is a powerful image-based 3D reconstruction technique that has recently been used to obtain very high quality reconstructions. However, in its classic form, Photometric Stereo suffers from two main limitations. Firstly, one needs to obtain images of the 3D scene under multiple different illuminations; as a result, the 3D scene needs to remain static during illumination changes, which prohibits the reconstruction of deforming objects. Secondly, the images obtained must be from a single viewpoint, which leads to depth-map based 2.5D reconstructions instead of full 3D surfaces. The aim of this Chapter is to show how these limitations can be alleviated, leading to the derivation of two practical 3D acquisition systems: the first, based on the powerful Coloured Light Photometric Stereo method, can be used to reconstruct moving objects such as cloth or human faces; the second permits the complete 3D reconstruction of challenging objects such as porcelain vases. In addition to algorithmic details, the Chapter pays attention to practical issues such as setup calibration and the detection and correction of self and cast shadows. We provide several evaluation experiments as well as reconstruction results. © 2010 Springer-Verlag Berlin Heidelberg.
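
For context, the classic single-viewpoint formulation that both systems build on fits in a few lines: with known distant light directions and a Lambertian surface, per-pixel normals and albedo follow from linear least squares. This is a minimal sketch of the textbook method, not of the coloured-light or multi-view variants described in the Chapter.

```python
# Classic (Lambertian) photometric stereo: with k images under known unit
# light directions L (k x 3) and per-pixel intensities I (k,), solve
# I = L @ (albedo * normal) by least squares.
import numpy as np

def photometric_stereo(intensities, lights):
    """intensities: (k, h, w) image stack; lights: (k, 3) unit directions.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w)."""
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                  # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w), G = albedo*n
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```

Note how the classic form needs the scene to stay static across the k exposures; coloured-light methods, as commonly implemented, avoid this by multiplexing three illumination directions into the colour channels of a single image.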

Relevance:

80.00%

Publisher:

Abstract:

In the study of complex networks, vertex centrality measures are used to identify the most important vertices within a graph. A related problem is that of measuring the centrality of an edge. In this paper, we propose a novel edge centrality index rooted in quantum information. More specifically, we measure the importance of an edge in terms of the contribution it makes to the Von Neumann entropy of the graph. We show that this can be computed in terms of the Holevo quantity, a well-known quantum information-theoretic measure. While computing the Von Neumann entropy, and hence the Holevo quantity, requires computing the spectrum of the graph Laplacian, we show how to obtain a simplified measure through a quadratic approximation of the Shannon entropy. This in turn shows that the proposed centrality measure is strongly correlated with the negative degree centrality on the line graph. We evaluate our centrality measure through an extensive set of experiments on real-world as well as synthetic networks, and we compare it against commonly used alternative measures.
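
The following sketch illustrates the quantities involved. It is a simplified variant based on direct entropy differences; the paper's actual measure is defined through the Holevo quantity, and its fast version uses the quadratic entropy approximation mentioned above.

```python
# Illustrative sketch of entropy-based edge centrality. The Von Neumann
# entropy of a graph is the Shannon entropy of the eigenvalues of the
# density matrix rho = L / trace(L), where L is the graph Laplacian.
import numpy as np

def von_neumann_entropy(adj):
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    evals = np.linalg.eigvalsh(lap / np.trace(lap))  # eigenvalues sum to 1
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

def edge_entropy_contribution(adj, u, v):
    """Entropy change when edge (u, v) is removed from the graph."""
    adj2 = adj.copy()
    adj2[u, v] = adj2[v, u] = 0
    return von_neumann_entropy(adj) - von_neumann_entropy(adj2)
```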

Relevance:

80.00%

Publisher:

Abstract:

Laplacian-based descriptors, such as the Heat Kernel Signature and the Wave Kernel Signature, allow one to embed the vertices of a graph onto a vectorial space, and have been successfully used to find the optimal matching between a pair of input graphs. While the HKS uses a heat diffusion process to probe the local structure of a graph, the WKS attempts to do the same through wave propagation. In this paper, we propose an alternative structural descriptor that is based on continuous-time quantum walks. More specifically, we characterise the structure of a graph using its average mixing matrix. The average mixing matrix is a doubly-stochastic matrix that encodes the time-averaged behaviour of a continuous-time quantum walk on the graph. We propose to use the rows of the average mixing matrix for increasing stopping times to develop a novel signature, the Average Mixing Matrix Signature (AMMS). We perform an extensive range of experiments and show that the proposed signature is robust under structural perturbations of the original graphs and outperforms both the HKS and WKS when used as a node descriptor in a graph matching task.
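
The average mixing matrix can be approximated numerically from the spectrum of the graph. The sketch below (our illustration, not the authors' code) computes it for a stopping time T and assembles an AMMS-style row signature; the specific stopping times and the row sorting are assumptions for illustration.

```python
# Numerical approximation of the average mixing matrix of a continuous-time
# quantum walk with Hamiltonian A (the adjacency matrix): U(t) = exp(-iAt),
# the mixing matrix at time t has entries |U(t)_uv|^2, and the average
# mixing matrix up to stopping time T is its time average on [0, T].
import numpy as np

def average_mixing_matrix(adj, T, steps=200):
    w, V = np.linalg.eigh(adj)                # A = V diag(w) V^T
    M = np.zeros(adj.shape, dtype=float)
    for t in np.linspace(0.0, T, steps):
        U = (V * np.exp(-1j * w * t)) @ V.T   # U(t) = exp(-iAt)
        M += np.abs(U) ** 2                   # doubly stochastic (U unitary)
    return M / steps

def node_signature(adj, node, times=(1.0, 2.0, 4.0, 8.0)):
    """AMMS-style signature: rows for increasing stopping times,
    sorted for permutation invariance."""
    return np.concatenate([np.sort(average_mixing_matrix(adj, T)[node])
                           for T in times])
```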

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To examine the effect of uncorrected astigmatism in older adults. SETTING: University Vision Clinic. METHOD: Twenty-one healthy presbyopes, aged 58.9±2.8 years, had astigmatism of 0.0 to -4.0DC × 90° and -3.0DC of cylinder at 90°, 180° and 45° induced with spectacle lenses, with the mean spherical equivalent compensated to plano, in random order. Visual acuity was assessed binocularly using a computerised test chart at 95%, 50% and 10% contrast. Near acuity and reading speed were measured using standardised reading texts. Light scatter was quantified with the C-Quant and driving reaction times with a computer simulator. Finally, visual clarity of a mobile phone and a computer screen was subjectively rated. RESULTS: Distance visual acuity decreased with increasing uncorrected astigmatic power (F=174.50, p<0.001) and was reduced at lower contrasts (F=170.77, p<0.001). Near visual acuity and reading speed also decreased with increasing uncorrected astigmatic power (p<0.001). Light scatter was not significantly affected by uncorrected astigmatism (p>0.05), but the reliability and variability of measurements decreased with increasing uncorrected astigmatic power (p<0.05). Driving simulator performance was also unaffected by uncorrected astigmatism (p>0.05), but subjective ratings of clarity decreased with increasing uncorrected astigmatic power (p<0.001). Uncorrected astigmatism at 45° or 180° orientation resulted in worse distance and near visual acuity, and worse subjectively rated clarity, than at 90° orientation (p<0.05). CONCLUSION: Uncorrected astigmatism, even as low as 1.0DC, places a significant burden on a patient's vision. If left uncorrected, this could impact significantly on their independence, quality of life and wellbeing.

Relevance:

30.00%

Publisher:

Abstract:

Universities which set up online repositories for the management of learning and teaching resources commonly find that uptake is poor. Tutors are often reluctant to upload their materials to e-repositories, even though the same tutors are happy to upload resources to the virtual learning environment (e.g. Blackboard, Moodle, Sakai) and to upload their research papers to the university's research publications repository. The paper reviews this phenomenon and suggests constructive ways in which tutors can be encouraged to engage with an e-repository. The authors have recently completed a major project, "Developing Repositories at Worcester", which is part of a group of similar projects in the UK. The paper includes the feedback and the lessons learned from these projects, based on the publications and reports they have produced. These cover ways of embedding repository use into institutional working practice, and give examples of different types of repository designed to meet the needs of those using different kinds of learning and teaching resources. As well as this specific experience, the authors summarise some of the main findings from UK publications, in particular the December 2008 report of the Joint Information Systems Committee, Good intentions: improving the evidence base in support of sharing learning materials, and Online Innovation in Higher Education, Ron Cooke's report to a UK government initiative on the future of Higher Education. The issues covered include the development of Web 2.0 style repositories rather than conventionally structured ones, the use of tags rather than metadata, the open resources initiative, the best use for conventional repositories, links to virtual learning environments, and the processes for the management and support of repositories within universities. In summary, the paper presents an optimistic, constructive view of how to embed the use of e-repositories into the working practices of university tutors. Equally, the authors are aware of the considerable difficulties in making progress and are realistic about what can be achieved. The paper uses evidence and experience drawn from those working in this field to suggest a strategic vision in which the management of e-learning resources is productive, efficient and meets the needs of both tutors and their students.

Relevance:

30.00%

Publisher:

Abstract:

This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution to a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures, particularly those adopted in corpus linguistics, for investigatory purposes. One important methodological requirement was the establishment of two comparable and statistically adequate corpora, as well as the design of software and the use of existing packages to achieve the basic analysis. Each corpus comprises ca 250,000 words of newspaper material sampled in accordance with a specific set of criteria and assembled in machine-readable form prior to the computer-assisted analysis. A suite of programmes was written in SPITBOL to accomplish a variety of analytical tasks, in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and some word lists were produced using OCP. The results of this research confirm the existence of fundamental differences in text organisation in Arabic in comparison to English. This manifests itself in the way textual operations of grouping and sequencing are performed, and in the intensity of the textual role of connectives in imposing linearity and continuity and in maintaining overall stability. Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favourable choices for text organisation are made and implemented.
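
The original measurements were implemented in SPITBOL, with concordances and word lists from OCP; a present-day equivalent of one basic measurement, the relative frequency of connectives per 1,000 words of a corpus, might look like the sketch below (the connective list is a tiny illustrative sample, not the study's inventory):

```python
# Hypothetical modern sketch of a basic corpus measurement: raw counts and
# per-1,000-word relative frequencies of a fixed set of connectives.
import re
from collections import Counter

CONNECTIVES = {"however", "therefore", "moreover", "thus", "furthermore"}

def connective_profile(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in CONNECTIVES)
    per_thousand = {c: 1000 * n / len(tokens) for c, n in counts.items()}
    return counts, per_thousand
```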

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans and computer simulations of these experiments using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation, using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper, and increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.
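
To make the template idea concrete, the sketch below illustrates scale-ratio blur estimation for a Gaussian-integral edge (our illustration under a standard response model, not the thesis model itself): for an edge with contrast C and blur sigma_b, the response to a derivative-of-Gaussian filter of scale s at the edge location is C / sqrt(2*pi*(sigma_b^2 + s^2)), so the ratio of two responses at different scales determines sigma_b, and then C.

```python
# Illustrative sketch: recover the blur and contrast of a centred
# Gaussian-integral edge from derivative-of-Gaussian responses at two
# scales, using the response model r(s) = C / sqrt(2*pi*(blur^2 + s^2)).
import numpy as np
from scipy.special import erf

def gaussian_edge(x, contrast, blur):
    # Blurred step edge with a Gaussian integral profile.
    return contrast * 0.5 * (1.0 + erf(x / (blur * np.sqrt(2.0))))

def edge_response(profile, dx, s):
    # Response to a first-derivative-of-Gaussian filter at scale s,
    # sampled at the (assumed central) edge location.
    xf = np.arange(-6 * s, 6 * s + dx, dx)
    g1 = -xf / (s**3 * np.sqrt(2 * np.pi)) * np.exp(-xf**2 / (2 * s**2))
    resp = np.convolve(profile, g1, mode="same") * dx
    return abs(resp[len(profile) // 2])

def estimate_blur_contrast(profile, dx, s1=1.0, s2=2.0):
    r1, r2 = edge_response(profile, dx, s1), edge_response(profile, dx, s2)
    k = (r1 / r2) ** 2
    blur_sq = (s2**2 - k * s1**2) / (k - 1.0)   # invert the response ratio
    contrast = r1 * np.sqrt(2 * np.pi * (blur_sq + s1**2))
    return np.sqrt(blur_sq), contrast

# Example: recovers approximately (1.5, 0.8) for a synthetic edge.
x = np.arange(-20.0, 20.0, 0.05)
blur, contrast = estimate_blur_contrast(gaussian_edge(x, 0.8, 1.5), dx=0.05)
```

The smoothed threshold mentioned in the abstract would act on responses such as r1 and r2 before this inversion, which is how it can couple perceived blur and contrast.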

Relevance:

30.00%

Publisher:

Abstract:

After thirty years of vacillation, the Tanzanian government has made a firm decision to Swahilize its secondary education system. It has also embarked on an ambitious economic and social development programme (Vision 2025) to transform its peasant society into a modern agricultural community. However, there is a faction in Tanzania opposed to Kiswahili as the medium of education. Already many members of the middle and upper classes send their children to English-medium primary schools to avoid the Kiswahili-medium public schools and to prepare their children for the English-medium secondary system presently in place. Within the education system, particularly at university level, there is a desire to maintain English as the medium of education. English is seen to provide access to the international scientific community, to cutting-edge technology and to the global economy. My interest in this conflict of interests stems from several years' experience teaching English to students at Sokoine University of Agriculture. Students specialise in agriculture and are expected to work with the peasant population on graduation. The students experience difficulties studying in English and then find their Kiswahili skills insufficient to explain to farmers the new techniques and technologies that they have studied in English. They are hampered by a complex triglossic situation in which they use their mother tongue with family and friends, Kiswahili, the national language, for early education and most public communication within Tanzania, and English for advanced studies. My aims in this thesis were: to study the language policy in Tanzania and see how it is understood and implemented; to examine the attitudes towards the various languages and their various roles; and to investigate actual language behaviour in Tanzanian higher education. My conclusion is that the dysfunctionality of the present system has to be addressed. Diglossic public life in Tanzania has to be accommodated. The only solution appears to be a compromise, namely a bilingual education system which supports learners from all classes of society by using Kiswahili, together with an early introduction of English and its promotion as a privileged foreign language, so that Tanzania can continue to develop internally through Kiswahili and at the same time retain access to the globalising world through the medium of English.

Relevance:

30.00%

Publisher:

Abstract:

Golfers, coaches and researchers alike have keyed in on golf putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment and the putting action phase), the putting action phase has attracted the most attention from coaches, players and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers' vision strategies with eye tracking equipment. Unfortunately this research suffers from a number of shortcomings, which limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective was to re-evaluate golfers' putting vision strategies using binocular eye tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and greater success. In order to facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers' binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer's vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers' vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research has been undertaken in order to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.

Relevance:

30.00%

Publisher:

Abstract:

This conceptual article examines the relationship between marketing and sustainability through the dual lenses of anthropocentric and ecocentric epistemology. Under the current anthropocentric epistemology and its associated dominant social paradigm, corporate ecological sustainability in commercial practice and in business school research and teaching is difficult to achieve. However, adopting an ecocentric epistemology enables the development of an alternative business and marketing approach that places equal importance on nature, the planet, and ecological sustainability as the source of human and other species' well-being, as well as the source of all products and services. This article examines ecocentric, transformational business and marketing strategies epistemologically, conceptually and practically, and thereby proposes six ecocentric, transformational, strategic marketing universal premises as part of a vision of, and solution to, current global un-sustainability. Finally, this article outlines several opportunities for management practice and further research. © 2012 Springer Science+Business Media Dordrecht.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The design of the UK MPharm curriculum is driven by the Royal Pharmaceutical Society of Great Britain (RPSGB) accreditation process and the EU directive (85/432/EEC).[1] Although the RPSGB is informed about teaching activity in UK Schools of Pharmacy (SOPs), there is no database which aggregates information to provide the whole picture of pharmacy education within the UK. The aim of the teaching, learning and assessment study [2] was to document and map current programmes in the 16 established SOPs. Recent developments in programme delivery have resulted in a focus on deep learning (for example, through problem-based learning approaches) and on being more student-centred and less didactic through lectures. The specific objectives of this part of the study were (a) to quantify the content and modes of delivery of material as described in course documentation and (b) having categorised the range of teaching methods, to ask students to rate how important they perceived each one to be for their own learning (using a three-point Likert scale: very important, fairly important or not important).

Material and methods: The study design compared three datasets: (1) a quantitative course document review, (2) qualitative staff interviews and (3) a quantitative student self-completion survey. All 16 SOPs provided a set of their undergraduate course documentation for the year 2003/4. The documentation variables were entered into Excel tables. A self-completion questionnaire was administered to all year-four undergraduates, using a pragmatic mixture of methods (n=1847), in 15 SOPs within Great Britain. The survey data were analysed (n=741) using SPSS, excluding non-UK students who may have undertaken part of their studies within a non-UK university.

Results and discussion: Interviews showed that individual teachers and course module leaders determine the choice of teaching methods used. Content review of the documentary evidence showed that 51% of the taught element of the course was delivered using lectures, 31% using practicals (including computer-aided learning) and 18% using small-group or interactive teaching. There was high uniformity across the schools for the first three years; variation in the final year was due to the project. The average number of hours per year across 15 schools (data for one school were not available) was: year 1: 408 hours; year 2: 401 hours; year 3: 387 hours; year 4: 401 hours. The survey showed that students perceived lectures to be the most important method of teaching after dispensing or clinical practicals. Taking the very important rating only: 94% (n=694) dispensing or clinical practicals; 75% (n=558) lectures; 52% (n=386) workshops; 50% (n=369) tutorials; 43% (n=318) directed study. Scientific laboratory practicals were rated very important by only 31% (n=227). The study shows that teaching of pharmacy to undergraduates in the UK is still essentially didactic, with a high proportion of formal lectures and high levels of staff-student contact. Schools consider lectures still to be the most cost-effective means of delivering the core syllabus to large cohorts of students. However, this limits the scope for optionality within teaching, reduces the scope for small-group work, and restricts the opportunity to develop multi-professional learning or practice placements. Although novel teaching and learning techniques such as e-learning have expanded considerably over the past decade, schools of pharmacy have concentrated on lectures as the best way of coping with the huge expansion in student numbers.

References
[1] Council Directive 85/432/EEC. Concerning the coordination of provisions laid down by law, regulation or administrative action in respect of certain activities in the field of pharmacy. Official Journal of the European Communities 1985.
[2] Wilson K, Jesson J, Langley C, Clarke L, Hatfield K. MPharm Programmes: Where are we now? Report commissioned by the Pharmacy Practice Research Trust, 2005.