109 results for: cog humanoid robot embodied learning phd thesis metaphor pancake reaching vision


Relevance: 100.00%

Abstract:

Although the role of the academic head of department (HoD) has always been important to university management and performance, the increasing significance given to bureaucracy, academic performance and productivity, and government accountability has greatly elevated the importance of this position. Previous research and anecdotal evidence suggest that academics who move into HoD roles, usually with little or no training, struggle to manage key aspects of their role adequately. This problem, and its manifestations, forms the research focus of this study. Based on the research question, “What are the career trajectories of academics who become HoDs in a selected post-1992 university?”, the study aimed to achieve a greater understanding of why academics become HoDs, what it is like being a HoD, and how the experience influences their future career plans. The study adopts an interpretive approach, in line with social constructivism. Edited topical life history interviews were undertaken with 17 male and female HoDs, from a range of disciplines, in a post-1992 UK university. These data were analysed using coding, categorisation and theme formation techniques, and by developing profiles of each of the respondents. The findings suggest that academics who become HoDs not only need the capacity to assume a range of personal and professional identities, but must also regularly adopt and switch between them. Whether individuals can successfully balance and manage these multiple identities, or whether they experience major conflicts and difficulties within or between them, greatly affects their experience of being a HoD and may influence their subsequent career decisions. It is claimed that the focus, approach and analytical framework, based on the interrelationships between the concepts of socialisation, identity and career trajectory, provide a distinct and original contribution to knowledge in this area.
Although the results of this study cannot be generalised, the findings may help other individuals and institutions move towards a firmer understanding of the academic who becomes HoD, in relation to theory, practice and future research.

Relevance: 100.00%

Abstract:

Deception-detection is the crux of Turing’s experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years, to the month, since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing’s textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing’s own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest and Colby et al.’s 1972 transcript analysis paradigm, this research practicalised Turing’s imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing’s two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.

Relevance: 100.00%

Abstract:

Automatically extracting interesting objects from videos is a very challenging task, applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique for extracting objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
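
As an illustration of the motion-segmentation idea this abstract describes, the sketch below implements the simplest classical variant, frame differencing between consecutive greyscale frames. The function name, threshold value and toy frames are assumptions for illustration only; the thesis's biologically inspired method is not reproduced here.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Return a boolean mask of pixels whose intensity changed
    by more than `threshold` between two greyscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a bright 2x2 "object" moves one pixel to the right.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
prev[1:3, 0:2] = 200
curr[1:3, 1:3] = 200
mask = motion_mask(prev, curr)
```

Note that the mask flags both the pixels the object vacated and those it entered, while the overlap region is silent, which is one reason frame differencing alone struggles with slow or briefly stationary objects.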

Relevance: 100.00%

Abstract:

Distributed multimedia supports a symbiotic infotainment duality, i.e. the ability to transfer information to the user while also providing the user with a level of satisfaction. As multimedia is ultimately produced for the education and/or enjoyment of viewers, the user's perspective on presentation quality is surely as important as objective Quality of Service (QoS) technical parameters in defining distributed multimedia quality. In order to measure the user-perspective of multimedia video quality extensively, we introduce an extended model of distributed multimedia quality that segregates quality into three discrete levels: the network-level, the media-level and the content-level, using two distinct quality perspectives: the user-perspective and the technical-perspective. Since experimental questionnaires do not provide continuous monitoring of user attention, eye tracking was used in our study to provide a better understanding of the role that the human element plays in the reception, analysis and synthesis of multimedia data. Results showed that video content adaptation results in disparity in user video eye-paths when: i) no single / obvious point of focus exists; or ii) the point of attention changes dramatically. Accordingly, appropriate technical- and user-perspective parameter adaptation is implemented for all quality abstractions of our model, i.e. network-level (via simulated delay and jitter), media-level (via a technical- and user-perspective manipulated region-of-interest attentive display) and content-level (via display-type and video clip-type). Our work has shown that user perception of distributed multimedia quality cannot be achieved by means of purely technical-perspective QoS parameter adaptation.
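
The network-level manipulation mentioned above (simulated delay and jitter) can be sketched as follows. This is a minimal illustrative model, assuming a constant-rate packet stream, a fixed base delay and uniform random jitter; the function name and parameters are hypothetical and not taken from the study.

```python
import random

def arrival_times(n_packets, interval_ms, base_delay_ms, jitter_ms, seed=0):
    """Simulate arrival times (ms) for a constant-rate packet stream
    crossing a network that adds a fixed delay plus uniform jitter."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    arrivals = []
    for i in range(n_packets):
        send_time = i * interval_ms
        jitter = rng.uniform(-jitter_ms, jitter_ms)
        arrivals.append(send_time + base_delay_ms + jitter)
    return arrivals

# 25 fps video => one packet every 40 ms; 100 ms delay, +/-10 ms jitter.
arr = arrival_times(5, 40, 100, 10)
```

Because the jitter bound (10 ms) is well below the inter-packet interval (40 ms), packets arrive in order here; larger jitter would reorder them, which is exactly the kind of network-level degradation a receiver playout buffer must absorb.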

Relevance: 100.00%

Abstract:

In terrestrial television transmission, multiple paths of various lengths can occur between the transmitter and the receiver. Such paths occur because of reflections from objects outside the direct transmission path. The multipath signals arriving at the receiver are all detected along with the intended signal, causing time-displaced replicas called 'ghosts' to appear on the television picture. With an increasing number of people living within built-up areas, ghosting is becoming commonplace, and deghosting therefore increasingly important. This thesis uses a deterministic time domain approach to deghosting, resulting in a simple solution to the problem of removing ghosts. A new video detector is presented which reduces the synchronous detector local oscillator phase error, caused by any practical size of ghost, to a lower level than has previously been achieved. With the new detector, dispersion of the video signal is minimised and a known closed-form time domain description of the individual ghost components within the detected video is subsequently obtained. Developed from mathematical descriptions of the detected video, a new specific deghoster filter structure is presented which is capable of removing both the in-phase (I) and the phase quadrature (Q) induced ghost signals arising from VSB operation. The new deghoster filter requires much less hardware than any previous deghoster capable of removing both I and Q ghost components. A new channel identification algorithm was also developed, based upon simple correlation techniques, to find the delay and complex amplitude characteristics of individual ghosts. The result of the channel identification is then passed to the new I and Q deghoster filter for ghost cancellation. Five papers have been published from the research work performed for this thesis.
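
The correlation-based channel identification described above can be sketched as follows, under simplifying assumptions: a single real-amplitude ghost, and a known reference pulse with low autocorrelation sidelobes (a Barker-5 code here, standing in for a real ghost-cancelling reference signal). The function name and toy channel are illustrative, not the thesis's algorithm.

```python
import numpy as np

def estimate_ghost(reference, received):
    """Cross-correlate the received signal with a known reference pulse
    to estimate the delay (in samples) and relative amplitude of a
    single ghost. Assumes the direct path gives the strongest peak."""
    corr = np.correlate(received, reference, mode="full")
    lags = np.arange(-len(reference) + 1, len(received))
    energy = np.dot(reference, reference)
    # Strongest peak is the direct path; mask it and take the next peak.
    main = np.argmax(np.abs(corr))
    corr_masked = corr.copy()
    corr_masked[main] = 0.0
    ghost = np.argmax(np.abs(corr_masked))
    delay = lags[ghost] - lags[main]
    amplitude = corr[ghost] / energy
    return delay, amplitude

# Toy channel: direct path plus a 30%-amplitude ghost delayed 5 samples.
ref = np.array([1.0, 1.0, 1.0, -1.0, 1.0])  # Barker-5 reference pulse
received = np.zeros(32)
received[0:5] += ref
received[5:10] += 0.3 * ref
delay, amplitude = estimate_ghost(ref, received)
```

The estimated delay and amplitude would then parameterise the cancellation filter; a real identifier must also handle complex amplitudes and multiple overlapping ghosts, which this sketch does not.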

Relevance: 100.00%

Abstract:

Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using their full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, combining this with the temperature field, convert the information into a brightness temperature using a simple radiative transfer scheme developed for the purpose. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
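
As one minimal sketch of a "simple vertical advection scheme" of the kind mentioned above, a first-order upwind update on a uniform grid is shown below. The grid spacing, boundary handling and function name are assumptions for illustration; the thesis's actual scheme is not specified here.

```python
import numpy as np

def upwind_advect(profile, w, dz, dt):
    """Advance a vertical profile one time step under a uniform vertical
    velocity w using a first-order upwind scheme. Stable for
    |w| * dt / dz <= 1 (CFL condition); zero-gradient boundaries."""
    q = np.asarray(profile, dtype=float).copy()
    dqdz = np.zeros_like(q)
    if w >= 0:   # upward motion: take the gradient from below
        dqdz[1:] = (q[1:] - q[:-1]) / dz
    else:        # downward motion: take the gradient from above
        dqdz[:-1] = (q[1:] - q[:-1]) / dz
    return q - w * dt * dqdz

# Toy demo: a linear profile advected upward shifts down by w*dt*gradient.
z = np.arange(5.0)
profile_new = upwind_advect(z, w=1.0, dz=1.0, dt=0.5)
```

Applied successively to w, T and qt, such an update provides the nonlinear forward model that the 2D assimilation testbed would then differentiate through, at least conceptually, to recover the vertical velocity from brightness-temperature observations.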

Relevance: 100.00%

Abstract:

The Arctic is a region particularly susceptible to rapid climate change. General circulation models (GCMs) suggest a polar amplification of any global warming signal by a factor of about 1.5 due, in part, to sea ice feedbacks. The dramatic recent decline in multi-year sea ice cover lies outside the standard deviation of the CMIP3 ensemble GCM predictions. Sea ice acts as a barrier between cold air and warmer oceans during winter, as well as inhibiting evaporation from the ocean surface during the summer. An ice-free Arctic would likely have an altered hydrological cycle, with more evaporation from the ocean surface leading to changes in precipitation distribution and amount. Using the U.K. Met Office Regional Climate Model (RCM), HadRM3, the atmospheric effects of the observed and projected reduction in Arctic sea ice are investigated. The RCM is driven by the atmospheric GCM HadAM3. Both models are forced with sea surface temperature and sea ice for the period 2061-2090 from the CMIP3 HadGEM1 experiments. Here we use an RCM at 50 km resolution over the Arctic and 25 km over Svalbard, which captures well the present-day pattern of precipitation and provides a detailed picture of the projected changes in the behaviour of ocean-atmosphere moisture fluxes and how they affect precipitation. These experiments show that the projected 21st Century sea ice decline alone causes large impacts on the surface mass balance (SMB) of Svalbard. However, Greenland’s SMB is not significantly affected by sea ice decline alone, but responds with a strongly negative shift in SMB when changes to SST are incorporated into the experiments. This is the first study to characterise the impact of future sea ice changes on Arctic terrestrial cryosphere mass balance.

Relevance: 100.00%

Abstract:

This project is concerned with the way that illustrations, photographs, diagrams and graphs, and typographic elements interact to convey ideas on the book page. A framework for graphic description is proposed to elucidate this graphic language of ‘complex texts’. The model is built up from three main areas of study, with reference to a corpus of contemporary children’s science books. First, a historical survey puts the subjects for study in context. Then a multidisciplinary discussion of graphic communication provides a theoretical underpinning for the model; this leads to various proposals, such as the central importance of ratios and relationships among parts in creating meaning in graphic communication. Lastly, a series of trials in description contributes to the structure of the model itself. At the heart of the framework is an organising principle that integrates descriptive models from the fields of design, literary criticism, art history and linguistics, among others, as well as novel categories designed specifically for book design. Broadly, design features are described in terms of elemental component parts (micro-level), larger groupings of these (macro-level), and finally in terms of overarching, ‘whole book’ qualities (meta-level). Various features of book design emerge at different levels; for instance, the presence of nested discursive structures, a form of graphic recursion in editorial design, is proposed at the macro-level. Across these three levels are the intersecting categories of ‘rule’ and ‘context’, offering different perspectives with which to describe graphic characteristics. Context-based features are contingent on social and cultural environment, the reader’s previous knowledge, and the actual conditions of reading; rule-based features relate to the systematic or codified aspects of graphic language.
The model aims to be a frame of reference for graphic description, of use in different forms of qualitative or quantitative research and as a heuristic tool in practice and teaching.