896 results for Diagnostic imaging - Data processing



Abstract:

Would a research assistant – one that can search for ideas related to those you are working on, network with others (but only share the things you have chosen to share), doesn't need coffee and might even, one day, appear to be conscious – help you get your work done? Would it help your students learn? There is a body of work showing that digital learning assistants can benefit learners, and it has been suggested that adaptive, caring agents are more beneficial still. Would a conscious agent be more caring, more adaptive, and better able to deal with changes in its learning partner's life? Suppose we allow the system to dynamically model the user, so that it can predict what is needed next and how effective a particular intervention will be. Given that the system is essentially doing the same things as the user, why not design it so that it can model itself in the same way? This should mimic a primitive form of self-awareness. People develop their personalities, their identities, through interacting with others. It takes years for a human to develop a full sense of self, and nobody should expect a prototypical conscious computer system to develop any faster. How can we provide a computer system with enough social contact to enable it to learn about itself and others? We can make it part of a network – not just chatting with other computers about computer ‘stuff’, but involved in real human activity, exposed to ‘raw meaning’: the developing folksonomies coming out of the learning activities of humans, whether they are traditional students or lifelong learners (a term which should encompass everyone). Humans have complex psyches, composed of multiple strands of identity which manifest as different roles in the communities they belong to – so why not design our system the same way, with multiple internal modes of operation, each capable of being reflected onto the outside world as a role: mentor, research assistant, maybe even friend? But in order to work with a human for long enough to have a chance of developing the sort of rich behaviours we associate with people, the system needs to function in a practical and helpful way. It is unlikely to get a free ride from many people (other than its developer!), so it needs to perform a useful role, and do so securely, respecting the privacy of its partner. Can we create a system which learns to be more human whilst helping people learn?
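A minimal sketch of the reflexive-modelling idea described above, under loose assumptions: the same predictive structure records which interventions helped and suggests the next one, and is instantiated once for the learner and once for the assistant itself. The class, names and scores are hypothetical illustrations, not part of the original proposal.

```python
from collections import defaultdict

class PredictiveModel:
    """Toy model of an agent (a learner, or the assistant itself): it records
    how effective each intervention was and predicts which is likely to help next."""

    def __init__(self, name):
        self.name = name
        self.history = defaultdict(list)  # intervention -> observed effectiveness scores

    def observe(self, intervention, effectiveness):
        self.history[intervention].append(effectiveness)

    def predict_best(self, candidates):
        # Highest mean observed effectiveness wins; unseen interventions get a
        # neutral prior of 0.5 so they can still be explored.
        def score(i):
            seen = self.history.get(i)
            return sum(seen) / len(seen) if seen else 0.5
        return max(candidates, key=score)

# The same class models the human learner and, reflexively, the assistant itself.
learner_model = PredictiveModel("learner")
self_model = PredictiveModel("assistant")

learner_model.observe("worked_example", 0.8)
learner_model.observe("quiz", 0.4)
self_model.observe("suggest_paper", 0.7)

print(learner_model.predict_best(["worked_example", "quiz", "hint"]))
print(self_model.predict_best(["suggest_paper", "stay_quiet"]))
```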


Abstract:

Different systems, different purposes – but how do they compare as learning environments? We undertook a survey of students at the University, asking whether they learned from their use of the systems, whether they made contact with other students through them, and how often they used them. Although it was a small-scale survey, the results are quite enlightening and quite surprising. Blackboard is populated with learning material, has all the students on a module signed up to it, offers a safe environment (in terms of Acceptable Use and some degree of staff monitoring) and provides privacy within the learning group (plus the lecturer and relevant support staff). Facebook, on the other hand, has no learning material, only some of the students using the system, and, on the face of it, more opportunity for slips in privacy and potential bullying, because the Acceptable Use policy is more lax than an institutional one and breaches must be dealt with on an exception basis, when reported. So why do more students find people on their courses through Facebook than Blackboard? And why are up to 50% of students reporting that they have learned from using Facebook? Interviews indicate that students in subjects which use seminars are using Facebook to facilitate working groups – they can set up private groups which give them privacy to discuss ideas in an environment which is perceived as safer than Blackboard can provide: no staff interference, unless they choose to invite them in, and the opportunity to select who in the class can engage. The other striking finding is the difference in use between the genders. Males are using Blackboard more frequently than females, whilst the reverse is true for Facebook. Interviews suggest that this may have something to do with needing to access lecture notes… Overall, though, it appears that there is little relationship between the time spent engaging with Blackboard and reports that students have learned from it; because Blackboard is our central repository for notes, any contact is likely to result in some learning. Facebook, however, shows a clear relationship between frequency of use and perception of learning – and our students post frequently to Facebook. Whilst much of this is probably trivia and social chit-chat, the educational elements of it are, de facto, constructivist in nature. Further questions need to be answered: do students learn from Facebook because they are creating content which others will see and comment on? Is it because they can engage in a dialogue without the risk of interruption by others?


Abstract:

Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across national, sectoral or organisational borders, leading to an opaque competency description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built that, by utilising the interrelations between entities, can make “intelligent” choices in different situations within the domain. By introducing formalised competency ontologies, automated tools such as skill gap analysis, training suggestion generation, and job search and recruitment can be developed which compare and contrast different competency descriptions on the semantic level. The major problem with defining a common formalised ontology for competencies is that there are so many viewpoints of competencies and competency frameworks. Work within the TRACE project has focused on finding common trends across different competency frameworks in order to allow an intermediate competency description to be made which other frameworks can reference. This research has shown that competencies can be divided into “knowledge”, “skills” and what we call “others”. An ontology has been created on this basis, with a simple structure of different “kinds” of “knowledges” and “skills” using semantic interrelations to define the basic semantic structure of the ontology. A prototype skill gap analysis tool has been developed: personal profiles can be produced using the tool, and a skill gap analysis is performed against a desired competency profile using an ontologically based inference engine, which is able to list the closest fit and possible proficiency gaps.
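To make the knowledge/skills/others split and the skill gap idea concrete, here is a deliberately flat sketch; the profiles, proficiency levels and gap rule are invented for illustration, and the real TRACE ontology and inference engine are far richer than a nested dictionary comparison.

```python
# Hypothetical sketch only: a flat stand-in for a competency ontology split into
# "knowledge", "skills" and "others", with a naive skill gap analysis.

personal_profile = {
    "knowledge": {"relational databases": 3, "statistics": 2},
    "skills":    {"sql": 3, "python": 1},
    "others":    {"teamwork": 2},
}

target_profile = {
    "knowledge": {"relational databases": 3, "statistics": 3},
    "skills":    {"sql": 2, "python": 3},
    "others":    {"teamwork": 2},
}

def skill_gap(personal, target):
    """List competencies where the personal proficiency falls short of the target."""
    gaps = []
    for kind, wanted in target.items():
        held = personal.get(kind, {})
        for competency, level in wanted.items():
            shortfall = level - held.get(competency, 0)
            if shortfall > 0:
                gaps.append((kind, competency, shortfall))
    return gaps

print(skill_gap(personal_profile, target_profile))
# [('knowledge', 'statistics', 1), ('skills', 'python', 2)]
```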


Abstract:

At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. Based upon these considerations, this work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
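As a generic illustration of memory-based control with a value signal (not the MBC architecture itself, whose internals the abstract does not describe), one could store sensing–action–value episodes and reuse the best-valued action from similar past situations; everything below is an assumption.

```python
import random

# Generic illustration of memory-based control, not the MBC architecture itself:
# the controller stores (sensing, action, value) episodes and, when similar
# sensing recurs, reuses the action that previously produced the highest value.

memory = []  # list of (sensing, action, value); sensing is a tuple of floats

def similarity(a, b):
    return -sum((x - y) ** 2 for x, y in zip(a, b))  # negative squared distance

def choose_action(sensing, actions, k=3):
    if not memory:
        return random.choice(actions)          # no experience yet: explore
    recalled = sorted(memory, key=lambda m: similarity(sensing, m[0]), reverse=True)[:k]
    best = max(recalled, key=lambda m: m[2])   # highest-valued similar episode
    return best[1] if best[2] > 0 else random.choice(actions)

def remember(sensing, action, value):
    memory.append((sensing, action, value))

# One control step: sense, act, evaluate (value plays the motivational role), store.
sensing = (0.2, 0.9)                                  # e.g. left/right proximity readings
action = choose_action(sensing, ["forward", "turn_left", "turn_right"])
value = 1.0 if action == "turn_left" else -1.0        # toy evaluation of the outcome
remember(sensing, action, value)
```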


Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
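A simplified stand-in for a keyword-based, high-level descriptor built from region labels: the vocabulary, labels and cosine-similarity retrieval below are assumptions for illustration, and the CCL framework's actual keyword assignment and feature models are not reproduced here.

```python
# Simplified stand-in for a high-level, keyword-based image descriptor.
# The real CCL framework assigns keywords to regions by combining collateral
# text with low-level visual features; here the labels are simply given.

VOCABULARY = ["sky", "water", "grass", "sand", "building", "person"]

def semantic_descriptor(region_labels):
    """Turn per-region keyword labels into a normalised keyword-frequency vector."""
    counts = [region_labels.count(word) for word in VOCABULARY]
    total = sum(counts) or 1
    return [c / total for c in counts]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

beach = semantic_descriptor(["sky", "water", "sand", "person"])
city  = semantic_descriptor(["sky", "building", "building", "person"])
coast = semantic_descriptor(["sky", "water", "water", "sand"])

# Retrieval by descriptor similarity: the coast image is closer to the beach image.
print(cosine(beach, coast), cosine(beach, city))
```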


Abstract:

Many algorithms have been developed to achieve motion segmentation for video surveillance. These algorithms produce varying performance under the effectively infinite range of changing conditions, and it has been recognised that, individually, they have useful properties. Fusing the statistical results of these algorithms is investigated, with robust motion segmentation in mind.
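One simple way such a fusion could work is weighted per-pixel voting over the binary foreground masks produced by each algorithm; the weights, threshold and masks below are assumptions, not the fusion scheme investigated in the paper.

```python
import numpy as np

# Illustrative fusion of binary foreground masks from several motion segmentation
# algorithms by weighted voting. Weights and threshold are assumed values.

def fuse_masks(masks, weights=None, threshold=0.5):
    """masks: list of HxW arrays with values in {0, 1}; returns a fused binary mask."""
    masks = np.stack([m.astype(float) for m in masks])
    if weights is None:
        weights = np.ones(len(masks))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    support = np.tensordot(weights, masks, axes=1)   # per-pixel weighted vote
    return (support >= threshold).astype(np.uint8)

# Three disagreeing 4x4 masks; the fused result keeps pixels most algorithms agree on.
a = np.array([[0, 0, 1, 1]] * 4)
b = np.array([[0, 1, 1, 1]] * 4)
c = np.array([[0, 0, 0, 1]] * 4)
print(fuse_masks([a, b, c], weights=[0.5, 0.25, 0.25]))
```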


Abstract:

A recent area for investigation into the development of adaptable robot control is the use of living neuronal networks to control a mobile robot. The so-called Animat paradigm comprises a neuronal network (the ‘brain’) connected to an external embodiment (in this case a mobile robot), facilitating potentially robust, adaptable robot control and increased understanding of neural processes. Sensory input from the robot is provided to the neuronal network via stimulation on a number of electrodes embedded in a specialist Petri dish (Multi Electrode Array (MEA)); accurate control of this stimulation is vital. We present software tools allowing precise, near real-time control of electrical stimulation on MEAs, with fast switching between electrodes and the application of custom stimulus waveforms. These Linux-based tools are compatible with the widely used MEABench data acquisition system. Benefits include rapid stimulus modulation in response to neuronal activity (closed loop) and batch processing of stimulation protocols.
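To give a feel for the closed-loop idea (stimulus modulation in response to recorded activity), here is a toy policy that turns recent spike rates into a stimulation choice. The function names, electrode labels and threshold are placeholders and do not reflect the MEABench API or the presented tools.

```python
import time

# Hypothetical sketch of a closed-loop stimulation policy: map recent spike
# activity on recording electrodes to a stimulation command. The interfaces
# here are placeholders, not the MEABench or presented tools' APIs.

def spike_rate(spike_times, window_s, now):
    """Spikes per second on one electrode over the last `window_s` seconds."""
    recent = [t for t in spike_times if now - window_s <= t <= now]
    return len(recent) / window_s

def choose_stimulus(left_rate, right_rate, threshold_hz=5.0):
    """Very simple policy: stimulate the side whose recorded activity is lower."""
    if max(left_rate, right_rate) < threshold_hz:
        return None                      # network quiet: no stimulus this cycle
    return "stim_electrode_left" if left_rate < right_rate else "stim_electrode_right"

# One iteration of the loop with fabricated spike times (seconds).
now = time.time()
left_spikes  = [now - 0.05, now - 0.12, now - 0.31]
right_spikes = [now - 0.02]
cmd = choose_stimulus(spike_rate(left_spikes, 0.5, now), spike_rate(right_spikes, 0.5, now))
print(cmd)  # real tools would translate this into an electrode/waveform selection
```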


Abstract:

A new control paradigm for Brain Computer Interfaces (BCIs) is proposed. BCIs provide a means of communication direct from the brain to a computer, allowing individuals with motor disabilities an additional channel of communication and control of their external environment. Traditional BCI control paradigms use motor imagery, frequency rhythm modification or the Event Related Potential (ERP) as a means of extracting a control signal. Here, a control paradigm based on speech imagery is first proposed; further to this, a unique system for identifying correlations between components of the EEG and target events is introduced.
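The basic idea of relating an EEG component to target events can be illustrated with a per-epoch measure correlated against event labels; the simulated data and the choice of signal power as the component are assumptions, not the system proposed in the paper.

```python
import numpy as np

# Illustrative only: correlate one EEG component (here, per-epoch signal power)
# with a binary target-event label, using simulated data.

rng = np.random.default_rng(0)
n_epochs, n_samples = 40, 128

events = rng.integers(0, 2, n_epochs)               # 1 = target event in this epoch
epochs = rng.normal(size=(n_epochs, n_samples))
epochs[events == 1] *= 1.8                          # simulated effect: more power on targets

power = (epochs ** 2).mean(axis=1)                  # one component value per epoch
r = np.corrcoef(power, events)[0, 1]                # point-biserial correlation
print(f"correlation between epoch power and target events: {r:.2f}")
```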


Abstract:

This paper discusses and compares the use of vision-based and non-vision-based technologies in developing intelligent environments. Related projects that use vision-based techniques in intelligent environment design are reviewed; their achieved functions, technical issues and drawbacks are discussed and summarized, and potential solutions for future improvement are proposed, leading to the prospective direction of my PhD research.


Abstract:

In all biological processes, protein molecules and other small molecules interact in order to function, forming transient macromolecular complexes. This interaction of two or more molecules can be described as a docking event. Docking is an important phase in structure-based drug design strategies, as it can be used as a method to simulate protein–ligand interactions. Various programs exist that allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could visualize the molecules participating in the docking process, manipulate their structures and manually dock them in an immersive environment before submitting the new conformations to an automated docking process, which could help stimulate the design/docking process and greatly reduce docking time and resources. To achieve this, we propose a new virtual modelling/docking program in which the advantages of virtual modelling programs and the efficiency of the algorithms in existing docking programs are merged.
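As a minimal sketch of the automated step that a manually posed conformation would be handed to, a pairwise distance-based score over receptor and ligand atoms is shown below; the coordinates, parameters and Lennard-Jones-style scoring term are illustrative assumptions, not the scoring function of any particular docking program or of the proposed system.

```python
import math

# Toy pairwise scoring of a manually posed ligand against a receptor, as a stand-in
# for the automated docking step. Coordinates are fabricated; real docking programs
# use far richer force fields and search over conformations.

receptor = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]   # receptor atom positions (Å)
ligand   = [(3.0, 0.5, 0.0), (3.8, 1.2, 0.0)]                    # ligand atom positions (Å)

def pair_score(r, sigma=3.4, epsilon=0.2):
    """Lennard-Jones-style term: repulsive when too close, mildly attractive near sigma."""
    x = sigma / r
    return 4 * epsilon * (x**12 - x**6)

def dock_score(receptor_atoms, ligand_atoms):
    total = 0.0
    for a in receptor_atoms:
        for b in ligand_atoms:
            total += pair_score(math.dist(a, b))
    return total   # lower (more negative) indicates a better pose

print(f"score of this pose: {dock_score(receptor, ligand):.3f}")
```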