962 results for Computer Modelling


Relevance:

30.00%

Publisher:

Abstract:

Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications for achieving proper Ambient Assisted Living, and key challenges remain before robust methods can be realized. One of the major limitations of today's Ambient Intelligence systems is the lack of semantic models of the activities in the environment that would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem carry a high degree of uncertainty, so the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented that allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which sits on top of the first at a higher level of abstraction, takes its input from the first module's output, and performs ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled as a rule-based system supporting expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving accuracies of 90.1% and 91.1% for low- and high-level activities respectively. This is an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, giving the framework more usability in the final application. As a result, human behaviour recognition can help assist people with special needs in areas such as healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
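The fuzzy linguistic labels mentioned above can be illustrated with a minimal sketch: triangular membership functions over a measured quantity, combined with the minimum t-norm as the fuzzy AND. The label names, cut-off values and the example rule are illustrative assumptions, not taken from the thesis's ontology.

```python
# A minimal sketch of fuzzy linguistic labels in an activity rule,
# assuming triangular membership functions (illustrative values).

def triangular(a, b, c):
    """Return a triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical linguistic labels for "interaction duration" in seconds.
short = triangular(0, 2, 5)
long_ = triangular(3, 10, 20)

# Fuzzy rule: IF duration is short AND object is 'cup' THEN 'drinking'.
def rule_drinking(duration, obj):
    object_match = 1.0 if obj == "cup" else 0.0
    return min(short(duration), object_match)  # AND = minimum t-norm

print(rule_drinking(1.5, "cup"))  # degree of truth of the rule
```

A rule fires to a degree rather than a boolean, which is what lets the ontology express statements close to natural language while tolerating execution variations.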

Relevance:

30.00%

Publisher:

Abstract:

Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage is an indicator of the high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be achieved in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that let us prove its correctness together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that we can separate the modelling of one node from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B.
Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity than creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is mostly tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates their more widespread use in the future.
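The thesis models the streaming adaptation in Event-B, which is not executable here; as an informal Python illustration of the kind of adaptation described, the sketch below prioritizes missing pieces inside a window after the playback position (fetched in order) and falls back to the classic rarest-first rule elsewhere. The window size and availability map are assumptions, not the thesis's algorithm.

```python
# Sketch: BitTorrent piece selection adapted for on-demand streaming.
# Pieces near the playback position are urgent and fetched in order;
# everything else uses rarest-first. Illustrative, not from the thesis.

def select_piece(have, availability, playback_pos, window=4):
    """Return the next piece index to request, or None if complete."""
    missing = [i for i in range(len(availability)) if i not in have]
    if not missing:
        return None
    # In-order within the urgent window after the playback position.
    urgent = [i for i in missing if playback_pos <= i < playback_pos + window]
    if urgent:
        return min(urgent)
    # Otherwise classic rarest-first (ties broken by lowest index).
    return min(missing, key=lambda i: (availability[i], i))

# availability[i] = how many peers hold piece i (invented numbers).
print(select_piece({0, 1}, [3, 1, 4, 1, 5, 9, 2, 6], playback_pos=0))
```

The two-regime structure is exactly the property one would want to verify formally: the urgent window guarantees timely playback while rarest-first preserves swarm health.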

Relevance:

30.00%

Publisher:

Abstract:

Modern societies depend more and more on computer systems, and there is thus increasing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyse and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where further research is desirable: supporting the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of design-defect detection. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analysed with data-mining techniques to identify patterns of quality evolution. We studied how design defects appear in and disappear from software. Software is typically designed as a hierarchy of components, but quality models do not take this organization into account. In the last part of the dissertation, we present a two-level quality model.
These models have three parts: a model at the component level, a model that evaluates the importance of each component, and another that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models identify change-prone classes better. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages. Our models were able to distinguish between very high-quality sites and randomly chosen ones. Throughout the dissertation, we present not only theoretical problems and their solutions, but also experiments conducted to demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved in all three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe our two-level models are an interesting starting point for further research.
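The three-part, two-level composition can be sketched very simply: score each component, weight it by an importance model, and aggregate. The sketch below assumes a weighted-average composition with invented metric formulas; the dissertation's actual models are Bayesian, so this only illustrates the structure, not the method.

```python
# Illustrative two-level quality model: component scores, an importance
# model, and a composite score. Formulas and weights are assumptions.

def component_quality(metrics):
    """Level 1: score one component (e.g. a method) from its metrics."""
    # Hypothetical: penalize size and complexity.
    return max(0.0, 1.0 - 0.01 * metrics["loc"] - 0.05 * metrics["complexity"])

def importance(metrics):
    """Level 2a: weight a component by how much it matters (here: size)."""
    return float(metrics["loc"])

def composite_quality(components):
    """Level 2b: quality of the composite (e.g. a class) from its parts."""
    weights = [importance(m) for m in components]
    scores = [component_quality(m) for m in components]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

methods = [{"loc": 10, "complexity": 2}, {"loc": 30, "complexity": 8}]
print(round(composite_quality(methods), 3))
```

Separating the importance model from the component model is the key design choice: the same component scores can yield different composite verdicts depending on which parts dominate.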

Relevance:

30.00%

Publisher:

Abstract:

Subdivision surfaces provide a promising alternative in geometric modelling, with advantages over the classical trimmed-NURBS representation, in particular for modelling piecewise smooth surfaces. In this thesis, we consider the problem of geometric operations on subdivision surfaces under the strict requirement of correct topological form. Since this problem can be ill-conditioned, we propose an approach for managing the uncertainty that exists in geometric computation. When we demand exact topological information and consider the robustness issues inherent in geometric operations on solid models, it becomes clear that the problem can be ill-conditioned in the presence of the uncertainty that is omnipresent in the data. We therefore propose an interactive approach to managing the uncertainty of geometric operations, within a framework of computation based on IEEE arithmetic and subdivision-surface modelling. An algorithm for the planar-cut problem is then presented that aims to satisfy the topological requirement mentioned above.
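The ill-conditioning at the heart of a planar cut can be shown with a tiny sketch: classifying a point against the cutting plane in IEEE floating point. Points whose signed distance falls inside an uncertainty band are reported as undecidable, which is exactly where an interactive decision would be requested. The epsilon value is an illustrative assumption, not the thesis's model.

```python
# Sketch: uncertainty-aware point/plane classification for a planar cut.
# Inside the epsilon band the topological decision is deferred.

def classify(point, plane_normal, plane_d, eps=1e-9):
    """Signed-distance test: 'above', 'below', or 'uncertain'."""
    dist = sum(p * n for p, n in zip(point, plane_normal)) + plane_d
    if dist > eps:
        return "above"
    if dist < -eps:
        return "below"
    return "uncertain"  # decision deferred, e.g. to the user

# Cut plane z = 0 (normal (0, 0, 1), d = 0).
print(classify((0.0, 0.0, 0.5), (0.0, 0.0, 1.0), 0.0))    # a clear case
print(classify((0.0, 0.0, 1e-12), (0.0, 0.0, 1.0), 0.0))  # inside the band
```

A naive `dist > 0` test would silently flip topology for near-coincident geometry; making the uncertain band explicit is what enables the interactive handling the abstract describes.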

Relevance:

30.00%

Publisher:

Abstract:

Sharing information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis attempts to investigate the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which contains five different technologies at different tiers. The results of the experiments conducted, and their analysis, show that such an architecture helps keep the different components of the software intact and insulated from internal or external faults.
The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. Another empirical study was carried out to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and distils the information required to satisfy the need of the information discoverer from the documents available at its disposal (its information space). The various components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of IDS, and to enable discovery without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action.
IDLIS proves that any legacy system could be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
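Since the thesis does not give the infotron dictionary's data structure, the sketch below is speculative: it approximates clue-driven discovery with an inverted index from terms to documents, intersecting the postings of all supplied clues. The documents and clues are invented for illustration.

```python
# Speculative sketch of clue-driven discovery in the spirit of IDS:
# an "infotron dictionary" approximated by an inverted index.

from collections import defaultdict

def build_dictionary(documents):
    """Map each term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def discover(index, clues):
    """Return documents matching every clue (infotron) supplied."""
    sets = [index.get(c.lower(), set()) for c in clues]
    return set.intersection(*sets) if sets else set()

docs = {
    "d1": "election counting report for district one",
    "d2": "library catalogue report",
    "d3": "election results for district two",
}
index = build_dictionary(docs)
print(sorted(discover(index, ["election", "district"])))
```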

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the use of simulation as a problem-solving tool for a number of logistics-related problems, more specifically studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems; there is therefore a need for conceptual-modelling methodologies that help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development was carried out: system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results, and reporting of findings. We found that the systems could be classified as tightly pre-scheduled, moderately pre-scheduled or unscheduled. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of the various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to three important logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic, and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems, is a workable approach that reduces the number of models needed to study different terminal-related problems.
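The discrete-event style of the Extend models can be illustrated with a minimal event-queue sketch: a single server (say, one yard crane) processing scheduled arrivals in FIFO order. The arrival and service times are invented; this shows the mechanism, not any of the thesis's three model types.

```python
# Minimal discrete-event simulation of a single-server terminal resource:
# an event heap ordered by time, with "arrive" and "depart" events.

import heapq

def simulate(arrivals, service_time):
    """Single-server FIFO queue; returns each job's departure time."""
    events = [(t, "arrive", j) for j, t in enumerate(arrivals)]
    heapq.heapify(events)
    free_at = 0.0
    departures = {}
    while events:
        t, kind, j = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, free_at)          # wait if the server is busy
            free_at = start + service_time
            heapq.heappush(events, (free_at, "depart", j))
        else:
            departures[j] = t
    return departures

# Three wagons arriving one minute apart, two-minute handling each:
print(simulate([0.0, 1.0, 2.0], service_time=2.0))
```

The same skeleton covers tightly pre-scheduled systems (fixed arrival lists) and unscheduled ones (sampled arrival times), which is why a small number of model types can serve many terminal problems.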

Relevance:

30.00%

Publisher:

Abstract:

This thesis attempts to investigate the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Relevance:

30.00%

Publisher:

Abstract:

This paper aims to give a concise survey of the present state of the art of mathematical modelling in mathematics education and instruction. It will consist of four parts. In part 1, some basic concepts relevant to the topic will be clarified and, in particular, mathematical modelling will be defined in a broad, comprehensive sense. Part 2 will review arguments for the inclusion of modelling in mathematics teaching at schools and universities, and identify certain schools of thought within mathematics education. Part 3 will describe the role of modelling in present mathematics curricula and in everyday teaching practice. Some obstacles to mathematical modelling in the classroom will be analysed, as well as the opportunities and risks of computer usage. In part 4, selected materials and resources for teaching mathematical modelling, developed in the last few years in America, Australia and Europe, will be presented. The examples will demonstrate many promising directions of development.

Relevance:

30.00%

Publisher:

Abstract:

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in bulldozers, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here. SITE CONTROLLER assists civil engineers in the design, estimation, and construction of earthworks, including hazardous waste site remediation. The core of SITE CONTROLLER is a site modelling system that represents existing and prospective terrain shapes, roads, hydrology, etc. Around this core are analysis, simulation, and vehicle control tools. Integrating these modules into one program enables civil engineers and contractors to use a single interface and database throughout the life of a project.
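One representative earthworks-estimation computation of the kind such a site modelling system performs is a cut-and-fill volume estimate between an existing terrain grid and a design grid. This is a generic sketch, not SITE CONTROLLER's actual algorithm; the cell area and sample grids are invented.

```python
# Sketch: cut/fill volume estimate from two elevation grids sampled on
# the same uniform grid. Illustrative, not SITE CONTROLLER's method.

def cut_and_fill(existing, design, cell_area=1.0):
    """Return (cut, fill) volumes between two elevation grids."""
    cut = fill = 0.0
    for row_e, row_d in zip(existing, design):
        for e, d in zip(row_e, row_d):
            if e > d:
                cut += (e - d) * cell_area    # material to remove
            else:
                fill += (d - e) * cell_area   # material to add
    return cut, fill

existing = [[2.0, 2.0], [1.0, 1.0]]
design   = [[1.0, 1.0], [1.5, 1.5]]
print(cut_and_fill(existing, design))
```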

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems, and that allows the specific problems of prediction and recommendation to be addressed. In this work, we propose to extract the user's human values scale from the user database in order to improve suitability in open environments such as recommender systems. For this purpose, the methodology is applied to the user's data after interaction with the system. The methodology is illustrated with a case study.

Relevance:

30.00%

Publisher:

Abstract:

As ubiquitous systems have moved out of the lab and into the world, the need to think more systematically about how they are realised has grown. This talk will present intradisciplinary work I have been engaged in with other computing colleagues on how we might develop more formal models and understanding of ubiquitous computing systems. The formal modelling of computing systems has proved valuable in areas as diverse as reliability, security and robustness. However, the emergence of ubiquitous computing raises new challenges for formal modelling due to the contextual nature of these systems and their dependence on unreliable sensing. In this work we undertook an exploration of modelling an example ubiquitous system, the Savannah game, using the approach of bigraphical rewriting systems. This required an unusual intradisciplinary dialogue between formal computing and human-computer interaction researchers to model systematically four perspectives on Savannah: computational, physical, human and technical. Each perspective in turn drew upon a range of different modelling traditions. For example, the human perspective built upon previous work on proxemics, which uses physical distance as a means to understand interaction. In this talk I hope to show how our model explains observed inconsistencies in Savannah and extend it to resolve these. I will then reflect on the need for intradisciplinary work of this form and the importance of the bigraph diagrammatic form in supporting this form of engagement. Speaker biography: Tom Rodden (rodden.info) is a Professor of Interactive Computing at the University of Nottingham. His research brings together a range of human and technical disciplines, technologies and techniques to tackle the human, social, ethical and technical challenges involved in ubiquitous computing and the increasing use of personal data.
He leads the Mixed Reality Laboratory (www.mrl.nott.ac.uk), an interdisciplinary research facility that is home to a team of over 40 researchers. He founded and currently co-directs the Horizon Digital Economy Research Institute (www.horizon.ac.uk), a university-wide interdisciplinary research centre focusing on the ethical use of our growing digital footprint. He previously directed the EPSRC Equator IRC (www.equator.ac.uk), a national interdisciplinary research collaboration exploring the place of digital interaction in our everyday world. He is a fellow of the British Computer Society and the ACM and was elected to the ACM SIGCHI Academy in 2009 (http://www.sigchi.org/about/awards/).

Relevance:

30.00%

Publisher:

Abstract:

This thesis is part of CICYT project TAP 1999-0443-C05-01, whose objective is the design, implementation and evaluation of mobile robots, with a distributed control system, sensing systems and a communication network, for carrying out surveillance tasks. The robots must be able to move through an environment, recognizing the position and orientation of the objects around them. This information must allow each robot to localize itself within its environment so that it can move while avoiding obstacles and carry out its assigned task. The robot must build a dynamic map of the environment that will be used to localize its position. The main goal of the project is for a robot to explore and build a map of the environment without any need to modify the environment itself. This thesis focuses on the study of the geometry of stereoscopic vision systems formed by two cameras, with the aim of obtaining 3D geometric information about the environment of a vehicle. This objective involves the study of camera modelling and calibration and an understanding of epipolar geometry, which is contained in what is called the fundamental matrix. A study of the computation of the fundamental matrix of a stereoscopic system is required in order to reduce the correspondence problem between two image planes. Another objective is to study motion-estimation methods based on differential epipolar geometry in order to perceive the robot's motion and obtain its position. These studies of the geometry surrounding stereoscopic vision systems allow us to present a computer vision system mounted on a mobile robot navigating an unknown environment. The system enables the robot to build a dynamic map of the environment as it moves and to determine its own motion in order to localize itself within the map.
The thesis presents a comparative study of the camera calibration methods most widely used in recent decades. These techniques cover a broad range of the classical calibration methods, which estimate the camera parameters from a set of 3D points and their corresponding 2D projections in an image. The study describes a total of five different calibration techniques, covering implicit versus explicit calibration and linear versus non-linear calibration. A considerable effort was made to use the same nomenclature and to standardize the notation across all the techniques presented; this is one of the main difficulties in comparing calibration techniques, since each author defines different coordinate systems and different sets of parameters. The reader is introduced to camera calibration through the linear, implicit technique proposed by Hall and the linear, explicit technique proposed by Faugeras-Toscani. The method of Faugeras, including radial modelling of lens distortion, is then described, followed by the well-known method proposed by Tsai and, finally, a detailed description of the calibration method proposed by Weng. All the methods are compared both in terms of the camera model used and in terms of calibration accuracy. All of them have been implemented, and their accuracy is analysed with results obtained using both synthetic data and real cameras. By calibrating each camera of the stereoscopic system, a set of geometric constraints between the two images can be established. These relations constitute the epipolar geometry, and they are contained in the fundamental matrix.
Knowing the epipolar geometry, one can: simplify the correspondence problem by reducing the search space to a single epipolar line; estimate the motion of a camera mounted on a mobile robot for tracking or navigation tasks; and reconstruct a scene for inspection, prototyping or mould-making applications. The fundamental matrix is estimated from a set of points in one image and their correspondences in a second image. The thesis presents a state of the art of fundamental-matrix estimation techniques, starting with linear methods such as the seven-point and eight-point methods, continuing with iterative methods such as the gradient-based method and CFNS, and ending with robust methods such as M-Estimators, LMedS and RANSAC. In this work up to 15 methods, with 19 different implementations, are described. These techniques are compared both algorithmically and in terms of the accuracy they achieve. Results obtained with both real images and synthetic images, with different noise levels and different proportions of false correspondences, are presented. Traditionally, the estimation of camera motion is based on applying epipolar geometry between each pair of consecutive images. However, the traditional case of epipolar geometry has some limitations for a camera mounted on a mobile robot: the differences between two consecutive images are very small, which causes inaccuracies in the computation of the fundamental matrix. Moreover, the correspondence problem must be solved, a process which is computationally expensive and not very effective for real-time applications. Under these circumstances, camera motion-estimation techniques are usually based on optical flow and on differential epipolar geometry. The thesis surveys all of these techniques, duly classified.
These methods are described with a unified notation, highlighting the similarities and differences between the discrete and the differential cases of epipolar geometry. In order to apply them to the motion estimation of a mobile robot, these general methods, which estimate camera motion with six degrees of freedom, have been adapted to the case of a mobile robot moving on a planar surface. Results are presented for both the general six-degrees-of-freedom methods and those adapted to a mobile robot, using synthetic data and real image sequences. The thesis ends with a proposal for a localization and map-building system using a stereoscopic system mounted on a mobile robot. Several mobile-robotics applications require a localization system to facilitate vehicle navigation and the execution of planned trajectories. Localization is always relative to the map of the environment in which the robot is moving, and map building in an unknown environment is an important task for future generations of mobile robots. The system presented performs localization and builds the map of the environment simultaneously. The thesis describes the GRILL mobile robot, which was the working platform for this application, together with the stereoscopic vision system that was designed and mounted on the robot. It also describes all the processes involved in the localization and map-building system. The implementation of these processes was made possible by the studies presented earlier (camera calibration, fundamental-matrix estimation, and motion estimation), without which this system could not have been proposed. Finally, maps are presented for several trajectories carried out by the GRILL robot in the laboratory.
The main contributions of this work are: a state of the art on camera calibration methods, compared both in terms of the camera model used and in terms of accuracy; a study of fundamental-matrix estimation methods, in which all the techniques studied are classified and described from an algorithmic point of view; a survey of camera motion-estimation techniques, focused on methods based on differential epipolar geometry, adapted to estimate the motion of a mobile robot; and a mobile-robotics application that builds a dynamic map of the environment and localizes the robot by means of a stereoscopic system, described in terms of both the hardware and the software designed and implemented.
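The way the fundamental matrix reduces the correspondence search can be shown with a tiny sketch: for a point x in the first image, F maps it to the epipolar line l' = F x in the second image, so matching becomes a 1-D search along that line. The matrix below (a rectified pair under pure horizontal translation, where epipolar lines are image rows) and the points are illustrative, not from the thesis.

```python
# Sketch: the epipolar constraint x2 . (F x1) = 0 restricts the
# correspondence search to one line per point. Pure-Python 3x3 algebra.

def epipolar_line(F, x):
    """l' = F x for homogeneous x = (u, v, 1); returns (a, b, c)."""
    return tuple(sum(F[i][j] * x[j] for j in range(3)) for i in range(3))

def on_line(line, x2, tol=1e-9):
    """True when x2 satisfies the epipolar constraint for this line."""
    a, b, c = line
    return abs(a * x2[0] + b * x2[1] + c * x2[2]) < tol

# F for a rectified stereo pair (pure horizontal translation):
# corresponding points then share the same row, v' = v.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

line = epipolar_line(F, (10.0, 5.0, 1.0))   # the line v' = 5 in image two
print(on_line(line, (42.0, 5.0, 1.0)))      # same row: a valid candidate
print(on_line(line, (42.0, 6.0, 1.0)))      # different row: ruled out
```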

Relevance:

30.00%

Publisher:

Abstract:

Our understanding of the climate system has been revolutionized recently by the development of sophisticated computer models. The predictions of such models are used to formulate international protocols intended to mitigate the severity of global warming and its impacts. Yet these models are not perfect representations of reality, because they remove from explicit consideration many physical processes that are known to be key aspects of the climate system but are too small or fast to be modelled explicitly. The purpose of this paper is to give a personal perspective on the current state of knowledge regarding the problem of unresolved scales in climate models. A recent novel solution to the problem is discussed, in which it is proposed, somewhat counter-intuitively, that the performance of models may be improved by adding random noise to represent the unresolved processes.
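The noise proposal can be illustrated with a toy integrator: a deterministic tendency plus an additive stochastic term standing in for the unresolved processes. The model (simple relaxation dx/dt = -x), the noise amplitude and the time step are all invented for illustration; they are not the paper's scheme.

```python
# Toy stochastic parameterization: add random noise to the tendency of a
# deterministic model to represent unresolved processes. Illustrative only.

import random

def integrate(x0, steps, dt=0.1, sigma=0.0, seed=1):
    """Euler-Maruyama-style integration of dx = -x dt + sigma dW."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        deterministic = -x * dt                               # resolved
        stochastic = sigma * rng.gauss(0.0, 1.0) * dt ** 0.5  # unresolved
        x += deterministic + stochastic
    return x

print(integrate(1.0, 50))             # purely deterministic decay
print(integrate(1.0, 50, sigma=0.3))  # with the stochastic term
```

With a fixed seed the stochastic run is reproducible, which is how ensembles of such perturbed runs are compared against the deterministic baseline.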

Relevance:

30.00%

Publisher:

Abstract:

Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
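The benchmark-space idea can be sketched with a small Bayesian update: each reference agent assigns probabilities to the options available at each decision, and observing a player's actual choices updates a posterior over the agents. The agents and their probabilities below are invented for illustration, not the paper's calibrated benchmarks.

```python
# Sketch: Bayesian inference over a space of reference agents from
# observed move choices. Agents and probabilities are illustrative.

def posterior(agents, observed_choices):
    """agents: {name: {choice: prob}}; returns a normalized posterior."""
    post = {name: 1.0 for name in agents}       # uniform prior
    for choice in observed_choices:
        for name, model in agents.items():
            post[name] *= model.get(choice, 1e-9)  # likelihood of the move
    total = sum(post.values())
    return {name: p / total for name, p in post.items()}

agents = {
    "engine-like": {"best": 0.9, "good": 0.1},
    "club-player": {"best": 0.4, "good": 0.4, "blunder": 0.2},
}
# A player who keeps finding the best move looks engine-like:
print(posterior(agents, ["best", "best", "best"]))
```

The same machinery supports the fraud questions mentioned above: an implausibly engine-like posterior for a player's stated strength is evidence worth examining.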

Relevance:

30.00%

Publisher:

Abstract:

Recent severe flooding in the UK has highlighted the need for better information on flood risk, increasing the pressure on engineers to enhance the capabilities of computer models for flood prediction. This paper evaluates the benefits to be gained from the use of remotely sensed data to support flood modelling. The remotely sensed data available can be used either to produce high-resolution digital terrain models (DTMs) (light detection and ranging (Lidar) data), or to generate accurate inundation mapping of past flood events (airborne synthetic aperture radar (SAR) data and aerial photography). The paper reports on the modelling of real flood events that occurred at two UK sites on the rivers Severn and Ouse. At these sites a combination of remotely sensed data and recorded hydrographs was available. It is concluded, first, that Lidar-generated DTMs support the generation of considerably better models and enhance the visualisation of model results and, second, that flood outlines obtained from airborne SAR or aerial images help develop an appreciation of the hydraulic behaviour of important model components and facilitate model validation. The need for further research is highlighted by a number of limitations, namely: the difficulties in obtaining an adequate representation of hydraulically important features such as embankment crests and walls; uncertainties in the validation data; and difficulties in extracting flood outlines from airborne SAR images in urban areas.
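How a DTM constrains a flood outline can be shown with a deliberately simplified "bathtub" sketch: cells connected to the river and lying below a given water level are flooded. Real flood models solve flow equations over the DTM; this only illustrates the terrain constraint. The grid, source cells and water level are invented.

```python
# "Bathtub" sketch of DTM-based inundation mapping: flood all cells
# 4-connected to the river sources whose elevation is below water_level.

from collections import deque

def flood_extent(dtm, sources, water_level):
    """BFS from river cells over 4-connected cells below water_level."""
    rows, cols = len(dtm), len(dtm[0])
    flooded = set(s for s in sources if dtm[s[0]][s[1]] <= water_level)
    queue = deque(flooded)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded
                    and dtm[nr][nc] <= water_level):
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

dtm = [[1.0, 2.0, 5.0],   # elevations in metres; (0, 0) is a river cell
       [1.0, 3.0, 1.5],
       [1.0, 1.2, 1.4]]
print(sorted(flood_extent(dtm, [(0, 0)], water_level=2.0)))
```

Note that the low cell at (1, 2) floods only because the water can reach it around the high ground: the connectivity enforced by the DTM, not elevation alone, shapes the outline, which is why embankment crests and walls matter so much in the paper's limitations.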