843 results for Artificial intelligence algorithms



Abstract:

Planning with partial observability can be formulated as a non-deterministic search problem in belief space. The problem is harder than classical planning, as keeping track of beliefs is harder than keeping track of states, and searching for action policies is harder than searching for action sequences. In this work, we develop a framework for partial observability that avoids these limitations and leads to a planner that scales up to larger problems. For this, the class of problems is restricted to those in which 1) the non-unary clauses representing the uncertainty about the initial situation are invariant, and 2) variables that are hidden in the initial situation do not appear in the body of conditional effects, which are all assumed to be deterministic. We show that such problems can be translated in linear time into equivalent fully observable non-deterministic planning problems, and that a slight extension of this translation renders the problem solvable by means of classical planners. The whole approach is sound and complete provided that, in addition, the state space is connected. Experiments are also reported.


Abstract:

This Master's thesis aims to apply artificial intelligence techniques to analyse the effect of high-intensity exercise on the generation of lncRNA.


Abstract:

This Master's thesis aims to apply artificial intelligence techniques to track the limbs of mice and the whiskers on their snouts. The objective stems from the need of researchers running optogenetic experiments to record the movements of the mice.


Abstract:

In order to improve the management of copyright on the Internet, known as Digital Rights Management, there is a need for a shared language for copyright representation. Current approaches are based on purely syntactic solutions, i.e. a grammar that defines a rights expression language. These languages are difficult to put into practice due to the lack of explicit semantics that would facilitate their implementation. Moreover, they are simple from the legal point of view because they are intended only to model the usage licenses granted by content providers to end users. Thus, they ignore the copyright framework that lies behind them and the whole value chain from creators to end users. Our proposal is to use a semantic approach based on Semantic Web ontologies. We detail the development of a copyright ontology that puts this approach into practice. It models the core copyright concepts for creations, rights, and the basic kinds of actions that operate on content. Altogether, this allows building a copyright framework for the complete value chain. The set of actions operating on content are our smallest building blocks for coping with the complexity of copyright value chains and statements while, at the same time, guaranteeing a high level of interoperability and evolvability. The resulting copyright modelling framework is flexible and complete enough to model many copyright scenarios, not just those related to the economic exploitation of content. The ontology also includes moral rights, so it is possible to model such situations, as shown in the included example model for a withdrawal scenario. Finally, the ontology design and the selection of tools lead to a straightforward implementation. Description Logic reasoners are used for license checking and retrieval: rights, action patterns, and concrete actions are all modelled as classes, so checking whether a right or license grants an action reduces to checking class subsumption, a direct functionality of these reasoners.
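The subsumption idea above can be illustrated loosely with plain Python classes standing in for Description Logic concepts. This is only a toy sketch, not the ontology's actual implementation: all class names here are hypothetical, and a real system would use an OWL ontology with a DL reasoner rather than Python's class hierarchy.

```python
# Toy illustration: rights modelled as classes of actions, so checking
# whether a license grants a concrete action reduces to class subsumption.
# All names are hypothetical stand-ins for ontology concepts.

class Action:
    """Any action operating on content."""

class Reproduce(Action):
    """Making copies of a work."""

class Print(Reproduce):
    """Printing is a special case of reproduction."""

class Distribute(Action):
    """Making copies available to the public."""

class License:
    """A license grants a set of action classes."""
    def __init__(self, granted_actions):
        self.granted_actions = granted_actions

    def permits(self, action_cls):
        # "Subsumption" check: is the action a subclass of a granted class?
        return any(issubclass(action_cls, g) for g in self.granted_actions)

lic = License([Reproduce])
print(lic.permits(Print))       # -> True: printing is subsumed by reproduction
print(lic.permits(Distribute))  # -> False: distribution was not granted
```

In a DL setting the same check is delegated to the reasoner's subsumption service rather than to the host language's type system.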


Abstract:

The control of the correct application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of the negotiation processes, and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the exchange of information between agents.


Abstract:

Tractable cases of the binary CSP fall mainly into two classes: constraint language restrictions and constraint graph restrictions. To better understand and identify the hardest binary CSPs, in this work we propose methods to increase their hardness by increasing the balance of both the constraint language and the constraint graph. The balance of a constraint is increased by maximizing the number of domain elements with the same number of occurrences. The balance of the graph is defined using the classical definition from graph theory. In this sense we present two graph models: a first model that increases the balance of a graph by maximizing the number of vertices with the same degree, and a second one that additionally increases the girth of the graph, because a high girth implies a high treewidth, an important parameter for binary CSP hardness. Our results show that our more balanced graph models and constraints produce instances that are harder, by several orders of magnitude, than typical random binary CSP instances. We also detect, at least for sparse constraint graphs, a higher treewidth for our graph models.
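The degree-balancing idea can be sketched with a simple greedy generator: repeatedly connect the lowest-degree vertices that are not yet adjacent, so the degree sequence stays as even as possible. This is only an illustrative sketch, not the paper's actual graph model, and the function name is ours.

```python
import random

# Illustrative sketch (not the paper's generator): build a random graph on
# n vertices with e edges whose degree sequence is kept balanced by always
# connecting the currently lowest-degree non-adjacent vertex pair.

def balanced_graph(n, e, seed=0):
    assert e <= n * (n - 1) // 2, "too many edges requested"
    rng = random.Random(seed)
    degree = [0] * n
    edges = set()
    while len(edges) < e:
        # Visit vertices from lowest degree up, ties broken randomly.
        order = sorted(range(n), key=lambda v: (degree[v], rng.random()))
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                u, v = order[i], order[j]
                edge = (min(u, v), max(u, v))
                if edge not in edges:
                    edges.add(edge)
                    degree[u] += 1
                    degree[v] += 1
                    break
            else:
                continue  # no partner for order[i]; try the next vertex
            break
    return sorted(edges), degree

edges, degree = balanced_graph(8, 12)
print(len(edges), sum(degree))  # 12 edges, total degree 24
```

The paper's second model additionally raises the girth; this greedy only balances degrees.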


Abstract:

In this paper we provide a new method to generate hard k-SAT instances. We incrementally construct a high-girth bipartite incidence graph of the k-SAT instance. High girth ensures high expansion for the graph, and high expansion implies high resolution width. We have extended this approach to generate hard n-ary CSP instances, and we have also adapted the idea to increase the expansion of the system of linear equations used to generate XORSAT instances, producing harder satisfiable instances than previous generators.
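The incidence graph in question connects each clause to the variables it contains. Below is a minimal sketch that builds such a graph for a k-SAT instance and measures its girth with a BFS from every vertex; the paper's incremental high-girth construction itself is not reproduced, and the function names are ours.

```python
from collections import deque

# Sketch: bipartite clause-variable incidence graph of a k-SAT instance,
# plus a girth measurement via BFS from every vertex. The paper's generator
# constructs this graph incrementally so the girth stays high; here we only
# illustrate the graph and the measurement.

def incidence_graph(clauses, n_vars):
    # Vertices 0..n_vars-1 are variables; n_vars.. are clauses.
    adj = [[] for _ in range(n_vars + len(clauses))]
    for ci, clause in enumerate(clauses):
        for v in clause:
            adj[v].append(n_vars + ci)
            adj[n_vars + ci].append(v)
    return adj

def girth(adj):
    best = float("inf")
    for s in range(len(adj)):
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:
                    # Non-tree edge closes a cycle through s.
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# Two clauses over the same two variables form a 4-cycle in the incidence graph.
adj = incidence_graph([[0, 1], [0, 1]], n_vars=2)
print(girth(adj))  # -> 4
```

A high-girth generator would reject any clause whose addition closes a short cycle in this graph.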


Abstract:

Recently, edge matching puzzles, an NP-complete problem, have received considerable attention from wide audiences thanks to contests with money prizes. We consider these competitions not only a challenge for SAT/CSP solving techniques but also an opportunity to showcase the advances of the SAT/CSP community to a general audience. This paper studies the NP-complete problem of edge matching puzzles, focusing on providing generation models for problem instances of variable hardness and on their resolution through the application of SAT and CSP techniques. On the generation side, we also identify the phase transition phenomena for each model. As solving methods, we employ both SAT solvers, through translation to a SAT formula, and two ad hoc CSP solvers we have developed, with different levels of consistency, employing several generic and specialized heuristics. Finally, we conducted an extensive experimental investigation to identify the hardest generation models and the best performing solving techniques.


Abstract:

The aim of this work was to develop and test a new type of control system for the grinding process at the UPM-Kymmene Kaukas groundwood mill. The basic idea of the new control system was to keep each individual grinding stone continuously at its optimal operating point, with all machines in the mill aiming at the same quality of the pulp produced under the stone. The new control approach was called the Optimum Operating Point Strategy (OOPS). Keeping the grinding stone at its optimal operating point was achieved mainly by water-jet sharpening, whose intensity was controlled by an expert system (AI system). In addition, the effect of programmed shoe speed control on the use of the grinder's resources was tested. The AI system inferred the need for water-jet sharpening from a CSF model and the grinder's resources. According to the follow-up results, introducing the AI system for water-jet sharpening improved the uniformity of production and of pulp quality. The grinders' resources were observed to decrease along the wood feed line: thicker logs accumulate at the end of the line, so the need for hydraulic pressure in particular increases in the grinders at the end of the line. The grinder's resources were utilized better by loading the machine with programmed shoe speed control (ONS) than with constant shoe speed control (VNS). However, insufficient hydraulic pressure resources limited the operation of ONS during the trial run. The resources were not optimized during the trial, because the stone surface was to be kept as stable as possible. No mechanical treatment was applied to the stone, even though the pulp transport capacity of the stone surface was observed to be poor. The AI system was put in charge of water-jet sharpening only after the ONS-VNS trial run. After mechanical roll sharpening the pulp properties changed, because the sharp edges of the stone surface cut fibres, reducing the bonding ability of the pulp. Immediately after the roll treatment, the measured CSF could even drop considerably, while the CSF calculated by the AI system rose clearly, indicating a decrease in specific energy consumption. After a few days of grinding, the measured and calculated CSF reached the same level.


Abstract:

The purpose of this research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events, tested on the example of price prediction during intraday trading on the stock market. The experiments predict average prices over 1, 2, 5 and 10 minutes, based on one day of data, using two different types of forecasting systems: one based on recurrent neural networks and the other on backpropagation neural networks. The precision of the predictions is measured by the absolute error and the error of market direction, and the economic effectiveness is estimated by a special trading system. In conclusion, the best neural network structures are tested on data from a 31-day interval. The best average percentage profits per transaction (buying + selling) were 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes respectively. The investigation may be of interest to investors who have access to a fast information channel with data refreshed every minute.
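The two error measures mentioned above can be sketched as follows. This is a minimal interpretation of "absolute error" and "error of market direction", not the thesis's actual evaluation code; the function names and sample prices are ours.

```python
# Sketch of the two evaluation measures: mean absolute error of predicted
# prices, and the fraction of steps where the predicted direction of the
# price move disagrees with the actual one. Names and data are illustrative.

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def direction_error(actual, predicted):
    # For each step, compare the sign of the actual move with the sign of
    # the move implied by the prediction from the previous actual price.
    wrong = 0
    for i in range(1, len(actual)):
        actual_move = actual[i] - actual[i - 1]
        predicted_move = predicted[i] - actual[i - 1]
        if actual_move * predicted_move < 0:
            wrong += 1
    return wrong / (len(actual) - 1)

actual = [100.0, 101.0, 100.5, 101.5]
predicted = [100.2, 100.8, 100.9, 101.2]
print(mean_absolute_error(actual, predicted))  # mean of 0.2, 0.2, 0.4, 0.3
print(direction_error(actual, predicted))      # every direction matched here
```

A trading-system evaluation, as in the thesis, would additionally translate these predictions into buy/sell decisions and accumulate profit per transaction.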


Abstract:

This project was developed in the area of computer vision. By recognizing a pattern, we can define three axes that form a three-dimensional space, in which we have implemented a video game of combats between robots on top of a real environment.


Abstract:

Hexadom is a board game of our own creation, inspired by traditional dominoes, with which it shares the objective: to play all the pieces (hexagons, in the case of Hexadom). But there is a substantial difference: its complexity. Unlike the original game, where the ends of two tiles bearing the same number are joined, in Hexadom hexagons must be joined to each other along at least two of their sides while keeping the colours consistent.