849 results for Artificial intelligence
Abstract:
In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP and is then used to compute the local displacement gradient and thus the local strain tensor. There are three main advantages to this method. First, the 3-D strain tensor is calculated for every pixel without interpolation; this is unprecedented in cardiac MR imaging. Second, the method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in-vivo validation.
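For illustration only, and not the zHARP acquisition itself: once a 3-D displacement field u is known at every voxel, the local strain tensor follows from the displacement gradient, E = 1/2 (F^T F - I) with F = I + grad(u). The sketch below assumes a synthetic displacement field on a regular grid; every value in it is hypothetical.

```python
import numpy as np

# Synthetic 3-D displacement field on a regular grid; in the paper this field
# would come from zHARP, here it is a smooth hypothetical example.
nx, ny, nz = 16, 16, 8
x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
u = np.stack([0.010 * x + 0.002 * y,
              0.005 * y,
              0.003 * x + 0.001 * z], axis=-1)          # shape (nx, ny, nz, 3)

# Displacement gradient du_i/dx_j at every voxel via finite differences.
grad = np.stack([np.stack(np.gradient(u[..., i], axis=(0, 1, 2)), axis=-1)
                 for i in range(3)], axis=-2)           # shape (nx, ny, nz, 3, 3)

F = np.eye(3) + grad                                    # deformation gradient
E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))      # Green-Lagrange strain

print(E[8, 8, 4])  # the 3x3 strain tensor at one voxel, no interpolation needed
```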
Abstract:
This Master's thesis (TFM) aims to apply artificial intelligence techniques to track the limbs of mice and the whiskers on their snouts. This goal arises from the need of researchers running optogenetic experiments to record the movements of the mice.
Abstract:
DDM is a framework that combines intelligent agents with traditional artificial intelligence algorithms such as classifiers. The central idea of this project is to create a multi-agent system that makes it possible to compare different views and combine them into a single one.
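As a loose illustration of merging several classifier "views" into a single prediction (not the DDM framework itself), the sketch below lets three placeholder classifiers vote on a toy dataset; the dataset, the classifiers and their settings are all assumptions.

```python
# Placeholder "agents", each holding a traditional classifier, combined by
# majority voting into a single view; dataset and models are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

agents = [("tree", DecisionTreeClassifier(max_depth=3)),
          ("nb", GaussianNB()),
          ("logreg", LogisticRegression(max_iter=1000))]

combined = VotingClassifier(estimators=agents, voting="hard").fit(X, y)
print(combined.predict(X[:5]))  # the single, combined prediction for 5 samples
```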
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be used to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models that depend on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be determined with a test done in a test combustor or in a commercial boiler. The steel samples can be placed in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network and built upon a corrosion database of fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system supports the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
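To make the data-driven part concrete, here is a minimal sketch of a neural-network corrosion-rate predictor. The feature names (fuel chlorine, alkali content, material temperature), the data and the network are hypothetical stand-ins for the study's fuel and bed material database and its fuzzy-logic plus neural-network model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: fuel chlorine (%), alkali content, material temperature (C).
X = rng.uniform([0.0, 0.0, 450.0], [1.0, 2.0, 600.0], size=(200, 3))
# Hypothetical corrosion rate that increases with all three inputs, plus noise.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * (X[:, 2] - 450.0) + rng.normal(0, 0.05, 200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[0.4, 1.0, 550.0]]))  # predicted corrosion rate for one case
```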
Abstract:
In order to improve the management of copyright on the Internet, known as Digital Rights Management, there is a need for a shared language for copyright representation. Current approaches are based on purely syntactic solutions, i.e. a grammar that defines a rights expression language. These languages are difficult to put into practice due to the lack of explicit semantics that would facilitate their implementation. Moreover, they are simple from the legal point of view because they are intended only to model the usage licenses granted by content providers to end-users. Thus, they ignore the copyright framework that lies behind them and the whole value chain from creators to end-users. Our proposal is to use a semantic approach based on semantic web ontologies. We detail the development of a copyright ontology in order to put this approach into practice. It models the copyright core concepts for creations, rights and the basic kinds of actions that operate on content. Altogether, it allows building a copyright framework for the complete value chain. The actions operating on content are our smallest building blocks; they allow us to cope with the complexity of copyright value chains and statements while guaranteeing a high level of interoperability and evolvability. The resulting copyright modelling framework is flexible and complete enough to model many copyright scenarios, not just those related to the economic exploitation of content. The ontology also includes moral rights, so it is possible to model this kind of situation, as shown in the included example model for a withdrawal scenario. Finally, the ontology design and the selection of tools result in a straightforward implementation. Description Logic reasoners are used for license checking and retrieval. Rights are modelled as classes of actions, action patterns are also modelled as classes, and the same is done for concrete actions. Checking whether a right or license grants an action then reduces to checking class subsumption, which is a direct functionality of these reasoners.
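A minimal sketch of the "license checking as class subsumption" idea, using Python's class hierarchy as a stand-in for OWL classes and a Description Logic reasoner; the class names are invented for illustration and are not the ontology's actual vocabulary.

```python
# OWL classes and the reasoner are emulated with Python classes and issubclass.
class Action: pass
class Reproduce(Action): pass          # the Reproduction right, as a class of actions
class Distribute(Action): pass         # the Distribution right

class CopyForBackup(Reproduce): pass   # a concrete action pattern, a kind of Reproduce
class SellCopies(Distribute): pass     # a concrete action pattern, a kind of Distribute

# Hypothetical license: the class of actions it permits is Reproduce.
PermittedByLicense = Reproduce

def grants(permitted_cls, action_cls):
    """The license grants the action iff the action class is subsumed by the
    class of permitted actions (class subsumption)."""
    return issubclass(action_cls, permitted_cls)

print(grants(PermittedByLicense, CopyForBackup))  # True: copying is a Reproduce action
print(grants(PermittedByLicense, SellCopies))     # False: selling is not permitted
```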
Abstract:
The control of the right application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. Since a medical service can be involved in multiple medical protocols, specialized domain agents are kept independent of the negotiation processes, and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the process of exchanging information between agents.
Abstract:
Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness, beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate with problem hardness.
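As a toy illustration of such instances (not the balanced generators evaluated in the paper), the sketch below builds a complete Generalized Sudoku solution with rectangular R x C blocks from a cyclic base pattern, relabels the symbols at random, and punches holes to obtain a satisfiable quasigroup-with-holes-style instance; all parameters are arbitrary.

```python
import random

def generalized_sudoku(R=2, C=3, holes=12, seed=0):
    """Return an R*C x R*C grid with rectangular R x C blocks; 0 marks a hole."""
    rng = random.Random(seed)
    n = R * C
    # Cyclic base pattern: a valid solution for blocks of R rows x C columns.
    grid = [[((i % R) * C + i // R + j) % n + 1 for j in range(n)] for i in range(n)]
    # Relabel the symbols at random; validity is preserved.
    relabel = list(range(1, n + 1))
    rng.shuffle(relabel)
    grid = [[relabel[v - 1] for v in row] for row in grid]
    # Punch holes so the instance stays satisfiable (quasigroup-with-holes style).
    for i, j in rng.sample([(i, j) for i in range(n) for j in range(n)], holes):
        grid[i][j] = 0
    return grid

for row in generalized_sudoku():
    print(row)
```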
Abstract:
Tractable cases of the binary CSP are mainly divided into two classes: constraint language restrictions and constraint graph restrictions. To better understand and identify the hardest binary CSPs, in this work we propose methods to increase their hardness by increasing the balance of both the constraint language and the constraint graph. The balance of a constraint is increased by maximizing the number of domain elements with the same number of occurrences. The balance of the graph is defined using the classical definition from graph theory. To this end we present two graph models: a first model that increases the balance of a graph by maximizing the number of vertices with the same degree, and a second one that additionally increases the girth of the graph, because a high girth implies a high treewidth, an important parameter for binary CSP hardness. Our results show that our more balanced graph models and constraints produce instances that are harder, by several orders of magnitude, than typical random binary CSP instances. We also observe, at least for sparse constraint graphs, a higher treewidth for our graph models.
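As a small illustration of what a balanced constraint can look like (not the paper's generation model), the sketch below builds a binary relation from k cyclic shifts of one random permutation, so every domain value occurs exactly k times in each column of the relation.

```python
import random
from collections import Counter

def balanced_relation(d=6, k=3, seed=0):
    """Allowed tuples: k cyclic shifts of one random permutation of 0..d-1."""
    base = list(range(d))
    random.Random(seed).shuffle(base)
    return sorted((i, base[(i + s) % d]) for s in range(k) for i in range(d))

rel = balanced_relation()
print(Counter(a for a, _ in rel))  # every value occurs exactly k times on the left
print(Counter(b for _, b in rel))  # ... and exactly k times on the right
```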
Abstract:
In this paper we provide a new method to generate hard k-SAT instances. We incrementally construct a high-girth bipartite incidence graph of the k-SAT instance. A high girth ensures high expansion of the graph, and high expansion implies high resolution width. We have extended this approach to generate hard n-ary CSP instances, and we have also adapted the idea to increase the expansion of the system of linear equations used to generate XORSAT instances, which lets us produce harder satisfiable instances than previous generators.
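A rough sketch of the incremental idea, under assumed parameters: clauses are added one at a time and rejected whenever they would close a short cycle in the clause-variable incidence graph, so the girth stays at or above a chosen bound. The distance test and all settings are illustrative choices, not the paper's exact construction.

```python
import random
from collections import deque

def bfs_dist(adj, src, dst, limit):
    """Length of a shortest src-dst path if it is at most `limit`, else None."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        if dist == limit:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def high_girth_ksat(n_vars=50, n_clauses=120, k=3, min_girth=6, seed=0):
    rng = random.Random(seed)
    adj = {}        # bipartite incidence graph over ('v', i) and ('c', j) nodes
    clauses = []
    while len(clauses) < n_clauses:
        variables = rng.sample(range(n_vars), k)
        # A new clause node closes a cycle of length dist(u, v) + 2 for each pair
        # of its variables; reject the clause if any such cycle would be short.
        if any(bfs_dist(adj, ('v', u), ('v', v), min_girth - 3) is not None
               for i, u in enumerate(variables) for v in variables[i + 1:]):
            continue
        c = ('c', len(clauses))
        for v in variables:
            adj.setdefault(c, set()).add(('v', v))
            adj.setdefault(('v', v), set()).add(c)
        clauses.append([v + 1 if rng.random() < 0.5 else -(v + 1) for v in variables])
    return clauses

print(high_girth_ksat()[:5])
```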
Abstract:
Recently, edge matching puzzles, an NP-complete problem, have received considerable attention from wide audiences thanks to contests with money prizes. We consider these competitions not only as a challenge for SAT/CSP solving techniques but also as an opportunity to showcase the advances of the SAT/CSP community to a general audience. This paper studies the NP-complete problem of edge matching puzzles, focusing on generation models for problem instances of variable hardness and on their resolution through the application of SAT and CSP techniques. On the generation side, we also identify the phase transition phenomenon for each model. As solving methods we employ both SAT solvers, via translation to a SAT formula, and two ad-hoc CSP solvers we have developed, with different levels of consistency and several generic and specialized heuristics. Finally, we conducted an extensive experimental investigation to identify the hardest generation models and the best-performing solving techniques.
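As a small illustration of the problem domain (not the paper's generation models), the sketch below creates a solvable edge matching instance by colouring the edges of an n x n grid at random, reading off each cell as a piece, and shuffling the pieces; the frame colour and parameters are arbitrary.

```python
import random

def edge_matching_instance(n=4, colours=4, frame=0, seed=0):
    """Pieces are (top, right, bottom, left) colour tuples; frame edges get colour 0."""
    rng = random.Random(seed)
    # Horizontal edges: (n+1) rows of n; vertical edges: n rows of (n+1).
    h = [[frame if r in (0, n) else rng.randint(1, colours) for _ in range(n)]
         for r in range(n + 1)]
    v = [[frame if c in (0, n) else rng.randint(1, colours) for c in range(n + 1)]
         for _ in range(n)]
    pieces = [(h[r][c], v[r][c + 1], h[r + 1][c], v[r][c])
              for r in range(n) for c in range(n)]
    rng.shuffle(pieces)   # the scrambled pieces form the puzzle instance
    return pieces

for piece in edge_matching_instance()[:4]:
    print(piece)
```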
Abstract:
The goal of this work was to develop and test a new type of control system for the grinding process at the UPM-Kymmene Kaukas groundwood mill. The basic idea of the new control system was to keep each individual grinding stone continuously at its optimal operating point, while aiming for the same pulp quality beneath the stone on all machines in the mill. The new control approach was called the Optimum Operating Point Strategy (OOPS). Keeping the grinding stone at its optimal operating point was achieved mainly by water-jet sharpening, whose intensity was controlled by an expert system (AI system). In addition, the effect of programmed shoe speed control on the use of the grinder's resources was tested. The AI system inferred the need for water-jet sharpening from a CSF model and the grinder's resources. Based on the follow-up results, introducing the AI system for water-jet sharpening improved the consistency of production and pulp quality. The grinders' resources were observed to decrease along the wood feed line: thicker logs accumulate at the end of the line, so the need for hydraulic pressure in particular increases on the grinders at the end of the line. The grinder's resources were better utilized by loading the machine with programmed shoe speed control (ONS) than with constant shoe speed control (VNS). However, insufficient hydraulic pressure resources limited the operation of ONS during the trial run. Resources were not optimized during the trial run, because the stone surface was to be kept as stable as possible. No mechanical treatment was performed on the stone, even though the pulp transport capacity of the stone surface was observed to be poor. The AI system was put in charge of water-jet sharpening only after the ONS versus VNS trial run. After mechanical roll sharpening, the pulp properties changed because the sharp edges of the stone surface cut fibres, reducing the bonding ability of the pulp. Immediately after the roll treatment the measured CSF could even drop considerably, while the CSF calculated by the AI system rose clearly, indicating a decrease in specific energy consumption (EOK). After a few days of grinding, the measured and calculated CSF reached the same level.
Abstract:
The purpose of the research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events. This capability is tested on the example of price prediction during intraday trading on the stock market. The experiments predict average 1-, 2-, 5- and 10-minute prices from one day of data, using two different types of forecasting systems. These systems are based on recurrent neural networks and backpropagation neural nets. The precision of the predictions is assessed by the absolute error and the error in market direction. The economic effectiveness is estimated with a special trading system. Finally, the best neural net structures are tested on data from a 31-day interval. The best results for the average percentage profit from one transaction (buy + sell) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes respectively. The investigation may be of interest to investors who have access to a fast information channel with minute-by-minute data refreshment.
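A minimal sketch of this kind of set-up, with synthetic data: a feed-forward (backpropagation) network predicts the next minute's average price from the previous minutes, and the predictions are scored by absolute error and by the error in market direction. Nothing here reproduces the thesis's data or architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.05, 500))   # synthetic 1-minute price series

lags = 10                                            # minutes of history per example
X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
y = prices[lags:]                                    # next-minute price to predict

split = 400
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

abs_err = np.mean(np.abs(pred - y[split:]))
last = X[split:, -1]                                 # last observed price in each window
dir_err = np.mean(np.sign(pred - last) != np.sign(y[split:] - last))
print(f"mean absolute error: {abs_err:.4f}, direction error: {dir_err:.2%}")
```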
Abstract:
We tested and compared the performance of the Roach formula, Partin tables and three Machine Learning (ML) algorithms based on decision trees in identifying N+ prostate cancer (PC). 1,555 cN0 and 50 cN+ PC cases were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1 patients). ML performed better, also when tested on the surgical population, with accuracy, specificity and sensitivity ranging between 48-86%, 35-91% and 17-79%, respectively. ML potentially allows better prediction of the nodal status of PC, and thus a better tailoring of pelvic irradiation.
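A minimal sketch of a decision-tree classifier evaluated with the same three metrics (accuracy, specificity, sensitivity); the predictors, labels and thresholds are synthetic stand-ins, not the study's model or dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: PSA, Gleason score, clinical T stage (encoded 1-3).
X = np.column_stack([rng.lognormal(2.0, 0.6, n),
                     rng.integers(6, 10, n),
                     rng.integers(1, 4, n)])
# Hypothetical node-positive label, loosely driven by the predictors plus noise.
risk = 0.02 * X[:, 0] + 0.5 * (X[:, 1] - 6) + 0.4 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
print(f"sensitivity: {tp / (tp + fn):.2f}")
```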