872 results for computer science, artificial intelligence
Abstract:
The main objective of this work is to apply computer vision techniques to locate and track the limbs of mice within the test environment used in the optogenetics research of the Neuroscience Institute research group at Princeton University, New Jersey.
Abstract:
This project surveys the market for a good open-source business intelligence solution that would let the managers of a fitness club improve the management of their centres and answer some of the questions they have begun to ask about how their business is running, since they suspect it has suffered a decline both in profits and in the trust of its members. The aim of the work was to create a data warehouse fitted to the data they have available, transform those data through ETL processes, and build OLAP cubes so they can be exploited effectively from the chosen BI platform.
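The ETL-plus-star-schema pipeline this abstract describes can be sketched in miniature. The following is a rough illustration in Python with pandas; every table, column and value is invented for the example rather than taken from the project.

```python
import pandas as pd

# Hypothetical source extracts (all names and values are invented).
members = pd.DataFrame({
    "member_id": [1, 2, 3],
    "name": ["Ana", "Marc", "Laia"],
    "centre": ["Centre A", "Centre A", "Centre B"],
})
visits = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "visit_date": ["2023-01-05", "2023-02-10", "2023-01-20", "2023-02-11"],
    "activity": ["gym", "pool", "gym", "spinning"],
})

# Transform: build dimension tables with surrogate keys.
dim_member = members.drop_duplicates("member_id").reset_index(drop=True)
dim_member["member_key"] = dim_member.index

dim_date = visits[["visit_date"]].drop_duplicates().reset_index(drop=True)
dim_date["date_key"] = dim_date.index
dim_date["month"] = pd.to_datetime(dim_date["visit_date"]).dt.month

# Load: the fact table references the dimensions by surrogate key only.
fact_visits = (visits
               .merge(dim_member[["member_id", "member_key"]], on="member_id")
               .merge(dim_date[["visit_date", "date_key"]], on="visit_date")
               [["member_key", "date_key", "activity"]])

# A toy OLAP-style roll-up: visits per centre per month.
report = (fact_visits
          .merge(dim_member, on="member_key")
          .merge(dim_date, on="date_key")
          .groupby(["centre", "month"]).size())
print(report)
```

In a real deployment the roll-up would be served by the OLAP engine of the chosen BI platform rather than computed in pandas; the sketch only shows the shape of the star schema.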
Abstract:
This bachelor's thesis aims to provide an objective assessment of the tools currently available on the market for carrying out business intelligence projects.
Abstract:
The use of artificial intelligence techniques for the detection, diagnosis and control of errors
Abstract:
The development of new tools for chemoinformatics, allied to the use of different algorithms and computer programmes for the structure elucidation of organic compounds, is growing fast worldwide. Massive efforts in research and development are currently being pursued both by academia and by the so-called chemistry software development companies. The demystification of this environment, brought about by the availability of software packages and a vast array of publications, exerts a positive impact on chemistry. In this work, an overview of the more classical approaches as well as new strategies in computer-based tools for the structure elucidation of organic compounds is presented. Historical background is also taken into account, since these techniques began to develop around four decades ago. Attention is paid to companies which develop, distribute or commercialize software, as well as to web-based and open-access tools currently available to chemists.
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Such tools are crucial, as they allow early policy actions to decrease or prevent the further build-up of risks, or to otherwise enhance the shock-absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: (i) the build-up of widespread imbalances, (ii) exogenous aggregate shocks, and (iii) contagion. Accordingly, these systemic risks are matched by three categories of analytical methods for decision support: (i) early-warning, (ii) macro stress-testing, and (iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: (i) to function as a display for individual data concerning entities and their time series, and (ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise the subsequent steps addressed in this thesis:

1. What are the needs for macroprudential oversight?
2. What form do macroprudential data take?
3. Which data and dimension reduction methods hold most promise for the task?
4. How should the methods be extended and enhanced for the task?
5. How should the methods and their extensions be applied to the task?

Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: (i) fuzzifications, (ii) transition probabilities, and (iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, the thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of these visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
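As background, a minimal sketch of the standard SOM update that the SOFSM builds on (not of the SOFSM itself) might look as follows; the grid size, decay schedules and random indicator data are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 6))          # placeholder: 6 financial indicators
rows, cols = 8, 8                         # map grid (arbitrary)
weights = rng.normal(size=(rows, cols, 6))
coords = np.array([[i, j] for i in range(rows)
                   for j in range(cols)]).reshape(rows, cols, 2)

for t, x in enumerate(data):
    # Best-matching unit: the node whose weight vector is closest to the input.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Neighbourhood radius and learning rate decay over time.
    sigma = 3.0 * np.exp(-t / len(data))
    lr = 0.5 * np.exp(-t / len(data))
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    # Pull every node towards the input, weighted by closeness to the BMU.
    weights += lr * h[..., None] * (x - weights)
```

After training, each financial entity can be projected onto its best-matching unit, which is what makes the map usable as a low-dimensional display.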
Abstract:
This study examined how good an AI for a computer game can be implemented with current knowledge and technology. AI was restricted to mean AI-controlled game characters, and simple AI implementations were excluded. The work was carried out by reviewing the relevant literature and information on developer-community web sites. Entertainment value and believability were chosen as the criteria for good AI. A survey of the most popular implementation techniques and of the possibilities of AI showed that, in theory, even a highly advanced AI is feasible. In practice, however, the limited resources of the computer, the limited skills of developers, and the demands set by game development projects appear to constrain the implementation of AI in a commercial product.
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with before robust methods are realized. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment that would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem carry a high degree of uncertainty, so the developed models must be equipped with mechanisms to manage this type of information.

A fuzzy ontology and a semantic hybrid system are presented to allow the modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and variations in activity execution, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which sits on top of the first at a higher level of abstraction, acquires its input from the first module's output, and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled as a rule-based system supporting expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving accuracies of 90.1% and 91.1% for low- and high-level activities respectively. This is an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, giving the framework more usability in the final application.

As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
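The fuzzy linguistic labels mentioned above can be illustrated with a toy sketch; the variable, thresholds and rule below are invented for illustration and are not taken from the thesis.

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic labels for sub-activity duration (seconds).
def short(x):  return triangular(x, 0, 5, 15)
def medium(x): return triangular(x, 10, 30, 60)
def long_(x):  return triangular(x, 45, 90, 180)

# Toy fuzzy rule: "IF grasp is short AND drink is medium THEN activity is drinking".
grasp_t, drink_t = 4.0, 25.0
firing = min(short(grasp_t), medium(drink_t))   # Mamdani-style AND via min
print(f"degree of 'drinking': {firing:.2f}")
```

A rule base of this form lets non-deterministic behaviour (a slightly longer grasp, a slower drink) degrade the rule's firing strength gracefully instead of breaking a crisp match.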
Abstract:
Clinical decision support systems are useful tools for assisting physicians to diagnose complex illnesses. Schizophrenia is a complex, heterogeneous and incapacitating mental disorder that should be detected as early as possible to avoid the most serious outcomes. Artificial intelligence systems of this kind might be useful in the early detection of schizophrenia. The objective of the present study was to describe the development of such a clinical decision support system for the diagnosis of schizophrenia spectrum disorders (SADDESQ). The development is described in four stages: knowledge acquisition, knowledge organization, the development of a computer-assisted model, and the evaluation of the system's performance. The knowledge was extracted from an expert through open interviews aimed at exploring the expert's diagnostic decision-making process for schizophrenia. A graph methodology was employed to identify the elements involved in the reasoning process. Knowledge was first organized and modeled by means of algorithms and then transferred to a computational model created via the covering approach. The performance assessment involved comparing the diagnoses of 38 clinical vignettes between an expert and SADDESQ. The results showed a relatively low rate of misclassification (18-34%) and a good performance by SADDESQ in the diagnosis of schizophrenia, with an accuracy of 66-82%. The accuracy was higher when schizophreniform disorder was counted as the presence of schizophrenia. Although these results are preliminary, SADDESQ has exhibited a satisfactory performance, which needs to be further evaluated within a clinical setting.
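A covering approach scores candidate diagnoses by how well they cover the observed findings; a toy sketch with entirely invented findings and rules (not the actual SADDESQ knowledge base) might look like this:

```python
# Hypothetical knowledge base: each diagnosis "covers" a set of findings.
KNOWLEDGE = {
    "schizophrenia": {"delusions", "hallucinations", "disorganized_speech"},
    "schizophreniform": {"delusions", "hallucinations", "duration_under_6_months"},
    "mood_disorder": {"depressed_mood", "anhedonia"},
}

def rank_diagnoses(findings):
    """Score each diagnosis by the fraction of its defining findings observed."""
    scores = {dx: len(findings & req) / len(req) for dx, req in KNOWLEDGE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

observed = {"delusions", "hallucinations"}
for dx, score in rank_diagnoses(observed):
    print(f"{dx}: {score:.2f}")
```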
Abstract:
The Two-Connected Network with Bounded Ring (2CNBR) problem is a network design problem addressing the connection of servers to create a survivable network with limited redirections in the event of failures. Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique modelled on the social behaviour of flocking birds or schooling fish. This thesis applies PSO to the 2CNBR problem. As PSO was originally designed to handle a continuous solution space, the algorithm had to be modified to adapt it to such a highly constrained discrete combinatorial optimization problem. Presented are an indirect transcription scheme for applying PSO to such discrete optimization problems and an oscillating mechanism for averting stagnation.
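One standard way to let continuous PSO act on a discrete space, in the spirit of the indirect transcription mentioned above, is a random-key encoding: each particle is a real vector decoded into a permutation by sorting. The sketch below applies this to a toy ordering problem; it is not the thesis's actual transcription scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_particles = 8, 20
target = np.arange(n_nodes)                     # toy objective: recover this order

def decode(position):
    return np.argsort(position)                 # random keys -> permutation

def cost(position):
    return np.sum(decode(position) != target)   # number of misplaced nodes

pos = rng.uniform(size=(n_particles, n_nodes))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_particles, n_nodes))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(decode(gbest), cost(gbest))
```

For 2CNBR the decoded structure would be a network topology checked against the two-connectivity and ring-bound constraints rather than a bare permutation.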
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperbalilearning algorithm (HBL) with other learning algorithms. In this work HBL is compared to feed-forward artificial neural networks trained with backpropagation, k-nearest neighbour, and ID3 algorithms. To evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms as the sample size of the dataset changes. The second compares them as the dimensionality of the data changes. The last compares them by their level of agreement with the data's target values. Overall, taking classification accuracy as the measure, our observations showed that HBL performs as well as most ANN variants. Additionally, we deduced that HBL's classification accuracy outperforms that of ID3 and k-nearest neighbour on the selected data sets.
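HBL itself is not publicly packaged, so a sketch can only reproduce the comparison protocol for the baselines. The following uses scikit-learn's k-NN, a decision tree with the entropy criterion standing in for ID3, and a backpropagation-trained network; the dataset and hyperparameters are placeholders, not those of the thesis.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    # scikit-learn implements CART; the 'entropy' criterion approximates ID3.
    "ID3-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "backprop ANN": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold classification accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Varying the training-set size or the number of feature columns before scoring reproduces the first two experimental axes the abstract describes.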
Abstract:
Interior illumination is a complex problem involving numerous interacting factors. This research applies genetic programming to problems in illumination design. The Radiance system is used for performing accurate illumination simulations. Radiance accounts for a number of important environmental factors, which we exploit during fitness evaluation. Illumination requirements include local illumination intensity from natural and artificial sources, colour, and uniformity. Evolved solutions incorporate design elements such as artificial lights, room materials, windows, and glass properties. A number of case studies are examined, including many-objective problems involving up to 7 illumination requirements, the design of a decorative wall of lights, and the creation of a stained-glass window for a large public space. Our results show the technical and creative possibilities of applying genetic programming to illumination design.
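Fitness evaluation against several illumination requirements can be caricatured as below; the stand-in function fakes the Radiance simulation the research actually queries, and all targets and weights are invented.

```python
import numpy as np

def simulate_illumination(design):
    """Stand-in for a Radiance run: returns lux values at sensor points."""
    rng = np.random.default_rng(hash(tuple(design)) % (2 ** 32))
    return 300 + 200 * rng.uniform(size=16)   # fake 16-point sensor grid

def fitness(design, target_lux=500.0):
    levels = simulate_illumination(design)
    intensity_err = abs(levels.mean() - target_lux) / target_lux
    uniformity_err = levels.std() / levels.mean()   # lower is more uniform
    # Weighted sum of requirement errors; the many-objective case studies
    # would instead keep each requirement as a separate objective.
    return intensity_err + 0.5 * uniformity_err

print(fitness((2, 0.8, 3)))   # toy genome: e.g. light count, dimming, material id
```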
Abstract:
Many real-world optimization problems contain multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions and appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs possessing large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to solve MaOPs efficiently. Experimental results demonstrate that the proposed optimizers perform very well, often outperforming state-of-the-art many-objective algorithms.
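The loss of selection pressure described above follows from how rarely Pareto dominance discriminates in high dimensions; the minimal dominance check and non-dominated filter below (assuming minimization throughout, with random stand-in data) makes this visible.

```python
import numpy as np

def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(points):
    """Return the indices of the non-dominated points."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

rng = np.random.default_rng(2)
objs = rng.uniform(size=(200, 10))           # 200 solutions, 10 objectives
front = non_dominated(objs)
# With 10 objectives almost every solution is mutually non-dominated,
# which is exactly the loss of selection pressure the abstract describes.
print(f"{len(front)} of {len(objs)} solutions are non-dominated")
```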
Abstract:
In this work, we explore the feasibility of endowing machines with the ability to predict, in a human-computer interaction (HCI) context, a user's emotion and its intensity, instantaneously and for a wide variety of situations. More specifically, an application called the emotional machine was developed, capable of "understanding" the meaning of a situation based on the Ortony, Clore and Collins (OCC) theoretical model of emotion appraisal. The machine can also predict users' emotional reactions by combining improved versions of k-nearest neighbours and neural networks. An empirical procedure was carried out to acquire data, which provided consistent knowledge to the chosen learning algorithms and made it possible to test the machine's performance. The results show that the proposed emotional machine can produce good predictions. Such an achievement could encourage its future use in domains that exploit automatic emotion recognition.
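Combining k-nearest neighbours with a neural network, as the emotional machine does, can be illustrated by averaging two regressors; the features and intensity labels below are invented stand-ins for the OCC-derived data the work uses, and the plain scikit-learn learners stand in for its improved versions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 5))            # placeholder situation features
y = X @ rng.uniform(size=5)               # placeholder emotion intensity

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X, y)

def predict_intensity(x):
    """Simple ensemble: average the two learners' predictions."""
    x = np.atleast_2d(x)
    return 0.5 * knn.predict(x) + 0.5 * mlp.predict(x)

print(predict_intensity(rng.uniform(size=5)))
```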
Abstract:
It is known that language ambiguity has a detrimental effect on the results of Information Retrieval (IR) systems. However, research efforts to integrate Word Sense Disambiguation (WSD) techniques into IR have not borne fruit: most studies on the subject report negative or unconvincing results, and investigations based on adding artificial ambiguity conclude that very high disambiguation accuracy would be needed to achieve a positive effect. This thesis aims to develop new, more effective and efficient approaches, focusing on the use of co-occurrence statistics to build context models, which can then be used to discriminate senses between a query and the documents of a collection.

In the first of its two parts, we investigate the strength of the relation between a word and the words in its context, proposing a method for learning the weight of a context word as a function of its distance from the modelled word in the document. The method rests on the idea that context models built from random samples of words in context should be similar. Experiments in English and Japanese show that the strength of the relation as a function of distance generally follows a negative power law. The weights resulting from these experiments are then used to build Naive Bayes WSD systems. Evaluations of these systems on the Semeval workshop data, for the Semeval-2007 English Lexical Sample task and the Semeval-2010 Japanese WSD task, show results comparable to the state of the art, even though the systems are much lighter and depend on no linguistic tools or resources. The second part of the thesis adapts the methods to Information Retrieval applications, which have the additional difficulty of not being able to rely on manually created data. We therefore propose latent-variable context models based on Latent Dirichlet Allocation (LDA), combined with the language-model query-likelihood method. Evaluating the resulting system on three TREC (Text REtrieval Conference) collections, we observe average relative improvements of 12% in MAP and 23% in GMAP. The gains come mostly on difficult queries, increasing the stability of the results. These experiments would be the first positive application of WSD techniques to standard IR tasks.
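The first part's central idea, weighting each context word by a negative power of its distance inside a Naive Bayes sense classifier, can be sketched as follows; the training snippets, senses and exponent value are placeholders, not the thesis's learned weights.

```python
import math
from collections import defaultdict

ALPHA = 1.0   # power-law exponent for distance decay (placeholder value)

def weight(distance):
    return distance ** -ALPHA          # w(d) = d^(-alpha)

# Toy sense-tagged contexts for the ambiguous word "bank":
# (sense, context words at distance 1, 2, 3, ... from the target word).
train = [
    ("river", ["water", "fish", "shore"]),
    ("river", ["shore", "mud", "boat"]),
    ("money", ["loan", "account", "cash"]),
    ("money", ["cash", "teller", "loan"]),
]

counts = defaultdict(lambda: defaultdict(float))
priors = defaultdict(float)
for sense, ctx in train:
    priors[sense] += 1
    for d, w in enumerate(ctx, start=1):
        counts[sense][w] += weight(d)  # distance-weighted pseudo-count

def classify(ctx):
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for sense in priors:
        total = sum(counts[sense].values())
        score = math.log(priors[sense])
        for d, w in enumerate(ctx, start=1):
            p = (counts[sense][w] + 1) / (total + len(vocab))  # add-one smoothing
            score += weight(d) * math.log(p)
        scores[sense] = score
    return max(scores, key=scores.get)

print(classify(["loan", "shore", "cash"]))   # -> "money"
```

The second part's LDA-based models replace these explicit counts with latent topic distributions, so no sense-tagged training data is needed.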