878 results for Data-Information-Knowledge Chain


Relevance:

100.00%

Publisher:

Abstract:

Humans have a remarkable ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but they do not allow a person to be identified uniquely. The field of computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that extract high-level information from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and with supervised learning methods. The first seeks to group images through automatically learned feature extraction, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which are able to operate on raw images and learn both the feature-extraction and the classification processes. Here, images are classified according to gender and to clothing, the latter divided into the upper and lower parts of the human body. The first approach, tested on different image datasets, obtained an accuracy of approximately 80% for face versus non-face images and 70% for person versus non-person images. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens up possibilities for applications in areas such as content-based image and video retrieval and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
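A minimal sketch of the supervised approach described in the abstract above is given below: a small convolutional network that takes raw images and predicts a soft-biometric attribute such as gender. The architecture, layer sizes and input resolution are illustrative assumptions, not the networks actually used in the thesis.

```python
# Minimal sketch (PyTorch) of a CNN classifier for a soft-biometric attribute
# such as gender. Layer sizes and the two-class output are illustrative
# assumptions, not the architecture used in the thesis.
import torch
import torch.nn as nn

class SoftBiometricCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                   # feature extraction learned from raw pixels
        return self.classifier(h.flatten(1))   # classification learned jointly

# Example: a batch of 8 RGB images, 64x64 pixels
model = SoftBiometricCNN()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```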

Relevance:

100.00%

Publisher:

Abstract:

This document is the Online Supplement to ‘Myopic Allocation Policy with Asymptotically Optimal Sampling Rate,’ to be published in the IEEE Transactions on Automatic Control in 2017.

Relevance:

100.00%

Publisher:

Abstract:

First of all, I wish to thank and congratulate the organizers of this Congress for giving me the opportunity to share these reflections with you, and for the excellent idea of organizing an event so necessary for our country and the region in terms of its themes and objectives. In addition, the opportunity we are being given to serve as a preliminary meeting for the Summit on the Information Society should be a source of pride, since I interpret it as recognition of the effort being made in Costa Rica to stimulate the development of informational knowledge for the region. Allow me to share these reflections with you; they start from the belief that they may help to generate discussion around topics that are as complex as they are important for the future of our profession.

Relevance:

100.00%

Publisher:

Abstract:

A descriptive study was carried out on a probability sample drawn from a finite universe of 682 patients; the sample size was 245, calculated for a 95% confidence level, an expected proportion of good attitudes of 50% and a 5% margin of error. Data on knowledge, attitudes and practices were obtained through direct interviews; the SPSS software (2015 version) was used for tabulating and analysing the data. Results: Age ranged from 40 to 85 years, with a median of 67 years; 72.25% were women, 56.32% were married and 65.31% had basic schooling. Knowledge of nutrition was good in 12.65% of patients, fair in 61.23% and poor in 26.12%. Attitudes were good in 10.20%, fair in 64.90% and poor in 24.90%. Practices were good in 15.51%, fair in 58.78% and poor in 25.71%. Conclusions: The frequency of fair knowledge, attitudes and practices exceeded 50% in each case.
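The reported sample size can be reproduced, up to rounding, with the standard finite-population formula; a minimal sketch follows, assuming that formula and the stated inputs (N = 682, 95% confidence, p = 0.5, 5% error).

```python
# Minimal sketch: finite-population sample size, assuming the standard formula
# n = N*z^2*p*(1-p) / (e^2*(N-1) + z^2*p*(1-p)).
# Inputs follow the abstract: N = 682, 95% confidence (z ~ 1.96),
# expected proportion p = 0.5, margin of error e = 0.05.
import math

def finite_population_sample_size(N, z=1.96, p=0.5, e=0.05):
    num = N * z**2 * p * (1 - p)
    den = e**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

print(finite_population_sample_size(682))  # 246, matching the reported 245 up to rounding
```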

Relevance:

100.00%

Publisher:

Abstract:

One of the effects of the digitization of the battlefield is the intensive use of new technologies at the tactical level, in order to streamline battlefield management and ease its comprehension, with the objective of contributing decisively to achieving information superiority during the conduct of military operations; the use of command and control information systems is particularly relevant in this respect. Given the growing importance of these systems for armies, particularly at the lower echelons, this work, under the theme “Informações, Vigilância e Reconhecimento: Contributo para as Funções de Combate Comando-Missão e Informações”, studies and analyses the role of data, reports and intelligence in those warfighting functions, as well as their relationship with command and control information systems. The concepts considered fundamental to the investigation are studied and defined in the light of national and NATO doctrine, providing an essential basis for the study of the command and control information systems used by the U.S. Army (whose system architecture is taken as a reference for the developments under way in our army) and for the analysis of the command and control information systems currently used by the manoeuvre forces of the Exército Português, with a focus on the lower echelons and also covering what is under development. From this study, a set of operational requirements was identified that could be integrated into a command and control information system for the lower echelons; the types of data and reports collected from the vehicles used by the unit under study, in this case the Grupo de Reconhecimento da Brigada de Intervenção, were determined; and the data, reports, intelligence and functionalities present were related to the mission variables and, subsequently, to the Mission Command and Intelligence warfighting functions.

Relevance:

100.00%

Publisher:

Abstract:

The problems related to the modelling of water quality in reservoirs can be approached from different viewpoints. This work resorts to problem-solving methods drawn from the scientific area of Artificial Intelligence, as well as to tools used in the search for solutions, such as Decision Trees, Artificial Neural Networks and the Nearest-Neighbour method. Currently, the methods for assessing water quality are very restrictive because they do not indicate the water quality in real time. The development of forecasting models based on techniques of Knowledge Discovery in Databases proved to be an alternative, supporting a proactive approach that may contribute decisively to diagnosing, preserving and requalifying reservoirs. In this work, unsupervised learning was used to study the dynamics of the reservoirs, and two distinct behaviours, related to the time of year, were described.
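As a minimal illustration of the unsupervised step mentioned above, the sketch below groups water-quality samples into two clusters, mirroring the two seasonal behaviours described. The measured variables and the choice of k-means are assumptions made for the example only, not the exact procedure of the thesis.

```python
# Minimal sketch: unsupervised grouping of water-quality samples into two
# regimes. Feature names and the choice of k-means are illustrative
# assumptions; the thesis only states that unsupervised learning was used.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical measurements: temperature, dissolved oxygen, chlorophyll-a
summer = rng.normal([25.0, 6.0, 30.0], [2.0, 1.0, 8.0], size=(50, 3))
winter = rng.normal([12.0, 9.0, 10.0], [2.0, 1.0, 4.0], size=(50, 3))
samples = np.vstack([summer, winter])

X = StandardScaler().fit_transform(samples)      # put variables on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                       # two groups ~ two seasonal regimes
```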

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this study was to demonstrate the importance, for planning purposes, of knowing the distribution of land values within a given urban area. The chosen study area was the first district (El Carmen), located in the northeastern section of San José, the capital city of Costa Rica. Once the study area had been characterized, a series of factors and variables highly relevant to the setting of its land values were identified and analysed. In the search for a model suited to this situation, a literature review was carried out and the principal models of land value were presented. The data and information obtained from the various institutions mentioned throughout the work, together with the material collected in the field, were quantified, mainly by means of a classification system for the essential datum (land value); subsequently, a series of simple and multiple correlation-regression analyses was carried out in order to test the stated hypotheses and thereby arrive at an analysis of the results that would allow some recommendations to be offered. A further purpose of the study was to demonstrate the importance of the geographer's work in the process of setting land values, whether urban or rural and whatever the purpose pursued: fiscal values, real or commercial values and even speculative ones.
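A minimal sketch of the multiple correlation-regression step described above is shown below. The explanatory variables (distance to the centre, frontage, services index) and the generated data are hypothetical; the study does not list its variables in this abstract.

```python
# Minimal sketch: multiple linear regression of land value on a few
# explanatory variables. Variable names and the synthetic data are
# hypothetical placeholders, not the study's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120
dist_to_centre = rng.uniform(0.1, 3.0, n)        # km
frontage = rng.uniform(5.0, 25.0, n)             # m
services = rng.integers(0, 6, n).astype(float)   # index 0-5

# Hypothetical generating relation, only to exercise the fit
value = 900 - 150 * dist_to_centre + 12 * frontage + 40 * services + rng.normal(0, 50, n)

X = np.column_stack([dist_to_centre, frontage, services])
model = LinearRegression().fit(X, value)
print(model.coef_, model.intercept_, model.score(X, value))  # coefficients, intercept, R^2
```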

Relevance:

100.00%

Publisher:

Abstract:

This thesis is composed of a collection of works written in the period 2019-2022, whose aim is to find methodologies of Artificial Intelligence (AI) and Machine Learning to detect and classify patterns and rules in argumentative and legal texts. We call our approach “hybrid”, since we aimed at designing hybrid combinations of symbolic and sub-symbolic AI, involving both “top-down” structured knowledge and “bottom-up” data-driven knowledge. A first group of works is dedicated to the classification of argumentative patterns. Following the Waltonian model of argument and the related theory of Argumentation Schemes, these works focused on the detection of argumentative support and opposition, showing that argumentative evidence can be classified at fine-grained levels without resorting to highly engineered features. To show this, our methods involved not only traditional approaches such as TF-IDF, but also novel methods based on Tree Kernel algorithms. After the encouraging results of this first phase, we explored some emerging methodologies promoted by actors such as Google, which have deeply changed NLP since 2018-19, namely Transfer Learning and language models. These new methodologies markedly improved our previous results, providing us with best-performing NLP tools. Using Transfer Learning, we also performed a Sequence Labelling task to recognize the exact span of argumentative components (i.e., claims and premises), thus connecting portions of natural language to portions of arguments (i.e., to the logical-inferential dimension). The last part of our work was dedicated to the use of Transfer Learning methods for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
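As a rough illustration of the feature-light baselines mentioned above, the sketch below trains a TF-IDF plus linear classifier to separate argumentative support from opposition. The toy sentences and labels are invented for the example; they are not the corpora used in the thesis.

```python
# Minimal sketch: TF-IDF + linear classifier for argumentative support vs.
# opposition. The toy sentences and labels are illustrative only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "This confirms the previous claim with further evidence.",
    "The data strongly supports the proposed conclusion.",
    "However, this result contradicts the stated premise.",
    "The argument fails because the assumption does not hold.",
]
labels = ["support", "support", "attack", "attack"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["This finding is consistent with the author's claim."]))
```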

Relevance:

60.00%

Publisher:

Abstract:

Ground-penetrating radar (GPR) has the potential to provide valuable information on hydrological properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested that the stochastic inversion of crosshole GPR data within a coupled geophysical-hydrological framework may allow for effective estimation of subsurface van Genuchten-Mualem (VGM) parameters and their corresponding uncertainties. An important and still unresolved issue, however, is how best to integrate GPR data into a stochastic inversion in order to estimate the VGM parameters and their uncertainties, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to introduce a fully Bayesian inversion strategy, based on Markov chain Monte Carlo (MCMC) sampling, to perform the stochastic inversion of steady-state GPR data and estimate the VGM parameters and their uncertainties. Within this study, the choice of the prior parameter probability distributions from which potential model configurations are drawn and tested against observed data was also investigated. Analysis of both synthetic and field data collected at the Eggborough (UK) site indicates that the geophysical data alone contain valuable information regarding the VGM parameters. However, significantly better results are obtained when these data are combined with a realistic, informative prior. A subsequent study explored the dynamic infiltration case in detail, specifically to what extent time-lapse ZOP GPR data, collected during a forced infiltration experiment at the Arrenaes field site (Denmark), can help to quantify VGM parameters and their uncertainties using the MCMC inversion strategy. The findings indicate that the stochastic inversion of time-lapse GPR data does indeed allow for a substantial refinement of the inferred posterior VGM parameter distributions. In turn, this significantly improves knowledge of the hydraulic properties, which are required to predict hydraulic behaviour. Finally, another aspect that needed to be addressed involved the comparison of time-lapse GPR data collected under different infiltration conditions (i.e., natural loading and forced infiltration conditions) to estimate the VGM parameters using the MCMC inversion strategy. The results show that, for the synthetic example, considering data collected during a forced infiltration test helps to better refine soil hydraulic properties compared with data collected under natural infiltration conditions. When investigating data collected at the Arrenaes field site, further complications arose due to model error, showing the importance of also including a rigorous analysis of the propagation of model error with time and depth when considering time-lapse data. Although the efforts in this thesis were focused on GPR data, the corresponding findings are likely to have general applicability to other types of geophysical data and field environments. Moreover, the results obtained give confidence for future developments in the integration of geophysical data with stochastic inversions to improve the characterization of the unsaturated zone, but they also reveal important issues linked with stochastic inversions, namely model errors, that should be addressed in future research.
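A minimal sketch of the Markov chain Monte Carlo idea underlying the stochastic inversion is given below, assuming a toy forward model, a Gaussian likelihood and uniform prior bounds; the actual coupled GPR-hydrological forward model of the thesis is not reproduced here.

```python
# Minimal sketch of a Metropolis MCMC sampler for a generic parameter vector.
# The forward model, likelihood and prior bounds are placeholders, not the
# coupled GPR-hydrological model used in the thesis.
import numpy as np

rng = np.random.default_rng(42)

def forward(theta):
    # Placeholder forward model mapping parameters to predicted data
    return np.array([theta[0] + theta[1], theta[0] * theta[1]])

observed = np.array([1.5, 0.5])
sigma = 0.1                                        # assumed data error

def log_posterior(theta, lo=np.array([0.0, 0.0]), hi=np.array([2.0, 2.0])):
    if np.any(theta < lo) or np.any(theta > hi):   # uniform prior bounds
        return -np.inf
    resid = observed - forward(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian likelihood

theta = np.array([1.0, 1.0])
chain = []
for _ in range(5000):
    proposal = theta + rng.normal(0, 0.05, size=2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                           # accept the proposed move
    chain.append(theta.copy())

print(np.mean(chain[1000:], axis=0))               # posterior mean after burn-in
```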

Relevance:

60.00%

Publisher:

Abstract:

The ground-penetrating radar (GPR) geophysical method has the potential to provide valuable information on the hydraulic properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested that the stochastic inversion of crosshole GPR traveltime data can allow for a significant reduction in uncertainty regarding subsurface van Genuchten-Mualem (VGM) parameters. Much of the previous work on the stochastic estimation of VGM parameters from crosshole GPR data has considered the case of steady-state infiltration conditions, which represent only a small fraction of practically relevant scenarios. We explored in detail the dynamic infiltration case, specifically examining to what extent time-lapse crosshole GPR traveltimes, measured during a forced infiltration experiment at the Arreneas field site in Denmark, could help to quantify VGM parameters and their uncertainties in a layered medium, as well as the corresponding soil hydraulic properties. We used a Bayesian Markov chain Monte Carlo inversion approach. We first explored the advantages and limitations of this approach with regard to a realistic synthetic example before applying it to field measurements. In our analysis, we also considered different degrees of prior information. Our findings indicate that the stochastic inversion of the time-lapse GPR data does indeed allow for a substantial refinement in the inferred posterior VGM parameter distributions compared with the corresponding priors, which in turn significantly improves knowledge of soil hydraulic properties. Overall, the results obtained clearly demonstrate the value of the information contained in time-lapse GPR data for characterizing vadose zone dynamics.
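For reference, the sketch below evaluates the van Genuchten water-retention curve whose parameters the inversion targets; the parameter values are generic illustrative numbers, not estimates obtained at the field site.

```python
# Minimal sketch: van Genuchten water-retention curve. The parameter values
# (theta_r, theta_s, alpha, n) are generic illustrative numbers, not
# estimates from the Arreneas site.
import numpy as np

def van_genuchten_theta(h, theta_r=0.05, theta_s=0.40, alpha=2.0, n=1.8):
    """Volumetric water content as a function of pressure head h [m]."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

heads = np.array([-0.01, -0.1, -1.0, -10.0])
print(van_genuchten_theta(heads))   # water content decreases as the head becomes more negative
```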

Relevance:

60.00%

Publisher:

Abstract:

Knowledge-elicitation is a common technique used to produce rules about the operation of a plant from the knowledge available from human expertise. Similarly, data-mining is becoming a popular technique to extract rules from the data available from the operation of a plant. In the work reported here, knowledge was required to enable the supervisory control of an aluminium hot strip mill through the determination of mill set-points. A method was developed to fuse knowledge-elicitation and data-mining so as to incorporate the best aspects of each technique, whilst avoiding known problems. The knowledge was deployed through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process. (C) 2005 Elsevier Ltd. All rights reserved.
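A minimal sketch of how elicited or mined rules might be applied to recommend mill set-points is shown below; the rules, thresholds and set-point values are invented placeholders, not the knowledge base of the actual system.

```python
# Minimal sketch of a rule-based set-point recommendation. The rules,
# thresholds and values are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class CoilSpec:
    entry_thickness_mm: float
    target_thickness_mm: float
    alloy: str

def recommend_setpoints(spec: CoilSpec) -> dict:
    reduction = 1.0 - spec.target_thickness_mm / spec.entry_thickness_mm
    rules = []
    # Rule 1 (hypothetical): large reductions call for a lower rolling speed
    speed = 3.0 if reduction > 0.6 else 5.0
    rules.append(f"reduction={reduction:.2f} -> speed={speed} m/s")
    # Rule 2 (hypothetical): harder alloy series calls for a higher roll force
    force = 18.0 if spec.alloy.startswith("7") else 12.0
    rules.append(f"alloy={spec.alloy} -> force={force} MN")
    return {"speed_m_s": speed, "roll_force_MN": force, "explanation": rules}

print(recommend_setpoints(CoilSpec(25.0, 8.0, "7075")))
```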

Relevance:

60.00%

Publisher:

Abstract:

The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver of their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and we exemplify our methodology on an abstraction of the pharmaceutical supply chain.
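A minimal sketch of the constraint-validation idea is given below, using a SPARQL query over a tiny invented event graph; the real framework uses SPIN rules, EPCIS-based linked pedigrees and Apache Storm, none of which are reproduced here. The ex: vocabulary is hypothetical.

```python
# Minimal sketch: validating an incoming EPCIS-style event graph with a
# SPARQL constraint. The Turtle snippet and the ex: vocabulary are invented
# for illustration; they are not the EPCIS ontology or the paper's rules.
import rdflib

data = """
@prefix ex: <http://example.org/epcis#> .
ex:event1 a ex:ObjectEvent ;
    ex:epc "urn:epc:id:sgtin:0614141.107346.2017" ;
    ex:bizStep ex:shipping .
ex:event2 a ex:ObjectEvent ;
    ex:bizStep ex:receiving .   # missing ex:epc, so it should fail validation
"""

g = rdflib.Graph()
g.parse(data=data, format="turtle")

# Constraint: every ObjectEvent must carry at least one EPC
violations = g.query("""
    PREFIX ex: <http://example.org/epcis#>
    SELECT ?event WHERE {
        ?event a ex:ObjectEvent .
        FILTER NOT EXISTS { ?event ex:epc ?epc }
    }
""")
for row in violations:
    print("Constraint violated by:", row.event)
```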

Relevance:

50.00%

Publisher:

Abstract:

A great deal of attention in the supply chain management literature is devoted to studying material and demand information flows and their coordination. But in many situations, supply chains may convey information of a different nature: they may be an important channel through which companies deliver knowledge, or more specifically technical information, to the market. This paper studies this technical flow and highlights its particular requirements. Drawing upon qualitative field research, it studies pharmaceutical companies, since those companies face a very specific challenge: consumers do not have discretion over their choices, because ethical drugs must be prescribed by physicians before they can be bought and used by final consumers. The technical information flow is rich, and it must be redundant and delivered early at multiple points. Thus, apart from the regular material channel in which products and order information flow, those companies build a specialized information channel, developed to communicate with those who need the information to create demand. The conclusions can be extended to supply chains in which products and services are complex and decision makers must be clearly informed about technology-related information. (C) 2009 Elsevier B.V. All rights reserved.