931 results for Computation by Abstract Devices
Abstract:
The quest for renewable energy sources has led to growing attention in research on organic photovoltaics (OPVs) as a promising alternative to fossil fuels, since these devices have low manufacturing costs and attractive end-user qualities, such as ease of installation and maintenance. Wide application of OPVs is mainly limited by device lifetime. With the development of new encapsulation materials, some degradation factors, such as water and oxygen ingress, can be almost excluded, whereas thermal degradation of the devices remains a major issue. Two aspects have to be addressed to solve the problem of thermal instability: bulk effects in the photoactive layer and interfacial effects at the interfaces between the photoactive layer and the charge-transporting layers. In this work, the interface between the photoactive layer and electron-transporting zinc oxide (ZnO) in devices with inverted architecture was engineered by introducing polymeric interlayers based on zinc-binding ligands, such as 3,4-dihydroxybenzene and 8-hydroxyquinoline. A cross-linkable layer of poly(3,4-dimethoxystyrene) and its fullerene derivative was also studied. First, controlled reversible addition-fragmentation chain transfer (RAFT) polymerisation was employed to obtain well-defined polymers in a range of molar masses, all bearing a chain-end functionality for further modification. The resulting polymers were fully characterised, including their thermal and optical properties, and introduced as interlayers to study their effect on initial device performance and thermal stability. Poly(3,4-dihydroxystyrene) and its fullerene derivative were found unsuitable for application in devices, as they increased the work function of ZnO and created a barrier to electron extraction. On the other hand, their parent polymer, poly(3,4-dimethoxystyrene), and its fullerene derivative, upon cross-linking, resulted in enhanced device efficiency and stability compared to the control. Polymers based on the 8-hydroxyquinoline ligand had a negative effect on the initial performance of the devices but increased the lifetime of the cells under accelerated thermal stress. Comprehensive studies of the key mechanisms determining efficiency, such as charge generation and extraction, were performed using time-resolved electrical and spectroscopic techniques in order to understand in detail the effect of the interlayers on device performance. The results provide deeper insight into the degradation mechanisms that limit device lifetime and inform the design of better materials for interface stabilisation.
Abstract:
Cloud Computing is a paradigm that enables simple and pervasive access, through the network, to shared and configurable computing resources. Such resources can be offered on demand to users in a pay-per-use model. With the advance of this paradigm, a single service offered by a cloud platform might not be enough to meet all of a client's requirements, making it necessary to compose services provided by different cloud platforms. However, current cloud platforms are not built on common standards; each has its own APIs and development tools, which is a barrier to composing services from different providers. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment that facilitates the development and execution of multi-cloud applications, i.e., compositions of services from different cloud platforms represented by abstract workflows. However, Cloud Integrator has some limitations: (i) applications are executed locally; (ii) users cannot specify an application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. To address these limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete/partial execution flow. The platform enables application deployment on cloud virtual machines, so that several users can access it through the Internet. It also supports the access and management of virtual machines on different cloud platforms and provides mechanisms for service monitoring and assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses services provided by different cloud platforms, and was evaluated through computational experiments that analyse the performance of its processes.
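As a rough illustration of the idea of specifying an application only by its inputs, outputs and abstract activities, a minimal Python sketch is given below. All names and fields here are hypothetical assumptions for illustration; they are not Cloud Stratus's actual specification format or API.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical illustration only: the real Cloud Stratus specification format
    # is not described in the abstract, so these names and fields are assumptions.

    @dataclass
    class Activity:
        name: str                       # abstract activity, resolved to a concrete Web service later
        service: Optional[str] = None   # experienced users may pin a concrete service directly

    @dataclass
    class AbstractWorkflow:
        inputs: List[str]
        outputs: List[str]
        activities: List[Activity] = field(default_factory=list)

    # An application declared by its inputs and outputs; the middleware would then
    # select and compose services from different cloud platforms to realise it.
    app = AbstractWorkflow(
        inputs=["raw_measurements.csv"],
        outputs=["aggregated_report.pdf"],
        activities=[Activity("storage"), Activity("analytics", service="provider-A/analytics-v1")],
    )
    print(app)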
Abstract:
The different characteristics and needs of mobile device users, the situations in which these devices are operated, and the limitations and characteristics of the devices themselves are all factors that influence usability and ergonomics, two elements essential for successful interaction between users and devices. This research aims to identify characteristics of interface design for mobile device applications (apps), focusing on design, visual publishing and content editing, and on the actual process of creating these interfaces, with a view to guaranteeing quality interaction through touch technology while observing service limitations, the opportunities offered by the devices and the application requirements. The study examines the interface of the mobile news application "Brasil 247", applying the concepts of usability and ergonomics mainly to adaptation and to searching and browsing news articles, and clarifies the processes and techniques necessary to carry out interaction tests that evaluate the usability of the interface.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis, on the basis that "n = all", is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard setting of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
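For reference, and in notation of our own choosing rather than that of Chapter 2, the latent structure (PARAFAC) factorization referred to above expresses the probability mass function of p categorical variables as a nonnegative rank-k tensor decomposition:
\[
\Pr(Y_1 = y_1, \ldots, Y_p = y_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h, y_j},
\]
where \(\nu_h\) are latent class weights summing to one and \(\lambda^{(j)}_{h,\cdot}\) is the conditional probability mass function of variable \(j\) given class \(h\); log-linear models instead reduce dimension by setting interaction terms in \(\log \Pr(y_1,\ldots,y_p)\) to zero.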
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered even for modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
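As a sketch of the setup (the precise divergence and assumptions used in Chapter 4 may differ), an optimal Gaussian approximation can be framed as the member of the Gaussian family closest to the exact posterior in Kullback-Leibler divergence:
\[
\hat{q} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{G}} \; \mathrm{KL}\!\left(\pi(\theta \mid y) \,\|\, q(\theta)\right),
\]
where \(\mathcal{G}\) is the family of Gaussian distributions; for this direction of the divergence the minimizer matches the posterior mean and covariance, so the quality of the approximation is governed by how close the exact posterior is to a Gaussian.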
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
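A schematic way to view the trade-off studied in Chapter 6 (illustrative only; the chapter's exact bounds are not reproduced here) is to decompose the error of an ergodic average computed from an approximating kernel into an asymptotic bias that grows with the kernel approximation error \(\epsilon\) and a Monte Carlo variance that shrinks with the number of iterations affordable under the budget:
\[
\mathbb{E}\big[(\hat{\mu}_{\epsilon,T} - \mu)^2\big] \;\lesssim\; \underbrace{b(\epsilon)^2}_{\text{bias from the approximate kernel}} \;+\; \underbrace{\frac{C}{T(\epsilon)}}_{\text{Monte Carlo variance}},
\]
where \(T(\epsilon)\) is the number of iterations that fit within the computational budget when coarser approximations make each step cheaper; choosing \(\epsilon\) then amounts to balancing the two terms for the loss function of interest.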
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
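For concreteness, the truncated normal (Albert-Chib) data augmentation sampler for the probit link referred to above alternates the following two conditional draws; this is the standard construction, stated in generic notation under a \(N(\mu_0, \Sigma_0)\) prior on \(\beta\), not necessarily the exact parameterisation used in Chapter 7:
\[
z_i \mid \beta, y_i \;\sim\; \mathrm{TN}\big(x_i^\top \beta,\, 1\big), \quad \text{truncated to } (0,\infty) \text{ if } y_i = 1 \text{ and to } (-\infty, 0] \text{ if } y_i = 0,
\]
\[
\beta \mid z \;\sim\; N\!\Big( \big(X^\top X + \Sigma_0^{-1}\big)^{-1}\big(X^\top z + \Sigma_0^{-1}\mu_0\big), \; \big(X^\top X + \Sigma_0^{-1}\big)^{-1} \Big).
\]
The slow mixing described above corresponds to high autocorrelation between successive draws of \(\beta\) when observed successes are rare relative to the sample size.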
Abstract:
Laura Kurgan’s Monochrome Landscapes (2004), first exhibited in the Whitney Museum of American Art in New York City, consists of four oblong Cibachrome prints derived from digital files sourced from the commercial Ikonos and QuickBird satellites. The prints are ostensibly flat, depthless fields of white, green, blue, and yellow, yet the captions provided explain that the sites represented are related to contested military, industrial, and cartographic practices. In Kurgan’s account of Monochrome Landscapes she explains that it is in dialogue with another work from the Whitney by abstract artist Ellsworth Kelly. This article pursues the relationship between formalist abstraction and satellite imaging in order to demonstrate how formalist strategies aimed at producing an immediate retinal response are bound up with contemporary uses of digital information and the truth claims such information can be made to substantiate.
Abstract:
One of the most important components in electrochemical storage devices (batteries and supercapacitors) is undoubtedly the electrolyte. The basic function of any electrolyte in these systems is the transport of ions between the positive and negative electrodes. In addition, the electrochemical reactions occurring at each electrode/electrolyte interface are the origin of the current generated by storage devices. In other words, the performance (capacity, power, efficiency and energy) of electrochemical storage devices is strongly related to the electrolyte properties, as well as to the affinity of the electrolyte for the selected electrode materials. Indeed, the formulation of electrolytes with good properties, such as high ionic conductivity and low viscosity, is required to enhance the charge-transfer reaction at the electrode/electrolyte interface (e.g. charge accumulation in the case of an Electrochemical Double Layer Capacitor, EDLC). For practical and safety reasons, the formulation of novel electrolytes with low vapor pressure, a wide liquid temperature range, and good thermal and chemical stability is also required.
This lecture will focus on the effect of electrolyte formulation on the performance of electrochemical storage devices (Li-ion batteries and supercapacitors). A summary of the physical, thermal and electrochemical data recently obtained by our group on the formulation of novel electrolytes based on mixtures of an ionic liquid (such as EmimNTf2 and Pyr14NTf2) with carbonate or dinitrile solvents will be presented and discussed. The impact of the electrolyte formulation on the storage performance of EDLCs and Li-ion batteries will also be discussed, to further understand the relationship between electrolyte formulation and electrochemical performance. This talk will also be an opportunity to discuss the effects of additives (SEI builders: fluoroethylene carbonate and vinylene carbonate), ionic liquids, and the structure and nature of the lithium salt (LiTFSI vs LiPF6) on the cyclability of negative electrodes, with the aim of improving the electrolyte formulation. To that end, our recent results on TiSnSb and graphite negative electrodes will be presented and discussed (see refs. 1 and 2 below).
1. C. Marino, A. Darwiche, N. Dupré, H. A. Wilhelm, B. Lestriez, H. Martinez, R. Dedryvère, W. Zhang, F. Ghamouss, D. Lemordant, L. Monconduit, "Study of the Electrode/Electrolyte Interface on Cycling of a Conversion Type Electrode Material in Li Batteries", J. Phys. Chem. C, 2013, 117, 19302-19313.
2. M. Dahbi, F. Ghamouss, M. Anouti, D. Lemordant, F. Tran-Van, "Electrochemical lithiation and compatibility of graphite anode using glutaronitrile/dimethyl carbonate mixtures containing LiTFSI as electrolyte", 2013, 43, 4, 375-385.
Abstract:
Many cloud-based applications employ a data centre as a central server to process data generated by edge devices such as smartphones, tablets and wearables. This model places ever-increasing demands on communication and computational infrastructure, with an inevitable adverse effect on Quality of Service and Experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise from this new direction in the computing landscape.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
At present, in Portugal, it is estimated that the use of ICT (Information and Communication Technologies) by so-called digital natives reaches roughly 98.5%. Considering that digital natives are the citizens born only 15 years ago, this means that the entire current school population lives and interacts surrounded by digital devices and tools in their daily routines. However, in general terms, the same reality has not been observed in the educational space, where the presence of computers and of an Internet connection remains very scarce and incipient. The Supervised Teaching Practices, within the Master's programme in Pre-school Education and Teaching of the 1st Cycle of Basic Education, have constituted a «window of opportunity» for conducting research on the implementation of ICT in the teaching and learning process. Despite all the constraints, it has been possible to carry out research that can be considered innovative and exploratory, given the absence of similar previous studies. To this end, the aim is to present the results of a study involving the use of blogs, so that these digital tools could promote new learning contexts and also serve as a «bridge» bringing school and family closer together. Another example of research to be presented concerns the safe use of the Internet. From the results obtained, it was possible to verify that children aged 7-8 already use the Internet autonomously but, in several cases, it was found that their parents have no idea of the uses their children make of so-called cyberspace. Overall, it was found that children adopt digital technologies in the classroom very readily, and the pupils were always very engaged and motivated. Finally, it is important to note that, despite all the advantages identified, both pupils and parents continue to state that the presence of teachers is fundamental and indispensable in the teaching and learning process.
Abstract:
Title of Thesis: EXAMINING THE IMPLEMENTATION CHALLENGES OF PROJECT-BASED LEARNING: A CASE STUDY. Stefan Frederick Brooks, Master of Education, 2016. Thesis directed by: Professor and Chair Francine Hultgren, Department of Teaching and Learning, Policy and Leadership. ABSTRACT: Project-based learning (PjBL) is a common instructional strategy to consider for educators, scholars, and advocates who focus on education reform. Previous research on PjBL has focused on its effectiveness, but only a limited amount of research exists on its implementation challenges. This exploratory case study examines an attempted project-based learning implementation in one chemistry classroom at a private school that fully supports PjBL for most subjects, with limited use in mathematics. During the course of the study, the teacher used a modified version of PjBL. Specifically, he implemented some of the elements of PjBL, such as a driving theme and a public presentation of projects, supported by traditional instructional methods due to the context of the classroom. The findings of this study emphasize the teacher's experience with implementing some of the PjBL components and how the inherent implementation challenges affected his practice.
Abstract:
The electricity sector is undergoing major changes both in management and in the market. One of the keys accelerating this change is the ever-greater penetration of Distributed Generation systems (DER), which are giving users a more prominent role in the management of the electrical system. The complexity of the scenario expected in the near future requires grid equipment capable of interacting in a much more dynamic system than today's, where the connection interface must be endowed with the necessary intelligence and communication capability so that the whole system can be managed effectively as a whole. We are currently witnessing the transition from the traditional electrical system model towards a new, active and intelligent system known as the Smart Grid. This thesis presents the study of an Intelligent Electronic Device (IED) aimed at providing solutions for the needs arising from the evolution of the electrical system, capable of being integrated into current and future grid equipment and adding functionality, and therefore value, to these systems. To situate the needs of these IEDs, an extensive background study was carried out, beginning with the historical evolution of these systems, the characteristics of the electrical interconnection they must control, and the various functions and solutions they must provide, and ending with a review of the current state of the art. This background also includes a review of international and national standards, necessary to frame the different requirements these devices must meet. The specifications and considerations required for the design are then presented, together with the multifunctional architecture. At this point, some original design approaches are proposed, related to the architecture of the IED and to how data should be synchronized depending on the nature of the events and the different functionalities. The development of the system continues with the design of its constituent subsystems, where some novel algorithms are presented, such as an anti-islanding approach with weighted multiple detection. Once the architecture and functions of the IED were designed, a prototype was developed on a hardware platform. The necessary requirements are analysed, and the choice of a high-performance embedded platform including a processor and an FPGA is justified. The prototype was subjected to a Class A test protocol, according to IEC 61000-4-30 and IEC 62586-2, to verify parameter monitoring. Several tests are also presented in which the delays involved in the protection-related algorithms were estimated. Finally, a real test scenario is described, within the context of a project of the National Research Plan, in which this prototype was integrated into an inverter, providing it with the intelligence necessary for a future Smart Grid context.
Abstract:
This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in patients' homes with follow-up from a primary care centre. The evolution of this model follows a clear trend: reducing the time and the number of visits by patients to health centres and shifting tasks, as far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications, and a software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system in which software agents are executed and interact from their remote locations; second, the use of distributed decision making in multi-agent systems as a means to integrate remote evidence and knowledge obtained from data collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular diseases or Chronic Obstructive Pulmonary Disease (COPD), as well as to ambulatory surgery patients. The proposed system will allow the patient's location and some information about his/her illness to be transmitted to the hospital or care centre.
Abstract:
Recent years have seen massive growth in wearable technology; everything can be smart: phones, watches, glasses, shirts, etc. These technologies are prevalent in various fields, from wellness, sports and fitness to the healthcare domain. The spread of this phenomenon led the World Health Organization to define the term 'mHealth' as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices". Furthermore, mHealth solutions are well suited to real-time wearable biofeedback (BF) systems: sensors in the body area network connected to a processing unit (smartphone) and a feedback device (loudspeaker) measure human functions and return them to the user as a (bio)feedback signal. During the COVID-19 pandemic, this transformation of the healthcare system was dramatically accelerated by new clinical demands, including the need to prevent hospital surges and to assure continuity of clinical care services, enabling pervasive healthcare. Now more than ever, the integration of mHealth technologies will be the basis of this new era of clinical practice. In this scenario, the primary goal of this PhD thesis is to investigate new and innovative mHealth solutions for the assessment and rehabilitation of different neuromotor functions and diseases. For clinical assessment, there is a need to overcome the limitations of subjective clinical scales. By creating new pervasive and self-administrable mHealth solutions, this thesis investigates the possibility of employing innovative systems for objective clinical evaluation. For rehabilitation, we explored the clinical feasibility and effectiveness of mHealth systems; in particular, we developed innovative mHealth solutions with BF capability to allow tailored rehabilitation. The main goal of an mHealth system should be improving the person's quality of life and increasing or maintaining their autonomy and independence. To this end, inclusive design principles may be crucial, alongside technical and technological ones, to improve the usability of mHealth systems.
Abstract:
Depth represents a crucial piece of information in many practical applications, such as obstacle avoidance and environment mapping. This information can be provided either by active sensors, such as LiDARs, or by passive devices like cameras. A popular passive device is the binocular rig, which allows triangulating the depth of the scene using two synchronized and aligned cameras. However, many devices already available in existing infrastructures, such as most surveillance cameras, are monocular passive sensors. The intrinsic ambiguity of the problem makes monocular depth estimation a challenging task. Nevertheless, recent progress in deep learning is paving the way towards a new class of algorithms able to handle this complexity. This work addresses many relevant topics related to the monocular depth estimation problem. It presents networks capable of predicting accurate depth values even on embedded devices and without the need for expensive ground-truth labels at training time. Moreover, it introduces strategies to estimate the uncertainty of these models, and it shows that monocular networks can easily generate training labels for different tasks at scale. Finally, it evaluates off-the-shelf monocular depth predictors for the relevant use case of social distance monitoring, and shows how this technology can overcome the limitations of existing strategies.
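For reference, the binocular triangulation mentioned above reduces, for a rectified stereo pair, to the standard relation between depth and disparity (notation ours, not the thesis's):
\[
Z \;=\; \frac{f \, B}{d},
\]
where \(f\) is the focal length in pixels, \(B\) the baseline between the two cameras, and \(d\) the disparity of a matched point; a monocular network must instead infer \(Z\) from a single image, which is what makes the problem intrinsically ambiguous.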
Abstract:
Schizophrenia denotes a long-lasting state of mental disturbance that may break down the relations among behavior, thought, and emotion; that is, it may lead to unreliable perception, inappropriate actions and feelings, and a sense of mental fragmentation. Indeed, its diagnosis is made over a long period of time: continuous signs of the disturbance must persist for at least six months. Once detected, the psychiatric diagnosis is made through a clinical interview and a series of psychological tests, aimed mainly at ruling out other mental states or diseases. Undeniably, the main problem in identifying schizophrenia is the difficulty of distinguishing its symptoms from those associated with other disorders or conditions. Therefore, this work focuses on the development of a diagnostic support system, in terms of its knowledge representation and reasoning procedures, based on a blend of Logic Programming and Artificial Neural Network approaches to computing, taking advantage of a novel approach to knowledge representation and reasoning that aims to solve the problems associated with handling (i.e., representing and reasoning about) defective information.