55 results for 280200 Artificial Intelligence and Signal and Image Processing
Abstract:
One of the main concerns of evolvable and adaptive systems is the need for a training mechanism, which is normally implemented using a training reference and a test input. The fitness function to be optimized during the evolution (training) phase is obtained by comparing the output of the candidate systems against the reference. The adaptivity that this type of system can provide by re-evolving during operation is especially important for applications with runtime-variable conditions. However, fully automated self-adaptivity poses additional problems. For instance, in some cases it is not possible to have such a reference, because the changes in the environmental conditions are unknown, so it becomes difficult to autonomously identify which problem needs to be solved and, hence, which conditions should be representative for an adequate re-evolution. In this paper, a solution to remove this dependency is presented and analyzed. The system consists of an image filter application mapped on an evolvable hardware platform, able to evolve using two consecutive frames from a camera as both test and reference images. The system is entirely mapped in an FPGA, and native dynamic and partial reconfiguration is used for evolution. It is also shown that using such images, both of them noisy, as input and reference images in the evolution phase is equivalent to or even better than evolving the filter with offline images. The combination of both techniques results in the completely autonomous, noise type/level agnostic filtering system, without the need for a reference image, described throughout the paper.
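A minimal sketch of the reference-free fitness evaluation described above, assuming a mean-absolute-error comparison between the filtered current frame and the next (equally noisy) frame; the function names are illustrative, not the paper's API:

```python
# Minimal sketch: the candidate filter processes frame t and is scored
# against frame t+1, both captured from the camera and both noisy.
import numpy as np
from scipy.ndimage import median_filter

def fitness(candidate_filter, frame_t, frame_t1):
    """Lower is better: mean absolute error between the filtered
    current frame and the next (noisy) frame used as reference."""
    filtered = candidate_filter(frame_t)
    return np.mean(np.abs(filtered.astype(float) - frame_t1.astype(float)))

def example_candidate(img):
    """One possible candidate filter (illustrative only)."""
    return median_filter(img, size=3)

# frame_t, frame_t1 = two consecutive noisy frames from the camera
# score = fitness(example_candidate, frame_t, frame_t1)
```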
Abstract:
NIR hyperspectral imaging (1000-2500 nm) combined with IDC allowed the detection of peanut traces down to adulteration percentages of 0.01%. Contrary to PLSR, IDC does not require a calibration set; it uses both expert and experimental information and is suitable for the quantification of a compound of interest in complex matrices. The obtained results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA.
Abstract:
This paper is framed within the problem of analyzing the rationality of the components of two classical geometric constructions, namely the offset and the conchoid to an algebraic plane curve, and, in the affirmative case, the actual computation of parametrizations. We recall some of the basic definitions and main properties of offsets (see [13]) and conchoids (see [15]), as well as the algorithms for parametrizing their rational components (see [1] and [16], respectively). Moreover, we implement the basic ideas, creating two packages in the computer algebra system Maple to analyze the rationality of conchoids and offset curves, together with the corresponding help pages. In addition, we present a brief atlas in which the offsets and conchoids of several algebraic plane curves are obtained, their rationality analyzed, and parametrizations provided using the created packages.
Abstract:
Video analytics play a critical role in most recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported for image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the potential of combining popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking into account the vehicle pose. The study unveils the limitations of single-feature based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
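A hedged sketch of the score-level fusion idea, assuming one probabilistic classifier per feature type (HOG, PCA, Gabor) and pose-dependent fusion weights; the extractor settings and weights below are placeholders, not the exact pipeline of the study:

```python
# Score-level fusion of single-feature classifiers for vehicle verification.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """HOG descriptor per grayscale image patch (parameters illustrative)."""
    return np.array([hog(im, pixels_per_cell=(8, 8)) for im in images])

def train_single_feature_classifier(features, labels):
    """One probabilistic SVM per feature type."""
    clf = SVC(probability=True)
    clf.fit(features, labels)
    return clf

def fuse_scores(classifiers, feature_sets, weights):
    """Weighted sum of per-classifier 'vehicle' probabilities; the weights
    could be chosen per vehicle pose (front/rear/side)."""
    scores = [w * clf.predict_proba(f)[:, 1]
              for clf, f, w in zip(classifiers, feature_sets, weights)]
    return np.sum(scores, axis=0)
```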
Abstract:
A new technology is proposed to address unintentional face detection and recognition in pictures, allowing the individuals who appear in them to express their privacy preferences through the use of different tags. Existing methods for face de-identification are mostly ad hoc solutions that only provide an absolute, binary treatment of privacy, such as pixelation or a bar mask. As the number of social networks and their users increases, our privacy preferences may become more complex, rendering these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences that will be applied to the images in later stages.
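An illustrative sketch of how a decoded tag preference could drive a graded, non-binary obfuscation of the face region; the decoding step and the preference scale are hypothetical placeholders, not the paper's encoding scheme:

```python
# Apply a graded obfuscation to a face region according to a decoded
# privacy preference (0 = leave untouched ... 5 = strongest blur).
import cv2

def apply_privacy_preference(image, face_box, blur_level):
    """face_box: (x, y, w, h) of the detected face in pixel coordinates."""
    if blur_level <= 0:
        return image
    x, y, w, h = face_box
    k = 2 * (4 * blur_level) + 1          # odd Gaussian kernel grows with level
    face = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (k, k), 0)
    return image
```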
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of fiber bundles connecting the cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a long processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that addresses the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
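A hedged sketch of the kind of comparison such an evaluation framework performs, scoring a connectome recovered by a pipeline against a phantom's ground-truth connectome in terms of edge-wise sensitivity and specificity; this illustrates the underlying idea only and is not the PySDCev API:

```python
# Compare a recovered connectivity matrix against a phantom's ground truth.
import numpy as np

def edge_sensitivity_specificity(recovered, ground_truth, threshold=0.0):
    """Binarize both connectivity matrices and compare their edges."""
    rec = (recovered > threshold)
    gt = (ground_truth > threshold)
    tp = np.sum(rec & gt)        # true edges found by the pipeline
    tn = np.sum(~rec & ~gt)      # absent edges correctly left out
    fp = np.sum(rec & ~gt)       # spurious edges
    fn = np.sum(~rec & gt)       # missed edges
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```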
Abstract:
The reconstruction of the cell lineage tree of early zebrafish embryogenesis requires the use of in-vivo microscopy imaging and image processing strategies. Second (SHG) and third harmonic generation (THG) microscopy observations in unstained zebrafish embryos allow the detection of cell divisions and cell membranes from the 1-cell to the 1K-cell stage. In this article, we present an ad hoc image processing pipeline for cell tracking and cell membrane segmentation enabling the reconstruction of the early zebrafish cell lineage tree up to the 1K-cell stage. This methodology has been used to obtain digital zebrafish embryos, allowing the generation of a quantitative description of early zebrafish embryogenesis with minute temporal accuracy and μm spatial resolution.
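A minimal sketch of the frame-to-frame linking step of a cell-tracking pipeline, assuming cell centroids have already been detected in each frame; it is a generic illustration, not the lineage-reconstruction algorithm of the article:

```python
# Link each detected cell centroid in frame t+1 to its nearest centroid in
# frame t, subject to a maximum displacement.
import numpy as np
from scipy.spatial.distance import cdist

def link_cells(centroids_t, centroids_t1, max_disp=10.0):
    """centroids_*: (N, 2) arrays of cell positions; returns (idx_t, idx_t1) links."""
    d = cdist(centroids_t1, centroids_t)      # rows: frame t+1, cols: frame t
    links = []
    for i, row in enumerate(d):
        j = int(np.argmin(row))
        if row[j] <= max_disp:
            links.append((j, i))
    return links
```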
Abstract:
Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, to increase the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.
Abstract:
In this paper, a glucose-insulin regulator for type 1 diabetes using artificial neural networks (ANNs) is proposed. This is done using a discrete recurrent high-order neural network to identify and control a nonlinear dynamical system that represents the pancreas' beta-cell behavior of a virtual patient. The ANN, which reproduces and identifies the dynamical behavior of the system, is configured in a series-parallel arrangement and trained online using the extended Kalman filter algorithm to achieve fast convergence of the identification in silico. The control objective is to regulate the glucose-insulin level under different glucose inputs and is based on a nonlinear neural block control law. A safety block is included between the control output signal and the virtual patient with type 1 diabetes mellitus. Simulations cover a period of three days. Simulation results are compared during the overnight fasting period in open loop (OL) versus closed loop (CL). Tests in semi-closed loop (SCL) add feedforward information to the control algorithm. We conclude that the controller is able to drive the glucose to target during overnight periods and that the feedforward information is necessary to control the postprandial period.
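A hedged sketch of an on-line recurrent high-order neural network (RHONN) identifier with EKF-based weight updates for a single state, in the spirit of the identification scheme described above; the regressor terms, gains and noise matrices are illustrative assumptions, not the authors' exact design:

```python
# RHONN identifier for one state, trained on line with an EKF weight update.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rhonn_terms(x, u):
    """High-order regressor z(x, u): products of sigmoidal states and input."""
    s = sigmoid(x)
    return np.array([s, s * s, s * u, u])

def ekf_update(w, P, z, error, Q, R, eta=1.0):
    """One EKF step for weight vector w (H = z, since the model is linear in w).
    Q: (n, n) process-noise matrix, R: scalar measurement noise."""
    K = P @ z / (R + z @ P @ z)
    w = w + eta * K * error
    P = P - np.outer(K, z @ P) + Q
    return w, P

# One identification step (illustrative usage):
# x_hat_next = w @ rhonn_terms(x_measured, u)   # prediction
# error      = x_measured_next - x_hat_next     # identification error
# w, P       = ekf_update(w, P, rhonn_terms(x_measured, u), error, Q, R)
```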
Abstract:
Brain-Computer Interfaces are usually tackled from a medical point of view, correlating observed phenomena with physical facts known about the brain. Existing classification methods rely on the application of deterministic algorithms and depend on a certain degree of knowledge about the underlying phenomena in order to process the data. In this demo, different architectures for an evolvable hardware classifier implemented on an FPGA are proposed, in line with the objective of generalizing evolutionary algorithms regardless of the application.
Abstract:
This project is based on the study by Jean Schoentgen in which the author characterized vocal microtremor by means of the modulation index and the modulation frequency. In this project, Matlab is used to compute these parameters, and the resulting data are analyzed at the end. The project is divided into three main parts. The first briefly explains basic concepts of the voice and important notions such as physiological tremor, pathological tremor and vocal jitter, among others, and also details the mathematical concepts used in the development of the code. The aim is for the reader to be clear about some important concepts before the code is developed, so that the study carried out in this project can be followed more easily; this part does not explain each concept at length, since the reader is assumed to have basic engineering knowledge and countless books explain each of these concepts more precisely. The second part covers the development of the code. As mentioned above, Matlab was used: it is employed in most subjects of the degree, so a good command of it had been acquired, and it provides very useful toolboxes that ease the mathematical calculations. This part illustrates, step by step, each stage of the elaboration of the code, together with some plots of the voice signal as it passes through each stage of the code. In the last part, the results of all the calculations on the voice recordings are obtained and analyzed one by one, compared with those of Jean Schoentgen's study, and the possible differences are discussed.
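A hedged sketch of the two tremor descriptors computed in the project (the original code was written in Matlab); this Python version assumes the F0 contour has already been extracted and only illustrates one way to compute the modulation frequency and modulation index:

```python
# Estimate vocal-tremor modulation frequency and modulation index from an
# F0 contour sampled at a known frame rate.
import numpy as np

def tremor_parameters(f0_contour, contour_rate, band=(2.0, 15.0)):
    """f0_contour: F0 values in Hz, sampled at contour_rate frames per second."""
    f0 = np.asarray(f0_contour, dtype=float)
    mean_f0 = np.mean(f0)
    fluctuation = f0 - mean_f0

    # Modulation frequency: dominant spectral peak of the F0 fluctuation
    spectrum = np.abs(np.fft.rfft(fluctuation * np.hanning(len(fluctuation))))
    freqs = np.fft.rfftfreq(len(fluctuation), d=1.0 / contour_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    mod_freq = freqs[in_band][np.argmax(spectrum[in_band])]

    # Modulation index: relative depth of the F0 fluctuation
    mod_index = np.std(fluctuation) / mean_f0
    return mod_freq, mod_index
```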
Abstract:
We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve significant savings in data transmission by avoiding texture coordinates altogether, since they are generated automatically through an unwrapping system agreed upon by both encoder and decoder.
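A minimal sketch of the refinement-image idea for texture LODs, assuming simple block-average downsampling and nearest-neighbour upsampling; the actual filters and encoding details of the system may differ:

```python
# Each coarse texture LOD is a downsampled version of the finest texture; a
# refinement image stores what upsampling cannot recover, so the decoder can
# rebuild finer LODs progressively.
import numpy as np

def downsample(tex):
    """Average 2x2 blocks (one LOD step down). tex: (H, W, C) floats, H and W even."""
    h, w = tex.shape[:2]
    return tex.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def upsample(tex):
    """Nearest-neighbour 2x upsampling (one LOD step up)."""
    return tex.repeat(2, axis=0).repeat(2, axis=1)

def encode_lod(fine_tex):
    coarse = downsample(fine_tex)
    refinement = fine_tex - upsample(coarse)   # transmitted only if needed
    return coarse, refinement

def decode_lod(coarse, refinement):
    return upsample(coarse) + refinement       # exact reconstruction of the finer LOD
```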
Abstract:
Some recent proposals for web-based applications aim to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finally, the paper illustrates the description with an example of an application in the field of photography equipment.
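An illustrative toy sketch of the configuration idea, in which customer needs are mapped onto requirements and compatible components are selected; the catalogue and rules are hypothetical placeholders, not the knowledge model presented in the paper:

```python
# Toy knowledge-based configurator for photography equipment.
CATALOGUE = {
    "camera": [
        {"name": "compact-A", "max_zoom": 3, "low_light": False},
        {"name": "reflex-B", "max_zoom": 10, "low_light": True},
    ],
    "lens": [
        {"name": "standard", "zoom": 3},
        {"name": "tele", "zoom": 10},
    ],
}

def configure(needs):
    """needs: e.g. {'wildlife': True, 'night_shots': True}"""
    required_zoom = 10 if needs.get("wildlife") else 3
    camera = next(c for c in CATALOGUE["camera"]
                  if c["max_zoom"] >= required_zoom
                  and (not needs.get("night_shots") or c["low_light"]))
    lens = next(l for l in CATALOGUE["lens"] if l["zoom"] >= required_zoom)
    return {"camera": camera["name"], "lens": lens["name"]}

# configure({"wildlife": True, "night_shots": True})
# -> {'camera': 'reflex-B', 'lens': 'tele'}
```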
Abstract:
We propose a modular, assertion-based system for verification and debugging of large logic programs, together with several interesting models for checking assertions statically in modular programs, each with different characteristics and representing different trade-offs. Our proposal is a modular and multivariant extension of our previously proposed abstract assertion checking model, and we also report on its implementation in the CiaoPP system. In our approach, the specification of the program, given by a set of assertions, may be partial, instead of the complete specification required by traditional verification systems. Also, the system can deal with properties which cannot always be determined at compile time. As a result, the proposed system needs to work with safe approximations: all assertions proved correct are guaranteed to be valid, and all errors detected are actual errors. The use of modular, context-sensitive static analyzers also allows us to introduce a new distinction between assertions checked in a particular context and assertions checked in general.
Abstract:
Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both to prove programs correct and to replace a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program computed by standard static analyzers as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility and automation which is inherent in abstract interpretation techniques to the area of mobile code safety. We have implemented and benchmarked ACC within the Ciao system preprocessor. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable.
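A toy sketch of the ACC idea on a sign abstract domain: the producer ships one abstract state per program point as the certificate, and the consumer checks in a single pass that each shipped state covers the result of the abstract transfer function; the domain, program representation and check are placeholders, not the CiaoPP machinery:

```python
# Single-pass certificate checking over a toy sign domain.
def sign_of(n):
    return "pos" if n > 0 else "neg" if n < 0 else "zero"

def abs_add(a, b):
    """Abstract addition on signs."""
    if a == b and a in ("pos", "neg"):
        return a
    if "zero" in (a, b):
        return b if a == "zero" else a
    return "top"

def transfer(state, instr):
    """Abstract transfer function for a tiny straight-line language."""
    op, dst, x, y = instr
    if op == "const":
        return {**state, dst: sign_of(x)}
    if op == "add":
        return {**state, dst: abs_add(state[x], state[y])}
    raise ValueError(op)

def check_certificate(program, certificate):
    """Consumer side: each certified state must cover the transfer result."""
    state = certificate[0]
    for i, instr in enumerate(program):
        computed = transfer(state, instr)
        claimed = certificate[i + 1]
        if any(claimed[v] not in (computed[v], "top") for v in computed):
            return False
        state = claimed
    return True

program = [("const", "x", 3, None), ("const", "y", 4, None), ("add", "z", "x", "y")]
certificate = [{}, {"x": "pos"}, {"x": "pos", "y": "pos"},
               {"x": "pos", "y": "pos", "z": "pos"}]
# check_certificate(program, certificate)  -> True
```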