92 results for Automatic Peak Detection
Abstract:
A detailed study of the voltammetric behavior of ethiofencarb (ETF) is reported using a glassy carbon electrode (GCE) and a hanging mercury drop electrode (HMDE). With the GCE, it was verified that the oxidative mechanism is irreversible and independent of pH, and the maximum peak current was observed at +1.20 V vs. AgCl/Ag at pH 1.9. A linear calibration line was obtained from 1.0×10⁻⁴ to 8.0×10⁻⁴ mol L⁻¹ with the square-wave voltammetry (SWV) method. To complete the electrochemical knowledge of the ETF pesticide, its reduction was also explored with the HMDE. A well-defined peak was observed at −1.00 V vs. AgCl/Ag over a wide pH range, with the highest signal at pH 7.0. Linearity was obtained over the 4.2×10⁻⁶ to 9.4×10⁻⁶ mol L⁻¹ ETF concentration range. An immediate alkaline hydrolysis of ETF was performed, producing a phenolic compound, 2-ethylthiomethylphenol (EMP), and the electrochemical activity of the product was examined. It was found that EMP is oxidized on the GCE at +0.75 V vs. AgCl/Ag, with a maximum peak current at pH 3.2, but the compound showed no reduction activity on the HMDE. Taking advantage of the decrease in peak potential, a flow injection analysis (FIA) system connected to an amperometric detector was developed, enabling the determination of EMP over the concentration range of 1.0×10⁻⁷ to 1.0×10⁻⁵ mol L⁻¹ at a sampling rate of 60 h⁻¹. The results provided by the FIA methodology were compared with those from the high-performance liquid chromatography (HPLC) technique and showed good agreement, with relative deviations lower than 4%. Recovery trials were performed and the obtained values were between 98 and 104%.
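The quantification step in both the SWV and FIA methods rests on a linear calibration of peak current against concentration. The short sketch below illustrates that step with NumPy; the concentration and current values are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical SWV calibration data: ETF concentration (mol L-1) vs. peak current (uA).
# Values are illustrative only; they are not taken from the study.
conc = np.array([1.0e-4, 2.0e-4, 4.0e-4, 6.0e-4, 8.0e-4])
peak_current = np.array([0.52, 1.01, 2.05, 3.02, 3.98])

# Least-squares fit of the linear calibration line i = m*c + b.
slope, intercept = np.polyfit(conc, peak_current, 1)

# Correlation coefficient as a quick linearity check.
r = np.corrcoef(conc, peak_current)[0, 1]
print(f"slope = {slope:.3e} uA L mol-1, intercept = {intercept:.3f} uA, r = {r:.4f}")

# Quantify an unknown sample from its measured peak current.
i_unknown = 2.5
c_unknown = (i_unknown - intercept) / slope
print(f"estimated concentration: {c_unknown:.2e} mol L-1")
```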
Abstract:
Quinone outside Inhibitors (QoI) are one of the most important and recent fungicide groups used in viticulture and are also allowed under Integrated Pest Management. Azoxystrobin, kresoxim-methyl and trifloxystrobin are the main active ingredients for treating the downy and powdery mildews that can be present in grapes and wines. In this paper, a method is reported for the analysis of these three QoI fungicides in grapes and wine. After liquid–liquid extraction and a clean-up on commercial silica cartridges, analysis was performed by isocratic HPLC with diode array detection (DAD), with a run time of 13 min. Confirmation was performed by solid-phase micro-extraction (SPME) followed by GC/MS determination. The main validation parameters for the three compounds in grapes and wine were a limit of detection up to 0.073 mg kg⁻¹, a precision not exceeding 10.0% and an average recovery of 93% ±38.
Abstract:
The detection and tracking of people has a wide variety of applications in computer vision. Although it has been the subject of years of research, it remains an open topic, and obtaining an approach that combines flexibility and accuracy is still a major challenge today. The work presented in this dissertation develops a case study on the automatic detection and tracking of human faces in a meeting-room environment, realized as a flexible, low-cost system. The proposed system is based on the GNU's Not Unix (GNU) Linux operating system and is divided into four stages: video acquisition, face detection, tracking, and reorientation of the camera position. Acquisition consists of capturing video frames from the three Sony SNC-RZ25P Internet Protocol (IP) cameras installed in the room, over an already existing Local Area Network (LAN). This stage supplies the video frames for processing by the detection and tracking stages. Detection uses the algorithm proposed by Viola and Jones for identifying objects based on their main features, which allows the detection of any type of object (in this case human faces) in a generic way and in real time. When a face is successfully identified, the outputs of the detection stage are the coordinates of the face position in the video frame. The coordinates of the detected face are then used by the tracking algorithm to follow the face across the subsequent video frames. The tracking stage implements the Continuously Adaptive Mean-SHIFT (Camshift) algorithm, which works by searching a probability density map for its maximum value through successive iterations. The algorithm returns the coordinates of the face position and orientation. These coordinates are used to orient the camera so that the face is always kept as close as possible to the centre of the camera's field of view. The results obtained showed that the proposed tracking system is able to recognize and follow moving faces in sequences of video frames, demonstrating its suitability for real-time monitoring applications.
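The detect-then-track pipeline described above (Viola-Jones detection feeding a Camshift tracker over a hue back-projection) maps closely onto OpenCV primitives. The following is a minimal sketch of that pipeline; the camera source, cascade file and parameter values are illustrative assumptions, not the dissertation's actual configuration, which drove Sony SNC-RZ25P IP cameras over a LAN.

```python
import cv2

# Placeholder source: a webcam; an IP-camera stream URL could be used instead.
cap = cv2.VideoCapture(0)
# Stock Viola-Jones frontal-face cascade shipped with OpenCV (assumed path).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = None
roi_hist = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    if track_window is None:
        # Detection stage: Viola-Jones cascade on the grayscale frame.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            track_window = (x, y, w, h)
            # Hue histogram of the face region drives the back-projection.
            roi = hsv[y:y + h, x:x + w]
            roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking stage: Camshift iterates toward the probability-map maximum.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
        # rot_rect carries position and orientation, usable to re-aim the camera.
        cv2.ellipse(frame, rot_rect, (0, 255, 0), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```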
Abstract:
Celiac disease (CD) is a gluten-induced autoimmune enteropathy characterized by the presence of antibodies against gliadin (AGA) and anti-tissue transglutaminase (anti-tTG) antibodies. A disposable electrochemical dual immunosensor for the simultaneous detection of IgA- and IgG-type AGA and anti-tTG antibodies in real patients' samples is presented. The proposed immunosensor is based on a dual screen-printed carbon electrode with two working electrodes, nanostructured with a carbon–metal hybrid system that worked as the transducer surface. The immunosensing strategy consisted of the immobilization of gliadin and tTG (i.e. CD-specific antigens) on the nanostructured electrode surface. The electrochemical detection of the human antibodies present in the assayed serum samples was carried out through the antigen–antibody interaction and recorded using alkaline phosphatase-labelled anti-human antibodies, with a mixture of 3-indoxyl phosphate and silver ions as the substrate. The analytical signal was based on the anodic redissolution of the enzymatically generated silver by cyclic voltammetry. The results obtained were corroborated with commercial ELISA kits, indicating that the developed sensor can be a good alternative to traditional methods, allowing a decentralization of the analyses towards a point-of-care strategy.
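Since the analytical signal here is the height of a stripping peak in a voltammogram, an automated peak-picking step is often useful when processing such traces. The sketch below shows one generic way to do this with SciPy on a synthetic curve; the trace and the prominence threshold are illustrative assumptions, not the paper's data or processing.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for a recorded voltammetric trace: a Gaussian "peak"
# on a sloped, noisy baseline. Not real data.
potential = np.linspace(-0.2, 0.6, 400)  # V vs. reference
current = 2.0 * np.exp(-((potential - 0.15) / 0.04) ** 2)
current += 0.05 * potential + 0.01 * np.random.default_rng(0).normal(size=400)

# Peak height is the analytical signal; the prominence filter rejects baseline noise.
peaks, props = find_peaks(current, prominence=0.5)
for i in peaks:
    print(f"peak at {potential[i]:+.3f} V, current {current[i]:.3f} (a.u.)")
```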
Abstract:
Managing programming exercises requires several heterogeneous systems, such as evaluation engines, learning-object repositories and exercise resolution environments. The coordination of networks of such disparate systems is rather complex. These tools would be too specific to incorporate into an e-Learning platform. Even if they could be provided as pluggable components, the burden of maintaining them would be prohibitive for institutions with few courses in those domains. This work presents a standards-based approach for the coordination of a network of e-Learning systems participating in the automatic evaluation of programming exercises. The proposed approach uses a pivot component to orchestrate the interaction among all the systems using communication standards. This approach was validated through its effective use in the classroom, and we present some preliminary results.
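To make the pivot idea concrete, the sketch below shows a drastically simplified mediator that fetches an exercise from a repository and forwards a student submission to an evaluation engine over HTTP. The URLs, payload fields and two-service topology are hypothetical illustrations; the paper relies on established e-Learning communication standards rather than this ad-hoc JSON exchange.

```python
import requests

# Hypothetical endpoints; real deployments would use standard service interfaces.
REPOSITORY_URL = "http://repo.example.org/exercises"
EVALUATOR_URL = "http://evaluator.example.org/evaluate"

def evaluate_submission(exercise_id, program_source):
    """Pivot role: mediate between repository and evaluation engine."""
    # 1. Fetch the exercise description (tests, limits) from the repository.
    exercise = requests.get(f"{REPOSITORY_URL}/{exercise_id}").json()
    # 2. Forward the exercise plus the student's program to the evaluation engine.
    report = requests.post(EVALUATOR_URL, json={
        "exercise": exercise,
        "source": program_source,
    }).json()
    # 3. Return the evaluation report to the caller (e.g. the LMS front-end).
    return report
```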
Abstract:
Copper zinc tin sulfide (CZTS) is a promising Earth-abundant thin-film solar cell material; it has an appropriate band gap of ~1.45 eV and a high absorption coefficient. The most efficient CZTS cells tend to be slightly Zn-rich and Cu-poor. However, growing Zn-rich CZTS films can sometimes result in phase decomposition of CZTS into ZnS and Cu2SnS3, which is generally deleterious to solar cell performance. Cubic ZnS is difficult to detect by XRD because its diffraction pattern is similar to that of CZTS. We hypothesize that synchrotron-based extended X-ray absorption fine structure (EXAFS), which is sensitive to the local chemical environment, may be able to determine the quantity of the ZnS phase in CZTS films by detecting differences in the second-nearest-neighbor shell of the Zn atoms. Films of varying stoichiometries, from Zn-rich to Cu-rich (Zn-poor), were examined using the EXAFS technique. Differences in the spectra as a function of the Cu/Zn ratio were detected. Linear combination analysis suggests an increasing ZnS signal as the CZTS films become more Zn-rich. We demonstrate that the sensitive EXAFS technique could be used to quantify the amount of ZnS present and provide a guide to the crystal growth of highly phase-pure films.
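Linear combination analysis of this kind models a measured spectrum as a weighted sum of reference spectra. The sketch below illustrates the fitting step with non-negative least squares; the two "reference" curves and the 70/30 mixture are synthetic stand-ins, not EXAFS data from the study.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical reference signals standing in for CZTS and ZnS EXAFS spectra.
k = np.linspace(2, 12, 500)                        # photoelectron wavenumber grid
ref_czts = np.sin(2 * 2.3 * k) * np.exp(-0.10 * k)
ref_zns = np.sin(2 * 2.6 * k) * np.exp(-0.12 * k)

# "Measured" spectrum: 70% CZTS + 30% ZnS plus noise, for demonstration only.
rng = np.random.default_rng(1)
measured = 0.7 * ref_czts + 0.3 * ref_zns + 0.01 * rng.normal(size=k.size)

# Non-negative least squares keeps the fitted phase fractions physical.
A = np.column_stack([ref_czts, ref_zns])
fractions, residual = nnls(A, measured)
total = fractions.sum()
print(f"CZTS fraction: {fractions[0]/total:.2f}, ZnS fraction: {fractions[1]/total:.2f}")
```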
Abstract:
In the last few years, the number of systems and devices that use voice-based interaction has grown significantly. For continued use of these systems, the interface must be reliable and pleasant in order to provide an optimal user experience. However, there are currently very few studies that evaluate how pleasant a voice is from a perceptual point of view when the final application is a speech-based interface. In this paper we present an objective definition of voice pleasantness based on the composition of a representative feature subset, and a new automatic system for voice pleasantness classification and intensity estimation. Our study is based on a database composed of European Portuguese female voices, but the methodology can be extended to male voices or to other languages. In the objective performance evaluation, the system achieved a 9.1% error rate for voice pleasantness classification and a 15.7% error rate for voice pleasantness intensity estimation.
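The two tasks described above, pleasantness classification and intensity estimation from a feature subset, correspond to a standard supervised classification plus regression setup. The sketch below shows that setup with scikit-learn; the features, labels and SVM models are generic placeholder assumptions, not the paper's corpus or feature set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

# Placeholder feature matrix (e.g. pitch, jitter, shimmer, MFCC statistics)
# and synthetic labels; the real study used a curated EP female-voice database.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y_class = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)          # pleasant / not
y_intensity = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

# Task 1: pleasantness classification, reported as an error rate.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y_class, cv=5).mean()
print(f"classification error rate: {1 - acc:.1%}")

# Task 2: pleasantness intensity estimation (regression).
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
r2 = cross_val_score(reg, X, y_intensity, cv=5).mean()
print(f"intensity estimation R^2: {r2:.2f}")
```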
Integration of an automatic storage and retrieval system (ASRS) in a discrete-part automation system
Abstract:
This technical report describes the work carried out in a project within the ERASMUS programme. The objective of this project was the integration of an automatic warehouse in a discrete-part automation system. The discrete-part automation system located at the LASCRI (Critical Systems) laboratory at ISEP was extended with automatic storage and retrieval of manufacturing parts, through the integration of an automatic warehouse and an automatic guided vehicle (AGV).
Abstract:
It has been shown that in reality at least two general scenarios of data structuring are possible: (a) a self-similar (SS) scenario, when the measured data form an SS structure, and (b) a quasi-periodic (QP) scenario, when the repeated (strongly correlated) data form random sequences that are almost periodic with respect to each other. In the second case it becomes possible to describe their behavior and to express part of their randomness quantitatively in terms of the deterministic amplitude–frequency response belonging to the generalized Prony spectrum. This possibility allows us to re-examine the conventional concept of measurements and opens a new way for the description of a wide set of different data. In particular, it concerns different complex systems where a 'best-fit' model claiming to describe the measured data is absent, but where the bare necessity remains of describing these data in terms of a reduced number of quantitative parameters. The possibilities of the proposed approach and of the detection algorithm for QP processes were demonstrated on actual data: spectroscopic data recorded for pure water and acoustic data for a test hole. The suggested methodology allows one to revise the accepted classification of different incommensurable and self-affine spatial structures and to find an accurate interpretation of generalized Prony spectroscopy, which includes Fourier spectroscopy as a particular case.
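The generalized Prony spectrum extends the classical Prony decomposition, which fits a uniformly sampled signal with a sum of damped complex exponentials. As a point of reference for the fitting idea, the sketch below implements the classical method (linear prediction, polynomial roots, then a Vandermonde amplitude fit); the test signal is synthetic, not the water spectroscopy or acoustic data from the paper.

```python
import numpy as np

def prony(x, p):
    """Fit x[n] ~ sum_i h_i * z_i**n with p complex exponentials (classical Prony)."""
    N = len(x)
    # Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p]) for n = p..N-1.
    A = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    # Poles are the roots of z**p + a1*z**(p-1) + ... + ap.
    z = np.roots(np.concatenate(([1.0], a)))
    # Complex amplitudes from a Vandermonde least-squares fit.
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    h = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, h, V @ h

# Synthetic quasi-periodic test signal: two (damped) cosines = 4 complex exponentials.
t = np.arange(200)
x = 1.5 * np.exp(-0.01 * t) * np.cos(0.4 * t) + 0.8 * np.cos(0.9 * t + 0.5)
z, h, x_hat = prony(x, p=4)
print("frequencies (rad/sample):", np.sort(np.abs(np.angle(z))))
print("reconstruction RMS error:", np.sqrt(np.mean(np.abs(x - x_hat) ** 2)))
```

The recovered pole angles form the discrete amplitude-frequency description; the generalized approach in the paper goes further, but this is the underlying decomposition.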
Abstract:
The demonstration proposal starts from the capabilities of a wireless biometric badge [4], which integrates a localization and tracking service along with an automatic personal identification mechanism, to show how a full system architecture is devised to enable the control of physical access to restricted areas. The system leverages the availability of a novel IEEE 802.15.4/Zigbee Cluster Tree network model, enhanced security levels, and respect for all users' privacy.
Abstract:
Master's degree in Electrical and Computer Engineering - Specialization Area in Telecommunications
Abstract:
The distinction between stunned and damaged myocardium has been a relevant concern in the setting of acute myocardial infarction (AMI). The assessment of myocardial viability after infarction is of vital importance in the clinical context, particularly at an early stage. Cardiac Magnetic Resonance is currently the reference examination for assessing myocardial viability. However, it is an expensive examination with limited availability. Preliminary studies have demonstrated the potential of Computed Tomography imaging for assessing the infarct area, both in animal and in human studies. The objective of this thesis is to verify the usefulness of a protocol for assessing myocardial viability based on late enhancement (LE) Computed Tomography images, after a percutaneous coronary intervention procedure, in the context of ST-elevation acute myocardial infarction (STEMI). It also aims to contribute to the analysis of medical images of the myocardium, providing methods for quantifying LE and software to support medical decision-making in this substantially recent imaging modality. Several processes for quantifying the LE volume are evaluated, including a novel method based on the automatic detection of normal myocardium. An algorithm is also proposed for the automatic detection of the degree of transmurality per myocardial segment, and its efficiency is compared against the medical diagnosis of the same examinations. Despite the small number of examinations used to validate the techniques described in this thesis, the results are very promising and may constitute an asset in supporting the management of the AMI patient.
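One widely used baseline rule for quantifying late enhancement thresholds voxels at the mean plus a multiple of the standard deviation of remote, normal myocardium. The sketch below illustrates that rule on synthetic attenuation values; the numbers, the k = 2 choice and the region selection are illustrative assumptions and may differ from the thesis's actual methods.

```python
import numpy as np

# Synthetic myocardial attenuation values (HU): mostly normal tissue,
# with the first 600 voxels overwritten as "enhanced" (infarcted) tissue.
rng = np.random.default_rng(2)
myocardium_hu = rng.normal(80, 10, size=5000)
myocardium_hu[:600] = rng.normal(140, 15, size=600)

# Remote region assumed normal; threshold = mean + 2*SD of that region.
remote = myocardium_hu[3000:]
threshold = remote.mean() + 2 * remote.std()
enhanced = myocardium_hu > threshold
print(f"threshold = {threshold:.1f} HU, enhanced fraction = {enhanced.mean():.1%}")
```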
Abstract:
Recent studies of mobile Web trends show a continuous explosion of mobile-friendly content. However, the increasing number and heterogeneity of mobile devices pose several challenges for Web programmers who want to automatically obtain the delivery context and adapt the content to mobile devices. In this process, the device detection phase assumes an important role, where an inaccurate detection could result in a poor mobile experience for the end-user. In this paper we compare the most promising approaches for mobile device detection. Based on this study, we present an architecture for a system to detect and deliver uniform m-Learning content to students in a Higher School. We focus mainly on the device-capabilities repository, which is manageable and accessible through an API. We detail the structure of the capabilities XML Schema that formalizes the data within the device-capabilities XML repository, and the REST Web Service API for selecting the corresponding device-capabilities data according to a specific request. Finally, we validate our approach by presenting the access and usage statistics of the mobile web interface of the proposed system, such as hits and new visitors, mobile platforms, average time on site and bounce rate.
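A capabilities repository behind a REST API, as described above, can be sketched in a few lines. The toy service below uses Flask with an in-memory dictionary in place of the XML repository; the routes, device identifiers and capability fields are hypothetical illustrations, not the paper's schema or API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tiny in-memory stand-in for the device-capabilities XML repository.
CAPABILITIES = {
    "iphone": {"screen_width": 390, "markup": "html5", "video": True},
    "nokia-c3": {"screen_width": 320, "markup": "xhtml-mp", "video": False},
}

@app.route("/capabilities/<device_id>")
def get_capabilities(device_id):
    # Direct lookup of a known device's capability record.
    caps = CAPABILITIES.get(device_id.lower())
    if caps is None:
        return jsonify(error="unknown device"), 404
    return jsonify(caps)

@app.route("/detect")
def detect():
    # Naive detection: match a known device token inside the User-Agent header.
    ua = request.headers.get("User-Agent", "").lower()
    for device_id, caps in CAPABILITIES.items():
        if device_id in ua:
            return jsonify(device=device_id, capabilities=caps)
    # Fall back to a generic profile when no device matches.
    return jsonify(device="generic", capabilities={"markup": "html"})

if __name__ == "__main__":
    app.run()
```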
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and the sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop joined researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically-underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used to verify the similarity measures against objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties, but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims to develop an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.