98 results for CNN


Relevance: 10.00%

Abstract:

We report an effective approach to constructing a biomimetic sensor of multicopper oxidases by immobilizing a cyclic tetrameric copper(II) species, containing the ligand (4-imidazolyl)ethylene-2-amino-1-ethylpyridine (apyhist), in a Nafion® membrane on a vitreous carbon electrode surface. This complex provides a tetranuclear arrangement of copper ions that allows an effective reduction of oxygen to water in a catalytic cycle involving four electrons. The electrochemical reduction of oxygen was studied in pH 9.0 buffer solution using cyclic voltammetry, chronoamperometry, rotating disk electrode voltammetry and scanning electrochemical microscopy. The mediator shows good electrocatalytic ability for the reduction of O2 at pH 9.0, with a 350 mV reduction in overpotential and an increased current response compared with a bare glassy carbon electrode. The heterogeneous rate constant (k_ME) for the reduction of O2 at the modified electrode was determined from a Koutecky-Levich plot. In addition, the charge transport rate through the coating and the apparent diffusion coefficient of O2 in the modifier film were also evaluated. The overall process was found to be governed by charge transport through the coating, occurring at the interface or in a finite layer at the electrode/coating interface. The proposed study opens the way for the development of bioelectronic devices based on molecular recognition and self-organization.
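The Koutecky-Levich analysis mentioned above extracts the kinetic current, and from it the heterogeneous rate constant, from the intercept of a 1/i versus ω^(-1/2) plot. A minimal sketch in Python; the rotation rates, currents, electrode area and O2 concentration are all hypothetical illustration values, not the paper's data:

```python
import numpy as np

# Hypothetical rotating-disk data: rotation rates (rad/s) and limiting currents (A).
omega = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
i_lim = np.array([2.1e-5, 2.8e-5, 3.6e-5, 4.4e-5, 5.1e-5])

# Koutecky-Levich: 1/i = 1/i_k + 1/(B * omega**0.5).
# A linear fit of 1/i against omega**-0.5 gives slope 1/B and intercept 1/i_k.
x = omega ** -0.5
y = 1.0 / i_lim
slope, intercept = np.polyfit(x, y, 1)

i_k = 1.0 / intercept            # kinetic current, A
# i_k = n * F * A * k_ME * C  ->  k_ME = i_k / (n * F * A * C)
n, F = 4, 96485.0                # electrons per O2, Faraday constant (C/mol)
A = 0.071                        # electrode area, cm^2 (hypothetical)
C = 1.2e-6                       # O2 concentration, mol/cm^3 (hypothetical)
k_ME = i_k / (n * F * A * C)
print(f"i_k = {i_k:.3e} A, k_ME = {k_ME:.3e} cm/s")
```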

Relevance: 10.00%

Abstract:

This paper addresses the area of video annotation, indexing and retrieval, and shows how a set of tools can be employed, along with domain knowledge, to detect narrative structure in broadcast news. The initial structure is detected using low-level audio-visual processing in conjunction with domain knowledge. Higher-level processing can then use this initial structure to direct further analysis, improving and extending the classification.

The structure detected breaks a news broadcast into segments, each containing a single topic of discussion. Further, the segments are labelled as (a) anchor person or reporter, (b) footage with a voice-over, or (c) sound bite. This labelling may be used to provide a summary, for example by presenting a thumbnail for each reporter present in a section of the video. Including domain knowledge in the computation allows high-level processing to be applied in a more directed way, greatly improving the efficiency of the effort expended. This allows valid deductions to be made about the structure and semantics of the contents of a news video stream, as demonstrated by our experiments on CNN news broadcasts.
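The segment-and-label structure described above can be captured in a small data model. A minimal sketch; the field names and the thumbnail heuristic are illustrative, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float          # seconds from broadcast start
    end: float
    label: str            # "anchor", "voice_over", or "sound_bite"
    topic: str

def summary_thumbnails(segments):
    """Pick one representative timestamp per anchor/reporter segment,
    from which a thumbnail frame could be extracted for a summary."""
    return [(s.topic, (s.start + s.end) / 2)
            for s in segments if s.label == "anchor"]

broadcast = [
    Segment(0.0, 45.0, "anchor", "election"),
    Segment(45.0, 90.0, "voice_over", "election"),
    Segment(90.0, 120.0, "sound_bite", "weather"),
]
print(summary_thumbnails(broadcast))  # [('election', 22.5)]
```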

Relevance: 10.00%

Abstract:

Dr Colleen Murrell was interviewed on ABC local radio in advance of the launch of ABC Australia's 24-hour TV station. In the interview she discusses the reasons for the ABC's new venture in broadcasting and places it in the context of other international stations such as BBC World, CNN and Sky News. Dr Murrell also discusses the cost implications and the ability of the ABC to produce original content from its international correspondents. 

Relevance: 10.00%

Abstract:

This book reveals that ‘fixers’—local experts on whom foreign correspondents rely—play a much more significant role in international television newsgathering than has been documented or understood. Murrell explores the frames through which international reporting has traditionally been analysed and then shows that fixers, who have largely been dismissed by scholars as "logistical aides", are in fact central to the day-to-day decision-making that takes place on the road. Murrell looks at why and how fixers are selected and what their significance is to foreign correspondence. She asks whether fixers help introduce a local perspective into the international news agenda, or whether they are simply ‘People Like Us’ (PLU). Also included are excerpts from interviews with TV correspondents and fixers, and in-depth case studies of correspondents in Iraq and Indonesia.

Relevance: 10.00%

Abstract:

Nowadays, classifying proteins into structural classes, which concerns inferring patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that a protein's function is intrinsically related to its spatial conformation, yet such conformations are very difficult to obtain experimentally. This problem has therefore drawn the attention of many researchers in Bioinformatics. Given the great difference between the number of known protein sequences and the number of experimentally determined three-dimensional structures, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential. In this work, ML techniques are used to recognize protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machines and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work suffers from class imbalance, class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to mitigate the problem. To evaluate the ML methods, a cross-validation procedure is applied, in which classifier accuracy is measured as the mean classification error rate on independent test sets. These means are compared pairwise by hypothesis testing to assess whether the differences between them are statistically significant.
Among the individual classifiers, Support Vector Machines presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that of the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class. In this context, the NCL technique proved the most appropriate.
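Among the class-balancing techniques listed above, CNN denotes Hart's Condensed Nearest Neighbour rule (not a convolutional network). A minimal pure-NumPy sketch of the idea, on toy data; this is not the thesis code or dataset:

```python
import numpy as np

def condensed_nn(X, y, seed=0):
    """Hart's Condensed Nearest Neighbour: keep a subset S such that every
    sample in X is correctly classified by its nearest neighbour in S."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    store = [order[0]]                       # start with one random sample
    changed = True
    while changed:
        changed = False
        for i in order:
            if i in store:
                continue
            d = np.linalg.norm(X[store] - X[i], axis=1)
            nearest = store[int(np.argmin(d))]
            if y[nearest] != y[i]:           # misclassified -> add to store
                store.append(i)
                changed = True
    return np.array(sorted(store))

# Toy imbalanced data: 20 majority points near 0, 3 minority points near 5.
X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (20, 2)),
               np.random.default_rng(2).normal(5, 0.5, (3, 2))])
y = np.array([0] * 20 + [1] * 3)
kept = condensed_nn(X, y)
print(len(kept), "of", len(X), "samples kept")
```

Because every retained point must still be classified correctly by its nearest stored neighbour, both classes always survive, while redundant majority points are dropped.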

Relevance: 10.00%

Abstract:

Breast cancer ranks first in mortality among the tumor pathologies affecting the female population worldwide. Several clinical studies have shown that diagnosis by the radiologist can be aided and improved by Computer Aided Detection (CAD) systems. Because of the great variability in the shape and size of tumor masses, and their similarity to the tissues that host them, automated detection of masses is an extremely complicated problem. A CAD system generally consists of two classification stages: detection, responsible for identifying the suspicious regions (ROIs) on the mammogram and thus preliminarily eliminating regions not at risk; and classification proper, which separates the ROIs into masses and healthy tissue. The main purpose of this thesis is the study of new detection methodologies that can improve on the performance obtained with traditional techniques. Detection is treated as a supervised learning problem and addressed with Convolutional Neural Networks (CNNs), an algorithm belonging to deep learning, a new branch of machine learning. CNNs are inspired by Hubel and Wiesel's discoveries concerning two basic cell types identified in the visual cortex of cats: simple cells (S), which respond to edge-like stimuli, and complex cells (C), which are locally invariant to the exact position of the stimulus. By analogy with the visual cortex, CNNs use a deep architecture characterized by layers that alternately perform convolution and subsampling operations on the images. CNNs, which take two-dimensional input, are usually employed for classification and automatic recognition of images such as objects, faces and logos, or for document analysis.
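The alternation of convolution and subsampling described above can be sketched in a few lines of NumPy. This is a toy illustration with a hypothetical edge kernel, not the thesis implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D correlation of a single kernel with a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(fmap, size=2):
    """Non-overlapping average pooling (the 'subsampling' step)."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).mean(axis=(1, 3))

patch = np.random.default_rng(0).random((28, 28))   # e.g. a mammogram ROI patch
edge = np.array([[1.0, -1.0], [1.0, -1.0]])          # simple vertical-edge kernel
fmap = conv2d(patch, edge)        # 27x27 feature map
pooled = subsample(fmap)          # 13x13 after 2x2 averaging
print(fmap.shape, pooled.shape)
```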

Relevance: 10.00%

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is a newly emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.

Relevance: 10.00%

Abstract:

Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN) designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2×2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps, and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for this specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods on a challenging dataset. The classification performance (~85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
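The layer stack described above can be traced with a short shape walkthrough. The 32×32 patch size and the filter counts are hypothetical assumptions for illustration; the paper fixes only the 2×2 kernels, the pooling scheme, and the 7 output classes:

```python
# Five conv layers with 2x2 kernels (stride 1, no padding assumed): each
# layer shrinks the map by one pixel per side-length unit.
def conv_out(size, kernel=2, stride=1):
    return (size - kernel) // stride + 1

size, channels = 32, 1
for width in [16, 32, 64, 64, 64]:       # hypothetical filter counts
    size = conv_out(size)
    channels = width
    print(f"conv 2x2 -> {size}x{size}x{channels}")

# Average pooling with window equal to the final map size collapses each
# feature map to a single value, leaving a `channels`-dimensional vector.
features = channels
for units in [128, 64, 7]:               # last dense layer: the 7 ILD classes
    print(f"dense {features} -> {units}")
    features = units
```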

Relevance: 10.00%

Abstract:

This Dissertation investigates the field of computer image recognition applied to medical imaging in mammography. There is interest in developing learning systems that assist radiologists in recognizing microcalcifications, to support them in breast cancer screening and prevention programs. The analysis of microcalcifications has emerged as a key technique for early diagnosis of breast cancer, but the design of automatic systems to recognize them is complicated by the variability and conditions of mammographic images. This Thesis discusses the theoretical approaches to designing image recognition systems, with emphasis on the specific problems of detecting and classifying microcalcifications. Our study covers techniques ranging from morphological operators, neural networks and support vector machines to the most recent deep convolutional neural networks, considering the importance of the concepts of scale and hierarchy at the design stage and their implications for the search for the network's architecture of connections and layers. With these theoretical foundations and design elements drawn from the author's other work in this area, three mammogram recognition systems reflecting a technological evolution are implemented, culminating in a system based on Convolutional Neural Networks (CNNs) whose architecture is designed on the basis of the preceding theoretical analysis and the practical results of scale analyses conducted on our image database. All three systems are trained and validated on the DDSM mammography database, with a total of 100 training samples and 100 test samples chosen to avoid bias and faithfully reflect a screening program.
The validity of CNNs for the problem at hand is demonstrated, and a line of research for designing their architecture is proposed.

Relevance: 10.00%

Abstract:

Journalism is one of the main providers of topics for public discussion and opinion formation, but it depends on a technical system for transmission. For more than a hundred years, the information produced by the press was issued, stored, transmitted and received through the so-called mass communication vehicles, which use a centralized network characterized by material scarcity, serial production and massification. This system separates senders and receivers in time and space, creating an unequal power relationship in which large companies controlled the information flow, defining which facts would be reported as news. In 1995, the internet, whose information circulates over distributed-network technology, was appropriated by society, changing the way information is produced, stored and transmitted. The technology raised hopes that this tool could provide more dialogical and democratic communication. Gradually, however, new companies can be seen appropriating the distributed-network technology over which the internet runs, generating a new control of the information flow. This research carried out a bibliographic survey to establish a critical reflection on the different intermediaries between fact and news in both the centralized and the distributed network, aiming to spark a discussion that may offer new ideas for policy, as well as alternatives for more democratic and more libertarian communication.

Relevance: 10.00%

Abstract:

The contemporary consumer, inserted in a new communication environment, amplifies his or her expression, being able to evaluate a brand or product and broadcast an opinion through social networks; that is, consumers express their opinions and desires by talking spontaneously with their peers on online social networks. It is in this environment of participation and interaction (cyberspace) that our object of study lies: online word of mouth, the voice of the contemporary consumer, also known as personal informative expression or conversation, opinion sharing. Driven by consumers on online social networks, word of mouth is strengthened by the possibilities of interaction that characterize the network society. In this scenario, the objective of this research is to characterize online word of mouth as a new communication flow among consumers, now amplified by new communication technologies capable of changing brand perception, and to show that brands still use online social networks as a one-way communication environment. Through three cases selected by convenience (two national and one international), the corpus of our analysis was limited to the 5,084 comments posted after news stories were published on Portal G1 and on the corresponding fanpages (Facebook). Through Content Analysis of the posts, we identified and categorized the speech of the contemporary consumer, making it possible to show that organizations/brands rely on mass-media culture and do not dialogue with their consumers, since they still use online social networks in a one-way fashion and fail to pay due attention to the current flow in which the shared opinion of network-society consumers is evident.

Relevance: 10.00%

Abstract:

In the present work, the electrochemical properties of single-walled carbon nanotube buckypapers (BPs) were examined in terms of carbon nanotube nature and preparation conditions. The performance of the different free-standing single-walled carbon nanotube sheets was evaluated via cyclic voltammetry of several redox probes in aqueous electrolyte. Significant differences are observed in the electron transfer kinetics of the buckypaper-modified electrodes for both the outer- and inner-sphere redox systems. These differences can be ascribed to the nature of the carbon nanotubes (nanotube diameter, chirality and aspect ratio), the degree of surface oxidation and the type of functionalities. In the case of the dopamine, ferrocene/ferrocenium, and quinone/hydroquinone redox systems, the voltammetric response should be thought of as a complex contribution of different tip and sidewall domains, which act as mediators for the electron transfer between the adsorbate species and the molecules in solution. In the other redox systems, only nanotube ends are active sites for electron transfer. It is also interesting to point out that a higher electroactive surface area does not always lead to an improvement in the electron transfer rate of the various redox systems. In addition, the current densities produced by the redox reactions studied here are high enough to ensure a proper electrochemical signal, which enables the use of BPs in sensing devices.
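The electroactive surface area mentioned above is commonly estimated from the peak current of a reversible probe in cyclic voltammetry via the Randles-Sevcik equation. A sketch with illustrative numbers; all values below are hypothetical, not the paper's data:

```python
# Randles-Sevcik at 25 C for a reversible couple:
#   i_p = 2.69e5 * n**1.5 * A * D**0.5 * C * v**0.5
# with i_p in A, A in cm^2, D in cm^2/s, C in mol/cm^3, v in V/s.
def electroactive_area(i_p, n, D, C, v):
    return i_p / (2.69e5 * n**1.5 * D**0.5 * C * v**0.5)

i_p = 1.8e-4      # peak current, A (hypothetical)
n = 1             # electrons transferred (e.g. ferrocene/ferrocenium)
D = 6.7e-6        # diffusion coefficient, cm^2/s (typical order of magnitude)
C = 1.0e-6        # probe concentration, mol/cm^3 (i.e. 1 mM)
v = 0.1           # scan rate, V/s
print(f"A = {electroactive_area(i_p, n, D, C, v):.3f} cm^2")
```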

Relevance: 10.00%

Abstract:

News of the attacks on New York and Washington on September 11th 2001 spread fast, mainly through dramatic images of the events broadcast via global television media, particularly 24-hour news channels such as BBC News 24 and CNN. Following the initial report, many news channels moved to dedicated live coverage of the story. This move, to what Liebes (1998) describes as a 'disaster marathon', entails shifting from the routine, regular news agenda to one where the event and its aftermath become the main story and the reference for all other news. In this paper, we draw upon recordings from the BBC News 24 channel on September 11th 2001, during the immediate aftermath of the attacks on the World Trade Centre and the Pentagon, to argue that the coverage of this event, and of other similar events, may be characterised as news permeated with strategic and emergent silences. Identifying silence as both concrete and metaphorical, we suggest that there are a number of types of silence found in the coverage, and that these not only act to cover for a lack of new news, or to give emphasis or gravitas, but also that the vacuum created by a lack of news creates an emotional space in which collective shock, grieving or wonder are managed through news presented as phatic communion.

Relevance: 10.00%

Abstract:

This book deals with equations of mathematical physics such as the different modifications of the KdV equation, the Camassa-Holm type equations, several modifications of Burgers' equation, the Hunter-Saxton equation, conservation law equations and others. The equations originate from physics but are proposed here for investigation via purely mathematical methods within the framework of university courses. More precisely, we propose classification theorems for the traveling wave solutions of a sufficiently large class of third-order nonlinear PDEs when the corresponding profiles develop different kinds of singularities (cusps, peaks), existence and uniqueness results, etc. The orbital stability of the periodic traveling-wave solutions of mKdV equations is also studied. Of great interest too are the interaction of peakon-type solutions of the Camassa-Holm equation and the solvability of the classical and generalized Cauchy problem for the Hunter-Saxton equation. The Riemann problem for special systems of conservation laws and the corresponding δ-shocks are also considered. As for numerical methods, we apply the CNN (Cellular Neural Network) approach. The book is addressed to a broader audience including graduate students, Ph.D. students, mathematicians, physicists, engineers and specialists in the domain of PDEs.
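The classification of traveling-wave profiles mentioned above rests on a standard reduction; for the KdV equation it reads as follows (a textbook computation, not specific to this book):

```latex
% Traveling-wave ansatz for u_t + 6 u u_x + u_{xxx} = 0:
\begin{align*}
  u(x,t) &= \varphi(\xi), \qquad \xi = x - ct,\\
  -c\varphi' + 6\varphi\varphi' + \varphi''' &= 0
      && \text{(profile ODE)}\\
  \varphi'' &= c\varphi - 3\varphi^2 + a
      && \text{(one integration)}\\
  \tfrac{1}{2}\left(\varphi'\right)^2 &= \tfrac{c}{2}\varphi^2 - \varphi^3 + a\varphi + b
      && \text{(multiply by $\varphi'$, integrate)}
\end{align*}
% The profile type (soliton, periodic wave, cusp, peak) is then read off
% from the root structure of the cubic on the right-hand side.
```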

Relevance: 10.00%

Abstract:

As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally, a CNN is trained with labeled data by a supervised learning method and used as a classifier by adding a classification layer at the end. Its capability for extracting image features is largely limited by the difficulty of setting up a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a so-called convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can be used to train the CNN with unlabeled artificial images, which enables easy expansion of the training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of the artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first convolutional layer of the CNN, and the CNN model is then fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset in which Chinese, English and numeral texts are labeled separately. A detection precision gain of more than 10% is observed over two CNN models.
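The initialization step described above, in which auto-encoder filters become the first convolutional layer's kernels, can be sketched as follows. The "learned" filters here are random placeholders standing in for CSAE-trained weights, and the CSAE training itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, k = 8, 5
csae_filters = rng.standard_normal((n_filters, k, k))   # stand-in for learned weights

def first_conv_layer(image, filters):
    """Valid 2D correlation of each filter with the image: the first CNN
    layer, initialized directly from the auto-encoder's filter bank."""
    h, w = image.shape
    out = np.zeros((len(filters), h - k + 1, w - k + 1))
    for f, kern in enumerate(filters):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(image[i:i + k, j:j + k] * kern)
    return out

patch = rng.standard_normal((32, 32))     # e.g. a scene-text image patch
fmaps = first_conv_layer(patch, csae_filters)
print(fmaps.shape)   # (8, 28, 28): one feature map per CSAE filter
```

Fine-tuning would then adjust these kernels together with the classifier on labeled scene patches.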