957 results for Data-representation
Abstract:
Electrocardiogram (ECG) biometrics are a relatively recent trend in biometric recognition, with at least 13 years of development in the peer-reviewed literature. Most of the proposed biometric techniques perform classification on features extracted either from heartbeats or from ECG-based transformed signals. The best representation is yet to be decided. This paper studies an alternative representation, a dissimilarity space, based on the pairwise dissimilarity between templates and subjects' signals. Additionally, this representation can make use of ECG signals sourced from multiple leads. Configurations of three leads are tested and contrasted with single-lead experiments. Using the same k-NN classifier, the results proved superior to those obtained with a similar algorithm that does not employ a dissimilarity representation. The best authentication EER went as low as 1.53% for a database of 503 subjects. However, the use of extra leads did not prove advantageous.
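The dissimilarity-space idea described above can be sketched in a few lines: each template is re-represented as its vector of dissimilarities to a fixed set of reference templates, and an ordinary k-NN classifier then operates in that space. This is a minimal illustration under stated assumptions, not the paper's implementation; the Euclidean metric, the function names and the toy data are all assumptions of this sketch.

```python
from math import dist  # Euclidean distance between two sequences (Python 3.8+)

def dissimilarity_vector(template, references, metric=dist):
    """Re-represent one ECG template as its vector of dissimilarities
    to a fixed set of reference templates (the prototypes).
    NOTE: the Euclidean metric is a stand-in; the paper only says
    'pairwise dissimilarity' in this abstract."""
    return [metric(template, r) for r in references]

def knn_identify(query, enrolled, references, k=3):
    """Plain k-NN identification carried out in the dissimilarity space.

    enrolled: dict mapping subject id -> list of that subject's templates.
    """
    q = dissimilarity_vector(query, references)
    scored = []
    for subject_id, templates in enrolled.items():
        for t in templates:
            v = dissimilarity_vector(t, references)
            scored.append((dist(q, v), subject_id))  # distance in the new space
    scored.sort(key=lambda s: s[0])
    votes = {}
    for _, sid in scored[:k]:  # majority vote among the k nearest
        votes[sid] = votes.get(sid, 0) + 1
    return max(votes, key=votes.get)
```

In practice, the per-beat dissimilarity measure and the choice of reference templates (e.g. one per enrolled subject, possibly per lead) would drive performance far more than the k-NN step itself.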
Abstract:
Final Master's project for the award of the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Heart failure, already called the epidemic of the 21st century, is the only cardiovascular disease whose incidence and prevalence continue to grow, despite the immense progress made in therapeutics over the last two decades. It is characterised by high mortality (higher than that of all malignant neoplasms combined), great morbidity, heavy resource consumption and exuberant costs. It is one of the most serious Public Health problems of industrialised countries, and its management should be a priority for National Health Services. Nevertheless, universal recognition of the gravity of this situation has produced few concrete solutions to contain the epidemic, whose burden keeps growing. Today it is possible to prevent the disease, to treat it so as to slow its progression, or even to reverse it, provided it is diagnosed in time. Any action in these areas presupposes a correct, early and complete diagnosis, without which there can be no adequate treatment. Diagnosis has concerned researchers and clinicians far less than treatment. It is, however, demonstrably difficult at all levels of health care and certainly constitutes the first barrier to controlling the situation. OBJECTIVES: In the light of current knowledge and our own experience, we set out to analyse the problems of diagnosing heart failure and how they affect the management of the disease and the health of populations.
The objectives of this dissertation were: to evaluate how the evolution of the models of heart failure and ventricular dysfunction has influenced the definition and diagnostic criteria of the disease over time; to assess the consequences of the lack of consensus on the definition and diagnostic criteria at the different stages of the evolution of this entity; to discuss the role of clinical assessment and complementary tests in the diagnosis of the syndrome and in screening strategies for cardiac dysfunction; and to point out some paths and possible methodologies for managing the disease so that, in the future, we can diagnose better in order to prevent, treat and contain the epidemic. METHODS: The methodology used in this work stems directly from daily clinical care and from the clinical research generated by our interest in the problems we encountered over the years in the field of heart failure. Alongside the epidemiological study of heart failure in Portugal, we developed an original protocol to evaluate the quality of diagnosis in the ambulatory setting and the role of clinical assessment and of the different complementary tests in the diagnosis of the syndrome. We evaluated the problems of diagnosing heart failure in hospital through a survey addressed to Heads of Department by the Working Group on Heart Failure of the Portuguese Society of Cardiology. We analysed the quality of the heart failure diagnosis coded at hospital discharge. After the creation of a specific unit dedicated to the admission of patients with heart failure, we evaluated its impact on the diagnosis and treatment of the syndrome. We also tested the performance of natriuretic peptides in the diagnosis of the different types of symptomatic heart failure in the hospital setting. Partial results of this clinical research were progressively communicated to the scientific community and published in specialty journals.
In this dissertation we discuss the published and forthcoming papers in the light of the current state of the art in diagnosis. We reflect on the consequences of the difficulties in diagnosing heart failure and point out possible ways to implement screening. RESULTS: In 1982, at the very beginning of our clinical activity, aware of the complexity of heart failure and of the challenge its management posed to clinicians, we were engaged in developing an original pathophysiological classification of heart failure, which was the subject of Professor Fátima Ceia's Doctoral Thesis in 1989. […] the disease systematically, improve the care provided to patients and reduce the costs involved in managing the syndrome. In paper 1 – Heart failure: new pathophysiological concepts and therapeutic implications – published in 1984, we described, in the light of the knowledge of the time, heart failure as a systemic disease resulting from the interaction between the multiple mechanisms compensating for cardiac dysfunction. We developed an original "pathophysiological classification with therapeutic implications", in which we delineated the different types of heart failure and their main clinical, haemodynamic, functional and anatomical features, and proposed individualised treatment according to the definition and diagnosis of each type of heart failure. In 1994, in paper 2 – Heart failure and the physician at the end of the twentieth century – we highlighted how the different compensatory mechanisms interact, influence the evolution of the disease over time, produce different syndromes and underpin therapeutic action. We discussed the evolution of the definition of the disease in step with improved knowledge of its pathophysiology and aetiopathogenesis. We underlined the need to develop strategies for prevention, early diagnosis and timely treatment of the disease.
Still in the first chapter – Heart failure: from pathophysiology to clinics, a model in constant evolution – we revisited the successive pathophysiological models of heart failure (cardio-renal, haemodynamic, neuro-hormonal and immuno-inflammatory) and their influence on the definition of the syndrome and on the diagnostic criteria. We analysed the evolution of the concept of cardiac dysfunction, in which the dichotomy between heart failure due to systolic dysfunction and heart failure with normal systolic function is countered by the theory of a continuum in the evolution of the disease. The latter, more recent view holds that these two presentations are merely different, extreme phenotypes of one and the same disease, giving rise to various scenarios, from heart failure with normal ejection fraction to severe ventricular systolic dysfunction. In chapter II – The diagnosis of heart failure: problems and foreseeable consequences – we analyse the consequences of the lack of consensual diagnostic criteria for heart failure across its whole spectrum, over time. Diagnostic difficulties are reflected in the results of epidemiological studies. We experienced this difficulty when we had to define diagnostic criteria feasible in the ambulatory setting, covering all types of heart failure and in accordance with the Guidelines, for the EPICA programme (EPidemiologia da Insuficiência Cardíaca e Aprendizagem), designed for Primary Health Care. In paper 3 – Epidemiology of heart failure – we discuss the consequences of the large epidemiological studies having adopted, over the years, very variable definitions and diagnostic criteria, sometimes leading to equally divergent values for the prevalence and incidence of the disease.
The problem becomes more acute for heart failure with normal ejection fraction or with diastolic dysfunction, or for screening for asymptomatic cardiac dysfunction, situations for which it has been extraordinarily difficult to reach consensus on diagnostic criteria and strategies. The absence of large therapeutic trials in heart failure with normal ejection fraction or with diastolic dysfunction is notorious; in the absence of evidence-based therapeutic Guidelines, we continue to treat it according to pathophysiology. Thus, discrepancies probably related more to the diagnostic criteria used than to real differences between populations hinder our understanding of the true burden of heart failure and of asymptomatic ventricular dysfunction. They will certainly also compromise the correct allocation of resources to needs which, in reality, we know poorly. Paper 4 – Prévalence de l'insuffisance cardiaque au Portugal – presents the design of the EPICA and EPICA-RAM studies. EPICA was one of the first studies to evaluate the prevalence of global symptomatic heart failure in the community according to the criteria of the European Society of Cardiology. We defined echocardiographic criteria of cardiac dysfunction for all types of heart failure, namely for cases with normal ejection fraction, at a time when no consensual Guidelines were yet available in the literature. In paper 5 – Prevalence of chronic heart failure in Southwestern Europe: the EPICA study – we report the prevalence of heart failure in mainland Portugal […] over-diagnosed in 8.3%. Hospital coding missed a significant percentage of patients with heart failure, thereby minimising the burden of the syndrome, with possible repercussions on the allocation of the resources needed to manage it in hospital and for the indispensable interface with Primary Health Care.
In paper 8 – Treatment of heart failure in Portuguese hospitals: results of a survey – all respondents reported difficulties in the timely diagnosis of heart failure. Heads of Cardiology Departments call for more dedicated human resources and specialised hospital structures for the diagnosis and treatment of the syndrome, while Heads of Internal Medicine Departments need easier access to complementary diagnostic methods such as echocardiography and greater support from the Cardiologist. Difficulties in diagnosing heart failure, at all levels of care, thus carry harmful epidemiological, socio-economic and financial consequences for the individual patient, for the planning of the National Health Service and for Public Health. In chapter III we recall the importance of a complete diagnosis of heart failure which, beyond the syndromic and anatomo-functional diagnosis, should include the aetiological diagnosis and that of comorbidities. Many of these aspects may compromise the interpretation of complementary diagnostic tests and, not infrequently, the indications for the drugs that influence patient survival, the therapeutic strategy and the prognosis of the syndrome. Aware of the difficulties in diagnosing heart failure in Primary Health Care and of the leading role of Family Medicine specialists in containing the epidemic, we set out, as secondary objectives of the EPICA study (paper 5), to investigate the diagnostic accuracy of the instruments available to those colleagues in daily clinical practice: clinical assessment and first-line complementary tests. Paper 10 – The diagnosis of heart failure in primary care: value of symptoms and signs – documents the limited value of signs, symptoms and past-history data, when used in isolation, in diagnosing the syndrome. All have low sensitivity for the diagnosis.
Greater predictive value is found in those associated with the more severe, congestive situations: paroxysmal nocturnal dyspnoea (LR 35.5), orthopnoea (LR 39.1), breathlessness when walking on the flat (LR 25.8), and jugular distension > 6 cm with hepatomegaly and lower-limb oedema (LR 130.3), which are rarely present in the ambulatory heart failure population (sensitivity < 10%). A ventricular gallop (LR 30.0), tachycardia > 110 bpm (LR 26.7) and crepitant rales (LR 23.3) are also associated with the diagnosis, but are likewise infrequent in the population studied (sensitivity < 36%). Prior treatment with digitalis (LR 24.9) and/or a diuretic (LR 10.6), and a previous history of acute pulmonary oedema (LR 54.2) or of coronary artery disease (LR 7.1), are also predictors of the diagnosis. In paper 11 – Aetiology, comorbidity and drug therapy of chronic heart failure in the real world: the EPICA substudy – we confirmed that, among risk and/or aetiological factors, hypertension is the most frequent cause of heart failure in the ambulatory setting in Portugal (80%). Thirty-nine percent of EPICA patients have a history of coronary disease and 15% of atrial fibrillation. We quantified comorbidity and analysed its potential influence on the fact that prescribing falls short of international Guidelines in Portugal, as indeed throughout Europe. In paper 12 – The value of electrocardiogram and X-ray for confirming or refuting a suspected diagnosis of heart failure in the community – we showed that ECG and chest X-ray data do not allow the diagnosis of heart failure to be predicted in the community; 25% of patients with objective heart failure had a normal ECG or chest X-ray.
In paper 13 – Evaluation of the performance and concordance of clinical questionnaires for heart failure in primary care – we compared seven questionnaires and scoring systems commonly used in large studies for the diagnosis of heart failure. Most showed reasonable or good concordance with one another. They were highly specific (> 90%) but poorly sensitive. They raised the probability of the diagnosis from 4.3% pre-test to 25-30% post-test. They proved to be a better instrument for excluding a cardiac cause of symptoms than for diagnosing the syndrome. Paper 14 – Epidemiology of heart failure in mainland Portugal: new data from the EPICA study – compares the characteristics of patients with an unconfirmed clinical suspicion of heart failure (false positives) with the confirmed cases of heart failure. The former are older, more often women, more often overweight, and have less history of coronary artery disease. It also confirms that clinical assessment, the ECG and the chest X-ray cannot differentiate patients with heart failure due to ventricular systolic dysfunction from those with a normal ejection fraction. Faced with the challenge of diagnosing heart failure with normal ejection fraction, the difficulties of access to echocardiography in the community and the added costs of the examination, in paper 15 – The diagnostic challenge of heart failure with preserved systolic function in primary care setting: an EPICA-RAM sub-study – we set out to assess the performance of BNP in screening patients with a clinical suspicion of the diagnosis, to be referred for echocardiography. We tested the performance of the test as a predictor of the clinical diagnosis of heart failure with preserved systolic function, as well as of the echocardiographic indicators of diastolic dysfunction used in the study: left atrial dilatation and left ventricular hypertrophy.
The test was a good predictor only of left atrial dilatation, but not of the clinical diagnosis of this type of heart failure, nor of the presence of left ventricular hypertrophy diagnosed by echocardiography (area under the ROC curve: 0.89, 0.56 and 0.54, respectively). We concluded that, on its own, it will not be a good method for screening for the disease in the community, nor can it replace the echocardiogram in patients with a clinical suspicion of the diagnosis, at least in the early, mildly symptomatic phases of the disease. We studied and compared the performance of type-B natriuretic peptides (BNP and NT-proBNP) in the diagnosis of symptomatic heart failure, due to systolic dysfunction and with preserved ejection fraction, in hospitalised patients. We evaluated patients and normal volunteers in order to establish the cut-offs for our laboratory. We reported the results of this work in paper 16 – Comparative value of BNP and NT-proBNP in the diagnosis of heart failure. Both tests performed excellently in the diagnosis of symptomatic heart failure in the hospital setting, but neither was able to differentiate heart failure with ventricular systolic dysfunction from heart failure with normal ejection fraction. We reviewed, in the light of current knowledge, the performance of the different complementary tests, namely natriuretic peptides and echocardiography, in the diagnosis of global symptomatic heart failure, heart failure due to ventricular systolic dysfunction and heart failure with normal ejection fraction, and discussed the most recently proposed criteria and the latest international Guidelines. We discussed the strategies proposed for screening for asymptomatic ventricular dysfunction which, in the community, is at least as frequent as the symptomatic form.
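The areas under the ROC curve quoted above (0.89, 0.56 and 0.54) summarise how well BNP separated cases from non-cases for each target; 0.5 means no discrimination and 1.0 perfect separation. The AUC can be computed directly as a rank statistic, as in this small sketch (an illustrative implementation, not the study's software; the function name and data are assumptions):

```python
def roc_auc(scores_positive, scores_negative):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (the Mann-Whitney formulation); ties count as half."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))
```

With hypothetical BNP values, roc_auc(bnp_in_cases, bnp_in_controls) close to 0.5, as for the clinical diagnosis here, means the test ranks cases no higher than controls and is useless as a screen on its own.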
There is evidence that treating asymptomatic ventricular systolic dysfunction early translates into real prognostic benefits and, as with symptomatic systolic dysfunction, is cost-effective. Using the gold-standard method to screen the population for cardiac dysfunction would require an echocardiogram in every individual, which is technically and economically unfeasible. Several studies have been testing alternative strategies, in search of a methodology that is itself cost-effective. Authors are unanimous that no test, when evaluated in isolation, has been useful for screening for cardiac dysfunction. They point, however, to the ECG and/or the natriuretic peptides, whether or not integrated into clinical scoring schemes, as useful pre-screening tests before echocardiography. These reduce echocardiogram requests and screening costs, making screening as cost-effective as that for breast or cervical cancer. Some authors further advocate qualitative assessment of cardiac dysfunction by portable echocardiogram, in the context of an abnormal ECG or an elevated natriuretic peptide, before referral for a full echocardiogram, and point to this strategy as the most cost-effective for cardiac dysfunction screening. Finally, we offer some closing remarks on future perspectives for the management of heart failure. It is urgent to establish a precise, universal definition of the syndrome and consensual diagnostic criteria that are clear, objective, simple and reproducible for the whole spectrum of heart failure, so that in the near future we can correctly assess the extent of the problem, organise efficient medical care accessible to all and improve patient prognosis, within an indispensable and inevitable policy of cost containment.
Given the problems of diagnosing the syndrome in the ambulatory setting, we consider it necessary to implement continuing education programmes and to facilitate dialogue and collaboration between Primary Health Care and specialised heart failure units, along the lines of what we did on occasion during the EPICA programme and of what is being developed in several European countries and in the United States of America, in the form of extended care networks for heart failure. Heart failure clinics, operating mainly in the hospital setting, have already proved their worth in bringing diagnosis (and treatment) into greater conformity with the Guidelines, as well as in improving patients' quality of life and survival. In paper 17 – Implementing the Guidelines in clinical practice: benefits of an Acute Heart Failure Unit – we report our experience of improved quality of care, in the areas of diagnosis and treatment, in a functional unit dedicated to the admission of patients with acute heart failure. We argue that these specific inpatient areas should articulate with others, namely heart failure day hospitals, and may or even should differ in structure and resources according to the needs of the populations they serve. They have a decisive role to play in interacting with Primary Health Care, in the continuing education of physicians and other health professionals, and in receiving and guiding patients referred to the specialty. Redoubled efforts are also needed to identify and control risk factors and to establish strategies for screening for ventricular dysfunction in the community.
This can be done and is cost-effective, but it requires the collaboration of health professionals, researchers and political authorities to assess real needs and to implement these strategies and monitor their quality, without which we will not succeed in containing the epidemic. SUMMARY: Although there has been substantial progress in the treatment of heart failure over the last several decades, it is the only cardiovascular disorder that continues to increase in both prevalence and incidence. Characterised by very poor survival and quality of life, heart failure accounts for among the highest healthcare costs for single conditions in developed countries. Heart failure is therefore becoming an increasing concern for healthcare worldwide and must be a priority for National Health Services. It is already called the epidemic of the 21st century. A correct diagnosis is the cornerstone of effective management of the syndrome. An early, accurate and complete diagnosis has become crucial with the identification of therapies that can delay or reverse disease progression and improve both morbidity and mortality. Diagnostic methods may need to encompass screening strategies, as well as symptomatic case identification. Until now, research has been over-focused on pharmacological treatment; relatively little work has been done on assessing diagnostic tools. Heart failure is in fact a difficult condition to diagnose at all levels of care, and misdiagnosis must be the first barrier to the control of the epidemic. AIMS: Considering current, updated knowledge and our own experience, we analyse the problems in diagnosing heart failure and cardiac dysfunction and how they affect patients' clinical outcomes and public health care.
It was our aim to analyse how increasing knowledge about cardiac dysfunction influenced the concept of heart failure, its definition and diagnostic criteria; the problems resulting from the use of non-consensual definitions and diagnostic criteria; and the role of clinical data and diagnostic tests in the diagnosis of the syndrome and in screening for cardiac dysfunction in the community; and to discuss the best strategies to enhance the diagnostic management of heart failure across its whole spectrum, in order to halt the epidemic in the near future. METHODS: The investigation on which the present dissertation is based was developed progressively, over the years, during our everyday clinical practice. Various original clinical investigations and review papers, related to challenges in heart failure management and especially to diagnosis, were presented at scientific meetings and/or published gradually as partial results were obtained. The EPICA Programme (epidemiology of heart failure and awareness), a large-scale epidemiological study of heart failure in Portugal, addressed as secondary endpoints the problems of heart failure misdiagnosis in primary care and the value of clinical assessment and of different diagnostic tests to confirm or refute a diagnosis of the syndrome suspected on clinical grounds. But problems in the diagnosis of heart failure are not confined to primary care. Therefore, under the auspices of the Working Group on Heart Failure of the Portuguese Society of Cardiology, a survey on the management of heart failure in hospital was addressed to the heads of Portuguese Cardiology and Internal Medicine wards. Compliance with the Guidelines on the diagnosis and treatment of heart failure, perceived difficulties and requests for better management of the syndrome were ascertained. We then explored the validity of a coded diagnosis of heart failure at death/discharge from the Department of Medicine of S. Francisco Xavier Hospital, and the rate of misdiagnosis.
Gains in compliance with the Guidelines on the diagnosis and treatment of heart failure, before and after the implementation of an acute heart failure unit in this Department, were assessed. We also compared the performance of type-B natriuretic peptides (BNP and NT-proBNP) in the diagnosis of systolic and diastolic heart failure, in order to implement the more suitable test. In this thesis we discuss our published papers against the state of the art in heart failure diagnosis and the actual consequences of misdiagnosis. We revisit the accuracy of the different diagnostic tests for a definite diagnosis of the disease. Finally, we analyse the different ways of screening for cardiac dysfunction and the more cost-efficient strategies to enhance heart failure diagnosis and management. RESULTS: Since 1982, at the very beginning of our clinical activity, already aware of the complexity of the management of heart failure, we were involved in the development of an original pathophysiological classification of heart failure, the theme of Professor Fátima Ceia's Doctoral Thesis, defended in 1989. Paper 1 – Heart failure. New pathophysiological approach to therapy – published in 1984, described heart failure as a systemic disease resulting from the interaction of the different compensatory mechanisms. We proposed a new dynamic, pathophysiological and aetiological approach to the diagnosis of the heart failure syndromes, based on clinical assessment and conventional non-invasive evaluation, with implications for drug management. In 1994, in paper 2 – Heart failure and the physician: towards the XXI century – we discussed the way the compensatory mechanisms interact, produce the different heart failure syndromes and affect the evolution of the disease. Changing definitions, reflecting the knowledge of the pathophysiology of heart failure at the time, were revisited.
The need for a universally accepted definition, leading to early and accurate diagnosis and treatment of the syndrome, was pointed out. We called for strategies to prevent heart failure. In an updated review titled Heart failure: from pathophysiology to clinics – a model in constant evolution – we revisit the changing pathophysiological models of heart failure (cardio-renal, haemodynamic, neuro-hormonal and immuno-inflammatory) and their influence on the definition of the syndrome. The traditional dichotomisation of heart failure into systolic and diastolic dysfunction is discussed. Rather than being considered separate diseases with a distinct pathophysiology, systolic and diastolic heart failure may be merely different clinical presentations within the phenotypic spectrum of one and the same disease. The implications for the definition and diagnosis of heart failure are self-evident. In chapter II – The diagnosis of heart failure: problems and foreseeable consequences – we analyse the epidemiological, clinical and financial consequences of non-consensual definitions and diagnostic criteria of heart failure for individual patients, Healthcare Systems and Public Health. The problems resulting from the absence of a universally accepted definition of heart failure are clearly illustrated by current epidemiological data and were revisited in paper 3 – Epidemiology of heart failure. Across the various epidemiological studies, the measured prevalence and incidence of the syndrome diverge significantly. This worrying variation is certainly due more to the different definitions and diagnostic criteria used than to true differences between populations. We faced these difficulties when we had to design the EPICA programme, a large population-based study in which we had to define simple, effective and easy-to-obtain diagnostic criteria of heart failure, for the whole spectrum of the disease, in the primary care setting.
The problem grew when we focused on heart failure with normal ejection fraction, where diagnostic criteria were far from consensual. Hence, large trials on heart failure with normal ejection fraction and consensual evidence-based Guidelines on the diagnosis and treatment of diastolic heart failure are still missing. Paper 4 – Prevalence of heart failure in Portugal – presents the design of the EPICA Programme. The EPICA study was one of the first large epidemiological studies addressing the prevalence of global heart failure, in the community, according to the European Guidelines for the diagnosis of the syndrome. We had to define simple, precise echocardiographic criteria to confirm a diagnosis of heart failure suspected on clinical grounds, across its whole spectrum. At that time, Guidelines for heart failure with normal ejection fraction were far from consensual and not applicable to the ambulatory setting. In paper 5 – Prevalence of heart failure in Southwestern Europe: the EPICA study – we reported the prevalence of heart failure in mainland Portugal. Of 5434 attenders at primary care centres, representative of the Portuguese population above 25 years, 551 had heart failure, giving a prevalence of global heart failure of 4.35%, increasing sharply with age in both genders; 1.36% had systolic dysfunction and 1.7% a normal ejection fraction. In paper 6 – Epidemiology of heart failure in primary care in Madeira: the EPICA-RAM study – we report an overall prevalence of heart failure of 4.69%, with systolic dysfunction in 0.76% and a normal ejection fraction in 2.74% of cases. Discrepancies in the prevalence of the different types of heart failure between the mainland and Madeira are probably related to different Public Health Care organisation.
Both studies showed that only half of the patients with a diagnosis of heart failure suspected on clinical grounds had the diagnosis confirmed by objective evidence of cardiac dysfunction. It is therefore probable that unnecessary drugs were prescribed to patients who did not need them, while others, who would have benefited, were not correctly treated for heart failure. Paper 7 – Diagnosis of heart failure in primary care – is a review of the state of the art of heart failure diagnosis in the primary care setting. It focused on the main challenges faced by primary care physicians, namely difficulties in access to imaging and strategies for screening for cardiac dysfunction. General practitioners' awareness of, and training in, the diagnosis and treatment of the syndrome are crucial to halting the epidemic. But problems in the diagnosis of heart failure are not exclusive to primary care. Heart failure is the leading cause of hospitalisation in patients above 65 years in medical wards, and hospitalisation accounts for more than 70% of the costs of the syndrome. In paper 9 – Validity of a diagnosis of heart failure: implications of misdiagnosing – we reported a prevalence of heart failure of 17% among patients hospitalised in our Medicine Department during a six-month period. The diagnosis was in fact sub-coded at death/discharge. The accuracy of the death/discharge coded diagnosis was 72.2%; the syndrome was under-diagnosed in 21.1% of cases and over-diagnosed in 8.3%. The discharge codes missed a significant percentage of heart failure cases, biased the apparent burden of the syndrome and compromised the allocation of resources to manage in-hospital heart failure and to develop specialised programmes of interaction with primary care. In paper 8 – Treatment of heart failure in Portuguese hospitals: results of a questionnaire – all respondents reported difficulties in the management of heart failure.
Heads of Cardiology Wards needed more specialised physicians and nurses, as well as specific heart failure units for the management of the syndrome, while Heads of Internal Medicine Wards demanded more facilities, easier access to echocardiography and support from cardiologists specialised in heart failure. Difficulties in the diagnosis of heart failure at all levels of care have huge epidemiological, clinical and economic consequences for the individual patient, National Health Services and Public Health. In chapter III, we revisit the relevance of a complete diagnosis of heart failure. An appraisal based on symptoms alone is clearly an incomplete and inaccurate representation of the severity of cardiovascular disease. Determination of cardiac status requires evaluation of composite aetiologic, anatomic and physiologic diagnoses. Functional class and comorbidities must complement the diagnosis, leading to more appropriate and individualized treatment. Aware of the uncertainty of the diagnosis of heart failure in the primary care setting and of the role of General Practitioners in the management of the syndrome, we evaluated, in pre-specified substudies of the EPICA programme, the accuracy of the clinical findings and tests available for the diagnosis of heart failure in the community. Paper 10 – The diagnosis of heart failure in primary care: value of symptoms and signs – confirmed that symptoms, signs and clinical history have limited value in diagnosing heart failure when used alone. The signs and symptoms that best predicted a diagnosis of heart failure were those associated with more severe disease. Among current symptoms, a history of paroxysmal nocturnal dyspnoea (LR 35.5), orthopnoea (LR 39.1) and dyspnoea when walking on the flat (LR 25.8) were associated with a diagnosis of heart failure. However, these symptoms were infrequent in this population (sensitivity < 36%).
Jugular pressure > 6 cm with hepatic enlargement and oedema of the lower limbs (LR 130.3), a ventricular gallop (LR 30.0), a heart rate above 110 bpm (LR 26.7) and rales (LR 23.3) were all associated with a diagnosis of heart failure but were infrequent findings (sensitivity < 10%). Prior use of digoxin (LR 24.9) and/or diuretics (LR 10.6), a history of coronary artery disease (LR 7.1) or of pulmonary oedema (LR 54.2) were also associated with a greater likelihood of having heart failure. In paper 11 – Aetiology, comorbidity and drug therapy of chronic heart failure in the real world: the EPICA substudy – aetiological features and therapy-relevant comorbidities were analysed. Hypertension was the most frequent risk factor/aetiology of heart failure in the community in Portugal (about 80%). Thirty-nine percent had a history of coronary artery disease, and 15% had atrial fibrillation. In paper 12 – The value of electrocardiogram and X-ray for confirming or refuting a suspected diagnosis of heart failure in the community – we reported that ECG and X-ray features are not sufficient to reliably predict heart failure in the community. Twenty-five percent of patients with heart failure had a normal ECG or chest X-ray. In paper 13 – Evaluation of the performance and concordance of clinical questionnaires for heart failure in primary care – we compared the accuracy of seven clinical questionnaires and scores for the diagnosis of heart failure in the community, and their concordance. Concordance was good between most of the questionnaires. Their low sensitivity impairs their usefulness as diagnostic instruments, but their high specificity (>90%) makes them useful for identifying patients whose symptoms and signs have a non-cardiac cause.
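The likelihood ratios quoted in paper 10 act on the pre-test odds of disease; the odds form of Bayes' rule turns them into post-test probabilities. A minimal sketch in Python (the function name is ours; the 4.35% pre-test probability is the EPICA community prevalence reported above):

```python
def post_test_probability(pretest_p, lr):
    """Convert a pre-test probability and a likelihood ratio into a
    post-test probability via the odds form of Bayes' rule."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Likelihood ratios quoted in paper 10 (a selection)
findings = {
    "paroxysmal nocturnal dyspnoea": 35.5,
    "orthopnoea": 39.1,
    "ventricular gallop": 30.0,
    "prior pulmonary oedema": 54.2,
}

pretest = 0.0435  # EPICA community prevalence of heart failure
for name, lr in findings.items():
    print(f"{name}: {post_test_probability(pretest, lr):.2f}")
```

With a community pre-test probability of about 4%, even a single strong finding such as orthopnoea (LR 39.1) raises the probability of heart failure to above 60%, which illustrates why these signs, though insensitive, are valuable when present.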
In paper 14 – Epidemiology of heart failure in mainland Portugal: new data from the EPICA study – the characteristics of patients with a definite diagnosis of heart failure were compared with those of patients in whom a diagnosis of heart failure suspected on clinical grounds was excluded (false positives). The latter were older, more frequently women, more often overweight, and a history of coronary artery disease was less frequent. Clinical findings, ECG and chest X-ray could not distinguish patients with heart failure due to systolic dysfunction from those with a normal ejection fraction. Considering the limited and costly access to echocardiography in the community, we address in paper 15 – The diagnostic challenge of heart failure with preserved systolic function in primary care: an EPICA-RAM substudy – the performance of BNP as a predictor of a diagnosis of heart failure with preserved systolic function according to the ESC Guidelines, of left ventricular hypertrophy and of a dilated left atrium on echocardiography. BNP was a good predictor of a dilated left atrium, but not of the diagnosis of heart failure with preserved systolic function or of left ventricular hypertrophy (AUC: 0.89, 0.56 and 0.54, respectively). We conclude that BNP measurement alone was not a suitable screening test for heart failure with normal ejection fraction in the community, at least in patients with no or mild symptoms. In paper 16 – Comparative value of BNP and NT-proBNP in the diagnosis of heart failure – we first established normal values and cut-offs for our laboratory. We then assessed the diagnostic accuracy of both peptides for the in-hospital diagnosis of heart failure due to systolic dysfunction and with normal ejection fraction. BNP and NT-proBNP had an excellent and similar accuracy for the diagnosis of both types of symptomatic heart failure, but neither could distinguish patients with systolic heart failure from those with a normal ejection fraction.
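The AUC values quoted for BNP (0.89 for a dilated left atrium, 0.56 and 0.54 for the other endpoints) can be read as the probability that a randomly chosen case scores higher than a randomly chosen non-case. A small illustrative Python sketch of this rank-based (Mann-Whitney) reading of the AUC, with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (case, non-case) pairs in which the case scores
    higher than the non-case, with ties counting one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical BNP values (pg/mL) for cases vs non-cases
print(auc([180, 240, 95], [40, 60, 110]))  # closer to 1: good discrimination
```

An AUC near 0.5, as reported here for heart failure with preserved systolic function, means the test ranks cases no better than chance, which is the basis of the paper's conclusion that BNP alone is unsuitable as a screening test for that endpoint.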
We revisited the role of the various tests in the diagnosis of heart failure with systolic dysfunction and with normal ejection fraction, and discussed the more recent International Guidelines. There is substantial evidence that early treatment of asymptomatic left ventricular systolic dysfunction is cost-effective. Therefore, several screening strategies have been investigated. ECG and B-type natriuretic peptide measurements, alone or as part of clinical scores, allowed cost-effective community-based screening for left ventricular systolic dysfunction, especially in high-risk subjects. A programme including hand-held echocardiography, following NT-proBNP or ECG pre-screening prior to traditional echocardiography, was the most cost-effective. Screening strategies for left ventricular dysfunction proved no more costly than existing screening programmes such as those for cervical or breast cancer. Conversely, as far as we know, there is no proven strategy to efficiently screen for diastolic dysfunction in the community. Finally, we discuss perspectives for heart failure management in the near future. Simple, reliable and consensual diagnostic procedures are crucial to evaluate the actual burden of the disease, to comply with Guidelines and to reduce healthcare utilisation and costs. As the management of the syndrome in primary care has been hampered by perceived difficulties in diagnosis, improving diagnostic skills is essential and remains a continuous challenge for primary care clinicians. Moreover, patients may require further investigations and treatments that may not be available or familiar to General Practitioners. Shared care is therefore necessary. Disease management programmes, when available and accessible, are the preferred choice to address this issue.
This multidisciplinary model of care, delivered in specialized heart failure clinics, heart failure day hospitals and many other heart failure care structures, has been shown to improve quality of life and to reduce morbidity, mortality and costs. In paper 17 – Translating Guidelines into clinical practice: benefits of an acute heart failure unit – we report better compliance with the Guidelines on the diagnosis and treatment of heart failure after the implementation of a specialized heart failure unit in our Internal Medicine Department. We advocate the implementation of heart failure management programme networks to provide optimal care for both patients and health care providers. They may consist of different structures to better address the needs of the referred patient, the referring physician and the regional health care system, and should play a crucial role in the transition between primary and secondary care. Managing heart failure requires resources across the entire spectrum of care. Strategies to prevent heart failure include both primary and secondary prevention, and should encompass risk factor control and screening strategies for cardiac dysfunction in the community. Screening for high-risk patients and, at least, for patients with asymptomatic systolic dysfunction is cost-effective. Improving heart failure outcomes and halting the epidemic will therefore require shared efforts from investigators, clinicians and politicians. Health care strategies with adequate funding are imperative for successful heart failure management. SUMMARY: Heart failure, already called the epidemic of the 21st century, is a Public Health problem throughout Europe. Despite the immense progress made in treatment over the last two decades, heart failure is, among cardiovascular diseases, the only one whose incidence and prevalence keep rising.
Its main features are a very high mortality, greater than that of all cancers combined, and a considerable economic impact on Health Systems. The management of heart failure patients must therefore be regarded as an absolute priority. Yet, although the severity of the situation is universally acknowledged, Governments and Health Systems have taken very few concrete measures to curb an epidemic that keeps growing. Today we can prevent heart failure and, if not cure it, at least treat it so as to slow the progression of the disease, provided we are able to make the diagnosis in time. Any therapeutic decision presupposes an early and complete diagnosis, without which no correct action can be taken. OBJECTIVES: We set out to analyse the problems of diagnosing heart failure in the light of current knowledge and of our own experience. Among the aims of this work, we assessed how the evolution of the concepts of heart failure and cardiac dysfunction has influenced the definition and the diagnostic criteria over time, and the consequences of the lack of consensus on definition and diagnostic criteria across the different stages of the disease. We discussed the role of symptoms, signs and complementary tests in the diagnosis of heart failure and in screening strategies for cardiac dysfunction. Finally, we discussed possible paths and strategies for the management of these patients so that, in the near future, we may treat them better but also prevent the disease more effectively, in order to curb the epidemic.
METHODOLOGY: The methodology used in this work derives directly from the experience gained in patient care, and from research driven by the difficulties perceived over the years in diagnosing heart failure. When designing the EPICA study, born from the need for national epidemiological data on heart failure in Portugal, we devised an original research protocol that allowed us to evaluate the quality of the diagnosis of heart failure made by family physicians, as well as the role of symptoms, signs, clinical history data, the electrocardiogram and the chest X-ray in the diagnosis of heart failure in the ambulatory setting. We also investigated the quality of the diagnosis established during hospitalization. We determined the true prevalence of heart failure hospitalized in our department over six months, and the prevalence coded at hospital discharge. We further compared the quality of the diagnosis before and after the opening of a heart failure unit, and the performance of the different natriuretic peptides in the diagnosis of the syndrome. Through a questionnaire on the management of heart failure, sent to them by the Working Group on Heart Failure of the Portuguese Society of Cardiology, the Heads of Cardiology and Internal Medicine Departments across the country reported their difficulties regarding the diagnosis and treatment of heart failure. The results of these partial investigations were communicated to the scientific community and published in specialty journals over recent years.
This dissertation consists of the published and in-press papers, to which we added a review of the current state of the art of heart failure diagnosis, together with a reflection on the consequences of the difficulties encountered in diagnosing the disease and on how to improve the management of heart failure. RESULTS: In 1982, at the beginning of our activity, we soon perceived the complexity of heart failure and the challenge that the management of these patients posed to clinicians. We took part in the development of an original pathophysiological classification that served as the basis of Professor Fátima Ceia's doctoral thesis in 1989. In paper 1 – Heart failure: new pathophysiological concepts and their therapeutic applications – published in 1984, we already described heart failure as a systemic disease resulting from the interaction of the different compensatory mechanisms of cardiac dysfunction. We proposed an original "pathophysiological classification with therapeutic application", in which we defined the different types of heart failure and their clinical, haemodynamic, functional and anatomical characteristics, and proposed individualized treatment according to the definition and diagnosis of each of these types of heart failure. In 1994, paper 2 – Heart failure and the clinician at the end of the 20th century – described in detail how the different compensatory mechanisms interact, influence the evolution of the disease, produce the different syndromes and justify the choice of treatment. We discussed the evolution of the definition of the disease in line with the progress of research and a better understanding of the pathophysiology of cardiac dysfunction.
We stressed the need for early diagnosis and treatment, and the urgency of developing strategies capable of preventing the disease. Investigators also argue for a continuum between heart failure with normal ejection fraction and heart failure with ventricular systolic dysfunction. This concept supports the existence of several heart failure syndromes that would represent nothing more than different phenotypes of the same disease. New Recommendations for the diagnosis and exclusion of heart failure with normal ejection fraction / diastolic dysfunction have emerged. We revisit these new concepts in the chapter: Heart failure: from pathophysiology to the clinic – a model in constant evolution. In chapter II – The diagnosis of heart failure: problems and foreseeable consequences – we analyse the consequences of the lack of consensual diagnostic criteria for heart failure across its whole spectrum. Diagnostic difficulties are reflected in the results of the large epidemiological studies. We felt this difficulty when, while designing the EPICA programme – Epidemiology of Heart Failure and Learning – we sought to define diagnostic criteria for all types of heart failure, applicable in the ambulatory setting and in accordance with the International Recommendations. Paper 3 – Epidemiology of heart failure – analyses the consequences of the different definitions and diagnostic criteria used in the large epidemiological studies that, over the years, have reported highly variable prevalences and incidences of heart failure.
This problem worsens further for the epidemiology of heart failure with normal ejection fraction or diastolic dysfunction, and for strategies to screen for asymptomatic cardiac dysfunction, situations whose definitions and criteria are even less consensual. The lack of evidence-based Recommendations for the treatment of heart failure with normal ejection fraction or diastolic dysfunction is another consequence of these difficulties. Thus, differences in methodology, definitions and diagnostic criteria, rather than real differences between populations, hamper our knowledge of the true burden that heart failure and cardiac dysfunction impose on the National Health System. It is therefore difficult to plan the resources to be allocated to a situation that remains poorly known. Paper 4 – Prevalence of heart failure in Portugal – presents the design of the EPICA and EPICA-RAM studies. EPICA was one of the first studies to evaluate the prevalence of global symptomatic heart failure in the ambulatory setting, following the Recommendations of the European Society of Cardiology for the diagnosis of heart failure. We defined precise echocardiographic criteria for all types of heart failure, notably heart failure with normal ejection fraction, at a time when there were still no consensual Recommendations for the diagnosis of this condition. Paper 5 – Prevalence of chronic heart failure in Southwestern Europe: the EPICA study – reports the prevalence of heart failure in mainland Portugal in 1998.
In a population of 5434 individuals over 25 years of age, representative of the Portuguese population, we identified 551 cases of heart failure, corresponding to a prevalence of 4.3%, which increases with age in both genders; in 1.3% the ventricular dysfunction was systolic, while 1.75% had a normal ejection fraction. Paper 6 – Epidemiology of chronic heart failure in Primary Care in the Autonomous Region of Madeira: the EPICA-RAM study – followed the same research protocol and reports an overall prevalence of heart failure of 4.69%, with ventricular systolic dysfunction in 0.76% and a normal ejection fraction in 2.74%. These two studies confirm that when the diagnosis is suspected on clinical grounds it is objectively confirmed in only half of the cases, suggesting that many patients are on inappropriate medication for heart failure while others, who would benefit from it, are probably deprived of it. Paper 7 – Diagnosis of chronic heart failure in Primary Care – reviews the state of the art of heart failure diagnosis in the community and discusses the main challenges faced by family physicians, notably the difficulties of access to complementary diagnostic tests and the screening of asymptomatic cardiac dysfunction in the general population. But the problems of diagnosing heart failure cut across all levels of care, in hospital as in family practice. Although heart failure is the leading cause of hospitalization after the age of 65, and responsible for most of the costs of the syndrome, the diagnosis is underestimated in hospital.
Paper 9 – Validity of a diagnosis of heart failure: implications of misdiagnosing – shows that heart failure was the leading cause of hospitalization in our department over a six-month period, with a prevalence of 17%, and was largely under-coded. Under-coding of the diagnosis downplays the true weight of the syndrome, leading to the incorrect allocation of resources for the management of in-hospital heart failure and for the establishment of programmes capable of providing the indispensable interface with ambulatory care. In response to the questionnaire on the management of heart failure, summarized in paper 8 – Treatment of heart failure in Portuguese hospitals: results of a questionnaire – the Heads of Internal Medicine Departments reported difficulties in timely access to echocardiography and asked for more collaboration from cardiologists; the Heads of Cardiology Departments asked for more specialists and dedicated structures for the diagnosis and treatment of heart failure. The difficulties posed by the diagnosis of heart failure at all levels of care entail epidemiological, socio-economic and financial consequences that harm the patient, the planning of the National Health System and Public Health. In chapter III we recall the importance of a complete diagnosis of heart failure. To the anatomical, functional and syndromic diagnosis must be added the aetiology, the functional class and the comorbidities, which often condition the interpretation of diagnostic tests, the treatment and the prognosis.
Aware of the difficulties faced by family physicians in diagnosing heart failure correctly and in good time in the ambulatory setting, and of the role of these specialists in containing the epidemic, we set out, as secondary objectives of the EPICA study, to investigate the performance of the diagnostic instruments available and within reach of these clinicians. Paper 10 – The diagnosis of heart failure in primary care: value of symptoms and signs – documents the limitations of symptoms, signs and clinical data, when used in isolation, for the diagnosis of heart failure. They are all poorly sensitive, and those with the highest predictive value are those associated with the congestive, more severe forms of the disease: paroxysmal nocturnal dyspnoea (LR 35.5), orthopnoea (LR 39.1), breathlessness when walking on the flat (LR 25.8), jugular distension > 6 cm with hepatomegaly and oedema of the lower limbs (LR 130.3), a ventricular gallop (LR 30.0), tachycardia > 110 bpm (LR 26.7) and pulmonary crackles (LR 23.3) are thus associated with the diagnosis, but are very infrequent among unselected ambulatory heart failure patients. Previous treatment with a diuretic (LR 10.6) or digoxin (LR 24.9), or a previous episode of acute pulmonary oedema (LR 54.2), are other predictors of the diagnosis. Paper 11 – Aetiology, comorbidity and drug therapy of chronic heart failure in the real world: the EPICA substudy – confirms that arterial hypertension is, among all risk factors, the main aetiology of heart failure in the ambulatory setting in Portugal (80%). Thirty-nine percent of the patients included in the EPICA study had a history of coronary artery disease, and 15% of atrial fibrillation.
We also analysed comorbidity and its influence on prescription, knowing that the prescription of the drugs recommended for heart failure is, in Portugal as generally in Europe, well below the desirable level. Paper 12 – The value of electrocardiogram and X-ray for confirming or refuting a suspected diagnosis of heart failure in the community – shows that electrocardiogram and chest X-ray data, by themselves, do not correctly predict the diagnosis of heart failure in the ambulatory setting; 25% of the heart failure patients included in EPICA had a normal electrocardiogram or chest X-ray. In paper 13 – Evaluation of the performance and concordance of clinical questionnaires for heart failure in primary care – we compared seven clinical questionnaires and scores commonly used for the diagnosis of heart failure in large epidemiological and drug studies. They showed fair to good concordance with one another, and were highly specific (>90%) for the diagnosis but poorly sensitive. They raise the probability of the diagnosis from 4.3% pre-test to 25-30% post-test, and thus prove more useful for excluding a cardiac cause of the symptoms than for diagnosing heart failure. Paper 14 – Epidemiology of heart failure in mainland Portugal: new data from the EPICA study – compares the characteristics of the patients who, presenting a clinical picture compatible with the syndrome, were included in EPICA but had no objective cardiac dysfunction (false positives), with those whose diagnosis was objectively confirmed. The former were older, more often women, more often overweight, and had less coronary artery disease.
The investigation further confirms that electrocardiogram and chest X-ray data do not distinguish heart failure patients with ventricular systolic dysfunction from those with a normal ejection fraction. Facing the challenge of diagnosing heart failure with normal ejection fraction, the difficulties of access to echocardiography in the ambulatory setting, the cost of the examination and the still scarcely consensual criteria for the diagnosis of this condition, we analysed and published in paper 15 – The diagnostic challenge of heart failure with preserved systolic function in primary care setting: an EPICA-RAM substudy – the value of the B-type natriuretic peptide NT-proBNP as a triage test for selecting, among patients with a clinical picture compatible with the syndrome, those who should have the diagnosis objectively confirmed by echocardiography. We thus evaluated the performance of the test as a predictor of: the diagnosis of heart failure with normal ejection fraction, according to the international Recommendations; left ventricular hypertrophy; and dilatation of the left atrium. NT-proBNP was a good predictor only of this last parameter, leading us to conclude that the test does not allow triage of patients so as to reduce the need for echocardiography when heart failure is clinically suspected, at least in the mildly evolved, frequently asymptomatic, ambulatory cases. We also compared the performance of the B-type natriuretic peptides, BNP and NT-proBNP, in the diagnosis of symptomatic heart failure with ventricular systolic dysfunction and with normal ejection fraction, treated in hospital. The results of this investigation are reported in paper 16 – Comparative value of BNP and NT-proBNP for the diagnosis of heart failure.
Both tests showed excellent and comparable performance in the diagnosis of the syndrome, but neither was able to distinguish the two types of heart failure. We reviewed and discussed the state of the art on the role of the different complementary examinations, notably the natriuretic peptides and echocardiography, in the diagnosis of the different types of heart failure and cardiac dysfunction, as well as the most recent international Recommendations. We analysed the strategies proposed for screening asymptomatic ventricular dysfunction, which is at least as frequent in the ambulatory setting as symptomatic heart failure. Moreover, the evidence shows that early treatment of asymptomatic ventricular dysfunction is effective and reduces costs. The gold standard for screening ventricular dysfunction would require an echocardiogram for the whole population, which is unaffordable. Several strategies have been investigated in recent years in search of the most effective and least costly one. All agree that no single examination can suffice for this screening. On the other hand, the electrocardiogram and/or the natriuretic peptides, whether or not incorporated into clinical scores, are often put forward as effective tests for pre-screening the patients to be referred for echocardiography. Their use reduces the number of echocardiograms needed and the expense; while at least as effective as screening for breast or cervical cancer, it requires no greater investment. Some authors have shown that performing a qualitative echocardiogram with a hand-held device, after the ECG or BNP/NT-proBNP determination and before the full echocardiogram, further improves the screening strategy for cardiac dysfunction.
Finally, we conclude with some comments on the future prospects for the management of heart failure. It is absolutely urgent and essential to establish a precise and universal definition, together with objective, simple and reproducible diagnostic criteria applicable to the whole spectrum of heart failure, so that in the near future we may know the true burden of heart failure and organize its management as effectively as possible while respecting the inevitable containment of public spending. The diagnostic problems of ambulatory care require continuous training programmes for family physicians and easier dialogue with the hospital and with specialists, as we did in a planned, systematic way during the EPICA programme. Heart failure clinics and structured heart failure management programmes have proved their effectiveness. They allow better implementation of the diagnostic and treatment Recommendations, and improve the quality of life and survival of the heart failure patients followed in them. In paper 17 – Translating Guidelines into clinical practice: benefits of an acute heart failure unit – we report our experience of the gains obtained in the diagnosis and treatment of heart failure patients hospitalized in our department before and after the opening of a heart failure unit, which allowed us to improve the quality of care delivered to these patients.
We argue that these units, specifically dedicated to the management of heart failure, must multiply, integrate into broader programmes of care organization for heart failure patients, notably including day hospitals, and adopt structures adapted to the needs of the populations they serve. These heart failure management programmes may play a decisive role in the scientific training of physicians, especially family physicians, in the interface between primary care and the hospital, and in the referral of heart failure patients. All efforts to identify and correct cardiovascular risk factors early, and to develop screening strategies for cardiac dysfunction, must be multiplied as prevention strategies. All this is possible and effective at a cost similar to that of other programmes already under way, but it requires the collaboration of everyone: the population, health professionals, investigators and the political powers that make possible the assessment of needs, the setting-up of these multidisciplinary programmes and their quality control, so that we may soon bring this epidemic under control.
Resumo:
The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services over locally generated data. In this paper, we give an early overview of a distributed mobile system that allows accessing and processing data distributed across mobile devices without an external communication infrastructure. Copyright © 2015 ICST.
Resumo:
A new algorithm is proposed for estimating the velocity vector of moving ships using Single Look Complex (SLC) SAR data in strip-map acquisition mode. The algorithm exploits both the amplitude and the phase information of the Doppler-decompressed data spectrum, with the aim of estimating both the azimuth antenna pattern and the backscattering coefficient as a function of the look angle. The antenna pattern estimate provides information about the target velocity, while the backscattering coefficient can be used for vessel classification. The range velocity is retrieved in the slow-time frequency domain by estimating the antenna pattern effects induced by the target motion, and the azimuth velocity is then calculated from the estimated range velocity and the ship orientation. Finally, the algorithm is tested on simulated SAR SLC data.
Resumo:
Forest fire dynamics are often characterized by the absence of a characteristic length scale, long-range correlations in space and time, and long memory, features also associated with fractional-order systems. In this paper a public-domain forest fire catalogue, containing information on events in Portugal over the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe forest fire dynamics; in the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in a two-dimensional space, by taking time series representative of the phenomena. The MDS approach generates maps in which objects perceived as similar to each other are placed close together, forming clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
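The PPP construction described in the abstract is simple to sketch. The minimal Python example below (the delay tau = 30 and the lognormal event amplitudes are illustrative assumptions, not values from the paper) builds a Dirac-impulse event series and its two pseudo-phase-plane coordinates:

```python
import numpy as np

def pseudo_phase_plane(x, tau):
    """Return the two PPP coordinates (x(t), x(t + tau)) for a 1-D series."""
    x = np.asarray(x, dtype=float)
    return x[:-tau], x[tau:]

# Toy event series: Dirac-like impulses with amplitude playing the role
# of the burnt area (lognormal amplitudes are an illustrative choice).
rng = np.random.default_rng(0)
series = np.zeros(1000)
event_days = rng.choice(1000, size=50, replace=False)
series[event_days] = rng.lognormal(mean=2.0, sigma=1.0, size=50)

u, v = pseudo_phase_plane(series, tau=30)
assert u.shape == v.shape == (970,)
```

Plotting `u` against `v` gives the two-dimensional PPP representation; the delay is normally chosen from the data (e.g., the first minimum of the autocorrelation), not fixed a priori as here.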
Resumo:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
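The paper's own relevance and redundancy criteria are not reproduced here, but the baseline they are compared against can be sketched: the mutual information between a discretized feature and the class labels, computed directly from the bin-class histogram. The data and features below are synthetic.

```python
import numpy as np

def mutual_information(feature, labels):
    """MI (in bits) between a discrete feature and class labels,
    computed from the bin-class histogram."""
    joint = np.zeros((feature.max() + 1, labels.max() + 1))
    for f, c in zip(feature, labels):
        joint[f, c] += 1
    joint /= joint.sum()
    pf = joint.sum(axis=1, keepdims=True)   # marginal over bins
    pc = joint.sum(axis=0, keepdims=True)   # marginal over classes
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pf @ pc)[nz])).sum())

# Rank discretized features by relevance: highest MI with the class first.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
relevant = (y + rng.integers(0, 2, size=500)) // 2   # correlated with y
noise = rng.integers(0, 4, size=500)                 # independent of y
assert mutual_information(relevant, y) > mutual_information(noise, y)
```

A redundancy criterion in the same spirit would apply the same estimator between pairs of features rather than between a feature and the class.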
Resumo:
This study identifies predictors and normative data for quality of life (QOL) in a sample of Portuguese adults from the general population. A cross-sectional correlational study was undertaken with two hundred and fifty-five (N = 255) individuals from the Portuguese general population (mean age 43 years, range 25–84 years; 148 females, 107 males). Participants completed the European Portuguese versions of the World Health Organization Quality of Life short-form instrument and of the Center for Epidemiologic Studies Depression Scale; demographic information was also collected. Portuguese adults reported their QOL as good. The physical, psychological and environmental domains predicted 44% of the variance of QOL; the strongest predictor was the physical domain and the weakest was social relationships. Age, educational level, socioeconomic status and emotional status were significantly correlated with QOL and explained 25% of its variance, with emotional status the strongest predictor, followed by education and age. QOL differed significantly according to marital status, living place (mainland or islands), type of cohabitants, occupation and health. The sample of adults from the general Portuguese population reported high levels of QOL. The life domain that best explained QOL was the physical domain, and among the other variables emotional status best predicted QOL. These findings inform our understanding of QOL among adults from the Portuguese general population and can help researchers and practitioners using this assessment tool to compare their results with normative data.
Resumo:
Work presented within the scope of the Master's programme in Informatics Engineering (Engenharia Informática), as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Resumo:
One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are the vertices of a simplex. The algorithm complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
Resumo:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices, and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
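As a rough illustration of eigenvalue-based signal-subspace identification (not the paper's estimator: the noise power is assumed known here, and the 5σ² threshold is an arbitrary choice rather than the least-squares selection step), consider:

```python
import numpy as np

rng = np.random.default_rng(2)
bands, pixels, k_true = 50, 2000, 3

# Low-rank signal (k_true endmember signatures mixed linearly) plus white noise.
M = rng.random((bands, k_true))
A = rng.dirichlet(np.ones(k_true), size=pixels).T
noise_std = 0.01
Y = M @ A + noise_std * rng.standard_normal((bands, pixels))

# Eigen-decompose the sample correlation matrix and keep the eigenvalues
# that clearly exceed the noise power.
R = Y @ Y.T / pixels
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
k_est = int(np.sum(eigvals > 5 * noise_std**2))
assert k_est == k_true
```

In practice the noise correlation matrix is unknown and must itself be estimated (the paper uses multiple regression for this), which is what separates a usable subspace estimator from this toy threshold.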
Resumo:
Master's in Informatics Engineering (Engenharia Informática) - Specialization in Knowledge and Decision Technologies (Tecnologias do Conhecimento e Decisão).
Resumo:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed by the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case of hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
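The dependence induced by the sum-to-one constraint is easy to verify numerically: abundance vectors drawn on the simplex always exhibit negative pairwise covariances, so they cannot be mutually independent. A minimal check, with Dirichlet-distributed abundances as an illustrative choice:

```python
import numpy as np

# Abundances that sum to one are necessarily dependent: the covariance
# between any two fractions is negative (one fraction growing forces
# the others to shrink).
rng = np.random.default_rng(3)
A = rng.dirichlet(np.ones(3), size=10000)     # rows: abundance vectors on the simplex
C = np.cov(A, rowvar=False)
assert np.allclose(A.sum(axis=1), 1.0)
assert (C[~np.eye(3, dtype=bool)] < 0).all()  # all off-diagonal covariances negative
```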
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Resumo:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24,25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and, then, hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case of hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. Minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. 
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and the N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
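The PPI scoring step just described can be sketched in a few lines. The dimensions, the number of skewers, and the planted pure pixels below are illustrative assumptions, and the MNF preprocessing step is omitted:

```python
import numpy as np

def ppi_scores(Y, n_skewers=500, seed=0):
    """Pixel Purity Index: count how often each pixel is an extreme
    when projected onto random unit-length directions (skewers)."""
    rng = np.random.default_rng(seed)
    bands, pixels = Y.shape
    scores = np.zeros(pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(bands)
        proj = skewer @ Y                 # projection of every pixel
        scores[np.argmin(proj)] += 1      # record both extremes
        scores[np.argmax(proj)] += 1
    return scores

# Pure pixels (simplex vertices) should collect the highest counts.
rng = np.random.default_rng(4)
M = rng.random((20, 3))                   # 3 endmember signatures
A = rng.dirichlet(np.ones(3), size=300).T
A[:, :3] = np.eye(3)                      # plant 3 pure pixels at indices 0..2
scores = ppi_scores(M @ A)
assert set(np.argsort(scores)[-3:]) == {0, 1, 2}
```

Because the extremes of a linear projection over a simplex are always attained at its vertices, every skewer votes for one of the planted pure pixels, which is exactly the intuition behind the cumulative score.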
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
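A stripped-down sketch of the iteration just described might look as follows. This is not the published VCA (it uses a random direction inside the orthogonal complement instead of VCA's SNR-dependent choice, and it plants pure pixels in synthetic data), but it shows the project-and-take-the-extreme loop:

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Simplified VCA-style extraction: repeatedly project the data onto a
    direction orthogonal to the endmembers found so far and keep the pixel
    with the largest absolute projection."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], 0))         # endmember signatures found so far
    idx = []
    for _ in range(p):
        if E.shape[1] == 0:
            direction = rng.standard_normal(Y.shape[0])
        else:
            # Project a random vector onto the orthogonal complement of span(E).
            P = np.eye(Y.shape[0]) - E @ np.linalg.pinv(E)
            direction = P @ rng.standard_normal(Y.shape[0])
        k = int(np.argmax(np.abs(direction @ Y)))
        idx.append(k)
        E = np.hstack([E, Y[:, [k]]])
    return idx

# With pure pixels present, the selected indices should be those pure pixels.
rng = np.random.default_rng(5)
M = rng.random((30, 3))
A = rng.dirichlet(np.ones(3) * 0.5, size=400).T
A[:, :3] = np.eye(3)                      # plant pure pixels at indices 0..2
idx = vca_sketch(M @ A, p=3)
assert set(idx) == {0, 1, 2}
```

Each projection zeroes out the contributions of the endmembers already found, so the surviving extreme must come from a vertex not yet extracted.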
Resumo:
In this paper, a new parallel method for sparse spectral unmixing of remotely sensed hyperspectral data on commodity graphics processing units (GPUs) is presented. A semi-supervised approach is adopted, which relies on the increasing availability of spectral libraries of materials measured on the ground instead of resorting to endmember extraction methods. The method is based on spectral unmixing by splitting and augmented Lagrangian (SUNSAL), which estimates the materials' abundance fractions. The parallel method operates in a pixel-by-pixel fashion and its implementation properly exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for simulated and real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
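SUNSAL itself (an ADMM solver with sparsity and nonnegativity constraints) is not reproduced here; the sketch below only illustrates the per-pixel, library-based problem structure that the GPU version parallelizes. Plain least squares suffices in this toy setting because the synthetic data are noiseless and the hypothetical library has full column rank:

```python
import numpy as np

rng = np.random.default_rng(6)
bands, lib_size, pixels = 50, 10, 100
library = rng.random((bands, lib_size))    # hypothetical spectral library

# Each pixel mixes 2 library members with nonnegative weights summing to one,
# so the true abundance vectors are sparse.
true_abund = np.zeros((lib_size, pixels))
for j in range(pixels):
    picks = rng.choice(lib_size, size=2, replace=False)
    true_abund[picks, j] = rng.dirichlet(np.ones(2))
Y = library @ true_abund                   # noiseless observed spectra

# One small least-squares problem per pixel: this independent, per-column
# structure is the unit of parallelism a GPU implementation exploits.
est = np.column_stack(
    [np.linalg.lstsq(library, Y[:, j], rcond=None)[0] for j in range(pixels)]
)
assert np.allclose(est, true_abund, atol=1e-8)
```

With noise and a large, coherent library, the plain least-squares step would have to be replaced by the constrained, l1-regularized ADMM iteration that SUNSAL actually performs.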