956 results for "diagnostic and prognostic algorithms development"
Abstract:
INTRODUCTION: Guidelines for the treatment of patients with severe hypothermia, and particularly with hypothermic cardiac arrest, recommend rewarming using extracorporeal circulation (ECC). However, guidelines for the further in-hospital diagnostic and therapeutic approach to these patients, who often suffer from additional injuries (especially avalanche casualties), are lacking. The lack of such algorithms may considerably delay treatment and put patients at further risk. Together with a multidisciplinary team, the Emergency Department at the University Hospital in Bern, a level I trauma centre, created an algorithm for the in-hospital treatment of patients with hypothermic cardiac arrest. This algorithm primarily focuses on the decision-making process for the administration of ECC. THE BERNESE HYPOTHERMIA ALGORITHM: The major difference between the traditional approach, in which all hypothermic patients are primarily admitted to the emergency centre, and our new algorithm is that hypothermic cardiac arrest patients without obvious signs of severe trauma are taken to the operating theatre without delay. There, the interdisciplinary team decides whether to rewarm the patient using ECC based on a standard clinical trauma assessment, serum potassium levels, core body temperature, sonographic examinations of the abdomen, pleural space and pericardium, and, if needed, a pelvic X-ray. During ECC, sonography is repeated, and haemodynamic function as well as haemoglobin levels are monitored regularly. Standard radiological investigations according to the local multiple trauma protocol are performed only after ECC. Transfer to the intensive care unit, where mild therapeutic hypothermia is maintained for another 12 h, should not be delayed by additional X-rays for minor injuries. DISCUSSION: The presented algorithm is intended to facilitate in-hospital decision-making and shorten the door-to-reperfusion time for patients with hypothermic cardiac arrest. It was the result of intensive collaboration between different specialties and highlights the importance of high-quality teamwork for rare cases of severe accidental hypothermia. Information derived from the new International Hypothermia Registry will help to answer open questions and further optimize the algorithm.
Abstract:
A high percentage of oesophageal adenocarcinomas show aggressive clinical behaviour with significant resistance to chemotherapy. Heat-shock proteins (HSPs) and glucose-regulated proteins (GRPs) are molecular chaperones that play an important role in tumour biology. Recently, novel therapeutic approaches targeting HSP90/GRP94 have been introduced for treating cancer. We performed a comprehensive investigation of HSP and GRP expression, including HSP27, phosphorylated (p)-HSP27(Ser15), p-HSP27(Ser78), p-HSP27(Ser82), HSP60, HSP70, HSP90, GRP78 and GRP94, in 92 primary resected oesophageal adenocarcinomas using reverse-phase protein arrays (RPPA), immunohistochemistry (IHC) and real-time quantitative RT-PCR (qPCR). Results were correlated with pathological features and survival. HSP/GRP protein and mRNA expression was detected in all tumours at various levels. Unsupervised hierarchical clustering revealed two distinct groups of tumours with specific protein expression patterns: the hallmark of the first group was high expression of p-HSP27(Ser15, Ser78, Ser82) and low expression of GRP78, GRP94 and HSP60; the second group showed the inverse pattern, with low p-HSP27 and high GRP78, GRP94 and HSP60 expression. The clinical outcome of patients in the first group was significantly better than that of patients in the second group, in both univariate analysis (p = 0.015) and multivariate analysis (p = 0.029). Interestingly, these two groups could not be distinguished by immunohistochemistry or qPCR analysis. In summary, two distinct and prognostically relevant HSP/GRP protein expression patterns were detected in adenocarcinomas of the oesophagus by RPPA. Our approach may be helpful for identifying candidates for specific HSP/GRP-targeted therapies.
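As an illustration of the analysis strategy described above (unsupervised hierarchical clustering of protein expression followed by a survival comparison between the resulting groups), the following Python sketch uses Ward linkage and a log-rank test. It is a minimal, hypothetical reconstruction rather than the authors' pipeline: the marker matrix layout, the two-group cut and the use of a log-rank test in place of the reported univariate analysis are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines.statistics import logrank_test

def cluster_and_compare_survival(expr, time, event):
    """Split tumours into two groups by hierarchical clustering of
    HSP/GRP expression and compare survival between the groups.

    expr  : (n_tumours, n_markers) array of protein expression values
    time  : (n_tumours,) follow-up times
    event : (n_tumours,) 1 if the event (death) was observed, else 0
    """
    Z = linkage(expr, method="ward")                  # agglomerative clustering
    groups = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 groups
    in_g1 = groups == 1
    res = logrank_test(time[in_g1], time[~in_g1],
                       event_observed_A=event[in_g1],
                       event_observed_B=event[~in_g1])
    return groups, res.p_value
```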
Abstract:
Currently, there are no molecular biomarkers that guide treatment decisions for patients with head and neck squamous cell carcinoma (HNSCC). Several retrospective studies have evaluated TP53 in HNSCC, and the results have suggested that specific mutations are associated with poor outcome. However, these studies are heterogeneous with respect to the site and stage of disease of the patients reviewed, the treatments rendered, and the methods of evaluating TP53 mutation. Thus, it remains unclear in which patients and in which clinical settings TP53 mutation is most useful for predicting treatment failure. In the current study, we reviewed the records of a cohort of patients with advanced, resectable HNSCC who received surgery and post-operative radiation (PORT) and had DNA isolated from fresh tumor tissue obtained at the time of surgery. TP53 mutations were identified using Sanger sequencing of exons 2-11 and the associated splice regions of the TP53 gene. We found that the group of patients with either non-disruptive or disruptive TP53 mutations had decreased overall survival, decreased disease-free survival, and an increased rate of distant metastasis. When examined as an independent factor, disruptive mutation was strongly associated with the development of distant metastasis. As a second aim of this project, we performed a pilot study examining the utility of the AmpliChip® p53 test as a practical method for TP53 sequencing in the clinical setting. AmpliChip® testing and Sanger sequencing were performed on a separate cohort of patients with HNSCC. Our study demonstrated the ability of the AmpliChip® to call TP53 mutations from a single formalin-fixed paraffin-embedded slide. The results from AmpliChip® testing were identical to those of the Sanger method in 11 of 19 cases, with a higher rate of mutation calls using the AmpliChip® test. TP53 mutation is a potential prognostic biomarker among patients with advanced, resectable HNSCC treated with surgery and PORT. Whether this subgroup of patients could benefit from the addition of concurrent or induction chemotherapy remains to be evaluated in prospective clinical trials. Our pilot study of the p53 AmpliChip® suggests that this could be a practical and reliable method of TP53 analysis in the clinical setting.
Abstract:
BACKGROUND The treatment and outcomes of patients with human immunodeficiency virus (HIV)-associated Hodgkin lymphoma (HL) continue to evolve. The International Prognostic Score (IPS) is used to predict the survival of patients with advanced-stage HL, but it has not been validated in patients with HIV infection. METHODS This was a multi-institutional, retrospective study of 229 patients with HIV-associated, advanced-stage, classical HL who received doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD) plus combination antiretroviral therapy. Their clinical characteristics were presented descriptively, and multivariate analyses were performed to identify the factors that were predictive of response and prognostic of progression-free survival (PFS) and overall survival (OS). RESULTS The overall and complete response rates to ABVD in patients with HIV-associated HL were 91% and 83%, respectively. After a median follow-up of 5 years, the 5-year PFS and OS rates were 69% and 78%, respectively. In multivariate analyses, there was a trend toward an IPS score >3 as an adverse factor for PFS (hazard ratio [HR], 1.49; P=.15) and OS (HR, 1.84; P=.06). A cluster of differentiation 4 (CD4)-positive (T-helper) cell count <200 cells/μL was associated independently with both PFS (HR, 2.60; P=.002) and OS (HR, 2.04; P=.04). The CD4-positive cell count was associated with an increased incidence of death from other causes (HR, 2.64; P=.04) but not with death from HL-related causes (HR, 1.55; P=.32). CONCLUSIONS The current results indicate excellent response and survival rates in patients with HIV-associated, advanced-stage, classical HL who receive ABVD and combination antiretroviral therapy as well as the prognostic value of the CD4-positive cell count at the time of lymphoma diagnosis for PFS and OS. Cancer 2014. © 2014 American Cancer Society.
Abstract:
We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second-order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators, we work with incomplete U-statistics to accelerate the initial steps of our procedures.
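For context, the baseline that the new algorithms are compared against is the classical fixed-point iteration for M-estimators of scatter. The numpy sketch below implements that standard iteration for Tyler's estimator on centred data; it illustrates the fixed-point scheme the paper sets out to accelerate, not the proposed partial Newton-Raphson procedure.

```python
import numpy as np

def tyler_scatter(X, max_iter=100, tol=1e-8):
    """Classical fixed-point iteration for Tyler's M-estimator of scatter.

    X : (n, p) array of centred observations (no row may be exactly zero).
    Returns a (p, p) scatter matrix normalised to have trace p.
    """
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(max_iter):
        inv_S = np.linalg.inv(Sigma)
        # squared Mahalanobis-type distances d_i = x_i^T Sigma^{-1} x_i
        d = np.einsum("ij,jk,ik->i", X, inv_S, X)
        Sigma_new = (X.T @ ((p / d)[:, None] * X)) / n
        Sigma_new *= p / np.trace(Sigma_new)          # fix the scale
        if np.linalg.norm(Sigma_new - Sigma, "fro") < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma
```

The paper's contribution replaces this fixed-point update with a partial Newton-Raphson step derived from a second-order Taylor expansion of the target functional, which is why convergence is considerably faster.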
Abstract:
Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, particularly when medical errors surface. This is especially true in the emergency department, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise diseases, to simplify complex clinical algorithms and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results, and generate unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond the observational reporting and meta-analysis of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department. The following biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of the pathophysiology, the historical evolution of the evidence, and the strengths and limitations relevant to their rational implementation in clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine, as well as potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the resulting defensive medicine; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, should translate into better resource use, shorter length of hospital stay, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers to improve their diagnostic skills and become more parsimonious requesters of laboratory tests.
Abstract:
Intrahepatic cholangiocarcinomas are the second most common primary liver malignancies, with an increasing incidence over the past decades. Owing to the lack of early symptoms and their aggressive oncobiological behavior, the diagnostic approach is challenging and the outcome remains unsatisfactory, with a poor prognosis. Thus, a consistent staging system is needed to compare different therapeutic approaches, but independent predictors of worse survival are still controversial. Currently, four different staging systems are primarily used, which differ in the way they determine the 'T' category. Furthermore, different nomograms and prognostic models have recently been proposed and may provide additional information for predicting prognosis, thereby helping to select an adequate treatment strategy. This review discusses the diagnostic approach to intrahepatic cholangiocarcinoma and compares and contrasts the most current staging systems and prognostic models.
Abstract:
In this dissertation, the cytogenetic characteristics of bone marrow cells from 41 multiple myeloma patients were investigated. These cytogenetic data were correlated with the total DNA content as measured by flow cytometry. Both the cytogenetic information and DNA content were then correlated with clinical data to determine whether the diagnosis and prognosis of multiple myeloma could be improved. One hundred percent of the patients demonstrated abnormal chromosome numbers per metaphase. The average chromosome number per metaphase ranged from 42 to 49.9, with a mean of 44.99. The percent hypodiploidy ranged from 0-100% and the percent hyperdiploidy from 0-53%. Detailed cytogenetic analyses were very difficult to perform because of the paucity of mitotic figures and the poor chromosome morphology; thus, detailed chromosome banding analysis in these patients was impossible. Thirty-seven percent of the patients had normal total DNA content, whereas 63% had abnormal amounts of DNA (one patient with less than normal and 25 patients with greater than normal amounts of DNA). Several clinical parameters were used in the statistical analyses: tumor burden, patient status at biopsy, patient response status, past therapy, type of treatment and percent plasma cells. Statistically significant correlations were found only among these clinical parameters: pretreatment tumor burden versus patient response, patient biopsy status versus patient response, and past therapy versus patient response. No correlations were found between percent hypodiploid, diploid, hyperdiploid or DNA content and patient response status, nor were any found among patients with: (a) normal plasma cells, low pretreatment tumor mass burden and more than 50% of the analyzed metaphases with 46 chromosomes; (b) normal amounts of DNA, low pretreatment tumor mass burden and more than 50% of the metaphases with 46 chromosomes; (c) normal amounts of DNA and normal quantities of plasma cells; (d) abnormal amounts of DNA, abnormal amounts of plasma cells, high pretreatment tumor mass burden and less than 50% of the metaphases with 46 chromosomes. Technical drawbacks of both cytogenetic and DNA content analysis in these multiple myeloma patients are discussed, along with the lack of correlation between DNA content and chromosome number. Refined chromosome banding analysis awaits technical improvements before we can understand which chromosome material (if any) makes up the "extra" DNA in these patients. None of the correlations tested can be used as diagnostic or prognostic aids for multiple myeloma.
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, short scan times are desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts by taking advantage of the assumption that the relevant motion is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, it produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
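To make the volume-of-interest idea concrete, the toy sketch below performs a gradient-descent least-squares reconstruction in which only voxels inside a motion VOI are updated while the background is held at a static prior image. This is a hypothetical illustration of the general principle, not the dissertation's reconstruction algorithm; the system matrix A, the prior image and the step size are assumed.

```python
import numpy as np

def voi_least_squares_recon(A, b, voi_mask, prior, n_iter=50, step=1e-3):
    """Toy VOI-restricted least-squares reconstruction by gradient descent.

    A        : (m, n) system matrix mapping image voxels to projection samples
    b        : (m,) measured projection data for one respiratory phase
    voi_mask : (n,) boolean mask of voxels inside the volume-of-interest
    prior    : (n,) static prior image used for voxels outside the VOI
    """
    x = prior.astype(float).copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of 0.5 * ||Ax - b||^2
        x[voi_mask] -= step * grad[voi_mask]   # update only inside the VOI
        x[~voi_mask] = prior[~voi_mask]        # keep the static background fixed
        np.clip(x, 0.0, None, out=x)           # non-negative attenuation values
    return x
```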
Abstract:
Nanotechnology is a recently established research area that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. At the same time, the complexity of information at the nano level is much higher than at conventional biological levels (from populations down to the cell), so any nanomedical research workflow inherently demands advanced information management strategies. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build bridges between medicine, nanotechnology and informatics, fostering the application of computational methods to solve the informational issues that arise at the wide intersection between biomedicine and nanotechnology. These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, with the ultimate goal of leveraging the available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that require or could benefit from nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data from the nanomedical domain. Three scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities rely on informatics tools and services such as web browsers, citation and abstract databases indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful completion of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers were found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whereas these methodologies show severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. The approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard (a manually annotated training and test set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate, filter and validate part of the currently existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which novel algorithms can be tested to infer new, valuable information for nanomedical research.
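As an illustration of the kind of classifier such a manually annotated gold standard enables, the following Python sketch trains a TF-IDF plus logistic regression model to separate nano-oriented clinical trial summaries from traditional pharmaceutical ones. The toy summaries, labels and model choice are hypothetical stand-ins; the dissertation does not specify this particular pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-written stand-ins for annotated trial summaries
# (1 = nanodrug/nanodevice study, 0 = traditional pharmaceutical).
summaries = [
    "liposomal doxorubicin nanoparticle formulation for solid tumours",
    "gold nanoparticle contrast agent for tumour imaging",
    "randomized trial of oral metformin in type 2 diabetes",
    "comparison of two statin dosing regimens for hyperlipidaemia",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(summaries, labels)

# Classify an unseen (also hypothetical) trial summary.
print(clf.predict(["phase I study of a polymeric nanoparticle drug carrier"]))
```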
Abstract:
Acquired brain injury is a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival of patients once the acute phase is over, also makes it a highly prevalent problem. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020. Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technological solutions into the neurorehabilitation process aims to reach a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since these four characteristics are what ensure that treatments are effective. Unlike most medical disciplines, there are no established associations of symptoms and signs of cognitive impairment that can guide therapy. Currently, neurorehabilitation treatments are designed on the basis of the results of a neuropsychological assessment battery that evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work is framed aims to design and develop a cognitive profile based not only on the results of that test battery, but also on theoretical information encompassing anatomical structures and functional relationships, and on anatomical information obtained from imaging studies such as magnetic resonance. The cognitive profile used to design the treatments therefore integrates personalized, evidence-based information. Neuroimaging techniques are an essential tool for identifying lesions and generating these cognitive profiles. The classical approach to lesion identification is to delineate brain anatomical regions manually. This approach suffers from inconsistencies of criteria between clinicians, poor reproducibility and long processing times, so automating the procedure is essential to ensure an objective extraction of information. Automatic delineation of anatomical regions is performed by registration, either against an atlas or against imaging studies of other subjects. However, the pathological changes associated with brain injury are always accompanied by intensity abnormalities and/or changes in the location of structures. As a result, traditional intensity-based registration algorithms do not work correctly and require the clinician to select certain landmarks (referred to in this thesis as singular points). These algorithms also do not allow large, delocalized deformations, which can likewise occur in the presence of lesions caused by a stroke or a traumatic brain injury (TBI). This thesis focuses on the design, development and implementation of a methodology for the automatic detection of lesioned structures that integrates algorithms whose main objective is to generate reproducible and objective results. The methodology is divided into four stages: pre-processing, singular point identification, registration and lesion detection. The work and results achieved in this thesis are as follows. Pre-processing: the aim of this first stage is to homogenize all input data so that valid conclusions can be drawn from the results; it therefore has a great impact on the final results and consists of three operations: skull stripping, intensity normalization and spatial normalization. Singular point identification: this stage automates the identification of anatomical landmarks (singular points), replacing their manual identification by the clinician; it allows a larger number of points to be identified (and thus more information), removes the factor associated with inter-subject variability so that results are reproducible and objective, and eliminates the time spent on manual marking. This thesis proposes a singular point identification algorithm (descriptor) based on a multi-detector solution containing multi-parametric information, both spatial and intensity-based, which has been compared with similar algorithms found in the state of the art. Registration: this stage aims to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this work is descriptor-based, and its main objective is to compute a vector field that allows delocalized deformations to be introduced in the image (in different image regions), as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects; the results obtained are promising and represent a new context for the automatic identification of structures. Lesion identification: this final stage identifies those structures whose spatial location and area or volume have been modified with respect to a normal state. To this end, a statistical study of the atlas to be used is performed, and the statistical parameters of normality associated with location and area are established. The number of anatomical structures that can be identified depends on the structures delineated in the atlas, and the methodology is independent of the selected atlas. Overall, this doctoral thesis corroborates the research hypotheses put forward regarding the automatic identification of lesions using structural medical imaging studies, specifically magnetic resonance studies. Building on these foundations, new research fields can be opened that contribute to improving lesion detection.
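To illustrate the final lesion-identification stage (flagging structures whose location or area falls outside the atlas-derived range of normality), here is a minimal, hypothetical Python sketch. The data structures, the z-score threshold and the way positional deviation is aggregated are assumptions rather than the thesis's actual implementation.

```python
import numpy as np

def flag_lesioned_structures(measurements, atlas_stats, z_thresh=2.5):
    """Flag structures whose area or centroid deviates from atlas normality.

    measurements : dict name -> {"centroid": (x, y, z), "area": float}
                   values measured in the registered patient image
    atlas_stats  : dict name -> {"centroid_mean": (x, y, z),
                                 "centroid_std": (x, y, z),
                                 "area_mean": float, "area_std": float}
    Returns the names of structures exceeding the z-score threshold.
    """
    flagged = []
    for name, m in measurements.items():
        s = atlas_stats[name]
        z_area = abs(m["area"] - s["area_mean"]) / s["area_std"]
        # crude positional z-score: displacement relative to atlas variability
        disp = np.linalg.norm(np.subtract(m["centroid"], s["centroid_mean"]))
        z_pos = disp / np.linalg.norm(s["centroid_std"])
        if z_area > z_thresh or z_pos > z_thresh:
            flagged.append(name)
    return flagged
```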
Abstract:
We describe the case of a patient with T-lymphoblastic lymphoma whose disseminated mucormycosis was diagnosed with delay; we address the diagnostic and therapeutic decision-making process and review the diagnostic workup of patients with potential invasive fungal disease (IFD). The diagnosis was delayed despite a suggestive radiological presentation of the patient's pulmonary lesion. The uncommon risk profile (T-lymphoblastic lymphoma, short neutropenic phases) wrongly led to a low level of suspicion. The diagnosis was also hampered by the lack of indirect markers for infections caused by Mucorales, the low sensitivity of both fungal culture and panfungal PCR, and the limited availability of species-specific PCR. A high level of suspicion of IFD is needed, and aggressive diagnostic procedures should be promptly initiated even in apparently low-risk patients with uncommon presentations. The extent of the analytical workup should be decided on a case-by-case basis. Diagnostic tests such as the galactomannan and β-D-glucan tests and/or PCR on biological material followed by sequencing should be chosen according to their availability and after evaluation of their specificity and sensitivity. In high-risk patients, preemptive therapy with a broad-spectrum mould-active antifungal agent should be started before definitive diagnostic findings become available.
Abstract:
B-type natriuretic peptide (BNP) is the first biomarker of proven value in screening for left ventricular dysfunction. The availability of point-of-care testing has escalated clinical interest and the resultant research is defining a role for BNP in the investigation and treatment of critically ill patients. This review was undertaken with the aim of collecting and assimilating current evidence regarding the use of BNP assay in the evaluation of myocardial dysfunction in critically ill humans. The information is presented in a format based upon organ system and disease category. BNP assay has been studied in a spectrum of clinical conditions ranging from acute dyspnoea to subarachnoid haemorrhage. Its role in diagnosis, assessment of disease severity, risk stratification and prognostic evaluation of cardiac dysfunction appears promising, but requires further elaboration. The heterogeneity of the critically ill population appears to warrant a range of cut-off values. Research addressing progressive changes in BNP concentration is hindered by infrequent assay and appears unlikely to reflect the critically ill patient's rapidly changing haemodynamics. Multi-marker strategies may prove valuable in prognostication and evaluation of therapy in a greater variety of illnesses. Scant data exist regarding the use of BNP assay to alter therapy or outcome. It appears that BNP assay offers complementary information to conventional approaches for the evaluation of cardiac dysfunction. Continued research should augment the validity of BNP assay in the evaluation of myocardial function in patients with life-threatening illness.
Abstract:
Prostate-specific antigen (PSA) is important in tumour detection, monitoring disease progression and detecting tumour recurrence. However, PSA is not a cancer-specific marker, as levels can also be elevated in benign prostatic disease. A number of different mRNA transcripts of PSA have also been identified in prostatic tissue but have not been fully characterized (PSA 424, PSA 525, Schulz transcript). Tissue specimens from transurethral resection of the prostate (TURP) or radical prostatectomy were obtained from 17 men with benign prostatic hyperplasia (BPH) and 15 men with prostate cancer. Total RNA was extracted, and reverse-transcription polymerase chain reaction (RT-PCR) and Southern analysis were carried out using transcript-specific primers and probes to determine which PSA mRNA transcripts were expressed. Real-time PCR was performed to determine transcript levels in the two groups using transcript-specific primers and SYBR green fluorescence. Values obtained were normalized to a standard housekeeping gene, β2-microglobulin. Transcripts amplified by RT-PCR and real-time PCR were confirmed by DNA sequencing. Our results show that the transcripts were present in some, but not all, BPH and cancer samples, indicating that they are not specific to either BPH or cancer. Analysis of the normalized real-time PCR values using Student's t-test shows a significant difference between the two groups for PSA 424, but not for wild-type PSA, PSA 525 or the Schulz transcript. Although a larger cohort of samples is needed to confirm these results, the findings suggest that mRNA levels of PSA 424 may have some utility as a diagnostic or prognostic marker in prostate cancer detection.
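To make the quantification step concrete, the sketch below normalizes target Ct values to a housekeeping gene using the common 2^-ΔCt approach and compares the two groups with Welch's t-test. The Ct values are invented for illustration, and the 2^-ΔCt formula is an assumption; the study only states that values were normalized to β2-microglobulin, without specifying the method.

```python
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_housekeeping):
    """Normalise target Ct values to a housekeeping gene (2^-dCt method)."""
    return 2.0 ** -(np.asarray(ct_target) - np.asarray(ct_housekeeping))

# Hypothetical Ct values for PSA 424 and beta-2-microglobulin in each sample.
bph_expr = relative_expression([27.1, 26.8, 27.5], [19.9, 20.1, 20.0])
cancer_expr = relative_expression([25.2, 24.9, 25.6], [20.0, 19.8, 20.2])

t_stat, p_value = stats.ttest_ind(bph_expr, cancer_expr, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```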
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms based on mathematical theory for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take the geometry of the data into account to regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both the PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to state-of-the-art methods.
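The "dictionary of patches with sparsity constraints" building block mentioned above can be illustrated with the short Python sketch below, which learns a patch dictionary and rebuilds an image from sparse approximations of its patches. It is a generic illustration using scikit-learn primitives, not the thesis's SE-ASDS, AGNN/GOC, aSOB or G2SR methods; patch size, dictionary size and sparsity level are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def sparse_patch_approximation(image, patch_size=(8, 8), n_atoms=128, n_nonzero=5):
    """Code image patches over a learned dictionary with a sparsity constraint
    and rebuild the image from the sparse approximations (2-D grayscale input)."""
    patches = extract_patches_2d(image, patch_size)
    flat = patches.reshape(len(patches), -1)
    dc = flat.mean(axis=1, keepdims=True)        # keep each patch's DC component
    flat = flat - dc

    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero)
    codes = dico.fit_transform(flat)             # sparse code of every patch
    approx = (codes @ dico.components_ + dc).reshape(patches.shape)
    return reconstruct_from_patches_2d(approx, image.shape)
```

In a full super-resolution pipeline this sparse-coding step would be coupled with a degradation model and the regularization terms discussed in the thesis; the sketch only shows the dictionary-plus-sparsity component.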