463 results for Mortality registries


Relevance: 10.00%

Abstract:

Stomach cancer is the fourth most common cancer in the world and ranked 16th in the US in 2008. Age-adjusted rates among Hispanics were 2.8 times those of non-Hispanic Whites in 1998-2002. Despite this, previous research found that Hispanics with non-cardia adenocarcinoma of the stomach had slightly better survival than non-Hispanic Whites. That research, however, did not include a comparison with African-Americans and was limited to data released for 1973-2000 from the nine original Surveillance, Epidemiology, and End Results (SEER) cancer registries. The finding was interpreted in terms of the Hispanic Paradox, the observation that Hispanics in the USA tend, paradoxically, to have substantially better health than other ethnic groups in spite of what their aggregate socio-economic indicators would predict. We extended this research to the SEER 17 Registry, 1973-2005 (with varying years of diagnosis per registry), and compared survival from non-cardia adenocarcinoma of the stomach by ethnicity (Hispanics, non-Hispanic Whites and African-Americans), controlling for age, gender, marital status, stage of disease and treatment using Cox regression survival analysis. We found that Hispanic ethnicity by itself did not confer a survival advantage in non-cardia adenocarcinoma of the stomach; rather, being born abroad was independently associated with the apparent 'Hispanic Paradox' previously reported, and this advantage was seen among foreign-born persons across all race/ethnic groups.
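
As a minimal sketch of the kind of Cox regression described above (using the lifelines library; the file name and column names are illustrative placeholders, not SEER variable names, and covariates are assumed to be numerically coded):

    # Sketch only, not the study's code: Cox proportional hazards model of survival
    # by ethnicity and nativity, adjusting for the covariates named in the abstract.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("seer_noncardia_gastric.csv")  # hypothetical analysis file
    covariates = ["age", "female", "married", "stage", "treated",
                  "hispanic", "african_american", "foreign_born"]  # assumed 0/1 or numeric coding

    cph = CoxPHFitter()
    cph.fit(df[["survival_months", "died"] + covariates],
            duration_col="survival_months", event_col="died")
    cph.print_summary()  # hazard ratios; HR < 1 for foreign_born would mirror the reported finding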

Relevance: 10.00%

Abstract:

Introduction. Cancer registries provide information about treatment initiation but not about the full course of treatment. In an effort to identify patient-reported reasons for discontinuing cancer treatment, patients with prostate, breast, and colorectal cancer were identified for interview from the Alabama State Cancer Registry (ASCR)-Alabama Medicare linked database. This study had two specific aims: (1) determine whether the ASCR-Medicare database accurately reflects patients' treatment experiences, in terms of whether they started and completed treatment, when compared with patient self-report, and (2) determine which patient demographic and health-care system factors are related to treatment completion as defined by patient self-report. Methods. The ASCR-Medicare claims dataset supplemented patient interview responses to identify treatment initiation and completion among prostate, breast, and colorectal cancer patients in Alabama from 1999-2003. The kappa statistic was used to test concordance of treatment initiation and completion between patient self-report and Medicare claims data. Patients who reported not completing treatment were asked questions to ascertain their reasons for treatment discontinuation. Logistic regression models were constructed to explore the association of patient and tumor characteristics with discontinuation of radiation and chemotherapy. Results. Overall, there was fair agreement across all cancer sites about whether one had surgery (kappa=.382). There was fair agreement between self-report and Medicare claims data for starting radiation treatment (kappa=.278) and moderate agreement for starting chemotherapy (kappa=.414). There was no agreement between self-report and claims data for completing radiation or chemotherapy. Patients most often reported the doctor's recommendation (40% for radiation treatment and 21.4% for chemotherapy) and side effects (30% for radiation treatment and 42.8% for chemotherapy) as reasons for discontinuing treatment. Females were less likely to complete radiation than males (OR=.24, 95% CI=.11–.50). Stage I patients were more likely to discontinue radiation treatment than stage III patients (OR=3.34, 95% CI=1.12–9.95). Younger patients were more likely to discontinue chemotherapy than older patients (OR=2.84, 95% CI=1.08–7.69), and breast cancer patients were less likely to discontinue chemotherapy than colorectal patients (OR=.13, 95% CI=.04–.46). Conclusion. This study reveals that patients recall starting treatment more accurately than completing treatment and that several demographic and tumor characteristics influence treatment discontinuation. Providing patients with treatment summaries and survivorship plans can help patients manage their follow-up care when there are gaps in treatment recall and when treatment is discontinued.
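
As a rough illustration of the agreement statistic used above, the following sketch computes Cohen's kappa between self-reported and claims-derived indicators of starting a therapy; the 0/1 vectors are invented placeholders, not study data:

    # Sketch: Cohen's kappa for self-report vs. Medicare claims on treatment initiation.
    from sklearn.metrics import cohen_kappa_score

    self_report = [1, 1, 0, 1, 0, 1, 1, 0]  # 1 = patient reports starting radiation (illustrative)
    claims      = [1, 0, 0, 1, 0, 1, 0, 0]  # 1 = claims show radiation started (illustrative)

    kappa = cohen_kappa_score(self_report, claims)
    print(f"kappa = {kappa:.3f}")  # by common convention, .21-.40 is 'fair' and .41-.60 'moderate' agreement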

Relevance: 10.00%

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods used. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors not found in the literature and differed from the literature on 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not previously been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support provided by a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be used to identify data elements with high cognitive demands.
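
For concreteness, a trivial sketch of the pooled accuracy metric used above (errors per 10,000 fields); the counts are invented for illustration:

    # Sketch: the "errors per 10,000 fields" rate used to pool accuracy results.
    def errors_per_10k_fields(n_errors: int, n_fields_inspected: int) -> float:
        return 10_000 * n_errors / n_fields_inspected

    # e.g. 37 discrepancies found when re-abstracting 5,200 fields (illustrative numbers)
    print(f"{errors_per_10k_fields(37, 5_200):.0f} errors per 10,000 fields")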

Relevance: 10.00%

Abstract:

Background: Lynch Syndrome (LS) is a familial cancer syndrome with a high prevalence of colorectal and endometrial carcinomas among affected family members. Clinical criteria, developed from information obtained from familial colorectal cancer registries, have been generated to identify individuals at elevated risk of having LS. In 2007, the Society of Gynecologic Oncology (SGO) codified criteria to assist in identifying women presenting with gynecologic cancers at elevated risk of having LS. These criteria have not been validated in a population-based setting. Materials and Methods: We retrospectively identified 412 unselected endometrial cancer cases. Clinical and pathologic information was obtained from the electronic medical record, and all tumors were tested for expression of the DNA mismatch repair proteins by immunohistochemistry. Tumors exhibiting loss of MSH2, MSH6 or PMS2 were designated as probable Lynch Syndrome (PLS). For tumors exhibiting immunohistochemical loss of MLH1, we used a PCR-based MLH1 methylation assay to distinguish PLS tumors from sporadic tumors; samples lacking methylation of the MLH1 promoter were also designated as PLS. The sensitivity and specificity of the SGO criteria for detecting PLS tumors were calculated, and clinical and pathologic features of sporadic and PLS tumors were compared. A simplified cost-effectiveness analysis was also performed, comparing the direct costs of applying the SGO criteria with those of universal tumor testing. Results: In our cohort, 43/408 (10.5%) of endometrial carcinomas were designated as PLS. The sensitivity and specificity of the SGO criteria for identifying PLS cases were 32.7% and 77%, respectively. Multivariate analysis of clinical and pathologic parameters failed to identify statistically significant differences between sporadic and PLS tumors, with the exception of tumors arising from the lower uterine segment, which were more likely to be PLS tumors. The cost-effectiveness analysis showed that the clinical-criteria and universal-testing strategies cost $6,235.27 and $5,970.38 per PLS case identified, respectively. Conclusions: The SGO 5-10% criteria successfully identify PLS cases among women who are young or have a significant family history of LS-related tumors. However, a larger proportion of PLS cases, occurring at older ages and with less significant family history, are not detected by this screening strategy. Compared with the SGO clinical criteria, universal tumor testing is a cost-effective strategy to identify women presenting with endometrial cancer who are at elevated risk of having LS.
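
A minimal sketch of the screening metrics reported above; the 2x2 counts are illustrative values roughly consistent with the reported 32.7% sensitivity and 77% specificity, not the study's actual tabulation:

    # Sketch: sensitivity/specificity of a screening rule and cost per case identified.
    def sensitivity(tp, fn): return tp / (tp + fn)
    def specificity(tn, fp): return tn / (tn + fp)
    def cost_per_case(total_cost, cases_identified): return total_cost / cases_identified

    tp, fn, tn, fp = 14, 29, 281, 84  # hypothetical counts (43 PLS, 365 sporadic)
    print(f"sensitivity = {sensitivity(tp, fn):.1%}, specificity = {specificity(tn, fp):.1%}")
    print(f"cost per PLS case identified: ${cost_per_case(250_000, 43):,.2f}")  # hypothetical total cost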

Relevance: 10.00%

Abstract:

Drawing on the new paradigms we know as Postmodern Historical Novels, the Mexican writer returns to the broad thematic line that originates in the historical event which, at the beginning of the twentieth century, convulsed Mexico's vast geography. The aim of this work is to study the three novels by Solares that textualize episodes of the Revolution: Madero, el otro (1989), La noche de Ángeles (1991) and Columbus (1996). The choice of these figures shows an intention to give them back their true voice through the gaze that only fiction allows, insofar as the events recounted arose "more from what is symbolically true than from what is historically exact". Attention will focus on the presence of voices that, from a particular focalization, insert diverse modalizations into the discourse. These voices problematize, with greater or lesser intensity according to the different registers, contesting, questioning or redeeming certain obscure acts of the past carried out by the protagonists of the revolutionary feat. The characteristic appellative tone, moreover, makes explicit the position from which this history is to be read.

Relevance: 10.00%

Abstract:

Our purpose here is to commemorate the centenary of the birth of one of the outstanding intellectuals of the twentieth century, whose work was widely recognized in our cultural sphere: the Venezuelan writer Arturo Uslar Pietri (1906-2001), who devoted his long life to exploring diverse paths of knowledge. He published his first journalistic article at the age of fourteen; the last, his farewell, appeared in his column in the newspaper El Nacional in January 1998, seventy-eight years later. He carried out this notable facet of his production from the perspective of the social communicator, a task added to his work as an educator, in the weekly column he titled Pizarrón. He thus left a record of outstanding opinion pieces that made known the ideas of our great men, Uslar himself among them, while contributing to shaping public opinion from the columns of the press, thereby bearing witness both to our political history and to the most notable registers of our culture.

Relevance: 10.00%

Abstract:

This article is a preliminary report of research on the Black population of Buenos Aires from the second post-revolutionary decade onward, based on the various registers of the Protocolos Notariales held in the Archivo General de la Nación. In short, it seeks to illustrate aspects that may not have received the attention they deserve from researchers of the Afro-porteño question during the independence period: the analysis of manumission, whether purchased or freely granted; of conditional future freedom; of the figure of the Black owner of real estate and of slaves; and of the wills of persons "of color". It therefore seems useful to undertake studies of this kind, which can shed light on little-known or unknown aspects of the morenos and pardos of Buenos Aires during the nineteenth century.

Relevance: 10.00%

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: the content, service and agent levels.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the data and services discovered from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated in different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing up of news from electronic newspapers, and the framework was used for the discovery and extraction of news items from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified discovery framework, which allows configuring agents to perform automated tasks. In addition, a scraping ontology has been defined for the construction of mappings used to scrape web resources, and a novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
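
As a rough illustration of a content-level discovery rule of the kind described above, the following sketch maps CSS selectors in an HTML representation onto fields of a semantic entity; the NewsItem entity, the selectors and the sample markup are illustrative assumptions, not the thesis' scraping ontology:

    # Sketch: a content-level "discovery rule" mapping pieces of an HTML representation
    # onto a semantic entity, in the spirit of screen scraping as described above.
    from dataclasses import dataclass
    from bs4 import BeautifulSoup

    @dataclass
    class NewsItem:                 # hypothetical target entity
        headline: str
        byline: str

    RULE = {"headline": "article h1", "byline": "article .byline"}  # selector-based rule

    def apply_rule(html: str) -> NewsItem:
        soup = BeautifulSoup(html, "html.parser")
        def extract(selector):
            node = soup.select_one(selector)
            return node.get_text(strip=True) if node else ""
        return NewsItem(**{field: extract(sel) for field, sel in RULE.items()})

    sample = "<article><h1>Budget approved</h1><p class='byline'>EFE</p></article>"
    print(apply_rule(sample))       # NewsItem(headline='Budget approved', byline='EFE')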

Relevance: 10.00%

Abstract:

Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analysis for different nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
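
To make the classification step tangible, here is a minimal sketch of training a text classifier to separate nano-focused trial summaries from traditional ones; the toy texts and labels stand in for the manually annotated gold standard, which is not reproduced here:

    # Sketch: classifying clinical-trial summaries as nano vs. traditional pharmaceuticals.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = [
        "liposomal nanoparticle formulation of doxorubicin for solid tumors",
        "gold nanoshell mediated photothermal ablation of prostate lesions",
        "oral metformin versus placebo in adults with type 2 diabetes",
        "randomized trial of atorvastatin dosing in hypercholesterolemia",
    ]                                   # placeholder summaries, not registry records
    labels = ["nano", "nano", "traditional", "traditional"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["phase I trial of a polymeric nanoparticle drug carrier"]))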

Relevance: 10.00%

Abstract:

Apart from providing semantics and reasoning power over data, ontologies enable and facilitate interoperability across heterogeneous systems and environments. A good practice when developing ontologies is to reuse as much existing knowledge as possible, both to increase interoperability by reducing heterogeneity across models and to reduce development effort. Ontology registries, indexes and catalogues facilitate the task of finding, exploring and reusing ontologies by collecting them from different sources. This paper presents an ontology catalogue for smart cities and related domains. The catalogue is based on curated metadata and incorporates ontology evaluation features. It represents the first approach of its kind within this community and should be highly useful for new ontology developments and for describing and annotating existing ontologies.
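
As a rough sketch of the kind of curated metadata and evaluation results such a catalogue entry could carry (the field names and values are illustrative assumptions, not the catalogue's actual schema):

    # Sketch: one catalogue entry combining curated metadata with evaluation results.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class OntologyEntry:
        uri: str
        title: str
        domains: List[str]
        reused_ontologies: List[str] = field(default_factory=list)
        evaluation: Dict[str, object] = field(default_factory=dict)  # e.g. pitfall counts

    entry = OntologyEntry(
        uri="http://example.org/ontology/parking",      # placeholder URI
        title="Smart parking ontology",
        domains=["smart cities", "mobility"],
        reused_ontologies=["http://www.w3.org/2006/time"],
        evaluation={"syntax_valid": True, "missing_labels": 3},
    )
    print(entry.title, entry.evaluation)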

Relevance: 10.00%

Abstract:

Introduction: Statistics on the occurrence of new cancer cases are essential for planning and monitoring cancer-control activities. In the state of São Paulo, cancer incidence is obtained indirectly through official estimates (for the state as a whole and for its capital) and directly in municipalities covered by a population-based cancer registry (PBCR). There are currently three active PBCRs (São Paulo, Jaú and Santos), one inactive (Barretos) and one being re-established (Campinas). Given the lack of information on cancer incidence in areas not covered by a PBCR, this study aimed to estimate cancer incidence and to calculate crude and age-standardized rates, by sex and primary tumor site, for the 17 Regional Health Care Networks (RRAS) of São Paulo and their municipalities in 2010. Methods: The incidence/mortality (I/M) ratio, by sex, five-year age group from 0 to 80 years and primary tumor site, was used as the incidence estimator. The numerator of the ratio was the aggregate number of new cases between 2006 and 2010 in the two active PBCRs (Jaú and São Paulo, covering 0.3% and 27.3% of the state population, respectively); the denominator was the official number of deaths in the same areas and period. The estimated number of new cases was obtained by multiplying the I/M ratios by the number of cancer deaths registered in 2010 for the municipalities forming each RRAS, or for each municipality. The reference method was that used in the Globocan series of the International Agency for Research on Cancer. Incidence rates were age-adjusted by the direct method, using the world standard population. Results: An estimated 53,476 new cancer cases were expected among males and 55,073 among females (excluding non-melanoma skin cancer), with age-standardized rates of 261/100,000 and 217/100,000, respectively. Among males, RRAS 6 had the highest age-standardized incidence rate for all cancers combined (285/100,000) and RRAS 10 the lowest (207/100,000). The most frequent cancers in men were prostate (77/100,000), colon/rectum/anus (27/100,000) and trachea/bronchus/lung (16/100,000). Among women, age-standardized incidence rates ranged from 170/100,000 (RRAS 11) to 252/100,000 (RRAS 07); breast cancer was the most frequent (58/100,000), followed by colon/rectum/anus (23/100,000) and cervix uteri (9/100,000). Conclusions: The results show different incidence patterns, with rates that exceeded the statewide figures. Data from local PBCRs can be used to derive regional and local estimates indirectly. The incidence rates presented here may be under- or overestimated, reflecting the quality, completeness and patterns observed in the most representative PBCR included in the analysis.
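
A minimal sketch of the estimation described above: multiply age-specific incidence/mortality ratios observed in a reference registry by the deaths registered in the target area, then age-standardize by the direct method. The age groups, ratios and counts below are invented placeholders, not the study's data:

    # Sketch: I/M-ratio incidence estimation plus direct age standardization.
    im_ratio  = {"40-44": 2.6, "45-49": 2.2, "50-54": 1.9}             # registry I/M ratios (illustrative)
    deaths    = {"40-44": 12,  "45-49": 25,  "50-54": 40}              # deaths registered in the target area
    pop       = {"40-44": 90_000, "45-49": 80_000, "50-54": 70_000}    # target-area population
    world_std = {"40-44": 6_000, "45-49": 6_000, "50-54": 5_000}       # world standard population weights

    est_cases = {a: im_ratio[a] * deaths[a] for a in deaths}           # estimated new cases per age group

    asr = 100_000 * sum((est_cases[a] / pop[a]) * world_std[a] for a in est_cases)
    asr /= sum(world_std.values())                                     # direct method
    print(est_cases, f"ASR = {asr:.1f} per 100,000")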

Relevance: 10.00%

Abstract:

Introduction: Confidence in the capacity to avoid certain deaths, or to defer their occurrence, underlies every health policy, one of whose main outcomes should be to reduce avoidable deaths and to control the conditions that increase the risk of dying. Objectives: To establish variations in the trend of avoidable mortality (AM) registered in Colombia between 1985 and 2002, as indicators of the actual impact that health policy reforms may have had on its determinants. Methods: Study of AM based on official death records and census projections for Colombia, 1985-2002. To determine avoidability, an inventory of avoidable causes of death (ICME) adjusted to the country's epidemiological conditions during the study period was applied. Results: Of the registered deaths, 75.3% were classified as avoidable. Seven trend patterns were identified that reflect, in particular ways, the effects of public policies on the determinants of mortality. Conclusions: Overall, AM has been declining in Colombia since 1985 in the general population and among men, without significant variations during the period. Variations in the trends of the adjusted rates for several groups of causes suggest a deterioration in the control of their determinants, especially since 1990. The policy changes introduced in recent years were not reflected in better control of avoidable deaths, even though health spending increased very markedly in the country.
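
As a simple illustration of the classification step described above, the sketch below tags death records as avoidable against an inventory of avoidable causes, reduced here to a handful of invented ICD-10 code prefixes (the real ICME is far more detailed):

    # Sketch: classifying registered deaths as avoidable using a cause-of-death inventory.
    AVOIDABLE_PREFIXES = {"A15", "B20", "C53", "I10"}   # illustrative stand-ins for ICME entries

    def is_avoidable(icd10_code: str) -> bool:
        return any(icd10_code.startswith(prefix) for prefix in AVOIDABLE_PREFIXES)

    registered_deaths = ["C53.9", "C34.1", "I10", "V89.2", "A15.0"]   # toy death records
    share = sum(is_avoidable(code) for code in registered_deaths) / len(registered_deaths)
    print(f"{share:.1%} of registered deaths classified as avoidable")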

Relevance: 10.00%

Abstract:

Femicide, defined as the killing of females by males because they are female, is becoming recognized worldwide as an important, ongoing manifestation of gender inequality. Despite its widespread prevalence, only a few countries maintain specific registries on this issue. This study aims to assemble expert opinion regarding the strategies that might feasibly be employed to promote, develop and implement an integrated and differentiated femicide data collection system in Europe, at both the national and international levels. Concept mapping methodology was followed, involving 28 experts from 16 countries in generating strategies and in sorting and rating them with respect to relevance and feasibility. The experts involved were all members of the EU COST Action on femicide, a scientific network of experts on femicide and violence against women across Europe. The resulting conceptual map consists of 69 strategies organized into 10 clusters, which fit into two domains: “Political action” and “Technical steps”. There was consensus among participants on the high relevance of strategies to institutionalize national databases and to raise public awareness through different stakeholders, while strategies to promote media involvement were identified as the most feasible. Differences in perceived priorities according to the human development index level of the experts' countries were also observed.
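
To illustrate the analytic core of concept mapping as used above, the sketch below aggregates experts' pile sorts into a co-sorting similarity matrix and clusters the strategies hierarchically; mean relevance and feasibility ratings would then be attached per cluster. The sortings shown are invented placeholders, not the study's data:

    # Sketch: clustering strategies from pile-sort data, the core step of concept mapping.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # One list per expert: the pile each of the 5 strategies was sorted into (toy data).
    sortings = [
        [0, 0, 1, 1, 2],
        [0, 1, 1, 1, 2],
        [0, 0, 0, 1, 2],
    ]
    n = len(sortings[0])
    co_sorted = np.zeros((n, n))
    for piles in sortings:
        for i in range(n):
            for j in range(n):
                co_sorted[i, j] += piles[i] == piles[j]

    distance = len(sortings) - co_sorted            # sorted together often -> small distance
    condensed = distance[np.triu_indices(n, k=1)]   # condensed form expected by linkage()
    clusters = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
    print(clusters)                                  # cluster label per strategy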