191 results for GIB-aren mintz ereduak
Abstract:
Study objective. This was a secondary data analysis of a study designed and executed in two phases to investigate several questions: Why aren't more investigators conducting successful cross-border research on human health issues? What are the barriers to conducting this research? What interventions might facilitate cross-border research? Methods. Key informant interviews and focus groups were used in Phase One, and structured questionnaires in Phase Two. A multi-question survey was created based on the findings of the focus groups and distributed to a wider circle of researchers and academics for completion. The data were entered and analyzed using SPSS software. Setting. El Paso, TX, located on the U.S.-Mexico border. Participants. Individuals from local academic institutions and the State Department of Health. Results. From the transcribed focus-group data, eight major themes emerged: Political Barriers, Language/Cultural Barriers, Differing Goals, Geographic Issues, Legal Barriers, Technology/Material Issues, Financial Barriers, and Trust Issues. The questionnaire was created from these themes. The response rate for the questionnaires was 47%. The largest obstacles revealed by this study were identifying a funding source for the project (47% agreeing or strongly agreeing), difficulties paying a foreign counterpart (33% agreeing or strongly agreeing), and administrative changes in Mexico (31% agreeing or strongly agreeing). Conclusions. Many U.S. investigators interested in cross-border research have been discouraged in their efforts by various barriers. The majority of survey respondents felt that financial issues and changes in Mexican governments were the most significant obstacles. While some of these barriers can be overcome simply by collaboration among motivated groups, others may be more difficult to remove. Although further evaluation of this research question is warranted, the information obtained through this study is sufficient to support the creation of a Cross-Border Research Resource Manual for individuals interested in conducting research with Mexico.
Abstract:
Accurate screening for anemia at Red Cross blood donor clinics is essential to maintaining a safe national blood supply. Despite the importance of identifying anemia correctly by measurement of hemoglobin or hematocrit (hemoglobin/hematocrit), there is no consensus regarding the efficacy of the current two-stage screening method, which uses the Readacrit™ microhematocrit in conjunction with copper sulfate. A cross-sectional study was implemented in which hemoglobin/hematocrit was measured, with the present method and four new devices, on 504 prospective blood donors at a Canadian Red Cross permanent blood donor clinic in London, Canada. Concurrently gathered venous and capillary blood samples were tested by each device and compared to Coulter S IV™-determined venous standard readings. Instrument hemoglobin/hematocrit means were statistically calibrated to the standard ones in order to appraise systematic deviations from the standard. Classification analysis was employed to assess concordance between each instrument and the standard when classifying prospective donors as anemic or non-anemic, both when each instrument was used alone (single stage) and when copper sulfate was used as a preliminary screen (two stage), simulated over a range of anemia prevalences. The Hemoximeter™ and Compur M1000™ devices had the highest correlations of hemoglobin measurements with the standard for both capillary (n.s.) and venous blood (p < .05). Analysis of variance (ANOVA) also showed them to be the most accurate (p < .05), as did both single- and two-stage classification analysis; both are therefore recommended. There was a smaller difference between instruments for two-stage than for single-stage screening, so instrument choice is less crucial for the former. The present method was adequate for two-stage screening as tested, but simulations showed that it would discriminate poorly in populations with a higher prevalence of anemia. The Stat-crit and Readacrit, which measure hematocrit, became less accurate at crucial low hematocrit levels. In light of this finding and the introduction of new, effective, and easy-to-use hemoglobin-measuring instruments, the continued use of hematocrit as a surrogate for hemoglobin is not recommended.
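The single- versus two-stage comparison above can be sketched in a few lines. The following is a minimal illustration in Python; all donor readings, the hemoglobin cutoff, and the copper-sulfate outcomes are invented for demonstration and are not taken from the study:

```python
# Minimal sketch of single- vs two-stage anemia screening concordance.
# All data and cutoffs below are hypothetical, for illustration only.

CUTOFF = 12.5  # g/dL hemoglobin cutoff (illustrative)

# (true_hb, instrument_hb, passed_copper_sulfate) per hypothetical donor
donors = [
    (13.1, 13.0, True),
    (12.0, 12.6, False),  # anemic; instrument misses, copper sulfate catches
    (11.8, 11.9, False),  # anemic; caught by both screens
    (14.2, 14.0, True),
    (12.3, 12.2, True),   # anemic; caught by the instrument alone
]

def classify(donor, two_stage):
    """Return True if the donor is classified as anemic."""
    true_hb, inst_hb, passed_cuso4 = donor
    if two_stage and not passed_cuso4:
        return True  # flagged anemic by the preliminary copper sulfate screen
    return inst_hb < CUTOFF  # instrument decision

def sensitivity(two_stage):
    """Fraction of truly anemic donors the screen catches."""
    anemic = [d for d in donors if d[0] < CUTOFF]
    caught = [d for d in anemic if classify(d, two_stage)]
    return len(caught) / len(anemic)

print(sensitivity(two_stage=False))
print(sensitivity(two_stage=True))
```

With these toy numbers the preliminary copper sulfate stage rescues a borderline donor the instrument misclassifies, which is why differences between instruments shrink under two-stage screening.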
Abstract:
An important health issue in the United States today is the large number of people who have problems accessing needed health care because they lack health insurance coverage. Providing health insurance coverage for the working uninsured is a particularly significant challenge in Texas, which has the highest percentage of uninsured in the nation. In response to the low rate of employer-sponsored coverage in the Houston area and the growing number of uninsured, the Harris County Health Care Alliance (HCHA) developed and implemented the Harris County 3-Share Plan. A 3-Share Plan is not insurance, but provides health coverage in the form of a benefits package to employers who subscribe to the program and offer it to their employees. A cross-sectional study was conducted to describe 3-Share employer and employee participants and evaluate their outcomes after the Plan's first year of operation. Between September and December 2011, 85% of employers enrolled in the 3-Share Plan completed a survey about the affordability of the Plan, their satisfaction with it, and its impact on employee recruitment, retention, productivity, and absenteeism. Forty-five percent of employees enrolled in the Plan responded to a survey asking about its affordability, the accessibility of health care, the availability of providers on the plan, utilization of primary care providers and the ER, and satisfaction with the plan. The findings show that employers and employees joined the plan because of its low cost and, once they had participated, the majority of both groups found it affordable. The majority of employees report getting access easily and without delay; for those who cannot get access, or are delayed, the main cause is non-financial barriers to care. Ultimately, employees are satisfied with the 3-Share Plan and intend to continue their health coverage under it. The 3-Share Plan will keep people in a system of care and promote health, which will benefit the individuals, the businesses, and the community of Harris County.
Abstract:
The curve number (CN) methodology is the most widely used approach for transforming total precipitation into effective precipitation. It is therefore a valuable tool for hydrological studies of watersheds, especially where long, reliable records are lacking. The methodology requires knowledge of the soil type and land use of the watershed under study, together with rainfall records. In this work, LANDSAT image processing was applied to the zoning of vegetation and land use in the Arroyo Pillahuinco Grande watershed (38° S, 61° 15' W), located on the La Ventana hill system in the southwest of Buenos Aires province, Argentina. Analysis of their interrelation yielded the CN and runoff coefficient (CE) values. Digital processing of the georeferenced raster database was carried out with geographic information system tools (Idrisi Kilimanjaro). Multiple regression analysis of the variables yielded an R² explaining 89.77% of the variability of CE (α < 0.01). The results are presented as a diagnosis and CN zoning, in which the greatest influence on runoff is related to the vegetation cover and land use variables.
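The CN-to-effective-precipitation transformation mentioned above can be sketched with the standard SCS runoff equation. The following is a minimal illustration; the CN value and storm depth are arbitrary examples, not values from the study:

```python
def scs_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Effective precipitation (runoff depth) Q from total precipitation P
    using the SCS curve number method, in metric units (mm)."""
    s = 25400.0 / cn - 254.0  # potential maximum retention S (mm)
    ia = ia_ratio * s         # initial abstraction, conventionally 0.2 * S
    if p_mm <= ia:
        return 0.0            # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative values only: an 80 mm storm on a CN = 75 watershed
q = scs_runoff(80.0, 75.0)
print(round(q, 1))
```

The runoff coefficient CE discussed in the abstract is then simply `q / p_mm` for a given event.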
Abstract:
By the mid-eighteenth century, the great merchants of various Spanish American regions had accumulated enough wealth to buy titles of nobility and distinctions, or to establish entailed estates (mayorazgos) that would burnish their names and perpetuate their acquired assets. This process is most evident in the Mexican and Peruvian territories, but no concrete cases are known for the Río de la Plata region. As José Torre Revello argued, this does not mean that Río de la Plata merchants did not try to ennoble themselves. The present case study details how Don Vicente de Azcuénaga attempted to found a mayorazgo in the city of Buenos Aires in favor of his first-born son, Miguel. Through this study, based on the "probanzas," one can observe how the Azcuénaga family sought to elevate its name above those of its contemporaries, while the relationship between father and son leads us, at the same time, to reconsider questions concerning traditions of accumulating and preserving family patrimonies.
Abstract:
Over the last decade, nanomedicine research has generated a large amount of heterogeneous data, distributed across multiple information sources. Information and communication technologies (ICT) can facilitate medical research at the nanometer scale by providing mechanisms and tools for managing all these data intelligently. While biomedical informatics covers the processing and management of information generated from the public health and clinical levels down to the molecular level, nanoinformatics extends this scope to include the "nano level," managing and analyzing the results generated by nanomedicine research and developing new lines of work in this interdisciplinary space. In this new scientific area of nanoinformatics (which may consolidate into a discipline in its own right over the coming years), the Biomedical Informatics Group (GIB) of the Universidad Politécnica de Madrid (UPM) participates in numerous initiatives, which are detailed below.
Abstract:
European public administrations must manage citizens' digital identities, particularly with interoperability among different countries in mind. Owing to the diversity of electronic identity management (eIDM) systems, when users of one such system seek to communicate with governments that use a different system, the two systems must be linked and understand each other. To achieve this, the European Union is working on an interoperability framework. This article provides an overview of the current state of eIDM systems at the pan-European level, identifying and analyzing the issues on which agreement exists as well as those that remain unresolved and are preventing the adoption of a large-scale model.
Abstract:
Bilingual edition: Basque-Spanish
Abstract:
This work was carried out within the framework of the EURECA (Enabling information re-Use by linking clinical REsearch and Care) and INTEGRATE (Integrative Cancer Research Through Innovative Biomedical Infrastructures) projects, in which the Biomedical Informatics Group (GIB) of the UPM collaborates with other European universities and healthcare institutions. Both projects develop services and infrastructures whose main goal is to store clinical information from diverse sources (for example, hospital electronic health records, clinical trials, or biomedical research articles) in a common, easily accessible, and queryable form, so as to support collaborative research across institutions. This is the core idea of semantic interoperability, on which both projects focus and which is key to the correct functioning of their software: data are exchanged under a shared, common, and unambiguous representation model, in which every clinical concept, term, or data item has a single form of representation. This enables knowledge inference and fits naturally into the context of medical research. The tool developed in this work is likewise oriented toward maximizing semantic interoperability: it loads clinical information in a standardized format into a common data storage model implemented in relational databases. The work was carried out between February 3 and June 6, 2014, during the second semester of the 2013/2014 academic year, following a waterfall life cycle in which each phase begins only after the previous one has been completed, reviewed, and accepted; the only exception was the documentation task (the writing of this report), which proceeded in parallel with all the others.
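The idea of loading standardized clinical information into a common relational data model can be sketched as follows. This is a minimal illustration using SQLite; the table layout, concept codes, and records are invented for demonstration and do not reflect the actual EURECA/INTEGRATE schemas:

```python
import sqlite3

# Hypothetical "common data model": one table of coded clinical facts,
# so that every concept has a single, unambiguous representation.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clinical_fact (
        patient_id   TEXT,
        concept_code TEXT,  -- code from a shared terminology
        value        TEXT,
        source       TEXT   -- EHR, clinical trial, article, ...
    )
""")

# Records arriving from heterogeneous sources, already standardized
incoming = [
    ("P001", "LOINC:718-7", "13.2", "hospital_ehr"),  # hemoglobin
    ("P001", "SNOMED:38341003", "true", "trial_db"),  # hypertension
]
conn.executemany("INSERT INTO clinical_fact VALUES (?, ?, ?, ?)", incoming)

# A single query now spans data that originated in different systems
rows = conn.execute(
    "SELECT concept_code, value, source FROM clinical_fact WHERE patient_id = ?",
    ("P001",),
).fetchall()
print(rows)
```

Because both records use shared terminology codes rather than source-specific field names, one query can combine EHR and trial data without knowing where each fact came from.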
Abstract:
In recent years, the relentless growth of biomedical data sources, driven by massive data generation techniques (especially in genomics) and by the expansion of technologies for communicating and sharing information, has led biomedical research to rely almost exclusively on the distributed analysis of information and on finding relationships between different data sources. This is a complex task because of the heterogeneity of the sources involved (whether in formats, technologies, or domain models). Several projects aim to homogenize these sources so that their information can be presented in an integrated way, as if it were a single database, but no existing work fully automates this process of semantic integration. There are two main approaches to integrating heterogeneous data sources: centralized and distributed. Both require translating data from one model to another. This translation relies on formalizations of the semantic relationships between the underlying models and the central model, commonly called annotations. In the context of semantic data integration, database annotations define relationships between terms of equal meaning, making automatic translation of the information possible. Depending on the problem, these relationships hold between individual concepts or between whole sets of concepts (views); the work presented here focuses on the latter. The European project p-medicine (FP7-ICT-2009-270089) follows the centralized approach, using view-based annotations over databases modeled in RDF. Data extracted from the different sources are translated and integrated into a data warehouse. Within the p-medicine platform, the Biomedical Informatics Group (GIB) of the Universidad Politécnica de Madrid, where I carried out this work, provides a tool for generating the required annotations of the RDF databases. This tool, called Ontology Annotator, supports the manual creation of view-based annotations. However, although it displays the data sources graphically, most users find it difficult to operate and spend too much time on the annotation process. Hence the need for a more advanced tool capable of assisting the user in annotating databases in p-medicine, automating the most complex parts of the process and presenting the information about RDF database annotations in a natural, understandable way. This tool has been named Ontology Annotator Assistant, and this document describes its design and development, as well as several novel algorithms created by the author for its operation. The tool offers functionality not previously available in any other tool in the area of automatic annotation and semantic integration of databases.
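View-based annotations of the kind described above can be sketched in a few lines. The following is a purely illustrative example in plain Python; the source fields, central-model terms, and records are hypothetical and unrelated to the real p-medicine annotations:

```python
# A view-based annotation maps a whole set of source fields (a "view")
# onto terms of the central model, enabling automatic translation.
# All names below are hypothetical, for illustration only.

annotation = {
    "view": "hospital_patients",      # source view being annotated
    "mappings": {                     # source field -> central-model term
        "pat_birth_year": "central:BirthYear",
        "hb_g_dl": "central:Hemoglobin",
    },
}

def translate(record: dict, annotation: dict) -> dict:
    """Rewrite a source record into central-model terms, dropping
    fields the annotation says nothing about."""
    mappings = annotation["mappings"]
    return {mappings[k]: v for k, v in record.items() if k in mappings}

source_record = {"pat_birth_year": 1964, "hb_g_dl": 13.2, "internal_id": 42}
print(translate(source_record, annotation))
```

An assistant tool of the kind the abstract describes would propose the `mappings` entries automatically (for example, by matching labels between the source schema and the central ontology) rather than requiring the user to write them by hand.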