904 results for Clinical Trials Design


Relevance:

90.00%

Publisher:

Abstract:

The challenges regarding the seamless integration of the distributed, heterogeneous and multilevel data arising in the context of contemporary, post-genomic clinical trials cannot be effectively addressed with current methodologies. An urgent need exists to access data in a uniform manner, to share information among different clinical and research centers, and to store data in secure repositories that assure patient privacy. Advancing Clinico-Genomic Trials (ACGT) was a European Commission funded Integrated Project that aimed at providing tools and methods to enhance the efficiency of clinical trials in the -omics era. The project, now completed after four years of work, involved the development of a set of methodological approaches as well as tools and services, and their testing in the context of real-world clinico-genomic scenarios. This paper describes the main experiences of using the ACGT platform and its tools within one such scenario and highlights the very promising results obtained.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the methodology followed to carry out the PRONAF research project (ClinicalTrials.gov number NCT01116856). Background: At present, scientific consensus exists on the multifactorial etiopathogenesis of obesity. Both professionals and researchers agree that treatment must also follow a multifactorial approach, including diet, physical activity, pharmacology and/or surgical treatment, with the last two reserved for cases of morbid obesity or failure of the former. The aim of the PRONAF study is to determine what type of exercise combined with caloric restriction is the most appropriate to be included in overweight and obesity intervention programs, and the aim of this paper is to describe the design and the evaluation methods used to carry out the PRONAF study. Methods/design: One hundred nineteen overweight (46 males) and 120 obese (61 males) subjects aged 18-50 years were randomly assigned to a strength training group, an endurance training group, a combined strength + endurance training group, or a diet and physical activity recommendations group. The intervention period was 22 weeks (in all cases 3 training sessions/week for 22 weeks, plus 2 weeks for pre- and post-evaluation). All subjects followed a hypocaloric diet (25-30% less energy intake than the daily energy expenditure estimated by accelerometry), with 29-34% of the total energy intake from fat, 14-20% from protein, and 50-55% from carbohydrates. The major outcome variables assessed were biochemical and inflammatory markers, body composition, energy balance, physical fitness, nutritional habits, genetic profile, and quality of life. 180 (75.3%) subjects finished the study, giving a dropout rate of 24.7%. Dropout reasons included: personal reasons 17 (28.8%), low adherence to exercise 3 (5.1%), low adherence to diet 6 (10.2%), job change 6 (10.2%), and loss of interest 27 (45.8%). Discussion: The feasibility of the study has been proven, with a low dropout rate that is consistent with the estimated sample size. Transfer of knowledge is foreseen as a spin-off, so that overweight and obese subjects can benefit from the results; the aim is to transfer it to sports centres. Effectiveness on individual health-related parameters, in order to determine the most effective training programme, will be analysed in forthcoming publications.
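As a back-of-the-envelope illustration of the dietary prescription described above, the sketch below computes a 25-30% restricted intake from an accelerometry-estimated daily energy expenditure and splits it across the reported macronutrient ranges. The function name and the example expenditure are hypothetical, not taken from the PRONAF protocol.

```python
# Illustrative sketch (not PRONAF code): derive a hypocaloric prescription
# from an accelerometry-estimated total daily energy expenditure (TDEE).

def hypocaloric_prescription(tdee_kcal: float, restriction: float = 0.275):
    """Apply the 25-30% restriction (midpoint by default) and split the
    remaining energy into the fat/protein/carbohydrate ranges reported."""
    intake = tdee_kcal * (1.0 - restriction)
    return {
        "target_kcal": round(intake),
        # Macronutrient energy shares, midpoints of the reported ranges.
        "fat_kcal": round(intake * 0.315),           # 29-34% of energy
        "protein_kcal": round(intake * 0.17),        # 14-20% of energy
        "carbohydrate_kcal": round(intake * 0.525),  # 50-55% of energy
    }

# Example: a subject with an estimated TDEE of 2600 kcal/day.
print(hypocaloric_prescription(2600.0))
# {'target_kcal': 1885, 'fat_kcal': 594, 'protein_kcal': 320, 'carbohydrate_kcal': 990}
```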

Relevance:

90.00%

Publisher:

Abstract:

Cardiovascular diseases are the main cause of death worldwide and are expected to remain so, generating high costs for health care systems. Implantable cardiac devices have become one of the options for the diagnosis and treatment of cardiac rhythm disorders, and clinical research with these devices has acquired great importance in fighting diseases that affect so many people in our society. Pharmaceutical and medical technology companies, as well as investigators, are involved in an increasing number of clinical research projects. The growth in volume and complexity of medical research is raising the expenditure associated with clinical investigation, driving health care sector companies to explore new solutions to reduce clinical trial costs. Information and Communication Technologies have facilitated clinical research, mainly in the last decade: electronic systems and software applications have provided new possibilities in the acquisition, processing and analysis of clinical study data, and web technology enabled the first electronic data capture systems, which have evolved over recent years. Nevertheless, the improvement of these systems remains a key aspect for the progress of clinical research. In addition, the traditional way of running clinical studies with implantable cardiac devices needed better processing of the data stored by these devices, as well as better merging of these data with the data collected by investigators and patients.
The rationale of this research is the need to improve the efficiency of clinical investigation with implantable cardiac devices by reducing project costs and development times, increasing the quality of the collected data, and obtaining better value from the data through the merging of data from different sources or trials. To this end, the research develops two models: (1) a model for the retrieval and processing of data from clinical studies with implantable cardiac devices, which structures and standardizes these procedures in order to reduce the development time of these tasks, improve the quality of the results and consequently lower costs; and (2) a metric model integrated into an Electronic Data Capture (EDC) system that allows analysis of the results of a research project, and particularly of the EDC's performance, in order to improve these systems, reduce project time and costs, and raise the quality of the collected clinical data. As a result of this work, the proposed processing model reduced the average data processing time by more than 90% and the associated costs by more than 85%, thanks to automated data extraction and storage, while also improving data quality. The metric model enables a detailed descriptive analysis of a set of indicators that characterize the performance of each clinical research project, and makes comparison between studies feasible. The results of this doctoral thesis demonstrate that applying the two developed models in real clinical trials improved project efficiency, reducing overall costs, shortening execution times, and increasing the quality of the collected data. The main contributions of this research work to scientific knowledge are the implementation of an intelligent processing system for the data stored by implantable cardiac devices, the integration into it of a global database optimized for all device models, the automated generation of a unified repository of clinical data and implantable cardiac device data, and the design of a metric that can be applied to and integrated in electronic data capture systems to analyze the performance of clinical research projects.
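As a rough sketch of the kind of indicators such a metric model might expose, the snippet below computes a few descriptive EDC performance figures. The record fields and example values are hypothetical; this is not the thesis implementation.

```python
# Illustrative sketch (not the thesis metric model): descriptive indicators
# an EDC could compute to characterize the performance of a research project.

from statistics import mean

def edc_performance_metrics(records):
    """records: list of dicts with per-CRF processing data, e.g.
    {'entry_minutes': 12.5, 'queries': 2, 'auto_imported': True}."""
    n = len(records)
    return {
        "records": n,
        "mean_entry_minutes": mean(r["entry_minutes"] for r in records),
        "queries_per_record": sum(r["queries"] for r in records) / n,
        "auto_import_rate": sum(r["auto_imported"] for r in records) / n,
    }

# Invented before/after data: manual entry versus automated device import.
baseline = [{"entry_minutes": 30, "queries": 3, "auto_imported": False}] * 40
automated = [{"entry_minutes": 2, "queries": 1, "auto_imported": True}] * 40
before, after = edc_performance_metrics(baseline), edc_performance_metrics(automated)
print(f"time reduction: {1 - after['mean_entry_minutes'] / before['mean_entry_minutes']:.0%}")
# time reduction: 93%
```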

Relevance:

90.00%

Publisher:

Abstract:

Diabetes mellitus is the set of alterations caused by a defect in the amount of insulin secreted or by a suboptimal use of that insulin. It is a direct cause of short-, medium- and long-term complications that reduce the quality of life and life expectancy of people with diabetes, and it is currently one of the most important health problems: prevalence has tripled in the past 20 years, and almost 300 million people are expected to have diabetes by 2025. This increased prevalence, together with the morbidity and mortality associated with micro- and macrovascular complications, has made diabetes a burden on health systems, their financial resources and their professionals, and thus a major individual and public health problem. There is currently no cure for the disease, so the therapeutic goal of diabetes treatment focuses on normalizing blood glucose, trying to minimize hyper- and hypoglycemic events and to avoid, or at least delay, the appearance and progression of the vascular complications that are the main cause of morbidity and mortality among people with diabetes. Adequate diabetes control requires an individualized treatment that considers many factors for each patient (age, physical activity, eating habits, presence of complications related or unrelated to diabetes, cultural factors, etc.). In the short term, however, the two most influential variables the patient can manage to act on his or her glycemic level are the administered insulin and the diet. Both exhibit a delay between the time of application and the onset of action, associated with their absorption. Therefore, the ability to predict the evolution of the glycemic profile in the near future can help the patient make appropriate decisions to maintain good control of the disease and avoid risky situations. This is the goal of glucose prediction in diabetes: to anticipate the evolution of the glycemic profile in the near future so as to help the patient adapt his or her lifestyle and corrective actions, so that blood glucose levels approach those of a healthy person, thereby avoiding the symptoms and complications of poor control. The recent emergence of continuous glucose monitoring systems has provided new alternatives.
The availability of an exhaustive record of glycemic profile variations, with a sampling period of between one and five minutes, has enabled new models that seek to predict blood glucose using only previous glucose measurements, or at least significantly reducing the input information required by the algorithms. Requiring less intervention from the patient opens new application possibilities for glucose predictors, making their use feasible in real time as decision support systems, as detectors of risky situations, or integrated into automatic control algorithms. This doctoral thesis proposes several glucose prediction algorithms for patients with diabetes, based on the information recorded by a continuous glucose monitoring system and incorporating information on the administered insulin and carbohydrate intake. The proposed algorithms were evaluated in simulation and with patient data recorded in different clinical studies. To this end, a comprehensive methodology was developed to characterize the performance of the prediction models from every point of view: accuracy, delay, noise, and the ability to detect risky situations. The necessary simulation tools were developed, and the patient databases were analyzed and prepared. One of the proposed algorithms was also tested to verify the validity of real-time prediction in a clinical scenario: tools were developed to carry out the defined experimental protocol, in which the patient consults the prediction on demand and retains control over his or her metabolic variables. This experiment made it possible to assess the impact of using glucose prediction on glycemic control.
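To make the prediction setting concrete, here is a minimal autoregressive predictor over CGM samples, fitted by least squares. The model order, the 5-minute sampling period and the synthetic trace are assumptions for illustration; this is not one of the thesis algorithms.

```python
# Illustrative sketch: a simple autoregressive (AR) glucose predictor.

import numpy as np

def fit_ar(glucose: np.ndarray, order: int = 6) -> np.ndarray:
    """Fit AR coefficients so g[t] ~ sum(c[i] * g[t-1-i]), by least squares."""
    rows = [glucose[t - order:t][::-1] for t in range(order, len(glucose))]
    X, y = np.array(rows), glucose[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(history: np.ndarray, coeffs: np.ndarray, steps: int) -> float:
    """Iterate one-step predictions to a multi-step horizon
    (e.g. steps=6 at a 5-minute sampling period -> 30 minutes ahead)."""
    window = list(history[-len(coeffs):])
    for _ in range(steps):
        nxt = float(np.dot(coeffs, window[::-1]))  # most recent sample first
        window = window[1:] + [nxt]
    return window[-1]

# Toy usage with a synthetic rising glucose trace (mg/dl every 5 minutes).
trace = np.array([110, 112, 115, 119, 124, 130, 137, 145, 154, 164], float)
c = fit_ar(trace, order=4)
print(round(predict_ahead(trace, c, steps=6), 1))  # ~30-minute-ahead estimate
```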

Relevance:

90.00%

Publisher:

Abstract:

This work falls within the INTEGRATE and EURECA projects, whose objective is the development of a semantic interoperability layer enabling the integration of clinical care and clinical research data, providing a common platform that can be deployed in different clinical institutions and that facilitates the exchange of information among them. In this way, clinical practice is improved through cooperation between research institutions with common objectives. The projects make use of existing clinical standards and vocabularies, such as HL7 and SNOMED, adapting them to the particular needs of the data handled in INTEGRATE and EURECA. Clinical data are represented so that each concept used is unique, avoiding ambiguity and supporting the idea of a common platform. The student worked in the Biomedical Informatics Group at UPM, which is a partner in both European projects. The tool developed performs homogenization of the information stored in the project databases, using the normalization mechanisms provided by the SNOMED-CT medical vocabulary. The normalized databases are then used to answer queries through services in the interoperability layer, since they contain more precise and complete information than the non-normalized databases.
The work was carried out between September 12, 2014, when the training and information-gathering stage began, and January 5, 2015, when the final report was completed, for a total of 324 hours. A waterfall life cycle was followed, in which a task does not begin until the immediately preceding stage has been finished and validated; the only exception was the writing of the report, which proceeded in parallel with the other tasks. The tasks performed and the time devoted to each were: training, i.e. gathering and studying the information needed to implement the tool (30 hours); requirements specification, documenting the requirements the tool must meet (20 hours); design decisions for the tool (35 hours); implementation of the tool's code (80 hours); testing, validating the tool both independently and integrated in the INTEGRATE and EURECA projects (70 hours); debugging, correcting errors and introducing improvements (45 hours); and writing the final report (44 hours).
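A minimal sketch of the normalization idea, assuming a toy lookup table: free-text terms are mapped to SNOMED-CT concept identifiers so that synonyms collapse to a single concept, and unmatched terms fall back to manual review. The concept ids and names shown are placeholders, not taken from a SNOMED-CT release or from the project's tool.

```python
# Illustrative sketch (not the project's tool): normalizing free-text
# clinical terms to SNOMED-CT-style concept identifiers via a lookup table.

SNOMED_INDEX = {
    # term -> (concept id, fully specified name); ids are placeholders
    "breast carcinoma": ("100000001", "Carcinoma of breast (disorder)"),
    "carcinoma of breast": ("100000001", "Carcinoma of breast (disorder)"),
    "hypertension": ("100000002", "Hypertensive disorder (disorder)"),
}

def normalize(term: str):
    """Return the concept for a term, or None if no match, so unmatched
    terms can be queued for manual (semiautomatic) review."""
    return SNOMED_INDEX.get(term.strip().lower())

for raw in ["Breast carcinoma", "Carcinoma of breast", "fatigue"]:
    print(raw, "->", normalize(raw))
# Synonyms collapse to one concept id; 'fatigue' -> None (manual review).
```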

Relevance:

90.00%

Publisher:

Abstract:

In recent years there has been a huge growth in biomedical data sources. The emergence of new techniques for generating genomic data, and of databases that contain this information, has created the need to store it so that it can be accessed and worked with. Information produced in biomedical research is stored in databases because they allow data to be stored and managed simply and quickly, and they come in a variety of formats, such as Excel, CSV or RDF. Current biomedical research is based on data analysis, seeking correlations that allow inferring, for example, new treatments or more effective therapies for a given disease or ailment. The volume of data handled is very large and disparate, which makes it necessary to develop automatic methods for integrating and homogenizing the heterogeneous data.
The European project p-medicine (FP7-ICT-2009-270089) aims to assist medical researchers, in this case in cancer research, by providing them with new tools for managing data and generating new knowledge from the analysis of the managed data. The ingestion of data into the p-medicine platform, and its processing with the methods provided, seek to generate new models for clinical decision support. Within this project there are tools for the integration of heterogeneous data, the design and management of clinical trials, the simulation and visualization of tumors, and the statistical analysis of data. Precisely in the field of heterogeneous data integration arises the need to add external information from public databases to the system, and to relate it to the existing information through semantic integration techniques. To meet this need a tool called Term Searcher was created, which performs this process semiautomatically. The work presented here describes its development and the algorithms created for its operation. The tool offers functionality that did not previously exist in the project for adding new data from public sources and semantically integrating it with private data.
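As an illustration of the semiautomatic matching step such a tool needs, the sketch below ranks candidate vocabulary entries for a query term by string similarity before handing the best ones to a curator. It is not the Term Searcher algorithm; the candidate list and threshold are invented.

```python
# Illustrative sketch: ranking candidate terminology entries for a query
# term by string similarity, a step a semiautomatic matcher can use
# before asking a curator to confirm the mapping.

from difflib import SequenceMatcher

CANDIDATES = [  # hypothetical entries from a public terminology
    "breast carcinoma", "carcinoma in situ of breast",
    "lung carcinoma", "breast cyst",
]

def rank_candidates(query: str, candidates, threshold: float = 0.5):
    """Return (score, candidate) pairs above the threshold, best first."""
    scored = (
        (SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
        for c in candidates
    )
    return sorted((s for s in scored if s[0] >= threshold), reverse=True)

print(rank_candidates("breast carcinomas", CANDIDATES))
# String scores alone can over-match; the curator confirms the mapping,
# which is why the overall process is semiautomatic rather than automatic.
```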

Relevance:

90.00%

Publisher:

Abstract:

The feasibility of using carbohydrate-based vaccines for the immunotherapy of cancer is being actively explored at the present time. Although a number of clinical trials have already been conducted with glycoconjugate vaccines, the optimal design and composition of the vaccines has yet to be determined. Among the candidate antigens being examined is Lewis y (Le^y), a blood group-related antigen that is overexpressed on the majority of human carcinomas. Using Le^y as a model for specificity, we have examined the role of epitope clustering, carrier structure, and adjuvant on the immunogenicity of Le^y conjugates in mice. A glycolipopeptide containing a cluster of three contiguous Le^y-serine epitopes and the Pam3Cys immunostimulating moiety was found to be superior to a similar construct containing only one Le^y-serine epitope in eliciting antitumor cell antibodies. Because only IgM antibodies were produced by this vaccine, the effect on immunogenicity of coupling the glycopeptide to keyhole limpet hemocyanin was examined; although both IgM and IgG antibodies were formed, the antibodies reacted only with the immunizing structure. Reexamination of the clustered Le^y-serine Pam3Cys conjugate with the adjuvant QS-21 resulted in the identification of both IgG and IgM antibodies reacting with tumor cells, thus demonstrating the feasibility of an entirely synthetic carbohydrate-based anticancer vaccine in an animal model.

Relevance:

90.00%

Publisher:

Abstract:

The central role of cyclin-dependent kinases (CDKs) in cell cycle regulation makes them a promising target for studying inhibitory molecules that can modify the degree of cell proliferation. The discovery of specific inhibitors of CDKs, such as polyhydroxylated flavones, has opened the way to the investigation and design of antimitotic compounds. A novel flavone, (-)-cis-5,7-dihydroxyphenyl-8-[4-(3-hydroxy-1-methyl)piperidinyl]-4H-1-benzopyran-4-one hydrochloride hemihydrate (L868276), is a potent inhibitor of CDKs. A chlorinated form, flavopiridol, is currently in phase I clinical trials as a drug against breast tumors. We determined the crystal structure of a complex between CDK2 and L868276 at 2.33 Å resolution, refined to an R-factor of 20.3%. The aromatic portion of the inhibitor binds to the adenine-binding pocket of CDK2, and the position of the phenyl group of the inhibitor enables it to make contacts with the enzyme that are not observed in the ATP complex structure. The analysis of the position of this phenyl ring not only explains the large differences in kinase inhibition among the flavonoid inhibitors but also explains the specificity of L868276 for the inhibition of CDK2 and CDC2.

Relevance:

90.00%

Publisher:

Abstract:

Objectives: To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Study Design and Setting: Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operating characteristic curve, and cutoff point. Test-retest repeatability was assessed using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). Results: The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and for CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). Conclusion: The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to monitor the visual health of computer workers, and can potentially be used in clinical trials and outcome research.
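For reference, Cohen's kappa as reported above corrects observed agreement for the agreement expected by chance. A minimal sketch with made-up binary CVS classifications from two raters:

```python
# Illustrative sketch: Cohen's kappa for two-rater agreement on a binary
# classification. The rating lists below are invented example data.

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e) for paired nominal ratings."""
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_e = sum(                                           # chance agreement
        (a.count(l) / n) * (b.count(l) / n) for l in labels
    )
    return (p_o - p_e) / (1 - p_e)

rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.583
```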

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

CONTEXT: Chitosan, a deacetylated chitin, is a widely available dietary supplement purported to decrease body weight and serum lipids through gastrointestinal fat binding. Although evaluated in a number of trials, its efficacy remains in dispute. OBJECTIVE: To evaluate the efficacy of chitosan for weight loss in overweight and obese adults. DESIGN AND SETTING: A 24-week randomised, double-blind, placebo-controlled trial, conducted at the University of Auckland between November 2001 and December 2002. PARTICIPANTS: A total of 250 participants (82% women; mean (s.d.) body mass index, 35.5 (5.1) kg/m²; mean age, 48 (12) y). INTERVENTIONS: Participants were randomly assigned to receive 3 g chitosan/day (n = 125) or placebo (n = 125). All participants received standardised dietary and lifestyle advice for weight loss. Adherence was monitored by capsule counts. MAIN OUTCOME MEASURES: The primary outcome measure was change in body weight. Secondary outcomes included changes in body mass index, waist circumference, body fat percentage, blood pressure, serum lipids, plasma glucose, fat-soluble vitamins, faecal fat, and health-related quality of life. RESULTS: In an intention-to-treat analysis with the last observation carried forward, the chitosan group lost more body weight than the placebo group (mean (s.e.), -0.4 (0.2) kg (0.4% loss) vs +0.2 (0.2) kg (0.2% gain), P = 0.03) during the 24-week intervention, but effects were small. Similar small changes occurred in circulating total and LDL cholesterol, and glucose (P < 0.01). There were no significant differences between groups for any of the other measured outcomes. CONCLUSION: In this 24-week trial, chitosan treatment did not result in a clinically significant loss of body weight compared with placebo.
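The primary analysis above is an intention-to-treat analysis with the last observation carried forward (LOCF); a minimal sketch of that imputation, with invented visit data:

```python
# Illustrative sketch: last observation carried forward (LOCF) imputation
# for a participant's repeated outcome measurements.

def locf(visits):
    """Fill missing follow-up values (None) with the last observed one."""
    filled, last = [], None
    for v in visits:
        last = v if v is not None else last
        filled.append(last)
    return filled

# Body weight (kg) at baseline and four follow-up visits; the participant
# dropped out after the second follow-up.
weights = [96.4, 96.1, 95.8, None, None]
print(locf(weights))  # [96.4, 96.1, 95.8, 95.8, 95.8]
```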

Relevance:

90.00%

Publisher:

Abstract:

The characterization of blood pressure in treatment trials assessing the benefits of blood pressure lowering regimens is a critical factor for the appropriate interpretation of study results. With numerous operators involved in the measurement of blood pressure in many thousands of patients being screened for entry into clinical trials, it is essential that operators follow pre-defined measurement protocols involving multiple measurements and standardized techniques. Blood pressure measurement protocols have been developed by international societies and emphasize the importance of appropriate choice of cuff size, identification of Korotkoff sounds, and digit preference. Training of operators and auditing of blood pressure measurement may assist in reducing operator-related measurement errors. This paper describes the quality control activities adopted for the screening stage of the 2nd Australian National Blood Pressure Study (ANBP2), a cardiovascular outcome trial of the treatment of hypertension in the elderly that was conducted entirely in general practices in Australia. A total of 54 288 subjects were screened; 3688 previously untreated subjects were identified as having blood pressure >140/90 mmHg at the initial screening visit, and 898 (24%) were not eligible for study entry after two further visits because the elevated reading was not sustained. For both systolic and diastolic blood pressure recordings, observed digit preference fell within 7 percentage points of the expected frequency. Protocol adherence, in terms of the required minimum blood pressure difference between the last two successive recordings, was 99.8%. These data suggest that adherence to blood pressure recording protocols and elimination of digit preference can be achieved through appropriate training programs and quality control activities in large multi-centre community-based trials in general practice. Repeated blood pressure measurement prior to initial diagnosis and study entry is essential to appropriately characterize hypertension in these elderly patients.
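A sketch of how terminal digit preference can be audited is shown below. It assumes readings recorded to the nearest 2 mmHg, so each even terminal digit would be expected 20% of the time; the readings are invented, and the expected frequency used in ANBP2 itself is not stated here, so this shows only the shape of the check.

```python
# Illustrative sketch: auditing terminal digit preference in recorded
# blood pressures against an assumed expected share per digit.

from collections import Counter

def digit_preference(readings_mmhg, expected=0.20):
    """Return {terminal digit: deviation from expected share, in points}."""
    digits = [r % 10 for r in readings_mmhg]
    counts = Counter(digits)
    n = len(digits)
    return {d: 100 * (counts.get(d, 0) / n - expected) for d in (0, 2, 4, 6, 8)}

systolic = [140, 142, 150, 138, 160, 144, 152, 140, 148, 146,
            150, 142, 154, 140, 158, 136, 150, 144, 140, 162]
print(digit_preference(systolic))
# A large positive deviation for 0 suggests operators rounding to 0 mmHg.
```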

Relevance:

90.00%

Publisher:

Abstract:

Background: Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods: We give some examples of the phenomenon and discuss methods to overcome it at the design and analysis stages of a study. Results: The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions: RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
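A small simulation makes the phenomenon concrete: baseline and follow-up are noisy measurements of the same unchanged true values, and a subgroup is selected on a high baseline, as the abstract describes. All distribution parameters below are invented for the demonstration.

```python
# Illustrative sketch: simulating regression to the mean. No true change
# occurs between baseline and follow-up; only measurement error differs.

import random

random.seed(1)
true_mean, true_sd, error_sd, cutoff = 100.0, 10.0, 8.0, 115.0

subjects = [random.gauss(true_mean, true_sd) for _ in range(100_000)]
baseline = [t + random.gauss(0, error_sd) for t in subjects]
followup = [t + random.gauss(0, error_sd) for t in subjects]

# Select on an extreme baseline, then look at the same people at follow-up.
selected = [(b, f) for b, f in zip(baseline, followup) if b > cutoff]
mean_b = sum(b for b, _ in selected) / len(selected)
mean_f = sum(f for _, f in selected) / len(selected)
print(f"selected baseline mean {mean_b:.1f}, follow-up mean {mean_f:.1f}")
# The follow-up mean falls back toward 100 despite no real change, and the
# gap widens as error_sd grows relative to true_sd.
```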

Relevance:

90.00%

Publisher:

Abstract:

Rationale and aims: 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising the rating guidelines. Methods: In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results: Inter-rater reliability was 'good to excellent' in the first study (kappas >= 0.53; ICCs >= 0.71). After revising the rating guidelines, the reliability levels were equivalent to or higher than those previously obtained (kappas >= 0.53; ICCs >= 0.89), except for the item 'groups similar at baseline', which still had moderate reliability (kappa = 0.53). In study 2, two PEDro scale items whose definitions had been revised, 'less than 15% dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (kappas = 0.53), 'less than 15% dropout' (kappas
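For reference, the ICC (model 2,1) used above is the Shrout and Fleiss two-way random-effects, absolute-agreement, single-rater intraclass correlation, computed from the two-way ANOVA mean squares:

```latex
% n trials rated by k raters; MS_R, MS_C and MS_E are the trial (row),
% rater (column) and error mean squares from the two-way ANOVA.
\mathrm{ICC}(2,1) =
  \frac{MS_R - MS_E}
       {MS_R + (k-1)\,MS_E + \dfrac{k\,(MS_C - MS_E)}{n}}
```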

Relevance:

90.00%

Publisher:

Abstract:

The demand for palliative care is increasing, yet there are few data on the best models of care and few well-validated interventions that translate current evidence into clinical practice. Supporting multidisciplinary patient-centered palliative care while successfully conducting a large clinical trial is a challenge. The Palliative Care Trial (PCT) is a pragmatic 2 x 2 x 2 factorial cluster randomized controlled trial that tests the ability of educational outreach visiting and case conferencing to improve patient-based outcomes such as performance status and pain intensity. Four hundred sixty-one consenting patients and their general practitioners (GPs) were randomized to: (1) GP educational outreach visiting versus usual care, (2) structured patient and caregiver educational outreach visiting versus usual care, and (3) a coordinated palliative care model of case conferencing versus the standard model of palliative care in Adelaide, South Australia (3:1 randomization). Main outcome measures included patient functional status over time, pain intensity, and resource utilization. Participants were followed longitudinally until death or November 30, 2004. The interventions are aimed at translating current evidence into clinical practice, and particular attention was paid in the trial's design to common pitfalls for clinical studies in palliative care. Given the need for evidence about optimal interventions and service delivery models that improve the care of people with life-limiting illness, the results of this rigorous, high-quality clinical trial will inform practice. Initial results are expected in mid 2005.
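As an illustration of a 2 x 2 x 2 factorial allocation with 3:1 randomization on the case-conferencing factor, the sketch below assigns clusters (general practices) independently on each factor. The seed and cluster identifiers are invented, and this is not the PCT randomization procedure.

```python
# Illustrative sketch: simple randomization of clusters to the eight cells
# of a 2 x 2 x 2 factorial design, with a 3:1 ratio on the third factor.

import random

random.seed(7)

def allocate(cluster_ids):
    """Independent randomization per factor; 3:1 weighting on the third."""
    return {
        cid: {
            "gp_outreach": random.random() < 0.5,
            "patient_outreach": random.random() < 0.5,
            "case_conferencing": random.random() < 0.75,  # 3:1 ratio
        }
        for cid in cluster_ids
    }

for cid, arm in allocate([f"GP{i:02d}" for i in range(1, 5)]).items():
    print(cid, arm)
```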