15 results for translational medical research
in CentAUR: Central Archive University of Reading - UK
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analyses of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and used in the form of Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator, which adjusts for a potential study effect, is suggested as a summary measure of the overall misclassification error. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which can then cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the reference standard, in the context of stroke prevention.
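For a single 2×2 diagnostic table, Youden's index is sensitivity + specificity − 1, and the corresponding average misclassification error is (1 − J)/2. A minimal sketch of computing the index per study and pooling across studies follows; the study tables and the simple size-based weights are hypothetical, and this is not the exact Mantel–Haenszel estimator proposed in the paper.

```python
# Sketch: per-study Youden index from 2x2 diagnostic counts, pooled with
# simple sample-size weights. Illustrative only -- not the paper's exact
# Mantel-Haenszel estimator adjusting for study effect.

def youden(tp, fn, fp, tn):
    """Youden's index J = sensitivity + specificity - 1."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1.0

# hypothetical study-level 2x2 tables: (TP, FN, FP, TN)
studies = [(45, 5, 8, 42), (30, 10, 6, 54), (70, 15, 20, 95)]

weights = [sum(s) for s in studies]        # total study size as weight
js = [youden(*s) for s in studies]
pooled_j = sum(w * j for w, j in zip(weights, js)) / sum(weights)
pooled_error = (1.0 - pooled_j) / 2.0      # average of the two error rates
print(round(pooled_j, 3), round(pooled_error, 3))  # → 0.672 0.164
```

The relation error = (1 − J)/2 holds when the misclassification error is defined as the average of the false-negative and false-positive rates.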
Abstract:
Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family (see PDF for character) and p(1 - p) using (see PDF for character), where (see PDF for character). We investigate the problem of choosing c from various perspectives, including minimizing the average mean squared error of (see PDF for character), and the average bias and average mean squared error of (see PDF for character). The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to be independent of n and equals c = 1. The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to depend on n, with limiting value c = 0.833. This might justify using the near-optimal value c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals of the form (see PDF for character).
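The formulae in this abstract were lost in extraction ("see PDF for character"). A common shrinkage family of this kind has the form p̂ = (x + c)/(n + 2c), which for c = 1 gives the familiar (x + 1)/(n + 2) estimator; that specific form is an assumption here, used purely for illustration of why such estimators help in sparse data.

```python
# Sketch of a shrinkage-type proportion estimator of the assumed form
# p_hat = (x + c) / (n + 2c). With c = 1 this is the (x + 1) / (n + 2)
# estimator, which avoids the degenerate estimates 0 and 1 that the
# conventional x / n estimator produces in sparse data.

def p_hat(x, n, c=1.0):
    """Shrinkage proportion estimate; c = 0 recovers the conventional x / n."""
    return (x + c) / (n + 2 * c)

def var_hat(x, n, c=1.0):
    """Plug-in estimate of p * (1 - p) / n using the shrunken p."""
    p = p_hat(x, n, c)
    return p * (1 - p) / n

print(p_hat(0, 5))        # ≈ 0.143 rather than a degenerate 0.0
print(p_hat(0, 5, c=0))   # conventional estimator: 0.0
```

With zero observed events the conventional estimator gives a variance estimate of exactly zero, while the shrunken version stays strictly positive, which is what makes it useful for weighting in the multicenter applications mentioned above.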
Abstract:
Atherosclerosis, leading to cardiovascular disease, is a chronic condition involving a strong inflammatory component. There is evidence that the n-3 polyunsaturated fatty acids (PUFA) present in oily fish and fish oils protect against cardiovascular disease. While these fatty acids have well-recognised effects on plasma triacylglycerol concentrations, it is likely that they also exert beneficial effects through other mechanisms. A large body of evidence suggests that the n-3 PUFA have anti-inflammatory properties, some of which may be manifested in the arterial wall, either directly or indirectly, to modulate the progression of atherosclerosis. This review critically evaluates the evidence for the anti-inflammatory effects of the n-3 PUFA in cells and on pathways which have a direct influence on atherogenesis in the arterial wall.
Abstract:
Improving methodology for Phase I dose-finding studies is currently of great interest in pharmaceutical and medical research. This article discusses the current atmosphere and attitude towards adaptive designs and focuses on the influence of Bayesian approaches.
Abstract:
In recent years, there has been a drive to save development costs and shorten time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy with control in the same study. These methods have gained much attention because of their potential advantages compared to conventional drug development programmes with separate trials for individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective of this article is to describe the approaches in a unified framework and highlight their similarities and differences, to allow choice of an appropriate methodology by a trialist considering conducting such a trial.
Abstract:
BACKGROUND: The endothelial nitric-oxide synthase (NOS3) gene encodes the enzyme (eNOS) that synthesizes the molecule nitric oxide, which facilitates endothelium-dependent vasodilation in response to physical activity. Thus, energy expenditure may modify the association between genetic variation at NOS3 and blood pressure. METHODS: To test this hypothesis, we genotyped 11 NOS3 polymorphisms, capturing all common variations, in 726 men and women from the Medical Research Council (MRC) Ely Study (age (mean +/- s.d.): 55 +/- 10 years, body mass index: 26.4 +/- 4.1 kg/m(2)). Habitual/non-resting energy expenditure (NREE) was assessed via individually calibrated heart rate monitoring over 4 days. RESULTS: The intronic variant, IVS25+15 [G-->A], was significantly associated with blood pressure; GG homozygotes had significantly lower levels of diastolic blood pressure (DBP) (-2.8 mm Hg; P = 0.016) and systolic blood pressure (SBP) (-1.9 mm Hg; P = 0.018) than A-allele carriers. The interaction between NREE and IVS25+15 was also significant for both DBP (P = 0.006) and SBP (P = 0.026), in such a way that the effect of the GG-genotype on blood pressure was stronger in individuals with higher NREE (DBP: -4.9 mm Hg, P = 0.02; SBP: -3.8 mm Hg, P = 0.03, for the third tertile). Similar results were observed when the outcome was dichotomously defined as hypertension. CONCLUSIONS: In summary, the NOS3 IVS25+15 variant is directly associated with blood pressure and hypertension in white Europeans. However, the associations are most evident in the individuals with the highest NREE. These results need further replication and would ideally have to be tested in a trial before being informative for targeted disease prevention. Eventually, the selection of individuals for lifestyle intervention programs could be guided by knowledge of genotype.
Abstract:
Genes play an important role in the development of diabetes mellitus. Putative susceptibility genes could be the key to the development of diabetes. Type 1 diabetes mellitus is one of the most common chronic diseases of childhood. A combination of genetic and environmental factors is most likely the cause of Type 1 diabetes. The pathogenetic sequence leading to the selective autoimmune destruction of islet beta-cells and development of Type 1 diabetes involves genetic factors, environmental factors, immune regulation and chemical mediators. Unlike Type 1 diabetes mellitus, Type 2 diabetes is often considered a polygenic disorder, with multiple genes located on different chromosomes being associated with this condition. This is further complicated by numerous environmental factors which also contribute to the clinical manifestation of the disorder in genetically predisposed persons. Only a minority of cases of Type 2 diabetes are caused by single gene defects such as maturity onset diabetes of the young (MODY), the syndrome of insulin resistance (insulin receptor defect) and maternally inherited diabetes and deafness (mitochondrial gene defect). Although Type 2 diabetes mellitus has reached almost epidemic proportions, our knowledge of the mechanism of this disease is limited. More information about insulin secretion and action and the genetic variability of the various factors involved will contribute to better understanding and classification of this group of diseases. This article discusses the results of various genetic studies on diabetes with special reference to the Indian population.
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources.
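Power calculations of this kind trade sample size against detectable effect size. A minimal normal-approximation sketch for a generic two-sample comparison follows; it illustrates the trade-off described in the abstract, not the paper's voxel-based method.

```python
# Sketch: generic two-sample power calculation (normal approximation),
# showing the trade-off between sample size per group and the minimum
# detectable standardized effect size. Not the voxel-based method of
# the paper -- a textbook illustration only.
from statistics import NormalDist

Z = NormalDist()

def n_per_group(d, alpha=0.05, power=0.8):
    """Sample size per group to detect standardized effect d (two-sided z-test)."""
    za = Z.inv_cdf(1 - alpha / 2)
    zb = Z.inv_cdf(power)
    return 2 * ((za + zb) / d) ** 2

def detectable_effect(n, alpha=0.05, power=0.8):
    """Minimum detectable standardized effect for a fixed n per group."""
    za = Z.inv_cdf(1 - alpha / 2)
    zb = Z.inv_cdf(power)
    return (za + zb) * (2 / n) ** 0.5

print(round(n_per_group(0.5), 1))       # → 62.8
print(round(detectable_effect(63), 3))  # → 0.499
```

The two functions are inverses of each other, which is exactly the dual assessment the abstract describes: the impact of a design change on sample size for a fixed effect size, and on detectable effect size for a fixed sample size.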
Abstract:
Background Cognitive–behavioural therapy (CBT) for childhood anxiety disorders is associated with modest outcomes in the context of parental anxiety disorder. Objectives This study evaluated whether or not the outcome of CBT for children with anxiety disorders in the context of maternal anxiety disorders is improved by the addition of (i) treatment of maternal anxiety disorders, or (ii) treatment focused on maternal responses. The incremental cost-effectiveness of the additional treatments was also evaluated. Design Participants were randomised to receive (i) child cognitive–behavioural therapy (CCBT); (ii) CCBT with CBT to target maternal anxiety disorders [CCBT + maternal cognitive–behavioural therapy (MCBT)]; or (iii) CCBT with an intervention to target mother–child interactions (MCIs) (CCBT + MCI). Setting An NHS university clinic in Berkshire, UK. Participants Two hundred and eleven children with a primary anxiety disorder, whose mothers also had an anxiety disorder. Interventions All families received eight sessions of individual CCBT. Mothers in the CCBT + MCBT arm also received eight sessions of CBT targeting their own anxiety disorders. Mothers in the MCI arm received 10 sessions targeting maternal parenting cognitions and behaviours. Non-specific interventions were delivered to balance groups for therapist contact. Main outcome measures Primary clinical outcomes were the child's primary anxiety disorder status and degree of improvement at the end of treatment. Follow-up assessments were conducted at 6 and 12 months. Outcomes in the economic analyses were identified and measured using estimated quality-adjusted life-years (QALYs). QALYs were combined with treatment, health and social care costs and presented within an incremental cost–utility analysis framework with associated uncertainty.
Results MCBT was associated with significant short-term improvement in maternal anxiety; however, after children had received CCBT, group differences were no longer apparent. CCBT + MCI was associated with a reduction in maternal overinvolvement and more confident expectations of the child. However, neither CCBT + MCBT nor CCBT + MCI conferred a significant post-treatment benefit over CCBT in terms of child anxiety disorder diagnoses [CCBT + MCBT vs. control: adjusted risk ratio (RR) 1.18, 95% confidence interval (CI) 0.87 to 1.62, p = 0.29; CCBT + MCI vs. control: adjusted RR 1.22, 95% CI 0.90 to 1.67, p = 0.20] or global improvement ratings (adjusted RR 1.25, 95% CI 1.00 to 1.59, p = 0.05; adjusted RR 1.20, 95% CI 0.95 to 1.53, p = 0.13). CCBT + MCI outperformed CCBT on some secondary outcome measures. Furthermore, primary economic analyses suggested that, at commonly accepted thresholds of cost-effectiveness, the probability that CCBT + MCI will be cost-effective in comparison with CCBT (plus non-specific interventions) is about 75%. Conclusions Good outcomes were achieved for children and their mothers across treatment conditions. There was no evidence of a benefit to child outcome of supplementing CCBT with either an intervention focusing on maternal anxiety disorder or one focusing on maternal cognitions and behaviours. However, supplementing CCBT with treatment that targeted maternal cognitions and behaviours represented a cost-effective use of resources, although the high percentage of missing data on some economic variables is a shortcoming. Future work should consider whether or not effects of the adjunct interventions are enhanced in particular contexts. The economic findings highlight the utility of considering the use of a broad range of services when evaluating interventions with this client group. Trial registration Current Controlled Trials ISRCTN19762288.
Funding This trial was funded by the Medical Research Council (MRC) and Berkshire Healthcare Foundation Trust and managed by the National Institute for Health Research (NIHR) on behalf of the MRC–NIHR partnership (09/800/17) and will be published in full in Health Technology Assessment; Vol. 19, No. 38.
Abstract:
This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
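A standard Wald-type sample size calculation on the log odds-ratio scale can be sketched as follows; this is a textbook approximation, not necessarily the exact score- or Wald-based formulae derived and compared in the paper, and the control event rate used is hypothetical.

```python
# Sketch: Wald-type sample size per group for a one-sided non-inferiority
# test on the odds-ratio scale. Textbook normal approximation, not
# necessarily the paper's exact formula.
from math import log, ceil
from statistics import NormalDist

Z = NormalDist()

def n_per_group(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """n per group so that a one-sided Wald test of H0: OR <= or_margin
    has the given power when the true odds ratio is or_alt."""
    odds_c = p_control / (1 - p_control)
    p_trt = or_alt * odds_c / (1 + or_alt * odds_c)  # implied treatment rate
    # variance of log(OR-hat) for one subject per group, summed over groups
    var1 = 1 / (p_control * (1 - p_control)) + 1 / (p_trt * (1 - p_trt))
    za = Z.inv_cdf(1 - alpha)
    zb = Z.inv_cdf(power)
    return ceil((za + zb) ** 2 * var1 / log(or_alt / or_margin) ** 2)

# e.g. margin OR = 0.5, true OR = 1, 60% control event rate
print(n_per_group(0.6, 1.0, 0.5))  # → 183
```

As the abstract notes, accuracy degrades as the true odds ratio moves away from 1 or the margin becomes more stringent; the sketch shows the general mechanics, with the margin entering only through log(or_alt / or_margin).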
Abstract:
Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research.
Abstract:
Background: Massive open online courses (MOOCs) have become commonplace in the e-learning landscape. Thousands of elderly learners are participating in courses offered by various institutions on a multitude of platforms in many different languages. However, there is very little research into understanding elderly learners in MOOCs. Objective: We aim to show that a considerable proportion of elderly learners are participating in MOOCs and that there is a lack of research in this area. We hope this assertion of the wide gap in research on elderly learners in MOOCs will pave the way for more research in this area. Methods: Pre-course survey data for 10 University of Reading courses on the FutureLearn platform were analyzed to show the level of participation of elderly learners in MOOCs. Two MOOC aggregator sites (Class Central and MOOC List) were consulted to gather data on MOOC offerings that include topics relating to aging. In parallel, a selected set of MOOC platform catalogues, along with a recently published review on health and medicine-related MOOCs, were searched to find courses relating to aging. A systematic literature search was then employed to identify research articles on elderly learners in MOOCs. Results: The 10 courses reviewed had a considerable proportion of elderly learners participating in them. For the over-66 age group, this varied from 0.5% (on the course “Managing people”) to 16.3% (on the course “Our changing climate”), while for the over-56 age group it ranged from 3.0% (on “A beginners guide to writing in English”) to 39.5% (on “Heart health”). Only six MOOCs were found to include topics related to aging: three were on the Coursera platform, two on the FutureLearn platform, and one on the Open2Study platform. Just three scholarly articles relating to MOOCs and elderly learners were retrieved from the literature search. Conclusions: This review presents evidence to suggest that elderly learners are already participating in MOOCs. 
Despite this, there has been very little research into their engagement with MOOCs. Similarly, there has been little research into exploiting the scope of MOOCs for delivering topics that would be of interest to elderly learners. We believe there is potential to use MOOCs as a way of tackling the issue of loneliness among older adults by engaging them as either resource personnel or learners.
Abstract:
Regeneration of periodontal tissues aims to utilize tissue engineering techniques to restore lost periodontal tissues, including the cementum, periodontal ligament and alveolar bone. Regenerative dentistry and its special field, regenerative periodontology, represent relatively new and emerging branches of translational stem cell biology and regenerative medicine, focusing on replacing and regenerating dental tissues to restore or re-establish normal function lost during degenerative diseases or acute lesions. The regeneration itself can be achieved through transplantation of autologous or allogenic stem cells, or by improving the tissue's self-repair mechanisms (e.g. by application of growth factors). In addition, a combination of stem cells or stem cell-containing tissue with bone implants can be used to improve tissue integration and the clinical outcome. As the oral cavity represents a complex system consisting of teeth, bone, soft tissues and sensory nerves, regenerative periodontology relies on the use of stem cells with relatively high developmental potential. Notably, the potential use of pluripotent stem cell types such as human embryonic stem cells or induced pluripotent stem cells is still hampered by ethical and practical problems. Thus, other cellular sources, such as those readily available in the postnatal craniofacial area and particularly in oral structures, offer a much better and more realistic alternative as cellular regenerative sources. In this review, we summarize current knowledge on the oral neural crest-derived stem cell populations (oNCSCs) and discuss their potential in regenerative periodontology.