925 results for Models and Methods


Relevance: 100.00%

Publisher:

Abstract:

Glucagon-like peptide-1(7-36)amide (tGLP-1) is an important insulin-releasing hormone of the enteroinsular axis which is secreted by endocrine L-cells of the small intestine following nutrient ingestion. The present study has evaluated tGLP-1 in the intestines of normal and diabetic animal models and estimated the proportion present in glycated form. Total immunoreactive tGLP-1 levels in the intestines of hyperglycaemic hydrocortisone-treated rats, streptozotocin-treated mice and ob/ob mice were similar to age-matched controls. Affinity chromatographic separation of glycated and non-glycated proteins in intestinal extracts followed by radioimmunoassay using a fully cross-reacting antiserum demonstrated the presence of glycated tGLP-1 within the intestinal extracts of all control animals (approximately 19% of total tGLP-1 content). Chemically induced and spontaneous animal models of diabetes were found to possess significantly greater levels of glycated tGLP-1 than controls, corresponding to 24-71% of the total content. These observations suggest that glycated tGLP-1 may be of physiological significance, given that such N-terminal modification confers resistance to DPP IV inactivation and degradation, extending the hormone's very short half-life.

Abstract:

Objective: to assess the separate contributions of marital status, living arrangements and the presence of children to subsequent admission to a care home.

Design and methods: a longitudinal study, derived from the health card registration system and linked to the 2001 Census and comprising 28% of the Northern Ireland population, was analysed using Cox regression to assess the likelihood of admission for 51,619 older people in the 6 years following the census. Cohort members' age, sex, marital and health status and relationship to other household members were analysed.

Results: there were 2,138 care home admissions, a rate of 7.4 admissions per thousand person-years. Those living alone had the highest likelihood of admission [hazard ratio (HR) compared with living with a partner 1.66 (95% CI 1.48, 1.87)], but there was little difference between the never-married and the previously married. Living with children offered protection similar to living with a partner (HR 0.97; 95% CI 0.81, 1.16). The presence of children reduced admissions especially for married couples (HR 0.67; 95% CI 0.54, 0.83; models adjusting for age, gender and health). Women were more likely to be admitted, though there were no gender differences for people living alone or those co-habiting with siblings.

Implications: the presence of potential caregivers within the home, rather than those living elsewhere, is a major factor determining admission to a care home. Further research should concentrate on the health and needs of these co-residents.
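The reported admission rate can be cross-checked with a simple incidence-rate calculation. The person-year total below is back-derived from the reported figures (2,138 admissions at 7.4 per thousand person-years) and is an illustration, not a number taken from the study.

```python
# Incidence rate per 1,000 person-years from event and follow-up counts.
def rate_per_1000_py(events, person_years):
    return 1000.0 * events / person_years

# Person-years implied by the reported rate of 7.4/1,000 py for 2,138 events.
implied_py = 2138 / (7.4 / 1000.0)
print(round(implied_py))                              # ~288,919 person-years
print(round(rate_per_1000_py(2138, implied_py), 1))   # 7.4
```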

Abstract:

BACKGROUND: To date, there are no clinically reliable predictive markers of response to the current treatment regimens for advanced colorectal cancer. The aim of the current study was to compare and assess the power of transcriptional profiling using a generic microarray and a disease-specific transcriptome-based microarray. We also examined the biological and clinical relevance of the disease-specific transcriptome.

METHODS: DNA microarray profiling was carried out on isogenic sensitive and 5-FU-resistant HCT116 colorectal cancer cell lines using the Affymetrix HG-U133 Plus2.0 array and the Almac Diagnostics Colorectal cancer disease specific Research tool. In addition, DNA microarray profiling was also carried out on pre-treatment metastatic colorectal cancer biopsies using the colorectal cancer disease specific Research tool. The two microarray platforms were compared based on detection of probesets and biological information.

RESULTS: The results demonstrated that the disease-specific transcriptome-based microarray was able to out-perform the generic genomic-based microarray on a number of levels including detection of transcripts and pathway analysis. In addition, the disease-specific microarray contains a high percentage of antisense transcripts and further analysis demonstrated that a number of these exist in sense:antisense pairs. Comparison between cell line models and metastatic CRC patient biopsies further demonstrated that a number of the identified sense:antisense pairs were also detected in CRC patient biopsies, suggesting potential clinical relevance.

CONCLUSIONS: Analysis from our in vitro and clinical experiments has demonstrated that many transcripts exist in sense:antisense pairs, including IGF2BP2, which may have a direct regulatory function in the context of colorectal cancer. While the existence of antisense transcripts has been established by many studies, their functional role is currently unclear; however, the numbers detected by the disease-specific microarray suggest that they may be important regulatory transcripts. This study has demonstrated the power of a disease-specific transcriptome-based approach and highlighted the novel biologically and clinically relevant information that can be gained with such a methodology.

Abstract:

Purpose
This study was designed to investigate methods to help patients suffering from unilateral tinnitus synthesize an auditory replica of their tinnitus.

Materials and methods
Two semi-automatic methods (A and B) derived from the auditory threshold of the patient, and a method (C) combining a pure tone and a narrow band-pass noise centred on an adjustable frequency, were devised and rated for likeness to the patient's tinnitus over two test sessions. A third test evaluated the stability over time of the synthesized tinnitus replica built with method C, and its proneness to merge with the patient's tinnitus. Patients were then asked to try to control the lateralisation of this single percept through adjustment of the tinnitus replica level.

Results
The first two tests showed that seven out of ten patients chose the tinnitus replica built with method C as their preferred one. The third test, performed on twelve patients, revealed that pitch tuning was rather stable over a one-week interval. It showed that eight patients were able to consistently match the central frequency of the synthesized tinnitus (presented to the contralateral ear) to their own tinnitus, which led to a unique tinnitus percept. The lateralisation displacement was consistent across patients and revealed an average range of 29 dB to obtain a full lateral shift from the ipsilateral to the contralateral side.

Conclusions
Although spectrally simpler than the semi-automatic methods, method C could replicate patients' tinnitus, to some extent. When a unique percept between synthesized tinnitus and patients' tinnitus arose, lateralisation of this percept was achieved.

Abstract:

The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date mainly focus on the improvement of their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the system's local behaviour when all the rules are aggregated to produce the overall system output. This is one of the characteristics distinguishing such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both nonlinear parameters in the rule premises and linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, and the results are compared with those from some well-known methods.
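As a minimal sketch of the inference step in a first-order Takagi-Sugeno-Kang model (the normalized aggregation of linear rule consequents that the interpretability argument above turns on), the following uses two illustrative rules with Gaussian premises; it does not reproduce the paper's integrated Levenberg-Marquardt learning.

```python
import math

# Rule i: IF x is Gaussian(centre c_i, width s_i) THEN y_i = a_i * x + b_i.
# Centres, widths and consequent coefficients are illustrative values only.
rules = [
    # (centre, width, a, b)
    (-1.0, 0.8, 0.5, 0.0),
    (1.0, 0.8, -0.5, 1.0),
]

def tsk_output(x):
    # Firing strength of each rule from its Gaussian membership function.
    weights = [math.exp(-((x - c) ** 2) / (2 * s ** 2)) for c, s, _, _ in rules]
    # Local linear consequent of each rule.
    outputs = [a * x + b for _, _, a, b in rules]
    # Normalized weighted sum: near a rule centre, the overall output
    # approaches that rule's local model, which is what makes the
    # consequents interpretable as local behaviour.
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

print(round(tsk_output(0.0), 3))  # 0.5, halfway between the two local models
```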

Abstract:

Objective: To examine the evidence of an association between hypermobility and musculoskeletal pain in children. Methods: A systematic review of the literature was performed using the databases PubMed, EMBASE, NHS Evidence, and Medline. Inclusion criteria were observational studies investigating hypermobility and musculoskeletal pain in children. Exclusion criteria were studies conducted on specialist groups (e.g. dancers) or hospital referrals. Pooled odds ratios (ORs) were calculated using random effects models and heterogeneity was tested using χ²-tests. Study quality was assessed using the Newcastle-Ottawa Scale for case-control studies. Results: Of the 80 studies identified, 15 met the inclusion criteria and were included in the review. Of these, 13 were included in the statistical analyses. Heterogeneity was too high to allow interpretation of the overall meta-analysis (I² = 72%), but was much lower when the studies were divided into European (I² = 8%) and Afro-Asian (I² = 65%) subgroups. Sensitivity analysis based on data from studies reporting from European and Afro-Asian regions showed no association in the European studies [OR 1.00, 95% confidence interval (CI) 0.79-1.26] but a marked relationship between hypermobility and joint pain in the Afro-Asian group (OR 2.01, 95% CI 1.45-2.77). Meta-regression showed a highly significant difference between subgroups in both meta-analyses (p < 0.001). Conclusion: There seems to be no association between hypermobility and joint pain in Europeans. There does seem to be an association in Afro-Asians; however, heterogeneity was high, and it is unclear whether this is due to differences in ethnicity, nourishment, climate or study design.
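The pooled ORs above come from random effects models; one standard choice for such pooling is the DerSimonian-Laird estimator, sketched below in pure Python. The three (OR, 95% CI) studies are invented for illustration and are not the studies from the review.

```python
import math

# Illustrative studies: (odds ratio, lower 95% CI, upper 95% CI).
studies = [(1.8, 1.1, 2.9), (2.4, 1.3, 4.4), (1.9, 1.2, 3.0)]

def pooled_or(studies):
    """DerSimonian-Laird random-effects pooled odds ratio."""
    logs = [math.log(or_) for or_, _, _ in studies]
    # Standard errors recovered from the 95% CI on the log scale.
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
    w = [1 / se ** 2 for se in ses]                    # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, logs))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)      # between-study variance
    wr = [1 / (se ** 2 + tau2) for se in ses]          # random-effects weights
    return math.exp(sum(wi * yi for wi, yi in zip(wr, logs)) / sum(wr))

print(round(pooled_or(studies), 2))
```

The pooled estimate is a weighted average on the log-odds scale, so it always falls between the smallest and largest study ORs.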

Abstract:

RATIONALE Stable isotope values (δ13C and δ15N) of darted skin and blubber biopsies can shed light on habitat use and diet of cetaceans, which are otherwise difficult to study. Non-dietary factors affect isotopic variability, chiefly the depletion of 13C due to the presence of 13C-depleted lipids. The efficacy of post hoc lipid-correction models (normalization) must be tested. METHODS For tissues with high natural lipid content (e.g., whale skin and blubber), chemical lipid extraction or normalization is necessary. C:N ratios, δ13C values and δ15N values were determined for duplicate control and lipid-extracted skin and blubber of fin (Balaenoptera physalus), humpback (Megaptera novaeangliae) and minke whales (B. acutorostrata) by continuous-flow elemental analysis isotope ratio mass spectrometry (CF-EA-IRMS). Six different normalization models were tested to correct δ13C values for the presence of lipids. RESULTS Following lipid extraction, significant increases in δ13C values were observed for both tissues in the three species. Significant increases were also found for δ15N values in minke whale skin and fin whale blubber. In fin whale skin, the δ15N values decreased, with no change observed in humpback whale skin. Non-linear models generally out-performed linear models, and the suitability of models varied by species and tissue, indicating the need for high model specificity, even among these closely related taxa. CONCLUSIONS Given the poor predictive power of the models to estimate lipid-free δ13C values, and the unpredictable changes in δ15N values due to lipid extraction, we recommend against arithmetical normalization in accounting for lipid effects on δ13C values for balaenopterid skin or blubber samples. Rather, we recommend that duplicate analysis of lipid-extracted (δ13C values) and non-treated tissues (δ15N values) be used. Copyright © 2012 John Wiley & Sons, Ltd.
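For context, one widely cited linear normalization model of the kind evaluated here is the aquatic-animal correction of Post et al. (2007), in which the δ13C adjustment depends only on the bulk C:N ratio. It is shown as an example of the linear model class, not as the authors' recommended method, and the sample values are hypothetical.

```python
# Post et al. (2007) linear lipid normalization for aquatic animal tissue:
# the correction added to the bulk d13C value grows with the C:N ratio,
# a proxy for lipid content.
def d13c_lipid_normalized(d13c_bulk, c_to_n):
    return d13c_bulk + (-3.32 + 0.99 * c_to_n)

# Hypothetical bulk value for a lipid-rich skin sample with C:N = 4.2.
print(round(d13c_lipid_normalized(-19.4, 4.2), 2))  # -18.56
```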

Abstract:

An evolution in theoretical models and methodological paradigms for investigating cognitive biases in the addictions is discussed. Anomalies in traditional cognitive perspectives, and problems with the self-report methods which underpin them, are highlighted. An emergent body of cognitive research, contextualized within the principles and paradigms of cognitive neuropsychology rather than social learning theory, is presented which, it is argued, addresses these anomalies and problems. Evidence is presented that biases in the processing of addiction-related stimuli, and in the network of propositions which motivate addictive behaviours, occur at automatic, implicit and pre-conscious levels of awareness. It is suggested that methods which assess such implicit cognitive biases (e.g. Stroop, memory, priming and reaction-time paradigms) yield findings which have better predictive utility for ongoing behaviour than those biases determined by self-report methods of introspection. The potential utility of these findings for understanding "loss of control" phenomena, and the desynchrony between reported beliefs and intentions and ongoing addictive behaviours, is discussed. Applications to the practice of cognitive therapy are considered.

Abstract:

OBJECTIVE-To examine associations of neonatal adiposity with maternal glucose levels and cord serum C-peptide in a multicenter multinational study, the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) Study, thereby assessing the Pedersen hypothesis linking maternal glycemia and fetal hyperinsulinemia to neonatal adiposity. RESEARCH DESIGN AND METHODS-Eligible pregnant women underwent a standard 75-g oral glucose tolerance test between 24 and 32 weeks' gestation (as close to 28 weeks as possible). Neonatal anthropometrics and cord serum C-peptide were measured. Associations of maternal glucose and cord serum C-peptide with neonatal adiposity (sum of skin folds >90th percentile or percent body fat >90th percentile) were assessed using multiple logistic regression analyses, with adjustment for potential confounders, including maternal age, parity, BMI, mean arterial pressure, height, gestational age at delivery, and the baby's sex. RESULTS-Among 23,316 HAPO Study participants with glucose levels blinded to caregivers, cord serum C-peptide results were available for 19,885 babies and skin fold measurements for 19,389. For measures of neonatal adiposity, there were strong statistically significant gradients across increasing levels of maternal glucose and cord serum C-peptide, which persisted after adjustment for potential confounders. In fully adjusted continuous variable models, odds ratios ranged from 1.35 to 1.44 for the two measures of adiposity for fasting, 1-h, and 2-h plasma glucose higher by 1 SD. CONCLUSIONS-These findings confirm the link between maternal glucose and neonatal adiposity and suggest that the relationship is mediated by fetal insulin production and that the Pedersen hypothesis describes a basic biological relationship influencing fetal growth. © 2009 by the American Diabetes Association.
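The "OR per 1 SD higher glucose" figures above have a simple relationship to the underlying logistic-regression coefficient. The sketch below shows the conversion with invented coefficient and SD values, not the HAPO estimates.

```python
import math

# An odds ratio for a 1-SD higher predictor is obtained by scaling the
# per-unit logistic coefficient by the SD before exponentiating.
def or_per_sd(beta_per_unit, sd):
    return math.exp(beta_per_unit * sd)

# e.g. a hypothetical coefficient of 0.75 per mmol/L and an SD of 0.4 mmol/L
print(round(or_per_sd(0.75, 0.4), 2))  # 1.35
```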

Abstract:

Current conceptual models of reciprocal interactions linking soil structure, plants and arbuscular mycorrhizal fungi emphasise positive feedbacks among the components of the system. However, dynamical systems with high dimensionality and several positive feedbacks (i.e. mutualism) are prone to instability. Further, organisms such as arbuscular mycorrhizal fungi (AMF) are obligate biotrophs of plants and are considered major biological agents in soil aggregate stabilization. With these considerations in mind, we developed dynamical models of soil ecosystems that reflect the main features of current conceptual models and empirical data, especially positive feedbacks and linear interactions among plants, AMF and the component of soil structure dependent on aggregates. We found that systems become increasingly unstable the more positive effects with Type I functional response (i.e., the growth rate of a mutualist is modified by the density of its partner through linear proportionality) are added to the model, to the point that increasing the realism of models by adding linear effects produces the most unstable systems. The present theoretical analysis thus offers a framework for modelling and suggests new directions for experimental studies on the interrelationship between soil structure, plants and AMF. Non-linearity in functional responses, spatial and temporal heterogeneity, and indirect effects can be invoked on a theoretical basis and experimentally tested in laboratory and field experiments in order to account for and buffer the local instability of the simplest of current scenarios. The first model presented here may generate interest in more explicitly representing the role of biota in soil physical structure, a phenomenon that is typically viewed in a more process- and management-focused context. © 2011 Elsevier Ltd. All rights reserved.
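The destabilising effect of linear (Type I) mutualistic terms can be seen even in a symmetric two-species caricature: the interior equilibrium is stable only while self-limitation outweighs the mutualistic benefit. The model below is illustrative and far simpler than the paper's plant-AMF-aggregate system.

```python
# Symmetric two-species mutualism with linear (Type I) functional responses:
#   dx/dt = x * (r - a*x + b*y),  dy/dt = y * (r - a*y + b*x)
# a = self-limitation strength, b = mutualistic benefit (both > 0).
def interior_equilibrium_stable(r, a, b):
    if a <= b:
        return False  # no positive equilibrium: densities grow without bound
    x_star = r / (a - b)
    # Jacobian at (x*, x*) is x* * [[-a, b], [b, -a]]; eigenvalues x*(-a ± b).
    eigenvalues = (x_star * (-a + b), x_star * (-a - b))
    return all(ev < 0 for ev in eigenvalues)

print(interior_equilibrium_stable(1.0, 1.0, 0.5))  # True: self-limitation wins
print(interior_equilibrium_stable(1.0, 1.0, 1.5))  # False: mutualism too strong
```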

Abstract:

Emotion research has long been dominated by the “standard method” of displaying posed or acted static images of facial expressions of emotion. While this method has been useful it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the correct statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models and Generalized Additive Mixed Models as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed model GAMM approach is preferred as it can account for autocorrelation in time series data and allows emotion decoding participants to be modelled as random effects. To increase confidence in linear differences we assess the methods that address interactions between categorical variables and dynamic changes over time. In addition we provide comments on the use of Generalized Additive Models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally we address additional uses, the inference of feature detection, continuous variable interactions, and measurement of ambiguity.

Abstract:

AIMS: To investigate the potential dosimetric and clinical benefits predicted by using four-dimensional computed tomography (4DCT) compared with 3DCT in the planning of radical radiotherapy for non-small cell lung cancer.

MATERIALS AND METHODS:
Twenty patients were planned using free-breathing 4DCT and then retrospectively delineated on three-dimensional helical scan sets (3DCT). Beam arrangement and total dose (55 Gy in 20 fractions) were matched for the 3D and 4D plans. Plans were compared for differences in planning target volume (PTV) geometry and normal tissue complication probability (NTCP) for organs at risk using dose-volume histograms. Tumour control probability and NTCP were modelled using the Lyman-Kutcher-Burman (LKB) model. This was compared with a predictive clinical algorithm (Maastro), which is based on patient characteristics, including age, performance status, smoking history, lung function, tumour staging and concomitant chemotherapy, to predict survival and toxicity outcomes. Potential therapeutic gains were investigated by applying isotoxic dose escalation to both plans using constraints for mean lung dose (18 Gy), oesophageal maximum (70 Gy) and spinal cord maximum (48 Gy).

RESULTS:
4DCT based plans had lower PTV volumes, a lower dose to organs at risk and lower predicted NTCP rates on LKB modelling (P < 0.006). The clinical algorithm showed no difference for predicted 2-year survival and dyspnoea rates between the groups, but did predict for lower oesophageal toxicity with 4DCT plans (P = 0.001). There was no correlation between LKB modelling and the clinical algorithm for lung toxicity or survival. Dose escalation was possible in 15/20 cases, with a mean increase in dose by a factor of 1.19 (10.45 Gy) using 4DCT compared with 3DCT plans.

CONCLUSIONS:
4DCT can theoretically improve therapeutic ratio and dose escalation based on dosimetric parameters and mathematical modelling. However, when individual characteristics are incorporated, this gain may be less evident in terms of survival and dyspnoea rates. 4DCT allows potential for isotoxic dose escalation, which may lead to improved local control and better overall survival.
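The LKB NTCP calculation used in the comparison above can be sketched as follows. The dose-volume histogram and the parameter values (n, m, TD50) are illustrative, not those used in the study.

```python
import math

# Lyman-Kutcher-Burman NTCP from a differential (dose, volume-fraction) DVH:
#   gEUD = (sum_i v_i * D_i**(1/n))**n
#   NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi = standard normal CDF
def lkb_ntcp(dvh, n, m, td50):
    geud = sum(v * d ** (1.0 / n) for d, v in dvh) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical lung DVH: (dose in Gy, fraction of organ volume).
# With n = 1 the gEUD reduces to the mean dose (here 21 Gy).
dvh = [(5.0, 0.4), (20.0, 0.4), (55.0, 0.2)]
print(round(lkb_ntcp(dvh, n=1.0, m=0.35, td50=24.5), 3))
```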

Abstract:

Quantum annealing is a promising tool for solving optimization problems, similar in some ways to the traditional (classical) simulated annealing of Kirkpatrick et al. Simulated annealing takes advantage of thermal fluctuations in order to explore the optimization landscape of the problem at hand, whereas quantum annealing employs quantum fluctuations. Intriguingly, quantum annealing has been proved to be more effective than its classical counterpart in many applications. We illustrate the theory and the practical implementation of both classical and quantum annealing - highlighting the crucial differences between these two methods - by means of results recently obtained in experiments, in simple toy models, and in more challenging combinatorial optimization problems (namely, the random Ising model and the travelling salesman problem). The techniques used to implement quantum and classical annealing are either deterministic evolutions, for the simplest models, or Monte Carlo approaches, for harder optimization tasks. We discuss the pros and cons of these approaches and their possible connections to the landscape of the problem addressed.
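The classical half of the comparison, Metropolis-based simulated annealing with a cooling schedule, can be sketched on a toy double-well landscape. The schedule, step size and landscape are illustrative choices; the quantum variant, which replaces thermal hops with tunnelling through barriers, is not reproduced here.

```python
import math
import random

# Classical simulated annealing on a 1-D energy landscape. Early on, the
# high temperature lets the Metropolis rule accept uphill moves, so the
# walker can escape local minima; cooling then freezes it into a minimum.
def anneal(energy, x0, steps=5000, t0=2.0, seed=1):
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling schedule
        candidate = x + rng.uniform(-0.5, 0.5)  # local random move
        delta = energy(candidate) - energy(x)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        if energy(x) < energy(best):
            best = x
    return best

# Double well with the global minimum near x = +1.5 and a local one near -1.5.
energy = lambda x: (x ** 2 - 2.25) ** 2 + 0.3 * (x - 1.5) ** 2
print(round(anneal(energy, x0=-1.5), 1))
```

Because `best` only ever updates on improvement, the returned point is guaranteed to be at least as good as the starting point, whatever the random seed.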

Abstract:

In a companion paper, Seitenzahl et al. have presented a set of three-dimensional delayed detonation models for thermonuclear explosions of near-Chandrasekhar-mass white dwarfs (WDs). Here, we present multidimensional radiative transfer simulations that provide synthetic light curves and spectra for those models. The model sequence explores both changes in the strength of the deflagration phase (which is controlled by the ignition configuration in our models) and the WD central density. In agreement with previous studies, we find that the strength of the deflagration significantly affects the explosion and the observables. Variations in the central density also have an influence on both brightness and colour, but overall it is a secondary parameter in our set of models. In many respects, the models yield a good match to the observed properties of normal Type Ia supernovae (SNe Ia): peak brightness, rise/decline time-scales and synthetic spectra are all in reasonable agreement. There are, however, several differences. In particular, the models are systematically too red around maximum light, manifest spectral line velocities that are a little too high and yield I-band light curves that do not match observations. Although some of these discrepancies may simply relate to approximations made in the modelling, some pose real challenges to the models. If viewed as a complete sequence, our models do not reproduce the observed light-curve width-luminosity relation (WLR) of SNe Ia: all our models show rather similar B-band decline rates, irrespective of peak brightness. This suggests that simple variations in the strength of the deflagration phase in Chandrasekhar-mass deflagration-to-detonation models do not readily explain the observed diversity of normal SNe Ia. This may imply that some other parameter within the Chandrasekhar-mass paradigm is key to the WLR, or that a substantial fraction of normal SNe Ia arise from an alternative explosion scenario.

Abstract:

The process of accounting for heterogeneity has made significant advances in statistical research, primarily in the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively 'noisy' magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchy system where previously published density function and upscaling methods are applied to derive regional distributions of equivalent hydraulic conductivity tensor K. Several K models, each determined from a different stochastic realisation of MPS dyke locations, are incorporated into groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration when compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal and reverse polarity dykes are computed separately within MPS simulations and when a probability threshold of 0.7 is applied. The presented stochastic approach also provides improvement when compared to a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS to airborne geophysical data for regional groundwater modelling.
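The probability-threshold step described above (binarising a dyke-probability map at 0.7 before deriving equivalent conductivity) can be sketched as follows. The grid and conductivity values are hypothetical, and a simple geometric mean stands in for the study's density-function and upscaling methods.

```python
import math

# Hypothetical conductivities (m/day) for the two facies.
K_AQUIFER, K_DYKE = 1.0, 1e-4

def equivalent_k(prob_grid, threshold=0.7):
    # Cells at or above the threshold are classified as dyke, the rest as
    # aquifer; a geometric mean then gives a scalar equivalent conductivity.
    cells = [K_DYKE if p >= threshold else K_AQUIFER
             for row in prob_grid for p in row]
    return math.exp(sum(math.log(k) for k in cells) / len(cells))

# Hypothetical MPS-derived dyke-probability map (3 x 3 grid).
prob_grid = [
    [0.10, 0.20, 0.80],
    [0.10, 0.90, 0.75],
    [0.05, 0.10, 0.60],
]
print(equivalent_k(prob_grid))  # 3 of 9 cells classified as dyke
```

Raising or lowering the threshold changes how many cells become dyke, which is why the study reports a best-fit value (0.7) rather than a fixed convention.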