Abstract:
My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physician labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through the mechanism of learning-by-doing, and use the estimated parameters to determine how many inexperienced physicians must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physician productivity varies over the course of a career), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices. They cover a period during which the Quebec government changed the relative prices of health services. I take a model-based approach, developing and estimating a structural labour supply model in which the physician is multitasking. In my model, physicians choose the number of hours worked and the allocation of those hours across the different services they provide, while service prices are set by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours when they are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' earnings merged with data on the same physicians' time use. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to modify their production of different services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an aging population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimate the income function conditional on hours worked, using instrumental variables to control for potential endogeneity of hours. As instruments I use indicator variables for physician age, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours, the volume of services provided, and physicians' incomes. To do so, I use the simulated method of moments estimator. The results show that own-price substitution elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price rose the most. Cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results indicate that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish the physician productivity profile. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the income equation on an unbalanced panel, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing, concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, one more year of experience increases a physician's production of services by 1,003 Canadian dollars. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
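For orientation, the income equation described in this abstract can be written schematically as follows; the CES-style form of the price index is an illustrative assumption, not the thesis's actual specification:

```latex
% Schematic income equation: income = price index x hours worked.
% The CES aggregation over services is assumed here for illustration.
\[
  y_{it} = w(p_t, \theta)\, h_{it}, \qquad
  w(p_t, \theta) = \Bigl( \sum_{s=1}^{S} \theta_s\, p_{st}^{\sigma} \Bigr)^{1/\sigma},
\]
% y_{it}: physician i's income in period t; h_{it}: hours worked;
% p_{st}: the government-set price of service s; theta, sigma:
% production-technology parameters governing how the optimal
% allocation of hours responds to relative price changes.
```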
Abstract:
The resilience of a social-ecological system is measured by its ability to retain core functionality when subjected to perturbation. Resilience is contextually dependent on the state of system components, the complex interactions among these components, and the timing, location, and magnitude of perturbations. The stability landscape concept provides a useful framework for considering resilience within the specified context of a particular social-ecological system but has proven difficult to operationalize. This difficulty stems largely from the complex, multidimensional nature of the systems of interest and uncertainty in system response. Agent-based models are an effective methodology for understanding how cross-scale processes within and across social and ecological domains contribute to overall system resilience. We present the results of a stylized model of agricultural land use in a small watershed that is typical of the Midwestern United States. The spatially explicit model couples land use, biophysical models, and economic drivers with an agent-based model to explore the effects of perturbations and policy adaptations on system outcomes. By applying the coupled modeling approach within the resilience and stability landscape frameworks, we (1) estimate the sensitivity of the system to context-specific perturbations, (2) determine potential outcomes of those perturbations, (3) identify possible alternative states within state space, (4) evaluate the resilience of system states, and (5) characterize changes in system-scale resilience brought on by changes in individual land use decisions.
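As a rough illustration of the coupled approach described above, the following minimal Python sketch perturbs a stylized land-use system and compares final states; all agent rules, payoffs, and parameters are invented for illustration and are not taken from the authors' model:

```python
import random

# Minimal, illustrative agent-based sketch (not the authors' model):
# agents pick a land use each year by comparing expected profits, and
# we perturb prices to see whether the system returns to its previous
# state or settles into an alternative one.

N_AGENTS, YEARS, SHOCK_YEAR = 100, 60, 30

def profit(use, price):
    # Hypothetical payoffs: 'crop' profit tracks price, 'grass' is stable.
    return price if use == "crop" else 1.0

def simulate(shock=0.0, seed=0):
    rng = random.Random(seed)
    uses = ["crop" if rng.random() < 0.5 else "grass" for _ in range(N_AGENTS)]
    history = []
    for year in range(YEARS):
        price = 1.2 + (shock if year >= SHOCK_YEAR else 0.0)
        for i in range(N_AGENTS):
            best = "crop" if profit("crop", price) > profit("grass", price) else "grass"
            # Agents adapt with inertia: only a fraction switch each year.
            if rng.random() < 0.1:
                uses[i] = best
        history.append(sum(u == "crop" for u in uses) / N_AGENTS)
    return history

baseline = simulate(shock=0.0)
perturbed = simulate(shock=-0.5)   # price shock at year 30
print("final crop share, baseline vs. perturbed:", baseline[-1], perturbed[-1])
```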
Abstract:
The fishing sector has suffered a strong setback, with reductions in fish stocks and, more recently, in the fishing fleet. One of the most important factors behind this decline is the persistent difficulty of finding fish in sufficient quality and quantity to allow the sector to work steadily all year long. Other factors also affect the sector negatively, in particular the high maintenance costs of the ships and the high daily operating costs of each vessel. One of the main daily costs is fuel consumption: for example, a 30-metre boat working around 17 hours a day consumes about 2,500 litres of fuel per day, a very high figure given the sector's productivity. On this premise, a project was developed with the aim of reducing fuel consumption in fishing vessels. The project, called "ShipTrack", aims to use forecasts of ocean currents in planning ships' routes: exploiting favourable currents and avoiding adverse ones, taking into account the ship's course, in order to reduce fuel consumption and increase ship speed. The methodology involved the creation of dedicated software to optimize routes using forecasts of the ocean currents. These forecasts are produced by numerical modelling, a methodology of growing importance across many communities, since it allows important phenomena throughout the terrestrial ecosystem to be analyzed, verified, and predicted. Although the objective was the creation of this software, its development was not completed, so a new approach was needed to verify the influence of ocean currents on the navigation of the fishing ship "Cruz de Malta". In this new approach, information was gathered continuously during the ship's various routes: instantaneous speed, instantaneous fuel consumption, the state of the ocean currents along the ship's course, and other factors. After 4 sea voyages and many routes analyzed, it was possible to verify the influence of ocean currents on ship speed and fuel consumption. For example, at many stages of the voyages an increase in speed was observed in zones where the ocean currents favoured the ship's movement. This incorporation of new data into the fishing industry was seen positively by its players, which encourages further developments in this industry.
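A small numerical sketch may help convey the mechanism; the cruising speed and current values below are assumptions, while the fuel figure comes from the example in the abstract:

```python
# Illustrative calculation (assumed numbers): effect of a current
# component along the ship's course on speed over ground and on fuel
# burned per nautical mile, holding engine output constant.

FUEL_PER_HOUR = 2500 / 17          # L/h, from the 2500 L over 17 h example
SPEED_THROUGH_WATER = 9.0          # knots, assumed cruising speed

def fuel_per_mile(current_along_course_kn):
    """Fuel (L) per nautical mile for a given along-course current."""
    speed_over_ground = SPEED_THROUGH_WATER + current_along_course_kn
    return FUEL_PER_HOUR / speed_over_ground

for current in (-1.5, 0.0, 1.5):   # knots: against, none, in favour
    print(f"current {current:+.1f} kn -> {fuel_per_mile(current):.1f} L/nm")
```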
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example, discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
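The following toy Python sketch illustrates the general idea of residual-driven relaxation control in an iterative solver; it is not SMARTFIRE's production-rule engine, and the matrix, thresholds, and adjustment factors are invented for illustration:

```python
import numpy as np

# Toy illustration of dynamic relaxation control: solve A x = b with
# relaxed Jacobi iterations, loosening or tightening the relaxation
# factor depending on whether the residual is falling or growing.

rng = np.random.default_rng(0)
n = 50
A = np.diag(np.full(n, 4.0)) + rng.uniform(-1, 1, (n, n)) * 0.02
b = rng.uniform(-1, 1, n)

x = np.zeros(n)
omega = 0.5                      # initial relaxation factor
D = np.diag(A)
prev_res = np.inf
for it in range(200):
    x_new = x + omega * (b - A @ x) / D
    res = np.linalg.norm(b - A @ x_new)
    # Simple "rule": reward convergence, back off on divergence.
    omega = min(1.0, omega * 1.05) if res < prev_res else max(0.1, omega * 0.5)
    x, prev_res = x_new, res
    if res < 1e-10:
        break
print(f"converged in {it + 1} iterations, residual {res:.2e}")
```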
Abstract:
In this paper we consider a neural field model comprised of two distinct populations of neurons, excitatory and inhibitory, for which both the velocities of action potential propagation and the time courses of synaptic processing are different. Using recently developed techniques we construct the Evans function characterising the stability of both stationary and travelling wave solutions, under the assumption that the firing rate function is the Heaviside step. We find that these differences in timing for the two populations can cause instabilities of these solutions, leading to, for example, stationary breathers. We also analyse "anti-pulses," a novel type of pattern for which all but a small interval of the domain (in moving coordinates) is active. These results extend previous work on neural fields with space dependent delays, and demonstrate the importance of considering the effects of the different time-courses of excitatory and inhibitory neural activity.
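For orientation, two-population neural fields of the kind analyzed here typically take the following form (a generic sketch; the paper's specific kernels and parameters may differ):

```latex
% Generic two-population neural field with distinct propagation
% velocities v_b and synaptic time scales alpha_a; H is the Heaviside
% step firing rate with threshold h_b.
\[
  \frac{1}{\alpha_a}\frac{\partial u_a}{\partial t}(x,t)
  = -u_a(x,t)
  + \sum_{b \in \{e,i\}} \int_{-\infty}^{\infty} w_{ab}(x-y)\,
    H\!\Bigl(u_b\Bigl(y,\, t - \tfrac{|x-y|}{v_b}\Bigr) - h_b\Bigr)\,\mathrm{d}y,
  \qquad a \in \{e,i\}.
\]
```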
Abstract:
People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out for one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, a wide group of fungal secondary metabolites known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available on the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old; n=103) were collected using a 3-day food diary. Contamination data concerned the quantification of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) in 20 CBF samples marketed in 2014 and 2015 in Lisbon; samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. Children's daily exposure to mycotoxins was estimated using deterministic and probabilistic approaches, with different strategies used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the estimated exposure was compared to the reference dose values (tolerable daily intake, TDI) in order to calculate hazard quotients (the ratio between exposure and a reference dose, HQ). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used: the combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and for the remaining mycotoxins, respectively. 71% of the analyzed CBF samples were contaminated with mycotoxins (at values below the legal limits), and approximately 56% of the studied children consumed CBF at least once during the 3 days. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. The aflatoxin MoE and MoET revealed a reduced potential risk from exposure through consumption of CBF (with values around 10,000 or more). HQ and HI values for the remaining mycotoxins were below 1. Children are a particularly vulnerable population group with respect to food contaminants, and the present results point to an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible way to control the growth of moulds and consequently reduce mycotoxin production, helping to guarantee the quality and safety of foods intended for consumption by children.
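The risk metrics named above have simple algebraic definitions; the following Python sketch restates them, with placeholder exposure and TDI values that are not the study's data:

```python
# Risk metrics as defined in the abstract; the numbers below are
# hypothetical placeholders, not the study's measurements.

def hazard_quotient(exposure, tdi):
    """HQ = exposure / tolerable daily intake (same units for both)."""
    return exposure / tdi

def hazard_index(hqs):
    """HI for cumulative risk under concentration addition: sum of HQs."""
    return sum(hqs)

def margin_of_exposure(bmdl, exposure):
    """MoE = BMDL / exposure; values around 10,000 or more suggest low concern."""
    return bmdl / exposure

# Hypothetical example for three non-carcinogenic mycotoxins (ng/kg bw/day):
exposures = {"ochratoxin A": 2.0, "fumonisin B1": 150.0, "DON": 300.0}
tdis      = {"ochratoxin A": 17.0, "fumonisin B1": 1000.0, "DON": 1000.0}

hqs = {m: hazard_quotient(exposures[m], tdis[m]) for m in exposures}
print("HQs:", hqs)
print("HI :", hazard_index(hqs.values()))
```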
Abstract:
Cellular models are important tools in various research areas related to colorectal biology and associated diseases. Herein, we review the most widely used cell lines and the different techniques to grow them, either as cell monolayer, polarized two-dimensional epithelia on membrane filters, or as three-dimensional spheres in scaffold-free or matrix-supported culture conditions. Moreover, recent developments, such as gut-on-chip devices or the ex vivo growth of biopsy-derived organoids, are also discussed. We provide an overview on the potential applications but also on the limitations for each of these techniques, while evaluating their contribution to provide more reliable cellular models for research, diagnostic testing, or pharmacological validation related to colon physiology and pathophysiology.
Abstract:
It is still difficult to perform an early and accurate diagnosis of dementia, so much research focuses on finding new dementia biomarkers that can aid this purpose, ideally through noninvasive, rapid, and relatively inexpensive procedures. Several studies have demonstrated that spectroscopic techniques, such as Fourier Transform Infrared (FTIR) spectroscopy and Raman spectroscopy, could provide a useful and accurate procedure for diagnosing dementia. As several biochemical mechanisms related to neurodegeneration and dementia can lead to changes in plasma components and other peripheral body fluids, blood-based samples combined with spectroscopic analysis can serve as a simpler and less invasive technique. This work is intended to confirm some of the hypotheses of previous studies in which FTIR was used to study plasma samples of possible Alzheimer's disease (AD) patients and respective controls, and to verify the reproducibility of this spectroscopic technique in the analysis of such samples. Spectroscopic analysis combined with multivariate analysis makes it possible to discriminate control from demented samples and to identify key spectroscopic differences between these two groups, which allows the identification of metabolites altered in this disease. It can be concluded that there are three spectral regions, 3500-2700 cm-1, 1800-1400 cm-1 and 1200-900 cm-1, from which relevant spectroscopic information can be extracted. In the first region, the main conclusion is that there is an imbalance between the content of saturated and unsaturated lipids. In the 1800-1400 cm-1 region it is possible to see the presence of protein aggregates and a change in protein conformation towards highly stable parallel β-sheet. The last region showed the presence of products of lipid peroxidation related to membrane impairment, as well as oxidative damage to nucleic acids. The FTIR technique and the information gathered in this work can be used to build classification models for the diagnosis of cognitive dysfunction.
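As an illustration of the kind of multivariate discrimination described, the sketch below applies PCA (via scikit-learn) to simulated spectra with an artificial band shift; the data and the shift are invented, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative only: discriminating two groups of FTIR-like spectra
# with PCA. Spectra are simulated; real work would use baseline-
# corrected, normalized plasma spectra.

rng = np.random.default_rng(1)
wavenumbers = np.linspace(900, 3500, 600)

def spectrum(peak_shift):
    # Gaussian band near 1650 cm-1 (amide I-like), plus noise.
    base = np.exp(-((wavenumbers - 1650 - peak_shift) ** 2) / 2e3)
    return base + rng.normal(0, 0.01, wavenumbers.size)

controls = np.array([spectrum(0.0) for _ in range(20)])
patients = np.array([spectrum(8.0) for _ in range(20)])   # simulated shift

X = np.vstack([controls, patients])
scores = PCA(n_components=2).fit_transform(X)
print("mean PC1, controls :", scores[:20, 0].mean())
print("mean PC1, patients :", scores[20:, 0].mean())
```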
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
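To make the dynamic-programming idea concrete, the following minimal Python sketch computes value functions and link choice probabilities on a tiny invented acyclic network via the logsumexp recursion commonly used in recursive route-choice models; the network and utilities are illustrative assumptions, not the thesis's models:

```python
import math

# On an acyclic network, the expected maximum utility ("value") at each
# node satisfies a logsumexp recursion over outgoing links; link choice
# probabilities then follow from the same quantities.

arcs = {            # node -> list of (next_node, instantaneous utility)
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],        # destination
}

V = {"D": 0.0}
for node in ("B", "C", "A"):          # reverse topological order
    V[node] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in arcs[node]))

probs = {nxt: math.exp(u + V[nxt] - V["A"]) for nxt, u in arcs["A"]}
print("V:", {k: round(v, 3) for k, v in V.items()})
print("P(first link from A):", {k: round(p, 3) for k, p in probs.items()})
```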
Abstract:
Spinal cord injury (SCI) is a devastating condition resulting from trauma to the cord, in which a primary injury response leads to a secondary injury cascade that damages both glial and neuronal cells. Following trauma, the central nervous system (CNS) fails to regenerate due to a plethora of both intrinsic and extrinsic factors. These events lead to loss of both motor and sensory function and to lifelong disability and care for those living with SCI. Tremendous advances have been made in our understanding of the mechanisms behind axonal regeneration and remyelination of the damaged cord, and these have provided many promising therapeutic targets. However, very few have reached clinical application, potentially due to inadequate understanding of compound mechanisms of action and reliance on poor SCI models. This thesis describes the use of an established neural cell co-culture model of SCI as a medium-throughput screen for compounds with potential therapeutic properties. A number of compounds were screened, and one family of compounds, modified heparins, was taken forward for more intensive investigation. Modified heparins (mHeps) are made up of the core heparin disaccharide unit with variable sulphation groups on the iduronic acid and glucosamine residues: 2-O-sulphate (C2), 6-O-sulphate (C6) and N-sulphate (N). The 2-O-sulphated (mHep6) and N-sulphated (mHep7) heparin isomers were shown to promote both neurite outgrowth and myelination in the SCI model. Both mHeps decreased oligodendrocyte precursor cell (OPC) proliferation and increased oligodendrocyte (OL) number adjacent to the lesion. However, the direct effects on the OL differ between the mHeps: mHep6 increased myelin internode length and mHep7 increased overall cell size. It was further shown that these isoforms interact with and mediate both Wnt and FGF signalling. In OPC monoculture experiments, FGF2-treated OPCs displayed increased proliferation, but this effect was abolished by co-treatment with the mHeps, suggesting that the mHeps interact with the ligand and inhibit FGF2 signalling. Additionally, both mHeps appear to mediate their effects partially through the Wnt pathway: their effects on both myelination and neurite outgrowth were removed by co-treatment with a Wnt signalling inhibitor, suggesting cell signalling mediation by ligand immobilisation and signalling activation as a mechanism of action for the mHeps. However, the initial methods employed in this thesis were not sufficient for a more detailed study of the effects of the mHeps on neurite outgrowth. This led to the design and development of a novel microfluidic device (MFD), which provides a platform for the study of axonal injury. The device has three chambers, with two chambers converging onto a central open-access chamber. This design allows axons from two points of origin to enter a chamber that can be subjected to injury, providing a platform in which targeted axonal injury can be inflicted and the regenerative capacity conferred by a compound can be studied. In conclusion, this thesis contributes to and advances the study of SCI in two ways: 1) the identification and investigation of a novel set of compounds with therapeutic potential, i.e. desulphated modified heparins.
These compounds have multiple therapeutic properties and could both revolutionise the understanding of the basic pathological mechanisms underlying SCI and serve as a powerful therapeutic option. 2) The development of a novel microfluidic device to study axonal biology in greater detail, specifically targeted axonal injury and treatment, providing a more representative model of SCI than standard in vitro models. The MFD could therefore lead to advances and to the identification of factors and compounds relevant to axonal regeneration.
Abstract:
A parameterization of mesoscale eddy fluxes in the ocean should be consistent with the fact that the ocean interior is nearly adiabatic. Gent and McWilliams have described a framework in which this can be approximated in z-coordinate primitive equation models by incorporating the effects of eddies on the buoyancy field through an eddy-induced velocity. It is also natural to base a parameterization on the simple picture of the mixing of potential vorticity in the interior and the mixing of buoyancy at the surface. The authors discuss the various constraints imposed by these two requirements and attempt to clarify the appropriate boundary conditions on the eddy-induced velocities at the surface. Quasigeostrophic theory is used as a guide to the simplest way of satisfying these constraints.
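For orientation, the Gent-McWilliams eddy-induced transport discussed here is commonly written as follows (a textbook sketch, not a restatement of the authors' derivation):

```latex
% Eddy-induced velocity from the GM closure: kappa is the eddy
% transfer coefficient, b the buoyancy, S the isopycnal slope.
\[
  \mathbf{u}^{*} = -\frac{\partial}{\partial z}\bigl(\kappa\,\mathbf{S}\bigr),
  \qquad
  w^{*} = \nabla_{h}\!\cdot\bigl(\kappa\,\mathbf{S}\bigr),
  \qquad
  \mathbf{S} = -\,\frac{\nabla_{h} b}{\partial b/\partial z}.
\]
% The boundary conditions discussed in the paper amount to requiring
% kappa S -> 0 at the surface, so that the eddy-induced velocity has
% no flow through the boundary.
```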
Abstract:
Background: Depression is a major health problem worldwide and the majority of patients presenting with depressive symptoms are managed in primary care. Current approaches for assessing depressive symptoms in primary care are not accurate in predicting future clinical outcomes, which may potentially lead to over- or under-treatment. The Allostatic Load (AL) theory suggests that by measuring multi-system biomarker levels as a proxy for multi-system physiological dysregulation, it is possible to identify individuals at risk of adverse health outcomes at a prodromal stage. The Allostatic Index (AI) score, calculated by applying statistical formulations to different multi-system biomarkers, has been associated with depressive symptoms. Aims and Objectives: To test the hypothesis that a combination of allostatic load (AL) biomarkers will form a predictive algorithm in defining clinically meaningful outcomes in a population of patients presenting with depressive symptoms. The key objectives were: 1. To explore the relationship between various allostatic load biomarkers and the prevalence of depressive symptoms in patients, especially in patients diagnosed with three common cardiometabolic diseases (coronary heart disease (CHD), diabetes and stroke). 2. To explore whether allostatic load biomarkers predict clinical outcomes in patients with depressive symptoms, especially in patients with the three common cardiometabolic diseases (CHD, diabetes and stroke). 3. To develop a predictive tool to identify individuals with depressive symptoms at highest risk of adverse clinical outcomes. Methods: Datasets used: 'DepChron' was a dataset of 35,537 patients with existing cardiometabolic disease collected as a part of routine clinical practice. 'Psobid' was a research data source containing health-related information on 666 participants recruited from the general population. The clinical outcomes for both datasets were studied using electronic data linkage to hospital and mortality health records, undertaken by the Information Services Division, Scotland. Cross-sectional associations between allostatic load biomarkers calculated at baseline and the clinical severity of depression assessed by a symptom score were assessed using logistic and linear regression models in both datasets. Cox proportional hazards survival models were used to assess the relationship between allostatic load biomarkers at baseline and the risk of adverse physical health outcomes at follow-up in patients with depressive symptoms. The possibility of an interaction between depressive symptoms and allostatic load biomarkers in the risk prediction of adverse clinical outcomes was studied using the analysis of variance (ANOVA) test. Finally, the value of constructing a risk scoring scale using patient demographics and allostatic load biomarkers for predicting adverse outcomes in depressed patients was investigated using clinical risk prediction modelling and Area Under Curve (AUC) statistics. Key Results: Literature Review Findings: The literature review showed that twelve blood-based peripheral biomarkers were statistically significant in predicting six different clinical outcomes in participants with depressive symptoms. Outcomes related to both mental health (depressive symptoms) and physical health were statistically associated with pre-treatment levels of peripheral biomarkers; however, only two studies investigated outcomes related to physical health.
Cross-sectional Analysis Findings: In DepChron, dysregulation of individual allostatic biomarkers (mainly cardiometabolic) was found to have a non-linear association with an increased probability of co-morbid depressive symptoms (as assessed by Hospital Anxiety and Depression Score HADS-D≥8). A composite AI score constructed using five biomarkers did not lead to any improvement in the observed strength of the association. In Psobid, BMI was found to have a significant cross-sectional association with the probability of depressive symptoms (assessed by General Health Questionnaire GHQ-28≥5). BMI, triglycerides, high-sensitivity C-reactive protein (CRP) and high-density lipoprotein (HDL) cholesterol were found to have a significant cross-sectional relationship with the continuous measure of GHQ-28. A composite AI score constructed using 12 biomarkers did not show a significant association with depressive symptoms among Psobid participants. Longitudinal Analysis Findings: In DepChron, three clinical outcomes were studied over four years: all-cause death, all-cause hospital admissions and a composite major adverse cardiovascular outcome, MACE (cardiovascular death or admission due to MI/stroke/HF). The presence of depressive symptoms and a composite AI score calculated using mainly peripheral cardiometabolic biomarkers were found to have a significant association with all three clinical outcomes over the following four years in DepChron patients. There was no evidence of an interaction between AI score and the presence of depressive symptoms in the risk prediction of any of the three clinical outcomes. There was a statistically significant interaction between SBP and depressive symptoms in the risk prediction of the major adverse cardiovascular outcome, and also between HbA1c and depressive symptoms in the risk prediction of all-cause mortality for patients with diabetes. In Psobid, depressive symptoms (assessed by GHQ-28≥5) did not have a statistically significant association with any of the four outcomes under study at seven years: all-cause death, all-cause hospital admission, MACE and incidence of new cancer. A composite AI score at baseline had a significant association with the risk of MACE at seven years, after adjusting for confounders. A continuous measure of IL-6 at baseline had a significant association with the risk of three clinical outcomes: all-cause mortality, all-cause hospital admissions and major adverse cardiovascular events. Raised total cholesterol at baseline was associated with a lower risk of all-cause death at seven years, while a raised waist-hip ratio (WHR) at baseline was associated with a higher risk of MACE at seven years among Psobid participants. There was no significant interaction between depressive symptoms and peripheral biomarkers (individual or combined) in the risk prediction of any of the four clinical outcomes under consideration. Risk Scoring System Development: In the DepChron cohort, a scoring system was constructed based on eight baseline demographic and clinical variables to predict the risk of MACE over four years. The AUC value for the risk scoring system was modest at 56.7% (95% CI 55.6 to 57.5%). In Psobid, it was not possible to perform this analysis due to the low event rate observed for the clinical outcomes. Conclusion: Individual peripheral biomarkers were found to have a cross-sectional association with depressive symptoms both in patients with cardiometabolic disease and in middle-aged participants recruited from the general population.
AI score calculated with different statistical formulations was of no greater benefit in predicting concurrent depressive symptoms or clinical outcomes at follow-up, over and above its individual constituent biomarkers, in either patient cohort. SBP had a significant interaction with depressive symptoms in predicting cardiovascular events in patients with cardiometabolic disease; HbA1c had a significant interaction with depressive symptoms in predicting all-cause mortality in patients with diabetes. Peripheral biomarkers may have a role in predicting clinical outcomes in patients with depressive symptoms, especially for those with existing cardiometabolic disease, and this merits further investigation.
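One common formulation of an allostatic index is a count of biomarkers falling in a high-risk quartile; the Python sketch below shows that construction on simulated data for orientation only, since the thesis compares several statistical formulations that may differ from this one:

```python
import numpy as np

# Illustrative allostatic index: count, per subject, of biomarkers in
# the highest-risk quartile. Data are simulated, not the thesis's.

rng = np.random.default_rng(2)
biomarkers = rng.normal(size=(500, 12))    # 500 subjects x 12 markers

cutoffs = np.quantile(biomarkers, 0.75, axis=0)   # 75th percentile per marker
ai_score = (biomarkers >= cutoffs).sum(axis=1)    # count of high-risk markers

print("AI score range:", ai_score.min(), "to", ai_score.max())
```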
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic, and usually time-consuming. Consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model's variables with constants. These reduced alternatives can be compared to the original model, using data-based model selection criteria, to assist in the identification of potentially unnecessary model complexity, and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated, and ranked according to five model selection criteria: Residual Sum of Squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model in which pH dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model in which model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models have lower prediction sums of squares than the original model, suggesting that the latter may be overfitted. The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations, which can be compared.
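The enumeration idea is easy to sketch in code: with k candidate variables, keeping or replacing each one yields 2^k reduced formulations (2^10 = 1024, matching the count above). The linear model and data below are simulated stand-ins, not the radiocaesium model itself:

```python
import itertools
import numpy as np

# Sketch of the enumeration: each of k candidate variables is either
# kept or replaced by a constant (absorbed into the intercept here),
# giving 2**k reduced formulations ranked by a selection criterion.

rng = np.random.default_rng(3)
k, n = 10, 80
X = rng.normal(size=(n, k))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)  # 2 real effects

def fit_rss(active):
    cols = [X[:, j] for j in active] + [np.ones(n)]   # kept variables + intercept
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    resid = y - np.column_stack(cols) @ beta
    return float(resid @ resid), len(active) + 1

def aicc(rss, p):
    return n * np.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

ranked = sorted(
    (aicc(*fit_rss(active)), active)
    for r in range(k + 1)
    for active in itertools.combinations(range(k), r)
)
print("best subset by AICc:", ranked[0][1])
```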
Abstract:
The presence of gap junction coupling among neurons of the central nervous system has been appreciated for some time now. In recent years there has been an upsurge of interest from the mathematical community in understanding the contribution of these direct electrical connections between cells to large-scale brain rhythms. Here we analyze a class of exactly soluble single neuron models, capable of producing realistic action potential shapes, that can be used as the basis for understanding dynamics at the network level. This work focuses on planar piece-wise linear models that can mimic the firing response of several different cell types. Under constant current injection the periodic response and phase response curve (PRC) are calculated in closed form. A simple formula for the stability of a periodic orbit is found using Floquet theory. From the calculated PRC and the periodic orbit a phase interaction function is constructed that allows the investigation of phase-locked network states using the theory of weakly coupled oscillators. For large networks with global gap junction connectivity we develop a theory of strong coupling instabilities of the homogeneous, synchronous and splay state. For a piece-wise linear caricature of the Morris-Lecar model, with oscillations arising from a homoclinic bifurcation, we show that large amplitude oscillations in the mean membrane potential are organized around such unstable orbits.
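For orientation, the planar piecewise-linear setting has the following generic structure (a sketch of the general technique; the paper's specific model caricatures may differ):

```latex
% In each linear regime mu the flow is affine and solvable in closed
% form, so periodic orbits, PRCs and Floquet multipliers can be
% assembled by matching solutions across switching boundaries.
\[
  \dot{\mathbf{z}} = A_{\mu}\,\mathbf{z} + \mathbf{b}_{\mu},
  \qquad \mathbf{z} = (v, w)^{\top},
\]
\[
  \mathbf{z}(t) = e^{A_{\mu} t}\,\mathbf{z}(0)
  + A_{\mu}^{-1}\bigl(e^{A_{\mu} t} - I\bigr)\mathbf{b}_{\mu}
  \quad (A_{\mu}\ \text{invertible}),
\]
% where the index mu selects the active linear regime according to the
% region of the (v, w) plane occupied by the trajectory.
```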