830 results for Network model
Abstract:
Importance In treatment-resistant schizophrenia, clozapine is considered the standard treatment. However, clozapine use has restrictions owing to its many adverse effects. Moreover, an increasing number of randomized clinical trials (RCTs) of other antipsychotics have been published. Objective To integrate all the randomized evidence from the available antipsychotics used for treatment-resistant schizophrenia by performing a network meta-analysis. Data Sources MEDLINE, EMBASE, Biosis, PsycINFO, PubMed, Cochrane Central Register of Controlled Trials, World Health Organization International Trial Registry, and clinicaltrials.gov were searched up to June 30, 2014. Study Selection At least 2 independent reviewers selected published and unpublished single- and double-blind RCTs in treatment-resistant schizophrenia (any study-defined criterion) that compared any antipsychotic (at any dose and in any form of administration) with another antipsychotic or placebo. Data Extraction and Synthesis At least 2 independent reviewers extracted all data into standard forms and assessed the quality of all included trials with the Cochrane Collaboration's risk-of-bias tool. Data were pooled using a random-effects model in a Bayesian setting. Main Outcomes and Measures The primary outcome was efficacy as measured by overall change in symptoms of schizophrenia. Secondary outcomes included change in positive and negative symptoms of schizophrenia, categorical response to treatment, dropouts for any reason and for inefficacy of treatment, and important adverse events. Results Forty blinded RCTs with 5172 unique participants (71.5% men; mean [SD] age, 38.8 [3.7] years) were included in the analysis. Few significant differences were found across all outcomes. In the primary outcome (reported as standardized mean difference; 95% credible interval), olanzapine was more effective than quetiapine (-0.29; -0.56 to -0.02), haloperidol (-0.29; -0.44 to -0.13), and sertindole (-0.46; -0.80 to -0.06); clozapine was more effective than haloperidol (-0.22; -0.38 to -0.07) and sertindole (-0.40; -0.74 to -0.04); and risperidone was more effective than sertindole (-0.32; -0.63 to -0.01). A pattern of superiority for olanzapine, clozapine, and risperidone was seen in other efficacy outcomes, but results were not consistent and effect sizes were usually small. In addition, relatively few RCTs were available for antipsychotics other than clozapine, haloperidol, olanzapine, and risperidone. The most surprising finding was that clozapine was not significantly better than most other drugs. Conclusions and Relevance Insufficient evidence exists on which antipsychotic is more efficacious for patients with treatment-resistant schizophrenia, and blinded RCTs (in contrast to unblinded, randomized effectiveness studies) provide little evidence of the superiority of clozapine compared with other second-generation antipsychotics. Future clozapine studies using high doses and enrolling patients with extremely treatment-refractory schizophrenia might be the most promising way to change the current evidence.
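For orientation, the random-effects Bayesian pooling described above typically takes the following generic form (notation ours; the authors' exact parameterization is not reproduced here): arm-level outcomes vary around trial-specific baselines, trial-level relative effects are drawn from a common distribution, and consistency is imposed on the pooled contrasts through a reference treatment.

```latex
y_{ik} \sim \mathcal{N}(\theta_{ik},\, s_{ik}^2), \qquad
\theta_{ik} = \mu_i + \delta_{i,bk}\,\mathbf{1}[k \neq b], \qquad
\delta_{i,bk} \sim \mathcal{N}(d_{bk},\, \tau^2), \qquad
d_{bk} = d_{Ak} - d_{Ab}
```

Here y_{ik} is the observed arm-level summary in arm k of trial i (from which standardized mean differences are formed), mu_i is the trial baseline, d_{bk} is the pooled effect of treatment k versus b expressed via reference treatment A, and tau is the between-trial heterogeneity.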
Abstract:
Intravital imaging has revealed that T cells change their migratory behavior during physiological activation inside lymphoid tissue. Yet, it remains less well investigated how the intrinsic migratory capacity of activated T cells is regulated by chemokine receptor levels or other regulatory elements. Here, we used an adjuvant-driven inflammation model to examine how motility patterns corresponded with CCR7, CXCR4, and CXCR5 expression levels on ovalbumin-specific DO11.10 CD4(+) T cells in draining lymph nodes. We found that while CCR7 and CXCR4 surface levels remained essentially unaltered during the first 48-72 h after activation of CD4(+) T cells, their in vitro chemokinetic and directed migratory capacity to the respective ligands, CCL19, CCL21, and CXCL12, was substantially reduced during this time window. Activated T cells recovered from this temporary decrease in motility on day 6 post immunization, coinciding with increased migration to the CXCR5 ligand CXCL13. The transiently impaired CD4(+) T cell motility pattern correlated with increased LFA-1 expression and augmented phosphorylation of the microtubule regulator Stathmin on day 3 post immunization, yet neither microtubule destabilization nor integrin blocking could reverse TCR-imprinted unresponsiveness. Furthermore, protein kinase C (PKC) inhibition did not restore chemotactic activity, ruling out PKC-mediated receptor desensitization as a mechanism for reduced migration in activated T cells. Thus, we identify a cell-intrinsic, chemokine receptor level-uncoupled decrease in motility in CD4(+) T cells shortly after activation, coinciding with clonal expansion. The transiently reduced ability to react to chemokinetic and chemotactic stimuli may contribute to the sequestering of activated CD4(+) T cells in reactive peripheral lymph nodes, allowing for integration of costimulatory signals required for full activation.
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily dose were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes concerning seven different NSAIDs or paracetamol with specific daily dose of administration or placebo were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference to placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability to be the best intervention, both with 100% probability to reach the minimum clinically important difference. Treatment effects increased as drug dose increased, but corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients. 
FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
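As a hedged illustration of "assuming linearity on log relative dose" (generic notation, not the authors' exact specification), the pooled effect of preparation k at a given daily dose can be written as a reference effect plus a preparation-specific slope on the log of the relative dose:

```latex
d_{k}(\text{dose}) = d_{k,\mathrm{ref}} + \beta_k \,\log\!\left(\frac{\text{dose}}{\text{dose}_{\mathrm{ref}}}\right)
```

Testing beta_k = 0 then corresponds to the reported preparation-specific dose-effect tests (e.g. p=0·030 for celecoxib).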
Abstract:
INTRODUCTION Despite important advances in psychological and pharmacological treatments of persistent depressive disorders in the past decades, their responses remain typically slow and poor, and differential responses among different modalities of treatments or their combinations are not well understood. Cognitive-Behavioural Analysis System of Psychotherapy (CBASP) is the only psychotherapy that has been specifically designed for chronic depression and has been examined in an increasing number of trials against medications, alone or in combination. When several treatment alternatives are available for a certain condition, network meta-analysis (NMA) provides a powerful tool to examine their relative efficacy by combining all direct and indirect comparisons. Individual participant data (IPD) meta-analysis enables exploration of impacts of individual characteristics that lead to a differentiated approach matching treatments to specific subgroups of patients. METHODS AND ANALYSIS We will search for all randomised controlled trials that compared CBASP, pharmacotherapy or their combination, in the treatment of patients with persistent depressive disorder, in Cochrane CENTRAL, PUBMED, SCOPUS and PsycINFO, supplemented by personal contacts. Individual participant data will be sought from the principal investigators of all the identified trials. Our primary outcomes are depression severity as measured on a continuous observer-rated scale for depression, and dropouts for any reason as a proxy measure of overall treatment acceptability. We will conduct a one-step IPD-NMA to compare CBASP, medications and their combinations, and also carry out a meta-regression to identify their prognostic factors and effect moderators. The model will be fitted in OpenBUGS, using vague priors for all location parameters. For the heterogeneity we will use a half-normal prior on the SD. ETHICS AND DISSEMINATION This study requires no ethical approval. We will publish the findings in a peer-reviewed journal. The study results will contribute to more finely differentiated therapeutics for patients suffering from this chronically disabling disorder. TRIAL REGISTRATION NUMBER CRD42016035886.
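For illustration only, the prior structure sketched in this protocol (vague priors on location parameters, a half-normal prior on the heterogeneity SD) could be written generically as follows; the actual hyperparameter values are not stated in the abstract and the ones below are placeholders:

```latex
d_k \sim \mathcal{N}(0,\, 10^4), \qquad
\delta_i \sim \mathcal{N}\!\left(d_{t(i)},\, \tau^2\right), \qquad
\tau \sim \text{Half-Normal}(0,\, \sigma_0^2)
```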
Abstract:
Introduction. Tissue engineering techniques offer a potential means to develop a tissue engineered construct (TEC) for the treatment of tissue and organ deficiencies. However, a lack of adequate vascularization is a limiting factor in the development of most viable engineered tissues. Vascular endothelial growth factor (VEGF) could aid in the development of a viable vascular network within TECs. The long-term goals of this research are to develop clinically relevant, appropriately vascularized TECs for use in humans. This project tested the hypothesis that the delivery of VEGF via controlled release from biodegradable microspheres would increase the vascular density and rate of angiogenesis within a model TEC. Materials and methods. Biodegradable VEGF-encapsulated microspheres were manufactured using a novel method entitled the Solid Encapsulation/Single Emulsion/Solvent Extraction technique. Using a PLGA/PEG polymer blend, microspheres were manufactured and characterized in vitro. A model TEC using fibrin was designed for in vivo tissue engineering experimentation. At the appropriate timepoint, the TECs were explanted, stained, and quantified for CD31 using a novel semi-automated thresholding technique. Results. In vitro results show that the microspheres could be manufactured and stored, and that they degrade and release biologically active VEGF. The in vivo investigations revealed that skeletal muscle was the optimal implantation site as compared to dermis. In addition, the TECs containing fibrin with VEGF demonstrated significantly more angiogenesis than the controls. The TECs containing VEGF microspheres displayed a significant increase in vascular density by day 10. Furthermore, TECs containing VEGF microspheres had a significantly increased relative rate of angiogenesis from implantation day 5 to day 10. Conclusions. A novel technique for producing microspheres loaded with biologically active proteins was developed. A defined concentration of microspheres can deliver a quantifiable level of VEGF with known release kinetics. A novel model TEC for in vivo tissue engineering investigations was developed. VEGF and VEGF microspheres stimulate angiogenesis within the model TEC. This investigation determined that biodegradable rhVEGF165-encapsulated microspheres increased the vascular density and relative rate of angiogenesis within a model TEC. Future applications could include the incorporation of microvascular fragments into the model TEC and the incorporation of specific tissues, such as fat or bone.
Abstract:
Hierarchically clustered populations are often encountered in public health research, but the traditional methods used in analyzing this type of data are not always adequate. In the case of survival time data, more appropriate methods have only begun to surface in the last couple of decades. Such methods include multilevel statistical techniques which, although more complicated to implement than traditional methods, are more appropriate. One population that is known to exhibit a hierarchical structure is that of patients who utilize the health care system of the Department of Veterans Affairs, where patients are grouped not only by hospital, but also by geographic network (VISN). This project analyzes survival time data sets housed at the Houston Veterans Affairs Medical Center Research Department using two different Cox proportional hazards regression models, a traditional model and a multilevel model. VISNs that exhibit significantly higher or lower survival rates than the rest are identified separately for each model. In this particular case, although there are differences in the results of the two models, they are not enough to warrant using the more complex multilevel technique. This is shown by the small estimates of variance associated with levels two and three in the multilevel Cox analysis. Much of the difference exhibited in the identification of VISNs with high or low survival rates is attributable to computer hardware difficulties rather than to any significant improvement in the model.
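A sketch of the two model families being compared, in our notation: the traditional Cox proportional hazards model, and a multilevel extension that adds normally distributed random effects for the hospital and VISN levels (it is the small estimated variances of these random effects that argue against the more complex model here):

```latex
h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\big(\mathbf{x}^{\top}\boldsymbol\beta\big)
\qquad\text{vs.}\qquad
h(t \mid \mathbf{x}_{ijk}) = h_0(t)\,\exp\!\big(\mathbf{x}_{ijk}^{\top}\boldsymbol\beta + u_k + v_{jk}\big),
\quad u_k \sim \mathcal{N}(0,\sigma_u^2),\; v_{jk} \sim \mathcal{N}(0,\sigma_v^2)
```

with patient i nested in hospital j within VISN k.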
Abstract:
The Everglades Depth Estimation Network (EDEN) is an integrated network of realtime water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-square-meter spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to: (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP) (U.S. Army Corps of Engineers, 1999). The target users are biologists and ecologists examining trophic level responses to hydrodynamic changes in the Everglades. The first objective of this report is to validate the spatially continuous EDEN water-surface model for the Everglades, Florida, developed by Pearlstine et al. (2007), by using an independent field-measured dataset. The second objective is to demonstrate two applications of the EDEN water-surface model: to estimate site-specific ground elevation by using the validated EDEN water-surface model and observed water depth data; and to create water-depth hydrographs for tree islands. We found that there are no statistically significant differences between model-predicted and field-observed water-stage data in either southern Water Conservation Area (WCA) 3A or WCA 3B. Tree island elevations were derived by subtracting field water-depth measurements from the predicted EDEN water surface. Water-depth hydrographs were then computed by subtracting tree island elevations from the EDEN water stage. Overall, the model is reliable, with a root mean square error (RMSE) of 3.31 cm. By region, the RMSE is 2.49 cm and 7.77 cm in WCA 3A and 3B, respectively. This new landscape-scale hydrological model has wide applications for ongoing research and management efforts that are vital to restoration of the Florida Everglades. The accurate, high-resolution hydrological data, generated over broad spatial and temporal scales by the EDEN model, provide a previously missing key to understanding the habitat requirements and linkages among native and invasive populations, including fish, wildlife, wading birds, and plants. The EDEN model is a powerful tool that could be adapted for other ecosystem-scale restoration and management programs worldwide.
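The two applications and the validation metric described above reduce to simple arithmetic on the EDEN surfaces; a minimal Python sketch (variable names and inputs are hypothetical, not part of the EDEN software):

```python
import numpy as np

def ground_elevation(eden_stage_at_site, observed_depth):
    """Site ground elevation estimated by subtracting a field-measured water depth
    from the EDEN-modeled water-surface elevation (same datum and units)."""
    return eden_stage_at_site - observed_depth

def depth_hydrograph(eden_stage_series, site_ground_elevation):
    """Daily water-depth hydrograph: EDEN water stage minus the fixed ground elevation."""
    return np.asarray(eden_stage_series) - site_ground_elevation

def rmse(modeled_stage, observed_stage):
    """Root mean square error between modeled and field-observed stage, e.g. in cm."""
    resid = np.asarray(modeled_stage) - np.asarray(observed_stage)
    return float(np.sqrt(np.mean(resid ** 2)))
```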
Abstract:
Genome-wide association studies (GWAS) have rapidly become a standard method for disease gene discovery. Many recent GWAS indicate that for most disorders, only a few common variants are implicated and the associated SNPs explain only a small fraction of the genetic risk. The current study incorporated gene network information into gene-based analysis of GWAS data for Crohn's disease (CD). The purpose was to develop statistical models to boost the power of identifying disease-associated genes and gene subnetworks by maximizing the use of existing biological knowledge from multiple sources. The results revealed that a Markov random field (MRF)-based mixture model incorporating direct neighborhood information from a single gene network is not efficient in identifying CD-related genes based on the GWAS data. The reliance on direct neighborhood information alone might explain the low efficiency of these models. Alternative MRF models that look beyond direct neighbors will need to be developed in future work to serve the purpose of this study.
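For context, an "MRF-based mixture model incorporating direct neighborhood information" commonly refers to a construction of the following generic form (our notation, not necessarily the exact model evaluated): gene-level association statistics follow a two-component mixture conditional on a latent disease-association label, and the labels are given a Markov random field prior whose interaction term runs only over edges between direct network neighbors.

```latex
y_i \mid S_i = s \sim f_s(y_i), \qquad
P(S) \propto \exp\!\Big(\gamma \sum_i S_i \;+\; \lambda \sum_{(i,j)\in E} \mathbf{1}[S_i = S_j]\Big)
```

The prior encodes dependence only between immediate neighbors in the edge set E, which is the "direct neighborhood information" referred to above.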
Abstract:
The genomic era brought by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power in discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and increasing completeness of many gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal mixture model-based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, which helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
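Schematically, the combination described adds the network evidence to the mixture-model evidence on the log-odds scale, under the naïve Bayes assumption that the gene-level selection statistic y_i and the network score are conditionally independent given the selection status S_i of gene i (our notation, not the author's exact formula):

```latex
\log\frac{P(S_i = 1 \mid y_i, \text{net})}{P(S_i = 0 \mid y_i, \text{net})}
= \underbrace{\log\frac{\pi\, f_1(y_i)}{(1-\pi)\, f_0(y_i)}}_{\text{normal mixture posterior log odds}}
\;+\;
\underbrace{\log\frac{P(\text{net}_i \mid S_i = 1)}{P(\text{net}_i \mid S_i = 0)}}_{\text{network (Guilt-By-Association) term}}
```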
Neocortical hyperexcitability defect in a mutant mouse model of spike-wave epilepsy, stargazer
Abstract:
Single-locus mutations in mice can express epileptic phenotypes and provide critical insights into the naturally occurring defects that alter excitability and mediate synchronization in the central nervous system (CNS). One such recessive mutation on chromosome (Chr) 15, stargazer (stg/stg), expresses frequent bilateral 6-7 cycles per second (c/sec) spike-wave seizures associated with behavioral arrest, and provides a valuable opportunity to examine the inherited lesion associated with spike-wave synchronization. The existence of distinct and heterogeneous defects mediating spike-wave discharge (SWD) generation has been demonstrated by the presence of multiple genetic loci expressing generalized spike-wave activity and the differential effects of pharmacological agents on SWDs in different spike-wave epilepsy models. Attempts at understanding the different basic mechanisms underlying spike-wave synchronization have focused on γ-aminobutyric acid (GABA) receptor-, low-threshold T-type Ca²⁺ channel-, and N-methyl-D-aspartate receptor (NMDA-R)-mediated transmission. It is believed that defects in these modes of transmission can mediate the conversion of normal oscillations in a trisynaptic circuit, which includes the neocortex, reticular nucleus, and thalamus, into spike-wave activity. However, the underlying lesions involved in spike-wave synchronization have not been clearly identified. The purpose of this research project was to locate and characterize a distinct neuronal hyperexcitability defect favoring spike-wave synchronization in the stargazer brain. One experimental approach for anatomically locating areas of synchronization and hyperexcitability involved an attempt to map patterns of hypersynchronous activity with antibodies to activity-induced proteins. A second approach to characterizing the neuronal defect involved examining the neuronal responses in the mutant following application of pharmacological agents with well-known sites of action. In order to test the hypothesis that an NMDA receptor-mediated hyperexcitability defect exists in stargazer neocortex, extracellular field recordings were used to examine the effects of CPP and MK-801 on coronal neocortical brain slices of stargazer and wild type perfused with 0 Mg²⁺ artificial cerebrospinal fluid (aCSF). To study how NMDA receptor antagonists might promote increased excitability in stargazer neocortex, two basic hypotheses were tested: (1) NMDA receptor antagonists directly activate deep-layer principal pyramidal cells in the neocortex of stargazer, presumably by opening NMDA receptor channels altered by the stg mutation; and (2) NMDA receptor antagonists disinhibit the neocortical network by blocking recurrent excitatory synaptic inputs onto inhibitory interneurons in the deep layers of stargazer neocortex. In order to test whether CPP might disinhibit the 0 Mg²⁺ bursting network in the mutant by acting on inhibitory interneurons, the inhibitory inputs were pharmacologically removed by application of GABA receptor antagonists to the cortical network, and the effects of CPP under 0 Mg²⁺ aCSF perfusion in layer V of stg/stg were then compared with those found in +/+ neocortex using in vitro extracellular field recordings. (Abstract shortened by UMI.)
Abstract:
Interpretation of ice-core records requires accurate knowledge of the past and present surface topography and stress-strain fields. The European Project for Ice Coring in Antarctica (EPICA) drilling site (0.0684° E and 75.0025° S, 2891.7 m) in Dronning Maud Land, Antarctica, is located in the immediate vicinity of a transient and splitting ice divide. A digital elevation model is determined from the combination of kinematic GPS measurements with the GLAS12 data sets from the ICESat satellite. Based on a network of stakes, surveyed with static GPS, the velocity field around the EDML drilling site is calculated. The annual mean velocity magnitude of 12 survey points amounts to 0.74 m/a. Flow directions mainly vary according to their distance from the ice divide. Surface strain rates are determined from a pentagon-shaped stake network with one center point, close to the drilling site. The strain field is characterised by along flow compression, lateral dilatation, and vertical layer thinning.
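For reference, the surface strain rates obtained from such a stake network follow from the horizontal velocity gradients, and the vertical layer thinning noted above follows from ice incompressibility (standard relations, not specific to this study's processing):

```latex
\dot{\varepsilon}_{ij} = \tfrac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),
\quad i,j \in \{x, y\},
\qquad
\dot{\varepsilon}_{zz} = -\left(\dot{\varepsilon}_{xx} + \dot{\varepsilon}_{yy}\right)
```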
Abstract:
Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids for the Arabidopsis thaliana plant.
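As background, a Gaussian Bayesian network factorizes the joint density into linear-Gaussian conditionals over each node's parents in the DAG; a regularized (penalized) score over this factorization is what steers the equivalence-class search toward sparse structures (generic formulation, not the paper's specific regularizer):

```latex
p(x_1,\dots,x_n) = \prod_{i=1}^{n} \mathcal{N}\!\left(x_i \,\middle|\, \mu_i + \sum_{j \in \mathrm{Pa}(i)} b_{ij}\,(x_j - \mu_j),\; \sigma_i^2\right)
```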
Abstract:
ICTs nowadays account for 2% of total carbon emissions. However, at a time when strict measures to reduce energy consumption in all the industrial and services sectors are required, the ICT sector faces an increase in services and bandwidth demand. The deployment of Next Generation Networks (NGN) will be the answer to this new demand, and specifically, Next Generation Access Networks (NGANs) will provide higher bandwidth access to users. Several policy and cost analyses are being carried out to understand the risks and opportunities of new deployments, though the question of the role of energy consumption in NGANs seems to be off the table. Thus, this paper proposes a model to analyze the energy consumption of the main fiber-based NGAN architectures, i.e. Fiber To The Home (FTTH) in both Passive Optical Network (PON) and Point-to-Point (PtP) variations, and FTTx/VDSL. The aim of this analysis is to provide deeper insight into the impact of new deployments on the energy consumption of the ICT sector and the effects of energy consumption on the life-cycle cost of NGANs. The paper also presents an energy consumption comparison of the presented architectures, particularized to the specific geographic and demographic distribution of users in Spain, but easily extendable to other countries.
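As a purely illustrative sketch of the kind of access-network energy model described (the structure and all numbers below are placeholders, not the paper's figures), per-subscriber power can be decomposed into customer-premises equipment plus a share of the central-office or cabinet equipment:

```python
def power_per_subscriber(p_cpe_w, p_port_w, subscribers_per_port, overhead=1.0):
    """Per-subscriber power draw (W): customer-premises equipment plus the share of
    the office/cabinet port it uses, scaled by a power-supply/cooling overhead factor."""
    return overhead * (p_cpe_w + p_port_w / subscribers_per_port)

# Placeholder values for comparison only -- not results from the paper.
architectures = {
    "FTTH-PON":  power_per_subscriber(p_cpe_w=5.0, p_port_w=8.0, subscribers_per_port=32),
    "FTTH-PtP":  power_per_subscriber(p_cpe_w=5.0, p_port_w=2.0, subscribers_per_port=1),
    "FTTx/VDSL": power_per_subscriber(p_cpe_w=7.0, p_port_w=3.0, subscribers_per_port=1,
                                      overhead=1.3),  # extra overhead for the remote cabinet
}
for name, watts in architectures.items():
    print(f"{name}: {watts:.2f} W per subscriber")
```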
Abstract:
The paper proposes a model for estimation of perceived video quality in IPTV, taking as input both video coding and network Quality of Service parameters. It includes some fitting parameters that depend mainly on the information contents of the video sequences. A method to derive them from the Spatial and Temporal Information contents of the sequences is proposed. The model may be used for near real-time monitoring of IPTV video quality.
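A toy parametric form is shown below only to illustrate how coding parameters, network QoS parameters, and content-dependent fitting parameters might interact; this is not the model proposed in the paper:

```python
import math

def estimated_mos(bitrate_kbps, packet_loss_pct, v1, v2, v3):
    """Toy estimate of perceived quality on a 1-5 MOS scale: coding quality grows with
    log bitrate (v1, v2) and is degraded exponentially by packet loss (v3).
    v1..v3 stand in for content-dependent fitting parameters, e.g. derived from the
    Spatial and Temporal Information of the sequence."""
    coding_quality = max(1.0, min(5.0, v1 + v2 * math.log(max(bitrate_kbps, 1.0))))
    return 1.0 + (coding_quality - 1.0) * math.exp(-v3 * packet_loss_pct)
```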
Abstract:
Semantic technologies have become widely adopted in recent years, and choosing the right technologies for the problems that users face is often a difficult task. This paper presents an application of the Analytic Network Process for the recommendation of semantic technologies, which is based on a quality model for semantic technologies. Instead of relying on expert-based comparisons of alternatives, the comparisons in our framework depend on real evaluation results. Furthermore, the recommendations in our framework derive from user quality requirements, which leads to better recommendations tailored to users’ needs. This paper also presents an algorithm for pairwise comparisons, which is based on user quality requirements and evaluation results.
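One way to realize a pairwise-comparison step driven by evaluation results rather than expert judgment is sketched below (a simplified illustration of only this step, not the full Analytic Network Process; the scores are hypothetical):

```python
import numpy as np

def pairwise_matrix(scores):
    """Reciprocal pairwise-comparison matrix from measured evaluation scores
    (higher = better): entry (i, j) is score_i / score_j."""
    s = np.asarray(scores, dtype=float)
    return s[:, None] / s[None, :]

def priorities(matrix, iters=100):
    """Priority vector as the normalized principal eigenvector (power iteration)."""
    w = np.full(matrix.shape[0], 1.0 / matrix.shape[0])
    for _ in range(iters):
        w = matrix @ w
        w /= w.sum()
    return w

# Hypothetical evaluation results for three candidate semantic technologies
# on a single quality measure.
print(priorities(pairwise_matrix([0.92, 0.78, 0.40])))
```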