838 results for Multiple methods framework
Abstract:
INTRODUCTION: Local microstructural pathology in multiple sclerosis (MS) patients might influence their clinical performance. This study applied multicontrast MRI to quantify inflammation and neurodegeneration in MS lesions. We explored the impact of MRI-based lesion pathology on cognition and disability. METHODS: 36 relapsing-remitting MS subjects and 18 healthy controls underwent neurological, cognitive, and behavioural examinations and 3 T MRI including (i) fluid-attenuated inversion recovery, double inversion recovery, and magnetization-prepared gradient echo for lesion count; (ii) T1, T2, and T2* relaxometry and magnetization transfer imaging for lesion tissue characterization. Lesions were classified according to the extent of inflammation/neurodegeneration. A generalized linear model assessed the contribution of lesion groups to clinical performance. RESULTS: Four lesion groups were identified, characterized by (1) absence of significant alterations, (2) prevalent inflammation, (3) concomitant inflammation and microdegeneration, and (4) prevalent tissue loss. Groups 1, 3, and 4 correlated with general disability (Adj-R² = 0.6; P = 0.0005), executive function (Adj-R² = 0.5; P = 0.004), verbal memory (Adj-R² = 0.4; P = 0.02), and attention (Adj-R² = 0.5; P = 0.002). CONCLUSION: Multicontrast MRI provides a new approach to infer the in vivo histopathology of plaques. Our results support evidence that neurodegeneration is the major determinant of patients' disability and cognitive dysfunction.
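As an illustration of the analysis step this abstract describes, the following is a minimal sketch, assuming simulated data and hypothetical variable names (edss, group1-group4), of regressing a clinical score on per-patient lesion-group counts with a Gaussian GLM (ordinary least squares) and reading off the adjusted R²:

```python
# Hedged sketch of a generalized linear model assessing how lesion groups
# contribute to a clinical score. Data are simulated; variable names are
# illustrative, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 36  # number of patients
df = pd.DataFrame({f"group{i}": rng.poisson(5, n) for i in (1, 2, 3, 4)})
# Simulated disability score driven mostly by the "degenerative" groups
df["edss"] = 0.3 * df["group3"] + 0.4 * df["group4"] + rng.normal(0, 1, n)

fit = smf.ols("edss ~ group1 + group3 + group4", data=df).fit()
print(f"Adj-R2 = {fit.rsquared_adj:.2f}, model P = {fit.f_pvalue:.4f}")
```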
Abstract:
BACKGROUND: Cerebellar pathology occurs in late multiple sclerosis (MS), but little is known about cerebellar changes during early disease stages. In this study, we propose a new multicontrast "connectometry" approach to assess the structural and functional integrity of cerebellar networks and connectivity in early MS. METHODS: We used diffusion spectrum and resting-state functional MRI (rs-fMRI) to establish the structural and functional cerebellar connectomes in 28 early relapsing-remitting MS patients and 16 healthy controls (HC). We performed multicontrast "connectometry" by quantifying multiple MRI parameters along the structural tracts (generalized fractional anisotropy (GFA), T1/T2 relaxation times, and magnetization transfer ratio) and functional connectivity measures. Subsequently, we assessed multivariate differences in local connections and network properties between MS and HC subjects; finally, we correlated detected alterations with lesion load, disease duration, and clinical scores. RESULTS: In MS patients, a subset of structural connections showed quantitative MRI changes suggesting loss of axonal microstructure and integrity (increased T1 and decreased GFA, P < 0.05). These alterations correlated highly with motor, memory, and attention performance in patients but were independent of cerebellar lesion load and disease duration. Neither network organization nor rs-fMRI abnormalities were observed at this early stage. CONCLUSION: Multicontrast cerebellar connectometry revealed subtle cerebellar alterations in MS patients, which were independent of conventional disease markers and highly correlated with patient function. Future work should assess the prognostic value of the observed damage.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC.

An open question remains: how to choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
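The following is a minimal sketch of the functional error-model idea described above, assuming synthetic proxy and exact response curves; PCA on a regular time grid stands in for FPCA, and all names (learn, corrected, etc.) are illustrative rather than taken from the thesis:

```python
# Learn, from a small subset run with the "exact" model, a regression from
# proxy-curve PC scores to exact-curve PC scores, then correct every proxy
# response. Curves here are synthetic sigmoids, not real flow responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
n = 100                                   # ensemble of realizations
speed = rng.uniform(5, 15, n)
exact = 1 / (1 + np.exp(-speed[:, None] * (t - 0.5)))          # exact curves
proxy = 1 / (1 + np.exp(-0.8 * speed[:, None] * (t - 0.45)))   # biased proxy

learn = rng.choice(n, 20, replace=False)  # subset run with the exact model

# Reduce both families of curves to a few principal-component scores
pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
scores_p = pca_p.fit_transform(proxy)
scores_e = pca_e.fit_transform(exact[learn])

# Regress exact scores on proxy scores over the learning set only
reg = LinearRegression().fit(scores_p[learn], scores_e)

# Correct every proxy curve: predict exact scores, map back to curves
corrected = pca_e.inverse_transform(reg.predict(scores_p))
print("mean |proxy - exact|    :", np.abs(proxy - exact).mean())
print("mean |corrected - exact|:", np.abs(corrected - exact).mean())
```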
Abstract:
BACKGROUND: The management of unresectable metastatic colorectal cancer (mCRC) is a comprehensive treatment strategy involving several lines of therapy, maintenance, salvage surgery, and treatment-free intervals. Besides chemotherapy (fluoropyrimidine, oxaliplatin, irinotecan), molecular-targeted agents such as anti-angiogenic agents (bevacizumab, aflibercept, regorafenib) and anti-epidermal growth factor receptor agents (cetuximab, panitumumab) have become available. Ultimately, given the increasing cost of new active compounds, new strategy trials are needed to define the optimal use and the best sequencing of these agents. Such new clinical trials require alternative endpoints that can capture the effect of several treatment lines and be measured earlier than overall survival, to help shorten the duration and reduce the size and cost of trials. METHODS/DESIGN: STRATEGIC-1 is an international, open-label, randomized, multicenter phase III trial designed to determine an optimally personalized treatment sequence of the available treatment modalities in patients with unresectable RAS wild-type mCRC. Two standard treatment strategies are compared: first-line FOLFIRI-cetuximab, followed by oxaliplatin-based second-line chemotherapy with bevacizumab (Arm A) vs. first-line OPTIMOX-bevacizumab, followed by irinotecan-based second-line chemotherapy with bevacizumab, and by an anti-epidermal growth factor receptor monoclonal antibody with or without irinotecan as third-line treatment (Arm B). The primary endpoint is duration of disease control. A total of 500 patients will be randomized in a 1:1 ratio to one of the two treatment strategies. DISCUSSION: The STRATEGIC-1 trial is designed to give global information on the therapeutic sequences in patients with unresectable RAS wild-type mCRC, which in turn is likely to have a significant impact on the management of this patient population. The trial has been open for inclusion since August 2013. TRIAL REGISTRATION: STRATEGIC-1 is registered at Clinicaltrials.gov: NCT01910610, 23 July, 2013. STRATEGIC-1 is registered at EudraCT-No.: 2013-001928-19, 25 April, 2013.
Abstract:
BACKGROUND: Increasing evidence links T helper 17 (Th17) cells with multiple sclerosis (MS). In this context, interleukin-22 (IL-22), a Th17-linked cytokine, has been implicated in blood-brain barrier breakdown and lymphocyte infiltration. Furthermore, a polymorphism in the gene coding for IL-22 binding protein (IL-22BP) has recently been described between MS patients and controls. Here, we aimed to better characterize IL-22 in the context of MS. METHODS: IL-22 and IL-22BP expression was assessed by ELISA and qPCR in the following compartments of MS patients and control subjects: (1) the serum, (2) the cerebrospinal fluid, and (3) immune cells of peripheral blood. Identification of the IL-22 receptor subunit, IL-22R1, was performed by immunohistochemistry and immunofluorescence in human brain tissues and human primary astrocytes. The role of IL-22 on human primary astrocytes was evaluated using 7-AAD and annexin V, markers of cell viability and apoptosis, respectively. RESULTS: In a cohort of 141 MS patients and healthy control (HC) subjects, we found that serum levels of IL-22 were significantly higher in relapsing MS patients than in HC, and also higher than in remitting and progressive MS patients. Monocytes and monocyte-derived dendritic cells showed enhanced expression of mRNA coding for IL-22BP compared with HC. Using immunohistochemistry and confocal microscopy, we found that IL-22 and its receptor were detected on astrocytes of brain tissues from both control subjects and MS patients, although in the latter the expression was higher around blood vessels and in MS plaques. Cytometry-based functional assays revealed that addition of IL-22 improved the survival of human primary astrocytes. Furthermore, tumor necrosis factor α-treated astrocytes had a better long-term survival capacity upon IL-22 co-treatment. This protective effect of IL-22 seemed to be conferred, at least partially, by decreased apoptosis. CONCLUSIONS: We show that (1) there is a dysregulation in the expression of IL-22 and its antagonist, IL-22BP, in MS patients, (2) IL-22 specifically targets astrocytes in the human brain, and (3) this cytokine increases the survival of these cells.
Abstract:
OBJECTIVE: To develop predictive models for early triage of burn patients based on hypersusceptibility to repeated infections. BACKGROUND: Infection remains a major cause of mortality and morbidity after severe trauma, demanding new strategies to combat infections. Models for infection prediction are lacking. METHODS: Secondary analysis of 459 burn patients (≥16 years old) with 20% or more total body surface area burns, recruited from 6 US burn centers. Using a 180-hour cutoff on the injury-to-transcriptome interval, we compared the blood transcriptomes of 47 patients (≤1 infection episode) to those of 66 hypersusceptible patients with multiple (≥2) infection episodes (MIE). We used LASSO regression to select biomarkers and multivariate logistic regression to build models, whose accuracy was assessed by the area under the receiver operating characteristic curve (AUROC) and cross-validation. RESULTS: Three predictive models were developed using covariates of (1) clinical characteristics; (2) expression profiles of 14 genomic probes; (3) a combination of (1) and (2). The genomic and clinical models were highly predictive of MIE status [AUROC_Genomic = 0.946 (95% CI: 0.906-0.986); AUROC_Clinical = 0.864 (CI: 0.794-0.933); AUROC_Genomic vs AUROC_Clinical, P = 0.044]. The combined model had an increased AUROC_Combined of 0.967 (CI: 0.940-0.993) compared with the individual models (AUROC_Combined vs AUROC_Clinical, P = 0.0069). Hypersusceptible patients showed early alterations in immune-related signaling pathways, epigenetic modulation, and chromatin remodeling. CONCLUSIONS: Early triage of burn patients more susceptible to infections can be made using clinical characteristics and/or genomic signatures. The genomic signature offers new insights into the pathophysiology of hypersusceptibility to infection and may lead to novel therapeutic or prophylactic targets.
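A hedged sketch of the modeling pipeline this abstract outlines, assuming simulated data and illustrative dimensions: L1-penalised (LASSO-type) selection inside a logistic model, evaluated by cross-validated AUROC:

```python
# Simulated stand-in for the study data: 113 patients, 200 genomic probes,
# 14 of them informative. Probe counts and names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(113, 200))
beta = np.zeros(200)
beta[:14] = 1.0                                         # 14 informative probes
y = (X @ beta + rng.normal(size=113) > 0).astype(int)   # MIE status (simulated)

model = make_pipeline(
    StandardScaler(),
    # L1 penalty performs LASSO-style biomarker selection within the logistic fit
    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5,
                         scoring="roc_auc"),
)
auroc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUROC: {auroc.mean():.3f} ± {auroc.std():.3f}")
```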
Abstract:
OBJECTIVES: The objective of this study was to characterize the underlying molecular mechanisms in consecutive clinical Candida albicans isolates from a single patient displaying stepwise-acquired multidrug resistance. METHODS: Nine clinical isolates (P-1 to P-9) were susceptibility tested by EUCAST EDef 7.2 and Etest. P-4, P-5, P-7, P-8 and P-9 were available for further studies. Relatedness was evaluated by MLST. Additional genes were analysed by sequencing (including FKS1, ERG11, ERG2 and TAC1) and gene expression by quantitative PCR (CDR1, CDR2 and ERG11). UV-spectrophotometry and GC-MS were used for sterol analyses. In vivo virulence was determined in the insect model Galleria mellonella and evaluated by log-rank Mantel-Cox tests. RESULTS: P-1 + P-2 were susceptible, P-3 + P-4 fluconazole resistant, P-5 pan-azole resistant, P-6 + P-7 pan-azole and echinocandin resistant and P-8 + P-9 MDR. MLST supported genetic relatedness among clinical isolates. P-4 harboured four changes in Erg11 (E266D, G307S, G450E and V488I), increased expression of ERG11 and CDR2 and a change in Tac1 (R688Q). P-5, P-7, P-8 and P-9 had an additional change in Erg11 (A61E), increased expression of CDR1, CDR2 and ERG11 (except for P-7) and a different amino acid change in Tac1 (R673L). Echinocandin-resistant isolates harboured the Fks1 S645P alteration. Polyene-resistant P-8 + P-9 lacked ergosterol and harboured a frameshift mutation in ERG2 (F105SfsX23). Virulence was attenuated (but equivalent) in the clinical isolates, but higher than in the azole- and echinocandin-resistant unrelated control strain. CONCLUSIONS: C. albicans demonstrates a diverse capacity to adapt to antifungal exposure. Potentially novel resistance-inducing mutations in TAC1, ERG11 and ERG2 require independent validation.
Abstract:
Research question: International and national sport federations as well as their member organisations are key actors within the sport system and have a wide range of relationships outside the sport system (e.g. with the state, sponsors, and the media). They are currently facing major challenges such as growing competition in top-level sports, democratisation of sports with 'sports for all' and sports as the answer to social problems. In this context, professionalising sport organisations seems to be an appropriate strategy to face these challenges and current problems. We define the professionalisation of sport organisations as an organisational process of transformation leading towards organisational rationalisation, efficiency and business-like management. This has led to a profound organisational change, particularly within sport federations, characterised by the strengthening of institutional management (managerialism) and the implementation of efficiency-based management instruments and paid staff. Research methods: The goal of this article is to review the current international literature and establish a global understanding of and theoretical framework for analysing why and how sport organisations professionalise and what consequences this may have. Results and findings: Our multi-level approach based on the social theory of action integrates the current concepts for analysing professionalisation in sport federations. We specify the framework for the following research perspectives: (1) forms, (2) causes and (3) consequences, and discuss the reciprocal relations between sport federations and their member organisations in this context. Implications: Finally, we work out a research agenda and derive general methodological consequences for the investigation of professionalisation processes in sport organisations.
Abstract:
In the highly volatile high-technology industry, it is of utmost importance to forecast customer demand accurately. However, statistical forecasting of sales, especially in the heavily competitive electronics business, has always been a challenging task due to very high variation in demand and very short product life cycles. The purpose of this thesis is to validate whether statistical methods can be applied to forecasting sales of short-life-cycle electronics products and to provide a feasible framework for implementing statistical forecasting in the environment of the case company. Two different approaches have been developed: one for short- and medium-term and one for long-term forecasting horizons. Both models are based on decomposition models but differ in the interpretation of the model residuals. For long-term horizons, residuals are assumed to represent white noise, whereas for short- and medium-term horizons, residuals are modeled using statistical forecasting methods. Both approaches are implemented in Matlab. Modeling results have shown that different markets exhibit different demand patterns, and therefore different analytical approaches are appropriate for modeling demand in these markets. Moreover, the outcomes imply that statistical forecasting cannot be handled separately from judgmental forecasting but should be perceived only as a basis for judgmental forecasting activities. Based on the modeling results, recommendations are developed for further deployment of statistical methods in the case company's sales forecasting.
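The thesis implemented its models in Matlab; the following Python sketch, on a made-up weekly sales series, illustrates the two-step scheme described: decompose the series, then either treat the residuals as white noise (long-term) or model them statistically (short/medium-term):

```python
# Decomposition-based forecasting sketch on a synthetic weekly sales series.
# The series, period, and ARIMA order are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
idx = pd.date_range("2020-01-01", periods=104, freq="W")
sales = pd.Series(100 + 0.5 * np.arange(104)                       # trend
                  + 10 * np.sin(2 * np.pi * np.arange(104) / 52)   # seasonality
                  + rng.normal(0, 3, 104), index=idx)               # noise

dec = seasonal_decompose(sales, model="additive", period=52)
resid = dec.resid.dropna()

# Long-term horizon: residuals treated as white noise, so the forecast is
# trend + seasonality only. Short/medium-term horizon: model the residuals too.
resid_fc = ARIMA(resid, order=(1, 0, 0)).fit().forecast(steps=4)
print(resid_fc)
```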
Abstract:
BACKGROUND: Frequent emergency department (ED) users meet several of the criteria of vulnerability, but this needs to be examined further, taking into consideration all the dimensions of vulnerability. This study aimed to characterize frequent ED users and to define risk factors of frequent ED use within a universal health care coverage system, applying a conceptual framework of vulnerability. METHODS: A controlled, cross-sectional study comparing frequent ED users to a control group of non-frequent users was conducted at the Lausanne University Hospital, Switzerland. Frequent users were defined as patients with five or more visits to the ED in the previous 12 months. The two groups were compared using validated scales for each of the five dimensions of an innovative conceptual framework: socio-demographic characteristics; somatic, mental, and risk-behavior indicators; and use of health care services. Independent t-tests, Wilcoxon rank-sum tests, Pearson's Chi-squared test, and Fisher's exact test were used for the comparison. To examine the vulnerability-related risk factors for being a frequent ED user, univariate and multivariate logistic regression models were used. RESULTS: We compared 226 frequent users and 173 controls. Frequent users had more vulnerabilities in all five dimensions of the conceptual framework. They were younger, more often immigrants from low/middle-income countries or unemployed, had more somatic and psychiatric comorbidities, were more often tobacco users, and had more primary care physician (PCP) visits. The most significant risk factors for frequent ED use were a history of more than three hospital admissions in the previous 12 months (adj OR: 23.2, 95% CI: 9.1-59.2), the absence of a PCP (adj OR: 8.4, 95% CI: 2.1-32.7), living less than 5 km from an ED (adj OR: 4.4, 95% CI: 2.1-9.0), and a household income lower than USD 2,800/month (adj OR: 4.3, 95% CI: 2.0-9.2). CONCLUSIONS: Frequent ED users within a universal health coverage system form a highly vulnerable population when all five dimensions of a conceptual framework of vulnerability are taken into account. The predictive factors identified could be useful in the early detection of future frequent users, in order to address their specific needs and decrease vulnerability, a key priority for health care policy makers. Application of the conceptual framework in future research is warranted.
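As a sketch of the regression step this abstract reports, the following uses simulated data and hypothetical predictor names to fit a multivariate logistic model and read adjusted odds ratios with 95% CIs off the fitted coefficients:

```python
# Illustrative multivariate logistic regression with adjusted odds ratios.
# All data are simulated; predictor names are hypothetical stand-ins for the
# study's risk factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 399  # 226 frequent users + 173 controls in the study
df = pd.DataFrame({
    "admissions_gt3": rng.integers(0, 2, n),   # >3 hospital admissions
    "no_pcp": rng.integers(0, 2, n),           # no primary care physician
    "lives_lt5km": rng.integers(0, 2, n),      # lives <5 km from the ED
})
logit = 0.8 * df["admissions_gt3"] + 0.6 * df["no_pcp"] - 1.0
df["frequent_user"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["admissions_gt3", "no_pcp", "lives_lt5km"]])
fit = sm.Logit(df["frequent_user"].astype(int), X).fit(disp=0)

# Exponentiate coefficients and CI bounds to get adjusted ORs with 95% CIs
ors = pd.DataFrame({"adj_OR": np.exp(fit.params),
                    "ci_low": np.exp(fit.conf_int()[0]),
                    "ci_high": np.exp(fit.conf_int()[1])})
print(ors.round(2))
```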
Abstract:
Aim The aim of this study was to test different modelling approaches, including a new framework, for predicting the spatial distribution of richness and composition of two insect groups. Location The western Swiss Alps. Methods We compared two community modelling approaches: the classical method of stacking binary predictions obtained from individual species distribution models (binary stacked species distribution models, bS-SDMs), and various implementations of a recent framework (spatially explicit species assemblage modelling, SESAM) based on four steps that integrate the different drivers of the assembly process in a unique modelling procedure. We used: (1) five methods to create bS-SDM predictions; (2) two approaches for predicting species richness, by summing individual SDM probabilities or by modelling the number of species (i.e. richness) directly; and (3) five different biotic rules based either on ranking probabilities from SDMs or on community co-occurrence patterns. Combining these various options resulted in 47 implementations for each taxon. Results Species richness of the two taxonomic groups was predicted with good accuracy overall, and in most cases bS-SDM did not produce a biased prediction exceeding the actual number of species in each unit. In the prediction of community composition, bS-SDM often also yielded the best evaluation score. In the case of poor performance of bS-SDM (i.e. when bS-SDM overestimated the prediction of richness), the SESAM framework improved predictions of species composition. Main conclusions Our results differed from previous findings using community-level models. First, we show that overprediction of richness by bS-SDM is not a general rule, thus highlighting the relevance of producing good individual SDMs to capture the ecological filters that are important for the assembly process. Second, we confirm the potential of SESAM when richness is overpredicted by bS-SDM; limiting the number of species for each unit and applying biotic rules (here using the ranking of SDM probabilities) can improve predictions of species composition.
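A small sketch, on simulated probabilities, contrasting the two richness predictions compared in the study (binary-stacked SDMs vs summed SDM probabilities), plus a simplest-form SESAM-like step that caps site richness and ranks species by SDM probability:

```python
# Toy comparison of bS-SDM richness vs probability-sum richness, followed by
# a minimal SESAM-style composition rule. Probabilities are random stand-ins
# for real per-species SDM outputs.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_species = 5, 12
prob = rng.random((n_sites, n_species))     # per-species SDM probabilities

richness_binary = (prob > 0.5).sum(axis=1)  # bS-SDM: threshold then stack
richness_prob = prob.sum(axis=1)            # sum of raw SDM probabilities

# SESAM-like composition: at each site keep only the top-k species by SDM
# probability, where k is the (rounded) predicted richness.
k = np.round(richness_prob).astype(int)
composition = np.zeros_like(prob, dtype=bool)
for site in range(n_sites):
    top = np.argsort(prob[site])[::-1][:k[site]]
    composition[site, top] = True

print("bS-SDM richness:  ", richness_binary)
print("prob-sum richness:", np.round(richness_prob, 1))
print("SESAM composition row sums:", composition.sum(axis=1))
```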
Abstract:
Off-pump coronary bypass grafting may decrease the rate of stroke due to minimal aortic manipulation. For venous grafts, clampless hemostasis when performing the proximal anastomosis can be achieved using the Heartstring device. We describe a technique using a single device to suture two veins to one aortotomy. This technique requires less space and could be advantageous in very short, small, and calcified aortas. In our experience, this technique is rapid, simple, easy to reproduce, and cost-saving.
Abstract:
PURPOSE: Despite growing interest in measurement of health care quality and patient experience, the current evidence base largely derives from adult health settings, at least in part because of the absence of appropriately developed measurement tools for adolescents. To rectify this, we set out to develop a conceptual framework and a set of indicators to measure the quality of health care delivered to adolescents in hospital. METHODS: A conceptual framework was developed from the following four elements: (1) a review of the evidence around what young people perceive as "adolescent-friendly" health care; (2) an exploration with adolescent patients of the principles of patient-centered care; (3) a scoping review to identify core clinical practices around working with adolescents; and (4) a scoping review of existing conceptual frameworks. Using criteria for indicator development, we then developed a set of indicators that mapped to this framework. RESULTS: Embedded within the notion of patient- and family-centered care, the conceptual framework for adolescent-friendly health care (quality health care for adolescents) was based on the constructs of experience of care (positive engagement with health care) and evidence-informed care. A set of 14 indicators was developed, half of which related to adolescents' and parents' experience of care and half of which related to aspects of evidence-informed care. CONCLUSIONS: The conceptual framework and indicators of quality health care for adolescents set the stage to develop measures to populate these indicators, the next step in the agenda of improving the quality of health care delivered to adolescents in hospital settings.
Accelerated Microstructure Imaging via Convex Optimisation for regions with multiple fibres (AMICOx)
Abstract:
This paper reviews and extends our previous work to enable fast axonal diameter mapping from diffusion MRI data in the presence of multiple fibre populations within a voxel. Most existing microstructure imaging techniques use non-linear algorithms to fit their data models; consequently, they are computationally expensive and usually slow. Moreover, most of them assume a single axon orientation, while numerous regions of the brain actually present more complex configurations, e.g. fibre crossings. We present a flexible framework, based on convex optimisation, that enables fast and accurate reconstructions of the microstructure organisation, not limited to areas where the white matter is coherently oriented. We show through numerical simulations the ability of our method to correctly estimate microstructure features (mean axon diameter and intra-cellular volume fraction) in crossing regions.
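A toy sketch of the convex formulation this kind of framework relies on: precompute a dictionary of atom responses and solve a non-negative least-squares problem instead of a non-linear fit. The exponential dictionary below is a stand-in, not the actual diffusion signal model:

```python
# Dictionary-based convex fitting sketch: the measured signal is modeled as a
# non-negative combination of precomputed atoms, recovered by NNLS (convex,
# fast, no initialisation). Atoms here are decaying exponentials, not real
# diffusion kernels.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
q = np.linspace(0, 1, 60)                    # acquisition samples
diameters = np.linspace(1, 8, 15)            # candidate atom parameters
D = np.exp(-np.outer(q, diameters))          # dictionary, one atom per column

x_true = np.zeros(15)
x_true[[3, 10]] = [0.7, 0.3]                 # two underlying populations
signal = D @ x_true + rng.normal(0, 0.01, 60)

x_hat, _ = nnls(D, signal)                   # convex non-negative fit
weights = x_hat / x_hat.sum()
print("recovered mean 'diameter':", (weights * diameters).sum().round(2))
```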
Abstract:
We systematically reviewed 25 randomised controlled trials of ultrasound-guided brachial plexus blockade, recruiting 1948 participants, that compared either one approach with another (axillary, infraclavicular or supraclavicular) or one injection with multiple injections. There were no differences in the rates of successful blockade between approaches, relative risk (95% CI): axillary vs infraclavicular, 1.0 (1.0-1.1), p = 0.97; axillary vs supraclavicular, 1.0 (1.0-1.1), p = 0.68; and infraclavicular vs supraclavicular, 1.0 (1.0-1.1), p = 0.32. There was no difference in the rate of successful blockade with the number of injections, relative risk (95% CI) 1.0 (1.0-1.0), p = 0.69, for one vs multiple injections. The rate of procedural paraesthesia was lower with one injection than with multiple injections, relative risk (95% CI) 0.6 (0.4-0.9), p = 0.004.
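For readers unfamiliar with figures such as "RR 0.6 (95% CI 0.4-0.9)", the following sketch shows the underlying arithmetic (Katz log method) on hypothetical 2×2 counts; the review's raw counts are not given in the abstract:

```python
# Relative risk with a 95% CI via the Katz log method, on made-up counts.
import math

def rr_ci(a, n1, b, n2, z=1.96):
    """RR for event counts a/n1 vs b/n2, with a z-based CI on log(RR)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR), Katz method
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: paraesthesia in 24/300 (one injection) vs 40/300 (multiple)
print("RR %.2f (95%% CI %.2f-%.2f)" % rr_ci(24, 300, 40, 300))
```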