969 results for "Prior data"
Abstract:
Motivation. The study of human brain development in its early stage is today possible thanks to in vivo fetal magnetic resonance imaging (MRI) techniques. A quantitative analysis of the fetal cortical surface represents a new approach which can be used as a marker of cerebral maturation (such as gyration) and also for studying central nervous system pathologies [1]. However, this quantitative approach is a major challenge for several reasons. First, movement of the fetus inside the amniotic cavity requires very fast MRI sequences to minimize motion artifacts, resulting in poor spatial resolution and/or lower SNR. Second, due to the ongoing myelination and cortical maturation, the appearance of the developing brain differs greatly from the homogeneous tissue types found in adults. Third, due to low resolution, fetal MR images suffer considerably from partial volume (PV) effects, sometimes in large areas. Today, extensive efforts are being made on the reconstruction of high-resolution 3D fetal volumes [2,3,4] to cope with intra-volume motion and low SNR. However, few studies exist on the automated segmentation of fetal MR images. [5] and [6] work on the segmentation of specific areas of the fetal brain such as the posterior fossa, brainstem, or germinal matrix. A first attempt at automated brain tissue segmentation was presented in [7] and in our previous work [8]. Both methods apply the Expectation-Maximization Markov Random Field (EM-MRF) framework, but contrary to [7] we do not need any anatomical atlas prior. Data set & Methods. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). Each fetus has 6 axial volumes (around 15 slices per volume), each of them acquired in about 1 min. Each volume is shifted by 1 mm with respect to the previous one. Gestational age (GA) ranges from 29 to 32 weeks.
The mother is under sedation. Each volume is manually segmented to extract the fetal brain from surrounding maternal tissues. Then, intensity inhomogeneity correction is performed using [9], and linear intensity normalization is applied to obtain intensity values ranging from 0 to 255. Note that due to the intra-tissue variability of the developing brain, some intensity variability still remains. For each fetus, a high-spatial-resolution image with an isotropic voxel size of 1.09 mm is created applying [2] and using B-splines for the scattered data interpolation [10] (see Fig. 1). Then, basal ganglia (BG) segmentation is performed on this super-reconstructed volume. An active contour framework with a Level Set (LS) implementation is used. Our LS follows a slightly different formulation from the well-known Chan-Vese formulation [11]. In our case, the LS evolves forcing the mean of the inside of the curve to be the mean intensity of the basal ganglia. Moreover, we add a local spatial prior through a probabilistic map created by fitting an ellipsoid onto the basal ganglia region. Some user interaction is needed to set the mean intensity of the BG (green dots in Fig. 2) and the initial fitting points for the probabilistic prior map (blue points in Fig. 2). Once the basal ganglia are removed from the image, brain tissue segmentation is performed as described in [8]. Results. The case study presented here has a GA of 29 weeks. The high-resolution reconstructed volume is presented in Fig. 1. The steps of BG segmentation are shown in Fig. 2. Overlap with the manual segmentation is quantified by the Dice similarity index (DSI), equal to 0.829 (values above 0.7 are considered very good agreement). The same BG segmentation has been applied to 3 other subjects ranging from 29 to 32 weeks GA, and the DSI was 0.856, 0.794, and 0.785. Our segmentation of the inner (red and blue contours) and outer cortical surface (green contour) is presented in Fig. 3.
Finally, to refine the results, we include our WM segmentation in the FreeSurfer software [12] and apply some manual corrections to obtain Fig. 4. Discussion. Precise cortical surface extraction of the fetal brain is needed for quantitative studies of early human brain development. Our work combines the well-known statistical classification framework with active contour segmentation for central gray matter extraction. A main advantage of the presented procedure for fetal brain surface extraction is that we do not include any spatial prior coming from anatomical atlases. The results presented here are preliminary but promising. Our efforts are now directed at testing this approach on a wider range of gestational ages, which we will include in the final version of this work, and at studying its generalization to different scanners and different types of MRI sequences. References. [1] Guibaud, Prenatal Diagnosis 29(4), 2009. [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4), 2004. [6] Habas, MICCAI (Pt. 1), 2008. [7] Bertelsen, ISMRM 2009. [8] Bach Cuadra, IADB, MICCAI 2009. [9] Styner, IEEE TMI 19(3), 2000. [10] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [11] Chan, IEEE Trans. Img. Proc. 10(2), 2001. [12] FreeSurfer, http://surfer.nmr.mgh.harvard.edu.
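The Dice similarity index used above to quantify overlap between the automated and manual segmentations can be computed directly from two binary masks. A minimal NumPy sketch (the masks and names below are illustrative, not from the study's data):

```python
import numpy as np

def dice_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 1-D "segmentations"
auto = np.array([0, 1, 1, 1, 0])
manual = np.array([0, 0, 1, 1, 1])
print(round(dice_similarity(auto, manual), 3))  # → 0.667
```

The same function applies unchanged to 3-D label volumes, since NumPy reductions operate over all axes.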
Abstract:
We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome-Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest receiver-operating-characteristic AUC increase, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between three approaches of very different resource requirements. The two methods that included external information performed best, but only marginally so, despite substantial differences in complexity.
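The weighted risk score mentioned above is typically the dot product of a subject's risk-allele counts with externally estimated per-SNP log odds ratios. A minimal sketch, with made-up weights and genotypes (not the REDUCE data or the paper's 33 SNPs):

```python
import numpy as np

# Hypothetical per-SNP log-odds-ratio weights from an external meta-analysis
meta_log_or = np.array([0.10, -0.05, 0.18])  # illustrative values only

# Genotypes coded as risk-allele counts (0, 1, or 2), one row per subject
genotypes = np.array([
    [0, 1, 2],
    [2, 0, 1],
])

# Weighted genetic risk score: allele counts dotted with meta-analysis weights.
# The score then enters the clinical logistic regression as a single covariate.
risk_score = genotypes @ meta_log_or
print(risk_score)  # → [0.31 0.38]
```

Collapsing the SNPs into one prespecified score avoids re-estimating 33 coefficients in the new data, which is the resource saving the comparison highlights.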
Abstract:
The co-operative credit structure in a state consists of three tiers: Primary Societies at the base, District Co-operative Banks in the middle, and the State Co-operative Bank at the top. However, some societies at the primary level are governed, in addition to the Co-operative Societies Act, by the Banking Regulation Act, and are thus under dual control. In addition, they work under the direct purview of the Reserve Bank of India. The scope of this study is restricted to such Primary Societies, District Co-operative Banks, and the State Co-operative Bank. For the evaluation of the working of the Co-operative Banks, the boards of directors and staff were interviewed with the help of pre-constructed and pre-tested interview schedules. However, the shareholders and customers were not interviewed, mainly because almost all respondents, for the sake of secrecy, were reluctant to provide copies of an exhaustive list of shareholders and non-shareholder customers. This being an individual work, it was found physically and financially very difficult to extend the study to cover the shareholders and non-shareholder customers. Limitations of time were also responsible for restricting this study. The period of study was restricted to 1980-'81 to 1983-'84, as the data relating to earlier periods were, firstly, not available from all banks and, secondly, considered out of date for the purpose of the study.
Abstract:
Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflict can be achieved by adding an extra weakly informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
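For a binary endpoint, the conjugate mixture machinery the abstract refers to is straightforward: each Beta component is updated in closed form, and the mixture weights are reweighted by each component's marginal likelihood of the new data. A minimal sketch (all prior parameters, weights, and counts below are illustrative, not the paper's case study):

```python
from math import lgamma, exp

def log_beta(a: float, b: float) -> float:
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def update_beta_mixture(weights, params, successes, failures):
    """Posterior of a Beta-mixture prior after observing binomial data.

    Each Beta(a, b) component stays conjugate; its weight is multiplied by
    the marginal likelihood of the data under that component (binomial
    coefficient cancels in the normalization), then renormalized.
    """
    new_params, log_marg = [], []
    for (a, b) in params:
        new_params.append((a + successes, b + failures))
        log_marg.append(log_beta(a + successes, b + failures) - log_beta(a, b))
    unnorm = [w * exp(lm) for w, lm in zip(weights, log_marg)]
    total = sum(unnorm)
    return [u / total for u in unnorm], new_params

# Informative component summarizing historical data, plus a vague,
# robustifying Beta(1, 1) component; data 15/20 conflict with the
# historical mean of 1/3, so weight shifts to the vague component.
weights, params = update_beta_mixture([0.8, 0.2], [(10, 20), (1, 1)],
                                      successes=15, failures=5)
print([round(w, 3) for w in weights], params)
```

The automatic down-weighting of the informative component under conflicting data is exactly the robustness property the abstract describes.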
Abstract:
Most demographic data indicate a roughly exponential increase in adult mortality with age, a phenomenon that has been explained in terms of a decline in the force of natural selection acting on age-specific mortality. Scattered demographic findings suggest the existence of a late-life mortality plateau in both humans and dipteran insects, seemingly at odds with both prior data and evolutionary theory. Extensions to the evolutionary theory of aging are developed which indicate that such late-life mortality plateaus are to be expected when enough late-life data are collected. This expanded theory predicts late-life mortality plateaus, with both antagonistic pleiotropy and mutation accumulation as driving population genetic mechanisms.
Abstract:
The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D’Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, which has a peak near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components, bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range around 217 to 240 nm (the exact range is a bit of a decision by the operator), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS) built at MBARI or at Satlantic has been mounted inside the pressure hull of a Teledyne/Webb Research APEX and NKE Provor profiling floats and the optics penetrate through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA. 
There are several possible algorithms that can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but this is not generally used on profiling floats. There are tradeoffs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. Further algorithm development is likely, and it is necessary that the data systems clearly identify the algorithm that is used. It is also desirable that the data system allow recalculation of prior data sets using new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensities. The rule for obtaining a single NITRATE parameter is then: if the spectrum is present, NITRATE should be recalculated from the spectrum; this recomputation can also generate useful diagnostics of data quality.
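The core idea shared by these algorithms, that nitrate has a distinct spectral shape while the remaining components form a background roughly linear in wavelength over the fitting window, can be sketched as an ordinary least-squares fit. The extinction spectrum and concentrations below are synthetic, for illustration only; they are not the calibration coefficients of any real ISUS or SUNA instrument, and the sketch omits the TCSS bromide correction:

```python
import numpy as np

# Wavelength grid over the fitting window (nm); endpoints illustrative
wl = np.linspace(217.0, 240.0, 24)

# Hypothetical nitrate extinction spectrum: decays with wavelength
eps_no3 = np.exp(-(wl - 210.0) / 8.0)

# Synthetic "measured" absorbance: nitrate signal plus a baseline that is
# linear in wavelength (organics, particles, thermal effects, drift)
true_no3 = 5.0  # assumed concentration, e.g. micromolar
baseline = 0.02 + 0.001 * (wl - wl[0])
absorbance = true_no3 * eps_no3 + baseline

# Design matrix: [nitrate extinction, constant offset, linear wavelength term]
A = np.column_stack([eps_no3, np.ones_like(wl), wl - wl[0]])
coef, residuals, rank, _ = np.linalg.lstsq(A, absorbance, rcond=None)
print(round(coef[0], 3))  # recovered nitrate concentration → 5.0
```

Because the nitrate spectrum is curved over the window while the baseline terms are not, the three columns are linearly independent and the fit separates nitrate from the smooth background.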
A global historical ozone data set and prominent features of stratospheric variability prior to 1979
Abstract:
We present a vertically resolved zonal mean monthly mean global ozone data set spanning the period 1901 to 2007, called HISTOZ.1.0. It is based on a new approach that combines information from an ensemble of chemistry climate model (CCM) simulations with historical total column ozone information. The CCM simulations incorporate important external drivers of stratospheric chemistry and dynamics (in particular solar and volcanic effects, greenhouse gases and ozone depleting substances, sea surface temperatures, and the quasi-biennial oscillation). The historical total column ozone observations include ground-based measurements from the 1920s onward and satellite observations from 1970 to 1976. An off-line data assimilation approach is used to combine model simulations, observations, and information on the observation error. The period starting in 1979 was used for validation with existing ozone data sets and therefore only ground-based measurements were assimilated. Results demonstrate considerable skill from the CCM simulations alone. Assimilating observations provides additional skill for total column ozone. With respect to the vertical ozone distribution, assimilating observations increases on average the correlation with a reference data set, but does not decrease the mean squared error. Analyses of HISTOZ.1.0 with respect to the effects of El Niño–Southern Oscillation (ENSO) and of the 11 yr solar cycle on stratospheric ozone from 1934 to 1979 qualitatively confirm previous studies that focussed on the post-1979 period. The ENSO signature exhibits a much clearer imprint of a change in strength of the Brewer–Dobson circulation compared to the post-1979 period. The imprint of the 11 yr solar cycle is slightly weaker in the earlier period. Furthermore, the total column ozone increase from the 1950s to around 1970 at northern mid-latitudes is briefly discussed. 
Indications of contributions from a tropospheric ozone increase, greenhouse gases, and changes in atmospheric circulation are found. Finally, the paper points to several possible future improvements of HISTOZ.1.0.
Abstract:
The objective of this study was to assess implant therapy after a staged guided bone regeneration procedure in the anterior maxilla with lateralization of the nasopalatine nerve and vessel bundle. The primary outcome variable was neurosensory function following the augmentative procedures and implant placement, assessed using a standardized questionnaire and clinical examination. This retrospective study included patients with a bone defect in the anterior maxilla in need of horizontal and/or vertical ridge augmentation prior to dental implant placement. The surgical sites were allowed to heal for at least 6 months before placement of dental implants. All patients received fixed implant-supported restorations and entered a tightly scheduled maintenance program. In addition to the maintenance program, patients were recalled for a clinical examination and to fill out a questionnaire to assess any changes in the neurosensory function of the nasopalatine nerve at least 6 months after function. Twenty patients were included in the study from February 2001 to December 2010. They received a total of 51 implants after augmentation of the alveolar crest and lateralization of the nasopalatine nerve. The follow-up examination for the questionnaire and neurosensory assessment took place after a mean period of 4.18 years of function. None of the patients examined reported any pain, reduced or altered sensation, or a "foreign body" feeling in the area of surgery. Overall, 6 of 20 patients (30%) showed palatal sensibility alterations of the soft tissues in the region of the maxillary canines and incisors, corresponding to a risk of neurosensory change of 0.45 mucosal tooth regions per patient after ridge augmentation with lateralization of the nasopalatine nerve.
Regeneration of bone defects in the anterior maxilla by horizontal and/or vertical ridge augmentation and lateralization of the nasopalatine nerve prior to dental implant placement is a predictable surgical technique. Whether or not there were clinically measurable impairments of neurosensory function, the patients did not report them or were not bothered by them.
Abstract:
English and French.
Abstract:
Visualising data for exploratory analysis is a big challenge in scientific and engineering domains where there is a need to gain insight into the structure and distribution of the data. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are used, but it is difficult to incorporate prior knowledge about the structure of the data into the analysis. In this technical report we discuss a complementary approach based on an extension of a well-known non-linear probabilistic model, the Generative Topographic Mapping. We show that by including prior information on the covariance structure in the model, we are able to improve both the data visualisation and the model fit.
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data, and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
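A block-structured correlation matrix of the kind described above is simple to construct: groups of strongly correlated variables form equicorrelated blocks, and variables in different blocks are modelled as uncorrelated. A minimal sketch (block sizes and within-block correlations are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import block_diag

def correlation_block(size: int, rho: float) -> np.ndarray:
    """Equicorrelation block: 1 on the diagonal, rho elsewhere."""
    return (1.0 - rho) * np.eye(size) + rho * np.ones((size, size))

# Three hypothetical groups of strongly correlated variables; cross-block
# correlations are zero by construction.
blocks = [correlation_block(3, 0.8),
          correlation_block(2, 0.6),
          correlation_block(4, 0.9)]
C = block_diag(*blocks)
print(C.shape)  # → (9, 9)
```

For |rho| < 1 each block, and hence the whole block-diagonal matrix, is positive definite, so C is a valid prior correlation structure for a probabilistic model such as the GTM variant discussed.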
Abstract:
Here, we describe gene expression compositional assignment (GECA), a powerful, yet simple method based on compositional statistics that can validate the transfer of prior knowledge, such as gene lists, into independent data sets, platforms and technologies. Transcriptional profiling has been used to derive gene lists that stratify patients into prognostic molecular subgroups and assess biomarker performance in the pre-clinical setting. Archived public data sets are an invaluable resource for subsequent in silico validation, though their use can lead to data integration issues. We show that GECA can be used without the need for normalising expression levels between data sets and can outperform rank-based correlation methods. To validate GECA, we demonstrate its success in the cross-platform transfer of gene lists in different domains including: bladder cancer staging, tumour site of origin and mislabelled cell lines. We also show its effectiveness in transferring an epithelial ovarian cancer prognostic gene signature across technologies, from a microarray to a next-generation sequencing setting. In a final case study, we predict the tumour site of origin and histopathology of epithelial ovarian cancer cell lines. In particular, we identify and validate the commonly-used cell line OVCAR-5 as non-ovarian, being gastrointestinal in origin. GECA is available as an open-source R package.
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, will under-represent the variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The joint posterior density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible even when limited prior information is available, generating valuable insight for the researcher about the experimental results.
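The overdispersion mechanism described above, a random effect on the logit scale inflating the variance beyond what a plain binomial model allows, can be illustrated with a small simulation (all parameter values are illustrative, not from the apple tissue culture data):

```python
import numpy as np

rng = np.random.default_rng(42)

n_trials = 20    # binomial denominator per experimental unit
n_units = 20000  # number of simulated units
mu = 0.0         # mean on the logit scale (success probability 0.5)
sigma = 1.0      # SD of the random effect: the "noise factor"

# Logit-normal random effect: each unit gets its own success probability
logits = rng.normal(mu, sigma, size=n_units)
p = 1.0 / (1.0 + np.exp(-logits))
counts = rng.binomial(n_trials, p)

p_bar = counts.mean() / n_trials
binomial_var = n_trials * p_bar * (1.0 - p_bar)  # variance a plain binomial assumes
observed_var = counts.var()

# With sigma > 0 the observed variance exceeds the binomial variance;
# this excess is the overdispersion a DGLM's dispersion submodel captures.
print(observed_var > binomial_var)  # → True
```

Fitting a binomial model to such data would understate standard errors, which is why the abstract warns that significant effects can be indicated incorrectly.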
Abstract:
Background: This study used household survey data on the prevalence of child, parent and family variables to establish potential targets for a population-level intervention to strengthen parenting skills in the community. The goals of the intervention include decreasing child conduct problems, increasing parental self-efficacy and use of positive parenting strategies, decreasing coercive parenting, and increasing help-seeking, social support and participation in positive parenting programmes. Methods: A total of 4010 parents with a child under the age of 12 years completed a statewide telephone survey on parenting. Results: One in three parents reported that their child had a behavioural or emotional problem in the previous 6 months. Furthermore, 9% of children aged 2–12 years met criteria for oppositional defiant disorder. Parents who reported their child's behaviour to be difficult were more likely to perceive parenting as a negative experience (i.e. demanding, stressful and depressing). The parents with the greatest difficulties were mothers without partners and those with low levels of confidence in their parenting roles. About 20% of parents reported being stressed and 5% reported being depressed in the 2 weeks prior to the survey. Parents with personal adjustment problems had lower levels of parenting confidence, and their children were more difficult to manage. Only one in four parents had participated in a parent education programme. Conclusions: Implications for the setting of population-level goals and targets for strengthening parenting skills are discussed.