974 results for Inverse analysis
Abstract:
Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, where the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere is of super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes the need for robust claims about the nature of an exoplanetary atmosphere to be based on analyses involving both photometry and spectroscopy and inferred from beyond a few photometric data points, such as are typically reported for hot Jupiters.
Abstract:
BACKGROUND Metamizole is used to treat pain in many parts of the world. Information on the safety profile of metamizole is scarce; no conclusive summary of the literature exists. OBJECTIVE To determine whether metamizole is clinically safe compared to placebo and other analgesics. METHODS We searched CENTRAL, MEDLINE, EMBASE, CINAHL, and several clinical trial registries. We screened the reference lists of included trials and previous systematic reviews. We included randomized controlled trials that compared the effects of metamizole, administered to adults in any form and for any indication, to other analgesics or to placebo. Two authors extracted data regarding trial design and size, indications for pain medication, patient characteristics, treatment regimens, and methodological characteristics. Adverse events (AEs), serious adverse events (SAEs), and dropouts were assessed. We conducted separate meta-analyses for each metamizole comparator, using standard inverse-variance random-effects meta-analysis to pool the estimates across trials, reported as risk ratios (RRs). We calculated the DerSimonian and Laird variance estimate τ² to measure heterogeneity between trials. The pre-specified primary end point was any AE during the trial period. RESULTS Of the 696 potentially eligible trials, 79 trials including almost 4000 patients with short-term metamizole use of less than two weeks met our inclusion criteria. Fewer AEs were reported for metamizole compared to opioids, RR = 0.79 (confidence interval 0.79 to 0.96). We found no differences between metamizole and placebo, paracetamol, or NSAIDs. Only a few SAEs were reported, with no difference between metamizole and other analgesics. No agranulocytosis or deaths were reported. Our results were limited by the mediocre overall quality of the reports. CONCLUSION For short-term use in the hospital setting, metamizole seems to be a safe choice when compared to other widely used analgesics.
High-quality, adequately sized trials assessing the intermediate- and long-term safety of metamizole are needed.
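The pooling procedure described above (standard inverse-variance random effects with the DerSimonian and Laird between-trial variance τ²) can be sketched in a few lines. This is a generic sketch, not the review's code; the trial inputs below are hypothetical.

```python
import math

def dersimonian_laird(log_rrs, ses):
    """Pool per-trial log risk ratios with DerSimonian-Laird random effects."""
    w = [1.0 / se**2 for se in ses]                     # fixed-effect weights
    sum_w = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum_w
    # Cochran's Q and the DL moment estimate of the between-trial variance tau^2
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum_w - sum(wi**2 for wi in w) / sum_w
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights and pooled estimate on the log scale
    w_star = [1.0 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, se_pooled, tau2

# hypothetical trials: log(RR) estimates and their standard errors
pooled, se, tau2 = dersimonian_laird([-0.25, -0.10, -0.35], [0.12, 0.20, 0.15])
rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
```

When Q falls below its degrees of freedom, τ² is truncated at zero and the estimate coincides with the fixed-effect result.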
Abstract:
AIMS The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). METHODS AND RESULTS Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. CONCLUSION We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients.
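Once propensity scores are estimated, inverse probability of treatment weighting reduces to a simple reweighting of each patient by the inverse of the probability of the treatment actually received. A minimal sketch with hypothetical scores; the stabilized-weight option is a common refinement, not necessarily what the study used.

```python
import numpy as np

def iptw_weights(treated, ps, stabilize=True):
    """Inverse-probability-of-treatment weights from propensity scores.

    treated: 0/1 indicator (e.g. 1 = OAC, 0 = APT in the study's terms)
    ps:      estimated probability of receiving the treatment (OAC)
    """
    treated = np.asarray(treated, dtype=float)
    ps = np.asarray(ps, dtype=float)
    # weight by the inverse probability of the treatment actually received
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    if stabilize:
        # stabilized weights multiply by the marginal treatment probability
        p_treat = treated.mean()
        w *= np.where(treated == 1, p_treat, 1.0 - p_treat)
    return w

# hypothetical cohort: 3 treated and 3 untreated patients with assumed scores
treated = [1, 1, 1, 0, 0, 0]
ps = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
w = iptw_weights(treated, ps)
```

These weights would then enter the Cox model as case weights; patients with scores far from their observed treatment receive large weights, which is why weight diagnostics matter in practice.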
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. 
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
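Before any bivariate model (Bayesian or frequentist) is fitted, each study's 2x2 table is typically reduced to a pair of logit-transformed sensitivity and specificity values. A minimal sketch of that preparation step; the counts are hypothetical and the 0.5 continuity correction is our assumption, not necessarily the paper's choice.

```python
import numpy as np

def logit_pairs(tp, fn, tn, fp):
    """Per-study logit(sensitivity) and logit(specificity), with a 0.5
    continuity correction to guard against zero cells (an assumed choice)."""
    tp, fn, tn, fp = (np.asarray(x, float) + 0.5 for x in (tp, fn, tn, fp))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return np.log(sens / (1 - sens)), np.log(spec / (1 - spec))

# hypothetical three-study dataset: true/false positives and negatives
lsens, lspec = logit_pairs([45, 30, 50], [5, 10, 2], [80, 70, 90], [20, 30, 10])
corr = np.corrcoef(lsens, lspec)[0, 1]
```

The bivariate model then places a joint distribution over these pairs, which is how the correlation between sensitivity and specificity enters the analysis.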
Abstract:
Pathway-based genome-wide association study evolves from pathway analysis for microarray gene expression and is under rapid development as a complement to single-SNP-based genome-wide association study. However, it faces new challenges, such as the summarization of SNP statistics into pathway statistics. The current study applies ridge-regularized Kernel Sliced Inverse Regression (KSIR) to achieve dimension reduction and compares this method to two other widely used methods: the minimal-p-value (minP) approach of assigning the best test statistic of all SNPs in each pathway as the statistic of the pathway, and the principal component analysis (PCA) method of utilizing PCA to calculate the principal components of each pathway. Comparison of the three methods using simulated datasets consisting of 500 cases, 500 controls, and 100 SNPs demonstrated that the KSIR method outperformed the other two methods in terms of causal pathway ranking and statistical power. The PCA method showed similar performance to the minP method. The KSIR method also showed better performance than the other two methods in analyzing a real dataset, the WTCCC Ulcerative Colitis dataset, consisting of 1762 cases and 3773 controls as the discovery cohort and 591 cases and 1639 controls as the replication cohort. Several immune and non-immune pathways relevant to ulcerative colitis were identified by these methods. Results from the current study provide a reference for further methodology development and identify novel pathways that may be of importance to the development of ulcerative colitis.
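The ridge-regularized kernel variant (KSIR) used in the study is not reproduced here, but the underlying idea, sliced inverse regression, can be sketched in plain linear form: whiten the predictors, average them within slices of the response, and take the leading eigenvectors of the covariance of those slice means. The toy data, slice scheme, and small ridge term are our assumptions.

```python
import numpy as np

def sir_directions(X, y, n_slices=2, n_dirs=1):
    """Plain (linear) Sliced Inverse Regression: estimate effective
    dimension-reduction directions from the between-slice mean covariance."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n, p = X.shape
    # whiten X; a small ridge term stabilizes the covariance inverse
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(p)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mu) @ L
    # partition observations into slices by the order of y, average Z per slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # leading eigenvectors of M, mapped back to the original coordinates
    vals, vecs = np.linalg.eigh(M)
    dirs = L @ vecs[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

# toy check: the response depends on X only through the first coordinate
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(float)
d = sir_directions(X, y, n_slices=2, n_dirs=1)
```

For a case/control response, two slices is the natural choice; the recovered direction serves as the pathway-level summary statistic in place of many individual SNP statistics.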
Abstract:
Much advancement has been made in recent years in field data assimilation, remote sensing and ecosystem modeling, yet our global view of phytoplankton biogeography beyond chlorophyll biomass is still a cursory taxonomic picture with vast areas of the open ocean requiring field validations. High performance liquid chromatography (HPLC) pigment data combined with inverse methods offer an advantage over many other phytoplankton quantification measures by way of providing an immediate perspective of the whole phytoplankton community in a sample as a function of chlorophyll biomass. Historically, such chemotaxonomic analysis has been conducted mainly at local spatial and temporal scales in the ocean. Here, we apply a widely tested inverse approach, CHEMTAX, to a global climatology of pigment observations from HPLC. This study marks the first systematic and objective global application of CHEMTAX, yielding a seasonal climatology comprised of ~1500 1°x1° global grid points of the major phytoplankton pigment types in the ocean characterizing cyanobacteria, haptophytes, chlorophytes, cryptophytes, dinoflagellates, and diatoms, with results validated against prior regional studies where possible. Key findings from this new global view of specific phytoplankton abundances from pigments are a) the large global proportion of marine haptophytes (comprising 32 ± 5% of total chlorophyll), whose biogeochemical functional roles are relatively unknown, and b) the contrasting spatial scales of complexity in global community structure that can be explained in part by regional oceanographic conditions. These publicly accessible results will guide future parameterizations of marine ecosystem models exploring the link between phytoplankton community structure and marine biogeochemical cycles.
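CHEMTAX itself iteratively re-optimizes the pigment-ratio matrix, which is not reproduced here; the core inverse step, however, amounts to non-negative unmixing of a pigment sample against assumed class pigment:chlorophyll ratios. A minimal sketch with an illustrative two-class ratio matrix (the values are invented for the example, not CHEMTAX's).

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical pigment-ratio matrix F (classes x pigments); each row holds
# assumed diagnostic-pigment ratios for one taxon. Values are illustrative.
F = np.array([
    [1.0, 0.6, 0.0],   # "haptophyte-like": chl-a, 19'-hex, fucoxanthin
    [1.0, 0.0, 0.8],   # "diatom-like":    chl-a, -,       fucoxanthin
])

def unmix_sample(pigments, F):
    """Estimate class contributions c >= 0 minimizing ||F.T @ c - pigments||."""
    c, _resid = nnls(F.T, pigments)
    return c

# a sample constructed as an exact 70/30 mix of the two classes
sample = 0.7 * F[0] + 0.3 * F[1]
c = unmix_sample(sample, F)
```

In the full method this step is repeated while the ratio matrix is adjusted, since fixed ratios are the main source of bias in chemotaxonomic estimates.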
Abstract:
Paleomagnetic analysis of sediment samples from Ocean Drilling Program (ODP) Leg 133, Site 820, 10 km from the outer edge of the Great Barrier Reef, is undertaken to investigate the mineral magnetic response to environmental (sea level) changes. Viscous remanent magnetization (VRM) of both multidomain and near-superparamagnetic origin is prevalent and largely obscures the primary remanence, except in isolated high-magnetization zones. The Brunhes/Matuyama boundary cannot be identified, but is expected to be below 120 mbsf. The only evidence that exists for a geomagnetic excursion occurs at about 33 mbsf (~135 k.y.). Only one-half the cores were oriented, and many suffered from internal rotation about the core axis, caused by coring and/or slicing. The decay of magnetic remanence below the surface layer (0-2 mbsf) is attributed to sulfate reduction processes. The magnetic susceptibility (K) record is central for describing and understanding the magnetic properties of the sediments, and their relationship to glacio-eustatic fluctuations in sea level. Three prominent magnetic susceptibility peaks, at about 7, 32, and 64 mbsf, are superimposed on a background of smaller susceptibility oscillations. Fluctuations in susceptibility and remanence in the "background" zone are controlled predominantly by variations in the concentration, rather than the composition, of ferrimagnetics, with carbonate dilution playing an important role (type-A properties). The sharp susceptibility maxima occur at the start of the marine transgressions following low stands in sea level (high δ18O, glacial maxima), and are characterized by a stable single-domain remanence, with a significant contribution from ultra-fine, superparamagnetic grains (type-C properties). During the later marine transgression, the susceptibility gradually returns to low values and the remanence is carried by stable single-domain magnetite (type-B properties).
The A, B, and C types of sediment have distinctive ARM/K ratios. Throughout most of the sequence a strong inverse correlation exists between magnetic susceptibility and both CaCO3 and δ18O variations. However, in the sharp susceptibility peaks (early transgression), more complex phase relationships are apparent among these parameters. In particular, the K-δ18O correlation switches to positive, then reverts to negative during the course of the late transgression, indicating that two distinct mechanisms are responsible for the K-δ18O correlation. Lower in the sequence, where sea-level-controlled cycles of upward-coarsening sediments occur, we find that the initial, mud phase of each cycle has been enriched in high-coercivity magnetic material, which is indicative of more oxic conditions. The main magnetic characteristics of the sediments are thought to reflect sea-level-controlled variations in the sediment source regions and related run-off conditions. Some preliminary evidence is seen that biogenic magnetite may play a significant role in the magnetization of these sediments.
Abstract:
The grain size of deep-sea sediments provides an apparently simple proxy for current speed. However, grain size-based proxies may be ambiguous when the size distribution reflects a combination of processes, with current sorting only one of them. In particular, such sediment mixing hinders reconstruction of deep circulation changes associated with ice-rafting events in the glacial North Atlantic because variable ice-rafted detritus (IRD) input may falsely suggest current speed changes. Inverse modeling has been suggested as a way to overcome this problem. However, this approach requires high-precision size measurements that register small changes in the size distribution. Here we show that such data can be obtained using electrosensing and laser diffraction techniques, despite issues previously raised on the low precision of electrosensing methods and potential grain shape effects on laser diffraction. Down-core size patterns obtained from a sediment core from the North Atlantic are similar for both techniques, reinforcing the conclusion that both techniques yield comparable results. However, IRD input leads to a coarsening that spuriously suggests faster current speed. We show that this IRD influence can be accounted for using inverse modeling as long as wide size spectra are taken into account. This yields current speed variations that are in agreement with other proxies. Our experiments thus show that for current speed reconstruction, the choice of instrument is subordinate to a proper recognition of the various processes that determine the size distribution and that by using inverse modeling meaningful current speed reconstructions can be obtained from mixed sediments.
Abstract:
DNA extraction was carried out as described on the MICROBIS project pages (http://icomm.mbl.edu/microbis ) using a commercially available extraction kit. We amplified the hypervariable regions V4-V6 of archaeal and bacterial 16S rRNA genes using PCR and several sets of forward and reverse primers (http://vamps.mbl.edu/resources/primers.php). Massively parallel tag sequencing of the PCR products was carried out on a 454 Life Sciences GS FLX sequencer at Marine Biological Laboratory, Woods Hole, MA, following the same experimental conditions for all samples. Sequence reads were submitted to a rigorous quality control procedure based on mothur v30 (doi:10.1128/AEM.01541-09), including denoising of the flowgrams using an algorithm based on PyroNoise (doi:10.1038/nmeth.1361), removal of PCR errors, and a chimera check using uchime (doi:10.1093/bioinformatics/btr381). The reads were taxonomically assigned according to the SILVA taxonomy (SSURef v119, 07-2014; doi:10.1093/nar/gks1219) implemented in mothur and clustered at 98% ribosomal RNA gene V4-V6 sequence identity. V4-V6 amplicon sequence abundance tables were standardized to account for unequal sampling effort using 1000 (Archaea) and 2300 (Bacteria) randomly chosen sequences without replacement using mothur and then used to calculate inverse Simpson diversity indices and Chao1 richness (doi:10.2307/4615964). Bray-Curtis dissimilarities (doi:10.2307/1942268) between all samples were calculated and used for 2-dimensional non-metric multidimensional scaling (NMDS) ordinations with 20 random starts (doi:10.1007/BF02289694). Stress values below 0.2 indicated that the multidimensional dataset was well represented by the 2D ordination. NMDS ordinations were compared and tested using Procrustes correlation analysis (doi:10.1007/BF02291478).
All analyses were carried out with the R statistical environment and the packages vegan (available at: http://cran.r-project.org/package=vegan) and labdsv (available at: http://cran.r-project.org/package=labdsv), as well as with custom R scripts. Operational taxonomic units at 98% sequence identity (OTU0.03) that occurred only once in the whole dataset were termed absolute single sequence OTUs (SSOabs; doi:10.1038/ismej.2011.132). OTU0.03 sequences that occurred only once in at least one sample, but may occur more often in other samples, were termed relative single sequence OTUs (SSOrel). SSOrel are particularly interesting for community ecology, since they comprise rare organisms that might become abundant when conditions change. 16S rRNA amplicons and metagenomic reads have been stored in the sequence read archive under SRA project accession number SRP042162.
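The inverse Simpson index and Bray-Curtis dissimilarity computed above have simple closed forms. The original analysis used R/vegan; the Python sketch below (function names are our own) shows the arithmetic on hypothetical OTU count vectors.

```python
import numpy as np

def inverse_simpson(counts):
    """Inverse Simpson diversity 1 / sum(p_i^2) from OTU counts."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()          # relative abundances of observed OTUs
    return 1.0 / np.sum(p**2)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

even = [25, 25, 25, 25]             # 4 equally abundant OTUs -> diversity 4
skewed = [97, 1, 1, 1]              # one dominant OTU -> diversity near 1
d = bray_curtis(even, skewed)
```

The inverse Simpson index can be read as an "effective number of OTUs", which is why the even community scores exactly 4 and the skewed one close to 1.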
Abstract:
Inverse bremsstrahlung has been incorporated into an analytical model of the expanding corona of a laser-irradiated spherical target. Absorption decreases slowly with increasing intensity, in agreement with some numerical simulations, and contrary to estimates from simple models in use up to now, which are optimistic at low values of intensity and very pessimistic at high values. Present results agree well with experimental data from many laboratories; substantial absorption is found up to moderate intensities, say below 10^15 W cm^-2 for 1.06 μm light. Anomalous absorption, when included in the analysis, leaves the ablation pressure and mass ablation rate practically unaffected, for given absorbed intensity. Universal results are given in dimensionless form.
Abstract:
There is general agreement within the scientific community that Biology is the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a rich variety of mathematical tools, physical techniques, and computer resources that make it possible to perform biological experiments that were unimaginable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and providing new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas, such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century. In this regard, it is considered a strategic research field for the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied for mapping brain activity [1]. This technique has important advantages compared to metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI) [2]. The main advantage is that MEG has a higher time resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless setup and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process [3].
This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
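The abstract does not specify the heuristic algorithm, so no attempt is made to reproduce it here. As a point of reference, the classical Tikhonov-regularized minimum-norm estimate, a standard baseline for the underdetermined MEG inverse problem, can be sketched on a toy forward model (the lead field and regularization value are assumptions of the example).

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Classical minimum-norm solution of the underdetermined system y = L q:
    q = L.T (L L.T + lam I)^-1 y. A standard baseline, not the study's method."""
    n_sensors = L.shape[0]
    G = L @ L.T + lam * np.eye(n_sensors)     # regularized sensor-space Gram
    return L.T @ np.linalg.solve(G, y)

# toy forward problem: 8 sensors, 20 candidate sources, one active source
rng = np.random.default_rng(1)
L = rng.normal(size=(8, 20))                  # hypothetical lead-field matrix
q_true = np.zeros(20)
q_true[3] = 1.0
y = L @ q_true                                # simulated sensor measurements
q_hat = minimum_norm_estimate(L, y, lam=1e-6)
```

Because there are far more sources than sensors, many source configurations explain the data equally well; the regularizer selects the smallest-norm one, which is exactly the non-uniqueness that makes the MEG inverse problem controversial.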
Abstract:
Experimental time series for a nonequilibrium reaction may in some cases contain sufficient data to determine a unique kinetic model for the reaction by a systematic mathematical analysis. As an example, a kinetic model for the self-assembly of microtubules is derived here from turbidity time series for solutions in which microtubules assemble. The model may be seen as a generalization of Oosawa's classical nucleation-polymerization model. It reproduces the experimental data with a four-stage nucleation process and a critical nucleus of 15 monomers.
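As an illustration of the kind of nucleation-polymerization kinetics being generalized, a minimal two-variable Oosawa-type model can be integrated numerically. This is not the paper's four-stage model; the rate constants and nucleus size below are assumed for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oosawa_rhs(t, s, kn, ke, n):
    """Minimal Oosawa-type model. s = [m, P]: free monomer concentration
    and polymer number concentration. Nucleation consumes n monomers per
    nucleus; elongation adds one monomer per event to existing polymers."""
    m, P = s
    nucleation = kn * m**n
    elongation = ke * m * P
    dm = -n * nucleation - elongation
    dP = nucleation
    return [dm, dP]

kn, ke, n = 1e-4, 5.0, 4            # assumed rate constants and nucleus size
sol = solve_ivp(oosawa_rhs, (0.0, 200.0), [1.0, 0.0],
                args=(kn, ke, n), rtol=1e-8, atol=1e-10)
m_final = sol.y[0, -1]              # monomer pool is depleted sigmoidally
```

The slow nucleation followed by autocatalytic-looking elongation produces the characteristic lag-then-growth shape seen in turbidity time series, which is the signature the inverse analysis exploits.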
Abstract:
Gene transduction of pluripotent human hematopoietic stem cells (HSCs) is necessary for successful gene therapy of genetic disorders involving hematolymphoid cells. Evidence for transduction of pluripotent HSCs can be deduced from the demonstration of a retroviral vector integrated into the same cellular chromosomal DNA site in myeloid and lymphoid cells descended from a common HSC precursor. CD34+ progenitors from human bone marrow and mobilized peripheral blood were transduced by retroviral vectors and used for long-term engraftment in immune-deficient (beige/nude/xid) mice. Human lymphoid and myeloid populations were recovered from the marrow of the mice after 7-11 months, and individual human granulocyte-macrophage and T-cell clones were isolated and expanded ex vivo. Inverse PCR from the retroviral long terminal repeat into the flanking genomic DNA was performed on each sorted cell population. The recovered cellular DNA segments that flanked proviral integrants were sequenced to confirm identity. Three mice were found (of 24 informative mice) to contain human lymphoid and myeloid populations with identical proviral integration sites, confirming that pluripotent human HSCs had been transduced.
Abstract:
Various field experiments were conducted to examine the influence of social status on aggression in road traffic. Horn-honking response times of subjects blocked by an experimental car at traffic lights were taken as an indicator of the degree of aggression. In an initial experiment, the status of the frustrator was varied and an inverse relation was observed between status and aggression towards the frustrator. On the other hand, in a more recent experiment higher-status aggressors were found to behave more aggressively. In our study we combined the two designs, i.e., we varied the status of the frustrator and at the same time measured the status of the aggressor. Neither of the former experiments' results could be replicated, but we observed a reduction in aggression when frustrator and aggressor were of similar social status.
Abstract:
We have employed an inverse engineering strategy based on quantitative proteome analysis to identify changes in intracellular protein abundance that correlate with increased specific recombinant monoclonal antibody production (qMab) by engineered murine myeloma (NS0) cells. Four homogeneous NS0 cell lines differing in qMab were isolated from a pool of primary transfectants. The proteome of each stably transfected cell line was analyzed at mid-exponential growth phase by two-dimensional gel electrophoresis (2D-PAGE), and individual protein spot volume data derived from digitized gel images were compared statistically. To identify changes in protein abundance associated with qMab, datasets were screened for proteins that exhibited either a linear correlation with cell line qMab or a conserved change in abundance specific only to the cell line with the highest qMab. Several proteins with altered abundance were identified by mass spectrometry. Proteins exhibiting a significant increase in abundance with increasing qMab included molecular chaperones known to interact directly with nascent immunoglobulins during their folding and assembly (e.g., BiP, endoplasmin, protein disulfide isomerase). 2D-PAGE analysis showed that in all cell lines Mab light chain was more abundant than heavy chain, indicating that this is a likely prerequisite for efficient Mab production. In summary, these data reveal both the adaptive responses and molecular mechanisms enabling mammalian cells in culture to achieve high-level recombinant monoclonal antibody production. (C) 2004 Wiley Periodicals, Inc.