48 results for "Time-varying covariance matrices"
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is produced with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and therefore cannot be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14°C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. We thus also show the robustness of this result in a historical and observational context.
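As a rough illustration of the restoring scheme described above, the sketch below applies a relaxation heat flux proportional to the model-observation SST misfit. The relaxation coefficient, mixed-layer depth, and time step are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rho, cp = 1025.0, 3985.0  # seawater density (kg m^-3) and heat capacity (J kg^-1 K^-1)
gamma = 40.0              # assumed relaxation coefficient (W m^-2 K^-1)
h = 50.0                  # assumed mixed-layer depth (m)
dt = 86400.0              # one-day time step (s)

def nudge_sst(sst_model: np.ndarray, sst_obs: np.ndarray) -> np.ndarray:
    """Apply one explicit nudging step restoring the model SST toward observations."""
    flux = -gamma * (sst_model - sst_obs)          # restoring heat flux (W m^-2)
    return sst_model + dt * flux / (rho * cp * h)  # temperature update (K)
```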
Abstract:
BACKGROUND Recommendations have differed nationally and internationally with respect to the best time to start antiretroviral therapy (ART). We compared effectiveness of three strategies for initiation of ART in high-income countries for HIV-positive individuals who do not have AIDS: immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL. METHODS We used data from the HIV-CAUSAL Collaboration of cohort studies in Europe and the USA. We included 55 826 individuals aged 18 years or older who were diagnosed with HIV-1 infection between January, 2000, and September, 2013, had not started ART, did not have AIDS, and had CD4 count and HIV-RNA viral load measurements within 6 months of HIV diagnosis. We estimated relative risks of death and of death or AIDS-defining illness, mean survival time, the proportion of individuals in need of ART, and the proportion of individuals with HIV-RNA viral load less than 50 copies per mL, as would have been recorded under each ART initiation strategy 7 years after HIV diagnosis. We used the parametric g-formula to adjust for baseline and time-varying confounders. FINDINGS Median CD4 count at diagnosis of HIV infection was 376 cells per μL (IQR 222-551). Compared with immediate initiation, the estimated relative risk of death was 1·02 (95% CI 1·01-1·02) when ART was started at a CD4 count less than 500 cells per μL, and 1·06 (1·04-1·08) with initiation at a CD4 count less than 350 cells per μL. Corresponding estimates for death or AIDS-defining illness were 1·06 (1·06-1·07) and 1·20 (1·17-1·23), respectively. Compared with immediate initiation, the mean survival time at 7 years with a strategy of initiation at a CD4 count less than 500 cells per μL was 2 days shorter (95% CI 1-2) and at a CD4 count less than 350 cells per μL was 5 days shorter (4-6). 7 years after diagnosis of HIV, 100%, 98·7% (95% CI 98·6-98·7), and 92·6% (92·2-92·9) of individuals would have been in need of ART with immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL, respectively. Corresponding proportions of individuals with HIV-RNA viral load less than 50 copies per mL at 7 years were 87·3% (87·3-88·6), 87·4% (87·4-88·6), and 83·8% (83·6-84·9). INTERPRETATION The benefits of immediate initiation of ART, such as prolonged survival and AIDS-free survival and increased virological suppression, were small in this high-income setting with relatively low CD4 count at HIV diagnosis. The estimated beneficial effect on AIDS is less than in recently reported randomised trials. Increasing rates of HIV testing might be as important as a policy of early initiation of ART. FUNDING National Institutes of Health.
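As background on the method named above: in the point-treatment special case, the g-formula standardizes the outcome mean over the confounder distribution. The study's parametric g-formula iterates this over time-varying treatments and confounders; the line below is a schematic, not the paper's full estimator.

```latex
\[
  \mathbb{E}\left[Y^{a}\right] \;=\; \sum_{l} \mathbb{E}\left[\,Y \mid A = a,\ L = l\,\right]\Pr(L = l)
\]
```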
Abstract:
We introduce a multistable subordinator, which generalizes the stable subordinator to the case of a time-varying stability index. This enables us to define a multifractional Poisson process. We study the properties of these processes and establish the convergence of a continuous-time random walk to the multifractional Poisson process.
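One natural way to formalize a time-varying stability index, offered here as a sketch since the paper's exact normalization may differ, is an increasing process with independent increments whose Laplace exponent integrates a local index α(u):

```latex
\[
  \mathbb{E}\!\left[e^{-\lambda\,(L(t)-L(s))}\right]
  = \exp\!\left(-\int_{s}^{t} \lambda^{\alpha(u)}\,\mathrm{d}u\right),
  \qquad \lambda > 0,\quad 0 < \alpha(u) < 1 .
\]
```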
Abstract:
Orbital tuning is central for ice core chronologies beyond annual layer counting, available back to 60 ka (i.e. thousands of years before 1950) for Greenland ice cores. While several complementary orbital tuning tools have recently been developed using δ¹⁸Oatm, δO₂∕N₂ and air content with different orbital targets, quantifying their uncertainties remains a challenge. Indeed, the exact processes linking variations of these parameters, measured in the air trapped in ice, to their orbital targets are not yet fully understood. Here, we provide new series of δO₂∕N₂ and δ¹⁸Oatm data encompassing Marine Isotopic Stage (MIS) 5 (between 100 and 160 ka) and the oldest part (340–800 ka) of the East Antarctic EPICA Dome C (EDC) ice core. For the first time, the measurements over MIS 5 allow an inter-comparison of δO₂∕N₂ and δ¹⁸Oatm records from three East Antarctic ice core sites (EDC, Vostok and Dome F). This comparison highlights some site-specific δO₂∕N₂ variations. Such an observation, the evidence of a 100 ka periodicity in the δO₂∕N₂ signal and the difficulty of identifying extrema and mid-slopes in δO₂∕N₂ increase the uncertainty associated with the use of δO₂∕N₂ as an orbital tuning tool, now calculated to be 3–4 ka. When combining records of δ¹⁸Oatm and δO₂∕N₂ from Vostok and EDC, we find a loss of orbital signature for these two parameters during periods of minimum eccentricity (∼ 400 ka, ∼ 720–800 ka). Our data set reveals a time-varying offset between the δO₂∕N₂ and δ¹⁸Oatm records over the last 800 ka that we interpret as variations in the lagged response of δ¹⁸Oatm to precession. The largest offsets are identified during Termination II, MIS 8 and MIS 16, corresponding to periods of destabilization of the Northern polar ice sheets. We therefore suggest that the occurrence of Heinrich-like events influences the response of δ¹⁸Oatm to precession.
Abstract:
The objective of this study was to determine the impact of expectation associated with placebo and caffeine ingestion. In a randomized, double-blind design, two three-armed experiments varying instruction (true, false, control) investigated the role of expectations in changes in arousal (blood pressure, heart rate), subjective well-being, and reaction time (RT). In Experiment 1 (N = 45), decaffeinated coffee was administered, and expectations were produced in one group by making them believe they had ingested caffeinated coffee. In Experiment 2 (N = 45), caffeinated orange juice was given in both experimental groups, but only one was informed about the true content. In Experiment 1, a significant effect on subjective alertness was found in the placebo treatment compared to the control group; however, no significant effects were found for RT or well-being. In Experiment 2, no significant expectancy effects were found. Caffeine produced large effects on blood pressure in both treatments compared to the control group, but the effects were larger in the false-information group. For subjective well-being (alertness, calmness), considerable but nonsignificant changes were found for correctly informed participants, indicating a possible additivity of pharmacological effects and expectations. The results tentatively indicate that placebo and expectancy effects show primarily through introspection.
Abstract:
Nitrous oxide fluxes were measured at the Lägeren CarboEurope IP flux site over the multi-species mixed forest dominated by European beech and Norway spruce. Measurements were carried out during a four-week period in October–November 2005 during leaf senescence. Fluxes were measured with a standard ultrasonic anemometer in combination with a quantum cascade laser absorption spectrometer that measured N₂O, CO₂, and H₂O mixing ratios simultaneously at 5 Hz time resolution. To distinguish insignificant fluxes from significant ones, a new approach is proposed, based on the significance of the correlation coefficient between vertical wind speed and mixing ratio fluctuations. This procedure eliminated roughly 56% of our half-hourly fluxes. Based on the remaining, quality-checked N₂O fluxes, we quantified the mean efflux at 0.8±0.4 μmol m⁻² h⁻¹ (mean ± standard error). Most of the contribution to the N₂O flux occurred during a 6.5-h period starting 4.5 h before each precipitation event. No relation with precipitation amount could be found. Visibility data representing fog density and duration at the site indicate that wetting of the canopy may have as strong an effect on N₂O effluxes as does below-ground microbial activity. It is speculated that above-ground N₂O production from the senescing leaves at high moisture (fog, drizzle, onset of a precipitation event) may be responsible for part of the measured flux.
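A minimal sketch of the screening idea described above: flag a half-hourly flux as significant when the correlation between vertical wind speed fluctuations and mixing ratio fluctuations differs from zero. The paper's actual test statistic and quality-control details may differ.

```python
import numpy as np
from scipy import stats

def flux_is_significant(w: np.ndarray, c: np.ndarray, alpha: float = 0.05) -> bool:
    """Test whether the w'-c' correlation of one averaging window differs from zero."""
    r, p_value = stats.pearsonr(w - w.mean(), c - c.mean())  # fluctuations, as in eddy covariance
    return p_value < alpha

# Example: 30 min of synthetic 5 Hz data (9000 samples)
rng = np.random.default_rng(0)
w = rng.normal(size=9000)             # vertical wind speed
c = 0.05 * w + rng.normal(size=9000)  # weakly coupled mixing ratio
print(flux_is_significant(w, c))      # True for this synthetic example
```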
Abstract:
An important problem in unsupervised data clustering is how to determine the number of clusters. Here we investigate how this can be achieved in an automated way by using interrelation matrices of multivariate time series. Two nonparametric and purely data-driven algorithms are presented and compared. The first exploits the eigenvalue spectra of surrogate data, while the second employs the eigenvector components of the interrelation matrix. Compared to the first algorithm, the second approach is computationally faster and not limited to linear interrelation measures.
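A sketch of the first algorithm's idea, with illustrative implementation details: independently shuffling each series destroys cross-correlations, so eigenvalues of the real correlation matrix that exceed the largest surrogate eigenvalue indicate genuine cluster structure.

```python
import numpy as np

def n_clusters_surrogate(X: np.ndarray, n_surrogates: int = 100, seed: int = 0) -> int:
    """Estimate the number of clusters in X (n_series, n_samples) by comparing
    correlation-matrix eigenvalues against shuffled surrogates."""
    rng = np.random.default_rng(seed)
    eig_real = np.linalg.eigvalsh(np.corrcoef(X))
    threshold = 0.0
    for _ in range(n_surrogates):
        surrogate = np.array([rng.permutation(row) for row in X])  # kills cross-correlations
        threshold = max(threshold, np.linalg.eigvalsh(np.corrcoef(surrogate)).max())
    return int(np.sum(eig_real > threshold))  # eigenvalues above chance level
```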
Abstract:
Fenofibrate, widely used for the treatment of dyslipidemia, activates the nuclear receptor peroxisome proliferator-activated receptor alpha. However, liver toxicity, including liver cancer, occurs in rodents treated with fibrate drugs. Marked species differences occur in response to fibrate drugs, especially between rodents and humans, the latter of which are resistant to fibrate-induced cancer. Fenofibrate metabolism, which also shows species differences, has not been fully determined in humans and surrogate primates. In the present study, the metabolism of fenofibrate was investigated in cynomolgus monkeys by ultraperformance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOFMS)-based metabolomics. Urine samples were collected before and after oral doses of fenofibrate. The samples were analyzed in both positive-ion and negative-ion modes by UPLC-QTOFMS, and after data deconvolution, the resulting data matrices were subjected to multivariate data analysis. Pattern recognition was performed on the retention time, mass/charge ratio, and other metabolite-related variables. Synthesized or purchased authentic compounds were used for metabolite identification and structure elucidation by liquid chromatography-tandem mass spectrometry. Several metabolites were identified, including fenofibric acid, reduced fenofibric acid, fenofibric acid ester glucuronide, reduced fenofibric acid ester glucuronide, and compound X. Another two metabolites (compound B and compound AR), not previously reported in other species, were characterized in cynomolgus monkeys. More importantly, previously unknown metabolites, fenofibric acid taurine conjugate and reduced fenofibric acid taurine conjugate, were identified, revealing a previously unrecognized conjugation pathway for fenofibrate.
Abstract:
Quantitative reverse transcriptase real-time PCR (QRT-PCR) is a robust method to quantitate RNA abundance. The procedure is highly sensitive and reproducible as long as the initial RNA is intact. However, breaks in the RNA due to chemical or enzymatic cleavage may reduce the number of RNA molecules that contain intact amplicons. As a consequence, the number of molecules available for amplification decreases. We determined the relation between RNA fragmentation and threshold cycle values (Ct values) in subsequent QRT-PCR for four genes in an experimental model of intact and partially hydrolyzed RNA derived from a cell line, and we describe the relation between RNA integrity, amplicon size, and Ct values in this biologically homogeneous system. We demonstrate that degradation-related shifts of Ct values can be compensated for by calculating delta Ct values between test genes and the mean values of several control genes. These delta Ct values are less sensitive to fragmentation of the RNA and are unaffected by varying amounts of input RNA. The feasibility of the procedure was demonstrated by comparing Ct values from a larger panel of genes in intact and in partially degraded RNA. We compared Ct values from intact RNA derived from well-preserved tumor material and from fragmented RNA derived from formalin-fixed, paraffin-embedded (FFPE) samples of the same tumors. We demonstrate that the relative abundance of gene expression can be determined from FFPE material even when the amount of RNA in the sample and the extent of fragmentation are not known.
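A toy sketch of the delta-Ct compensation the abstract describes: express each test gene's Ct relative to the mean Ct of several control genes, so that degradation-related shifts common to all genes cancel. Gene names and values are hypothetical.

```python
import numpy as np

# Hypothetical Ct values; GENE_A/GENE_B are test genes, CTRL_* are controls.
ct = {"GENE_A": 27.4, "GENE_B": 31.0,
      "CTRL_1": 22.1, "CTRL_2": 23.0, "CTRL_3": 22.6}

# Delta Ct: subtract the mean Ct of the control genes, so that a global
# degradation-related shift (which raises all Ct values alike) cancels out.
control_mean = np.mean([ct["CTRL_1"], ct["CTRL_2"], ct["CTRL_3"]])
delta_ct = {gene: ct[gene] - control_mean for gene in ("GENE_A", "GENE_B")}
print(delta_ct)  # shift-invariant measures of relative abundance
```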
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational cost consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge lies in adequately approximating the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
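The sketch below illustrates the idea of a Gaussian process over the joint (design variable, computational time) space, with an error component whose amplitude decays as simulations converge; the kernel form and hyperparameters are assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel(X1, X2, ell=0.3, sigma_f=1.0, sigma_e=0.5, tau=2.0):
    """Covariance between points (x, t); columns of X are [design variable, solver time]."""
    d = np.subtract.outer(X1[:, 0], X2[:, 0])
    k_design = sigma_f**2 * np.exp(-0.5 * (d / ell)**2)  # converged-response part
    # Rank-one error amplitude decaying with solver time: nonstationary in t.
    amp = sigma_e * np.exp(-np.add.outer(X1[:, 1], X2[:, 1]) / (2.0 * tau))
    return k_design * (1.0 + amp)

def gp_predict(X_train, y, X_test, noise=1e-6):
    """Standard GP posterior mean, with a small jitter term for numerical stability."""
    K = kernel(X_train, X_train) + noise * np.eye(len(y))
    return kernel(X_test, X_train) @ np.linalg.solve(K, y)
```

Predicting at large t drives the error amplitude toward zero, so the model recovers the converged response while still learning from cheap, partially converged runs.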
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment in which the participants were presented with a color change detection task. Task complexity was manipulated by varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators of the amount of information retained in VSTM and the speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between individuals of higher and lower intelligence. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
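For readers unfamiliar with fixed-links modeling, a schematic of the decomposition (loading values here are illustrative, not the study's exact specification): observed scores at set size j load on a constant latent variable with loadings fixed to 1 and on an experimental latent variable with loadings fixed to increase with set size.

```latex
\[
  Y_{j} \;=\; 1 \cdot \eta_{\text{const}} \;+\; \lambda_{j}\,\eta_{\text{exp}} \;+\; \varepsilon_{j},
  \qquad \lambda_{j}\ \text{fixed a priori, e.g. } \lambda_{j} = j .
\]
```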
Abstract:
Ecology and conservation require reliable data on the occurrence of animals and plants. A major source of bias is imperfect detection, which, however, can be corrected for by estimating detectability. In traditional occupancy models, this requires repeat or multi-observer surveys. Recently, time-to-detection models have been developed as a cost-effective alternative, as they require no repeat surveys and hence costs can be roughly halved. We compared the efficiency and reliability of time-to-detection and traditional occupancy models under varying survey effort. Two observers independently searched for 17 plant species in 44 100-m² Swiss grassland quadrats and recorded the time-to-detection for each species, enabling detectability to be estimated with both time-to-detection and traditional occupancy models. In addition, we gauged the relative influence on detectability of species, observer, plant height, and two measures of abundance (cover and frequency). Estimates of detectability and occupancy under both models were very similar. Rare species were more likely to be overlooked; detectability was strongly affected by abundance. As a measure of abundance, frequency outperformed cover in its predictive power. The two observers differed significantly in their detection ability. Time-to-detection models were as accurate as traditional occupancy models, but their data are easier to obtain; thus they provide a cost-effective alternative to traditional occupancy models for detection-corrected estimation of occurrence.
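A sketch of a standard time-to-detection occupancy likelihood with an exponential detection-time model; the study's parameterization and covariate structure may differ. Here ψ is the occupancy probability, λ the detection rate, and T the maximum search time:

```latex
\[
  L_i \;=\;
  \begin{cases}
    \psi\,\lambda\,e^{-\lambda t_i}, & \text{detected at time } t_i \le T,\\[4pt]
    \psi\,e^{-\lambda T} + (1 - \psi), & \text{not detected within } T .
  \end{cases}
\]
```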
Abstract:
Recently, multiple studies have shown that spatial and temporal features of a task-negative default mode network (DMN) (Greicius et al., 2003) are important markers for psychiatric diseases (Balsters et al., 2013). Another prominent indicator of cognitive functioning, yielding information about the mental condition in health and disease, is working memory (WM) processing. In EEG and MEG studies, frontal-midline theta power has been shown to increase with load during WM retention in healthy subjects (Brookes et al., 2011). Negative correlations between DMN activity and theta amplitude have been found during resting state (Jann et al., 2010) as well as during WM (Michels et al., 2010). Likewise, WM training resulted in higher resting-state theta power as well as increased small-worldness of the resting brain (Langer et al., 2013). Further, increased fMRI connectivity between nodes of the DMN correlated with better WM performance (Hampson et al., 2006). Hence, the brain's default state might influence its functioning during task. We therefore hypothesized correlations between pre-stimulus DMN activity and EEG theta power during WM maintenance, depending on WM load. Seventeen healthy subjects performed a Sternberg WM task while being measured simultaneously with EEG and fMRI. Data were recorded within a multicenter study: 12 subjects were measured in Zurich with a 64-channel MR-compatible system (Brain Products) in a 3T Philips scanner, and 5 subjects with a 96-channel MR-compatible system (Brain Products) in a 3T Siemens scanner in Bern. The DMN component was obtained by a group BOLD-ICA approach over the full task duration (figure 1). The subject-wise dynamics were obtained by back-reconstruction onto each subject's fMRI data and normalized to percent signal change values. The single-trial pre-stimulus DMN activation was then temporally correlated with the single-trial EEG theta (3-8 Hz) spectral power during retention intervals. This so-called covariance mapping (Jann et al., 2010) yielded the spatial distribution of the theta EEG fluctuations during retention associated with the dynamics of the pre-stimulus DMN. In line with previous findings, theta power was increased at frontal-midline electrodes in high- versus low-load conditions during early WM retention (figure 2). However, correlating DMN activity with theta power resulted in primarily positive correlations in low-load conditions, while during high-load conditions negative correlations of DMN activity and theta power were observed at frontal-midline electrodes. This DMN-dependent load effect reached significance in the middle of the retention period (TANOVA, p<0.05) (figure 3). Our results show a complex and load-dependent interaction of pre-stimulus DMN activity and theta power during retention, varying over time. While at a more global, load-independent view pre-stimulus DMN activity correlated positively with theta power during retention, the correlation was inverted during certain time windows in high-load trials, meaning that in trials with enhanced pre-stimulus DMN activity theta power decreased during retention. Since both WM performance and DMN activity are markers of mental health, our results could be important for further investigations of psychiatric populations.
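A minimal sketch of the covariance-mapping step as described: correlate single-trial pre-stimulus DMN activation with single-trial theta power across trials, separately per electrode. All upstream preprocessing is omitted, and the variable layout is an assumption.

```python
import numpy as np
from scipy import stats

def covariance_map(dmn: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """dmn: (n_trials,) pre-stimulus DMN activation; theta: (n_trials, n_electrodes)
    retention-interval theta power. Returns one correlation per electrode."""
    return np.array([stats.pearsonr(dmn, theta[:, e])[0]
                     for e in range(theta.shape[1])])  # spatial DMN-theta coupling map
```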
Abstract:
Previous work highlighted the possibility that musical training influences cognitive functioning. The suggested reason for this influence is the strong recruitment of attention, planning, and working memory functions while playing a musical instrument. The purpose of the present work was twofold: to evaluate the general relationship between pre-stimulus electrophysiological activity and cognition, and more specifically the influence of musical expertise on working memory functions. With this purpose in mind, we used covariance mapping analyses to evaluate whether pre-stimulus electroencephalographic activity is predictive of reaction time during a visual working memory task (Sternberg paradigm) in musicians and non-musicians. In line with our hypothesis, we replicated previous findings pointing to a general predictive value of pre-stimulus activity for working memory performance. Most importantly, we also provide first evidence for an influence of musical expertise on working memory performance that could distinctively be predicted by pre-stimulus spectral power. Our results open novel perspectives for better understanding the broad influence of musical expertise on cognition.
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks obtained in this way are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of peri-ictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for peri-ictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
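A sketch of the tree-learning step under a Gaussian assumption, where the pairwise mutual information is -0.5·ln(1 − r²), so the maximum-MI spanning tree can be found by running a minimum-spanning-tree routine on negated weights; the time slicing and Bayesian inference steps from the paper are omitted.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X: np.ndarray) -> np.ndarray:
    """Learn a Chow-Liu tree for X (n_channels, n_samples) of iEEG-like signals."""
    r = np.corrcoef(X)
    np.fill_diagonal(r, 0.0)           # ignore self-correlations
    mi = -0.5 * np.log1p(-(r ** 2))    # pairwise Gaussian mutual information
    tree = minimum_spanning_tree(-mi)  # max-MI spanning tree via sign flip
    return tree.toarray() != 0         # boolean adjacency (upper-triangular)
```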