874 results for Time-varying covariance matrices


Relevance: 30.00%

Abstract:

In the summers of 2001 and 2002, glacio-climatological research was performed at 4110-4120 m a.s.l. on the Belukha snow/firn plateau, Siberian Altai. Hundreds of samples from snow pits and a 21 m snow/firn core were collected to establish the annual/seasonal/monthly depth-accumulation scale, based on stable-isotope records, stratigraphic analyses and meteorological and synoptic data. The fluctuations of water stable-isotope records show well-preserved seasonal variations. The δ18O-δD relationships in precipitation, snow pits and the snow/firn core show the same covariance slope as the global meteoric water line. The origins of precipitation nourishing the Belukha plateau were determined based on cluster analysis of δ18O and d-excess records and examination of synoptic atmospheric patterns. Calibration and validation of the developed clusters occurred at event and monthly timescales with about 15% uncertainty. Two distinct moisture sources were identified: oceanic sources with d-excess < 12 parts per thousand, and sources in the Aral-Caspian closed drainage basin with d-excess > 12 parts per thousand. Two-thirds of the annual accumulation was from oceanic precipitation, of which more than half had isotopic ratios corresponding to moisture evaporated over the Atlantic Ocean. Precipitation from the Arctic/Pacific Ocean had the lowest deuterium excess, contributing one-tenth of the annual accumulation.
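As a toy illustration of the two-source classification described above (with made-up isotope values, not the study's data), deuterium excess can be computed from δD and δ18O and compared against the 12 parts-per-thousand threshold:

```python
# Minimal sketch (hypothetical values): classify samples by deuterium excess,
# d-excess = dD - 8 * d18O, using the 12 per-mil threshold from the abstract.
import numpy as np

d18O = np.array([-18.2, -12.5, -25.1, -15.0])     # per mil, illustrative only
dD   = np.array([-135.0, -88.0, -190.0, -105.0])  # per mil, illustrative only

d_excess = dD - 8.0 * d18O
source = np.where(d_excess > 12.0, "Aral-Caspian basin", "oceanic")

for de, s in zip(d_excess, source):
    print(f"d-excess = {de:5.1f} per mil -> {s}")
```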

Relevance: 30.00%

Abstract:

Ecology and conservation require reliable data on the occurrence of animals and plants. A major source of bias is imperfect detection, which, however, can be corrected for by estimating detectability. In traditional occupancy models, this requires repeat or multi-observer surveys. Recently, time-to-detection models have been developed as a cost-effective alternative that requires no repeat surveys, so costs could be halved. We compared the efficiency and reliability of time-to-detection and traditional occupancy models under varying survey effort. Two observers independently searched for 17 plant species in 44 Swiss grassland quadrats of 100 m² each and recorded the time-to-detection for each species, enabling detectability to be estimated with both time-to-detection and traditional occupancy models. In addition, we gauged the relative influence on detectability of species, observer, plant height and two measures of abundance (cover and frequency). Estimates of detectability and occupancy under both models were very similar. Rare species were more likely to be overlooked; detectability was strongly affected by abundance. As a measure of abundance, frequency outperformed cover in its predictive power. The two observers differed significantly in their detection ability. Time-to-detection models were as accurate as traditional occupancy models, but their data are easier to obtain; thus they provide a cost-effective alternative to traditional occupancy models for detection-corrected estimation of occurrence.
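A minimal sketch of the underlying idea, assuming (as is common in time-to-detection models, though not stated in the abstract) exponentially distributed detection times with right-censoring when a survey ends without a find; all numbers are illustrative:

```python
# Hedged sketch of the time-to-detection idea (not the authors' code): assume the
# time until a present species is first detected is exponential with rate lam.
# Searches that end at time T without a detection are right-censored.
import numpy as np
from scipy.optimize import minimize_scalar

detect_times = np.array([3.0, 8.5, 1.2, 14.0])  # minutes, illustrative
censored_T   = np.array([20.0, 20.0])           # surveys ending without a detection

def neg_log_lik(lam):
    if lam <= 0:
        return np.inf
    ll = np.sum(np.log(lam) - lam * detect_times)  # observed detections
    ll += np.sum(-lam * censored_T)                # censored (not detected by T)
    return -ll

lam_hat = minimize_scalar(neg_log_lik, bounds=(1e-6, 10), method="bounded").x
T = 20.0
print(f"rate = {lam_hat:.3f}/min, detectability within {T:.0f} min = {1 - np.exp(-lam_hat * T):.2f}")
```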

Relevance: 30.00%

Abstract:

Recently, multiple studies showed that spatial and temporal features of a task-negative default mode network (DMN) (Greicius et al., 2003) are important markers for psychiatric diseases (Balsters et al., 2013). Another prominent indicator of cognitive functioning, yielding information about the mental condition in health and disease, is working memory (WM) processing. In EEG and MEG studies, frontal-midline theta power has been shown to increase with load during WM retention in healthy subjects (Brookes et al., 2011). Negative correlations between DMN activity and theta amplitude have been found during resting state (Jann et al., 2010) as well as during WM (Michels et al., 2010). Likewise, WM training resulted in higher resting-state theta power as well as increased small-worldness of the resting brain (Langer et al., 2013). Further, increased fMRI connectivity between nodes of the DMN correlated with better WM performance (Hampson et al., 2006). Hence, the brain's default state might influence its functioning during task performance. We therefore hypothesized correlations between pre-stimulus DMN activity and EEG theta power during WM maintenance, depending on the WM load. Seventeen healthy subjects performed a Sternberg WM task while being measured simultaneously with EEG and fMRI. Data were recorded within a multicenter study: 12 subjects were measured in Zurich with a 64-channel MR-compatible system (Brain Products) in a 3T Philips scanner, and 5 subjects with a 96-channel MR-compatible system (Brain Products) in a 3T Siemens scanner in Bern. The DMN component was obtained by a group BOLD-ICA approach over the full task duration (figure 1). The subject-wise dynamics were obtained by back-reconstruction onto each subject's fMRI data and normalized to percent signal change values. The single-trial pre-stimulus DMN activation was then temporally correlated with the single-trial EEG theta (3-8 Hz) spectral power during retention intervals. This so-called covariance mapping (Jann et al., 2010) yielded the spatial distribution of the theta EEG fluctuations during retention associated with the dynamics of the pre-stimulus DMN. In line with previous findings, theta power was increased at frontal-midline electrodes in high- versus low-load conditions during early WM retention (figure 2). However, correlating DMN activity with theta power yielded primarily positive correlations in low-load conditions, whereas during high-load conditions negative correlations between DMN activity and theta power were observed at frontal-midline electrodes. This DMN-dependent load effect reached significance in the middle of the retention period (TANOVA, p<0.05) (figure 3). Our results show a complex and load-dependent interaction of pre-stimulus DMN activity and theta power during retention, varying over time. While at a more global, load-independent level pre-stimulus DMN activity correlated positively with theta power during retention, the correlation was inverted during certain time windows in high-load trials, meaning that in trials with enhanced pre-stimulus DMN activity theta power decreased during retention. Since both WM performance and DMN activity are markers of mental health, our results could be important for further investigations of psychiatric populations.
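The covariance-mapping step described above (correlating a single-trial pre-stimulus regressor with single-trial spectral power at every electrode) can be sketched on synthetic data; the array shapes and the injected frontal effect are assumptions, not the study's pipeline:

```python
# Minimal sketch of covariance mapping on synthetic data: correlate single-trial
# pre-stimulus DMN activation with single-trial theta power at every electrode,
# yielding one spatial map.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_electrodes = 120, 64

dmn_prestim = rng.standard_normal(n_trials)                  # % signal change per trial
theta_power = rng.standard_normal((n_trials, n_electrodes))  # retention-interval theta power
theta_power[:, :10] += 0.5 * dmn_prestim[:, None]            # inject an effect at 10 electrodes

# Pearson correlation of the DMN regressor with each electrode's theta power
dmn_z = (dmn_prestim - dmn_prestim.mean()) / dmn_prestim.std()
theta_z = (theta_power - theta_power.mean(0)) / theta_power.std(0)
cov_map = (dmn_z @ theta_z) / n_trials   # one value per electrode

print("strongest electrodes:", np.argsort(-np.abs(cov_map))[:5])
```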

Relevance: 30.00%

Abstract:

Previous work highlighted the possibility that musical training has an influence on cognitive functioning. The suggested reason for this influence is the strong recruitment of attention, planning, and working memory functions while playing a musical instrument. The purpose of the present work was twofold, namely to evaluate the general relationship between pre-stimulus electrophysiological activity and cognition, and more specifically the influence of musical expertise on working memory functions. With this purpose in mind, we used covariance mapping analyses to evaluate whether pre-stimulus electroencephalographic activity is predictive of reaction time during a visual working memory task (Sternberg paradigm) in musicians and non-musicians. In line with our hypothesis, we replicated previous findings pointing to a general predictive value of pre-stimulus activity for working memory performance. Most importantly, we also provide the first evidence for an influence of musical expertise on working memory performance that could be distinctively predicted by pre-stimulus spectral power. Our results open novel perspectives for better comprehending the broad influences of musical expertise on cognition.

Relevance: 30.00%

Abstract:

Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks thus obtained are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
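Under a Gaussian assumption (a common simplification, not stated in the paper), the Chow–Liu tree can be built directly from the empirical correlation matrix, since pairwise mutual information is then a monotone function of the squared correlation. A minimal sketch on synthetic channels:

```python
# Sketch of the Chow-Liu step under a Gaussian assumption (not the paper's code):
# pairwise mutual information between channels is derived from the empirical
# correlation matrix, and the CL tree is the maximum-weight spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 2000
X = rng.standard_normal((n_samples, n_channels))
X[:, 1] += 0.8 * X[:, 0]                 # add some spatial dependence
X[:, 2] += 0.6 * X[:, 1]

C = np.corrcoef(X, rowvar=False)
MI = -0.5 * np.log(1.0 - np.clip(C**2, 0, 0.999))   # Gaussian mutual information
np.fill_diagonal(MI, 0.0)

# maximum spanning tree = minimum spanning tree on negated weights
tree = minimum_spanning_tree(-MI).toarray()
print("Chow-Liu tree edges:", [tuple(e) for e in np.argwhere(tree < 0)])

# functional network in the classical sense: threshold the absolute CC matrix
threshold = 0.5
network = (np.abs(C) > threshold) & ~np.eye(n_channels, dtype=bool)
print("edges above threshold:", np.argwhere(np.triu(network)).tolist())
```

The tree keeps only n-1 of the strongest dependencies, which is the sparsification effect the abstract describes.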

Relevance: 30.00%

Abstract:

An in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of observations to obtain the most accurate orbit propagation. The accuracy of the results of an orbit determination/improvement process depends on: tracklet length, number of observations, type of orbit, astrometric error, time interval between tracklets and observation geometry. The latter depends on the position of the object along its orbit and the location of the observing station. This covariance analysis aims to optimize the observation strategy taking into account the influence of the orbit shape, the relative object-observer geometry and the interval between observations.

Relevance: 30.00%

Abstract:

The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for space debris using optical sensors. The debris objects are discovered during systematic survey observations. In general, the result of a discovery consists of only a short observation arc, or tracklet, which is used to perform a first orbit determination in order to be able to observe the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In order to obtain the most accurate orbit within the time available it is necessary to optimize the follow-up observation strategy. In this paper an in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of follow-up observations to obtain the most accurate orbit propagation to be used for space debris catalogue maintenance. The main factors that determine the accuracy of the results of an orbit determination/improvement process are: tracklet length, number of observations, type of orbit, astrometric error of the measurements, time interval between tracklets, and the relative position of the object along its orbit with respect to the observing station. The main aim of the covariance analysis is to optimize the follow-up strategy as a function of the object-observer geometry, the interval between follow-up observations and the shape of the orbit. This analysis can be applied to every orbital regime, but particular attention was dedicated to geostationary, Molniya, and geostationary transfer orbits. Finally, the case with more than two follow-up observations and the influence of a second observing station are also analyzed.
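A schematic sketch of how such a covariance analysis can rank observation strategies (the partial-derivative matrices below are random placeholders, not a real observation model): each tracklet adds its information H'WH to the normal matrix of the six-parameter state, and the inverse of the accumulated normal matrix is the formal covariance used to compare strategies.

```python
# Illustrative covariance-analysis skeleton (not AIUB's tool): each tracklet
# contributes H^T W H to the normal matrix of the 6-parameter orbital state;
# the formal covariance is its inverse, and strategies can be ranked by, e.g.,
# the trace of the resulting covariance.
import numpy as np

rng = np.random.default_rng(2)
sigma_obs = 1.0  # astrometric error (arbitrary units)

def tracklet_normal_matrix(n_obs, geometry_scale):
    """Accumulate H^T W H for one tracklet; the H rows stand in for the partials
    of the two measured angles with respect to the 6 orbital parameters."""
    N = np.zeros((6, 6))
    for _ in range(n_obs):
        H = geometry_scale * rng.standard_normal((2, 6))  # 2 angles per epoch
        N += H.T @ H / sigma_obs**2
    return N

# Strategy A: two tracklets with similar geometry; strategy B: the second
# tracklet adds more geometric diversity (e.g., a longer interval).
N_A = tracklet_normal_matrix(5, 1.0) + tracklet_normal_matrix(5, 1.0)
N_B = tracklet_normal_matrix(5, 1.0) + tracklet_normal_matrix(5, 2.0)

for name, N in [("A", N_A), ("B", N_B)]:
    P = np.linalg.inv(N)
    print(f"strategy {name}: trace of formal covariance = {np.trace(P):.3f}")
```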

Relevance: 30.00%

Abstract:

AIM: To describe structural covariance networks of gray matter volume (GMV) change in 28 patients with a first-ever stroke affecting the primary sensorimotor cortices, and to investigate their relationship to hand function recovery and local GMV change. METHODS: Tensor-based morphometry maps derived from high-resolution structural images were subjected to principal component analysis to identify the networks. We calculated correlations between network expression and local GMV change, sensorimotor hand function and lesion volume. To verify which of the structural covariance networks of GMV change have a significant relationship to hand function, we performed an additional multivariate regression approach. RESULTS: Expression of the second network, explaining 9.1% of variance, correlated with GMV increase in the medio-dorsal (md) thalamus and with hand motor skill. Patients with positive expression coefficients were distinguished by a significantly higher GMV increase of this structure during stroke recovery. Significant nodes of this network were located in the md thalamus, dorsolateral prefrontal cortex, and higher-order sensorimotor cortices. The hand function parameter had a unique relationship to the network and depended on an interaction between network expression and lesion volume. Conversely, network expression is limited in patients with large lesion volumes. CONCLUSION: The chronic phase of sensorimotor cortical stroke is characterized by a large-scale co-varying structural network in the ipsilesional hemisphere associated specifically with sensorimotor hand skill. Its expression is related to GMV increase of the md thalamus, one constituent of the network, and correlated with the cortico-striato-thalamic loop involved in control of motor execution and with higher-order sensorimotor cortices. A close relation between expression of this network and degree of recovery might indicate reduced compensatory resources in the impaired subgroup.
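A compact sketch of the network-extraction step on synthetic data (PCA via SVD of centered morphometry maps; the patient count, the hand-function score and the focus on the second component follow the abstract, everything else is made up):

```python
# Minimal PCA sketch of the structural-covariance-network idea (synthetic data):
# voxel-wise GMV-change maps are decomposed with PCA; each component is a
# covariance network and the subject scores are its "expression".
import numpy as np

rng = np.random.default_rng(3)
n_patients, n_voxels = 28, 5000
gmv_change = rng.standard_normal((n_patients, n_voxels))   # tensor-based morphometry maps
hand_score = rng.standard_normal(n_patients)                # hypothetical hand-function score

X = gmv_change - gmv_change.mean(axis=0)         # center voxel-wise
U, S, Vt = np.linalg.svd(X, full_matrices=False)
expression = U * S                               # subject-wise network expression
explained = S**2 / np.sum(S**2)

# relate expression of the second network to behaviour, as in the abstract
r = np.corrcoef(expression[:, 1], hand_score)[0, 1]
print(f"component 2 explains {explained[1]:.1%} of variance; r with hand skill = {r:.2f}")
```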

Relevance: 30.00%

Abstract:

The objectives of this research were (1) to study the effect of contact pressure, compression time, and liquid (moisture content of the fabric) on the transfer by sliding contact of non-fixed surface contamination to protective clothing constructed from uncoated, woven fabrics, (2) to study the effect of contact pressure, compression time, and liquid content on the subsequent penetration through the fabric, and (3) to determine if varying the type of contaminant changes the effect of contact pressure, compression time, and liquid content on the transfer by sliding contact and penetration of non-fixed surface contamination. It was found that the combined influence of the liquid (moisture content of the fabric), load (contact pressure), compression time, and their interactions significantly influenced the penetration of all three test agents, sucrose-14C, triolein-3H, and starch-14C, through 100% cotton fabric. The combined influence of the statistically significant main effects and their interactions increased the penetration of triolein-3H by 32,548%, sucrose-14C by 7,006%, and starch-14C by 1,900%.

Relevance: 30.00%

Abstract:

The role of clinical chemistry has traditionally been to evaluate acutely ill or hospitalized patients. Traditional statistical methods have serious drawbacks in that they use univariate techniques. To demonstrate alternative methodology, a multivariate analysis of covariance model was developed and applied to the data from the Cooperative Study of Sickle Cell Disease (CSSCD). The purpose of developing the model for the laboratory data from the CSSCD was to evaluate the comparability of the results from the different clinics. Several variables were incorporated into the model in order to control for possible differences among the clinics that might confound any real laboratory differences. Differences for LDH, alkaline phosphatase and SGOT were identified which will necessitate adjustments by clinic whenever these data are used. In addition, aberrant clinic values for LDH, creatinine and BUN were also identified. The use of any statistical technique, including multivariate analysis, without thoughtful consideration may lead to spurious conclusions that may not be corrected for some time, if ever. However, the advantages of multivariate analysis far outweigh its potential problems. If its use increases as it should, the applicability to the analysis of laboratory data in prospective patient monitoring, quality control programs, and interpretation of data from cooperative studies could well have a major impact on the health and well-being of a large number of individuals.
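A hedged sketch of a multivariate analysis of covariance on a synthetic data frame (the analyte names follow the abstract, but the column names, the age covariate and all values are illustrative); statsmodels' MANOVA class fits the underlying multivariate linear model:

```python
# Toy MANCOVA-style analysis: three lab analytes modeled jointly as a function
# of clinic, adjusting for a continuous covariate (age). Synthetic data only.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "clinic": rng.choice(["A", "B", "C"], size=n),
    "age": rng.uniform(5, 40, size=n),
    "ldh": rng.normal(300, 40, size=n),
    "alk_phos": rng.normal(120, 25, size=n),
    "sgot": rng.normal(35, 8, size=n),
})
df.loc[df.clinic == "C", "ldh"] += 30   # inject a clinic difference

fit = MANOVA.from_formula("ldh + alk_phos + sgot ~ clinic + age", data=df)
print(fit.mv_test())
```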

Relevance: 30.00%

Abstract:

The infant mortality rate (IMR) is considered to be one of the most important indices of a country's well-being. Countries around the world and health organizations like the World Health Organization are dedicating their resources, knowledge and energy to reducing infant mortality rates. The well-known Millennium Development Goal 4 (MDG 4), whose aim is to achieve a two-thirds reduction of the under-five mortality rate between 1990 and 2015, is an example of this commitment. In this study our goal is to model the trends of IMR from the 1950s to the 2010s for selected countries. We would like to know how the IMR is changing over time and how it differs across countries. IMR data collected over time form a time series, and the repeated observations of an IMR time series are not statistically independent; so in modeling the trend of IMR it is necessary to account for these correlations. We proposed to use the generalized least squares method in a general linear models setting to deal with the variance-covariance structure in our model. In order to estimate the variance-covariance matrix, we referred to time-series models, especially autoregressive and moving-average models. Furthermore, we compared results from the general linear model with a correlation structure to those from the ordinary least squares method, which ignores the correlation structure, to check how much the estimates change.
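A minimal sketch of this comparison on a synthetic IMR series (the AR(1) error coefficient, the linear trend and all values are assumptions made for illustration), using statsmodels' OLS and GLSAR:

```python
# Fit a linear time trend by OLS and by GLS with an AR(1) error structure,
# then compare the slope estimates and their standard errors. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
years = np.arange(1950, 2011)
t = years - years[0]

noise = np.zeros(len(t))
for i in range(1, len(t)):                      # AR(1)-correlated errors
    noise[i] = 0.7 * noise[i - 1] + rng.normal(scale=2.0)
imr = 120.0 - 1.5 * t + noise                   # declining synthetic IMR

X = sm.add_constant(t)
ols = sm.OLS(imr, X).fit()
gls = sm.GLSAR(imr, X, rho=1).iterative_fit(maxiter=10)

print(f"OLS   slope = {ols.params[1]:.3f}  (SE {ols.bse[1]:.3f})")
print(f"GLSAR slope = {gls.params[1]:.3f}  (SE {gls.bse[1]:.3f})")
```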

Relevance: 30.00%

Abstract:

Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed from prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive studies have focused on estimating the unbiased distribution given left-truncated samples. However, in many applications the exact date of disease onset is not observed. For example, in an HIV infection study, the exact HIV infection time is not observable; it is only known that the infection occurred between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated through maximization of the constructed likelihood, adjusting for the selection bias and the unobservable exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariance coefficients.
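A much-simplified sketch of the likelihood construction under left truncation and right censoring (exponential survival times, fully observed onset, synthetic data; the abstract's uncertain-onset extension is not reproduced here):

```python
# Maximum likelihood for a left-truncated, right-censored exponential survival
# time: each subject enters at truncation time a already event-free, and is
# observed until the event time t or a censoring time c.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
true_rate = 0.2
n = 500
onset_to_event = rng.exponential(1 / true_rate, size=n)
a = rng.uniform(0, 5, size=n)                 # left-truncation (entry) times
keep = onset_to_event > a                     # prevalent sampling: only survivors enter
t, a = onset_to_event[keep], a[keep]
c = a + rng.uniform(0, 8, size=len(t))        # administrative censoring times
event = t <= c
obs = np.where(event, t, c)

def neg_log_lik(rate):
    if rate <= 0:
        return np.inf
    # event: log f(t) - log S(a); censored: log S(c) - log S(a)
    ll = np.where(event, np.log(rate) - rate * obs, -rate * obs) + rate * a
    return -np.sum(ll)

rate_hat = minimize_scalar(neg_log_lik, bounds=(1e-4, 5), method="bounded").x
print(f"true rate {true_rate:.2f}, truncation-adjusted MLE {rate_hat:.2f}")
```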

Relevance: 30.00%

Abstract:

The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate coefficient while varying the failure rate, sample size, true values of the parameters and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1 so that the failure times were exponentially distributed in the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias of the coefficient estimator was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
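For readers who want to experiment with updated measurements in a Cox model, a hedged sketch in long (start/stop) format is shown below; it uses lifelines' CoxTimeVaryingFitter on randomly generated subjects, and the column names, hazard increments and interval structure are all invented for illustration:

```python
# Proportional hazards with a time-dependent (updated) covariate in long format.
# Each row is one interval per subject; the covariate value may change between
# intervals, and the event flag marks whether the interval ends with failure.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(7)
rows = []
for subject in range(200):
    fixed = rng.integers(0, 2)                       # non-time-dependent binary covariate
    start = 0.0
    for _ in range(rng.integers(1, 5)):              # up to 4 updated measurements
        updated = rng.integers(0, 2)                 # time-dependent binary covariate
        length = rng.exponential(2.0)
        event = int(rng.random() < 0.15 + 0.10 * updated)   # higher hazard when updated = 1
        rows.append((subject, start, start + length, fixed, updated, event))
        start += length
        if event:
            break

long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "fixed_cov", "updated_cov", "event"])
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```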

Relevance: 30.00%

Abstract:

An exposure system was constructed to evaluate the performance of a personal organic vapor dosimeter (3520 OVM) at ppb concentrations of nine selected target volatile organic compounds (VOCs). These concentration levels are generally encountered in community air environments, both indoor and outdoor. It was demonstrated that the chamber system could provide closely controlled conditions of VOC concentrations, temperature and relative humidity (RH) required for the experiments. The target experimental conditions included combinations of three VOC concentrations (10, 20 and 200 µg/m³), three temperatures (10, 25 and 40 °C) and three RHs (12, 50 and 90% RH), leading to a total of 27 exposure conditions. No backgrounds of target VOCs were found in the exposure chamber system. In the exposure chamber, the variation of the temperature was controlled within ±1 °C, and the variation of RH was controlled within ±1.5% at 12% RH, ±2% at 50% RH and ±3% at 90% RH. High-emission permeation tubes were utilized to generate the target VOCs. Various patterns of the permeation rates were observed over time. The lifetimes and permeation rates of the tubes differed by compound, length of the tube and manufacturer. By carefully selecting the source and length of the tubes, and closely monitoring tube weight loss over time, the permeation tubes can be used for delivering low and stable concentrations of VOCs during multiple days. The results of this study indicate that the performance of the 3520 OVM is compound-specific and depends on concentration, temperature and humidity. With the exception of 1,3-butadiene under most conditions, and styrene and methylene chloride at very high relative humidities, recoveries were generally within ±25% of theory, indicating that the 3520 OVM can be effectively used over the range of concentrations and environmental conditions tested with a 24-hour sampling period. Increasing humidities resulted in increasing negative bias from full recovery. Reverse diffusion conducted at 200 µg/m³ and five temperature/humidity combinations indicated severe diffusion losses only for 1,3-butadiene, methylene chloride and styrene under increased humidity. Overall, the results of this study do not support the need to employ diffusion samplers with backup sections for the exposure conditions tested.

Relevance: 30.00%

Abstract:

Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, i.e., the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
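The normal-equation combination referred to above can be illustrated with a toy least-squares problem (random design matrices and noise levels standing in for the altimetric and gravity observation groups; none of this reflects the actual processing):

```python
# Schematic combination of two observation groups on the normal-equation level:
# N = A1' W1 A1 + A2' W2 A2 yields both the combined estimate and its full
# error covariance N^{-1}, which is what a data-assimilation scheme needs.
import numpy as np

rng = np.random.default_rng(8)
n_params = 10                      # e.g., MDT values on a small grid (toy size)
x_true = rng.standard_normal(n_params)

def observation_group(n_obs, noise):
    A = rng.standard_normal((n_obs, n_params))
    y = A @ x_true + rng.normal(scale=noise, size=n_obs)
    W = np.eye(n_obs) / noise**2
    return A.T @ W @ A, A.T @ W @ y          # normal matrix and right-hand side

N1, b1 = observation_group(80, 0.5)          # stand-in for altimetric mean sea surface
N2, b2 = observation_group(40, 1.0)          # stand-in for GRACE/GOCE gravity information

N = N1 + N2
x_hat = np.linalg.solve(N, b1 + b2)
cov = np.linalg.inv(N)                       # full error covariance of the estimate
print("rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```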