968 results for Longitudinal Data Analysis and Time Series
Abstract:
In this dissertation, we propose a continuous-time Markov chain model for longitudinal data whose outcome variable has three categories. The advantage of this model is that it permits a different number of measurements for each subject, and the duration between two consecutive measurement time points can be irregular. Using the maximum likelihood principle, we can estimate the transition probability between two time points. By using the information provided by the independent variables, the model can also estimate the transition probability for each subject. The Monte Carlo simulation method will be used to compare the goodness of fit of this model with that obtained from other models. A public health example will be used to demonstrate the application of this method.
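The key computation behind such a model is standard: for a continuous-time Markov chain with generator matrix Q, the transition probability matrix over an arbitrary elapsed time t is P(t) = expm(Q t), which is exactly what accommodates irregularly spaced measurements. A minimal Python sketch, with a hypothetical 3-state generator and observation sequence:

import numpy as np
from scipy.linalg import expm

# Hypothetical generator (intensity) matrix Q for a 3-category outcome.
# Rows sum to zero; off-diagonal entries are instantaneous transition rates.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.15, -0.40,  0.25],
              [ 0.05,  0.10, -0.15]])

def transition_matrix(Q, t):
    """P(t) = expm(Q * t): probability of moving between categories
    over an elapsed time t, valid for any (irregular) gap."""
    return expm(Q * t)

def subject_loglik(Q, times, states):
    """Log-likelihood of one subject's irregularly spaced sequence:
    sum over consecutive pairs of log P(t_{k+1} - t_k)[s_k, s_{k+1}]."""
    ll = 0.0
    for k in range(len(times) - 1):
        P = transition_matrix(Q, times[k + 1] - times[k])
        ll += np.log(P[states[k], states[k + 1]])
    return ll

print(subject_loglik(Q, times=[0.0, 1.5, 4.0], states=[0, 1, 1]))

Maximizing the sum of such per-subject log-likelihoods over the free entries of Q yields the maximum likelihood estimates; subject-level covariates can enter by letting the rates in Q depend on them.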
Abstract:
This data set contains three time series of measurements of soil carbon (particulate and dissolved) from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing.
1. Particulate soil carbon: Stratified soil sampling was performed every two years, starting before sowing in April 2002 and repeated in April 2004, 2006 and 2008, to a depth of 30 cm, segmented to a depth resolution of 5 cm, giving six depth subsamples per core. Total carbon concentration was analyzed on ball-milled subsamples by an elemental analyzer at 1150°C. Inorganic carbon concentration was measured by elemental analysis at 1150°C after removal of organic carbon for 16 h at 450°C in a muffle furnace. Organic carbon concentration was calculated as the difference between the total and inorganic carbon measurements.
2. Particulate soil carbon (high intensity sampling): In one block of the Jena Experiment, soil samples were taken to a depth of 1 m (segmented to a depth resolution of 5 cm, giving 20 depth subsamples per core) with three replicates per block every 5 years, starting before sowing in April 2002. Samples were processed as for the more frequent sampling.
3. Dissolved organic carbon: Suction plates installed on the field site at depths of 10, 20, 30 and 60 cm were used to sample soil pore water. Cumulative soil solution was sampled biweekly and analyzed for dissolved organic carbon concentration by a high-TOC elemental analyzer. Annual mean values of DOC are provided.
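The two derived quantities above reduce to simple arithmetic; a minimal Python sketch (column names and values are hypothetical, as the data set's layout is not specified here):

import pandas as pd

# Hypothetical measurements; the abstract specifies only the calculations.
solid = pd.DataFrame({
    "total_C":     [25.1, 24.8, 26.0],   # g C / kg soil, elemental analyzer
    "inorganic_C": [ 1.2,  1.1,  1.4],   # after 16 h at 450 degC
})
# Organic C is the difference of the two measurements.
solid["organic_C"] = solid["total_C"] - solid["inorganic_C"]

# Biweekly DOC concentrations reduced to the annual means that are reported.
doc = pd.Series(
    [12.3, 11.8, 13.1, 12.6],
    index=pd.to_datetime(["2003-01-15", "2003-01-29",
                          "2004-02-12", "2004-02-26"]),
)
annual_doc = doc.groupby(doc.index.year).mean()
print(annual_doc)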
Abstract:
This data set contains four time series of particulate and dissolved soil nitrogen measurements from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing.
1. Total nitrogen from solid phase: Stratified soil sampling was performed every two years, starting before sowing in April 2002 and repeated in April 2004, 2006 and 2008, to a depth of 30 cm, segmented to a depth resolution of 5 cm, giving six depth subsamples per core. In 2002, five samples per plot were taken and analyzed independently; averaged values per depth layer are reported. In later years, three samples per plot were taken, pooled in the field, and measured as a combined sample. Sampling locations were less than 30 cm apart from sampling locations in other years. All soil samples were passed through a sieve with a mesh size of 2 mm in 2002; in later years, samples were further sieved to 1 mm. No additional mineral particles were removed by this procedure. Total nitrogen concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany).
2. Total nitrogen from solid phase (high intensity sampling): In block 2 of the Jena Experiment, soil samples were taken to a depth of 1 m (segmented to a depth resolution of 5 cm, giving 20 depth subsamples per core) with three replicates per block every 5 years, starting before sowing in April 2002. Samples were processed as for the more frequent sampling but were always analyzed independently and never pooled.
3. Mineral nitrogen from KCl extractions: Five soil cores (diameter 0.01 m) were taken at a depth of 0 to 0.15 m (and between 2002 and 2004 also at a depth of 0.15 to 0.3 m) of the mineral soil from each of the experimental plots at various times over the years. In some later years, plots of the management experiment, which altered mowing frequency and fertilized subplots (see further details below), were also sampled. The soil cores per plot (subplots in the case of the management experiment) were pooled during each sampling campaign. NO3-N and NH4-N concentrations were determined by extraction of soil samples with 1 M KCl solution and were measured in the soil extract with a Continuous Flow Analyzer (CFA, 2003-2005: Skalar, Breda, Netherlands; 2006-2007: AutoAnalyzer, Seal, Burgess Hill, United Kingdom).
4. Dissolved nitrogen in soil solution: Glass suction plates with a diameter of 12 cm, 1 cm thickness and a pore size of 1-1.6 µm (UMS GmbH, Munich, Germany) were installed in April 2002 at depths of 10, 20, 30 and 60 cm to collect soil solution. The sampling bottles were continuously evacuated to a negative pressure between 50 and 350 mbar, such that the suction pressure was about 50 mbar above the actual soil water tension; thus, only the soil leachate was collected. Cumulative soil solution was sampled biweekly and analyzed for nitrate (NO3-), ammonium (NH4+) and total dissolved nitrogen (TDN) concentrations with a continuous flow analyzer (CFA, Skalar, Breda, The Netherlands). Nitrate was analyzed photometrically after reduction to NO2- and reaction with sulfanilamide and naphthylethylenediamine-dihydrochloride to form an azo dye; our NO3- concentrations therefore contain an unknown, but expectedly small, contribution of NO2-. Simultaneously with the NO3- analysis, NH4+ was determined photometrically as 5-aminosalicylate after a modified Berthelot reaction. The detection limits of NO3- and NH4+ were 0.02 and 0.03 mg N L-1, respectively. TDN in soil solution was analyzed by oxidation with K2S2O8 followed by reduction to NO2- as described above for NO3-. Dissolved organic N (DON) concentrations in soil solution were calculated as the difference between TDN and the sum of mineral N (NO3- + NH4+).
Abstract:
This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten-year period (2004-2015), data sets were collected using 30-50 transects, 500-800 m in length, distributed across a 142 km**2 shallow, clear-water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
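The point-based interpretation step (performed with CPCe in the study) amounts to counting labelled points per photograph; a minimal Python sketch with hypothetical point labels:

from collections import Counter

# Hypothetical point labels for one photo: each point on the image
# is assigned a seagrass species (or "none"), as done in CPCe.
points = ["Zostera", "Zostera", "none", "Halophila", "Zostera", "none"]

counts = Counter(points)
n = len(points)
# Percentage cover per species = labelled points / total points * 100.
cover = {sp: round(100.0 * c / n, 1)
         for sp, c in counts.items() if sp != "none"}
print(cover)  # {'Zostera': 50.0, 'Halophila': 16.7}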
Abstract:
During the last decades, Puertos del Estado and research groups have invested considerable effort in numerical and physical monitoring. This effort has created the need for a tool to standardize, store and process all gathered data. This paper describes the Test Analysis Tool (TATo), developed for that purpose.
Abstract:
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. Together, these methods provide a fast, user-friendly program for environmental signal analysis, with useful, practical and understandable results.
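The program itself is MATLAB code and is not reproduced here; as a rough Python equivalent of two of its core operations (a continuous wavelet transform for time-period examination, and a discrete wavelet transform used for band-wise filtering), here is a sketch using the PyWavelets package on a synthetic series:

import numpy as np
import pywt

# Synthetic "cave monitoring" series: slow annual cycle plus noise.
t = np.arange(0, 1024)
x = np.sin(2 * np.pi * t / 365.0) + 0.3 * np.random.randn(t.size)

# Continuous wavelet transform: time-period (scale) examination.
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(x, scales, "morl")

# Discrete wavelet transform: decompose into period bands, then filter
# by zeroing the finest-detail coefficients and reconstructing.
coeffs = pywt.wavedec(x, "db4", level=4)
coeffs[-1] = np.zeros_like(coeffs[-1])  # drop the highest-frequency band
x_filtered = pywt.waverec(coeffs, "db4")

print(coefs.shape, x_filtered.shape)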
Abstract:
Data were collected during various groundfish surveys carried out by IFREMER from October to December between 1997 and 2011, on the eastern continental shelf of the Bay of Biscay and in the Celtic Sea (EVHOE series). The sampling design was stratified according to latitude and depth. A 36/47 GOV trawl was used with a 20 mm mesh codend liner. Haul duration was 30 minutes at a towing speed of 4 knots, and fishing was restricted to daylight hours. Catch weights and catch numbers were recorded for all species, and body sizes were measured. The weights and numbers per haul were transformed into abundances per km**2 using the swept area of a standard haul (0.069 km**2).
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This study explores whether the introduction of selectively trained radiographers reporting Accident and Emergency (A&E) X-ray examinations of the appendicular skeleton affected the availability of reports for A&E and General Practitioner (GP) examinations at a typical district general hospital. This was achieved by analysing monthly data on A&E and GP examinations for 1993-1997 using structural time-series models. Parameters to capture stochastic seasonal effects and stochastic time trends were included in the models. The main outcome measures were changes in the number, proportion and timeliness of A&E and GP examinations reported. Radiographer reporting of X-ray examinations requested by A&E was associated with a 12% (p = 0.050) increase in the number of A&E examinations reported and a 37% (p
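A structural time-series model with a stochastic trend and a stochastic seasonal, of the kind used in this study, can be fitted with, for example, statsmodels; a minimal sketch on invented monthly counts, with a step dummy standing in for the introduction of radiographer reporting (dates and numbers are purely illustrative):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented monthly counts of reported examinations, 1993-1997.
idx = pd.date_range("1993-01", periods=60, freq="MS")
y = pd.Series(1000 + 5 * np.arange(60)
              + 50 * np.sin(2 * np.pi * np.arange(60) / 12)
              + np.random.normal(0, 20, 60), index=idx)

# Step intervention dummy: 1 after a hypothetical introduction date.
step = (idx >= "1995-07-01").astype(float)

# Local linear trend (stochastic level and slope) + stochastic monthly
# seasonal, with the intervention dummy as a regression effect.
model = sm.tsa.UnobservedComponents(
    y, level="local linear trend", seasonal=12,
    stochastic_seasonal=True, exog=step,
)
res = model.fit(disp=False)
print(res.summary())  # the coefficient on the dummy estimates the shift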
Abstract:
Min/max autocorrelation factor analysis (MAFA) and dynamic factor analysis (DFA) are complementary techniques for analysing short (> 15-25 y), non-stationary, multivariate data sets. We illustrate the two techniques using catch rate (cpue) time-series (1982-2001) for 17 species caught during trawl surveys off Mauritania, with the NAO index, an upwelling index (UPW), sea surface temperature (SST), and an index of fishing effort as explanatory variables. Both techniques gave coherent results, the most important common trend being a decrease in cpue during the latter half of the time-series, and the next most important being an increase during the first half. A DFA model with SST and UPW as explanatory variables and two common trends gave good fits to most of the cpue time-series.
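DFA in the sense used here (a few common trends plus explanatory variables, estimated in state-space form) is available in standard libraries; a rough sketch using statsmodels' DynamicFactor on invented cpue data (5 series rather than the study's 17, purely for illustration):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = pd.date_range("1982", periods=20, freq="YS")

# Invented standardized cpue series and two explanatory variables.
cpue = pd.DataFrame(rng.standard_normal((20, 5)), index=years,
                    columns=[f"sp{i}" for i in range(1, 6)])
exog = pd.DataFrame(rng.standard_normal((20, 2)), index=years,
                    columns=["SST", "UPW"])

# Two common trends (k_factors=2), as in the best model reported.
model = sm.tsa.DynamicFactor(cpue, k_factors=2, factor_order=1, exog=exog)
res = model.fit(disp=False)
print(res.summary())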
Abstract:
Machine learning is widely adopted to decode multivariate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mainly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting the neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times, promoting the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
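The thesis' actual architectures are not reproduced here; as a sketch of what a light, compact EEG CNN of the kind described typically looks like (a temporal convolution, a spatial convolution across electrodes, pooling, and a small linear classifier), a minimal PyTorch example with invented dimensions:

import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """Light CNN sketch for EEG trials shaped (batch, 1, channels, samples)."""
    def __init__(self, n_channels=32, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters.
            nn.Conv2d(1, 8, kernel_size=(1, 33), padding=(0, 16), bias=False),
            nn.BatchNorm2d(8),
            # Spatial convolution: collapses the electrode dimension.
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )
        self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

net = CompactEEGNet()
print(net(torch.randn(4, 1, 32, 256)).shape)  # torch.Size([4, 2])

Keeping the parameter count this small is one way to reduce the overfitting risk on small EEG datasets noted above, and the two-stage temporal/spatial factorization keeps the learned filters inspectable.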
Abstract:
Background: The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams, where none of them has complete knowledge of their counterpart's field. As a result, knowledge exchange may often be characterized by miscommunication leading to misinterpretation, ultimately resulting in errors in research and even in clinical practice. Although communication has a central role in interdisciplinary collaboration, and miscommunication can have a negative impact on research processes, to the best of our knowledge no study has yet explored how data analysis specialists and clinical researchers communicate over time. Methods/Principal Findings: We conducted qualitative analysis of encounters between clinical researchers and data analysis specialists (epidemiologist, clinical epidemiologist, and data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology for extraction of emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology, looking for potential interventions to improve this process. Four major emerging themes were found. Definitions using lay language were frequently employed as a way to bridge the language gap between the specialties. Thought experiments presented a series of "what if" situations that helped clarify how the method or information from the other field would behave if exposed to alternative situations, ultimately aiding in explaining their main objective. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, thus helping specialists understand the current context based on an understanding of their final goal. Conclusion/Significance: The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.