214 results for temporal sampling
Abstract:
Oscillatory entrainment to the speech signal is important for language processing, but has not yet been studied in developmental disorders of language. Developmental dyslexia, a difficulty in acquiring efficient reading skills linked to difficulties with phonology (the sound structure of language), has been associated with behavioural entrainment deficits. It has been proposed that the phonological ‘deficit’ that characterises dyslexia across languages is related to impaired auditory entrainment to speech at lower frequencies via neuroelectric oscillations (<10 Hz, ‘temporal sampling theory’). Impaired entrainment to temporal modulations at lower frequencies would affect the recovery of the prosodic and syllabic structure of speech. Here we investigated event-related oscillatory EEG activity and the contingent negative variation (CNV) in response to auditory rhythmic tone streams delivered at frequencies within the delta band (2 Hz, 1.5 Hz), relevant to sampling stressed syllables in speech. Given prior behavioural entrainment findings at these rates, we predicted functionally atypical entrainment of delta oscillations in dyslexia. Participants performed a rhythmic expectancy task, detecting occasional white-noise targets interspersed with tones occurring regularly at rates of 2 Hz or 1.5 Hz. Both groups showed significant entrainment of delta oscillations to the rhythmic stimulus stream; however, the strength of inter-trial delta phase coherence (ITC, ‘phase locking’) and the CNV were both significantly weaker in dyslexics, suggesting weaker entrainment and less preparatory brain activity. Both ITC strength and CNV amplitude were significantly related to individual differences in language processing and reading. Additionally, the instantaneous phase of the prestimulus delta oscillation predicted behavioural responding (response time) for control participants only.
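For readers unfamiliar with the measure, inter-trial phase coherence has a standard definition: the length of the mean resultant vector of per-trial instantaneous phases at a given frequency, ranging from 0 (no phase locking) to 1 (perfect phase locking). A minimal sketch of that computation (illustrative only; not the study's analysis pipeline):

```python
import numpy as np

def inter_trial_coherence(phases):
    """Inter-trial phase coherence (ITC): the length of the mean
    resultant vector of per-trial phases (0 = no locking, 1 = perfect).

    phases : array of shape (n_trials,), instantaneous phase in radians
             at one frequency and time point (e.g. from a Hilbert or
             wavelet transform of the delta-band EEG signal).
    """
    return np.abs(np.mean(np.exp(1j * phases)))

# Tightly clustered phases give ITC near 1; uniformly random phases
# give ITC near 0.
rng = np.random.default_rng(0)
print(inter_trial_coherence(rng.normal(0.0, 0.2, 200)))        # high coherence
print(inter_trial_coherence(rng.uniform(-np.pi, np.pi, 200)))  # low coherence
```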
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analysing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modelling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the Statistical Local Area (SLA) level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity on an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, compared to the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength across a shared component (e.g. latent risk factor/s) with a referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes and accommodated both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. This thesis shows how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), by incorporating a ZIP component to model excess zeros in outcomes, and by borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy for sampling individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections of birth defect categories at the SLA level are made.
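For reference, the two model components named above have standard textbook forms; the sketch below gives these generic forms only (the thesis's exact specification, including the shared component, differs in detail):

```latex
% Zero-inflated Poisson (ZIP): the count Y_i in area i is a structural
% zero with probability \pi_i, otherwise Poisson with mean \lambda_i.
P(Y_i = y) \;=\; \pi_i \,\mathbf{1}\{y = 0\} \;+\; (1 - \pi_i)\,\frac{e^{-\lambda_i}\lambda_i^{y}}{y!}

% CAR prior on spatial random effects b_i: each area is smoothed toward a
% weighted average of its neighbours, where the weights w_{ij} may encode
% adjacency or covariate similarity (e.g. maternal age).
b_i \mid b_{-i} \;\sim\; N\!\left(\frac{\sum_{j\neq i} w_{ij}\, b_j}{\sum_{j\neq i} w_{ij}},\; \frac{\sigma_b^{2}}{\sum_{j\neq i} w_{ij}}\right)
```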
Abstract:
Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g. moorings, or gathered from ships. Recently, increased use of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path-following accuracy and sampling resolution for a given region of interest, addressing a pressing need among ocean scientists to collect high-value data efficiently and effectively. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximizing the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher-resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
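The abstract states only that the planner blends information gain against expected deviation from the path; a minimal sketch of such a blended cost, with all names and the weighting scheme assumed for illustration:

```python
import numpy as np

def path_cost(path_cells, info_map, predicted_deviation, alpha=0.5):
    """Blended cost for one candidate closed-circuit glider path.

    path_cells         : indices of grid cells the candidate path visits
    info_map           : information value of each grid cell (array)
    predicted_deviation: expected off-track error per cell under the
                         forecast ocean currents (same scale, normalised)
    alpha              : trade-off weight between the two competing terms

    Lower cost = more information gathered with less expected deviation.
    """
    info = info_map[path_cells].sum()
    deviation = predicted_deviation[path_cells].sum()
    return -alpha * info + (1.0 - alpha) * deviation

# A planner would score many candidate circuits and keep the cheapest:
#   best = min(candidate_paths, key=lambda p: path_cost(p, info, dev))
```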
Abstract:
The temporal variations in CO2, CH4 and N2O fluxes were measured over two consecutive years, from February 2007 to March 2009, from a subtropical rainforest in south-eastern Queensland, Australia, using an automated sampling system. A concurrent study using an additional 30 manual chambers examined the spatial variability of emissions across three nearby remnant rainforest sites with similar vegetation and climatic conditions. Interannual variation in fluxes of all gases over the 2 years was minimal, despite large discrepancies in rainfall, whereas a pronounced seasonal variation could be observed only for CO2 fluxes. High infiltration, drainage and subsequent high soil aeration under the rainforest limited N2O loss while promoting substantial CH4 uptake. The average annual N2O loss of 0.5 ± 0.1 kg N2O-N ha⁻¹ over the 2-year measurement period was at the lower end of reported fluxes from rainforest soils. The rainforest soil functioned as a sink for atmospheric CH4 throughout the entire 2-year period, despite periods of substantial rainfall. A clear linear correlation between soil moisture and CH4 uptake was found. Rates of uptake ranged from greater than 15 g CH4-C ha⁻¹ day⁻¹ during extended dry periods to less than 2–5 g CH4-C ha⁻¹ day⁻¹ when soil water content was high. The calculated annual CH4 uptake at the site was 3.65 kg CH4-C ha⁻¹ yr⁻¹. This is amongst the highest reported for rainforest systems, reiterating the ability of aerated subtropical rainforests to act as substantial sinks of CH4. The spatial study showed N2O fluxes almost eight times higher, and CH4 uptake reduced by over one-third, as the clay content of the rainforest soil increased from 12% to more than 23%. This demonstrates that for some rainforest ecosystems, soil texture and the related water infiltration and drainage capacity may play a more important role in controlling fluxes than either vegetation or seasonal variability.
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results, but the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods, including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. On average, this method detected 62% of total species after 120 one-minute samples were analysed, compared to 34% of total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means of analysing large volumes of acoustic sensor data efficiently and accurately.
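The winning strategy reported above is simple enough to sketch directly; minute indexing and function names here are illustrative, not the study's code:

```python
import random

def draw_dawn_samples(dawn_minute, n_samples=120, window_minutes=180, seed=None):
    """Randomly select one-minute samples from the three hours after dawn,
    the strategy the study found most effective (120 one-minute samples).

    dawn_minute : minute-of-day index at which dawn occurs for the site
    Returns sorted minute indices to hand to an analyst for listening.
    """
    rng = random.Random(seed)
    window = range(dawn_minute, dawn_minute + window_minutes)
    return sorted(rng.sample(window, n_samples))

# e.g. dawn at 05:30 -> minute 330 of the day
print(draw_dawn_samples(330, seed=1)[:10])
```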
Abstract:
Sediment samples were taken from six sampling sites in Bramble Bay, Queensland, Australia between February and November 2012. They were analysed for a range of heavy metals, including Al, Fe, Mn, Ti, Ce, Th, U, V, Cr, Co, Ni, Cu, Zn, As, Cd, Sb, Te, Hg, Tl and Pb. Fraction analysis, enrichment factors and Principal Component Analysis – Absolute Principal Component Scores (PCA-APCS) were carried out to assess metal pollution, potential bioavailability and source apportionment. Cr and Ni exceeded the Australian Interim Sediment Quality Guidelines at some sampling sites, while Hg was found to be the most enriched metal. Fraction analysis identified increased weak-acid-soluble Hg and Cd during the sampling period. Source apportionment via PCA-APCS found four sources of metal pollution, namely marine sediments, shipping, antifouling coatings and a mixed source. These sources need to be considered in any metal pollution control measures within Bramble Bay.
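PCA-APCS as named above follows a standard recipe: standardise the concentration matrix, extract component scores, subtract the scores of an artificial zero-concentration sample to obtain absolute scores, then regress concentrations on those scores. A compact sketch under those assumptions (the study's preprocessing and any factor rotation may differ):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pca_apcs(conc, n_sources):
    """PCA-APCS source apportionment sketch.

    conc : (n_samples, n_metals) concentration matrix
    Returns regression coefficients estimating each source's
    contribution to each metal, plus the unexplained intercepts.
    """
    mean, std = conc.mean(axis=0), conc.std(axis=0)
    z = (conc - mean) / std                      # standardise
    pca = PCA(n_components=n_sources)
    scores = pca.fit_transform(z)                # factor scores
    z0 = (-mean / std).reshape(1, -1)            # artificial zero-concentration sample
    apcs = scores - pca.transform(z0)            # absolute principal component scores
    # Regress each metal's concentration on the APCS; the coefficients
    # apportion that metal across the identified sources.
    reg = LinearRegression().fit(apcs, conc)
    return reg.coef_, reg.intercept_
```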
Abstract:
Giant Cell Arteritis (GCA) is the most common vasculitis affecting the elderly. Archived formalin-fixed paraffin-embedded (FFPE) temporal artery biopsy (TAB) specimens potentially represent a valuable resource for large-scale genetic analysis of this disease. FFPE TAB samples were obtained from 12 patients with GCA. Extracted TAB DNA was assessed by real-time PCR before restoration using the Illumina HD FFPE Restore Kit. Paired FFPE-blood samples were genotyped on the Illumina OmniExpress FFPE microarray. The FFPE samples that passed stringent quality control measures had a mean genotyping success rate of >97%. When compared with their matching peripheral blood DNA, the mean discordance rates for heterozygote and homozygote single nucleotide polymorphism calls were 0.0028 and 0.0003, respectively, which is within the accepted tolerance of reproducibility. This work demonstrates that it is possible to obtain high-quality microarray-based genotypes from FFPE TAB samples and that these data are similar to those obtained from peripheral blood.
Abstract:
Bird species richness surveys are one of the most intriguing ecological topics for evaluating environmental health. Here, bird species richness denotes the number of unique bird species in a particular area. Factors affecting the investigation of bird species richness include weather, observation bias and, most importantly, the prohibitive cost of conducting surveys at large spatiotemporal scales. Thanks to advances in recording techniques, these problems have been alleviated by deploying sensors for acoustic data collection. Although automated detection techniques have been introduced to identify various bird species, the innate complexity of bird vocalizations, the background noise present in recordings and the escalating volume of acoustic data make determining bird species richness challenging. In this paper we propose a two-step computer-assisted sampling approach for determining bird species richness in one day of acoustic data. First, a classification model built on acoustic indices filters out minutes that contain few bird species. Then the remaining minutes are ranked by an acoustic index and temporally redundant minutes are removed from the ranked sequence. The experimental results show that our method is more efficient at directing experts towards bird species than previous methods.
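A minimal sketch of that two-step selection (classifier interface, feature layout and the redundancy rule are all assumptions for illustration, not the paper's code):

```python
import numpy as np

def select_minutes(indices, classifier, rank_index, min_gap=10):
    """Two-step selection of one-minute clips for expert review.

    indices    : (n_minutes, n_features) acoustic indices per one-minute clip
    classifier : fitted model predicting 1 if a minute likely contains birds
    rank_index : column of `indices` used to rank the kept minutes
    min_gap    : drop minutes within this many minutes of an already
                 selected one, removing temporally redundant samples
    """
    keep = np.flatnonzero(classifier.predict(indices) == 1)  # step 1: filter
    ranked = keep[np.argsort(-indices[keep, rank_index])]    # step 2: rank
    selected = []
    for m in ranked:                                         # redundancy removal
        if all(abs(m - s) > min_gap for s in selected):
            selected.append(m)
    return selected
```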
Abstract:
We consider estimating the total pollutant load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
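A sketch of a rating-curve regression in the spirit described above, with the extra hydrograph predictors (all variable names and the exact functional forms are illustrative; the paper's model differs in detail):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_rating_curve(flow, conc_obs_idx, conc, rising, discounted_flow):
    """Generalized rating-curve sketch for predicting concentration.

    flow            : gauged flow for every time step
    conc_obs_idx    : time steps at which concentration was sampled
    conc            : observed concentrations at those steps
    rising          : 1 if a sample sits on the rising limb of the hydrograph
    discounted_flow : exponentially discounted past flow, a proxy for
                      constituent exhaustion during flood events
    """
    X = np.column_stack([
        np.log(flow[conc_obs_idx]),              # classic rating-curve term
        rising,                                  # hydrograph phase
        np.log1p(discounted_flow[conc_obs_idx]),
    ])
    return LinearRegression().fit(X, np.log(conc))

# Load = sum over all time steps of predicted concentration x flow x dt,
# with predictions back-transformed from the log scale (plus a bias correction).
```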
Abstract:
Quantifying fluxes of nitrous oxide (N2O), a potent greenhouse gas, from soils is necessary to improve our knowledge of terrestrial N2O losses. Developing universal sampling frequencies for calculating annual N2O fluxes is difficult, as fluxes are renowned for their high temporal variability. We demonstrate that daily sampling was largely required to achieve annual N2O fluxes within 10% of the best estimate for 28 annual datasets collected from three continents: Australia, Europe and Asia. Decreasing the regularity of measurements either under- or overestimated annual N2O fluxes, with a maximum overestimation of 935%. Measurement frequency could be lowered using a sampling strategy based on environmental factors known to affect temporal variability, but sampling more than once a week was still required. Consequently, uncertainty in current global terrestrial N2O budgets associated with the upscaling of field-based datasets can be decreased significantly by using adequate sampling frequencies.
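The comparison described above can be sketched as a simple subsampling experiment: thin a daily record to a coarser interval, rebuild the annual flux, and measure the deviation from the best estimate (a simplified illustration using linear interpolation, not the study's exact procedure):

```python
import numpy as np

def annual_flux_error(daily_flux, interval_days):
    """Percentage error in the annual N2O flux when sampling every
    `interval_days` days instead of daily.

    daily_flux : array of 365 daily flux measurements (the best estimate).
    """
    days = np.arange(len(daily_flux))
    sampled = days[::interval_days]
    # Interpolate between sampling days, then integrate over the year.
    annual_est = np.interp(days, sampled, daily_flux[sampled]).sum()
    best = daily_flux.sum()
    return 100.0 * (annual_est - best) / best
```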
Abstract:
Accurately quantifying total greenhouse gas emissions (e.g. methane) from natural systems such as lakes, reservoirs and wetlands requires spatio-temporal measurement of both diffusive and ebullitive (bubbling) emissions. Traditional manual measurement techniques provide only a limited, localised assessment of methane flux, often introducing significant errors when extrapolated to the whole system. In this paper, we directly address these sampling limitations and present a novel multiple robotic boat system configured to measure the spatio-temporal release of methane to atmosphere across inland waterways. The system, consisting of multiple networked Autonomous Surface Vehicles (ASVs) and capable of persistent operation, enables scientists to remotely evaluate the performance of sampling and modelling algorithms for real-world process quantification over extended periods of time. This paper provides an overview of the multi-robot sampling system, including the vehicle and gas sampling unit design. Experimental results demonstrate the system’s ability to autonomously navigate and implement an exploratory sampling algorithm to measure methane emissions on two inland reservoirs.