790 results for Datasets


Relevance:

10.00%

Publisher:

Abstract:

Background: The electroencephalogram (EEG) may be described by a large number of different feature types, and automated feature selection methods are needed in order to reliably identify features which correlate with continuous independent variables. New method: A method is presented for the automated identification of features that differentiate two or more groups in neurological datasets, based upon a spectral decomposition of the feature set. Furthermore, the method is able to identify features that relate to continuous independent variables. Results: The proposed method is first evaluated on synthetic EEG datasets and observed to reliably identify the correct features. The method is then applied to EEG recorded during a music listening task and is observed to automatically identify neural correlates of music tempo changes similar to neural correlates identified in a previous study. Finally, the method is applied to identify neural correlates of music-induced affective states. The identified neural correlates reside primarily over the frontal cortex and are consistent with widely reported neural correlates of emotions. Comparison with existing methods: The proposed method is compared to the state-of-the-art methods of canonical correlation analysis and common spatial patterns in order to identify features differentiating synthetic event-related potentials of different amplitudes, and is observed to exhibit greater performance as the number of unique groups in the dataset increases. Conclusions: The proposed method is able to identify neural correlates of continuous variables in EEG datasets and is shown to outperform canonical correlation analysis and common spatial patterns.
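The core goal described here, picking out the features that track a continuous variable such as music tempo, can be illustrated with a toy sketch. This is a plain correlation ranking on synthetic data, not the paper's spectral-decomposition method; all names and numbers are invented.

```python
import numpy as np

# Synthetic setup: 200 trials, 10 candidate EEG features, one continuous
# independent variable (tempo). Feature 3 is constructed to track tempo.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 10
tempo = rng.uniform(60, 180, n_trials)           # continuous variable per trial
features = rng.normal(size=(n_trials, n_features))
features[:, 3] += 0.05 * tempo                   # inject a tempo-related feature

# Rank features by the absolute Pearson correlation with the tempo variable
corrs = np.array([np.corrcoef(features[:, j], tempo)[0, 1]
                  for j in range(n_features)])
best = int(np.argmax(np.abs(corrs)))
print(best)  # index of the feature most correlated with tempo
```

On this toy data the injected feature is recovered reliably; the paper's contribution is doing this robustly across groups and feature sets where simple pairwise correlation is insufficient.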

Studies that use prolonged periods of sensory stimulation report associations between regional reductions in neural activity and negative blood oxygenation level-dependent (BOLD) signaling. However, the neural generators of the negative BOLD response remain to be characterized. Here, we use single-impulse electrical stimulation of the whisker pad in the anesthetized rat to identify components of the neural response that are related to "negative" hemodynamic changes in the brain. Laminar multiunit activity and local field potential recordings of neural activity were performed concurrently with two-dimensional optical imaging spectroscopy measuring hemodynamic changes. Repeated measurements over multiple stimulation trials revealed significant variations in neural responses across session and animal datasets. Within this variation, we found robust long-latency decreases (between 300 and 2000 ms after stimulus presentation) in gamma-band power (30–80 Hz) in the middle-superficial cortical layers in regions surrounding the activated whisker barrel cortex. This reduction in gamma frequency activity was associated with corresponding decreases in the hemodynamic responses that drive the negative BOLD signal. These findings suggest a close relationship between BOLD responses and neural events that operate over time scales that outlast the initiating sensory stimulus, and provide important insights into the neurophysiological basis of negative neuroimaging signals.

For general home monitoring, a system should automatically interpret people's actions. The system should be non-intrusive, and able to deal with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been applied to human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when adding depth to the conventional intensity data, and has advantages when dealing with long actions.
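The temporal-binning idea can be sketched as follows. This is a toy illustration of the general technique (a plain BoW histogram discards frame order; histogramming each temporal bin separately and concatenating preserves coarse temporal structure), not the authors' implementation; the codeword sequence and sizes are made up.

```python
import numpy as np

def binned_bow(codewords, vocab_size, n_bins):
    """Concatenate per-bin BoW histograms over contiguous temporal bins."""
    codewords = np.asarray(codewords)
    bins = np.array_split(codewords, n_bins)       # contiguous temporal chunks
    hists = [np.bincount(b, minlength=vocab_size) for b in bins]
    return np.concatenate(hists)

# Toy video: 8 frames quantised to a 3-word vocabulary, split into 2 bins.
video = [0, 0, 1, 0, 2, 2, 1, 2]
h = binned_bow(video, vocab_size=3, n_bins=2)
print(h)  # first-half counts followed by second-half counts
```

With a single bin this reduces to the traditional order-free BoW histogram; increasing `n_bins` trades invariance to timing for temporal discrimination.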

In the last decade, several studies have presented formulations for the auto-calibration problem. Most of these rely on the evaluation of vanishing points to extract the camera parameters. Normally, vanishing points are evaluated using pedestrians or the Manhattan World assumption, i.e. it is assumed that the scene is necessarily composed of orthogonal planar surfaces. In this work, we present a robust framework for auto-calibration, with improved results and generalisability for real-life situations. This framework is capable of handling problems such as occlusions and the presence of unexpected objects in the scene. In our tests, we compare our formulation with the state-of-the-art in auto-calibration using pedestrians and Manhattan World-based assumptions. This paper reports on the experiments conducted using publicly available datasets; the results have shown that our formulation represents an improvement over the state-of-the-art.

Background Somatic embryogenesis (SE) in plants is a process by which embryos are generated directly from somatic cells, rather than from the fused products of male and female gametes. Despite the detailed expression analysis of several somatic-to-embryonic marker genes, a comprehensive understanding of SE at a molecular level is still lacking. The present study was designed to generate high resolution transcriptome datasets for early SE, paving the way for future research to understand the underlying molecular mechanisms that regulate this process. We sequenced Arabidopsis thaliana somatic embryos collected from three distinct developmental time-points (5, 10 and 15 d after in vitro culture) using the Illumina HiSeq 2000 platform. Results This study yielded a total of 426,001,826 sequence reads mapped to 26,520 genes in the A. thaliana reference genome. Analysis of embryonic cultures after 5 and 10 d showed differential expression of 1,195 genes; these included 778 genes that were more highly expressed after 5 d as compared to 10 d. Moreover, 1,718 genes were differentially expressed in embryonic cultures between 10 and 15 d. Our data also showed at least eight different expression patterns during early SE; the majority of genes are transcriptionally more active in embryos after 5 d. Comparison of transcriptomes derived from somatic embryos and leaf tissues revealed that at least 4,951 genes are transcriptionally more active in embryos than in the leaf; increased expression of genes involved in DNA cytosine methylation and histone deacetylation was noted in embryogenic tissues. In silico expression analysis based on microarray data found that approximately 5% of these genes are transcriptionally more active in somatic embryos than in actively dividing callus and non-dividing leaf tissues. Moreover, this analysis identified 49 genes expressed at a higher level in somatic embryos than in other tissues.
This included several genes with unknown function, as well as others related to oxidative and osmotic stress, and auxin signalling. Conclusions The transcriptome information provided here will form the foundation for future research on genetic and epigenetic control of plant embryogenesis at a molecular level. In follow-up studies, these data could be used to construct a regulatory network for SE; the genes more highly expressed in somatic embryos than in vegetative tissues can be considered as potential candidates to validate these networks.

Hydration-dependent DNA deformation has been known since Rosalind Franklin recognised that the relative humidity of the sample had to be maintained to observe a single conformation in DNA fibre diffraction. We now report for the first time the crystal structure, at the atomic level, of a dehydrated form of a DNA duplex and demonstrate the reversible interconversion to the hydrated form at room temperature. This system, containing d(TCGGCGCCGA) in the presence of Λ-[Ru(TAP)2(dppz)]2+ (TAP = 1,4,5,8-tetraazaphenanthrene, dppz = dipyridophenazine), undergoes a partial transition from an A/B hybrid to the A-DNA conformation, at 84-79% relative humidity. This is accompanied by an increase in kink at the central step from 22° to 51°, with a large movement of the terminal bases forming the intercalation site. This transition is reversible on rehydration. Seven datasets, collected from one crystal at room temperature, show the consequences of dehydration at near-atomic resolution. This result highlights that crystals, traditionally thought of as static systems, are still dynamic and therefore can be the subject of further experimentation.

Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting from eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions.
The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol-type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that all these observations are mostly consistent for all five analyses (global land, global coastal, three regional), which can be understood well, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low and high absorption fine mode, sea salt and dust).

Extreme rainfall events continue to be one of the largest natural hazards in the UK. In winter, heavy precipitation and floods have been linked with intense moisture transport events associated with atmospheric rivers (ARs), yet no large-scale atmospheric precursors have been linked to summer flooding in the UK. This study investigates the link between ARs and extreme rainfall from two perspectives: 1) Given an extreme rainfall event, is there an associated AR? 2) Given an AR, is there an associated extreme rainfall event? We identify extreme rainfall events using the UK Met Office daily rain-gauge dataset and link these to ARs using two atmospheric datasets of different horizontal resolution (ERA-Interim and the 20th Century Reanalysis). The results show that less than 35% of winter ARs and less than 15% of summer ARs are associated with an extreme rainfall event. Consistent with previous studies, at least 50% of extreme winter rainfall events are associated with an AR. However, less than 20% of the identified summer extreme rainfall events are associated with an AR. The dependence of the water vapor transport intensity threshold used to define an AR on the years included in the study, and on the length of the season, is also examined. Including a longer period (1900-2012) compared to previous studies (1979-2005) reduces the water vapor transport intensity threshold used to define an AR.
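The two conditional questions the study asks can be made concrete with a toy calculation on sets of event days. The dates below are invented for illustration; the study itself uses the Met Office rain-gauge dataset and reanalysis-derived AR catalogues.

```python
# Toy catalogues of event days (hypothetical dates, not real data)
ar_days = {"1990-01-03", "1990-01-07", "1990-02-01", "1990-02-14"}
extreme_days = {"1990-01-03", "1990-02-01", "1990-03-09"}

# 1) Given an extreme rainfall event, is there an associated AR?
frac_extreme_with_ar = len(extreme_days & ar_days) / len(extreme_days)

# 2) Given an AR, is there an associated extreme rainfall event?
frac_ar_with_extreme = len(ar_days & extreme_days) / len(ar_days)

print(frac_extreme_with_ar, frac_ar_with_extreme)
```

The two fractions are generally different because the denominators differ, which is exactly why the abstract reports the association in both directions.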

Current methods for initialising coupled atmosphere-ocean forecasts often rely on the use of separate atmosphere and ocean analyses, the combination of which can leave the coupled system imbalanced at the beginning of the forecast, potentially accelerating the development of errors. Using a series of experiments with the European Centre for Medium-Range Weather Forecasts coupled system, the magnitude and extent of these so-called initialisation shocks is quantified, and their impact on forecast skill measured. It is found that forecasts initialised by separate ocean and atmospheric analyses do exhibit initialisation shocks in lower atmospheric temperature, when compared to forecasts initialised using a coupled data assimilation method. These shocks result in as much as a doubling of root-mean-square error on the first day of the forecast in some regions, and in increases that are sustained for the duration of the 10-day forecasts performed here. However, the impacts of this choice of initialisation on forecast skill, assessed using independent datasets, were found to be negligible, at least over the limited period studied. Larger initialisation shocks are found to follow a change in either the atmospheric or ocean model component between the analysis and forecast phases: changes in the ocean component can lead to sea surface temperature shocks of more than 0.5K in some equatorial regions during the first day of the forecast. Implications for the development of coupled forecast systems, particularly with respect to coupled data assimilation methods, are discussed.
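The root-mean-square error metric used to quantify such shocks can be sketched on toy numbers. All values below are invented; the study itself verifies ECMWF forecast fields against independent analyses.

```python
import numpy as np

def rmse(forecast, truth):
    """Root-mean-square error between a forecast and a verifying truth."""
    forecast, truth = np.asarray(forecast), np.asarray(truth)
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

# Hypothetical day-1 temperatures (degrees C) at four grid points
truth    = np.array([15.0, 15.2, 15.1, 14.9])
coupled  = np.array([15.1, 15.3, 15.0, 14.8])   # coupled-DA initialisation
separate = np.array([15.2, 15.4, 14.9, 14.7])   # separate atmosphere/ocean analyses

print(rmse(coupled, truth), rmse(separate, truth))
```

In this contrived example the separately initialised forecast has twice the RMSE of the coupled one, mirroring the "doubling of root-mean-square error" the abstract reports for some regions.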

Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and Web 2.0 technologies has become more affordable. People are becoming more interested in, and more reliant on, social networks for information, news and the opinions of other users on diverse subject matters. This heavy reliance on social network sites causes them to generate massive data characterised by three computational issues, namely size, noise and dynamism. These issues often make social network data very complex to analyse manually, resulting in the pertinent use of computational means of analysing them. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets, such as trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses the different data mining techniques used in mining diverse aspects of social networks over the decades, going from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, together with the tools employed and the names of their authors.

To effectively prevent the onset of and reduce mortality from noncommunicable diseases, we must consider every individual as metabolically unique to allow for personalized management to take place. Diet and gut microbiota are major components of the exposome that interact together with a genetic make-up in a complex interplay to result in an individual's metabolic phenotype. In this context, foodomics approaches (such as nutrigenetics, nutrimetabolomics, nutritranscriptomics, nutriproteomics and metagenomics) are essential tools to assess an individual's optimal metabolic space. These have recently been applied to large human cohorts to identify specific gene-metabolite, diet-metabolite and gene–diet interactions. As the gut microbiota is a key player in metabolic homeostasis, we suggest following a holistic investigation of metagenome–hyperbolome–diet interactions, the findings of which will provide the basis for developing personalized nutrition and personalized functional foods. However, examining these three-way interactions will only be possible once the challenge of integrating large datasets is overcome.

Aim Most vascular plants on Earth form mycorrhizae, a symbiotic relationship between plants and fungi. Despite the broad recognition of the importance of mycorrhizae for global carbon and nutrient cycling, we do not know how soil and climate variables relate to the intensity of colonization of plant roots by mycorrhizal fungi. Here we quantify the global patterns of these relationships. Location Global. Methods Data on plant root colonization intensities by the two dominant types of mycorrhizal fungi world-wide, arbuscular (4887 plant species in 233 sites) and ectomycorrhizal fungi (125 plant species in 92 sites), were compiled from published studies. Data for climatic and soil factors were extracted from global datasets. For a given mycorrhizal type, we calculated at each site the mean root colonization intensity by mycorrhizal fungi across all potentially mycorrhizal plant species found at the site, and subjected these data to generalized additive model regression analysis with environmental factors as predictor variables. Results We show for the first time that at the global scale the intensity of plant root colonization by arbuscular mycorrhizal fungi strongly relates to warm-season temperature, frost periods and soil carbon-to-nitrogen ratio, and is highest at sites featuring continental climates with mild summers and a high availability of soil nitrogen. In contrast, the intensity of ectomycorrhizal infection in plant roots is related to soil acidity, soil carbon-to-nitrogen ratio and seasonality of precipitation, and is highest at sites with acidic soils and relatively constant precipitation levels. Main conclusions We provide the first quantitative global maps of intensity of mycorrhizal colonization based on environmental drivers, and suggest that environmental changes will affect distinct types of mycorrhizae differently. 
Future analyses of the potential effects of environmental change on global carbon and nutrient cycling via mycorrhizal pathways will need to take into account the relationships discovered in this study.

BACKGROUND: Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. OBJECTIVES: The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to determine the comparison. METHODS: Data from four DHSNs, the Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC), were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare datasets is described. RESULTS: All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993, P<.001). CONCLUSIONS: This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow properties of power laws. The structure of DHSNs is important as it gives insight to researchers and managers into the nature and mechanisms of network functionality.
The 5-step process undertaken to compare actor contribution patterns can be replicated in networks that are managed by other organizations, and we conjecture that patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
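The fitting step, adding a power trend line in log-log space and reporting how much variance it explains, can be sketched on synthetic counts. This illustrates the technique only; the counts are generated to follow an exact power law and are not the study's data.

```python
import numpy as np

# Actors ordered by activity; toy per-actor post counts drawn from a power law
rank = np.arange(1, 21, dtype=float)
posts = 1000.0 * rank ** -1.5

# freq = C * rank^(-alpha)  <=>  log(freq) = log(C) - alpha * log(rank),
# so a straight-line fit in log-log space recovers the exponent alpha.
slope, intercept = np.polyfit(np.log(rank), np.log(posts), 1)
alpha = -slope

# R^2 of the log-log fit: fraction of variance explained by actor rank
pred = intercept + slope * np.log(rank)
ss_res = np.sum((np.log(posts) - pred) ** 2)
ss_tot = np.sum((np.log(posts) - np.log(posts).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(round(alpha, 3), round(r2, 3))  # -> 1.5 1.0
```

Real post-count data are noisy, so R² falls below 1 (the study reports .95 to .975 across the four networks); a rank correlation such as Spearman's rho provides a complementary, distribution-free check.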

The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in far-infrared over visible domains. The experiments show that the feature families intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains, are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.