850 results for High-dimensional data visualization


Relevance:

100.00%

Publisher:

Abstract:

The Asian summer monsoon is a high-dimensional and highly nonlinear phenomenon involving considerable moisture transport towards land from the ocean, and is critical for the whole region. We have used daily ECMWF reanalysis (ERA-40) sea-level pressure (SLP) anomalies relative to the seasonal cycle, over the region 50-145°E, 20°S-35°N, to study the nonlinearity of the Asian monsoon using Isomap. We have focused on the two-dimensional embedding of the SLP anomalies for ease of interpretation. Unlike the unimodality obtained from tests performed in empirical orthogonal function space, the probability density function within the two-dimensional Isomap space turns out to be bimodal. However, a clustering procedure applied to the SLP data reveals support for three clusters, which are identified using a three-component bivariate Gaussian mixture model. The modes appear similar to the active and break phases of the monsoon over South Asia, in addition to a third phase showing active conditions over the western North Pacific. Using the low-level wind field anomalies, the active phase over South Asia is found to be characterised by a strengthening and eastward extension of the Somali jet, whereas during the break phase the Somali jet weakens near southern India and the monsoon trough in northern India also weakens. Interpretation is aided by the APHRODITE gridded land precipitation product for monsoon Asia. The effect of the large-scale seasonal mean monsoon and lower boundary forcing, in the form of ENSO, is also investigated and discussed. The outcome is that ENSO is shown to perturb the intraseasonal regimes, in agreement with conceptual ideas.
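
The workflow described above can be sketched with standard tools. The snippet below is a minimal illustration, not the authors' code: it assumes the daily SLP anomaly maps are available as a (days x gridpoints) array (random placeholder data here) and combines scikit-learn's Isomap for the two-dimensional embedding with a three-component Gaussian mixture for the regime clustering; the neighbourhood size is an arbitrary choice.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
slp_anom = rng.standard_normal((1000, 500))   # placeholder for daily SLP anomaly maps

# Two-dimensional Isomap embedding of the daily anomaly fields
embedding = Isomap(n_neighbors=20, n_components=2).fit_transform(slp_anom)

# Three-component bivariate Gaussian mixture fitted in the embedding space
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
regime = gmm.fit_predict(embedding)           # cluster (candidate monsoon regime) per day
print(gmm.means_)                             # centres of the three candidate regimes
```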

Relevance:

100.00%

Publisher:

Abstract:

JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflow and to support the curation of data products in the STFC Centre for Environmental Data Archival (CEDA). The initial JASMIN configuration and first experiences are reported here. Useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute capacity, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.

Relevance:

100.00%

Publisher:

Abstract:

Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make it usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This opens up the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both of those schemes become degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
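
For readers unfamiliar with the baseline the paper improves on, the sketch below shows a standard SIR particle filter applied to the Lorenz (1963) model; it is not the proposal-density scheme introduced in the paper, and the step size, model-error and observation-error values are illustrative assumptions.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) model for an ensemble (N, 3)."""
    dx = np.empty_like(x)
    dx[:, 0] = sigma * (x[:, 1] - x[:, 0])
    dx[:, 1] = x[:, 0] * (rho - x[:, 2]) - x[:, 1]
    dx[:, 2] = x[:, 0] * x[:, 1] - beta * x[:, 2]
    return x + dt * dx

def sir_step(particles, obs, obs_std=1.0, rng=np.random.default_rng(0)):
    """One cycle of the standard SIR filter, observing only the x-component."""
    particles = lorenz63_step(particles) + 0.05 * rng.standard_normal(particles.shape)
    log_w = -0.5 * ((particles[:, 0] - obs) / obs_std) ** 2     # Gaussian likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                                # importance weights
    idx = rng.choice(len(particles), size=len(particles), p=w)  # multinomial resampling
    return particles[idx]

# Toy usage: 100 particles, one assimilation step with observation y = 1.2
ensemble = np.random.default_rng(1).standard_normal((100, 3)) + [1.0, 1.0, 20.0]
ensemble = sir_step(ensemble, obs=1.2)
```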

Relevance:

100.00%

Publisher:

Abstract:

Advances in hardware technologies make it possible to capture and process data in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) develops data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if applications require little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. Scalability problems of traditional data mining algorithms for static (non-streaming) data sets have been addressed for Big Data through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
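
The paper's parallel methodology is not reproduced here, but the sketch below illustrates the underlying idea of an adaptive KNN stream classifier: a sliding window of recent labelled instances is the model, so adaptation amounts to window maintenance, and the distance computations are the natural unit to parallelise. Class names and parameters are illustrative assumptions.

```python
from collections import deque
import numpy as np

class SlidingWindowKNN:
    """Window-based KNN stream classifier: the model adapts to concept drift
    simply by keeping only the most recent labelled instances."""

    def __init__(self, k=5, window_size=1000):
        self.k = k
        self.window = deque(maxlen=window_size)   # (features, label) pairs

    def learn_one(self, x, y):
        self.window.append((np.asarray(x, dtype=float), y))

    def predict_one(self, x):
        # Assumes the window already holds at least k labelled instances.
        x = np.asarray(x, dtype=float)
        nearest = sorted(self.window, key=lambda item: np.linalg.norm(item[0] - x))[: self.k]
        votes = [label for _, label in nearest]
        return max(set(votes), key=votes.count)   # majority vote of the k neighbours

# Toy usage on a two-class stream
clf = SlidingWindowKNN(k=3, window_size=200)
for i in range(50):
    clf.learn_one([i, i % 7], i % 2)
print(clf.predict_one([10, 3]))
```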

Relevance:

100.00%

Publisher:

Abstract:

The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [the posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various mechanisms they ensure initial conditions that are predominantly in linear balance, so that spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that lead to extra terms being added to the model equations, which means the advantage of the traditional data assimilation schemes in generating predominantly balanced initial conditions is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, then each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for use in real high-dimensional geophysical applications.
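
As an illustration of the relaxation term discussed in conclusion (ii), the sketch below shows a generic nudging-type proposal in which each particle's forecast is pulled towards the observations. The gain, the covariances and the persistence "model" in the toy usage are assumptions chosen for illustration, not the exact operators used in the paper.

```python
import numpy as np

def relaxed_proposal(x_prev, obs, H, Q, R, tau, model_step, rng):
    """Generic nudging-type proposal: the particle forecast is pulled towards the
    observations with strength tau in [0, 1]; Q and R are the model-error and
    observation-error covariances. Illustrative form only."""
    x_f = model_step(x_prev)                          # deterministic model forecast
    K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)      # Kalman-like gain
    noise = rng.multivariate_normal(np.zeros(x_f.size), Q)   # stochastic model error
    return x_f + tau * (K @ (obs - H @ x_f)) + noise

# Toy usage: three state variables, one observed component, persistence model
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0, 0.0]])
x_new = relaxed_proposal(np.zeros(3), np.array([0.5]), H,
                         Q=0.1 * np.eye(3), R=np.array([[0.2]]),
                         tau=0.5, model_step=lambda x: x, rng=rng)
```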

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the use of a particle filter for data assimilation with a full-scale coupled ocean–atmosphere general circulation model. Synthetic twin experiments are performed to assess the performance of the equivalent-weights filter in such a high-dimensional system. Artificial two-dimensional sea surface temperature fields are used as observational data every day. Results are presented for different values of the free parameters in the method. Filter performance is measured using root-mean-square errors, trajectories of individual model variables, and rank histograms. Filter degeneracy is not observed, and the performance of the filter is shown to depend on the ability to keep maximum spread in the ensemble.
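
The two quantitative performance measures mentioned, root-mean-square error of the ensemble mean and rank histograms, can be sketched as follows; the array shapes and toy data are assumptions for illustration.

```python
import numpy as np

def rmse(ensemble_mean, truth):
    """Root-mean-square error of the ensemble mean against the synthetic truth."""
    return float(np.sqrt(np.mean((ensemble_mean - truth) ** 2)))

def rank_histogram(ensemble, truth):
    """Rank of the truth within the sorted ensemble at each point; a flat
    histogram over many points/times indicates a reliable ensemble spread."""
    ranks = np.sum(ensemble < truth[None, :], axis=0)          # ensemble: (members, points)
    return np.bincount(ranks, minlength=ensemble.shape[0] + 1)

# Toy usage: 20 members, 1000 points
rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 1000))
truth = rng.standard_normal(1000)
print(rmse(ens.mean(axis=0), truth), rank_histogram(ens, truth))
```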

Relevance:

100.00%

Publisher:

Abstract:

1. Bee populations and other pollinators face multiple, synergistically acting threats, which have led to population declines, loss of local species richness and pollination services, and extinctions. However, our understanding of the degree, distribution and causes of declines is patchy, in part due to inadequate monitoring systems, with the challenge of taxonomic identification posing a major logistical barrier. Pollinator conservation would benefit from a high-throughput identification pipeline. 2. We show that the metagenomic mining and resequencing of mitochondrial genomes (mitogenomics) can be applied successfully to bulk samples of wild bees. We assembled the mitogenomes of 48 UK bee species and then shotgun-sequenced total DNA extracted from 204 whole bees that had been collected in 10 pan-trap samples from farms in England and identified morphologically to 33 species. Each sample data set was mapped against the 48 reference mitogenomes. 3. The morphological and mitogenomic data sets were highly congruent. Out of 63 total species detections in the morphological data set, the mitogenomic data set made 59 correct detections (93.7% detection rate) and detected six more species (putative false positives). Direct inspection and an analysis with species-specific primers suggested that these putative false positives were most likely due to incorrect morphological IDs. Read frequency significantly predicted species biomass frequency (R2 = 24.9%). Species lists, biomass frequencies, extrapolated species richness and community structure were recovered with less error than in a metabarcoding pipeline. 4. Mitogenomics automates the onerous task of taxonomic identification, even for cryptic species, allowing the tracking of changes in species richness and distributions. A mitogenomic pipeline should thus be able to contain costs, maintain consistently high-quality data over long time series, incorporate retrospective taxonomic revisions and provide an auditable evidence trail. Mitogenomic data sets also provide estimates of species counts within samples and thus have potential for tracking population trajectories.
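
The mapping-based detection step can be caricatured as a simple thresholding of per-species mapped read counts; the rule, the threshold and the example counts below are purely illustrative assumptions and not the filters used in the actual pipeline.

```python
def detect_species(read_counts, min_reads=10):
    """Toy detection rule: a species is recorded as present in a bulk sample when
    at least min_reads shotgun reads map to its reference mitogenome."""
    return {species: n for species, n in read_counts.items() if n >= min_reads}

# Toy usage with hypothetical per-species mapped read counts for one pan-trap sample
print(detect_species({"Bombus terrestris": 5210, "Andrena flavipes": 43, "Lasioglossum sp.": 3}))
```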

Relevance:

100.00%

Publisher:

Abstract:

Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of existing work on subspace clustering, clusters are built from feature information alone, while sample correlations in their original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that simultaneously considers feature information and spatial structure. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can thus capture the global structure and inherent feature information of the data and provides a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
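
The final segmentation step, spectral clustering on the joint affinity matrix, can be sketched as below. The affinity matrix here is random placeholder data, since building it via TLRR and sparse coding is the paper's contribution and is not reproduced.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Random placeholder affinity matrix standing in for the joint spatial/feature
# similarities produced by TLRR and sparse coding (not reproduced here).
rng = np.random.default_rng(0)
n_samples, n_clusters = 200, 4
W = np.abs(rng.standard_normal((n_samples, n_samples)))
W = 0.5 * (W + W.T)                   # affinities must be symmetric and non-negative

# Spectral clustering on the precomputed affinity yields the subspace segmentation
labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(np.bincount(labels))
```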

Relevance:

100.00%

Publisher:

Abstract:

We present the first results of a study investigating the processes that control concentrations and sources of Pb and particulate matter in the atmosphere of Sao Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events and were associated with heavy traffic. Our high-resolution data highlight the impact that a single transient event can have on air quality, even in a megacity. Under meteorological conditions not conducive to pollutant dispersion, Pb and particulate matter accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that emissions from traffic remain an important source of Pb in Sao Paulo City, owing to the large traffic fleet, despite low Pb concentrations in fuels. (C) 2010 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Good data quality with high complexity is often seen as important. Intuition says that the higher the accuracy and complexity of the data, the better the analytic solutions become, provided the increasing computing time can be handled. However, for most practical computational problems, high-complexity data means that computation times become too long or that the heuristics used to solve the problem struggle to reach good solutions. This is stressed even further as the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which finds the optimal locations in a network of p supply points that serve n demand points. To do so, we vary both the accuracy (the number of nodes) of the network and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal locations we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). The optimisation uses a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvements from increasing the accuracy of the road network are much larger. The results also show that the choice of the best network accuracy depends on the complexity of the combinatorial problem (varying p).
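
A minimal sketch of the optimisation set-up is given below: the p-median objective and a plain simulated annealing loop over candidate nodes. The adaptive temperature tuning used in the study is omitted, and the distance matrix, demand weights and cooling schedule are illustrative assumptions.

```python
import numpy as np

def p_median_cost(dist, demand, facilities):
    """p-median objective: total demand-weighted distance from each demand
    point to its nearest open supply point."""
    return float(np.sum(demand * dist[:, facilities].min(axis=1)))

def simulated_annealing(dist, demand, p, n_iter=20000, t0=1.0, cooling=0.9995,
                        rng=np.random.default_rng(0)):
    """Plain simulated annealing over candidate nodes (the adaptive temperature
    tuning used in the study is omitted)."""
    n_candidates = dist.shape[1]
    current = rng.choice(n_candidates, size=p, replace=False)
    cost = p_median_cost(dist, demand, current)
    best, best_cost, temp = current.copy(), cost, t0
    for _ in range(n_iter):
        neighbour = current.copy()
        neighbour[rng.integers(p)] = rng.integers(n_candidates)  # move one supply point
        new_cost = p_median_cost(dist, demand, neighbour)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            current, cost = neighbour, new_cost
            if cost < best_cost:
                best, best_cost = current.copy(), cost
        temp *= cooling
    return best, best_cost

# Toy usage: 200 demand points, 50 candidate supply nodes, p = 5
rng = np.random.default_rng(1)
dist = rng.random((200, 50))          # distance matrix (demand points x candidates)
demand = rng.random(200)              # demand weights
print(simulated_annealing(dist, demand, p=5))
```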

Relevance:

100.00%

Publisher:

Abstract:

Running hydrodynamic models interactively allows both visual exploration and modification of the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example by responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This poses a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, and the implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved by computer graphics algorithms such as surface simplification depending on the current view and the distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
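
One of the graphics techniques mentioned, view-dependent surface simplification, amounts to choosing a coarser cached representation of the mesh for parts of the domain that are far from the camera. The sketch below shows only that selection logic, with arbitrary distance thresholds assumed for illustration.

```python
def select_lod(distance_to_camera, thresholds=(100.0, 500.0, 2000.0)):
    """Illustrative view-dependent level-of-detail choice: nearby mesh tiles are
    drawn at full model resolution, distant tiles from cached, simplified
    (aggregated) versions. Threshold values are arbitrary."""
    for level, limit in enumerate(thresholds):
        if distance_to_camera < limit:
            return level              # 0 = full-resolution mesh, higher = coarser cache
    return len(thresholds)            # beyond the last threshold: coarsest representation

# Toy usage: pick a detail level for three tiles at different distances
print([select_lod(d) for d in (50.0, 800.0, 5000.0)])
```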

Relevance:

100.00%

Publisher:

Abstract:

Taken from Marc Pickren's blog, 13 June 2014.

Relevance:

100.00%

Publisher:

Abstract:

Sound the vuvuzelas, the World Cup is officially here. The biggest sporting event in the world is set to break all kinds of viewing records. Sport in the digital world is just as much about stats as it is about the game itself. Enter Brandwatch. The social media analytics company has taken it upon itself to track social media statistics for the entire run of the World Cup with its new real-time data visualization tool.

Relevance:

100.00%

Publisher:

Abstract:

On the modern continental shelf north of Rio Grande do Norte state (NE Brazil) lies a paleo-valley, submerged during the last glacial sea-level lowstand, that marks the offshore continuation of the most important river of this area (the Açu River). Despite the high level of oil-industry exploration activity, little information is available on the shallow stratigraphy. To help fill this gap within the Neogene record, a marine seismic investigation was carried out, a processing flow for high-resolution seismic data was developed, and the main morphological feature of the study area, the incised valley of the Açu River, was mapped. The shallow seismic data were acquired in conjunction with the laboratory of Marine Geology/Geophysics and Environmental Monitoring (GGEMMA) of the Federal University of Rio Grande do Norte (UFRN), within the SISPLAT project, which targeted the geomorphological structure of the Açu paleo-valley. Geophysical data were collected along longitudinal and transverse sections and then submitted to a processing flow that is little used and rarely addressed in the literature, which yielded results of much higher quality than the raw data. The proposed flow was developed and applied to the X-Star (acoustic sensor) data using the resources available in the program ReflexW 4.5. A surface fluvial architecture was constructed from the bathymetric data and remote sensing imagery, fused and draped over digital elevation models to create three-dimensional (3D) perspective views, which were used to analyse the 3D geometry of geological features and to produce morphologically defined maps. The results are expressed in the analysis of seismic sections extending across the continental shelf and upper slope from the mouth of the Açu River to the shelf edge, allowing the identification and quantification of geometrical features such as depth, thickness, horizons and seismic-stratigraphic units, with emphasis placed on the palaeoenvironmental interpretation of the bounding unconformity and the sedimentary fill of the incised valley, which is controlled by structural elements and marked by the influence of sea-level changes. Interpreting the evolution of this river can provide more precise descriptions and interpretations of the palaeoenvironmental controls influencing incised-valley evolution and preservation, and thus a more comprehensive understanding of this reservoir-analogue system.

Relevance:

100.00%

Publisher:

Abstract:

The increase in the volume of spatial data being collected has motivated the development of geovisualisation techniques, which aim to provide an important resource for knowledge extraction and decision making. One such technique is the 3D graph, which allows a dynamic and flexible analysis of the results produced by spatial data mining algorithms, particularly when several georeferenced objects occur at the same location. The original contribution of this work is the enhancement of the visual resources of a computational environment for spatial data mining; the efficiency of these techniques is then demonstrated on a real database. The application proved very useful for interpreting the results obtained, such as patterns occurring at the same locality, and for supporting activities that can be carried out based on the visualisation of results. © 2013 Springer-Verlag.