925 results for data-types


Relevance: 60.00%

Abstract:

We have recently developed a principled approach to interactive non-linear hierarchical visualization [8] based on the Generative Topographic Mapping (GTM). Hierarchical plots are needed when a single visualization plot is not sufficient (e.g. when dealing with large quantities of data). In this paper we extend our system by giving the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in [8], whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of GTMs is used. The latter is particularly useful when the plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a data set of 2300 18-dimensional points and outline an extension of our system to accommodate discrete data types.
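The MML-driven mode chooses how many child components to create by trading model complexity against goodness of fit. A minimal sketch of that trade-off on clearly bimodal 1-D data (this is not the paper's actual MML formulation: it uses a crude k-means fit in place of GTM training and a simple two-part score in place of the full message length):

```python
import math
import random

random.seed(0)
# Two well-separated 1-D clusters; a message-length-style criterion
# should prefer a two-component model over a single component.
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(8.0, 1.0) for _ in range(100)]

def two_part_score(points, k):
    """Message-length-style score: model cost (grows with k) plus a
    data cost proportional to the negative log-likelihood of the fit."""
    lo, hi = min(points), max(points)
    centres = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(20):  # crude k-means refinement of the centres
        groups = [[] for _ in range(k)]
        for x in points:
            groups[min(range(k), key=lambda i: abs(x - centres[i]))].append(x)
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    sse = sum(min((x - c) ** 2 for c in centres) for x in points)
    var = max(sse / len(points), 1e-9)
    data_cost = 0.5 * len(points) * math.log(var)   # fit term
    model_cost = 1.5 * k * math.log(len(points))    # cost of stating k components
    return data_cost + model_cost

best_k = min((1, 2), key=lambda k: two_part_score(data, k))  # picks 2 here
```

A real MML criterion additionally prices mixing weights and uses the Fisher information of the model, which is what lets it prune spurious components reliably.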

Relevance: 60.00%

Abstract:

SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid-cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground-truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting the maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
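As a minimal illustration of why a texture measure can help where per-pixel spectral classification fails, the sketch below (invented pixel values, not the thesis's segmentation methods) derives a local-variance "texture band" that separates a heterogeneous region from a smooth one even though both have the same mean brightness:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy single-band image: left half heterogeneous ("forest-like"),
# right half smooth; both halves share the same mean value of 100.
img = np.empty((32, 32))
img[:, :16] = 100 + rng.normal(0, 20, (32, 16))   # textured region
img[:, 16:] = 100 + rng.normal(0, 2, (32, 16))    # smooth region

def local_variance(image, win=3):
    """Per-pixel variance over a win x win neighbourhood (edges cropped)."""
    h, w = image.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = image[i:i + win, j:j + win].var()
    return out

tex = local_variance(img)
left_mean = tex[:, :10].mean()    # well inside the textured half
right_mean = tex[:, -10:].mean()  # well inside the smooth half
```

A per-pixel classifier sees near-identical brightness in both halves, while the texture band differs by roughly two orders of magnitude, which is the information a texture-based feature adds to the classification.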

Relevance: 60.00%

Abstract:

We develop and study the concept of dataflow process networks, as used for example by Kahn, to suit exact computation over data types related to real numbers, such as continuous functions and geometrical solids. Furthermore, we consider communicating these exact objects among processes using protocols of a query-answer nature, as introduced in our earlier work. This enables processes to provide valid approximations with certain accuracy, focusing on certain locality, as demanded by the receiving processes through queries. We define domain-theoretical denotational semantics of our networks in two ways: (1) directly, i.e. by viewing the whole network as a composite process and applying the process semantics introduced in our earlier work; and (2) compositionally, i.e. by a fixed-point construction, similar to that used by Kahn, from the denotational semantics of individual processes in the network. The direct semantics closely corresponds to the operational semantics of the network (i.e. it is correct) but is very difficult to study for concrete networks. The compositional semantics enables compositional analysis of concrete networks, assuming it is correct. We prove that the compositional semantics is a safe approximation of the direct semantics. We also provide a method that can be used in many cases to establish that the two semantics fully coincide, i.e. that safety is not achieved through inactivity or meaningless answers. The results are extended to cover recursively-defined infinite networks as well as nested finite networks. A robust prototype implementation of our model is available.
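A minimal sketch of the query-answer idea (an informal illustration, not the paper's formal model): a process representing a real number answers a query for accuracy 2^-k with a rational within that bound, and a composite adder process forwards tighter queries to its operands so that its own answer meets the requested accuracy:

```python
from fractions import Fraction

def sqrt_process(n):
    """Process representing sqrt(n): answers a query k with a rational
    within 2**-k of the true value, computed by interval bisection."""
    def answer(k):
        lo, hi = Fraction(0), Fraction(max(n, 1))
        while hi - lo > Fraction(1, 2 ** k):
            mid = (lo + hi) / 2
            if mid * mid <= n:
                lo = mid
            else:
                hi = mid
        return lo  # sqrt(n) lies in [lo, hi], so |lo - sqrt(n)| <= 2**-k
    return answer

def add_process(x, y):
    """Composite process: to answer a query at accuracy 2**-k it
    queries both operands at the tighter accuracy 2**-(k+1)."""
    return lambda k: x(k + 1) + y(k + 1)

s = add_process(sqrt_process(2), sqrt_process(2))  # represents 2*sqrt(2)
approx = s(20)  # rational guaranteed within 2**-20 of 2*sqrt(2)
```

The accuracy bookkeeping in `add_process` mirrors how a receiving process in the network drives its upstream producers through queries: each demand for precision is translated into sufficient demands on the operands.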

Relevance: 60.00%

Abstract:

The domestication of plants and animals marks one of the most significant transitions in human, and indeed global, history. Traditionally, study of the domestication process was the exclusive domain of archaeologists and agricultural scientists; today it is an increasingly multidisciplinary enterprise that has come to involve the skills of evolutionary biologists and geneticists. Although the application of new information sources and methodologies has dramatically transformed our ability to study and understand domestication, it has also generated increasingly large and complex datasets, the interpretation of which is not straightforward. In particular, challenges of equifinality, evolutionary variance, and emergence of unexpected or counter-intuitive patterns all face researchers attempting to infer past processes directly from patterns in data. We argue that explicit modeling approaches, drawing upon emerging methodologies in statistics and population genetics, provide a powerful means of addressing these limitations. Modeling also offers an approach to analyzing datasets that avoids conclusions steered by implicit biases, and makes possible the formal integration of different data types. Here we outline some of the modeling approaches most relevant to current problems in domestication research, and demonstrate the ways in which simulation modeling is beginning to reshape our understanding of the domestication process.
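As one concrete example of the kind of simulation modeling described, population geneticists routinely use forward simulations such as the Wright-Fisher model. The sketch below is a generic textbook version with invented parameters (not a model from this paper): it simulates drift and selection on a hypothetical advantageous "domestication" allele in a finite population.

```python
import random

random.seed(1)

def wright_fisher(n, p0, s, generations):
    """Forward-simulate the frequency of an allele with selective
    advantage s in a haploid population of n individuals; drift comes
    from resampling the finite population each generation."""
    p = p0
    for _ in range(generations):
        # selection biases the sampling probability of the allele
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = sum(random.random() < w for _ in range(n)) / n
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return p

# Replicate runs give the outcome distribution that a modeller would
# compare against patterns observed in genetic data.
final = [wright_fisher(n=500, p0=0.1, s=0.1, generations=400)
         for _ in range(20)]
fixed_fraction = sum(f == 1.0 for f in final) / len(final)
```

Comparing such simulated outcome distributions against observed data is what lets explicit models discriminate between processes that would look identical under informal inspection (the equifinality problem mentioned above).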

Relevance: 60.00%

Abstract:

Phytoplankton are crucial to marine ecosystem functioning and are important indicators of environmental change. Phytoplankton data are also essential for informing management and policy, particularly in supporting the new generation of marine legislative drivers, which take a holistic ecosystem approach to management. The Marine Strategy Framework Directive (MSFD) seeks to achieve Good Environmental Status (GES) of European seas through the implementation of such a management approach. This is a regional scale directive which recognises the importance of plankton communities in marine ecosystems; plankton data at the appropriate spatial, temporal and taxonomic scales are therefore required for implementation. The Continuous Plankton Recorder (CPR) survey is a multidecadal, North Atlantic basin scale programme which routinely records approximately 300 phytoplankton taxa. Because of these attributes, the survey plays a key role in the implementation of the MSFD and the assessment of GES in the Northeast Atlantic region. This paper addresses the role of the CPR's phytoplankton time-series in delivering GES through the development and informing of MSFD indicators, the setting of targets against a background of climate change and the provision of supporting information used to interpret change in non-plankton indicators. We also discuss CPR data in the context of other phytoplankton data types that may contribute to GES, as well as explore future possibilities for the use of new and innovative applications of CPR phytoplankton datasets in delivering GES. Efforts must be made to preserve long-term time series, such as the CPR, which supply vital ecological information used to inform evidence-based environmental policy.


Relevance: 60.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%

Abstract:

Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area efficient, low power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and attempts to present a candidate design of an FPU that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
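To make the hardware cost concrete: before any arithmetic happens, an FPU datapath must split each operand into sign, exponent, and significand fields and handle their special encodings. A small sketch of the standard IEEE-754 single-precision layout (this is the generic format, not anything Rigel-specific):

```python
import struct

def fp32_fields(x):
    """Unpack an IEEE-754 single-precision value into the fields an
    FPU datapath operates on: sign bit, biased exponent, fraction."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127 for normal values
    fraction = bits & 0x7FFFFF       # implicit leading 1 for normals
    return sign, exponent, fraction

# -1.5 = -1 * 1.1b * 2^0  ->  sign 1, biased exponent 127,
# fraction with only its top bit set (0x400000)
fields = fp32_fields(-1.5)
```

Every FPU operation must also recognise the reserved exponent patterns (all-zeros for zero/subnormals, all-ones for infinity/NaN), which is a large part of why floating point hardware is area- and power-intensive relative to integer datapaths.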

Relevance: 60.00%

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question in mind and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery.
Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts. 
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
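A toy sketch of the statistical half of such a hybrid approach (invented data, and a plain Bonferroni-corrected two-proportion z-test rather than the dissertation's HVHT framework): test every event's prevalence across two cohorts and keep only the differences that survive multiple-comparison correction.

```python
import math
from collections import Counter

# Two toy cohorts of event sequences (e.g. simplified patient histories).
cohort_a = [["admit", "scan", "discharge"]] * 30 + [["admit", "discharge"]] * 10
cohort_b = [["admit", "scan", "discharge"]] * 10 + [["admit", "discharge"]] * 30

def event_prevalence_tests(a, b):
    """For each event, test whether it occurs in significantly different
    fractions of records in the two cohorts (two-proportion z-test),
    then apply a Bonferroni correction across all tested events."""
    events = set(e for seq in a + b for e in seq)
    n_a, n_b = len(a), len(b)
    p_values = {}
    for ev in events:
        pa = sum(ev in seq for seq in a) / n_a
        pb = sum(ev in seq for seq in b) / n_b
        pool = (pa * n_a + pb * n_b) / (n_a + n_b)
        se = math.sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
        z = 0.0 if se == 0 else (pa - pb) / se
        p_values[ev] = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    alpha = 0.05 / len(events)  # Bonferroni family-wise correction
    return {ev: p for ev, p in p_values.items() if p < alpha}

significant = event_prevalence_tests(cohort_a, cohort_b)  # {"scan": ...}
```

This covers only one metric family (event prevalence); the cumbersomeness described above comes from repeating the same exercise over order, co-occurrence, attribute, and duration metrics at once, which is what motivates coupling the statistics to an interactive display.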

Relevance: 60.00%

Abstract:

Data without labels are commonly analysed using unsupervised machine learning techniques. Such techniques can provide more meaningful representations, useful for a better understanding of the problem at hand, than can be obtained by looking at the data alone. Although abundant expert knowledge exists in many areas where unlabelled data are examined, such knowledge is rarely incorporated into automatic analysis. Incorporating expert knowledge is frequently a matter of combining multiple data sources from disparate hypothetical spaces. In cases where such spaces belong to different data types, this task becomes even more challenging. In this paper we present a novel immune-inspired method that enables the fusion of such disparate types of data for a specific set of problems. We show that our method provides a better visual understanding of one hypothetical space with the help of data from another hypothetical space. We believe that our model has implications for the field of exploratory data analysis and knowledge discovery.

Relevance: 60.00%

Abstract:

In medicine, innovation depends on a better knowledge of the mechanisms of the human body, which is a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging, and a deep understanding of their inner workings requires dealing with much heterogeneous information. Exploring the molecular status and organization of genes, proteins and metabolites provides insights into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of disease. This is why attention is now growing towards the simultaneous exploitation of multi-scale information, and holistic methods are drawing interest to address the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene-regulatory signatures in onco-hematology, and showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when modelling heterogeneous data, and was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a single latent heterogeneous representation, which facilitated the selection of the data types most important for predicting the tumour stage of invasive ductal carcinoma. The results of these four studies lay the groundwork for easing the detection of new biomarkers, ultimately benefiting medical practice and the ever-growing field of Personalized Medicine.
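A minimal sketch of one common way to build a shared latent representation across data types (standardize each block so no data type dominates by scale, concatenate, then take a truncated SVD; synthetic data, and not necessarily the thesis's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy multi-omic-style data: 50 samples, two data types with different
# dimensionalities, both driven by the same hidden 2-D signal.
hidden = rng.normal(size=(50, 2))
expression = hidden @ rng.normal(size=(2, 40)) + 0.1 * rng.normal(size=(50, 40))
methylation = hidden @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(50, 15))

def joint_latent(blocks, k):
    """Standardize each block column-wise, concatenate across data
    types, and keep a rank-k SVD as the shared representation."""
    std_blocks = [(b - b.mean(axis=0)) / b.std(axis=0) for b in blocks]
    joint = np.hstack(std_blocks)
    u, s, _ = np.linalg.svd(joint, full_matrices=False)
    return u[:, :k] * s[:k]  # one k-dimensional embedding per sample

z = joint_latent([expression, methylation], k=2)
```

The per-block standardization is the step that addresses heterogeneity of scale between data types; more sophisticated integration methods differ mainly in how they weight or model each block before extracting the shared factors.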

Relevance: 40.00%

Abstract:

GLUT4 protein expression in white adipose tissue (WAT) and skeletal muscle (SM) was investigated in 2-month-old, 12-month-old spontaneously obese, and 12-month-old calorie-restricted lean Wistar rats, considering different parameters of analysis, such as tissue and body weight and the total protein yield of the tissue. In WAT, a ~70% decrease was observed in plasma membrane and microsomal GLUT4 protein, expressed per µg of protein or per g of tissue, in both 12-month-old obese and 12-month-old lean rats compared to 2-month-old rats. However, when plasma membrane and microsomal GLUT4 tissue contents were expressed per g of body weight, they were the same. In SM, GLUT4 protein content expressed per µg of protein was similar in 2-month-old and 12-month-old obese rats, whereas it was reduced in 12-month-old obese rats when expressed per g of tissue or per g of body weight, which may play an important role in insulin resistance. Weight loss did not change the SM GLUT4 content. These results show that altered insulin sensitivity is accompanied by modulation of GLUT4 protein expression. However, the true role of WAT and SM GLUT4 contents in whole-body or tissue insulin sensitivity should be determined considering not only GLUT4 protein expression but also the strong morphostructural changes in these tissues, which require different types of data analysis.
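The arithmetic behind the denominator effect is easy to reproduce. With invented numbers (not the paper's measurements), a ~70% fall in GLUT4 per g of tissue can coexist with an unchanged value per g of body weight whenever depot mass grows in proportion to body mass:

```python
# Illustrative values only: GLUT4 density per g of tissue, depot mass,
# and body mass for a hypothetical young vs older obese rat.
young = dict(glut4_per_g_tissue=300.0, tissue_g=2.0, body_g=250.0)
obese = dict(glut4_per_g_tissue=90.0, tissue_g=40 / 3, body_g=500.0)

def per_g_body(rat):
    """Total depot GLUT4 (density x depot mass) divided by body mass."""
    return rat["glut4_per_g_tissue"] * rat["tissue_g"] / rat["body_g"]

tissue_ratio = obese["glut4_per_g_tissue"] / young["glut4_per_g_tissue"]  # 0.3
body_ratio = per_g_body(obese) / per_g_body(young)                        # ~1.0
```

The 70% drop per g of tissue disappears per g of body weight because the obese depot is both larger and a bigger fraction of body mass, which is exactly why the choice of denominator changes the biological conclusion.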

Relevance: 40.00%

Abstract:

Pteropods are a group of holoplanktonic gastropods for which global biomass distribution patterns remain poorly resolved. The aim of this study was to collect and synthesize existing pteropod (Gymnosomata, Thecosomata and Pseudothecosomata) abundance and biomass data in order to evaluate the global distribution of pteropod carbon biomass, with a particular emphasis on its seasonal, temporal and vertical patterns. We collected 25 902 data points from several online databases and a number of scientific articles. The biomass data have been gridded onto a 360 x 180 one-degree grid, with a vertical resolution of 33 WOA depth levels, and converted to NetCDF format. Data were collected between 1951 and 2010, with sampling depths ranging from 0 to 1000 m. Pteropod biomass data were either extracted directly or derived by converting abundance to biomass with pteropod-specific length-to-weight conversions. In the Northern Hemisphere (NH) the data were distributed evenly throughout the year, whereas sampling in the Southern Hemisphere (SH) was biased towards the austral summer months. 86% of all biomass values were located in the NH, most (42%) within the latitudinal band of 30-50° N. The range of global biomass values spanned over three orders of magnitude, with a mean and median biomass concentration of 8.2 mg C l-1 (SD = 61.4) and 0.25 mg C l-1, respectively, for all data points, and a mean of 9.1 mg C l-1 (SD = 64.8) and a median of 0.25 mg C l-1 for non-zero biomass values. The highest mean and median biomass concentrations in the NH were located between 40-50° N (mean: 68.8 mg C l-1 (SD = 213.4); median: 2.5 mg C l-1), while in the SH they were within the 70-80° S latitudinal band (mean: 10.5 mg C l-1 (SD = 38.8); median: 0.2 mg C l-1). Biomass values were lowest in the equatorial regions.

A broad range of biomass concentrations was observed at all depths, with the biomass peak located in the surface layer (0-25 m) and values generally decreasing with depth. However, biomass peaks were located at different depths in different ocean basins: 0-25 m in the N Atlantic, 50-100 m in the Pacific, 100-200 m in the Arctic, 200-500 m in the Brazilian region and >500 m in the Indo-Pacific region. Biomass in the NH was relatively invariant over the seasonal cycle, but was more seasonally variable in the SH. The collected database provides a valuable tool for modellers studying ecosystem processes and global biogeochemical cycles.
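The gridding step described above amounts to averaging point measurements into one-degree cells. A minimal sketch with invented coordinates and values (the actual database also uses the 33 WOA depth levels as a third axis, omitted here):

```python
import numpy as np

# Toy point measurements: (latitude, longitude, biomass value).
# Coordinates and values are illustrative, not from the database.
points = [(45.2, -20.7, 3.0), (45.9, -20.1, 5.0), (-65.5, 110.3, 0.2)]

# One-degree grid: 180 latitude rows x 360 longitude columns.
# Accumulate sums and counts so each cell holds the mean of its points.
total = np.zeros((180, 360))
count = np.zeros((180, 360))
for lat, lon, val in points:
    row = int(lat + 90)    # latitude  -90..90  -> row 0..179
    col = int(lon + 180)   # longitude -180..180 -> col 0..359
    total[row, col] += val
    count[row, col] += 1

# Cells with no observations become NaN rather than zero.
grid = np.divide(total, count,
                 out=np.full_like(total, np.nan), where=count > 0)
```

Keeping empty cells as NaN instead of zero matters for the statistics quoted above: a zero would be a (false) biomass observation, while NaN correctly records the absence of sampling.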