922 results for data storage concept


Relevance: 30.00%

Abstract:

One of the aims of the Science and Technology Committee (STC) of the Group on Earth Observations (GEO) is to establish a GEO Label: a label to certify geospatial datasets and their quality. As proposed, the GEO Label will be used as a value indicator for geospatial data and datasets accessible through the Global Earth Observation System of Systems (GEOSS). It is suggested that the development of such a label will significantly improve user recognition of the quality of geospatial datasets, and that its use will help promote trust in datasets that carry the established GEO Label. Furthermore, the GEO Label is seen as an incentive to data providers. GEOSS already contains a large amount of data and is constantly growing. A GEO Label could therefore assist in searching by providing users with visual cues of dataset quality and possibly relevance; it could effectively stand as a decision-support mechanism for dataset selection. Our project, GeoViQua, together with EGIDA and ID-03, is currently undertaking research to define and evaluate the concept of a GEO Label. The development and evaluation process is being carried out in three phases. In Phase I we conducted an online survey (the GEO Label Questionnaire) to identify initial user and producer views on a GEO Label and its potential role. In Phase II we will conduct a further study presenting GEO Label examples based on Phase I, eliciting feedback on these examples under controlled conditions. In Phase III we will create physical prototypes to be used in a human-subject study; the most successful prototypes will then be put forward as potential GEO Label options. We are currently in Phase I, in which we developed an online questionnaire to collect initial GEO Label requirements and to identify the role a GEO Label should serve from the user and producer standpoints.
The GEO Label Questionnaire consists of generic questions designed to identify whether users and producers believe a GEO Label is relevant to geospatial data; whether they want a single "one-for-all" label or separate labels serving particular roles; which function would be most relevant for a GEO Label to carry; and which functionality users and producers would like to see carried over from the common rating and review systems they use. To distribute the questionnaire, relevant user and expert groups were contacted at meetings or by email. At this stage we have collected over 80 valid responses from geospatial data users and producers. This communication provides a comprehensive analysis of the survey results, indicating to what extent the users surveyed in Phase I value a GEO Label and suggesting directions in which a GEO Label may develop. Potential GEO Label examples based on the survey results will be presented for use in Phase II.

Relevance: 30.00%

Abstract:

In order to study the effect of washcoat composition on lean NOx trap (LNT) aging characteristics, fully formulated monolithic LNT catalysts containing varying amounts of La-stabilized CeO2 (5 wt% La2O3) or CeO2-ZrO2 (Ce:Zr = 70:30) were subjected to accelerated aging on a bench reactor. Subsequent catalyst evaluation revealed that aging resulted in deterioration of the NOx storage, NOx release and NOx reduction functions, whereas the observation of lean phase NO2 slip for all of the aged catalysts indicated that LNT performance was not limited by the kinetics of NO oxidation. After aging, all of the catalysts showed increased selectivity to NH3 in the temperature range 250–450 °C. TEM, H2 chemisorption, XPS and elemental analysis data revealed two main changes which can explain the degradation in LNT performance. First, residual sulfur in the catalysts, present as BaSO4, decreased catalyst NOx storage capacity. Second, sintering of the precious metals in the washcoat was observed, which can be expected to decrease the rate of NOx reduction. Additionally, sintering is hypothesized to result in segregation of the precious metal and Ba phases, resulting in less efficient NOx spillover from Pt to Ba during NOx adsorption, as well as decreased rates of reductant spillover from Pt to Ba and reverse NOx spillover during catalyst regeneration. Spectacular improvement in LNT durability was observed for catalysts containing CeO2 or CeO2-ZrO2 relative to their non-ceria containing analog. This was attributed to (i) the ability of ceria to participate in NOx storage/reduction as a supplement to the main Ba NOx storage component; (ii) the fact that Pt and CeO2(-ZrO2) are not subject to phase segregation; and (iii) the ability of ceria to trap sulfur, resulting in decreased sulfur accumulation on the Ba component.

Relevance: 30.00%

Abstract:

The concept of data independence designates the techniques that allow data to be changed without affecting the applications that process them. Different structures of information bases require corresponding tools for supporting data independence. One kind of information base, the Multi-dimensional Numbered Information Spaces, is presented in the paper, and data independence in such information bases is discussed.
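
The idea can be illustrated with a small sketch (hypothetical names, not the Multi-dimensional Numbered Information Spaces interface): the application depends only on an abstract access layer, so the physical organisation of the data can change without the application changing.

```python
from abc import ABC, abstractmethod

class RecordStore(ABC):
    """Abstract access layer: applications depend only on this interface."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class DictStore(RecordStore):
    """One physical organisation: an in-memory hash map."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class PairListStore(RecordStore):
    """A different physical organisation: a plain association list."""
    def __init__(self):
        self._pairs = []
    def put(self, key, value):
        self._pairs = [(k, v) for (k, v) in self._pairs if k != key]
        self._pairs.append((key, value))
    def get(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)

def application(store: RecordStore) -> str:
    """Application code: unchanged whichever storage structure is plugged in."""
    store.put("unit", "metre")
    return store.get("unit")
```

`application(DictStore())` and `application(PairListStore())` return the same result even though the underlying data organisation differs; that interchangeability is what data independence provides.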

Relevance: 30.00%

Abstract:

The performance of a supply chain depends critically on the coordinating actions and decisions undertaken by the trading partners. The sharing of product and process information plays a central role in this coordination and is a key driver of supply chain success. In this paper we propose the concept of "linked pedigrees": linked datasets that enable the sharing of traceability information about products as they move along the supply chain. We present a distributed, decentralised, linked-data-driven architecture that consumes real-time supply chain linked data to generate linked pedigrees. We then present a communication protocol to enable the exchange of linked pedigrees among trading partners. We illustrate the utility of linked pedigrees with examples from the perishable-goods logistics supply chain.
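
A minimal sketch of how one pedigree can link to its upstream partner's pedigree; the `ped:` terms and example.org URIs below are invented for illustration and are not the ontology proposed in the paper.

```python
def make_pedigree(org, epc, event, prev_pedigree_uri=None):
    """Build one linked-pedigree record as a JSON-LD-style dictionary.
    The `ped:` vocabulary and example.org URIs are illustrative only."""
    record = {
        "@id": f"http://example.org/{org}/pedigree/{epc}",
        "@type": "ped:Pedigree",
        "ped:product": f"urn:epc:id:{epc}",
        "ped:event": event,
    }
    if prev_pedigree_uri is not None:
        # The link to the upstream partner's pedigree: following these
        # links reconstructs end-to-end traceability along the chain.
        record["ped:receivedFrom"] = {"@id": prev_pedigree_uri}
    return record

# A grower commissions a product; the carrier's pedigree links back to it.
grower = make_pedigree("grower", "sgtin.4012345.011122.25", "commissioning")
carrier = make_pedigree("carrier", "sgtin.4012345.011122.25", "shipping",
                        prev_pedigree_uri=grower["@id"])
```

Because each partner publishes its own record and only references its predecessor by URI, the architecture stays decentralised: no single party has to hold the whole pedigree chain.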

Relevance: 30.00%

Abstract:

AMS Subj. Classification: 68U05, 68P30

Relevance: 30.00%

Abstract:

Cardiotocographic data provide physicians with information about foetal development and make it possible to assess conditions such as foetal distress. An incorrect evaluation of the foetal status can, of course, be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been devoted to spectral analysis of foetal heart rate variability. It is worth remembering, however, that the foetal heart rate (FHR) is intrinsically an unevenly sampled series, so to produce an evenly sampled series a zero-order, linear or cubic-spline interpolation can be employed. This is problematic for frequency analyses because interpolation introduces alterations in the FHR power spectrum. In particular, the interpolation process can alter the power spectral density (PSD) in ways that affect, for example, the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency power ratio), which is an important clinical parameter. To estimate the frequency-spectrum alterations of the FHR variability signal due to interpolation and cardiotocographic storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed SVB values from the PSD. For PSD estimation we chose the Lomb method, as suggested by other authors for studying uneven heart rate series in adults. In summary, the results show that evaluating SVB on the evenly spaced FHR series leads to its overestimation, due both to the interpolation process and to the storage rate; cubic-spline interpolation, however, produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
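
The pipeline described (simulate an uneven FHR-like series, estimate the PSD with the Lomb method, form the LF/HF ratio) can be sketched as follows; the component frequencies, amplitudes and band edges are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# Uneven "FHR variability" series: one LF tone (0.1 Hz) and one weaker
# HF tone (0.8 Hz), sampled at irregular times over 60 s.
t = np.sort(rng.uniform(0.0, 60.0, 300))
y = 1.0 * np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.8 * t)
y = y - y.mean()

freqs = np.linspace(0.03, 1.0, 500)         # Hz
pxx = lombscargle(t, y, 2 * np.pi * freqs)  # Lomb takes angular frequencies

lf = pxx[(freqs >= 0.03) & (freqs < 0.2)].sum()  # low-frequency power
hf = pxx[(freqs >= 0.2) & (freqs <= 1.0)].sum()  # high-frequency power
svb = lf / hf  # sympatho-vagal balance proxy: LF/HF power ratio
```

The Lomb periodogram is evaluated directly on the uneven time stamps, which is exactly why it avoids the interpolation step that distorts the spectrum.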

Relevance: 30.00%

Abstract:

Cardiotocographic (CTG) data provide physicians with information about foetal development and, through the assessment of specific parameters (such as accelerations and uterine contractions), make it possible to assess conditions such as foetal distress. An incorrect evaluation of the foetal status can, of course, be very dangerous. In recent decades, to improve the interpretation of cardiotocographic recordings, great interest has been devoted to spectral analysis of foetal heart rate variability (FHRV). It is worth remembering that the foetal heart rate (FHR) is intrinsically an unevenly sampled series and that, to obtain evenly sampled series, many commercial cardiotocographs use a zero-order interpolation (with a CTG data storage rate of 4 Hz). This is problematic for frequency analyses because interpolation introduces alterations in the FHR power spectrum. In particular, this interpolation process can produce artifacts and an attenuation of the high-frequency components of the power spectral density (PSD), which affects, for example, the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency power ratio), an important clinical parameter. To estimate the frequency-spectrum alterations due to zero-order interpolation and other CTG storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different storage rates), and computed SVB values from the PSD. For PSD estimation we chose the Lomb method, as suggested by other authors for application to uneven heart rate series. ©2009 IEEE.
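
The attenuation of high-frequency power by zero-order interpolation at a 4 Hz storage rate can be sketched as follows; the tone frequency, sample counts and duration are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Unevenly sampled "FHR-like" tone in the high-frequency band (0.8 Hz),
# roughly 4 samples per second on average over 60 s.
t = np.sort(rng.uniform(0.0, 60.0, 240))
y = np.sin(2 * np.pi * 0.8 * t)

tg = np.arange(0.0, 60.0, 0.25)  # even 4 Hz storage grid

# Zero-order hold: each grid point repeats the most recent uneven sample.
idx = np.clip(np.searchsorted(t, tg, side="right") - 1, 0, len(t) - 1)
y_zoh = y[idx]

def power_at(sig, f, fs=4.0):
    """Periodogram power of `sig` at frequency f (f must fall on an FFT bin)."""
    n = len(sig)
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2 / n
    return spec[int(round(f * n / fs))]

# The staircase signal loses energy at the 0.8 Hz component: this is the
# attenuation of high-frequency PSD content described in the abstract.
p_true = power_at(np.sin(2 * np.pi * 0.8 * tg), 0.8)
p_zoh = power_at(y_zoh, 0.8)
```

Comparing `p_zoh` against `p_true` makes the HF loss visible, and hence the upward bias it induces in the LF/HF ratio.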

Relevance: 30.00%

Abstract:

Fuzzy data envelopment analysis (DEA) models have emerged as a class of DEA models that account for imprecise inputs and outputs of decision-making units (DMUs). Although several approaches for solving fuzzy DEA models have been developed, they have drawbacks, ranging from insufficient discrimination power to simplistic numerical examples that handle only triangular or symmetrical fuzzy numbers. To address these drawbacks, this paper proposes using the concept of expected value in a generalized DEA (GDEA) model. This allows the unification of three models (fuzzy expected CCR, fuzzy expected BCC, and fuzzy expected FDH) and enables them to handle both symmetrical and asymmetrical fuzzy numbers. We also explore the role of the fuzzy GDEA model as a ranking method and compare it to existing super-efficiency evaluation models. Our proposed model is always feasible, whereas infeasibility problems remain in certain cases under existing super-efficiency models. To illustrate the performance of the proposed method, it is first tested on two established numerical examples and compared with the results obtained from alternative methods. A third example, on energy dependency among 23 European Union (EU) member countries, is then used to validate and demonstrate the efficacy of our approach with asymmetric fuzzy numbers.
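
As a rough illustration of the expected-value idea (a sketch, not the paper's unified GDEA formulation): triangular fuzzy inputs and outputs are defuzzified by their expected values and fed into an ordinary input-oriented CCR model solved as a linear program. All data values below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ev_triangular(a, b, c):
    """Expected value of a triangular fuzzy number (a, b, c): (a + 2b + c) / 4."""
    return (a + 2 * b + c) / 4.0

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0, multiplier form:
    maximize u.y0  subject to  v.x0 = 1,  u.Yj - v.Xj <= 0,  u, v >= 0."""
    m, n = X.shape          # m inputs, n DMUs
    s = Y.shape[0]          # s outputs
    c_obj = np.concatenate([-Y[:, j0], np.zeros(m)])      # linprog minimizes
    A_ub = np.hstack([Y.T, -X.T])                         # u.Yj - v.Xj <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j0]])[None]  # v.x0 = 1
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Two invented DMUs with triangular fuzzy data, defuzzified by expected value:
# DMU 0 uses input 2 to produce output 1; DMU 1 uses input 4 for the same output.
X = np.array([[ev_triangular(1.8, 2.0, 2.2), ev_triangular(3.8, 4.0, 4.2)]])
Y = np.array([[ev_triangular(0.9, 1.0, 1.1), ev_triangular(0.9, 1.0, 1.1)]])
eff = [ccr_efficiency(X, Y, j) for j in range(2)]
```

Here DMU 0 comes out efficient and DMU 1 at half its efficiency, since it needs twice the input for the same output; the expected-value step is what turns the fuzzy data into a crisp LP.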

Relevance: 30.00%

Abstract:

OpenAIREplus builds on the outcomes of the OpenAIRE project, which implements the EC Open Access (OA) pilot. Capitalizing on the OpenAIRE infrastructure, built for managing FP7- and ERC-funded articles, and the associated support mechanism of the European Helpdesk System, OpenAIREplus will "develop an open access, participatory infrastructure for scientific information". It will significantly expand its base of harvested publications to include all OA publications indexed by the DRIVER infrastructure (more than 270 validated institutional repositories) and any other repository containing "peer-reviewed literature" that complies with certain standards. It will also generically harvest and index the metadata of scientific datasets in selected, diverse OA thematic data repositories. It will support the concept of linked publications by deploying novel services for "linking peer-reviewed literature and associated data sets and collections", from link discovery based on diverse forms of mining (textual, usage, etc.) to storage, visual representation, and on-line exploration. It will offer user-level services to experts and "non-scientists" alike, as well as programming interfaces for "providers of value-added services" to build applications on its content. Deposited articles and data will be openly accessible through an enhanced version of the OpenAIRE portal, together with any available relevant information on associated project funding and usage statistics. OpenAIREplus will retain its European footprint, engaging people and scientific repositories in almost all 27 EU member states and beyond. The technical work will be complemented by a suite of studies and associated research efforts that will partly proceed in collaboration with "different European initiatives" and investigate issues of "intellectual property rights, efficient financing models, and standards".

Relevance: 30.00%

Abstract:

In this paper we present an approach for extending the learning set of a classification algorithm with additional metadata, which is used as a basis for giving appropriate names to discovered regularities. Analysing the correspondence between connections established in the attribute space and existing links between concepts can serve as a test for the creation of an adequate model of the observed world. The Meta-PGN classifier is suggested as a possible tool for establishing these connections. Applying this approach in the field of content-based image retrieval of art paintings provides a tool for extracting specific feature combinations that represent different aspects of artists' styles, periods and movements.

Relevance: 30.00%

Abstract:

The purpose of this work is to argue that engineers can be motivated to study statistical concepts through applications from their own experience that involve statistical ideas. The main idea is to choose data from a manufacturing facility (for example, output from a CMM machine) and show that even parts that do not meet exact specifications are used in production. By graphing the data one can show that the error is random but follows a distribution; that is, there is regularity in the data in the statistical sense. Because the error distribution is continuous, we advocate introducing the concept of randomness starting with continuous random variables, with probabilities given by areas under the density. Discrete random variables are then introduced in terms of decisions connected with the size of the errors, before generalizing to the abstract concept of probability. Using software, students can then be motivated to study statistical analysis of the data they encounter and the use of this analysis to make engineering and management decisions.
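
The classroom exercise described above can be sketched as follows, with invented measurement data standing in for real CMM output; the nominal size, error spread and tolerance are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
# Invented CMM output: 500 measurements of a nominal 25.00 mm dimension.
nominal, sigma = 25.00, 0.02
measured = nominal + rng.normal(0.0, sigma, 500)
errors = measured - nominal

# The errors are random yet regular "in the statistical sense":
# they follow a distribution whose areas give probabilities.
mu_hat = errors.mean()
sd_hat = errors.std(ddof=1)

def normal_cdf(x, mu, sd):
    """CDF of the fitted normal density (area under the curve up to x)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

tol = 0.05  # an illustrative +/- 0.05 mm drawing tolerance
p_in_spec = normal_cdf(tol, mu_hat, sd_hat) - normal_cdf(-tol, mu_hat, sd_hat)
```

The probability of a part falling within tolerance is computed as an area under the fitted density, which is exactly the continuous-first route to randomness advocated in the abstract.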

Relevance: 30.00%

Abstract:

Microposts are small fragments of social media content published using a lightweight paradigm (e.g. tweets, Facebook likes, Foursquare check-ins). Microposts have been used for a variety of applications (e.g. sentiment analysis, opinion mining, trend analysis) by gleaning useful information, often using third-party concept extraction tools. There has been very wide uptake of such tools in the last few years, along with the creation and adoption of new methods for concept extraction. However, the evaluation of such efforts has been largely confined to document corpora (e.g. news articles), calling into question the suitability of concept extraction tools and methods for Micropost data. This report describes the Making Sense of Microposts Workshop (#MSM2013) Concept Extraction Challenge, hosted in conjunction with the 2013 World Wide Web conference (WWW'13). The Challenge dataset comprised a manually annotated training corpus of Microposts and an unlabelled test corpus. Participants were set the task of engineering a concept extraction system for a defined set of concepts. Of a total of 22 complete submissions, 13 were accepted for presentation at the workshop; the submissions covered methods ranging from sequence-mining algorithms for attribute extraction to part-of-speech tagging for Micropost cleaning and rule-based and discriminative models for token classification. In this report we describe the evaluation process and explain the performance of the different approaches in different contexts.

Relevance: 30.00%

Abstract:

This paper looks at the issue of privacy and anonymity through the prism of Scott's concept of legibility, i.e. the desire of the state to obtain an ever more accurate mapping of its domain and the actors within it. We argue that privacy was absent from village life in the past, and that it arose as a temporary phenomenon stemming from the lack of appropriate technology to make all life in the city legible. Cities have been loci of creativity for the major part of human civilisation, and there is something specific about the illegibility of cities that facilitates creativity and innovation. Because they provide the technology to catalogue and classify all objects and ideas around us, semantic web technologies, Linked Data and the Internet of Things can be seen as unwittingly furthering this ever greater legibility. There is a danger that the over-description of a domain will lead to a loss of creativity and innovation. We conclude by arguing that our prime concern must be to preserve illegibility, because the survival of some form, any form, of civilisation depends upon it.

Relevance: 30.00%

Abstract:

In the statistical sense, risk cannot be measured directly: it is a latent concept, just like economic development, organization or intelligence. What do these have in common? Risk, too, is a complex concept; it comprises several measurable factors, and although we measure many of them, we do not even assume that we obtain an exact result. In this approach the analyst knows from the outset that his knowledge is incomplete. Following Bélyácz [2011], we may also put it this way: "Statisticians know that there is something they do not know." / === / From a statistical point of view risk, like economic development, is a latent concept. Typically there is no single number that can explicitly estimate or project risk. In finance, variance is used as a proxy to measure risk; other professions use other concepts. Underwriting is the most important step in the insurance business for analysing exposure. Actuaries evaluate the average claim size and the probability of a claim to calculate risk. Bayesian credibility can be used to calculate the insurance premium by combining claim frequencies with empirical knowledge as a prior. Different types of risk can be classified in a risk matrix to separate insurable risk; only this category can be analysed by multivariate statistical methods, which are based on statistical data. Sample size and the frequency of events are relevant not only in insurance but in pension and investment decisions as well.
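
The Bayesian credibility idea mentioned above can be made concrete with a standard Bühlmann-style blend of own experience and the collective prior; this is a textbook sketch, and the credibility constant k and all numbers are illustrative, not taken from the article.

```python
def credibility_premium(claims, prior_mean, k):
    """Buhlmann-style credibility premium: blend the group's own claim
    experience with the collective (prior) mean using Z = n / (n + k).
    The credibility constant k is an illustrative parameter."""
    n = len(claims)
    z = n / (n + k)
    own_mean = sum(claims) / n
    return z * own_mean + (1 - z) * prior_mean

# Three observed claims, a collective mean of 150, k = 3:
# Z = 0.5, so the premium is halfway between 110 and 150.
premium = credibility_premium([100, 120, 110], prior_mean=150, k=3)  # 130.0
```

As more claim experience accumulates (larger n), Z approaches 1 and the premium relies less on the prior: the mechanism by which frequencies and empirical knowledge are combined.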

Relevance: 30.00%

Abstract:

There is currently a crisis in science education in the United States. This statement is based on the National Science Foundation's report stating that the nation's students, on average, still rank near the bottom internationally in science and math achievement.

This crisis forms the background of the problem for this study. The investigation examined learner variables thought to play a role in teaching chemistry at the secondary school level and related them to achievement in the chemistry classroom. Among these, cognitive style (field dependence/independence), attitudes toward science, and self-concept have received considerable attention from researchers in recent years. These variables were related to different competencies that can be used to measure various types of achievement in the secondary school chemistry classroom, here called academic, laboratory, and problem-solving achievement. Each of these achievement components may be related to a different set of learner variables, and the main purpose of this study was to investigate the nature of these relationships.

Three instruments were used for data collection, measuring attitudes toward science, cognitive style, and self-concept; teacher grades were used to determine each student's chemistry achievement.

The research questions were analyzed using Pearson product-moment correlation coefficients and t-tests. Results indicated that field independence was significantly correlated with problem-solving, academic, and laboratory achievement. Educational researchers should therefore investigate how to teach students to be more field independent so that they can achieve at higher levels in chemistry.

It was also found that more positive attitudes toward the social benefits and problems accompanying scientific progress were significantly correlated with higher achievement on all three measures of chemistry achievement. This suggests that educational researchers should investigate how students might be guided toward more favorable attitudes toward science so that they will achieve at higher levels in chemistry.

An overall theme that emerged from this study was that the findings refuted the idea that female students regard science as being for males only, or as an inappropriate and unfeminine activity: when the means of males and females were compared on the three measures of chemistry achievement, there was no statistically significant difference between them in problem-solving or academic achievement, while females were significantly better in laboratory achievement.
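
The statistical machinery of the study (Pearson product-moment correlations and independent-samples t-tests) can be sketched on simulated data; the variable names and score distributions below are invented and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Invented scores: a field-independence measure and a correlated
# chemistry-achievement score for 40 hypothetical students.
field_indep = rng.normal(10.0, 3.0, 40)
achievement = 2.0 * field_indep + rng.normal(0.0, 4.0, 40)

# Pearson product-moment correlation, as used for the research questions.
r, p_corr = stats.pearsonr(field_indep, achievement)

# Independent-samples t-test, as used to compare male and female means.
group_a = rng.normal(75.0, 8.0, 30)
group_b = rng.normal(75.0, 8.0, 30)
t_stat, p_t = stats.ttest_ind(group_a, group_b)
```

A significant positive `r` with a small `p_corr` corresponds to the reported correlation between field independence and achievement, while a large `p_t` corresponds to the "no significant difference" findings between groups.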