925 results for data-types


Relevance: 60.00%

Abstract:

To optimize the in-memory representation of Scheme records in the Gambit compiler, we introduced into it a system of type annotations and vectors holding an abridged representation of records. These abridged records omit the reference to the type descriptor and the header normally carried by every record; instead, a typing tree covering the whole memory is used to locate the vector that holds such a reference. These new features are implemented through changes to the Gambit runtime. We introduce new primitives into the language and modify the existing architecture to handle the new data types correctly. The garbage collector must be adapted to cope with records containing heterogeneous values at irregular alignments, and with references held inside other objects. Maintenance of the typing tree must also be automatic. We then run a series of performance tests to determine whether these new primitives yield gains. We observe a major improvement in allocation performance and GC behaviour for large typed records and for vectors of records, typed or not. Small overheads are incurred, however, on field accesses and, for vectors of records, on access to the type descriptor.
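The typing tree can be pictured as a global map from memory regions to the vector that owns them. The following is a minimal sketch of that lookup idea in Python (the names, the interval-map structure and the API are our assumptions, not Gambit's actual runtime code):

```python
import bisect

class TypingTree:
    """Hypothetical sketch: map memory regions to the vector owning them,
    so abridged records need no per-object header or descriptor pointer."""

    def __init__(self):
        self.starts = []   # sorted start addresses of registered regions
        self.regions = []  # (end_address, owning_vector), parallel to starts

    def register(self, start, end, owning_vector):
        # Record that [start, end) holds abridged records owned by a vector.
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.regions.insert(i, (end, owning_vector))

    def lookup(self, address):
        # Recover the owning vector (and thus the type descriptor) for an
        # address; this extra indirection is one plausible source of the
        # reported field-access overhead.
        i = bisect.bisect_right(self.starts, address) - 1
        if i >= 0:
            end, vec = self.regions[i]
            if address < end:
                return vec
        return None

tree = TypingTree()
tree.register(0x1000, 0x1200, "vector of abridged point records")
print(tree.lookup(0x1010))  # -> the owning vector
print(tree.lookup(0x3000))  # -> None (no registered region)
```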

Relevance: 60.00%

Abstract:

Diagnosis of Hridroga (cardiac disorders) in Ayurveda requires the combination of many different types of data, including personal details, patient symptoms, patient histories, general examination results, Ashtavidha pareeksha results, etc. Computer-assisted decision support systems must be able to combine these data types into a seamless system. Intelligent agents, an approach used chiefly in business applications, are applied here to medical diagnosis. This paper presents a multi-agent system named "Distributed Ayurvedic Diagnosis and Therapy System for Hridroga using Agents" (DADTSHUA) and describes the architecture of the DADTSHUA model. The system uses mobile agents and an ontology for passing data through the network, so transport delay can be minimized. It should be very helpful to beginning physicians, reducing ambiguity in diagnosis and therapy. The system is implemented using the Java Agent DEvelopment framework (JADE), a Java-compliant mobile-agent platform from TILab.
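As a rough illustration of the multi-agent data flow described, here is a toy sketch in Python (generic message-passing classes and a placeholder diagnostic rule; the real system uses JADE mobile agents in Java together with an ontology):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: dict

class Platform:
    """Stand-in for the agent platform's message transport."""
    def __init__(self):
        self.agents = {}
    def register(self, agent):
        self.agents[agent.name] = agent
    def deliver(self, recipient, message):
        self.agents[recipient].receive(message)

class Agent:
    def __init__(self, name, platform):
        self.name, self.platform = name, platform
        platform.register(self)
    def send(self, recipient, content):
        self.platform.deliver(recipient, Message(self.name, content))
    def receive(self, message):
        raise NotImplementedError

class DiagnosisAgent(Agent):
    def receive(self, message):
        # Placeholder rule; the real system combines symptoms, histories,
        # examination and Ashtavidha pareeksha results via an ontology.
        symptoms = message.content.get("symptoms", [])
        verdict = "possible Hridroga" if "chest pain" in symptoms else "inconclusive"
        self.send(message.sender, {"diagnosis": verdict})

class PhysicianAgent(Agent):
    def receive(self, message):
        print(message.content)

platform = Platform()
PhysicianAgent("physician", platform)
DiagnosisAgent("diagnosis", platform)
platform.agents["physician"].send("diagnosis", {"symptoms": ["chest pain"]})
# prints: {'diagnosis': 'possible Hridroga'}
```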

Relevance: 60.00%

Abstract:

As the number of resources on the web exceeds by far the number of documents one can track, it becomes increasingly difficult to remain up to date on one's own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract a conceptual description of the contents. One way to overcome this problem is social bookmarking tools, which are rapidly emerging on the web. In such systems, users set up lightweight conceptual structures called folksonomies, thus overcoming the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes more and more stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach allows for any kind of data, as it does not rely on the internal structure of the documents; in particular, this makes it possible to consider different data types in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
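The differential-ranking idea can be sketched compactly: run a PageRank-like weight spreading over the (user, tag, resource) graph twice, once without and once with a preference for a topic, and rank by the difference. The sketch below is our simplified reading (plain pairwise edges instead of the paper's exact hypergraph treatment; damping factor and iteration count are arbitrary):

```python
from collections import defaultdict

def folk_rank(assignments, preferred=(), d=0.85, iters=50):
    # Each tag assignment (user, tag, resource) contributes its three
    # pairwise edges to an undirected graph.
    adj = defaultdict(set)
    for user, tag, res in assignments:
        u, t, r = ("u", user), ("t", tag), ("r", res)
        for a, b in ((u, t), (t, r), (u, r)):
            adj[a].add(b)
            adj[b].add(a)
    nodes = list(adj)
    chosen = [n for n in preferred if n in adj] or nodes
    pref = {n: (1.0 / len(chosen) if n in chosen else 0.0) for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        rank = {n: d * sum(rank[m] / len(adj[m]) for m in adj[n])
                   + (1 - d) * pref[n]
                for n in nodes}
    return rank

def topic_trend(assignments, topic_tag):
    base = folk_rank(assignments)                       # uniform preference
    topic = folk_rank(assignments, [("t", topic_tag)])  # topic preference
    return {n: topic[n] - base[n] for n in base}        # differential rank
```

The users, tags and resources with the largest positive differential rank are the ones specific to the chosen topic.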

Relevance: 60.00%

Abstract:

The markup language XML is used to annotate documents and has established itself as a standard data interchange format. This creates the need not only to store and transfer XML documents as plain text files, but also to persist them in a better-structured form, for instance in dedicated XML databases or in relational databases. Relational databases have so far relied on two fundamentally different techniques: XML documents are either stored unchanged as binary or character large objects, or they are split up so that they can be stored, normalized, in conventional relational tables (the so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that represents a middle way between the existing solutions and takes advantage of the extended SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that make it possible to map XML documents onto relational structures in such a way that the hierarchical structure is preserved. This offers two advantages. On the one hand, proven technologies from the world of relational databases remain fully available. On the other hand, the SQL:2003 types preserve the inherent tree structure of XML documents, so there is no need to reassemble them on demand via expensive joins over tuples that are usually normalized and spread across several tables. The thesis first settles fundamental questions about suitable, efficient mappings of XML documents onto SQL:2003-compliant data types. Building on this, a suitable, reversible conversion procedure is developed, implemented and analyzed within a prototype application. In the design of the mapping procedure, particular attention is paid to its usability with an existing, mature relational database management system (DBMS). Since support for SQL:2003 in commercial DBMSs is still incomplete, it must be examined to what extent the individual systems are suitable for the mapping procedure to be implemented. It turns out that, among the products considered, the DBMS IBM Informix offers the best support for complex structured and collection types. To better assess the performance of the procedure, the thesis measures the running time and the working and database memory consumption of the implementation and evaluates the results.
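The heart of the approach is that SQL:2003's ROW and collection types can hold an XML tree without shredding it. As a language-neutral illustration (Python tuples and lists standing in for SQL:2003 ROW and LIST types; a deliberately simplified mapping that ignores tail text and mixed content), a hierarchical, reversible mapping looks like this:

```python
import xml.etree.ElementTree as ET

def to_nested(elem):
    # Element -> (tag, attributes, text, children): the analogue of a
    # SQL:2003 ROW whose last field is a LIST of child rows.
    return (elem.tag,
            dict(elem.attrib),
            (elem.text or "").strip(),
            [to_nested(child) for child in elem])

def to_xml(node):
    # The inverse mapping, mirroring the thesis's requirement that the
    # conversion be reversible.
    tag, attrib, text, children = node
    elem = ET.Element(tag, attrib)
    elem.text = text or None
    elem.extend(to_xml(c) for c in children)
    return elem

doc = ET.fromstring('<book lang="de"><title>XML und SQL:2003</title></book>')
nested = to_nested(doc)
assert ET.tostring(to_xml(nested)) == ET.tostring(doc)
```

Because children stay nested inside their parent row, reading a subtree needs no joins over normalized tables.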

Relevance: 60.00%

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation, less than say 1 m high, yet such vegetation typically covers most of a floodplain. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed, as will the related problem of how best to merge historic river cross-section data with a LiDAR DTM.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5 m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
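The height-dependent friction idea lends itself to a very short sketch (the height thresholds and Manning's n values below are illustrative assumptions, not calibrated figures from the work described):

```python
import numpy as np

def friction_from_vegetation(veg_height):
    """Map a LiDAR vegetation-height grid (m) to spatially varying Manning's n."""
    n = np.full(veg_height.shape, 0.035)   # bare ground / negligible cover
    n[veg_height > 0.1] = 0.06             # short vegetation (height texture)
    n[veg_height > 1.0] = 0.10             # hedges and shrubs
    n[veg_height > 5.0] = 0.15             # trees
    return n

veg = np.array([[0.05, 0.4],
                [2.0,  9.0]])             # metres, from the vegetation height map
print(friction_from_vegetation(veg))      # one n per cell, no global calibration
```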

Relevance: 60.00%

Abstract:

Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. Beyond comparison of the spatial extent, this now allows direct comparison between modelled and observed water surface elevations. Using a 12.5 m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline, a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extents of the modelled and observed flood can be compared with a measure of fit between the two binary patterns of flooding. Water heights are extracted at intervals of approximately 100 m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5 m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
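A common choice for the "measure of fit between the two binary patterns" is the ratio of correctly predicted wet cells to the union of all wet cells; the paper's exact statistic is not given in the abstract, so the sketch below should be read as one plausible formulation:

```python
import numpy as np

def fit_measure(modelled, observed):
    """Compare two boolean wet/dry grids of equal shape and resolution."""
    both_wet = np.logical_and(modelled, observed).sum()
    either_wet = np.logical_or(modelled, observed).sum()
    return both_wet / either_wet   # 1.0 = perfect overlap, 0.0 = disjoint

model = np.array([[1, 1, 0],
                  [1, 0, 0]], dtype=bool)   # modelled inundation
sat   = np.array([[1, 1, 1],
                  [0, 0, 0]], dtype=bool)   # extent extracted from the image
print(fit_measure(model, sat))              # 2 agreeing wet cells of 4 -> 0.5
```

Calibration then amounts to re-running the model over a range of friction values and keeping the value that maximizes this score (or, for the second approach, that best matches the extracted water surface elevations under the t-test).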

Relevance: 60.00%

Abstract:

This paper discusses the requirements on numerical precision for a practical Multiband Ultra-Wideband (UWB) consumer electronics solution. To this end, we first present the possibilities that UWB has to offer to the consumer electronics market and the possible range of devices. We then show the performance of a model of the UWB baseband system implemented using floating-point precision. Then, by simulation, we find the minimal numerical precision required to maintain floating-point performance for each of the specific data types and signals present in the UWB baseband. Finally, we present a full description of the numerical requirements for both the transmit and receive components of the UWB baseband. The numerical precision results obtained in this paper can be used by baseband designers to implement cost-effective UWB systems using System-on-Chip (SoC), FPGA and ASIC technology solutions aimed at the competitive consumer electronics market.
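The core simulation loop can be sketched as follows: quantize the floating-point signal to a candidate fixed-point format and check how much performance survives. This is a generic illustration of the procedure (a Gaussian stand-in signal and arbitrary word lengths, not the paper's UWB waveforms or final bit widths):

```python
import numpy as np

def quantize(x, int_bits, frac_bits):
    """Round x to a signed fixed-point format with the given field widths."""
    scale = 2.0 ** frac_bits
    lo = -2.0 ** int_bits
    hi = 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

rng = np.random.default_rng(0)
signal = 0.3 * rng.standard_normal(10_000)   # stand-in for baseband samples
for frac_bits in (4, 6, 8, 10):
    err = signal - quantize(signal, int_bits=1, frac_bits=frac_bits)
    sqnr = 10 * np.log10(np.mean(signal**2) / np.mean(err**2))
    print(f"{frac_bits} fractional bits: SQNR = {sqnr:.1f} dB")
```

In a real study the figure of merit would be end-to-end (e.g. bit error rate of the full baseband model) rather than raw quantization noise, evaluated per data type and signal as the paper describes.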

Relevance: 60.00%

Abstract:

Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and the accuracy of topographic data. Here, the areas at risk of seawater flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK Climate Projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and the DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and of the uncertainties in DEM-based bathtub-type flood inundation modelling for London boroughs.
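A tidally-adjusted bathtub model is conceptually simple: a cell floods only if it lies below the (tidally adjusted) water level and is hydraulically connected to the sea. A minimal sketch (tiny DEM, 4-neighbour connectivity, our own function names):

```python
from collections import deque
import numpy as np

def bathtub(dem, water_level, sea_cells):
    """Flood-fill from the sea: below water level AND connected to the sea."""
    below = dem < water_level
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque(c for c in sea_cells if below[c])
    for c in queue:
        flooded[c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]
                    and below[rr, cc] and not flooded[rr, cc]):
                flooded[rr, cc] = True
                queue.append((rr, cc))
    return flooded

dem = np.array([[0.2, 1.5, 0.3],
                [0.4, 1.8, 0.1],
                [0.5, 2.0, 0.2]])   # elevations (m) relative to datum
print(bathtub(dem, water_level=1.0, sea_cells=[(0, 0)]))
# The right-hand column stays dry: it is below the water level but cut off
# from the sea by the high central ridge - the connectivity constraint that
# separates this from a plain bathtub model.
```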

Relevance: 60.00%

Abstract:

Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
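The projection step can be illustrated with any similarity-preserving dimensionality reduction; the sketch below uses classical MDS from scikit-learn as a stand-in (PEx-Image's own projection techniques may differ), with random vectors in place of real image and text features:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
image_features = rng.normal(0.0, 1.0, size=(20, 16))  # e.g. colour histograms
text_features = rng.normal(3.0, 1.0, size=(20, 16))   # e.g. tf-idf, reduced to
                                                      # the same vector length

# Once both modalities share one feature space, a single projection lays
# them out on a common plane - similar items land close together.
data = np.vstack([image_features, text_features])
layout = MDS(n_components=2, random_state=0).fit_transform(data)
print(layout.shape)   # (40, 2): one 2-D point per image or document
```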

Relevance: 60.00%

Abstract:

The estimation of a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types and are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends both the Box-Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models, and another simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data, extending the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
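For reference, the two ingredients being combined are the Box-Cox transformation and the GLM linear predictor; in standard notation (the paper's exact parameterization may differ):

```latex
\[
  y^{(\lambda)} =
  \begin{cases}
    \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt]
    \log y, & \lambda = 0,
  \end{cases}
  \qquad
  g\bigl(\mathrm{E}\bigl[y^{(\lambda)}\bigr]\bigr) = x^{\top}\beta ,
\]
```

so the transformation parameter \(\lambda\) is estimated jointly with the regression coefficients \(\beta\) by maximum likelihood.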

Relevance: 60.00%

Abstract:

For the first time, we introduce a class of transformed symmetric models that extends the Box and Cox models to more general symmetric models. The new class includes all symmetric continuous distributions with a possibly non-linear structure for the mean and enables the fitting of a wide range of models to several data types. The proposed methods offer more flexible alternatives to Box-Cox and other existing procedures. We derive a very simple iterative process for fitting these models by maximum likelihood, whereas a direct unconditional maximization would be more difficult. We give simple formulae to estimate the parameter that indexes the transformation of the response variable and the moments of the original dependent variable, generalizing previously published results. We discuss inference on the model parameters. The usefulness of the new class of models is illustrated in an application to a real dataset.
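A standard way to write the symmetric family underlying such models (our notation; the paper's definitions may vary) is via a density generator \(g\):

```latex
\[
  f_{y^{(\lambda)}}(v) \;=\; \frac{1}{\sqrt{\phi}}\;
  g\!\left(\frac{(v - \mu)^{2}}{\phi}\right), \qquad v \in \mathbb{R},
\]
```

where \(g(u) \propto e^{-u/2}\) recovers the normal and heavier-tailed choices give Student-t, logistic and similar distributions, \(\mu\) may carry a non-linear regression structure, and \(\lambda\) indexes the Box-Cox-type transformation of the response.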

Relevance: 60.00%

Abstract:

Macrophytes are important primary producers in aquatic ecosystems, and an imbalance in the environment can lead to their accelerated growth. Surveys of data on submerged macrophytes are therefore important contributions to the management of water bodies. However, sampling this vegetation requires enormous physical effort, and the hydroacoustic technique is well suited to the study of submerged macrophytes. The objectives of this work were thus to evaluate the types of data generated by an echosounder and to analyze how these data characterize the vegetation. A BioSonics DT-X echosounder coupled to a GPS was used. The study area is a stretch of the Uberaba River, Minas Gerais, Brazil. Sampling was carried out along transects, navigating from one bank to the other. After processing the data, information was obtained on the occurrence of submerged macrophytes, depth, mean plant height, percentage of vegetation cover and position. From this data set it was possible to extract two further metrics: biovolume and effective canopy height (ECH). The data were imported into a Geographic Information System, and illustrative maps of the studied variables were generated. In addition, four profiles were selected to analyze the differences between the macrophyte representation measures. The echosounder proved to be an effective tool for mapping submerged macrophytes. Each of the measures (canopy height, ECH or biovolume) characterizes the submerged vegetation differently, so the choice of representation depends on the intended application.
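Using common definitions of the two derived metrics (the abstract does not give the exact formulas, so these are assumptions), biovolume and effective canopy height can be computed from per-ping echosounder output as follows:

```python
import numpy as np

def biovolume(plant_height, depth):
    """Percent of the water column occupied by plants, averaged over pings."""
    return float(np.mean(plant_height / depth) * 100.0)

def effective_canopy_height(plant_height, coverage_fraction):
    """Mean plant height discounted by the fraction of vegetated bottom (0-1)."""
    return float(np.mean(plant_height) * coverage_fraction)

heights = np.array([0.8, 1.2, 0.0, 0.5])  # m, one value per ping
depths = np.array([2.0, 2.4, 2.1, 1.9])   # m
print(biovolume(heights, depths))               # ~29% of the water column
print(effective_canopy_height(heights, 0.75))   # ~0.47 m
```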

Relevance: 60.00%

Abstract:

Smart card applications represent a growing market. Usually this kind of application manipulates and stores critical information that requires some level of security, such as financial or confidential data. The quality and trustworthiness of smart card software can be improved through a rigorous development process that embraces formal techniques of software engineering. In this work we propose the BSmart method, a specialization of the B formal method dedicated to the development of smart card Java Card applications. The method describes how a Java Card application can be generated from a B refinement process of its formal abstract specification. The development is supported by a set of tools that automates the generation of some required refinements and the translation to the Java Card client (host) and server (applet) applications. With respect to verification, the method's development process was formalized and verified in the B method, using the Atelier B tool [Cle12a]. We emphasize that the Java Card application is translated from the last stage of refinement, named implementation. This translation process was specified in ASF+SDF [BKV08], describing the grammar of both languages (SDF) and the code transformations through rewrite rules (ASF). This specification was an important support during the translator's development and contributes to the tool's documentation. We also emphasize the KitSmart library [Dut06, San12], an essential component of BSmart, containing models of all 93 classes/interfaces of the Java Card API 2.2.2, of Java/Java Card data types, and of machines that can be useful to the specifier but are not part of the standard Java Card library. In order to validate the method, its tool support and KitSmart, we developed an electronic passport application following the BSmart method. We believe that the results achieved in this work contribute to Java Card development, allowing the generation of complete Java Card applications (client and server components) that are less subject to errors.

Relevance: 60.00%

Abstract:

Satellite remote sensing of ocean colour is the only method currently available for synoptically measuring wide-area properties of ocean ecosystems, such as phytoplankton chlorophyll biomass. Recently, a variety of bio-optical and ecological methods have been established that use satellite data to identify and differentiate between either phytoplankton functional types (PFTs) or phytoplankton size classes (PSCs). In this study, several of these techniques were evaluated against in situ observations to determine their ability to detect dominant phytoplankton size classes (micro-, nano- and picoplankton). The techniques are applied to a 10-year ocean-colour data series from the SeaWiFS satellite sensor and compared with in situ data (6504 samples) from a variety of locations in the global ocean. Results show that spectral-response, ecological and abundance-based approaches can all perform with similar accuracy. Detection of microplankton and picoplankton was generally better than detection of nanoplankton. Abundance-based approaches were shown to provide better spatial retrieval of PSCs. Individual model performance varied according to PSC, input satellite data sources and in situ validation data types. Uncertainty in the comparison procedure and data sources was considered. Improved availability of in situ observations would aid ongoing research in this field.
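An abundance-based approach can be sketched as a three-component model in which size-class chlorophyll saturates with total chlorophyll C (in the spirit of published three-component models; the parameter values below are illustrative placeholders, not the fitted coefficients evaluated in the paper):

```python
import numpy as np

def size_class_fractions(C, Cpn_max=1.06, Spn=0.85, Cp_max=0.11, Sp=6.8):
    """Return (micro, nano, pico) fractions of total chlorophyll C (mg m^-3)."""
    C = np.asarray(C, dtype=float)
    C_pn = Cpn_max * (1.0 - np.exp(-Spn * C))  # combined pico + nano
    C_p = Cp_max * (1.0 - np.exp(-Sp * C))     # picoplankton alone
    C_m = C - C_pn                             # microplankton takes the rest
    return C_m / C, (C_pn - C_p) / C, C_p / C

for chl in (0.05, 0.3, 3.0):
    micro, nano, pico = size_class_fractions(chl)
    print(f"C={chl}: micro={micro:.2f} nano={nano:.2f} pico={pico:.2f}")
# Picoplankton dominate the oligotrophic case and microplankton the
# eutrophic one, matching the qualitative behaviour such models encode.
```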

Relevance: 60.00%

Abstract:

Graduate Program in Mechanical Engineering - FEIS