942 results for Geospatial Data Model
Abstract:
Florida International University's Spring 2009 Map and User Imagery Services Newsletter.
Abstract:
Florida International University's Fall 2009 Map and User Imagery Services Newsletter.
Abstract:
Florida International University's Fall 2009 Map and User Imagery Services Newsletter; Vol. 3, issue 2.
Abstract:
Florida International University's Spring 2010 Map and User Imagery Services Newsletter.
Abstract:
Florida International University's Fall 2012 Map and User Imagery Services Newsletter.
Abstract:
Florida International University's Spring/Summer 2013 Map and User Imagery Services Newsletter.
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to XML storage and retrieval are limited either by immaturity (e.g. native approaches) or by inflexibility, heavy fragmentation, and excessive join operations (e.g. non-native approaches such as the relational database approach).

In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology for the emerging XML data model. First, a meta-schema based approach was implemented to address the data-model mismatch inherent in non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings; in this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, thus avoiding the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema with a Semantic Schema.

It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, are indeed beneficial in coping with XML storage and retrieval using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
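The navigation-oriented paradigm credited above with avoiding excessive joins can be illustrated with a small sketch. The classes, attribute names, and data below are hypothetical stand-ins, not the actual Sem-ODB or Sem-SQL API:

```python
class SemObject:
    """An object identified by a surrogate, with possibly multi-valued attributes."""
    def __init__(self, surrogate, **attrs):
        self.surrogate = surrogate
        self.attrs = attrs  # attribute name -> value, object, or list of these

    def nav(self, *path):
        """Navigate a path of attribute names, fanning out over multi-valued steps."""
        frontier = [self]
        for step in path:
            nxt = []
            for obj in frontier:
                val = obj.attrs.get(step, [])
                nxt.extend(val if isinstance(val, list) else [val])
            frontier = nxt
        return frontier

# <book><title>t</title><author><name>Doe</name></author>
#        <author><name>Roe</name></author></book> maps to one object whose
# 'author' attribute is multi-valued -- no shredding into element/value tables.
book = SemObject("b1",
                 title="Semantic Databases",
                 author=[SemObject("a1", name="Doe"),
                         SemObject("a2", name="Roe")])

# The XQuery path  doc("lib")/book/author/name  becomes attribute navigation
# instead of a multi-way join over shredded relational tables.
print(book.nav("author", "name"))  # ['Doe', 'Roe']
```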
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches; however, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-Performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i.) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii.) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas, shown to be correct, complete, and unambiguous; (iii.) a semi-automated technique for identifying semantic relations, the basis of the semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv.) resolutions for schematic conflicts and a language for defining global views over a set of component Sem-ODM schemas; (v.) the design of a knowledge base for storing and manipulating the meta-data and knowledge acquired during the integration process, acting as the interface between the integration and query-processing modules; (vi.) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii.) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
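Contribution (iii.), context mediation through a shared ontology, can be sketched in a few lines: constructs from two component schemas are linked to shared concepts, and constructs that denote the same concept become candidate semantic relations. The schema names and mappings below are illustrative assumptions, not the thesis's actual knowledge base:

```python
# Hypothetical component schemas: local construct name -> shared-ontology concept.
schema_a = {"worker": "Employee", "dept": "Department", "pay": "Salary"}
schema_b = {"staff_member": "Employee", "unit": "Department"}

def semantic_relations(a, b):
    """Pair constructs of two schemas that map to the same shared concept."""
    by_concept = {}
    for local, concept in a.items():
        by_concept.setdefault(concept, []).append("A." + local)
    return [(a_name, "B." + local, concept)
            for local, concept in b.items()
            for a_name in by_concept.get(concept, [])]

for a_name, b_name, concept in semantic_relations(schema_a, schema_b):
    print(f"{a_name} ~ {b_name}  (both denote {concept})")
# A.worker ~ B.staff_member  (both denote Employee)
# A.dept ~ B.unit  (both denote Department)
```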
Abstract:
The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models for predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results favored a novel hazard function definition that included both an ambient pressure scaling term and individually fitted compartment exponent scaling terms.
We developed ten pharmacokinetic compartmental models that include explicit delay mechanics to determine whether predictive quality could be improved by incorporating material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that including delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but both comparison with the best-performing model without delay and model selection against our best previously identified no-delay pharmacokinetic model indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions with a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
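The machinery these projects share, a compartment with exponential gas kinetics, a hazard on supersaturation, and maximum-likelihood parameter fitting, can be sketched as follows. The single-compartment form, toy dive profiles, and starting values are illustrative assumptions, not the dissertation's models or data:

```python
import numpy as np
from scipy.optimize import minimize

def p_dcs(params, p_amb, dt=1.0):
    """P(DCS) = 1 - exp(-integral of hazard) over one ambient-pressure profile."""
    gain, tau = np.abs(params)                # keep parameters positive in the search
    p_tissue, integral = p_amb[0], 0.0
    for pa in p_amb:
        p_tissue += (pa - p_tissue) * dt / tau           # exponential gas kinetics
        integral += gain * max(0.0, p_tissue - pa) * dt  # hazard on supersaturation
    return 1.0 - np.exp(-integral)

def neg_log_likelihood(params, profiles, outcomes):
    nll = 0.0
    for p_amb, hit in zip(profiles, outcomes):
        p = np.clip(p_dcs(params, p_amb), 1e-12, 1 - 1e-12)
        nll -= np.log(p) if hit else np.log(1.0 - p)
    return nll

# Two toy ambient-pressure profiles (ATA): a deep dive and a mild one.
deep = np.array([1.0] + [4.0] * 30 + [1.0] * 60)
mild = np.array([1.0] + [1.5] * 30 + [1.0] * 60)
profiles, outcomes = [deep, mild], [1, 0]   # 1 = DCS observed, 0 = no DCS

fit = minimize(neg_log_likelihood, x0=[0.01, 20.0],
               args=(profiles, outcomes), method="Nelder-Mead")
print(fit.x)  # fitted (gain, time constant)
```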
Abstract:
Knowledge organization in the context of Information Science is essentially concerned with information and with knowledge that has been duly documented or recorded. Knowledge organization as a process involves describing both the physical form and the contents of information objects, and the product of that descriptive process is the representation of the attributes of an object or set of objects. Representations are built with languages designed specifically for the purposes of organization in information systems; these languages divide into languages that describe the document (the physical carrier of the object) and languages that describe the information (the contents). On this premise, the general aim of this research is to analyze institutional information and knowledge management systems, chiefly those that propose to use the professor's curriculum vitae as the sole source for gathering, measuring, and representing an organization's information and knowledge. The main results include: the importance of using the personal curriculum as a reliable, standardized information source; a synthesis of the principal curricular systems in use internationally and regionally; a diagram of the data model of the case study; and, finally, a proposal to use ontologies as the main tool for the semantic organization of information in an information and knowledge management system.
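As a minimal sketch of the closing proposal, curriculum data can be expressed as ontology-backed triples and then queried as knowledge rather than as a document. The example uses rdflib as an assumption (any triple store would do), and the namespace and properties are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

CV = Namespace("http://example.org/cv#")  # hypothetical CV ontology
g = Graph()

prof = CV.professor_42
g.add((prof, RDF.type, FOAF.Person))
g.add((prof, FOAF.name, Literal("Jane Doe")))
g.add((prof, CV.authored, CV.paper_7))
g.add((CV.paper_7, CV.title, Literal("A Data Model for Institutional CVs")))

# The ontology makes the CV queryable: list the titles of a person's works.
q = """SELECT ?title WHERE { ?p <http://example.org/cv#authored> ?w .
                             ?w <http://example.org/cv#title> ?title . }"""
for row in g.query(q):
    print(row.title)
```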
Abstract:
Accessing databases through the universal query language SQL poses a considerable challenge for non-specialists. As a user-friendly alternative, a variety of visual query languages (VQLs) for classical PCs have therefore been investigated since the 1970s. The goal of this thesis is to develop and evaluate a generic VQL that enables gesture-based exploration of databases, at both the schema and the instance-data level, on mobile devices, tablets in particular. To that end, various presentation forms, query strategies, and visual hints for foreign-key relationships that support the user in navigating the data are examined. A requirements analysis showed that visualizing the data and their relationships in a space-saving nested NF2 representation is particularly advantageous. To control the database exploration, a suitable gesture language consisting of stroke, multitouch, and mid-air gestures is presented. The overall concept of presentation and gesture control was examined for practical feasibility with the GBXT prototype developed in this thesis, a platform-independent single-page application for various mobile devices built with JavaScript and HTML5/CSS3.
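The nested NF2 (non-first-normal-form) view the requirements analysis favored can be sketched by folding a flat foreign-key relation into its parent rows. Table and column names below are illustrative, not GBXT's actual schema:

```python
departments = [{"id": 1, "name": "Sales"}, {"id": 2, "name": "R&D"}]
employees = [{"name": "Ada", "dept_id": 2}, {"name": "Bob", "dept_id": 1},
             {"name": "Cora", "dept_id": 2}]

def nest(parents, children, key, fk, as_field):
    """Fold child rows under their parent row (1:N foreign key -> NF2 nesting)."""
    return [{**p, as_field: [c for c in children if c[fk] == p[key]]}
            for p in parents]

for row in nest(departments, employees, "id", "dept_id", "employees"):
    print(row)
# {'id': 1, 'name': 'Sales', 'employees': [{'name': 'Bob', 'dept_id': 1}]}
# {'id': 2, 'name': 'R&D', 'employees': [{'name': 'Ada', ...}, {'name': 'Cora', ...}]}
```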
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This study highlights the benefits of a methodical analysis of the base cartography that underpins the production of geological maps. In some regions, the published base maps are still tied to classical geodetic networks, and errors are frequently introduced when essential parameters such as the map projection and the geodetic datum are disregarded. With the systematic use of GPS devices and Geographic Information Systems in the preparation of geological maps, prior knowledge of the coordinate system to which the geospatial data must be adjusted is indispensable. In this case study, the differences and errors associated with acquiring coordinates in the geocentric datums WGS84 and SIRGAS2000 are residual, given the parameters of the base maps, the region of the globe, the field of work, and the scale, thereby minimizing the propagation of subsequent positioning and georeferencing errors.
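A minimal sketch with pyproj (an assumption; the study itself relied on GPS and GIS tooling) shows why the WGS84 vs SIRGAS2000 differences are residual: both datums are geocentric, and the standard EPSG transformation between them is effectively the identity. The sample point is invented:

```python
from pyproj import Transformer

# EPSG:4326 = WGS 84, EPSG:4674 = SIRGAS 2000 (both geographic, degrees).
t = Transformer.from_crs("EPSG:4326", "EPSG:4674", always_xy=True)

lon, lat = -47.93, -15.78          # an illustrative point in central Brazil
lon2, lat2 = t.transform(lon, lat)
print(lon2 - lon, lat2 - lat)      # differences at the numerical-noise level
```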
Proposal of new WikiSIG functionalities to support collaborative work in Geodesign
Abstract:
The emergence of Web 2.0 takes concrete form in new technologies (APIs, Ajax…), new practices (mashups, geotagging…), and new tools (wikis, blogs…). It rests chiefly on the principle of participation and collaboration. Within this dynamic, the spatial and cartographic Web, that is, the geospatial Web (or GeoWeb), is likewise undergoing profound technological and social transformations. The participatory GeoWeb 2.0 takes shape in particular through mashups of wikis and geobrowsers (ArgooMap, Geowiki, WikiMapia, etc.). The new applications born of these mashups are evolving toward more interactive forms of collective intelligence, but they do not take into account the specific requirements of collaborative work, in particular traceability management and dynamic access to the history of contributions. Geodesign is a new field, the fruit of the union of GIS and design, that allows a multidisciplinary team to work together. Given its emergent character, Geodesign is not yet well defined, and it requires an innovative theoretical basis as well as new tools, supports, technologies, and practices to meet its complex requirements. In this thesis we propose new WikiSIG (wiki-GIS) functionalities, built on the principles and technologies of the GeoWeb 2.0 and aimed in particular at supporting the collaborative dimension of the Geodesign process. The WikiSIG is equipped with wiki functionality dedicated to geospatial data (including its geometric component: shape and location), ensuring dynamic, documented management of object versions and access to those versions and their metadata, thereby facilitating collaborative work in Geodesign. We also propose deltification, the ability to compare two versions of a project and display the differences between them, and we discuss the relevance of some geoprocessing and sketching tools. The main contributions of this thesis are, on the one hand, to identify the needs, requirements, and constraints of the collaborative Geodesign process and, on the other, to propose new WikiSIG functionalities that best answer the collaborative dimension of that process. To this end, a theoretical framework is laid out in which we identify the requirements of collaborative Geodesign work and propose innovative WikiSIG functionalities, which are then formalized in UML diagrams. A software mock-up is developed to implement these functionalities, which are illustrated on a simulated case study treated as a proof of concept. The relevance of the proposed functionalities is finally validated by experts through a questionnaire and interviews. In summary, this thesis shows the importance of traceability management and of dynamic access to history in a Geodesign process, and proposes further functionalities such as deltification, multimedia support for argumentation, parameters qualifying the data produced, and collective decision-making by consensus.
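Deltification, comparing two versions of a project and surfacing the differences, can be sketched over versioned geospatial objects. The feature records below are hypothetical, not the WikiSIG data model:

```python
def deltify(v1, v2):
    """Return the delta between two versions, each a dict: feature id -> record."""
    ids1, ids2 = set(v1), set(v2)
    return {
        "added":    sorted(ids2 - ids1),
        "removed":  sorted(ids1 - ids2),
        "modified": sorted(i for i in ids1 & ids2 if v1[i] != v2[i]),
    }

v1 = {"parcel:1": {"geom": [(0, 0), (0, 1), (1, 1)], "use": "park"},
      "parcel:2": {"geom": [(2, 2), (2, 3), (3, 3)], "use": "housing"}}
v2 = {"parcel:1": {"geom": [(0, 0), (0, 2), (2, 2)], "use": "park"},  # reshaped
      "parcel:3": {"geom": [(5, 5), (5, 6), (6, 6)], "use": "school"}}

print(deltify(v1, v2))
# {'added': ['parcel:3'], 'modified': ['parcel:1'], 'removed': ['parcel:2']}
```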
Abstract:
This study investigates the influence of current-asset classes and the breakdown of tangibility as determinants of the capital structure of companies listed on the BM&FBOVESPA over the period 2008-2012. Two current-asset classes were composed, grouped by liquidity as considered by financial institutions when granting credit: current resources (cash, banks, and short-term financial investments) and operating items (inventories and trade receivables). Tangible assets were broken down into their main components pledged as loan collateral, such as Machinery & Equipment and Land & Buildings. To extend the analysis, three leverage metrics (book, financial, and market) were applied, and the sample was divided into the economic sectors adopted by BM&FBOVESPA. A dynamic panel data model estimated by two-step system GMM was used, given its robustness to endogeneity and to omitted-variable bias. The results suggest that current resources are determinants of capital structure, possibly because they act as proxies for financial solvency, their relationship with debt being positive; the sectoral analysis confirmed this result. Asset tangibility is inversely related to leverage. When disaggregated into its main components, the significant negative influence of machinery & equipment was most marked in the Industrial Goods sector, which shows that, on average, the assets most specific to a company's operating activities are associated with less use of third-party funds. As complementary results, leverage was found to be persistent, consistent with the static trade-off theory; specifically for financial leverage, persistence remains relevant when controlling for the lagged current-asset class variables. The proxy for growth opportunities, measured by the market-to-book ratio, shows a coefficient sign contrary to expectations. Firm size is positively related to debt, in line with the static trade-off theory. Profitability is the most consistent variable across all estimations, showing a strong, significant negative relationship with leverage, as the pecking order theory predicts.
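The persistence finding rests on regressing leverage on its own lag. As a minimal sketch on synthetic data, plain pooled OLS below stands in for the study's two-step system GMM (which additionally instruments the lag to handle endogeneity); all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years, rho = 200, 5, 0.6   # rho = assumed true persistence

lev = rng.uniform(0.2, 0.6, n_firms)  # initial leverage per firm
ys, lags = [], []
for _ in range(n_years):
    new = rho * lev + (1 - rho) * 0.4 + rng.normal(0, 0.03, n_firms)
    ys.append(new); lags.append(lev); lev = new

# Pooled OLS of leverage on lagged leverage (intercept + slope).
y, x = np.concatenate(ys), np.concatenate(lags)
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated persistence: {beta[1]:.3f}")  # close to rho = 0.6
```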