978 results for tecnologie web rest restful database
Abstract:
In his discussion, Database As A Tool For Hospitality Management, William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, offers at the outset, "Database systems offer sweeping possibilities for better management of information in the hospitality industry. The author discusses what such systems are capable of accomplishing." The author opens with a bit of background on database system development, which also lends an impression as to the complexion of the rest of the article; it is a shade technical. "In early 1981, Ashton-Tate introduced dBase II. It was the first microcomputer database management processor to offer relational capabilities and a user-friendly query system combined with a fast, convenient report writer," O'Brien informs. "When 16-bit microcomputers such as the IBM PC series were introduced late the following year, more powerful database products followed: dBase III, Friday!, and Framework. The effect on the entire business community, and the hospitality industry in particular, has been remarkable," he adds. Professor O'Brien offers a few anecdotal situations to illustrate how much a comprehensive database system means to a hospitality operation, especially when billing is involved. Although attitudes about computer systems, as well as the systems themselves, have changed since this article was written, there is pertinent, fundamental information to be gleaned. Regarding the loss of the personal touch when a customer is engaged with a computer system, O'Brien says, "A modern data processing system should not force an employee to treat valued customers as numbers…" He also cautions, "Any computer system that decreases the availability of the personal touch is simply unacceptable." On a system's ability to process information, O'Brien suggests that in the past businesses were so enamored with simply having an automated system that they failed to take full advantage of its capabilities; a great deal of savings, in time and money, went unnoticed and/or under-appreciated. Today, everyone has an integrated system, and the wise business manager is the one who takes full advantage of all his resources. O'Brien invokes the 80/20 rule, and offers, "…the last 20 percent of results costs 80 percent of the effort. But times have changed. Everyone is automating data management, so that last 20 percent that could be ignored a short time ago represents a significant competitive differential." The evolution of data systems takes center stage for much of the article; pitfalls also emerge.
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between the applications and the sites. Data Extractor utilizes a two-fold "custom wrapper" approach for information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites: the approach provides accuracy of data retrieval, along with power and flexibility in handling complex cases.
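As a rough illustration only (the thesis's own wrappers use its scripting language and a purpose-built Java library, neither of which is shown here), a minimal Python sketch of a per-site "custom wrapper" that fetches a page, parses its HTML tables and returns rows might look like this; the site URL and path are hypothetical:

    import urllib.request
    from html.parser import HTMLParser

    class TableRowParser(HTMLParser):
        """Collects the text of every <td> cell, grouped by <tr> row."""
        def __init__(self):
            super().__init__()
            self.rows, self._row, self._in_cell = [], [], False
        def handle_starttag(self, tag, attrs):
            if tag == "tr":
                self._row = []
            elif tag == "td":
                self._in_cell = True
        def handle_endtag(self, tag):
            if tag == "td":
                self._in_cell = False
            elif tag == "tr" and self._row:
                self.rows.append(self._row)
        def handle_data(self, data):
            if self._in_cell and data.strip():
                self._row.append(data.strip())

    class SiteWrapper:
        """Treats one Web site as a data source: a query returns rows of cell values."""
        def __init__(self, base_url):
            self.base_url = base_url  # hypothetical site address
        def query(self, path):
            html = urllib.request.urlopen(self.base_url + path).read().decode("utf-8", "replace")
            parser = TableRowParser()
            parser.feed(html)
            return parser.rows

    # Hypothetical usage: rows = SiteWrapper("http://example.org").query("/listings")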
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches; however, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-Performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas, shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query-processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
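As a toy illustration of the general global-view idea only (not HPDRC's Sem-ODM model or its Semantic SQL machinery), the following Python sketch with hypothetical component schemas shows how attributes mapped to shared ontology terms let one global query be rewritten against each source and the results combined:

    # Hypothetical component schemas, each mapping its own attribute names
    # to shared ontology terms used by the global view.
    COMPONENT_MAPPINGS = {
        "hotels_db":  {"guest_name": "customer", "room_rate": "price"},
        "flights_db": {"passenger":  "customer", "fare":      "price"},
    }

    def rewrite(global_attrs, source):
        """Translate global (ontology) attribute names into a component source's own names."""
        inverse = {term: local for local, term in COMPONENT_MAPPINGS[source].items()}
        return [inverse[a] for a in global_attrs]

    def global_query(global_attrs, fetch):
        """Run the rewritten query against every component source and union the results."""
        rows = []
        for source in COMPONENT_MAPPINGS:
            rows += fetch(source, rewrite(global_attrs, source))
        return rows

    # 'fetch' stands in for each source's local query interface; here it is a stub
    # that just echoes which local attributes would be requested from which source.
    print(global_query(["customer", "price"], lambda src, attrs: [(src, attrs)]))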
Abstract:
The thesis deals with a web service capable of extracting data from an Oracle database used by an existing management application, in order to output the data to a portal that citizens can connect to and view them. The management application handles the enforced-collection phase, that is, when a taxpayer does not pay a fine or any other tax; until now it was not possible to show the data on a taxpayer's enforced-collection status online. Thanks to the web service I created, a portal will be able to show these data to the taxpayer. Moreover, thanks to this web service, other portals or smartphone apps will also be able to connect to it in the future to provide additional services to citizens.
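Purely as an illustrative sketch of this kind of read-only service (the thesis's actual endpoints, tables and column names are not shown; those used below are hypothetical placeholders), a minimal Python/Flask endpoint querying an Oracle database could look like this:

    # Minimal read-only REST sketch: expose a taxpayer's enforced-collection records
    # from an existing Oracle database. All names below are hypothetical placeholders.
    import oracledb                      # assumes the python-oracledb driver is installed
    from flask import Flask, jsonify

    app = Flask(__name__)

    def get_connection():
        # Connection details stand in for the real management-system database.
        return oracledb.connect(user="portal_ro", password="secret", dsn="dbhost/ORCLPDB1")

    @app.route("/taxpayers/<fiscal_code>/collections")
    def collections(fiscal_code):
        with get_connection() as conn:
            cur = conn.cursor()
            cur.execute(
                "SELECT notice_id, description, amount_due, due_date "
                "FROM enforced_collections WHERE fiscal_code = :fc",   # hypothetical table
                fc=fiscal_code,
            )
            cols = [d[0].lower() for d in cur.description]
            return jsonify([dict(zip(cols, row)) for row in cur.fetchall()])

    if __name__ == "__main__":
        app.run()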
Abstract:
This brief note announces the publication of a nonmetric cranial trait database as a freely available resource on the Internet at: http://library.queensu.ca/web-doc/ssdc/cntd. The files were constructed in the program Excel, and are available also in comma-delimited format. These one-observer data on 38 traits were recorded in 1963–2003 in skeletal collections curated at many museums. The 8,016 crania represent individuals from a broad geographic and temporal range of recent human populations, with regions best represented being the Arctic and northwestern North America. Am J Phys Anthropol 152:551–553, 2013.
Abstract:
Photosynthetic eukaryotes have a critical role as the main producers in most ecosystems of the biosphere. The ongoing environmental metabarcoding revolution opens the perspective for holistic ecosystems biological studies of these organisms, in particular the unicellular microalgae that often lack distinctive morphological characters and have complex life cycles. To interpret environmental sequences, metabarcoding necessarily relies on taxonomically curated databases containing reference sequences of the targeted gene (or barcode) from identified organisms. To date, no such reference framework exists for photosynthetic eukaryotes. In this study, we built the PhytoREF database that contains 6490 plastidial 16S rDNA reference sequences that originate from a large diversity of eukaryotes representing all known major photosynthetic lineages. We compiled 3333 amplicon sequences available from public databases and 879 sequences extracted from plastidial genomes, and generated 411 novel sequences from cultured marine microalgal strains belonging to different eukaryotic lineages. A total of 1867 environmental Sanger 16S rDNA sequences were also included in the database. Stringent quality filtering and a phylogeny-based taxonomic classification were applied for each 16S rDNA sequence. The database mainly focuses on marine microalgae, but sequences from land plants (representing half of the PhytoREF sequences) and freshwater taxa were also included to broaden the applicability of PhytoREF to different aquatic and terrestrial habitats. PhytoREF, accessible via a web interface (http://phytoref.fr), is a new resource in molecular ecology to foster the discovery, assessment and monitoring of the diversity of photosynthetic eukaryotes using high-throughput sequencing.
Abstract:
The work developed stems from the creation, during an internship, of a small database, built starting from the search for data through the selection of relevant information and its subsequent storage. The aim of the dissertation is to broaden the basic knowledge held about the world of information from a management point of view. Indeed, considering today's scenario, studying the customer through relevant information of various kinds is one of the fundamental skills in the world of management engineering. The study method is based on understanding the different types of data present in the business world and, consequently, their link with the world of the web and above all with the most modern storage methods most widely used today both by companies and by private individuals: cloud platforms. The dissertation is divided into three different but closely related topics: the first part deals with how the most basic information should be collected and analysed; the central section concerns the key theme of the Internet as a means of storage, and no longer just as a platform for searching for data; while the final chapter clarifies the concept of cloud computing, convenient, fast and efficient, which for some years has been considered the meeting point between the first two topics. Specifically, some real-world applications of the cloud by companies such as Amazon, Google and Facebook are presented, multinationals that have managed to turn the storage and manipulation of data, for industrial purposes, into one of their sources of revenue. The result is an overview of how information works and of the techniques for using it, starting from the most trivial piece of data and going all the way to the shared databases used, if not outright controlled, by the best-known national and international companies.
Abstract:
Artificial Immune Systems have been used successfully to build recommender systems for film databases. In this research, an attempt is made to extend this idea to web site recommendation. A collection of more than 1000 individuals' web profiles (alternatively called preferences / favourites / bookmarks file) will be used. URLs will be classified using the DMOZ (Directory Mozilla) database of the Open Directory Project as our ontology. This will then be used as the data for the Artificial Immune Systems rather than the actual addresses. The first attempt will involve using a simple classification code number coupled with the number of pages within that classification code. However, this implementation does not make use of the hierarchical tree-like structure of DMOZ. Consideration will then be given to the construction of a similarity measure for web profiles that makes use of this hierarchical information to build a better-informed Artificial Immune System.
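As a rough sketch of how DMOZ's tree structure could inform such a similarity measure (an illustration under simple assumptions, not the measure the research constructs), two categories can be treated as closer the longer their shared path prefix, and two profiles compared by best-matching categories:

    def category_similarity(cat_a: str, cat_b: str) -> float:
        """Shared-prefix similarity between two DMOZ paths like 'Top/Computers/Internet'."""
        a, b = cat_a.split("/"), cat_b.split("/")
        shared = 0
        for x, y in zip(a, b):
            if x != y:
                break
            shared += 1
        return 2 * shared / (len(a) + len(b))

    def profile_similarity(profile_a: list[str], profile_b: list[str]) -> float:
        """Average, over profile A's categories, of the best match found in profile B."""
        if not profile_a or not profile_b:
            return 0.0
        best = [max(category_similarity(a, b) for b in profile_b) for a in profile_a]
        return sum(best) / len(best)

    # Hypothetical bookmark classifications:
    print(profile_similarity(["Top/Computers/Internet/Web_Design"],
                             ["Top/Computers/Internet/Protocols", "Top/Arts/Music"]))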
Abstract:
SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. This is done by an intruder re-crafting a web form's input and the query strings used in web requests with malicious intent, so as to compromise the security of an organisation's confidential data stored in the back-end database. The database is the most valuable data source, and thus intruders are unrelenting in constantly evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for pre-processing SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known issue in applying artificial intelligence to effectively address ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical-attribute encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA), combined with a proxy and an SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be prevented from reaching the target back-end database. This paper evaluates the performance metrics of a dataset obtained by the numerical encoding of features ontology in Microsoft Azure Machine Learning (MAML) Studio, using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
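A heavily simplified Python sketch of the shape of such a pipeline is shown below; it is not the paper's encoder, NFA matcher or MAML experiment, scikit-learn's SVC merely stands in for the Two-Class Support Vector Machine used there, and the requests and labels are made up for illustration:

    # Map each web request string to numeric attributes, then train a two-class SVM.
    import re
    from sklearn.svm import SVC

    SQL_TOKENS = ["select", "union", "insert", "drop", "or", "and", "--", ";", "'", "\""]

    def encode(request: str) -> list[float]:
        """Numeric attributes: length, token counts, and a crude tautology flag."""
        lowered = request.lower()
        features = [float(len(request))]
        features += [float(lowered.count(tok)) for tok in SQL_TOKENS]
        features.append(1.0 if re.search(r"or\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+", lowered) else 0.0)
        return features

    # Tiny illustrative training set (hypothetical requests).
    requests = ["id=42", "name=alice", "id=1' OR '1'='1", "q=1 UNION SELECT password FROM users"]
    labels = [0, 0, 1, 1]   # 0 = legitimate, 1 = SQL injection

    clf = SVC(kernel="linear").fit([encode(r) for r in requests], labels)
    print(clf.predict([encode("id=7"), encode("id=0 OR 1=1 --")]))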
Abstract:
Repeat photography is an efficient, effective and useful method to identify trends of change in landscapes, and it has long been used to illustrate long-term changes occurring in them. In the Northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment and by agriculture and energy policy. However, there is a need to monitor changes in the region using a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography from the region, to be used to register the condition of the landscape as recorded in historical and contemporary photography over time, as well as to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services using the PHP: Hypertext Preprocessor language, and the development of an interface, built with Joomla, through which users upload and download pictures. The repository makes it possible to upload, store, search (by location, theme, or date), display, and download pictures for Northeastern Portugal. The website is intended to help researchers quickly obtain, through the search engine developed, the photographs needed to apply repeat photography. It can be accessed at: http://esa.ipb.pt/digitalandscape/.
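To illustrate the relational core of such a repository only (the project itself uses PHP and its own schema; the table, columns and data below are invented for the example), a small Python/SQLite sketch of storing photo metadata and searching by location, theme or date:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE photo (
            id INTEGER PRIMARY KEY,
            location TEXT,      -- place name or site code
            theme TEXT,         -- e.g. 'land cover'
            taken_on TEXT,      -- date of the original or repeat photograph (YYYY-MM-DD)
            file_path TEXT      -- where the uploaded image file is stored
        )
    """)
    conn.execute("INSERT INTO photo (location, theme, taken_on, file_path) VALUES (?, ?, ?, ?)",
                 ("Braganca", "land cover", "1985-07-01", "/uploads/braganca_1985.jpg"))

    def search(location=None, theme=None, year=None):
        """Search pictures by any combination of location, theme and year."""
        sql, args = "SELECT id, location, theme, taken_on FROM photo WHERE 1=1", []
        if location:
            sql += " AND location = ?"
            args.append(location)
        if theme:
            sql += " AND theme = ?"
            args.append(theme)
        if year:
            sql += " AND strftime('%Y', taken_on) = ?"
            args.append(str(year))
        return conn.execute(sql, args).fetchall()

    print(search(location="Braganca", year=1985))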
Abstract:
Replication of eukaryotic chromosomes initiates at multiple sites called replication origins. Replication origins are best understood in the budding yeast Saccharomyces cerevisiae, where several complementary studies have mapped their locations genome-wide. We have collated these datasets, taking account of the resolution of each study, to generate a single list of distinct origin sites. OriDB provides a web-based catalogue of these confirmed and predicted S. cerevisiae DNA replication origin sites. Each proposed or confirmed origin site appears as a record in OriDB, with each record comprising seven pages. These pages provide, in text and graphical formats, the following information: genomic location and chromosome context of the origin site; time of origin replication; DNA sequence of proposed or experimentally confirmed origin elements; free energy required to open the DNA duplex (stress-induced DNA duplex destabilization or SIDD); and phylogenetic conservation of sequence elements. In addition, OriDB encourages community submission of additional information for each origin site through a User Notes facility. Origin sites are linked to several external resources, including the Saccharomyces Genome Database (SGD) and relevant publications at PubMed. Finally, a Chromosome Viewer utility allows users to interactively generate graphical representations of DNA replication data genome-wide. OriDB is available at www.oridb.org.
Abstract:
Biometric factors for recognition are today one of the emblematic elements of innovation. In this thesis I propose a web implementation of Facial Recognition, the purpose of which is to show the possible applications of this technology in the banking sector.
Abstract:
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization and the resource management of such services are becoming increasingly important to deliver user-desired Quality of Service. First, spatial indexing is typically time-consuming and is not available to end-users. To address this, we introduce TerraFly sksOpen, an open-sourced Online Indexing and Querying System for Big Geospatial Data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that can efficiently support many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share the analysis results. TerraFly GeoCloud also enables the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and to optimize resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly can accurately predict workload demands (18.91% more accurately) and efficiently allocate resources to meet the QoS target, improving QoS by 26.19% and saving resource usage by 20.83% compared to traditional peak-load-based resource allocation.
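As a purely illustrative Python sketch of what a Top-k Spatial Boolean Query computes (this is not sksOpen's index structure or algorithm, and the points of interest below are invented), candidate points are filtered by a Boolean keyword predicate and the k nearest to the query location are returned:

    import heapq
    import math

    def topk_spatial_boolean(points, query_xy, required, excluded, k):
        """points: iterable of (x, y, keyword_set); returns the k nearest points whose
        keywords contain all of 'required' and none of 'excluded'."""
        qx, qy = query_xy
        matches = (
            (x, y, kw)
            for x, y, kw in points
            if required <= kw and not (excluded & kw)
        )
        return heapq.nsmallest(k, matches, key=lambda p: math.hypot(p[0] - qx, p[1] - qy))

    # Hypothetical points of interest: coordinates plus keyword sets.
    pois = [(1.0, 2.0, {"cafe", "wifi"}),
            (0.5, 0.5, {"cafe"}),
            (3.0, 1.0, {"cafe", "wifi", "parking"})]
    print(topk_spatial_boolean(pois, (0.0, 0.0), required={"cafe", "wifi"}, excluded=set(), k=2))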