998 results for Web Warehouse (WW)
Abstract:
The process of building Data Warehouses (DW) is well known, with well-defined stages, but at the same time it is mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured for different domains through the selection of the web sources and the definition of data processing characteristics. A Business Process Management (BPM) System allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW we propose two levels of BPs: a configuration process that supports the selection of web sources and the definition of schemas and mappings, and a feeding process that takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.
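The two-level idea described in this abstract can be illustrated with a minimal sketch: a configuration step fixes the web sources, target schema and mappings, and a feeding step consumes that configuration to load the WW. All names (WWConfiguration, configure, feed) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-level (configuration + feeding) process for a
# flexible Web Warehouse. Names and structures are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WWConfiguration:
    sources: List[str]                             # selected web source URLs
    target_schema: Dict[str, str]                  # attribute name -> type
    mappings: Dict[str, Callable[[dict], dict]]    # source -> record transformer

def configure(sources, target_schema, mappings) -> WWConfiguration:
    """Configuration process: fix the sources, the schema and the mappings."""
    return WWConfiguration(sources, target_schema, mappings)

def feed(config: WWConfiguration,
         fetch: Callable[[str], List[dict]],
         load: Callable[[dict], None]) -> None:
    """Feeding process: extract from each configured source, map, and load."""
    for src in config.sources:
        for raw in fetch(src):                 # extraction from the web source
            load(config.mappings[src](raw))    # transform and load into the WW
```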
Abstract:
The management of main material prices in provincial highway project quotas suffers from lag and blindness. First, a Web-based framework for the provincial highway project quota data MIS and the main material price data warehouse was established. Then, concrete processes for predicting provincial highway project main material prices were put forward based on the BP neural network algorithm. After that, the standard BP algorithm, the BP network algorithm improved with an additional momentum term, and the BP network algorithm improved with a self-adaptive learning rate were compared in predicting highway project main material prices. The results indicate that it is feasible to predict highway main material prices using a BP neural network, and that the self-adaptive learning rate variant performs best.
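To make the comparison concrete, below is a compact sketch of the self-adaptive learning-rate BP variant the abstract mentions: a single-hidden-layer network trained by backpropagation, where the learning rate is raised when the training error falls and cut back when it rises. Hyperparameters, data shapes and function names are illustrative assumptions, not the authors' code.

```python
# Backpropagation with a self-adaptive learning rate (sketch, not the paper's code).
import numpy as np

def train_bp(X, y, hidden=8, epochs=2000, lr=0.05, up=1.05, down=0.7):
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, 1));          b2 = np.zeros(1)
    prev_err = np.inf
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                     # forward pass
        out = h @ W2 + b2
        err = float(np.mean((out - y) ** 2))
        lr = lr * up if err < prev_err else lr * down  # self-adaptive learning rate
        prev_err = err
        g_out = 2 * (out - y) / len(y)               # backward pass (gradients)
        gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
        g_h = (g_out @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ g_h;   gb1 = g_h.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2               # gradient-descent update
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2
```

The momentum variant compared in the paper would instead add a fraction of the previous weight update to the current one; only the learning-rate rule differs in the sketch above.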
Abstract:
DeepBlue is much more than just an orchestra. Its innovative approach to audience engagement led it to develop ESP, its Electronic Show Programme web app, which allows for real-time (synchronous) and delayed (asynchronous) audience interaction, customer feedback and research. The show itself is driven invisibly by a music technology operating system (currently QUT's Yodel) that allows the group to adapt to a wide range of performance venues and varied types of presentation. DeepBlue's community engagement program has enabled over 5,500 young musicians and community choristers to participate in professional productions, and it is also a cornerstone of DeepBlue's successful business model. The ESP mobile web app can be viewed at m.deepblue.net.au; if only the landing page is active, there is no show taking place or imminent. The ESP prototype has already been in use for 18 months. Imagine knowing what your audience really thinks, in real time, so you can track their feelings and thoughts through the show. The tool has been developed and used by the performing group DeepBlue since late 2012 in Australia and Asia (it has even been translated into Vietnamese), and it has largely superseded DeepBlue's real-time SMS communication during a show. It enables an event presenter or performance group to take the pulse of an audience through a series of targeted questions that can be anonymous or attributed, helping to build better, longer-lasting, and more meaningful relationships with groups and individuals in the community. It can be used on a tablet, a mobile phone or future platforms. Three organisations are trialling it so far.
Abstract:
This short position paper considers issues in developing Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper argues strongly against reinventing the wheel and for reusing approaches to distributed heterogeneous data architectures, together with the lessons learned from that work, in this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture, based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
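The flow the abstract outlines can be sketched very schematically: a locally stored recording is analysed by a web-service agent against an ontology, verified semi-autonomously, signed off, and only then written to the central outcome database. All names below are hypothetical and stand in for the DECADE components, not its actual implementation.

```python
# Schematic sketch of the record -> analyse -> verify -> sign-off -> outcome flow.
from dataclasses import dataclass

@dataclass
class Recording:
    author: str
    payload: dict                     # autonomously captured activity data

def analyse(rec: Recording, ontology: dict) -> dict:
    """Analysis agent: map raw activity fields onto ontology concepts."""
    return {concept: rec.payload.get(key) for key, concept in ontology.items()}

def verify_and_sign_off(result: dict, verifier) -> bool:
    """Semi-autonomous verification followed by an explicit sign-off decision."""
    return all(v is not None for v in result.values()) and verifier(result)

def submit(rec: Recording, ontology: dict, verifier, outcome_db: list) -> None:
    result = analyse(rec, ontology)
    if verify_and_sign_off(result, verifier):
        outcome_db.append(result)     # centrally verified outcome record
```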
Abstract:
Revenue Management’s most cited definition is probably “to sell the right accommodation to the right customer, at the right time and the right price, with optimal satisfaction for customers and hoteliers”. Smart Revenue Management (SRM) is a project that aims at the development of smart automatic techniques for the efficient optimization of occupancy and rates of hotel accommodations, commonly referred to as revenue management. One of the objectives of this project is to demonstrate that the collection of Big Data, followed by an appropriate assembly of functionalities, makes it possible to generate the Data Warehouse necessary to produce high-quality business intelligence and analytics. This will be achieved through the collection of data extracted from a variety of sources, including the web. This paper proposes a three-stage framework to develop the Big Data Warehouse for the SRM: first, the compilation of all available information, which in the present case focuses only on the information extracted from the web by a web crawler (raw data); second, the storing of that raw data in a primary NoSQL database; and third, the conception of a set of functionalities, rules, principles and semantics to select, combine and store in a secondary relational database the information meaningful for Revenue Management (the Big Data Warehouse). The last stage is the principal focus of the paper. In this context, clues are also given on how to compile information for Business Intelligence. All these functionalities contribute to a holistic framework that, in the future, will make it possible to anticipate customers’ and competitors’ behavior, fundamental elements for fulfilling Revenue Management.
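The three stages can be illustrated with a small, assumption-laden sketch: crawler output is kept schema-less in a primary store, and selection rules load only the meaningful fields into a secondary relational warehouse. Table and field names (room_rates, hotel, price) are illustrative, not the SRM project's schema.

```python
# Sketch of the three-stage pipeline: crawl -> raw NoSQL store -> relational warehouse.
import json, sqlite3

def stage1_crawl(pages):
    """Stage 1: raw data as returned by a web crawler (here, pre-fetched JSON)."""
    return [json.loads(p) for p in pages]

def stage2_store_raw(docs, raw_store):
    """Stage 2: keep everything, schema-less, in the primary (NoSQL-like) store."""
    raw_store.extend(docs)

def stage3_load_warehouse(raw_store, conn):
    """Stage 3: apply selection rules and load the meaningful facts relationally."""
    conn.execute("CREATE TABLE IF NOT EXISTS room_rates "
                 "(hotel TEXT, checkin TEXT, price REAL)")
    for doc in raw_store:
        if "hotel" in doc and doc.get("price") is not None:     # selection rule
            conn.execute("INSERT INTO room_rates VALUES (?, ?, ?)",
                         (doc["hotel"], doc.get("checkin"), float(doc["price"])))
    conn.commit()

raw_store = []
conn = sqlite3.connect(":memory:")
stage2_store_raw(stage1_crawl(
    ['{"hotel": "H1", "checkin": "2024-07-01", "price": 120}']), raw_store)
stage3_load_warehouse(raw_store, conn)
```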
Abstract:
Thesis on the differences between the Semantic Web and the traditional Web.
Abstract:
Currently there is an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in Biological Databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB or GenBank), whose main drawback is the cost of updating, which quickly makes them obsolete. However, these Databases are the main tool for enterprises when they want to update their internal information, for example when a plant-breeding enterprise needs to enrich its genetic information (internal structured Database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement the internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information, combining traditional Database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantaneously compare internal data with external data from competitors, allowing quick strategic decisions to be taken based on richer data.
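The core integration idea can be sketched as follows: answer the same decision-maker question once from the internal structured Database/DW and once from an external QA system over web documents, then present both side by side. The schema, the question template and the qa_system callable are hypothetical placeholders, not the paper's framework.

```python
# Sketch: combine a structured internal lookup with an external QA answer.
import sqlite3

def internal_answer(conn: sqlite3.Connection, trait: str):
    """Structured lookup in the internal genetic database."""
    rows = conn.execute(
        "SELECT gene FROM gene_traits WHERE trait = ?", (trait,)).fetchall()
    return [gene for (gene,) in rows]

def external_answer(question: str, qa_system) -> list:
    """Unstructured lookup: delegate to a Question Answering system over web data."""
    return qa_system(question)          # assumed to return candidate gene names

def compare(conn: sqlite3.Connection, trait: str, qa_system) -> dict:
    """Return internal and external answers side by side for decision makers."""
    question = f"Which genes are related to {trait}?"
    return {"internal": internal_answer(conn, trait),
            "external": external_answer(question, qa_system)}
```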
Abstract:
This project examined the pathways of mercury (Hg) bioaccumulation and its relation to trophic position and hydroperiod in the Everglades. I described fish-diet differences across habitats and seasons by analyzing stomach contents of 4,000 fishes of 32 native and introduced species. Major foods included periphyton, detritus/algal conglomerate, small invertebrates, aquatic insects, decapods, and fishes. Florida gar, largemouth bass, pike killifish, and bowfin were at the top of the piscine food web. Using prey volumes, I quantitatively classified the fishes into trophic groups of herbivores, omnivores, and carnivores. Stable-isotope analysis of fishes and invertebrates gave an independent and similar assessment of trophic placement. Trophic patterns were similar to those from tropical communities. I tested for correlations of trophic position and total mercury. Over 4,000 fish, 620 invertebrate, and 46 plant samples were analyzed for mercury with an atomic-fluorescence spectrometer. Mercury varied within and among taxa. Invertebrates ranged from 25–200 ng g−1 ww. Small-bodied fishes varied from 78 to >400 ng g−1 ww. Large predatory fishes were highest, reaching a maximum of 1,515 ng g−1 ww. Hg concentrations in both fishes and invertebrates were positively correlated with trophic position. I examined the effects of season and hydroperiod on mercury in wild and caged mosquitofish at three pairs of marshes. Nine monthly collections of wild mosquitofish were analyzed. Hydroperiod-within-site significantly affected concentrations, but it interacted with sampling period. To control for wild-fish dispersal, and to measure in situ uptake and growth, I placed captive-reared, neonate mosquitofish with mercury levels of 7–14 ng g−1 ww into field cages in the six study marshes in six trials. Uptake rates ranged from 0.25–3.61 ng g−1 ww d−1. As with the wild fish, hydroperiod-within-site was a significant main effect that also interacted with sampling period. Survival exceeded 80%. Growth varied with season and hydroperiod, with the greatest growth in short-hydroperiod marshes. The results suggest that dietary bioaccumulation determined mercury levels in Everglades aquatic animals, and that, although hydroperiod affected mercury uptake, its effect varied with season.