938 results for Business intelligence, data warehouse, sql server
Abstract:
The project consists of implementing a Business Intelligence solution on the Microsoft platform. It is intended for the Accounting Department of the Ajuntament de Cambrils (Cambrils Town Council) and relates to the control of expenditure and revenue.
Abstract:
In the last few years, a new generation of Business Intelligence (BI) tools called BI 2.0 has emerged to meet the new and ambitious requirements of business users. BI 2.0 not only introduces brand new topics, but in some cases re-examines past challenges from new perspectives shaped by market changes and needs. In this context, the term pervasive BI has gained increasing interest as an innovative and forward-looking perspective. This thesis investigates three different aspects of pervasive BI: personalization, timeliness, and integration. Personalization refers to the capacity of BI tools to customize query results according to the user who consumes them, making BI information easier to use for different types of users (e.g., front-line employees, suppliers, customers, or business partners). In this direction, the thesis proposes a model for On-Line Analytical Processing (OLAP) query personalization that reduces the query result to the information most relevant to the specific user. Timeliness refers to the timely provision of business information for decision-making. In this direction, the thesis defines a new Data Warehouse (DW) methodology, Four-Wheel-Drive (4WD), that combines traditional development approaches with agile methods; the aim is to accelerate project development and reduce software costs, so as to decrease the number of DW project failures and favour the penetration of BI tools even in small and medium-sized companies. Integration refers to the ability of BI tools to let users access information wherever it can be found, using the device they prefer. To this end, the thesis proposes the Business Intelligence Network (BIN), a peer-to-peer data warehousing architecture in which a user can formulate an OLAP query on their own system and retrieve relevant information from both the local system and the DWs of the network, while preserving each peer's autonomy and independence.
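The personalization idea described above lends itself to a small illustration. The following is a minimal, hypothetical sketch, not the thesis's actual model: result rows are scored against per-attribute preference weights and trimmed to the top-k most relevant ones; all attribute names, weights, and data are invented for the example.

```python
# Hypothetical sketch of OLAP result personalization: rank the rows of a
# query result by similarity to a user's preference profile, keep the top-k.
from typing import Dict, List

def personalize(rows: List[Dict], preferences: Dict[str, Dict[str, float]],
                k: int = 5) -> List[Dict]:
    """Score each result row against per-attribute preference weights."""
    def score(row: Dict) -> float:
        return sum(preferences.get(attr, {}).get(str(val), 0.0)
                   for attr, val in row.items())
    return sorted(rows, key=score, reverse=True)[:k]

# Example: a manager who cares mostly about her own region and recent years.
result = [
    {"region": "North", "year": 2011, "sales": 120},
    {"region": "South", "year": 2010, "sales": 300},
    {"region": "North", "year": 2010, "sales": 150},
]
prefs = {"region": {"North": 1.0}, "year": {"2011": 0.5}}
print(personalize(result, prefs, k=2))
```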
Abstract:
This work analyzes the performance and the porting of an SBI system to Cloudera's Hadoop distribution. Specifically, the data of the WebPolEU project were ported; the performance of the Impala query engine was then compared with that of ElasticSearch which, unlike Oracle, runs on the same hardware (the cluster).
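A comparison of this kind typically reduces to timing the same logical query on both engines. The harness below is a generic, hypothetical sketch: run_impala_query and run_es_query are stand-ins for real client calls (e.g. through impyla or elasticsearch-py), not code from the work itself.

```python
# Minimal timing harness: run the same logical query against two engines
# and record wall-clock latencies over several repetitions.
import time
import statistics

def benchmark(run_query, repetitions: int = 10) -> dict:
    latencies = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()                      # the engine-specific client call
        latencies.append(time.perf_counter() - start)
    return {"median_s": statistics.median(latencies),
            "mean_s": statistics.mean(latencies)}

# Usage with hypothetical engine-specific callables:
# print("Impala:       ", benchmark(run_impala_query))
# print("ElasticSearch:", benchmark(run_es_query))
```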
Abstract:
Business Intelligence (BI) applications have been gradually ported to the Web in search of a global platform for the consumption and publication of data and services. On the Internet, apart from techniques for data/knowledge management, BI Web applications need interfaces with a high level of interoperability (similar to traditional desktop interfaces) for the visualisation of data/knowledge. In some cases, this has been provided by Rich Internet Applications (RIA). The development of these BI RIAs has traditionally been performed manually and, given the complexity of the final application, it is a process prone to errors. The application of model-driven engineering techniques can reduce the cost of development and maintenance (in terms of time and resources) of these applications, as has been demonstrated for other types of Web applications. In the light of these issues, the paper introduces Sm4RIA-B, a model-driven methodology for the development of RIAs as BI Web applications. To overcome the limitations of RIAs regarding knowledge management from the Web, the paper also presents a new RIA platform for BI, called RI@BI, which extends the functionalities of traditional RIAs by means of Semantic Web technologies and B2B techniques. Finally, we evaluate the whole approach on a case study: the development of a social network site for an enterprise project manager.
Abstract:
Context: Global Software Development (GSD) allows companies to take advantage of talent spread across the world. Most research has focused on the development aspect; however, little if any attention has been paid to the management of GSD projects. Studies report a lack of adequate support for the decisions managers make during software development, a problem accentuated in GSD because information is scattered across multiple factories and stored in different formats and standards. Objective: This paper aims to improve GSD management by proposing a systematic method for adapting Business Intelligence techniques to software development environments. This would enhance the visibility of the development process and enable software managers to make informed decisions regarding how to proceed with GSD projects. Method: A combination of formal goal-modeling frameworks and data modeling techniques is used to elicit the aspects most relevant for managers to measure in GSD. The process is described in detail and applied to a real case study throughout the paper, followed by a discussion of the method's generalisability. Results: Applying the approach generates a BI framework tailored to software development according to the requirements posed by GSD managers. The resulting framework can present previously inaccessible data through common and specific views, and enables data navigation according to the organization of software factories and projects in GSD. Conclusions: We can conclude that the proposed systematic approach successfully adapts Business Intelligence techniques to enhance GSD management beyond the information provided by traditional tools. The resulting framework integrates and presents the information in a single place, thereby enabling easy comparisons across multiple projects and factories and supporting informed decisions in GSD management.
Abstract:
The purpose of this research is to propose a procurement system that works across disciplines and shares retrieved information with the relevant parties, so as to achieve better coordination between the supply and demand sides. The paper demonstrates how to analyze the data with an agent-based procurement system (APS) to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers, and evaluating supplier performance against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollar on the purchase of raw materials and components. Efficient, highly accurate data collection is one of the key success factors for quality procurement, that is, purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study on a manufacturing small and medium-sized enterprise (SME) has been conducted. The APS supports data and information analysis techniques to facilitate decision-making, so that the agent can make negotiation and supplier evaluation more efficient, saving time and cost.
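As a concrete illustration of the kind of mathematical model such an agent could apply for supplier evaluation, here is a minimal weighted-sum scoring sketch; the criteria, weights, and supplier names are invented for the example and are not taken from the paper.

```python
# Hedged sketch: supplier evaluation as a weighted sum over normalized
# selection criteria, producing a ranked short-list.
suppliers = {
    "SupplierA": {"price": 0.80, "quality": 0.90, "delivery": 0.70},
    "SupplierB": {"price": 0.95, "quality": 0.60, "delivery": 0.85},
}
weights = {"price": 0.5, "quality": 0.3, "delivery": 0.2}  # must sum to 1

def evaluate(scores: dict) -> float:
    """Weighted-sum score; all criterion scores normalized to [0, 1]."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(suppliers, key=lambda s: evaluate(suppliers[s]), reverse=True)
print(ranked)  # short-list in descending order of overall score
```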
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts, through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process, and it struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from it. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and a tensor model for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets show the effectiveness of the proposed approaches.
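To make the first of those four problems concrete, below is a small hypothetical sketch of log event summarization: raw lines are reduced to templates (variable tokens masked) and counted per hour and severity. The log format and field names are assumptions for the example, not taken from the dissertation.

```python
# Sketch of log event summarization: categorize raw log lines into event
# templates and count occurrences per (hour, severity) bucket.
import re
from collections import Counter

LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2} (?P<level>\w+) (?P<msg>.*)")

def summarize(lines):
    """Count (hour, level, template) events, masking variable tokens."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            template = re.sub(r"\d+", "<NUM>", m.group("msg"))  # categorize
            counts[(m.group("ts"), m.group("level"), template)] += 1
    return counts

logs = ["2013-05-01 10:02:11 ERROR disk 3 failed",
        "2013-05-01 10:40:09 ERROR disk 7 failed"]
print(summarize(logs))  # {('2013-05-01 10', 'ERROR', 'disk <NUM> failed'): 2}
```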
Abstract:
Big Data is driving a global revolution. Across all sectors, public and private, and across industries such as retail, healthcare, media and transport, Big Data is influencing the lives of billions of people. Its impact is substantial, yet so unobtrusive that it goes unnoticed by most people. Business Intelligence and Advanced Analytics applications aim to study Big Data and extract information from it. This work studies the transition from the former to the latter, highlighting similarities and differences.
Abstract:
The purpose of this work is to obtain raw data from the major offerwalls so that it can be processed and analyzed and then made available to the account-management staff of a potential Ad Network such as MyAppFree. The first competing Ad Network to be integrated into this Business Intelligence tool is OfferToro, followed by AdGem, whose integration is currently in progress. Before presenting the results of the tool, to which the final chapter is devoted, the fundamental concepts needed to understand the project are examined in depth, together with the tools used to build the software architecture. The architecture of the individual microservices is then presented, along with the overall system architecture, which describes how the components of iBiT interact with one another. Finally, the last part of the work covers the front-end side for the account manager, the project's end user, together with an analysis of the results obtained through a benchmark-testing phase. A benchmark is a repeatable set of quantifiable results that serves as a reference point against which products and services can be compared; the purpose of the benchmark results here is to compare present and future versions of the software.
Abstract:
Nowadays, owing to the incredible growth of the mobile device market, anyone implementing a client-server application must take the limitations of mobile devices into account. In this paper we discuss the most reliable and fastest way to exchange information between a server and an Android mobile application; this matters because a responsive application makes the user experience far more enjoyable. We present a study that tests and evaluates two data transfer protocols, sockets and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices to determine which combination is the most practical and fastest to use.
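The spirit of such a measurement can be captured with the standard library alone. The sketch below times JSON and XML round-trips over an illustrative payload (Protocol Buffers is omitted because it requires generated message classes); the payload and repetition count are arbitrary choices, not those of the paper.

```python
# Rough serialization comparison: round-trip the same record through JSON
# and XML and time both with the standard library.
import json
import time
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "device", "values": list(range(100))}

def time_it(fn, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

def json_roundtrip():
    json.loads(json.dumps(record))

def xml_roundtrip():
    root = ET.Element("record")
    for key, val in record.items():
        ET.SubElement(root, key).text = str(val)
    ET.fromstring(ET.tostring(root))

print("JSON:", time_it(json_roundtrip))
print("XML :", time_it(xml_roundtrip))
```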
Abstract:
This paper discusses the results of applied research in the eco-driving domain, based on a huge data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set consists of events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, determine the major factors that influence average fuel consumption, and classify the drivers involved according to their driving efficiency, thereby identifying the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal clutch use, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 l/100 km on average, meaning a saving of 30 l per bus per day. These findings have been strongly reflected in the drivers' training sessions.
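The core aggregation behind such a classification is straightforward to sketch. The pandas snippet below computes each driver's overall consumption and bins the excess over the fleet's best figure using the 3-5 l/100 km band quoted above; the column names and sample data are invented for illustration, not the paper's schema.

```python
# Aggregate fuel consumption per driver, then classify driving efficiency
# by how far each driver exceeds the fleet's best l/100 km figure.
import pandas as pd

trips = pd.DataFrame({
    "driver":      ["d1", "d1", "d2", "d2"],
    "litres":      [42.0, 39.5, 51.0, 49.0],
    "distance_km": [110.0, 105.0, 108.0, 102.0],
})

per_driver = trips.groupby("driver").sum()
per_driver["l_per_100km"] = 100 * per_driver["litres"] / per_driver["distance_km"]

fleet_best = per_driver["l_per_100km"].min()
per_driver["class"] = pd.cut(per_driver["l_per_100km"] - fleet_best,
                             bins=[-0.01, 3, 5, float("inf")],
                             labels=["efficient", "average", "inefficient"])
print(per_driver)
```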
Abstract:
Master's in Informatics Engineering - Specialization in Knowledge and Decision Technologies
Abstract:
This dissertation describes the development of the PDS-Portal Institucional project, whose core is a system for collecting, storing and analyzing data (a Business Intelligence platform). The portal belongs to the healthcare domain and is a fundamental piece of the Plataforma de Dados da Saúde (Health Data Platform) system, which comprises four distinct portals. The platform is built on a fully patient-centred system that aggregates patients' health data and distributes it to the various stakeholders: the patient, national and international health professionals, and healthcare organizations. The main objective of the project is to develop the PDS-Portal Institucional on top of a Business Intelligence platform, so as to equip users with an analytical tool for data analysis. Since the information is stored in two of the platform's portals (PDS-Portal Utente and PDS-Portal Profissional), a data warehouse must be modelled to aggregate the information from both and, through the PDS-PI, deliver a set of analyses to the end user. To this end, the system includes a fully automated mechanism for extracting, transforming and loading data into the central warehouse, as well as a BI platform that exposes the stored data in the form of specific analyses. The platform allows constant evolution and is extremely flexible: it provides user and profile management and gives users a Web environment for data analysis, with sharing and access from mobile devices. After the system was implemented it was possible to explore the data and draw several conclusions of great importance both for the evolution of the PDS and for the way healthcare is practised in Portugal. Finally, some points for improvement of the current system are identified and a perspective of future evolution is outlined. Once this project goes into production, new opportunities will certainly arise, and users' contributions will help the system evolve progressively.
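The fully automated extract-transform-load mechanism mentioned above can be illustrated with a minimal sketch. The snippet below uses sqlite3 in-memory databases as stand-ins for the two source portals and the central warehouse; all table and column names are assumptions made for the example.

```python
# Minimal ETL sketch: pull records from two source portals, tag them with
# their origin, and load them into a central warehouse fact table.
import sqlite3

def make_source(rows):
    """Create an in-memory stand-in for a source portal database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (patient_id TEXT, event_date TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    return conn

def etl(source_name, source_conn, dw_conn):
    rows = source_conn.execute(
        "SELECT patient_id, event_date FROM events").fetchall()   # extract
    tagged = [(source_name, pid, date) for pid, date in rows]     # transform
    dw_conn.executemany(
        "INSERT INTO fact_health_event VALUES (?, ?, ?)", tagged) # load
    dw_conn.commit()

dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE fact_health_event "
           "(source TEXT, patient_id TEXT, event_date TEXT)")
etl("PDS-Portal Utente", make_source([("u1", "2013-01-02")]), dw)
etl("PDS-Portal Profissional", make_source([("u1", "2013-02-10")]), dw)
print(dw.execute("SELECT * FROM fact_health_event").fetchall())
```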
Abstract:
Web 2.0 software in general, and wikis in particular, have been receiving growing attention as they constitute new and powerful tools capable of supporting information sharing, knowledge creation and a wide range of collaborative processes and learning activities. This paper briefly introduces some of the new opportunities made possible by Web 2.0, or the social Internet, focusing on those offered by the use of wikis as learning spaces. A wiki allows documents to be created, edited and shared on a group basis; it has a very easy and efficient markup language and requires only a simple Web browser. One of the most important characteristics of wiki technology is the ease with which pages are created and edited. Because wiki content can be edited by its users, its pages and structure form a dynamic entity in permanent evolution, where users can insert new ideas, supplement previously existing information and correct errors and typos in a document at any time, up to the agreed final version. The paper explores wikis as a collaborative learning and knowledge-building space and their potential for supporting Virtual Communities of Practice (VCoPs). In the academic years 2007/8 and 2008/9, students of the Business Intelligence module of the Master's programme in Knowledge Management and Business Intelligence at the Instituto Superior de Estatística e Gestão de Informação of the Universidade Nova de Lisboa, Portugal, were actively involved in the creation of BIWiki, a wiki for Business Intelligence in the Portuguese language. Based on usage patterns and feedback from the students participating in this experience, some conclusions are drawn regarding the potential of this technology to support the emergence of VCoPs, and some provisional suggestions are made regarding the use of wikis to support information sharing, knowledge creation and transfer, and collaborative learning in Higher Education.
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management