798 results for Business Intelligence, ETL, Data Warehouse, Metadata, Reporting


Relevance: 100.00%

Abstract:

The first chapter provides an introduction to the relational model and to the difficulties that can arise when trying to reconcile the current needs of applications and users with the constraints it imposes, followed by an extensive description of the NoSQL movement and of the technologies that belong to it. The second chapter is devoted to MongoDB, presenting its characteristics and peculiarities and aiming to give a substantial and in-depth, though not complete or fully exhaustive, picture of it. Finally, the third and last chapter examines text search in MongoDB and presents and discusses the results obtained from our tests.
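
As a minimal illustration of the text-search feature the third chapter examines, the following sketch creates a text index and runs a $text query with PyMongo; the database, collection and field names are hypothetical, and a local MongoDB instance is assumed:

```python
from pymongo import MongoClient, TEXT

# Connect to a local MongoDB instance (hypothetical deployment).
client = MongoClient("mongodb://localhost:27017")
collection = client["demo"]["articles"]  # hypothetical database/collection

# A collection may have at most one text index; this one covers two fields.
collection.create_index([("title", TEXT), ("body", TEXT)])

# $text searches the indexed fields; textScore ranks results by relevance.
cursor = collection.find(
    {"$text": {"$search": "data warehouse"}},
    {"score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])

for doc in cursor:
    print(doc.get("title"), doc["score"])
```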

Relevance: 100.00%

Abstract:

Development and analysis of a sample dataset of roughly 3 million entries, extracted from a data warehouse of information on the energy consumption of several smart homes.
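
A minimal sketch of the kind of exploratory aggregation such a dataset invites; the file name and the columns home_id, timestamp and kwh are hypothetical:

```python
import pandas as pd

# Load the extracted sample (hypothetical file and column names).
df = pd.read_csv("smart_home_sample.csv", parse_dates=["timestamp"])

# Average hourly consumption per home: a typical first cut at ~3M rows.
hourly = (
    df.set_index("timestamp")
      .groupby("home_id")["kwh"]
      .resample("h")
      .mean()
)
print(hourly.head())
```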

Relevance: 100.00%

Abstract:

This thesis concerns the design and development of an Elasticsearch solution as an analysis platform in a Social Business Intelligence context. The work is part of a project of the Business Intelligence Group of the Università di Bologna, focused on monitoring online discussions on political topics in the period of the 2014 European elections.
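
A rough sketch of how such a monitoring platform might query Elasticsearch, using the official 8.x-style Python client; the index name, document schema and cluster address are hypothetical:

```python
from elasticsearch import Elasticsearch

# Connect to a local cluster (hypothetical deployment).
es = Elasticsearch("http://localhost:9200")

# Index one social post (hypothetical schema).
es.index(index="posts", document={
    "text": "debate on the European elections",
    "topic": "elections",
    "created_at": "2014-05-20T10:00:00",
})

# Count posts per topic over time: a typical monitoring query.
resp = es.search(
    index="posts",
    size=0,
    aggs={
        "by_topic": {
            "terms": {"field": "topic.keyword"},
            "aggs": {"per_day": {"date_histogram": {
                "field": "created_at", "calendar_interval": "day"}}},
        }
    },
)
print(resp["aggregations"]["by_topic"]["buckets"])
```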

Relevance: 100.00%

Abstract:

In recent years biology has relied increasingly on computer science to tackle complex analyses involving large amounts of data. Among the biological sciences that process a considerable volume of data is genomics, a branch of molecular biology that studies the structure, content, function and evolution of the genome of living organisms. Data warehouse systems are well suited to supporting certain kinds of analysis in the genomic domain because they enable exploratory and dynamic analyses, which are useful when one wants to derive summary information from a large quantity of data and to explore different perspectives and levels of detail. This thesis is part of a larger project concerning the design of a data warehouse in the genomic domain. The analyses performed led to the discovery of functional dependencies and, consequently, to the definition of a hierarchy in the data. Inserting this hierarchy into a multidimensional model of the genomic data will make it possible to broaden the range of analyses that can be run on the data warehouse by adding further information about patient characteristics. The steps carried out in this work were, first of all, loading and filtering the data; the core of the thesis was the implementation of an algorithm for discovering functional dependencies, with the aim of deriving a hierarchy from the data; in the final phase the resulting hierarchy was inserted into a pre-existing multidimensional model. The entire work was carried out using Apache Spark and Apache Hadoop.
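
A minimal PySpark sketch of the functional-dependency test at the heart of such an algorithm; the input path and column names are hypothetical. A dependency A -> B holds exactly when every value of A is associated with a single value of B:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fd-check").getOrCreate()

# Hypothetical patient-attribute table loaded from HDFS.
df = spark.read.parquet("hdfs:///genomics/patients.parquet")

def holds_fd(df, lhs: str, rhs: str) -> bool:
    """A functional dependency lhs -> rhs holds iff no lhs value
    is associated with more than one distinct rhs value."""
    violations = (
        df.groupBy(lhs)
          .agg(F.countDistinct(rhs).alias("n"))
          .filter(F.col("n") > 1)
          .limit(1)
          .count()
    )
    return violations == 0

# E.g. if city -> region holds, region can sit above city in a hierarchy.
print(holds_fd(df, "city", "region"))
```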

Relevance: 100.00%

Abstract:

Java Enterprise Applications (JEAs) are complex software systems written using multiple technologies. Moreover, they are usually distributed systems and use a database for persistence. A particular problem that appears in the design of these systems is the lack of a rich business model. In this paper we propose a technique to support the recovery of such rich business objects, starting from anemic Data Transfer Objects (DTOs). By exposing the code duplication in the application elements that use the DTOs, we suggest which business logic can be moved from other classes into the DTOs.
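
The refactoring can be sketched language-agnostically; the paper itself targets Java, but the following minimal Python sketch, with a hypothetical order example, shows duplicated logic around an anemic DTO being moved into a rich business object:

```python
from dataclasses import dataclass

# Anemic DTO: pure data, no behavior (hypothetical order example).
@dataclass
class OrderDTO:
    quantity: int
    unit_price: float

# Before: every caller duplicates the same computation on the DTO.
def total_in_service_a(o: OrderDTO) -> float:
    return o.quantity * o.unit_price

def total_in_service_b(o: OrderDTO) -> float:
    return o.quantity * o.unit_price  # duplicated logic, a move candidate

# After: the duplicated logic becomes a method of a rich business object.
@dataclass
class Order:
    quantity: int
    unit_price: float

    def total(self) -> float:
        return self.quantity * self.unit_price
```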

Relevance: 100.00%

Abstract:

Academic and industrial research in the late 1990s has brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between the fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML or maybe maximum parsimony, MP) should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects an appropriate method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
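
As a toy illustration of the Chapter III approach, the sketch below maps simple data-set features to a reconstruction method with a decision tree. The features, labels and training values are all hypothetical, and scikit-learn's classifier stands in for the dissertation's own Decision Tree-inducing algorithm:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per data set: [num_taxa, seq_length, mean_divergence]
X = [
    [10,  500, 0.05],
    [40, 1200, 0.30],
    [25,  800, 0.15],
    [60, 2000, 0.45],
]
# Hypothetical labels: the method that performed best on each data set.
y = ["distance", "ML", "MP", "ML"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Recommend a method for a new, unseen data set.
print(clf.predict([[30, 1000, 0.25]])[0])
```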

Relevance: 100.00%

Abstract:

Nowadays, organizations have plenty of data stored in databases, which contain invaluable information. Decision Support Systems (DSS) provide the support needed to manage this information and to plan the medium- and long-term modus operandi of these organizations. Despite the growing importance of these systems, most proposals do not cover their complete development, mostly limiting themselves to isolated parts, which often have serious integration problems. Hence, methodologies that include models and processes considering every factor are necessary. This paper tries to fill this void by proposing an approach for developing spatial DSS driven by the development of their associated Data Warehouse (DW), without neglecting the other components. To frame the proposal, different software engineering approaches (the software engineering process and Model Driven Architecture) are used, coupled with a database development methodology, both adapted to the peculiarities of DWs. Finally, an example illustrates the proposal.

Relevance: 100.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 100.00%

Abstract:

The Securities and Exchange Commission (SEC) in the United States, and in particular its immediate past chairman, Christopher Cox, has been actively promoting an upgrade of the EDGAR system of disseminating filings. The new generation of information provision has been dubbed by Chairman Cox "Interactive Data" (SEC, 2006). In October this year the Office of Interactive Disclosure was created (http://www.sec.gov/news/press/2007/2007-213.htm). The focus of this paper is to examine the way in which the non-professional investor has been constructed by various actors. We examine the manner in which Interactive Data has been sold as the panacea for financial market 'irregularities' by the SEC and others. The academic literature shows almost no evidence of researching non-professional investors in any real sense (Young, 2006). Both this literature and the behaviour of representatives of institutions such as the SEC and the FSA appear to find it convenient to construct this class of investor in a particular form and to speak for them. We theorise the activities of the SEC, and of its chairman in particular, over a period of about three years, both following and prior to the 'credit crunch'. Our approach is to examine a selection of the policy documents released by the SEC and other interested parties, and the statements made by some of the policy makers and regulators central to the programme to advance the socio-technical project that is Interactive Data. We adopt insights from ANT, and more particularly the sociology of translation (Callon, 1986; Latour, 1987, 2005; Law, 1996, 2002; Law & Singleton, 2005), to show how individuals and regulators have acted as spokespersons for this malleable class of investor. We theorise the processes of accountability to investors and others, and in so doing reveal the regulatory bodies taking the regulated for granted. The possible implications of technological developments in digital reporting have also been identified by the CEOs of the six biggest audit firms in a discussion document on the role of accounting information and audit in the future of global capital markets (DiPiazza et al., 2006). The potential for digital reporting enabled through XBRL to "revolutionize the entire company reporting model" (p. 16) is discussed, and they conclude that the new model "should be driven by the wants of investors and other users of company information,..." (p. 17; emphasis in the original). Here, rather than examine the somewhat elusive and vexing question of whether adding interactive functionality to 'traditional' reports can achieve the benefits claimed for non-professional investors, we consider the rhetorical and discursive moves in which the SEC and others have engaged to present such developments as providing clearer reporting and accountability standards and as serving the interests of this constructed and largely unknown group, the non-professional investor.

Relevance: 100.00%

Abstract:

Sustainable development support, balanced scorecard development and business process modeling are viewed from the position of systemology. Extensional, intentional and potential properties of a system are considered necessary to satisfy the functional requirements of a meta-system. The correspondence between the extensional, intentional and potential properties of a system and its sustainable, unsustainable, crisis and catastrophic states is determined. The cause of the inaccessibility of the system's mission is uncovered. The correspondence between the extensional, intentional and potential properties of a system and the balanced scorecard perspectives is shown. The IDEF0 function modeling method is checked against the balanced scorecard perspectives. The correspondence between balanced scorecard perspectives and IDEF0 notations is considered.

Relevance: 100.00%

Abstract:

This research aims to contribute to studies on software development, more specifically to the requirements-elicitation phase of software engineering, by clarifying how a not very popular method, the construction of domain ontologies, can help define quality requirements, which in turn contribute to the success of information system implementation projects.
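
A tiny sketch of what a domain-ontology fragment looks like in code, using rdflib; the namespace and the concepts are hypothetical, and the thesis itself does not prescribe any particular tool:

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/sales#")  # hypothetical domain namespace
g = Graph()
g.bind("ex", EX)

# Two domain concepts and a relation between them.
g.add((EX.Customer, RDF.type, RDFS.Class))
g.add((EX.Order, RDF.type, RDFS.Class))
g.add((EX.places, RDF.type, RDF.Property))
g.add((EX.places, RDFS.domain, EX.Customer))
g.add((EX.places, RDFS.range, EX.Order))

# Such explicit concepts and relations form a shared vocabulary
# from which unambiguous requirements can be phrased.
print(g.serialize(format="turtle"))
```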

Relevance: 100.00%

Abstract:

Decision making in university libraries is extremely important; however, it faces complications such as the large number of data sources and the large volumes of data to analyse. University libraries are used to producing and collecting a great deal of information about their data and services. Common data sources are the output of internal systems, portals and online catalogues, quality assessments and surveys. Unfortunately, these data sources are only partially used for decision making, owing to the wide variety of formats and standards and to the lack of efficient integration methods and tools. This thesis presents the analysis, design and implementation of a Data Warehouse, an integrated decision-making system for the Centro de Documentación Juan Bautista Vázquez. First, the requirements and the data analysis are presented on the basis of a methodology that incorporates the key elements influencing a library decision, including process analysis, estimated quality, relevant information and user interaction. Next, the architecture and design of the Data Warehouse are proposed, together with an implementation that supports data integration, processing and storage. Finally, the stored data are analysed with analytical processing tools and the application of bibliomining techniques, helping the documentation centre's managers make optimal decisions about their resources and services.
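
A minimal sketch of the kind of star-schema rollup such a warehouse enables; the tables and columns are hypothetical, and an in-memory SQLite database stands in for the real storage layer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Tiny star schema: a loan fact table plus an item dimension (hypothetical).
cur.executescript("""
CREATE TABLE dim_item (item_id INTEGER PRIMARY KEY, subject TEXT);
CREATE TABLE fact_loan (item_id INTEGER, loan_date TEXT);
INSERT INTO dim_item VALUES (1, 'History'), (2, 'Engineering');
INSERT INTO fact_loan VALUES (1, '2024-01-10'), (1, '2024-01-12'),
                             (2, '2024-01-11');
""")

# OLAP-style rollup: loans per subject, the kind of summary that
# supports collection-management decisions.
for row in cur.execute("""
    SELECT d.subject, COUNT(*) AS loans
    FROM fact_loan f JOIN dim_item d USING (item_id)
    GROUP BY d.subject ORDER BY loans DESC
"""):
    print(row)
```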

Relevance: 100.00%

Abstract:

The importance of the decision-making process in determining a company's success creates the need for a reliable source of information that allows timely knowledge to be generated and made available to whoever needs it. The purpose of this research is to establish a frame of reference for the use of Business Intelligence to support tactical, strategic and operational decisions in companies. It begins by describing the evolution of the information systems used in decision making, driven by the technological changes that have marked the establishment of Business Intelligence as an integral solution to the daily challenges of generating value through optimal decisions. It then describes the architecture of a business intelligence system, defining the basic elements required for it to work correctly: data storage, business functions, management systems and user interfaces. It also describes the process and scope of a correct implementation, so that the benefits these systems offer can be obtained. The research methodology was descriptive and was based on identifying the degree of use of Business Intelligence by decision makers, represented by graduates of the Master's in Financial Administration of the Universidad de El Salvador in the period 2006-2015.