923 results for Business Intelligence, ETL, Data Warehouse, Metadata, Reporting
Abstract:
The volume of data in libraries has grown enormously in recent years, as has the complexity of its information sources and formats, making its management and access difficult, especially as support for decision making. Given that good library management involves the integration of strategic indicators, implementing a Data Warehouse (DW) that adequately manages such a quantity of information, as well as its complex mix of data sources, becomes an interesting alternative to consider. This article describes the design and implementation of a decision support system (DSS) based on DW techniques for the library of the Universidad de Cuenca. To this end, the study uses the holistic methodology proposed by Siguenza-Guzman et al. (2014) for the comprehensive evaluation of libraries. This methodology evaluates the collection and the services, incorporating important elements of library management such as service performance, quality control, collection usage, and user interaction. Based on this analysis, a DW architecture is proposed that integrates, processes, and stores the data. Finally, the stored data are analyzed and visualized through online analytical processing (OLAP) tools. Initial implementation tests confirm the feasibility and effectiveness of the proposed approach, successfully integrating multiple heterogeneous data sources and formats, enabling library directors to generate customized reports, and even allowing the transactional processes carried out daily to mature.
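As an illustration of the OLAP-style analysis such a DW architecture enables, the following minimal sketch rolls up a hypothetical library loan fact table with pandas; the table and column names (branch, month, user_type, loans) are assumptions for illustration, not the schema used in the article.

# Minimal sketch of an OLAP-style roll-up over a hypothetical library loan fact table.
# Column names (branch, month, user_type, loans) are illustrative assumptions only.
import pandas as pd

# Hypothetical fact records as they might arrive from the ETL stage of the DW.
facts = pd.DataFrame({
    "branch":    ["Central", "Central", "Science", "Science", "Central"],
    "month":     ["2024-01", "2024-02", "2024-01", "2024-02", "2024-01"],
    "user_type": ["student", "faculty", "student", "student", "student"],
    "loans":     [120, 45, 80, 95, 60],
})

# Roll-up: total loans by branch and month, the kind of cube slice an OLAP tool exposes.
cube = facts.pivot_table(index="branch", columns="month", values="loans", aggfunc="sum")
print(cube)

# Drill-down on one dimension member: loans per user type at the Central branch.
print(facts[facts["branch"] == "Central"].groupby("user_type")["loans"].sum())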
Abstract:
Most organizations store their historical business information in data warehouses, which are queried to make strategic decisions using online analytical processing (OLAP) tools. This information must be properly protected against unauthorized access; nevertheless, there is a great number of legacy OLAP applications that were developed without considering security aspects, or in which security was incorporated only after the system was implemented. This work defines a reverse engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and also analyses and represents the security aspects that may have been established. This process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can be easily modified and improved with security, and automatically transformed to generate the new implementation.
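A very rough sketch of the reverse-engineering idea, under simplifying assumptions: it inspects the foreign keys of a hypothetical star schema in SQLite and labels tables as facts or dimensions to recover a minimal conceptual model. The schema and the heuristic (a table with outgoing foreign keys is a fact) are illustrative assumptions, not the model-driven transformations defined in the paper.

# Rough sketch: recover a minimal multidimensional model from a relational star schema.
# The schema and the "tables with outgoing foreign keys are facts" heuristic are
# illustrative assumptions, not the transformations defined in the paper.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_time(id INTEGER PRIMARY KEY, month TEXT);
    CREATE TABLE dim_product(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales(
        id INTEGER PRIMARY KEY,
        time_id INTEGER REFERENCES dim_time(id),
        product_id INTEGER REFERENCES dim_product(id),
        amount REAL
    );
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

for table in tables:
    fks = conn.execute(f"PRAGMA foreign_key_list({table})").fetchall()
    if fks:
        dims = sorted({fk[2] for fk in fks})  # referenced tables act as dimensions
        print(f"fact: {table} -> dimensions: {dims}")
    else:
        print(f"dimension: {table}")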
Abstract:
This project, business intelligence applying the RFM methodology to the accounts of the members of COAC Jardín Azuayo, arises from the institution's need for efficient and effective tools for decision making and member knowledge. First, it establishes the importance of building a Business Intelligence tool within Jardín Azuayo that provides clear and concise information in real time for decision making. Second, it develops methodologies for managing member value through knowledge of their needs, analyzing historical information on their most recent transaction, the frequency with which they use the services offered by the Cooperative, and the average amount per transaction. Finally, by combining the Business Intelligence tool for obtaining information with the application of methodologies for member knowledge, two basic strategies have been proposed to strengthen member loyalty to the Cooperative.
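To make the RFM idea concrete, here is a minimal sketch that derives recency, frequency, and monetary scores from a hypothetical transaction log; the column names and the simple three-level scoring are illustrative assumptions, not the Cooperative's actual data or cut-offs.

# Minimal RFM sketch over a hypothetical member transaction log.
# Column names and the three-level scoring are illustrative assumptions only.
import pandas as pd

tx = pd.DataFrame({
    "member_id": [1, 1, 2, 2, 2, 3],
    "date": pd.to_datetime(["2024-01-05", "2024-03-20", "2024-02-10",
                            "2024-03-01", "2024-03-28", "2023-11-15"]),
    "amount": [50.0, 120.0, 30.0, 45.0, 60.0, 500.0],
})
today = pd.Timestamp("2024-04-01")

rfm = tx.groupby("member_id").agg(
    recency=("date", lambda d: (today - d.max()).days),   # days since last transaction
    frequency=("date", "count"),                          # number of transactions
    monetary=("amount", "mean"),                          # average amount per transaction
)

# Score each dimension 1-3 (lower recency is better; higher frequency/monetary is better).
rfm["R"] = pd.qcut(rfm["recency"], 3, labels=[3, 2, 1]).astype(int)
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3]).astype(int)
rfm["RFM"] = rfm["R"].astype(str) + rfm["F"].astype(str) + rfm["M"].astype(str)
print(rfm)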
Abstract:
This case study documents the plans and activities related to Corporate Social Responsibility that the Colombian multinational Crepes & Waffles implements in its operations. The document contains an investigation that includes background, sector analysis, statistical topics, and an interview.
Abstract:
Organizations and their environments are complex systems. Such systems are difficult to understand and predict. Even so, prediction is a fundamental task for business management and for decision making, which always involves risk. Classical prediction methods (among them linear regression, the autoregressive moving average, and exponential smoothing) rely on assumptions such as linearity and stability in order to be mathematically and computationally tractable. The limitations of these methods, however, have been demonstrated in different ways. In recent decades, new prediction methods have emerged that aim to embrace the complexity of organizational systems and their environments rather than avoid it. Among them, the most promising are bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to establish the current state of actual and potential applications of bio-inspired prediction methods in management.
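As a point of reference for the classical methods mentioned above, the sketch below implements simple exponential smoothing, one of the techniques the article contrasts with bio-inspired approaches; the demand series and the smoothing factor are arbitrary illustrative values.

# Simple exponential smoothing, one of the classical forecasting methods cited above.
# The demand series and smoothing factor alpha are arbitrary illustrative values.
def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; the last value is the one-step-ahead forecast."""
    smoothed = [series[0]]                       # initialize with the first observation
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [120, 132, 125, 140, 138, 150, 147]
smoothed = exponential_smoothing(demand, alpha=0.3)
print("forecast for next period:", round(smoothed[-1], 2))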
Abstract:
Hepatocellular carcinoma (HCC) is the most common primary liver tumor, accounting for up to 85% of cases. It is one of the most frequent tumors worldwide and is known for its high lethality, especially at an advanced stage. Early diagnosis through ultrasound surveillance is necessary to improve the survival of at-risk patients. Ultrasound contrast agents improve the diagnostic sensitivity and specificity of conventional ultrasound. Contrast-enhanced ultrasound (CEUS) is therefore considered a valid technique for the diagnosis of HCC worldwide because of its excellent specificity, despite a suboptimal sensitivity. The contrast-enhancement patterns of focal liver lesions led a team of experts to develop the Liver Imaging Reporting and Data System (LI-RADS), with the aim of standardizing data collection and the reporting of imaging techniques for the diagnosis of HCC. CEUS is an operator-dependent technique, and diagnostic discrepancies with panoramic imaging leave room for new techniques (Dynamic Contrast-Enhanced Ultrasound, DCE-US) aimed at improving its diagnostic accuracy and, in particular, its sensitivity. Software for quantifying tissue perfusion could help in clinical practice to identify wash-out that is not visible even to the eye of the most expert operator. Our study has two objectives: 1) to validate the CEUS LI-RADS system for the diagnosis of hepatocellular carcinoma in patients at high risk of HCC, using as gold standard histology when available, or otherwise radiological imaging techniques accepted by all guidelines (computed tomography or magnetic resonance imaging with typical appearance) performed within four weeks of CEUS; 2) to evaluate the effectiveness of tissue perfusion quantification software in detecting wash-out for the diagnosis of HCC with CEUS.
Abstract:
The strategic management of information plays a fundamental role in the organizational management process, since decision making depends on it for survival in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which in turn shape the relations between the controlling power of the company and investors. In this context, this article presents the relationship between the disclosure of information by joint-stock companies using XBRL and the open data model adopted by the Brazilian government, a model that boosted the publication of the Information Access Law (Lei de Acesso à Informação), No. 12,527 of 18 November 2011. Information access should be permeated by a mediation policy in order to support investors' knowledge construction and decision making. XBRL is the main model for publishing financial information. The use of XBRL together with the new semantic standards created for Linked Data strengthens information dissemination and creates mechanisms for analysis and for cross-referencing data with different open databases available on the Internet, adding value to the data/information accessed by civil society.
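A minimal sketch of the Linked Data idea applied to a disclosed financial fact, assuming rdflib is available; the namespace, property names, and values are hypothetical, not those of an actual XBRL taxonomy.

# Minimal sketch: expose a disclosed financial fact as Linked Data triples with rdflib.
# The namespace, property names, and values are hypothetical illustrations, not an
# actual XBRL taxonomy.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/fin#")
g = Graph()
g.bind("ex", EX)

company = URIRef("http://example.org/company/ACME")
g.add((company, EX.netRevenue, Literal("1250000.00", datatype=XSD.decimal)))
g.add((company, EX.fiscalYear, Literal(2023, datatype=XSD.gYear)))

# Serialized triples can then be cross-referenced with other open datasets on the web.
print(g.serialize(format="turtle"))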
Abstract:
Government agencies use information technology extensively to collect business data for regulatory purposes. Data communication standards form part of the infrastructure with which businesses must conform to survive. We examine the development of, and emerging competition between, two open business reporting data standards adopted by government bodies in France: EDIFACT (incumbent) and XBRL (challenger). The research explores whether an incumbent may be displaced in a setting in which the contention is unresolved. We apply Latour's (1992) translation map to trace the enrolments and detours in the battle. We find that regulators play an important role as allies in the development of the standards. The antecedent networks in which the standards are located embed strong beliefs that become barriers to collaboration and fuel the battle. One of the key differentiating attitudes is whether speed is more important than legitimacy. The failure of collaboration encourages competition. The newness of XBRL's technology, arriving just as regulators need to respond to an economic crisis, and its adoption by French regulators not using EDIFACT, create an opportunity for the challenger to make significant network gains over the longer term. Actor-network theory (ANT) also highlights the importance of the preservation of key components of EDIFACT in ebXML.
Abstract:
A global Italian pharmaceutical company has to provide two work environments that serve different needs. The environments will allow solutions to be developed in a controlled, secure, and at the same time independent manner on a state-of-the-art enterprise cloud platform. The need for two different environments is dictated by the needs of the working units. The first environment is designed to facilitate the creation of applications related to genomics, and is therefore aimed primarily at data scientists. This environment is capable of consuming, producing, retrieving, and incorporating data, and it supports the programming languages most used for genomic applications (e.g., Python, R). The proposal was to obtain a pool of ready-to-go virtual machines with different architectures to provide the best performance for the job that needs to be carried out. The second environment has a more traditional character: to obtain, via an ETL (Extract-Transform-Load) process, a global data model resembling a classical relational structure. It provides the major BI operations (e.g., analytics, performance measurement, reports) that can be leveraged both for application analysis and for internal usage. Since both architectures will maintain large amounts of data covering not only pharmaceutical information but also internal company information, it will be possible to digest the data with reporting/analytics tools and also to apply data mining and machine learning technologies to exploit the intrinsic information. The thesis introduces the proposals, implementations, descriptions of the technologies/platforms used, and future work for the environments discussed above.
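A minimal sketch of the second environment's ETL idea, assuming pandas and a local SQLite target; the file contents, column names, and cleaning rules are illustrative assumptions, not the company's actual data model.

# Minimal ETL sketch: extract CSV records, transform them, load them into a relational table.
# File contents, columns, and cleaning rules are illustrative assumptions only.
import io
import sqlite3
import pandas as pd

# Extract: in practice this would read from source systems; here a small in-memory CSV.
raw = io.StringIO("order_id,country,amount\n1,IT,100.5\n2,it,200.0\n3,DE,\n")
df = pd.read_csv(raw)

# Transform: normalize codes, drop incomplete rows, add a derived column.
df["country"] = df["country"].str.upper()
df = df.dropna(subset=["amount"])
df["amount_eur"] = df["amount"].round(2)

# Load: write the cleaned records into a relational table for BI queries.
conn = sqlite3.connect(":memory:")
df.to_sql("sales", conn, index=False, if_exists="replace")
print(conn.execute("SELECT country, SUM(amount_eur) FROM sales GROUP BY country").fetchall())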
Abstract:
With the proliferation of relational database programs for PCs and other platforms, many business end-users are creating, maintaining, and querying their own databases. More importantly, business end-users use the output of these queries as the basis for operational, tactical, and strategic decisions. Inaccurate data reduce the expected quality of these decisions. Implementing various input validation controls, including higher levels of normalisation, can reduce the number of data anomalies entering the databases. Even in well-maintained databases, however, data anomalies will still accumulate. To improve the quality of data, databases can be queried periodically to locate and correct anomalies. This paper reports the results of two experiments that investigated the effects of different data structures on business end-users' ability to detect data anomalies in a relational database. The results demonstrate that both unnormalised structures and higher levels of normalisation lower the effectiveness and efficiency of queries relative to first normal form. First normal form databases appear to provide the most effective and efficient data structure for business end-users formulating queries to detect data anomalies.
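To illustrate the kind of anomaly-detection query the experiments asked end-users to formulate, the sketch below runs two checks against a small first-normal-form table in SQLite; the table, columns, and anomaly rules are hypothetical examples, not the instruments used in the experiments.

# Sketch of anomaly-detection queries against a first-normal-form table.
# The table, columns, and anomaly rules are hypothetical, not the experimental materials.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(order_id INTEGER, customer TEXT, quantity INTEGER, unit_price REAL);
    INSERT INTO orders VALUES
        (1, 'Acme', 5, 10.0),
        (1, 'Acme', 5, 10.0),      -- duplicate row
        (2, 'Bolt', -3, 7.5),      -- impossible negative quantity
        (3, NULL, 2, 4.0);         -- missing customer
""")

# Duplicate order identifiers.
print(conn.execute(
    "SELECT order_id, COUNT(*) FROM orders GROUP BY order_id HAVING COUNT(*) > 1").fetchall())

# Rows violating simple business rules (negative quantity, missing customer).
print(conn.execute(
    "SELECT * FROM orders WHERE quantity < 0 OR customer IS NULL").fetchall())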
Abstract:
A growing number of models for predicting corporate failure have emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (discriminant analysis, logit, and probit) and two based on artificial intelligence (neural networks and rough sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms in the textile and clothing industry over the period 2003–09. Results show that all the models performed well, with an overall correct classification level higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors, and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
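As a sketch of one of the statistical techniques compared (logit), the code below fits a logistic regression to synthetic data and reports type I and type II error rates, assuming scikit-learn is available; the generated data are purely illustrative, not the textile-industry sample used in the study.

# Sketch of a logit failure-prediction model with type I / type II error rates.
# Synthetic data only; not the textile and clothing industry sample used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 545                                   # mimics the 125 + 420 firm split in size only
y = np.array([1] * 125 + [0] * 420)       # 1 = bankrupt, 0 = non-bankrupt
# Two illustrative financial ratios (e.g., liquidity, profitability) shifted by class.
X = rng.normal(loc=0.0, scale=1.0, size=(n, 2)) + y[:, None] * np.array([-1.5, -1.0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Type I error: bankrupt firm classified as healthy; type II: healthy firm classified as bankrupt.
type1 = ((pred == 0) & (y_te == 1)).sum() / (y_te == 1).sum()
type2 = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()
print(f"type I error: {type1:.2%}, type II error: {type2:.2%}")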
Abstract:
XBRL (eXtensible Business Reporting Language) is a language being implemented in several countries for the disclosure of financial and accounting information over the Internet. This article presents the state of the art of XBRL and how it has evolved, and assesses the current stage of Internet disclosure of financial and accounting information in Brazil. A survey was carried out with publicly traded companies in Brazil. The survey revealed strong acceptance of electronic media for disclosing financial information, but also that knowledge of the XBRL language in the country is still very limited and, consequently, that even fewer organizations have formally begun studies for its implementation. It also showed the absence of a standard for electronic disclosure, with the PDF, HTML, and DOC formats predominating, which hinders the analysis and comparison of information among regulatory bodies and with the general public.