938 results for Business intelligence, data warehouse, sql server


Relevance: 100.00%

Abstract:

Mobile devices, from smartphones to tablets, have become part of our everyday lives. By controlling the communications infrastructure, the telecom sector has greater access than any other industry to information on users' geo-location and interactions. This large volume of information can help build smart, sustainable cities, which means modernizing and innovating infrastructure, improving quality of life, and meeting the needs of citizens, businesses, and institutions. Vodafone offers concrete solutions in the field of info-mobility, enabling the transformation of our cities into Smart Cities. The goal of this thesis and of the Proactive project is to develop tools that, starting from data coming from the Vodafone mobile network, can derive and represent on maps data indicating the presence of citizens at given points of interest, the traffic profile of given road segments, and origin/destination matrices. To this end, data for the city of Milan and the Lombardy region coming from the Vodafone mobile network are first collected and filtered; algorithms and PL/SQL procedures are then developed that can receive this kind of data, analyze and process it, and return the expected results. These results are then represented on maps using QGis and an internal Vodafone dashboard. The development of the procedures and the cartographic representation of the results are carried out in a test environment; if the results meet the project requirements, they are ported to the production environment. Thanks to solutions of this kind, which provide data in anonymous and aggregated form in compliance with privacy regulations, public transport companies, for example, will be able to manage traffic more efficiently.
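
The abstract does not reproduce the PL/SQL itself, but the origin/destination aggregation it describes can be illustrated with a minimal sketch. The event schema (user, timestamp, cell) and the cell-to-zone mapping below are assumptions for illustration, not Vodafone's actual data model.

```python
# Hypothetical sketch of the kind of origin/destination aggregation the
# thesis implements in PL/SQL, re-expressed in Python. The event schema
# (user_id, timestamp, cell_id) and the cell->zone mapping are assumptions.
from collections import defaultdict

def od_matrix(events, cell_to_zone):
    """events: iterable of (user_id, timestamp, cell_id), already filtered.
    Returns {(origin_zone, destination_zone): trip_count}, aggregated so
    no individual trajectory is exposed."""
    by_user = defaultdict(list)
    for user, ts, cell in events:
        by_user[user].append((ts, cell_to_zone[cell]))
    matrix = defaultdict(int)
    for trips in by_user.values():
        trips.sort()
        for (_, a), (_, b) in zip(trips, trips[1:]):
            if a != b:                      # count only zone transitions
                matrix[(a, b)] += 1
    return dict(matrix)

# Example with toy data:
events = [("u1", 1, "c1"), ("u1", 2, "c9"), ("u2", 1, "c1"), ("u2", 3, "c9")]
print(od_matrix(events, {"c1": "Duomo", "c9": "Navigli"}))
# {('Duomo', 'Navigli'): 2}
```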

Relevance: 100.00%

Abstract:

The Business and Information Technologies (BIT) project strives to reveal new insights into how modern IT impacts organizational structures and business practices, using empirical methods. Due to its international scope, it allows for inter-country comparison of empirical results. Germany — represented by the European School of Management and Technology (ESMT) and the Institute of Information Systems at Humboldt-Universität zu Berlin — joined the BIT project in 2006. This report presents the results of the first survey conducted in Germany during November–December 2006. The key results are as follows:

• The most widely adopted technologies and systems in Germany are websites, wireless hardware and software, groupware/productivity tools, and enterprise resource planning (ERP) systems. The biggest potential for growth exists for collaboration and portal tools, content management systems, business process modelling, and business intelligence applications. A number of technological solutions have not yet been adopted by many organizations but also bear some potential, in particular identity management solutions, Radio Frequency Identification (RFID), biometrics, and third-party authentication and verification.
• IT security remains at the top of the agenda for most enterprises: security budgets have increased over the last three years.
• The workplace and work requirements are changing. IT is used to monitor employees' performance in Germany, though less heavily than in the United States (Karmarkar and Mangal, 2007). The demand for IT skills is increasing at all corporate levels. Executives are asking for more and better-structured information, which in turn triggers the appearance of new decision-making tools and online technologies on the market.
• The internal organization of companies in Germany is changing: organizations are becoming flatter, even though the trend is not as pronounced as in the United States (Karmarkar and Mangal, 2007), and the geographical scope of their operations is increasing. Modern IT plays an important role in enabling this development; e.g. telecommuting, teleconferencing, and other web-based collaboration formats are becoming increasingly popular in the corporate context.
• The degree to which outsourcing is being pursued is quite limited, with little change expected. IT services, payroll, and market research are the most widely outsourced business functions. This corresponds to the results from other countries.
• Up to now, the adoption of e-business technologies has had a rather limited effect on marketing functions. Companies tend to extract synergies from combining traditional printed media and online advertising.
• The adoption of e-business has not yet had a major impact on marketing capabilities and strategy. Traditional methods of customer segmentation still dominate. The corporate identity of most organizations does not change significantly when going online.
• Online sales channels are mainly viewed as a complement to traditional distribution means.
• Technology adoption has caused production and organizational costs to decrease. However, the costs of technology acquisition and maintenance, as well as consultancy and internal communication costs, have increased.

Relevance: 100.00%

Abstract:

Background: The RCSB Protein Data Bank (PDB) provides public access to experimentally determined 3D structures of biological macromolecules (proteins, peptides and nucleic acids). While various tools are available to explore the PDB, options to access the global structural diversity of the entire PDB and to perceive relationships between PDB structures remain very limited. Methods: A 136-dimensional atom-pair 3D-fingerprint for proteins (3DP), counting categorized atom pairs at increasing through-space distances, was designed to represent the molecular shape of PDB entries. Nearest-neighbor search examples are reported, exemplifying the ability of 3DP similarity to identify closely related biomolecules, from small peptides to enzymes and large multiprotein complexes such as virus particles. Principal component analysis was used to visualize the PDB in 3DP space. Results: The 3DP property space groups proteins and protein assemblies according to their 3D-shape similarity, yet shows exquisite ability to distinguish between closely related structures. An interactive website called PDB-Explorer is presented, featuring a color-coded interactive map of the PDB in 3DP space. Each pixel of the map contains one or more PDB entries, which are directly visualized as ribbon diagrams when the pixel is selected. The PDB-Explorer website allows 3DP nearest-neighbor searches of any PDB entry or of any structure uploaded as a protein-type PDB file. All functionalities of the website are implemented in JavaScript in a platform-independent manner and draw data from a server that is updated daily with the latest PDB additions, ensuring complete and up-to-date coverage. The essentially instantaneous 3DP-similarity search with the PDB-Explorer provides results comparable to those of much slower 3D-alignment algorithms, and automatically clusters proteins from the same superfamilies into tight groups. Conclusion: A chemical-space classification of the PDB based on molecular shape was obtained using a new atom-pair 3D-fingerprint for proteins and implemented in a web-based database exploration tool comprising an interactive color-coded map of the PDB chemical space and a nearest-neighbor search tool. The PDB-Explorer website is freely available at www.cheminfo.org/pdbexplorer and represents an unprecedented opportunity to interactively visualize and explore the structural diversity of the PDB.
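
As a rough illustration of the atom-pair fingerprint idea, the sketch below counts categorized atom pairs in through-space distance bins. The two atom categories and the eight 4 Å bins are invented for compactness; the paper's actual 3DP is a 136-dimensional scheme whose exact categories are not given in this abstract.

```python
# Minimal sketch of an atom-pair 3D fingerprint: count pairs of categorized
# atoms in distance bins. Categories and bins here are illustrative only.
import itertools, math

PAIR_TYPES = [("CA", "CA"), ("CA", "other"), ("other", "other")]
N_BINS, BIN_WIDTH = 8, 4.0                # assumption: 8 bins of 4 Å each

def fingerprint(atoms):
    """atoms: list of (atom_name, x, y, z) taken from a PDB file.
    Returns a count vector: one slot per (pair type, distance bin)."""
    fp = [0] * (len(PAIR_TYPES) * N_BINS)
    for (n1, *p1), (n2, *p2) in itertools.combinations(atoms, 2):
        d = math.dist(p1, p2)
        b = min(int(d / BIN_WIDTH), N_BINS - 1)
        pair = tuple(sorted(("CA" if n == "CA" else "other") for n in (n1, n2)))
        fp[PAIR_TYPES.index(pair) * N_BINS + b] += 1
    return fp

def cbd(a, b):
    """City-block distance between fingerprints, for nearest-neighbor ranking."""
    return sum(abs(x - y) for x, y in zip(a, b))

atoms = [("CA", 0, 0, 0), ("CB", 1.5, 0, 0), ("CA", 0, 3.8, 0)]
print(fingerprint(atoms))
```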

Relevance: 100.00%

Abstract:

Project justification is regarded as one of the major methodological deficits in Data Warehousing practice. The special nature of Data Warehousing benefits and the large share of infrastructure-related activities are cited as reasons why inappropriate methods are applied, evaluations remain incomplete, or justifications are omitted entirely. In this paper, the economic justification of Data Warehousing projects is analyzed, and first results are presented from a large academia-industry collaboration project in the field of non-technical issues of Data Warehousing. As conceptual foundations, the role of the Data Warehouse system in corporate application architectures is analyzed, and the specific properties of Data Warehousing projects are discussed. Based on an applicability analysis of traditional approaches to economic IT project justification, basic steps and responsibilities for the justification of Data Warehousing projects are derived.
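
For context, the traditional justification approaches whose applicability the paper analyzes typically rest on discounted cash-flow arithmetic. A minimal sketch, with invented figures; the difficulty of quantifying infrastructure benefits this way is exactly the paper's point.

```python
# Net-present-value sketch of the traditional justification calculus.
# All figures are invented for illustration.
def npv(rate, cashflows):
    """cashflows[0] is the upfront investment (negative), one entry per year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: build the warehouse; years 1-4: estimated net benefits.
project = [-500_000, 120_000, 180_000, 220_000, 240_000]
print(round(npv(0.08, project), 2))   # positive -> traditionally "justified"
```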

Relevance: 100.00%

Abstract:

The software Multibeam Converter is a tool for converting files, or folders of files (ASCII/tab-separated data files with or without a metaheader), downloaded from PANGAEA via the search engine or the data warehouse into the ODV import format, e.g. for visualization or further processing. MultibeamConverter is distributed as freeware for the operating systems Microsoft Windows, Apple OS X, and Linux.
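
A rough sketch of the conversion step such a tool performs, assuming the PANGAEA convention of a metaheader delimited by /* ... */; the exact ODV column layout required on import is not specified here and should be taken from the ODV documentation.

```python
# Strip the PANGAEA metaheader and re-emit the tab-separated data with an
# ODV-style header line. The ODV layout written here is an assumption.
def pangaea_to_odv(src_path, dst_path):
    with open(src_path, encoding="utf-8") as src:
        text = src.read()
    if text.startswith("/*"):                      # optional metaheader
        text = text.split("*/", 1)[1].lstrip()
    header, *rows = text.splitlines()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write("//ODV-style import, generated from PANGAEA export\n")
        dst.write(header + "\n")
        dst.writelines(row + "\n" for row in rows)

# Usage with hypothetical file names:
pangaea_to_odv("pangaea_export.tab", "odv_import.txt")
```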

Relevance: 100.00%

Abstract:

In this paper, the authors introduce a novel mechanism for data management in a middleware for smart home control, in which a relational database and a semantic ontology store are used side by side in a Data Warehouse. An annotation system has been designed to specify the storage format and location, to register new ontology concepts and, most importantly, to guarantee data consistency between the two storage methods. To ease the data-persistence process, the Data Access Object (DAO) pattern is applied and optimized to strengthen the data-consistency assurance. This mechanism simplifies the development of applications and their integration with BATMP. Finally, an application named "Parameter Monitoring Service" is presented as an example to assess the feasibility of the system.
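
A minimal sketch of the dual-storage DAO idea, with invented class and annotation names (the abstract does not show BATMP's actual code): one save() call keeps a relational store and an ontology store consistent.

```python
# The decorator plays the role of the paper's annotation system, declaring
# where an entity lives in each backend; all names are illustrative.
import sqlite3

class OntologyStore:                      # stand-in for a semantic triple store
    def __init__(self): self.triples = set()
    def add(self, s, p, o): self.triples.add((s, p, o))

def persisted(table, concept):
    """Annotation declaring the entity's table and ontology concept."""
    def wrap(cls):
        cls._table, cls._concept = table, concept
        return cls
    return wrap

@persisted(table="parameters", concept="batmp:Parameter")
class Parameter:
    def __init__(self, name, value): self.name, self.value = name, value

class ParameterDAO:
    def __init__(self, db, onto):
        self.db, self.onto = db, onto
        db.execute("CREATE TABLE IF NOT EXISTS parameters (name TEXT, value TEXT)")

    def save(self, p):
        with self.db:                     # sqlite transaction: rolls back the
            self.db.execute("INSERT INTO parameters VALUES (?, ?)",
                            (p.name, str(p.value)))
            self.onto.add(p.name, "rdf:type", p._concept)   # row if this fails

dao = ParameterDAO(sqlite3.connect(":memory:"), OntologyStore())
dao.save(Parameter("livingroom.temperature", 21.5))
```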

Relevance: 100.00%

Abstract:

In the smart building control industry, creating a platform that integrates different communication protocols and eases the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange, which introduces a new concept, the Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical magnitude and contains information about its presentation, available actions, access type, etc. Each component of BATMP holds a copy of the parameters. In the Technology Manager, three drivers for different communication protocols (KNX, CoAP and Modbus) are implemented to convert devices into parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiating. Finally, a Negotiator is implemented in the Model Manager to notify other components about changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
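
A toy sketch of the Parameter concept, with invented names rather than BATMP's actual API: components subscribe to a parameter, and a negotiation callback decides its value when several applications propose different ones.

```python
# Parameter as the common object: holds metadata, notifies subscribers,
# and resolves competing proposals via a pluggable negotiation function.
class Parameter:
    def __init__(self, name, unit, access="read-write"):
        self.name, self.unit, self.access = name, unit, access
        self._value, self._subscribers = None, []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def propose(self, proposals, negotiate=max):
        """proposals: values offered by applications; negotiate picks one."""
        self._value = negotiate(proposals)
        for cb in self._subscribers:      # notify all components of the change
            cb(self.name, self._value)

temp = Parameter("livingroom.setpoint", "degC")
temp.subscribe(lambda n, v: print(f"{n} -> {v} degC"))
temp.propose([20.0, 22.5, 21.0], negotiate=lambda vs: sum(vs) / len(vs))
```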

Relevance: 100.00%

Abstract:

Secure access to patient data is becoming increasingly important as medical informatics grows in significance, both to assist with population health studies and to support patient-specific medicine in the course of treatment. However, assembling the many different types of data emanating from the clinic is in itself a difficulty, and doing so across national borders compounds the problem. In this paper we present our solution: an easy-to-use distributed informatics platform embedding a state-of-the-art data warehouse and incorporating a secure pseudonymisation system that protects access to personal healthcare data. Using this system, a whole range of patient-derived data, from genomics to imaging to clinical records, can be assembled and linked, and then connected with analytics tools that help us to understand the data. Research performed in this environment will have immediate clinical impact for personalised patient healthcare.
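
The abstract does not detail the pseudonymisation scheme, so the sketch below stands in for the general idea only: a keyed hash maps each patient identifier to a stable token, allowing records to be linked without exposing identities.

```python
# Keyed hashing of patient identifiers: one common pseudonymisation approach.
# The key name and token length are illustrative assumptions.
import hmac, hashlib

SECRET_KEY = b"replace-with-key-held-by-a-trusted-party"   # assumption

def pseudonym(patient_id: str) -> str:
    """Deterministic pseudonym: the same patient always maps to the same
    token, so records can be linked without exposing the identity."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonym("NHS-1234567890"))   # same input -> same linkable token
```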

Relevance: 100.00%

Abstract:

Following the various analyses designed by Jorge Beltrán Luna in the project "Aplicación de Inteligencia de Negocio a la Gestión Educativa" [Beltrán2014] on the behaviour of students at the Universidad Politécnica de Madrid in the courses they took during the 2013-2014 academic year, it was concluded that a web application should be developed through which these analyses could be configured with different parameters to adapt them to the user's requirements. This project has fulfilled that objective. A web application has been developed that displays, through a web browser, the graphs and tables generated by the data mining tool. Using this application, the user can perform several functions. One of them is to request, through the form presented on the application's main interface, the visualization of the results generated by the system according to the parameters selected by the designer of the analyses. The user will see the same results that would be obtained by running the analyses developed in Jorge Beltrán Luna's project [Beltrán2014] directly in the RapidMiner tool. Another function available to the user is to run these same analyses while modifying their configuration parameters, to tailor the analyses to the desired results. The outcome is the same as would have been obtained in RapidMiner had the same parameters been changed there. Finally, a button has been implemented with which the user can retrieve the last analysis performed, so that it is not necessary to wait for the analysis to run again in order to view its results. A detailed account of the application of business intelligence in the educational field is also provided.
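
A hypothetical miniature of the pattern described, sketched with Flask (an assumption; the project's actual stack is not named beyond RapidMiner): a form posts analysis parameters, the server runs the analysis, and the last result is cached so the recall button can return it without re-running.

```python
# Endpoint and parameter names are invented; run_analysis() simulates the
# RapidMiner job rather than invoking the project's actual analyses.
from flask import Flask, request, jsonify

app = Flask(__name__)
last_result = None

def run_analysis(course, min_grade):          # stand-in for the mining job
    return {"course": course, "threshold": min_grade, "passed_pct": 63.0}

@app.post("/analysis")
def analysis():
    global last_result
    params = request.get_json()
    last_result = run_analysis(params["course"], float(params["min_grade"]))
    return jsonify(last_result)

@app.get("/analysis/last")                    # the "recall last analysis" button
def last():
    return jsonify(last_result or {"error": "no analysis run yet"})

if __name__ == "__main__":
    app.run(debug=True)
```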

Relevance: 100.00%

Abstract:

We live in a society in which information has acquired vital importance. The use of the Internet and the development of new information systems have generated keen interest among both companies and institutions in the search for new patterns that will give them the key to success. Business Analytics brings together a set of tools, strategies, and techniques aimed at exploiting information in order to create useful knowledge within a framework and to facilitate the optimization of resources for companies and institutions alike. This project falls within what is known as Educational Management. An architecture and working model similar to what has been done in recent years in the business world with Business Intelligence is applied, with the aim of improving the quality of teaching, speeding up decision-making within the academic institution, strengthening the capabilities of the teaching staff, and ultimately fostering student learning. To achieve this goal, the project follows the stages of Knowledge Discovery in Databases (KDD), one of the best-known methodologies in Business Intelligence, which describes the procedure from the selection of information and its loading into storage systems through to the application of data mining techniques to obtain new knowledge. The studies are based on information about user activity within the e-learning platform of the Universidad Politécnica de Madrid (Moodle). The raw database is extracted and pre-processed, and data mining techniques are then applied. In applying data mining techniques, one of the most important factors to take into account is the type of information to be processed. For this reason, the project works with Educational Data Mining (EDM), the application of mining techniques optimized for the information generated in educational environments. Among the possibilities EDM offers, the studies focus on what is known as predictive analytics. The fundamental objective is to understand the influence of student-platform interactions on final grades and to discover new rules describing behaviours that help teachers discriminate whether a student will pass or fail the course, so that measures can be taken to improve the student's performance. All information processed in this project was anonymized beforehand to prevent any intrusion on the privacy of the participants in the study.
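
A minimal sketch of the predictive-analytics step, with invented features and data (the actual Moodle features are not listed in the abstract): interaction counts are used to predict pass/fail.

```python
# Toy pass/fail predictor over LMS interaction counts; the feature columns
# (logins, resources viewed, forum posts) and the data are illustrative.
from sklearn.linear_model import LogisticRegression

# Columns: logins, resources viewed, forum posts  ->  1 = passed, 0 = failed
X = [[40, 120, 15], [5, 10, 0], [55, 200, 22], [12, 30, 1],
     [33, 90, 8], [8, 25, 2], [60, 210, 30], [15, 45, 3]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[20, 60, 4]]))        # early-warning style prediction
print(model.predict_proba([[20, 60, 4]]))  # probability of failing/passing
```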

Relevance: 100.00%

Abstract:

This work aimed to develop a proposal for a sales decision-support system model and to apply it. A survey of the profile of sales in the corporate (business-to-business) market, of sales techniques, and of the information needed to carry out an efficient sale, together with the monitoring of salespeople's actions and results through reports, all combined with data warehouse, data mart and OLAP technologies, was essential in drawing up the proposal of a generic model and implementing it. This generic model was applied to a hypothetical publisher of telephone directories and guides, and was built to supply sales professionals with information that can improve the effectiveness of their sales and give them greater knowledge of their products, customers, directory users and the market as a whole, and to provide managers with a fast and reliable tool to support the analysis and coordination of sales efforts. The fast, reliable and personalized visualization of the various kinds of information that this system makes possible, together with its success in answering the research questions posed in this work, shows that the application can be useful to the company and, in particular, to sales professionals and decision-making managers.
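
A small illustration of the OLAP-style roll-up such a data mart enables; the star-schema columns below are invented, not the model's actual dimensions.

```python
# Revenue rolled up by region and quarter, one product per column.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["South", "South", "North", "North", "South"],
    "product": ["Guide A", "Guide B", "Guide A", "Guide B", "Guide A"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "revenue": [12000, 8000, 15000, 9500, 13000],
})

cube = sales.pivot_table(index=["region", "quarter"], columns="product",
                         values="revenue", aggfunc="sum", fill_value=0)
print(cube)
```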

Relevance: 100.00%

Abstract:

Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.

Relevance: 100.00%

Abstract:

Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario in which a time-series analysis is required.
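
A toy model-to-text transformation in the spirit of the approach, with all names invented: a platform-independent conceptual mining model is turned both into the SQL that prepares the data under analysis and into a platform-specific analysis statement.

```python
# Conceptual model -> platform-specific artifacts; a stand-in for the
# paper's model transformations, not their actual metamodel.
from dataclasses import dataclass

@dataclass
class MiningModel:                 # conceptual model, platform-independent
    source_table: str
    features: list
    target: str
    technique: str                 # e.g. "time-series", "classification"

def to_sql(m: MiningModel) -> str:
    cols = ", ".join(m.features + [m.target])
    return f"SELECT {cols} FROM {m.source_table};"

def to_python(m: MiningModel) -> str:
    if m.technique == "classification":
        return "model = sklearn.tree.DecisionTreeClassifier().fit(X, y)"
    return "model = statsmodels.tsa.arima.model.ARIMA(y, order=(1, 1, 1)).fit()"

m = MiningModel("dw.sales_facts", ["month", "region"], "revenue", "time-series")
print(to_sql(m))      # data under analysis, for the warehouse
print(to_python(m))   # analysis model, for the target platform
```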

Relevance: 100.00%

Abstract:

This layer is a georeferenced raster image of the untitled, historic paper manuscript map [Plan of New Orleans], created ca. 1810. Scale [ca. 1:7,300]. Covers portions of the French Quarter, Central Business District, and Warehouse District. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Louisiana State Plane Coordinate System, South NAD83 (in feet) (FIPS zone 1702). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, or other information associated with the principal map. This map shows features such as roads, drainage, selected buildings and fortifications, and more. Includes a list of references to names of streets and buildings. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection, produced as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground-condition dates, scales, and purposes.
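
As a small usage illustration of the stated coordinate system, the sketch below converts Louisiana State Plane South NAD83 (US feet) coordinates to longitude/latitude; EPSG:3452 is assumed as the code for that system and should be verified against the layer's metadata.

```python
# State Plane (ftUS) -> WGS84 lon/lat with pyproj; coordinates are invented.
from pyproj import Transformer

to_lonlat = Transformer.from_crs("EPSG:3452", "EPSG:4326", always_xy=True)
x, y = 3_670_000.0, 530_000.0          # illustrative State Plane coordinates
lon, lat = to_lonlat.transform(x, y)
print(f"{lon:.5f}, {lat:.5f}")          # roughly the New Orleans area
```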

Relevance: 100.00%

Abstract:

This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is scant data on the performance of CDC architectures in a real-time environment, yet such data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition of capture latency, and a standard way to measure it, does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture-latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
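
A sketch of the capture-latency idea defined above: each change is stamped at commit time and compared with the time the CDC layer captures it. The queue-based push wiring stands in for the thesis's service-based DAS, whose internals the abstract does not spell out.

```python
# Measure capture latency: commit timestamp travels with the change event;
# the consumer computes (capture time - commit time). Names are illustrative.
import queue, threading, time

changes = queue.Queue()                 # change stream from the CDC layer

def commit(row):
    """OLTP write path: the push-style hook emits the change immediately."""
    changes.put((row, time.monotonic()))

def cdc_consumer():
    while True:
        row, committed_at = changes.get()
        latency = time.monotonic() - committed_at
        print(f"captured {row!r}, capture latency = {latency * 1000:.3f} ms")
        changes.task_done()

threading.Thread(target=cdc_consumer, daemon=True).start()
commit({"order_id": 42, "status": "NEW"})
changes.join()                          # wait until the change is captured
```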