909 results for Business Intelligence, Data Warehouse, Sistemi Informativi


Relevância:

100.00%

Publicador:

Resumo:

This thesis contributes to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architecture: pull architectures and push architectures. Little data exists on the performance of CDC architectures in a real-time environment, yet such data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. No standard definition of capture latency, or of how to measure it, exists in the field; we create this definition and extend the TPC-C benchmark to make the capture-latency measurement.
The results of our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
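The capture-latency definition used in this abstract can be sketched minimally. The following Python illustration uses hypothetical timestamps and a hypothetical poll interval; the thesis itself measures latency via an extended TPC-C benchmark, not toy code like this.

```python
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    row_id: int
    applied_at_ms: int   # when the change committed in the OLTP database
    captured_at_ms: int  # when the CDC mechanism observed the change

def capture_latency_ms(ev: ChangeEvent) -> int:
    """Capture latency = capture time minus apply time."""
    return ev.captured_at_ms - ev.applied_at_ms

# Push CDC is notified at commit time; pull CDC waits for the next
# polling cycle (here a hypothetical 1000 ms poll interval), so its
# latency is bounded by the time until that cycle runs.
push = ChangeEvent(row_id=1, applied_at_ms=10_000, captured_at_ms=10_004)
pull = ChangeEvent(row_id=1, applied_at_ms=10_000, captured_at_ms=11_000)
print(capture_latency_ms(push))  # 4
print(capture_latency_ms(pull))  # 1000
```

The sketch only encodes the definition; it says nothing about the throughput overhead that the evaluation also measures.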

Relevância:

100.00%

Publicador:

Resumo:

Government agencies use information technology extensively to collect business data for regulatory purposes. Data communication standards form part of the infrastructure with which businesses must conform to survive. We examine the development of, and the emerging competition between, two open business-reporting data standards adopted by government bodies in France: EDIFACT (the incumbent) and XBRL (the challenger). The research explores whether an incumbent may be displaced in a setting in which the contention is unresolved. We apply Latour's (1992) translation map to trace the enrolments and detours in the battle. We find that regulators play an important role as allies in the development of the standards. The antecedent networks in which the standards are located embed strong beliefs that become barriers to collaboration and fuel the battle; one key differentiating attitude is whether speed matters more than legitimacy. The failure of collaboration encourages competition. The newness of XBRL's technology, arriving just as regulators need to respond to an economic crisis, together with its adoption by French regulators not already using EDIFACT, creates an opportunity for the challenger to make significant network gains over the longer term. ANT also highlights the importance of the preservation of key components of EDIFACT in ebXML.

Relevância:

100.00%

Publicador:

Resumo:

The intensity of global competition and ever-increasing economic uncertainty have led organizations to search for more efficient and effective ways to manage their business operations. Data envelopment analysis (DEA) has been widely used as a conceptually simple yet powerful tool for evaluating organizational productivity and performance. Fuzzy DEA (FDEA) is a promising extension of conventional DEA, proposed for dealing with imprecise and ambiguous data in performance-measurement problems. This book is the first volume in the literature to present the state-of-the-art developments and applications of FDEA. It is designed for students, educators, researchers, consultants and practicing managers in business, industry and government with a basic understanding of DEA and fuzzy-logic concepts.
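As an illustration of the underlying DEA idea (not of the fuzzy extension the book covers), here is a minimal single-input, single-output efficiency calculation over hypothetical decision-making units; the general multi-input/output CCR model requires linear programming.

```python
def dea_efficiency(dmus):
    """Single-input, single-output DEA sketch: each DMU's output/input
    ratio, normalised by the best ratio, gives a relative efficiency
    score in [0, 1]. (Multi-dimensional DEA needs an LP solver.)"""
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical branches: (input = staff hours, output = transactions)
branches = {"A": (100, 500), "B": (80, 480), "C": (120, 480)}
scores = dea_efficiency(branches)
print(scores)  # B is the efficient frontier (score 1.0); A and C trail it
```

Fuzzy DEA replaces the crisp `(inp, out)` numbers with fuzzy quantities (e.g. triangular membership functions), which this sketch deliberately omits.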

Relevância:

100.00%

Publicador:

Resumo:

Electronic publishing exploits numerous possibilities to present and exchange information and to communicate via current media such as the Internet. Using modern Web technologies such as Web Services, loosely coupled services and peer-to-peer networks, we describe the integration of an intelligent business-news presentation and distribution network. Employing semantic technologies enables the coupling of multinational and multilingual business news data on a scalable international level, and thus introduces a quality of service not yet achieved by alternative technologies in the news-distribution area. Architecturally, we identified the loose coupling of existing services as the most feasible way to address multinational and multilingual news presentation and distribution networks. Furthermore, we semantically enrich multinational news contents by relating them using AI techniques such as the Vector Space Model. Summarizing our experiences, we describe the technical integration of semantic and communication technologies to create a modern international news network.
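The Vector Space Model mentioned above can be sketched in a few lines: documents become term-frequency vectors and relatedness is their cosine similarity. The headlines below are hypothetical, and a production system would add TF-IDF weighting and proper multilingual tokenisation.

```python
import math
from collections import Counter

def cosine_sim(doc_a: str, doc_b: str) -> float:
    """Vector Space Model sketch: bag-of-words term-frequency vectors
    compared by cosine similarity."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical headlines: a related pair scores higher than an unrelated one.
s1 = cosine_sim("central bank raises interest rates", "bank raises rates again")
s2 = cosine_sim("central bank raises interest rates", "football cup final tonight")
print(s1 > s2)  # True
```

Relating multilingual items, as the paper does, additionally requires mapping terms across languages before comparing vectors.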

Relevância:

100.00%

Publicador:

Resumo:

The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing decision problems and implementing a solution require an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand and, on the other, a social approach focusing on understanding what decision making is about and how developers and users should interact. This research project takes a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision-support requirements. The research utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision-problem definition and problem solution, developed in tandem through cognitive refinement of the problem based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of managers' thinking and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially regarding modern high-profile systems such as Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations use decision-support and data-analytics capabilities for all of their key decision-making activities.
However, this empirical investigation indicates a very different reality.

Relevância:

100.00%

Publicador:

Resumo:

In the SINOPS project, an optimal, state-of-the-art simulation of the marine silicon cycle is attempted, employing a biogeochemical ocean general circulation model (BOGCM) at three time steps relevant to global (paleo-)climate. To tune the model optimally, simulation results are compared with a comprehensive data set of 'real' observations. SINOPS' scientific data management ensures that the data structure remains homogeneous throughout the project. The practical work routine proceeds systematically from data acquisition, through preparation, processing, quality checking and archiving, to the presentation of data to the scientific community. Meta-information and analytical data are mapped by an n-dimensional catalogue that itemises the analytical value and serves as an unambiguous identifier. In practice, data management is carried out by means of the online-accessible information system PANGAEA, which offers a tool set comprising a data warehouse, Geographic Information System (GIS), 2-D plots, cross-section plots, etc., and whose multidimensional data model promotes scientific data mining. Beyond the scientific and technical aspects, this alliance between the scientific project team and the data-management crew serves to integrate the participants and allows them to gain mutual respect and appreciation.

Relevância:

100.00%

Publicador:

Resumo:

This research addresses the tourism sector, which is bombarded daily with a considerable quantity of data and information. Technology is now used far more extensively to promote and sell the products and services available on the market and, alongside this technological evolution, users/customers can increasingly buy the tourism products they want at the distance of a click. A wide range of tourism applications makes it possible to understand tourists' tastes and needs, as well as their attitude towards tourism. However, neither tourism bodies nor tourism managers make intelligent use of the data made available to them: they tend to focus on tourism in Portugal and on how their own organisation is presented, forgetting that the data can and should be used to expand the market and to understand potential new markets. The main purpose of this research is therefore to create an info-communicational platform that fully analyses the data obtained, and to provide the tools needed for this analysis, namely through a suitable infographic representation and strategies for communicating it to stakeholders. To this end, the action-research methodology was applied in this dissertation, understood as a cyclical process that, besides including both strands simultaneously, alternates between action and critical reflection, supported by theoretical foundations. The creation of the Smart Tourism platform prototype resulted in an innovative system that addresses the indicators chosen for the dashboard and the info-communicational problem, laying the groundwork for organisations to analyse tourism activity in a more integrated, systematised and rational way.
A visual info-communicational prototype (a visual dashboard) was therefore developed and qualitatively evaluated which, beyond the above, also supports the management of products, customers, staff and partners, thereby increasing the value of the sector.

Relevância:

100.00%

Publicador:

Resumo:

The business-environment context points to the necessity of new forms of management for the sustainable competitiveness of organizations over time. Coopetition is characterized as an alternative in the interaction of different actors, which compete and cooperate simultaneously in the pursuit of common goals. This dual relation, within a gain-increasing perspective, converts competitors into partners and fosters competitiveness, especially that of organizations within a specific sector. The field of competitive intelligence has, in its turn, assisted organizations individually in the systematization of information valuable to decision-making processes, which benefits competitiveness. It follows that it is possible to combine coopetition and competitive intelligence in a systematized process of sectorial intelligence for coopetitive relations. The general aim of this study is, therefore, to put forth a model of sectorial coopetitive intelligence. The methodological outline of the study is characterized as a mixed approach (quantitative and qualitative methods), of an applied nature and of exploratory and descriptive aims. The Coordination of the Strategic Roadmapping Project for the Future of Paraná's Industry is the selected object of investigation. Protocols were designed to collect primary and secondary data. In the collection of the primary data, online questionnaires were sent to the sectors selected for examination. A total of 149 answers to the online questionnaires were obtained, and interviews were performed with all members of the technical team of the Coordination, for a total of five interviewees. After collection, all data were tabulated, analyzed and validated by means of focus groups with the same five members of the Coordination's technical team, and interviews were performed with a representative of each of the four sectors selected, for a total of nine participants in the validation.
The results allowed the systematization of a sectorial coopetitive intelligence model called ICoops. The model is characterized by five stages: planning, collection, analysis, project development, and dissemination and evaluation. Each stage is detailed in inputs, activities and outputs. The results suggest that sectorial coopetition is motivated mainly by knowledge sharing, technological development, investment in R&D, innovation, chain integration and resource complementation. The importance of a neutral institution has been recognized as a facilitator of, and incentive to, the approximation of organizations. Among the main difficulties are the financing of projects, the adhesion of new members, the lack of tools for the analysis of information, and the dissemination of the actions.

Relevância:

100.00%

Publicador:

Resumo:

The process of building Data Warehouses (DW) is well known, with well-defined stages, but it is still mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured for different domains through the selection of web sources and the definition of data-processing characteristics. A Business Process Management (BPM) system allows Business Processes (BPs) to be modelled and executed, providing support for process automation. To support the process of building flexible WW, we propose two BP levels: a configuration process that supports the selection of web sources and the definition of schemas and mappings, and a feeding process that takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.
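The two-level split described above (configure, then feed) can be sketched as plain functions. All names below (`shop.example`, the field names, the stubbed fetcher) are hypothetical stand-ins for the BPM-driven processes the paper actually describes.

```python
def configure(sources, mapping):
    """Configuration process: select web sources and define how their
    fields map onto the warehouse schema."""
    return {"sources": sources, "mapping": mapping}

def feed(config, fetch):
    """Feeding process: pull records from each configured source and
    rename fields according to the configured mapping."""
    rows = []
    for src in config["sources"]:
        for record in fetch(src):
            rows.append({config["mapping"][k]: v
                         for k, v in record.items()
                         if k in config["mapping"]})
    return rows

# Stubbed fetcher standing in for real web extraction.
def fake_fetch(src):
    return [{"ttl": "Item 1", "prc": "9.90"}] if src == "shop.example" else []

cfg = configure(["shop.example"], {"ttl": "title", "prc": "price"})
print(feed(cfg, fake_fetch))  # [{'title': 'Item 1', 'price': '9.90'}]
```

The point of the separation is that `cfg` can be produced once per domain and reused by every feeding run, which is what makes the WW "flexible".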

Relevância:

100.00%

Publicador:

Resumo:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.

Relevância:

100.00%

Publicador:

Resumo:

Part 2: Behaviour and Coordination

Relevância:

100.00%

Publicador:

Resumo:

During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives; most notably, within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market-share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24 year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). 
Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.

Relevância:

100.00%

Publicador:

Resumo:

Today, access to information is vital for a company's performance, and telecommunications companies are no exception: their position in the market depends on decisions taken on the basis of that information. To support decision-making processes, a Data Warehouse is an extremely useful tool: it integrates information from different sources, checking its quality, currency and coherence, and organises it for easy access and analysis from various perspectives. In a mobile telecommunications company, a geographical Data Mart based on the company's traffic information, able to identify users' preferred locations on the network, is very important: it provides useful indicators to the marketing and business departments, showing where and how to act, and thereby boosts the development of the company and its advantage in the market.
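The kind of indicator such a geographical Data Mart exposes, each user's most frequent cell location, can be illustrated minimally. The call-detail records and cell identifiers below are hypothetical; the real system would compute this aggregation inside the warehouse, not in application code.

```python
from collections import Counter

def preferred_locations(call_records, top=1):
    """Aggregate traffic records by cell location and return each
    user's most frequent location(s)."""
    per_user = {}
    for user, cell in call_records:
        per_user.setdefault(user, Counter())[cell] += 1
    return {u: [c for c, _ in cnt.most_common(top)]
            for u, cnt in per_user.items()}

# Hypothetical call-detail records: (user id, cell id)
cdrs = [("u1", "lisbon-01"), ("u1", "lisbon-01"), ("u1", "porto-07"),
        ("u2", "porto-07")]
print(preferred_locations(cdrs))  # {'u1': ['lisbon-01'], 'u2': ['porto-07']}
```

In the Data Mart this result would be a fact-table aggregation sliced by a geographic dimension, ready for the marketing queries the abstract mentions.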

Relevância:

100.00%

Publicador:

Resumo:

Efficient management, analysis and interpretation of big data can change working models, modify outcomes, increase production, and open new paths for modern healthcare. The objective of this study is the construction of an interactive dashboard for a new model and new services in community healthcare. The aim is to provide the client with a Data Visualization platform that presents useful results on health data, giving users both descriptive and statistical information on the current management of care and of the therapies administered. The proposed tool allows users to navigate the data, analysing the evolution of a set of end-of-life indicators computed for oncology patients of the Emilia-Romagna Region over a period running from 2010 to the present.