699 results for data warehouse tuning aggregato business intelligence performance


Relevance:

100.00%

Publisher:

Abstract:

During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect: the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
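
The survival-analysis machinery the study advocates is easy to reproduce with standard tools. Below is a minimal sketch, not the study's actual model or data: discrete-time survival analysis run as a logistic regression on a synthetic "person-period" table, with made-up hazard rates and a hypothetical seat-time covariate.

```python
# Minimal sketch of discrete-time survival analysis: expand each subject into
# one row per period at risk, then model the per-period hazard with logistic
# regression. All names and numbers here are illustrative, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for student in range(500):
    long_seat = int(rng.random() < 0.5)      # hypothetical covariate
    hazard = 0.10 + 0.08 * long_seat         # toy per-term withdrawal probability
    for term in range(1, 5):                 # observe up to four terms
        withdrew = int(rng.random() < hazard)
        rows.append({"term": term, "long_seat_time": long_seat, "withdrew": withdrew})
        if withdrew:
            break                            # subject leaves the risk set after the event
person_period = pd.DataFrame(rows)

# Discrete-time hazard model: a separate intercept per term plus the covariate.
model = smf.logit("withdrew ~ C(term) + long_seat_time", data=person_period).fit()
print(model.summary())
```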

Relevance:

100.00%

Publisher:

Abstract:

This study aims to examine how a large organization works with information management, by investigating its current information management practices and exploring possible proposals for future information management. The study also aims to examine how a large organization can establish clear governance, collaboration, handling, and allocation of roles and responsibilities around information management. The study is qualitative, with data collected through document studies and interviews. It follows an abductive approach and is a normative case study, since its goal is to provide guidance and propose measures for the case that the commissioning organization asked me to study. The case is a typical one, as the results may be of interest to more parties than the commissioner, for example organizations with a similar information environment. To gather theory for the study, I conducted literature reviews on topics relevant to its purpose: information management, Business Intelligence, Data Warehouses and their architecture, and Business Intelligence Competency Centers. The study makes a practical knowledge contribution, as it answers practical problems. The commissioner has had practical problems with a non-functioning information management practice, and this study has contributed proposals for its future information management. The proposal involves a centralized Data Warehouse, as well as the development of a function that handles information management and its governance across the whole organization.

Relevance:

100.00%

Publisher:

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on Github) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

Relevance:

100.00%

Publisher:

Abstract:

Internet users consume online targeted advertising based on information collected about them, and voluntarily share personal information in social networks. Sensor information and data from smartphones are collected and used by applications, sometimes in unclear ways. As happens today with smartphones, in the near future sensors will be shipped in all types of connected devices, enabling ubiquitous information gathering from the physical environment and realizing the vision of Ambient Intelligence. The value of the gathered data, if not obvious, can be harnessed through data mining techniques and put to use by enabling personalized and tailored services as well as business intelligence practices, fueling the digital economy. However, the ever-expanding information gathering and use undermine the privacy conceptions of the past. Natural social practices of managing privacy in daily relations are overridden by socially awkward communication tools, service providers struggle with security issues resulting in harmful data leaks, governments use mass surveillance techniques, the incentives of the digital economy threaten consumer privacy, and the advancement of consumer-grade data-gathering technology enables new inter-personal abuses. A wide range of fields attempts to address technology-related privacy problems, but they vary immensely in assumptions, scope and approach. Privacy in future use cases is typically handled vertically, instead of building upon previous work that can be re-contextualized, while current privacy problems are typically addressed per type in a more focused way. Because significant effort was required to make sense of the relations and structure of privacy-related work, this thesis attempts to transmit a structured view of it. It is multi-disciplinary - from cryptography to economics, including distributed systems and information theory - and addresses privacy issues of different natures. As existing work is framed and discussed, the contributions to the state of the art made in the scope of this thesis are presented. The contributions add to five distinct areas: 1) identity in distributed systems; 2) future context-aware services; 3) event-based context management; 4) low-latency information flow control; 5) high-dimensional dataset anonymity. Finally, having laid out such a landscape of privacy-preserving work, current and future privacy challenges are discussed, considering not only technical but also socio-economic perspectives.

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation in Hotel Administration and Management (Direção e Gestão Hoteleira), Escola Superior de Gestão, Hotelaria e Turismo, Universidade do Algarve, 2016

Relevance:

100.00%

Publisher:

Abstract:

The development of a country has usually been explained from a traditional perspective in terms of its economic growth, taking into account macroeconomic indicators such as GDP, inflation and unemployment. Little attention has been paid to the importance that human capital and the leadership process hold for a country's development. Accordingly, this case study seeks to understand the success of Japan's export-led growth strategy between 1960 and 1980 in light of these aspects. It argues that the incorporation of a transformational-transactional style of leadership, together with elements of Japanese culture such as Confucianism and Buddhism, gave a non-economistic dimension to the success of the development model as part of the business-state-university triad. The analysis is qualitative, with a focus on international political economy and on leadership, the latter studied from the disciplines of management, sociology and psychology.

Relevance:

100.00%

Publisher:

Abstract:

Today, access to information is vital to a company's performance, and telecommunications companies are no exception: their position in the market depends on decisions taken based on the evaluation of that information. To support decision-making processes, it makes sense to rely on Data Warehouses, which integrate information from several sources, verifying its quality, currency and coherence, and organizing it for easy access and querying from multiple points of view. In a mobile telecommunications company, a geographic Data Mart based on the company's traffic information, capable of identifying users' preferred locations on the network, is very important: it provides useful indicators to the marketing and business departments, showing where and how to act so that the company can develop and gain an advantage in the market.

Relevance:

100.00%

Publisher:

Abstract:

Executive Summary: The objective of this report was to use the Sydney Opera House as a case study of the application of Building Information Modelling (BIM). The Sydney Opera House is a large, complex building with a very irregular configuration, which makes it a challenging test. A number of key concerns are evident at SOH:
• the building structure is complex, and building service systems - already the major cost of ongoing maintenance - are undergoing technology change, with new computer-based services becoming increasingly important;
• the current "documentation" of the facility comprises several independent, partly overlapping systems and is inadequate for current and future service requirements;
• the building has reached a milestone age in terms of the condition and maintainability of key public areas and service systems, the functionality of spaces, and longer-term strategic management;
• many business functions, such as space or event management, require up-to-date information about the facility that is currently inadequately delivered and is expensive and time-consuming to update and deliver to customers;
• major building upgrades are being planned that will put considerable strain on existing Facilities Portfolio services and their capacity to manage them effectively.
While some of these concerns are unique to the House, many will be common to larger commercial and institutional portfolios. The work described here supported a complementary task which sought to identify whether a building information model - an integrated building database - could be created that would support asset and facility management functions (see Sydney Opera House - FM Exemplar Project, Report Number: 2005-001-C-4, Building Information Modelling for FM at Sydney Opera House), a business strategy that has been well demonstrated. The development of the BIMSS - Open Specification for BIM - has been surprisingly straightforward. The lack of technical difficulties in converting the House's existing conventions and standards to the new model-based environment can be related to three key factors:
• SOH Facilities Portfolio - the internal group responsible for asset and facility management - already has well-established building and documentation policies in place. The setting of, and adherence to, well-thought-out operational standards has been based on the need to create an environment that is understood by all users and that addresses the major business needs of the House;
• the nature of the IFC Model Specification used to define the BIM protocol. The IFC standard is based on building practice and nomenclature widely used in the construction industries across the globe. For example, the nomenclature of building parts - e.g. IfcWall - corresponds to normal terminology but extends the traditional drawing environment currently used for design and documentation. This demonstrates that the international IFC model accurately represents local practice for building data representation and management;
• a BIM environment sets up opportunities for innovative processes that can exploit the rich data in the model and improve services and functions for the House: for example, several high-level processes have been identified that could benefit from standardised Building Information Models, such as maintenance processes using engineering data, business processes using scheduling, venue-access and security data, and benchmarking processes using building performance data.
The new technology matches business needs for current and new services. The adoption of IFC-compliant applications opens the way for shared building-model collaboration and new processes, a significant new focus of the BIM standards. In summary, SOH's current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. These BIM standards and their application to the Opera House are intended as a template for other organisations to adopt for their own procurement and facility management activities. Appendices provide an overview of the IFC Integrated Object Model and an explanation of IFC Model Data.
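
A small illustration of the IFC nomenclature point made above: entity names such as IfcWall map directly onto ordinary building terminology and can be queried programmatically. The sketch below uses the open-source ifcopenshell library; the file name is a placeholder, not an SOH model.

```python
# Querying IFC building elements by their standard entity names using
# ifcopenshell. The .ifc file path is a placeholder for illustration.
import ifcopenshell

model = ifcopenshell.open("building_model.ifc")

# IFC entity names correspond to normal building terminology.
walls = model.by_type("IfcWall")
doors = model.by_type("IfcDoor")
print(f"{len(walls)} walls, {len(doors)} doors")

for wall in walls[:5]:
    # Every IFC element carries a global identifier and an optional name.
    print(wall.GlobalId, wall.Name)
```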

Relevance:

100.00%

Publisher:

Abstract:

The management of main material prices in provincial highway project quotas suffers from lag and blindness. First, a framework for a Web-based provincial highway project quota MIS and a main-material-price data warehouse was established. Concrete processes for forecasting the main material prices of provincial highway projects were then put forward based on the BP neural network algorithm. After that, the standard BP algorithm, a BP network algorithm modified with an additional momentum term, and a BP network algorithm improved with a self-adaptive learning rate were compared in predicting highway project main material prices. The results indicated that it is feasible to predict highway main material prices using a BP neural network, and that the BP network algorithm improved with a self-adaptive learning rate performs best.
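
As a sketch of the additional-momentum variant compared in the paper, the following toy numpy network trains one hidden layer by backpropagation, keeping a fraction of the previous weight update at each step; the data, network size and learning constants are made up, not taken from the paper.

```python
# Toy BP (backpropagation) network with an additional momentum term.
# Inputs stand in for price indicators; the target is a synthetic "price".
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((50, 3))                        # 50 samples, 3 input indicators
y = (X @ np.array([0.5, 0.3, 0.2]))[:, None]   # toy target in [0, 1]

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)   # momentum buffers

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, momentum = 0.5, 0.9

for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    # Backward pass: gradients of the mean squared error.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Momentum update: keep a fraction of the previous step (weights only).
    vW2 = momentum * vW2 - lr * (h.T @ d_out) / len(X); W2 += vW2
    vW1 = momentum * vW1 - lr * (X.T @ d_h) / len(X);   W1 += vW1
    b2 -= lr * d_out.mean(axis=0); b1 -= lr * d_h.mean(axis=0)

print("final MSE:", float((err ** 2).mean()))
```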

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a model to estimate travel time using cumulative plots. Three different cases are considered: i) case-Det, with detector data only; ii) case-DetSig, with detector data and signal controller data; and iii) case-DetSigSFR, with detector data, signal controller data and saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available: comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without them. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases in the detection interval and its performance is uncertain if the detection interval is an integral multiple of signal cycles.
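
The paper's model is more elaborate, but the cumulative-plot idea it builds on is compact: with cumulative vehicle counts U(t) at the upstream point and D(t) downstream, and first-in-first-out assumed, the travel time of the n-th vehicle is the horizontal distance between the two curves at height n. A toy sketch with synthetic counts (not the paper's model or data):

```python
# Travel time as the horizontal gap between cumulative arrival and departure
# curves. Counts are synthetic; the downstream curve is the upstream flow
# delayed by 9 detection intervals (45 s) for illustration.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 600, 5.0)               # 5 s detection intervals over 10 min
counts = rng.poisson(1.5, t.size) + 1    # vehicles per interval (kept > 0)

arrivals = np.cumsum(counts)             # U(t): cumulative count upstream
shifted = np.concatenate([np.zeros(9, dtype=int), counts[:-9]])
departures = np.cumsum(shifted)          # D(t): same flow delayed 45 s

n = np.arange(1, arrivals[-1] + 1)       # vehicle numbers
t_up = np.interp(n, arrivals, t)         # when vehicle n crossed upstream
t_down = np.interp(n, departures, t)     # when vehicle n crossed downstream
valid = n <= departures[-1]              # only vehicles observed downstream
print("mean travel time (s):", float((t_down - t_up)[valid].mean()))
```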

Relevance:

100.00%

Publisher:

Abstract:

Network-based Intrusion Detection Systems (NIDSs) analyse network traffic to detect instances of malicious activity. Typically, this is only possible when the network traffic is accessible for analysis. With the growing use of Virtual Private Networks (VPNs) that encrypt network traffic, the NIDS can no longer access this crucial audit data. In this paper, we present an implementation and evaluation of our approach proposed in Goh et al. (2009). It is based on Shamir's secret-sharing scheme and allows a NIDS to function normally in a VPN without any modifications and without compromising the confidentiality afforded by the VPN.
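
The paper's VPN protocol is more involved, but the Shamir secret-sharing primitive it rests on is small enough to sketch. This toy version (illustrative parameters, not the paper's construction) splits a secret into n shares, any k of which reconstruct it by Lagrange interpolation over a prime field:

```python
# Toy Shamir secret sharing over GF(P): the secret is the constant term of a
# random polynomial of degree k-1; any k evaluation points determine it.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```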

Relevance:

100.00%

Publisher:

Abstract:

High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
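
A generic sketch of the fitting machinery described above, not the paper's basis or data: a rational function, the ratio of two linear combinations of basis terms with the denominator's constant fixed at 1, fitted by Levenberg-Marquardt through scipy. A plain polynomial basis stands in for the Zernike terms.

```python
# Rational-function surface fit via nonlinear least squares (Levenberg-
# Marquardt). The basis and the synthetic "surface" profile are illustrative.
import numpy as np
from scipy.optimize import least_squares

def basis(r, n_terms):
    return np.vander(r, n_terms, increasing=True)   # columns 1, r, r^2, ...

def rational(params, r, n_num, n_den):
    num = basis(r, n_num) @ params[:n_num]
    den = 1.0 + basis(r, n_den)[:, 1:] @ params[n_num:]  # constant fixed at 1
    return num / den

def residuals(params, r, z, n_num, n_den):
    return rational(params, r, n_num, n_den) - z

rng = np.random.default_rng(3)
r = np.linspace(0, 1, 200)
z = 1.0 / (1.0 + 0.8 * r**2) + rng.normal(0, 1e-3, r.size)  # synthetic profile

n_num, n_den = 3, 3
x0 = np.zeros(n_num + n_den - 1); x0[0] = 1.0
fit = least_squares(residuals, x0, method="lm", args=(r, z, n_num, n_den))
print("rms surface error:", float(np.sqrt(np.mean(fit.fun ** 2))))
```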

Relevance:

100.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and that on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
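
The label-flipping equivalence mentioned at the end is easy to demonstrate: minimizing empirical error on a sample whose second-half labels are flipped maximizes the difference between the two halves' errors. A toy sketch with synthetic data, using a linear model as a stand-in for the generic model class (logistic loss approximating 0-1 minimization):

```python
# Maximal discrepancy via empirical risk minimization with half the labels
# flipped. Data and model class are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 400
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Flip the labels on the second half of the sample.
y_flipped = y.copy()
y_flipped[n // 2:] = 1 - y_flipped[n // 2:]

# ERM on the flipped sample maximizes err(second half) - err(first half).
clf = LogisticRegression().fit(X, y_flipped)
pred = clf.predict(X)
err_first = np.mean(pred[: n // 2] != y[: n // 2])
err_second = np.mean(pred[n // 2:] != y[n // 2:])
print("maximal discrepancy estimate:", err_second - err_first)
```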

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. Key variables investigated are the activity-initialisation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, so that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially captures their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
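
The thesis extends the standard VB-GMM with component splitting; the base behaviour it builds on can be sketched with scikit-learn's BayesianGaussianMixture, whose variational fit drives the weights of unneeded components toward zero, so only a generous upper bound on the number of components is required. The data below are synthetic stand-ins for per-customer spatial usage points, not the telecom records studied in the thesis.

```python
# Variational Bayesian GMM: fit with a generous component budget and let the
# variational inference prune components it does not need.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
# "Spiky" synthetic locations: three tight clusters plus sparse background.
clusters = [rng.normal(c, 0.05, size=(100, 2)) for c in ((0, 0), (1, 1), (1, 0))]
background = rng.uniform(-0.5, 1.5, size=(30, 2))
X = np.vstack(clusters + [background])

vbgmm = BayesianGaussianMixture(
    n_components=10,                   # upper bound on the number of components
    weight_concentration_prior=1e-2,   # small prior favours few active components
    max_iter=500,
    random_state=0,
).fit(X)

active = vbgmm.weights_ > 0.01
print("effective number of components:", int(active.sum()))
```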