820 results for Aleph Analytics


Relevância: 10.00%

Publicador:

Resumo:

“La Business Intelligence per il monitoraggio delle vendite: il caso Ducati Motor Holding” (“Business Intelligence for sales monitoring: the case of Ducati Motor Holding”). The aim of this thesis is to illustrate what Business Intelligence is and to show the changes that took place at Ducati Motor Holding following its adoption, in terms of building reports and dashboards for sales monitoring. The work opens with a general overview of the history and uses of Business Intelligence, touching on its main theoretical foundations: the Data Warehouse, data mining, what-if analysis, multidimensional data representation, building the BI team, and so on. It then focuses on Big Data, concentrating on their use and usefulness in the automotive sector (understood in its broadest sense, i.e. not only the car market but also the motorcycle market), which leads naturally to the Ducati case. A brief overview of the company follows, describing its history, the commercial structure through which sales are managed, and its product range. From the fourth chapter the thesis enters the heart of the matter: Business Intelligence at Ducati. It begins by describing the phases that have so far characterised the Business Analytics project (whose goal is precisely to introduce BI into the company) and then concentrates, first theoretically and then practically, on sales reporting, that is, reporting based on sales monitoring.
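The multidimensional (cube-style) sales view this abstract mentions can be sketched as a roll-up along one dimension at a time. The data, markets, and field names below are invented for illustration, not Ducati figures or the thesis's actual reporting model:

```python
# Toy roll-up over a two-dimensional sales "cube" (market x model).
# All values are illustrative, not real Ducati data.
from collections import defaultdict

sales = [
    {"market": "Italy",   "model": "Panigale", "units": 120},
    {"market": "Italy",   "model": "Monster",  "units": 200},
    {"market": "Germany", "model": "Panigale", "units":  90},
    {"market": "Germany", "model": "Monster",  "units": 150},
]

def roll_up(rows, dimension):
    """Aggregate unit sales along one dimension of the cube."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dimension]] += row["units"]
    return dict(totals)

print(roll_up(sales, "market"))  # → {'Italy': 320, 'Germany': 240}
print(roll_up(sales, "model"))   # → {'Panigale': 210, 'Monster': 350}
```

A real BI dashboard would perform the same aggregation in the data warehouse (e.g. via SQL `GROUP BY`) rather than in application code.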

Relevância: 10.00%

Publicador:

Resumo:

This paper is the final report from the NGL project implementing the flipped-classroom approach in the undergraduate course "Organization Theory". The report describes the implementation and evaluates the outcomes of flipped-classroom teaching and learning using a student survey and statistics from YouTube Analytics and the learning management platform Fronter.

Relevância: 10.00%

Publicador:

Resumo:

The disappearance of traditional competitive advantages and intensifying competition challenge companies to look for ways to maintain their competitiveness. The rapid development of information technology and the growing amount of data generated in business give companies the opportunity to use analytics to support decision making and to make operations more efficient. This work is a literature review whose goal is to identify the phases of an analytics system implementation project, the costs associated with implementation, and how those costs can be managed. In addition, it presents a concise overview of the development and current state of analytics and examines procurement models, the financial evaluation of projects, and the critical success factors of an implementation project. An implementation project has many phases, beginning with business analysis and system design and extending to implementation and post-evaluation. Implementation involves several cost items, which can be classified by their characteristics. The cost management processes of a project are cost management planning, cost estimation, budget determination, and cost control, which overlap with the phases of the implementation.

Relevância: 10.00%

Publicador:

Resumo:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevância: 10.00%

Publicador:

Resumo:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevância: 10.00%

Publicador:

Resumo:

The goal of this research is to verify whether the strategic planning developed between 2001 and 2009 at the Federal University of Pará (Universidade Federal do Pará - UFPA) was consolidated in its Academic Centers as a management practice. To this end, we identified the degree of planning formalization in the Academic Centers, the tools conceived for planning, the conception and methodological process used in elaborating those tools, and their implementation. The research used a qualitative approach: it is descriptive and employs the case study technique. Data were gathered from primary and secondary sources through bibliographic and documentary research and through field research using semi-structured interviews. The analysis and interpretation of the data were carried out for each investigated Academic Center using analytical categories guided by the specific goals. We used theoretically grounded principles and took the university as an empirical reference, based on the analysis of its structure, organizational processes, and institutional strategic plan. From the collected documents and interviews we examined how the strategic planning process developed throughout the period studied and where the investigated Academic Centers stand today. The theoretical foundation was built on three axes: the Brazilian undergraduate and postgraduate education system; the university itself, including its singularity and complexity as an organization; and planning as a strategic management process. The main results show that UFPA has up-to-date regulatory milestones, with an organizational structure, regulations, instructions, manuals, and a deployed management model that allow strategic planning to develop beyond the central administration, i.e., in its Academic Centers. The centers also exhibit those established milestones and carry out the institution's basic planning processes.
Those processes are conceived on the basis of the institutional strategic planning, and the managers mainly follow the procedural orientation defined by the university management, from which the conceptual foundation originates and is propagated. According to the literature and to the research carried out in this work, we can conclude that the Academic Centers of UFPA have developed the practice of strategic planning. This planning is organized and well founded, and it guided the plans and decisions, preventing disorderly management and, according to the managers, enabling advances and performance improvement. We can conclude that UFPA has built an important foundation for the professionalization of its management. On the other hand, we cannot conclude that the management practice is consolidated, since there are weaknesses in the structuring of the technical teams and there is no management tool for implementing the elaborated plans.

Relevância: 10.00%

Publicador:

Resumo:

This work aims to analyze information technology (IT) risks in data migration procedures. It considers ALEPH, the Integrated Library System (ILS) whose data were migrated to the Library Module of the software Sistema Integrado de Gestão de Atividades Acadêmicas (SIGAA) at the Zila Mamede Central Library of the Federal University of Rio Grande do Norte (UFRN) in Natal, Brazil. The methodological procedure was a qualitative exploratory study, with a case study conducted at the library in order to better understand this phenomenon. Data were collected through semi-structured interviews with eleven (11) subjects employed at the library and at the Technology Superintendence of UFRN. The data were examined through content analysis with a thematic review process. After the data migration, the interview results were linked to the analysis units and their system records by category correspondence. The main risks detected were: data destruction; data loss; database communication failure; user response delay; and data inconsistency and duplication. These elements have implications and generate disorders that affect external and internal system users, leading to stress, duplicated work, and hassle. Some risk management measures were therefore taken, such as adequate planning, central management support, and pilot test simulations. Among their advantages, these measures reduced risk, the occurrence of problems, and possible unforeseen costs, and helped to achieve organizational objectives, among others. It can therefore be inferred that the risks present in database conversion in libraries exist and that some are predictable; however, librarians either are unaware of them or ignore them and show little concern for identifying risks in database conversion, although acknowledging them would minimize or even eliminate them.
Another important aspect is the scarcity of empirical research dealing specifically with this subject, which calls for new approaches to promote a better understanding of the matter in the corporate environment of information units.

Relevância: 10.00%

Publicador:

Resumo:

The human factor is often recognised as a major aspect of cyber-security research. Risk and situational perception are identified as key factors in the decision-making process, often playing a lead role in the adoption of security mechanisms. However, risk awareness and perception have been poorly investigated in the field of eHealth wearables. Whilst end-users often have a limited understanding of the privacy and security of wearables, assessing the perceived risks and consequences will help shape the usability of future security mechanisms. This paper presents a survey of the risks and situational awareness in eHealth services. An analysis of the lack of security and privacy measures in connected health devices is described, with recommendations to circumvent critical situations.

Relevância: 10.00%

Publicador:

Resumo:

This bachelor's thesis is a literature review whose goal is to identify the uses of data analytics and the impact of exploiting data on business. The work deals with the use of data analytics and the challenges of exploiting data effectively. It is limited to corporate financial control, where analytics is used in management accounting and financial accounting. The exponential growth rate of data volume creates new challenges and opportunities for the use of data analytics. Data itself, however, has little value to a company; value arises through processing. Although data analytics is already widely studied and used, it offers opportunities far greater than its current applications. One of the key findings of this work is that data analytics can make management accounting more efficient and ease financial accounting tasks. The amount of available data, however, grows so fast that the available technology and level of expertise cannot keep up. In particular, the broader adoption of big data and its effective exploitation will increasingly affect the practices and applications of financial control in the future.

Relevância: 10.00%

Publicador:

Resumo:

MEDEIROS, Rildeci; MELO, Erica S. F.; NASCIMENTO, M. S. Hemeroteca digital temática: socialização da informação em cinema. In: SEMINÁRIO NACIONAL DE BIBLIOTECAS UNIVERSITÁRIAS, 15., 2008, São Paulo. Anais eletrônicos... São Paulo: CRUESP, 2008. Disponível em: http://www.sbu.unicamp.br/snbu2008/anais/site/pdfs/3018.pdf

Relevância: 10.00%

Publicador:

Resumo:

From November 2015 to March 2016, I assigned my graduate assistant, David Durden, a project to compile usage statistics and trends for digitized collections from 2013 to 2015 in UMD Digital Collections and for our contributions to the Internet Archive from 2008 to 2015. The original intent of the project was to provide usage metrics to assist the Digitization Initiatives Committee in prioritizing projects or content areas. The project also uncovered trends that should shape how we think about making digital collections discoverable and accessible. For example, if 50-60% of traffic into UMD Digital Collections comes from outside the University or College Park, MD, how will this affect the potential usage of content when access is restricted to campus due to licensing, copyright, or ownership restrictions? With a growing population using mobile browsers, how will a Flash-based viewer restrict users’ access to content? How might we develop content or its discoverability for a growing social media user base? In this talk, I will briefly discuss the usage trends for the represented collections, how we may use them in prioritizing future projects, and the issues I will discuss with collection managers as we develop project plans and with the Manager of Digital Programs and Initiatives as we develop the digital collections repository.

Relevância: 10.00%

Publicador:

Resumo:

This presentation was one of four given during a Mid-Atlantic Regional Archives Conference session on April 15, 2016. Digitization of collections can help to improve internal workflows, make materials more accessible, and create new and engaging relationships with users. Laurie Gemmill Arp will discuss the LYRASIS Digitization Collaborative, created to assist institutions with their digitization needs, and how it has worked to help institutions increase connections with users. Robin Pike from the University of Maryland will discuss how they factor requests for access into selection for digitization and how they track the use of digitized materials. Laura Drake Davis of James Madison University will discuss the establishment of a formal digitization program, its impact on users, and the resulting increased use of their collections. Linda Tompkins-Baldwin will discuss Digital Maryland’s partnership with the Digital Public Library of America to provide access to archives held by institutions without a digitization program.

Relevância: 10.00%

Publicador:

Resumo:

With the world of professional sports shifting towards better sport analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow any kind of sensors or other wearable markers to be attached to players for monitoring their performance during competitions. This opens a potential application for systematic observation, such as tracking information about the players, to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a larger ongoing collaboration between sport scientists and computer scientists that also involves industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using team sports such as rugby, football or hockey as examples. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to deal with the difficulties of visual tracking, handling target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors, such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which address these challenges and provide robust and accurate frameworks for estimating the target state in complex tracking scenarios such as sports scenes, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty, namely data association, which needs to be addressed.
Here, the aim is to locate all targets of interest, infer their trajectories, and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this problem, especially in sports scenes, where players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between the players. The presented approach is evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is used as the basic input for higher-level performance analysis, such as tactics and team formations, which can help coaches design a better training plan. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours such as players’ interactions. The proposed framework relies on two distinct levels of performance analysis: low-level analysis, such as identifying player positions on the field, and high-level analysis, where the aim is to estimate the density of player locations or detect possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
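The data association step the abstract describes, matching new detections to existing track trajectories, can be sketched in miniature. The brute-force solver and all positions below are illustrative assumptions, not the thesis's actual method (real systems use the Hungarian algorithm or probabilistic data association):

```python
# Hypothetical sketch of frame-to-frame data association for multi-target
# tracking: match predicted track positions to unordered detections by
# minimising total Euclidean distance.
from itertools import permutations
import math

def associate(tracks, detections):
    """Return, for each track, the index of its assigned detection
    (assumes equal numbers of tracks and detections)."""
    def cost(assignment):
        return sum(math.dist(tracks[i], detections[j])
                   for i, j in enumerate(assignment))
    # Exhaustive search is fine for a handful of players; at scale this
    # would be replaced by the Hungarian algorithm (O(n^3)).
    return min(permutations(range(len(detections))), key=cost)

# Two players move slightly between frames; association recovers identity
# even though the detections arrive in a different order.
tracks = [(0.0, 0.0), (10.0, 10.0)]      # predicted positions
detections = [(10.5, 9.8), (0.2, 0.1)]   # new detections, unordered
print(associate(tracks, detections))      # → (1, 0)
```

Matching ambiguity arises exactly when several assignments have near-equal cost, e.g. when same-team players occlude each other.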

Relevância: 10.00%

Publicador:

Resumo:

Americans are accustomed to a wide range of data collection in their lives: censuses, polls, surveys, user registrations, and disclosure forms. When logging onto the Internet, users’ actions are tracked everywhere: clicking, typing, tapping, swiping, searching, and placing orders. All of this data is stored to create data-driven profiles of each user. Social network sites, furthermore, set the voluntary sharing of personal data as the default mode of engagement. But the time and energy people devote to creating this massive amount of data, on paper and online, are taken for granted. Few people would consider the time and energy they spend on data production to be labor. Even those who do acknowledge their labor for data believe it is accessory to the activities at hand. In the face of pervasive data collection and the rising time spent on screens, why do people keep ignoring their labor for data? How has labor for data become invisible, something disregarded by many users? What does invisible labor for data imply for everyday cultural practices in the United States? Invisible Labor for Data addresses these questions. I argue that three intertwined forces contribute to framing data production as being void of labor: data production institutions throughout history, the Internet’s technological infrastructure (especially the implementation of algorithms), and the multiplication of virtual spaces. There is a common tendency in the framework of human interactions with computers to deprive data and bodies of their materiality. My Introduction and Chapter 1 offer theoretical interventions by reinstating embodied materiality and redefining labor for data as an ongoing process. The middle chapters present case studies explaining how labor for data is pushed to the margin of narratives about data production. I focus on a nationwide debate in the 1960s on whether the U.S. should build a databank, contemporary Big Data practices in the data broker and Internet industries, and the people hired to produce data for other people’s avatars in virtual games. I conclude with a discussion of how the development of crowdsourcing projects may usher in a new chapter in exploiting invisible and discounted labor for data.

Relevância: 10.00%

Publicador:

Resumo:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly being used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages it provides, such as scalability, elasticity, availability, low cost of ownership, and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
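The sampling-based progressive analytics the abstract describes, early approximate answers from progressively larger samples, can be sketched in a few lines. The function name, fractions, and fixed seed below are illustrative assumptions, not the NOW! API; NOW! provides these semantics inside a SQL engine:

```python
# Illustrative sketch of progressive analytics: evaluate an aggregate over
# nested, progressively larger samples so that early approximate answers
# arrive long before the full scan completes.
import random

def progressive_mean(data, fractions=(0.01, 0.1, 1.0), seed=42):
    """Yield (fraction, estimate) pairs from nested, reproducible samples."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)          # a fixed seed gives repeatable semantics
    for f in fractions:
        k = max(1, int(len(shuffled) * f))
        sample = shuffled[:k]      # nested prefixes let work be reused
        yield f, sum(sample) / k

data = list(range(1_000_000))      # true mean is 499999.5
for frac, est in progressive_mean(data):
    print(f"{frac:>5.0%} sample -> mean ≈ {est:,.1f}")
```

Because the samples are nested prefixes of one shuffle, each refinement can reuse the partial sums of the previous one, which is the work-reuse property the abstract contrasts with manual re-sampling workflows.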