435 results for analytics
Abstract:
NanoStreams explores the design, implementation, and system software stack of micro-servers aimed at processing data in-situ and in real time. These micro-servers can serve the emerging Edge computing ecosystem, namely the provisioning of advanced computational, storage, and networking capability near data sources to achieve both low-latency event processing and high-throughput analytical processing, before considering off-loading some of this processing to high-capacity datacentres. NanoStreams explores a scale-out micro-server architecture that can achieve equivalent QoS to that of conventional rack-mounted servers for high-capacity datacentres, but with dramatically reduced form factors and power consumption. To this end, NanoStreams introduces novel solutions in programmable & configurable hardware accelerators, as well as the system software stack used to access, share, and program those accelerators. Our NanoStreams micro-server prototype has demonstrated 5.5× higher energy efficiency than a standard Xeon server. Simulations of the micro-server's memory system, extended to leverage hybrid DDR/NVM main memory, indicated 5× higher energy efficiency than a conventional DDR-based system.
Abstract:
Thesis (Master's)--University of Washington, 2016-03
Abstract:
Currently the world around us "reboots" every minute, and staying at the forefront seems an arduous task. The continuous and accelerating progress of society requires a dynamic and efficient attitude from all actors, both in monitoring that progress and in adapting to it. With regard to education, no matter how up to date we are with content, didactic strategies, and technological resources, we are inevitably compelled to adapt to new paradigms and rethink traditional teaching methods. It is in this context that e-learning platforms arise: teachers and students have at their disposal new ways to enhance the teaching and learning process, and these platforms are now seen as significant virtual environments supporting teaching and learning. This paper presents a project and attempts to illustrate the potential of new technologies as a supporting tool in different stages of teaching and learning, at different levels and in different areas of knowledge, particularly in Mathematics. We intend to promote a constructive discussion, presenting our current perception: that the use of the Learning Management System Moodle by Higher Education teachers, as a supplementary teaching-learning environment for virtual classroom sessions, can contribute to greater efficiency and effectiveness of teaching practice and improve student achievement. Regarding the learning analytics experience, we present some results obtained with assessment learning analytics tools, through which we came to appreciate that assessing students' performance in online learning environments is a challenging and demanding task.
Abstract:
The insurance sector currently faces several difficulties, driven not only by the international economic crisis and an increasingly competitive market, but also by the requirements imposed by the regulator, the Instituto de Seguros de Portugal (ISP). Only insurers that manage to monitor their risks and adjust the premiums they charge accordingly will survive, and the way to do so is through adequate pricing. In this context of high instability, Business Intelligence (BI) platforms have been playing an increasingly important role in decision making, in particular Business Analytics (BA), which provides the analytical methods and tools. The goal of this project is to develop a prototype BA solution that provides the inputs needed for decision making, by monitoring the current tariff and simulating the impact of introducing a new one. The solution covers only the motor third-party liability tariff (responsabilidade civil automóvel, RCA). At the level of analytical tools, the focus was visual analysis, namely the construction of dashboards, including sensitivity or what-if analysis (WIF). The motivation for this project was the observed absence of solutions for this purpose in the professional environments in which I have been involved.
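To make the what-if idea concrete, here is a minimal sketch of the kind of tariff simulation such a dashboard could drive; the column names, portfolio values, and the 5% adjustment are illustrative assumptions, not the project's actual tariff model:

```python
# Hypothetical what-if simulation: apply a candidate tariff change to a
# small motor (RCA) portfolio and measure the premium impact.
import pandas as pd

portfolio = pd.DataFrame({
    "policy_id": [1, 2, 3],
    "current_premium": [320.0, 410.0, 275.0],  # EUR, illustrative values
    "risk_factor": [1.0, 1.3, 0.9],
})

# What-if scenario: the new tariff raises the base rate by 5% but
# discounts low-risk policies by a further 3%.
new_premium = portfolio["current_premium"] * 1.05
new_premium[portfolio["risk_factor"] < 1.0] *= 0.97

print("premium impact (EUR):", (new_premium - portfolio["current_premium"]).sum())
```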
Abstract:
This project attempts to provide an in-depth competitive assessment of the Portuguese indoor location-based analytics market, and to elaborate an entry-pricing strategy for the implementation of the Business Intelligence Positioning System (BIPS) in Portuguese shopping centre stores. The role of industry forces and of the company's organizational resource platform in sustaining its competitive advantage was explored. A customer value-based pricing approach was adopted to assess the value of BIPS to retailers and maximize Sonae Sierra's profitability. The exploratory quantitative research found that there is a market opportunity to address every store area type with tailored proposals, and to set higher-than-tested membership fees to allow a rapid ROI, concluding that conditions are propitious for Sierra to succeed with the BIPS in-store business model in Portugal.
Abstract:
Objectives: An email information literacy program has been effective for over a decade at Université de Montréal's Health Library. Students periodically receive messages highlighting the content of guides on the library's website. We wish to evaluate, using Google Analytics, the effects of the program on specific webpage statistics. Using the data collected, we may pinpoint popular guides as well as others that need improvement. Methods: In the program, first- and second-year medical (MD) or dental (DMD) students receive eight bi-monthly email messages. The DMD mailing list also includes graduate students and professors. Enrollment in the program is optional for MDs, but mandatory for DMDs. Google Analytics (GA) profiles have been configured for the library's websites to collect visitor statistics since June 2009. The GA Links Builder was used to design unique links specifically associated with the originating emails. This approach allowed us to gather information on guide usage, such as the visitor's program of study, duration of page viewing, and number of pages viewed per visit, as well as browsing data. We also followed the evolution of clicks on GA unique links over time, as we believed that users may keep the library's emails and refer to them to access specific information. Results: The proportion of students who actually clicked the email links was, on average, less than 5%. MD and DMD students behaved differently regarding guide views, number of pages visited, and length of time on the site. The CINAHL guide was the most visited by DMD students, whereas MD students consulted the Pharmaceutical information guide most often. We noted that some students visited the referred guides several weeks after receiving messages, thus keeping them for future reference; browsing to additional pages on the library website was also frequent. Conclusion: The mixed success of the program prompted us to directly survey students on the format, frequency, and usefulness of the messages. The information gathered from GA links as well as from the survey will allow us to redesign our web content and modify our email information literacy program so that messages are more attractive, timely, and useful for students.
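As a rough illustration of how such unique links work, the sketch below builds a guide URL tagged with the standard utm_* campaign parameters that Google Analytics attributes to a traffic source; the guide path and campaign names are hypothetical:

```python
# Build a campaign-tagged URL of the kind GA attributes to an email.
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_link(base_url: str, source: str, medium: str, campaign: str) -> str:
    """Append utm_source/utm_medium/utm_campaign parameters to a URL."""
    parts = urlsplit(base_url)
    params = urlencode({"utm_source": source,
                        "utm_medium": medium,
                        "utm_campaign": campaign})
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query,
                       parts.fragment))

# e.g. a guide link embedded in the third message to MD students
# (URL and campaign name are invented for illustration):
print(tag_link("https://bib.umontreal.ca/guides/cinahl",
               source="info-literacy-program", medium="email",
               campaign="md-message-03"))
```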
Abstract:
In this lecture for a second-year interdisciplinary course (part of the curriculum innovation programme) we explore the scope of social media analytics and look at two aspects in depth: analysing for influence (looking at factors such as network structure, propagation of content, and interaction), and analysing for trust (looking at different methods including policy, provenance, and reputation, both local and global). The lecture notes include a number of short videos, which cannot be included here for copyright reasons.
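As a taste of the influence side, the sketch below ranks accounts in a toy interaction network with PageRank; this is one stand-in for the structural factors the lecture surveys, not the lecture's own material:

```python
# Rank accounts by network structure using PageRank (networkx).
import networkx as nx

g = nx.DiGraph()
# Hypothetical interaction edges: an edge u -> v means u retweets/mentions v.
g.add_edges_from([("ana", "ben"), ("carl", "ben"),
                  ("ben", "dana"), ("carl", "dana")])

scores = nx.pagerank(g)
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # most influential first
```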
Abstract:
Wednesday 26th March 2014. Speaker(s): Dr Trung Dong Huynh. Organiser: Dr Tim Chown. Time: 26/03/2014 11:00-11:50. Location: B32/3077. File size: 349 MB. Abstract: Understanding the dynamics of a crowdsourcing application and controlling the quality of the data it generates is challenging, partly due to the lack of tools to do so. Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer their quality. It can also reveal the processes that led to a data item and the interactions of contributors with it. Provenance patterns can manifest real-world phenomena such as significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. In this talk, I will present an application-independent methodology for analysing provenance graphs, constructed from provenance records, to learn about such patterns and to use them to assess key properties of crowdsourced data, such as their quality, in an automated manner. I will also talk about CollabMap (www.collabmap.org), an online crowdsourcing mapping application, and show how we applied the above approach to the trust classification of data generated by the crowd, achieving an accuracy of over 95%.
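A hedged sketch of the general idea (the talk's actual feature set and classifier are not specified here): summarise each provenance graph as a vector of network metrics that a standard classifier can then map to a trust label:

```python
# Turn a provenance graph into a fixed-length feature vector (networkx).
import networkx as nx

def provenance_features(g: nx.DiGraph) -> list[float]:
    """Illustrative graph metrics; real feature sets would be richer."""
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        # in-degree of the most-referenced node (e.g. a popular data item)
        max((d for _, d in g.in_degree()), default=0),
        # depth of the derivation chain, if the graph is acyclic
        nx.dag_longest_path_length(g) if nx.is_directed_acyclic_graph(g) else 0,
    ]
```

Vectors like these can then be fed to any off-the-shelf classifier trained on labelled examples, which is broadly how automated trust classification of crowdsourced data can proceed.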
Abstract:
Resources from the Singapore Summer School 2014 hosted by NUS. ws-summerschool.comp.nus.edu.sg
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. tsunami early warning centres, civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present ongoing work by the IT Innovation Centre over the last four years (the TRIDEC and REVEAL FP7 projects) on building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
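The first stage of such a pipeline can be sketched as a naive gazetteer lookup; real systems layer NLP disambiguation on top precisely to cut the false positives mentioned above (the place names and coordinates below are illustrative):

```python
# Deliberately naive geoparsing: match message tokens against a gazetteer.
GAZETTEER = {  # hypothetical entries: place name -> (lat, lon)
    "lisbon": (38.7223, -9.1393),
    "santorini": (36.3932, 25.4615),
}

def geoparse(text: str):
    """Return (place, coordinates) pairs naively matched in a message."""
    tokens = text.lower().split()
    return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]

print(geoparse("Earthquake reported near Santorini this morning"))
```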
Abstract:
An emerging consensus in cognitive science views the biological brain as a hierarchically organized predictive processing system, in which higher-order regions continuously attempt to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state of the art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems': systems that situate a hierarchically organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
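The core loop the abstract describes can be reduced to a toy: a higher level predicts a sensory signal and revises its estimate from the prediction error. This is purely illustrative, not a model from the talk:

```python
# Toy predictive-processing loop: minimise prediction error over time.
import numpy as np

rng = np.random.default_rng(0)
signal = 3.0 + 0.1 * rng.standard_normal(200)  # noisy sensory stream

estimate, rate = 0.0, 0.05
for s in signal:
    error = s - estimate      # prediction error reported upward
    estimate += rate * error  # higher level revises its prediction
print(round(estimate, 2))     # settles near the stream's mean (~3.0)
```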
Abstract:
Abstract taken from the publication.
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and interpreting those objects' appearance and movements with respect to models of behaviour (from which intentions are then inferred). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers over the surveilled.
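The lowest level described, detecting moving objects in CCTV footage, can be sketched with background subtraction in OpenCV; the input file below is hypothetical, and tracking and behaviour models would sit on top of detections like these:

```python
# Detect candidate moving objects via background subtraction (OpenCV).
import cv2

cap = cv2.VideoCapture("cctv.mp4")  # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground (moving) pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large contour is a candidate object to track.
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

cap.release()
```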