56 results for Google Analytics


Relevance: 10.00%

Abstract:

The quantity and quality of spatial data are increasing rapidly. This is particularly evident in the case of movement data. Devices capable of accurately recording the position of moving entities have become ubiquitous and created an abundance of movement data. Valuable knowledge concerning processes occurring in the physical world can be extracted from these large movement data sets. Geovisual analytics offers powerful techniques to achieve this. This article describes a new geovisual analytics tool specifically designed for movement data. The tool features the classic space-time cube augmented with a novel clustering approach to identify common behaviour. These techniques were used to analyse pedestrian movement in a city environment, which demonstrated the tool's effectiveness for identifying spatiotemporal patterns. © 2014 Taylor & Francis.

Relevance: 10.00%

Abstract:

Recent technological advances have increased the quantity of movement data being recorded. While valuable knowledge can be gained by analysing such data, its sheer volume creates challenges. Geovisual analytics, which helps the human cognition process by using tools to reason about data, offers powerful techniques to resolve these challenges. This paper introduces such a geovisual analytics environment for exploring movement trajectories, which provides visualisation interfaces, based on the classic space-time cube. Additionally, a new approach, using the mathematical description of motion within a space-time cube, is used to determine the similarity of trajectories and forms the basis for clustering them. These techniques were used to analyse pedestrian movement. The results reveal interesting and useful spatiotemporal patterns and clusters of pedestrians exhibiting similar behaviour.
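The trajectory-clustering step described above can be illustrated with a small sketch. The distance function and greedy threshold clustering below are illustrative assumptions, not the paper's actual similarity measure (which is based on the mathematical description of motion within the space-time cube); trajectories are assumed to be lists of (x, y) positions sampled at common time steps.

```python
import math

def stc_distance(traj_a, traj_b):
    """Average spatial distance between two trajectories sampled at the
    same time steps inside the space-time cube (lists of (x, y) points)."""
    assert len(traj_a) == len(traj_b)
    return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

def cluster(trajs, threshold):
    """Greedy threshold clustering: assign each trajectory to the first
    cluster whose representative lies within `threshold`, else start a
    new cluster with this trajectory as its representative."""
    reps, clusters = [], []
    for t in trajs:
        for i, rep in enumerate(reps):
            if stc_distance(t, rep) <= threshold:
                clusters[i].append(t)
                break
        else:
            reps.append(t)
            clusters.append([t])
    return clusters
```

For example, two pedestrians walking side by side fall into one cluster, while a third following a different route starts a new one.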

Relevance: 10.00%

Abstract:

Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.

Relevance: 10.00%

Abstract:

At the formation of the new Republic of Ireland, the construction of new infrastructures was seen as an essential element in the building of the new nation, just as the adoption of international style modernism in architecture was perceived as a way to escape the colonial past. Accordingly, infrastructure became the physical manifestation, the concrete identity of these objectives and architecture formed an integral part of this narrative. Moving between scales and from artefact to context, Infrastructure and the Architectures of Modernity in Ireland 1916-2016 provides critical insights and narratives on what is a complex and hitherto overlooked landscape, one which is often as much international as it is Irish. In doing so, it explores the interaction between the universalising and globalising tendencies of modernisation on one hand and the textures of local architectures on the other.

The book shows how the nature of technology and infrastructure is inherently cosmopolitan. Beginning with the building of the heroic Shannon hydro-electric facility at Ardnacrusha by the German firm of Siemens-Schuckert in the first decade of independence, Ireland became a point of varying types of intersection between imported international expertise and local need. Meanwhile, at the other end of the century, by the year 2000, Ireland had become one of the most globalized countries in the world, site of the European headquarters of multinationals such as Google and Microsoft. Climatically and economically expedient to the storing and harvesting of data, Ireland has subsequently become a repository of digital information farmed in large, single-storey sheds absorbed into anonymous suburbs. In 2013, it became the preferred site for Intel to design and develop its new microprocessor chip: the Galileo. The story of the decades in between, of shifts made manifest in architecture and infrastructure from the policies of economic protectionism, to the opening up of the country to direct foreign investment and the embracing of the EU, is one of the influx of technologies and cultural references into a small country on the edges of Europe as Ireland became both a launch-pad and testing ground for a series of aspects of designed modernity.

Relevance: 10.00%

Abstract:

After an open competition, we were selected to commission, curate and design the Irish pavilion for the Venice Biennale 2014. Our proposal engaged with the role of infrastructure and architecture in the cultural development of the new Irish state, 1914-2014. This curatorial programme was realised in a demountable, open-matrix pavilion measuring 12 x 5 x 6 metres.

How modernity is absorbed into national cultures usually presupposes an attachment to previous conditions and a desire to reconcile the two. In an Irish context, due to the processes of de-colonisation and political independence, this relationship is more complicated.

In 1914, Ireland was largely agricultural and lacked any significant industrial complex. The construction of new infrastructures after independence in 1921 became central to the cultural imagining of the new nation. The adoption of modernist architecture was perceived as a way to escape the colonial past. As the desire to reconcile cultural and technological aims developed, these infrastructures became both the physical manifestation and concrete identity of the new nation with architecture an essential element in this construct.

Technology and infrastructure are inherently cosmopolitan. Beginning with the Shannon hydro-electric facility at Ardnacrusha (1929) involving the German firm of Siemens-Schuckert, Ireland became a point of various intersections between imported international expertise and local need. By the turn of the last century, it had become one of the most globalised countries in the world, site of the European headquarters of multinationals such as Google and Microsoft. Climatically and economically expedient to the storing and harvesting of data, Ireland has subsequently become an important repository of digital information farmed in large, single-storey sheds absorbed into dispersed suburbs. In 2013, it became the preferred site for Intel to design and develop its new microprocessor board, the Galileo, a building block for the internet of things.

The story of the decades in between, of shifts made manifest in architecture and infrastructure, from the policies of economic protectionism to the embracing of the EU is one of the influx of technologies and cultural references into a small country on the edges of Europe: Ireland as both a launch-pad and testing ground for a series of aspects of designed modernity.

Relevance: 10.00%

Abstract:

In this paper we present a new event recognition framework, based on the Dempster-Shafer theory of evidence, which combines the evidence from multiple atomic events detected by low-level computer vision analytics. The proposed framework employs evidential network modelling of composite events. This approach can effectively handle the uncertainty of the detected events, whilst inferring high-level events that have semantic meaning with high degrees of belief. Our scheme has been comprehensively evaluated against various scenarios that simulate passenger behaviour on public transport platforms such as buses and trains. The average accuracy rate of our method is 81% in comparison to 76% by a standard rule-based method.
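Dempster's rule of combination, on which this framework rests, can be sketched as follows. The mass-function representation and the event labels in the usage example are hypothetical, chosen only to illustrate how evidence from two atomic-event detectors is fused; they are not taken from the paper.

```python
def combine(m1, m2):
    """Dempster's rule: fuse two mass functions (dict: frozenset -> mass).

    Products of masses whose focal elements intersect accumulate on the
    intersection; fully conflicting products are discarded and the
    remaining mass is renormalised.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict  # mass remaining after discarding conflict
    return {focal: mass / k for focal, mass in combined.items()}

# Hypothetical atomic-event detectors reporting on the frame {"sit", "stand"}:
theta = frozenset({"sit", "stand"})
m_video = {frozenset({"sit"}): 0.7, theta: 0.3}
m_depth = {frozenset({"sit"}): 0.6, theta: 0.4}
fused = combine(m_video, m_depth)  # mass on {"sit"} rises to 0.88
```

Combining the two sources raises the mass on {sit} above either detector's individual confidence, which is how an evidential network accumulates belief in a composite event despite uncertain low-level detections.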

Relevance: 10.00%

Abstract:

This paper presents an event recognition framework, based on Dempster-Shafer theory, that combines evidence of events from low-level computer vision analytics. The proposed method, employing evidential network modelling of composite events, is able to represent the uncertainty of event output from low-level video analysis and to infer high-level events with semantic meaning, along with degrees of belief. The method has been evaluated on videos of subjects entering and leaving a seated area. This has relevance to a number of transport scenarios, such as onboard buses and trains, and also in train stations and airports. Recognition results of 78% and 100% for four composite events are encouraging.

Relevance: 10.00%

Abstract:

Experiences from smart grid cyber-security incidents in the past decade have raised questions about the applicability and effectiveness of the security measures and protection mechanisms applied to the grid. In this chapter we focus on the security measures applied under real circumstances in today's smart grid systems. Beginning from real-world example implementations, we first review cyber-security incidents that affected the electrical grid, from US blackout incidents to the Dragonfly cyber-espionage campaign currently targeting US and European energy firms. Given this real-world setting, we provide information on the energy management of a smart grid, also examining the optimisation techniques that power-control engineers apply to grid components. We examine the application of various security tools in smart grid systems, such as intrusion detection systems, smart meter authentication and key management using Physical Unclonable Functions, security analytics and resilient control algorithms. Furthermore, we present evaluation use cases of security tools applied to smart grid infrastructure test-beds, which can prove important prior to their deployment in the real grid, describing a smart grid intrusion detection system application and security analytics results. Anticipated experimental results from the use cases, and conclusions about the successful transition of security measures to real-world smart grid operations, are presented at the end of the chapter.

Relevance: 10.00%

Abstract:

This special issue provides the latest research and development on wireless mobile wearable communications. According to a report by Juniper Research, the market value of connected wearable devices is expected to reach $1.5 billion by 2014, and the shipment of wearable devices may reach 70 million by 2017. Good examples of wearable devices are the prominent Google Glass and Microsoft HoloLens. As wearable technology is rapidly penetrating our daily life, mobile wearable communication is becoming a new communication paradigm. Mobile wearable device communications create new challenges compared to ordinary sensor networks and short-range communication. In mobile wearable communications, devices communicate with each other in a peer-to-peer fashion or client-server fashion and also communicate with aggregation points (e.g., smartphones, tablets, and gateway nodes). Wearable devices are expected to integrate multiple radio technologies for various applications' needs with small power consumption and low transmission delays. These devices can hence collect, interpret, transmit, and exchange data among supporting components, other wearable devices, and the Internet. Such data are not limited to people's personal biomedical information but also include human-centric social and contextual data. The success of mobile wearable technology depends on communication and networking architectures that support efficient and secure end-to-end information flows. A key design consideration of future wearable devices is the ability to ubiquitously connect to smartphones or the Internet with very low energy consumption. Radio propagation and, accordingly, channel models are also different from those in other existing wireless technologies. A huge number of connected wearable devices require novel big data processing algorithms, efficient storage solutions, cloud-assisted infrastructures, and spectrum-efficient communications technologies.

Relevance: 10.00%

Abstract:

Objective: This review intends to examine current research surrounding economic assessment in the delivery of dental care. Economic evaluation is an acknowledged method of analysing dental care systems by means of efficiency, effectiveness, efficacy and availability. Though this is a widely used method in medicine, it is underappreciated in dentistry. As the delivery of health care changes there has been recent demand by the public, the profession, and those funding dental treatment to investigate current practices regarding programs themselves and resource allocation.
Methods: A meta-analysis was conducted regarding health economics. The initial search was carried out using PubMed, Google Scholar, ScienceDirect, and The Cochrane Library with the search terms “health AND economics AND dentistry”. A secondary search was conducted with the terms “health care AND dentistry AND”. The third part of the entry was varied to address the aims and included the following terms: “cost benefit analysis”, “efficiency criteria”, “supply & demand”, “cost-effectiveness”, “cost minimisation”, “cost utility”, “resource allocation”, “QALY”, and “delivery and economics”. Limits were applied to all searches to include only papers published in English within the last eight years.
Results: Preliminary results demonstrated that a limited number of economic evaluations have been conducted in dentistry. Those that were carried out were mainly confined to the United Kingdom. Furthermore, analysis was mainly restricted to restorative dentistry, followed by orthodontics and maxillofacial surgery, demonstrating a need for investigation across all fields of dentistry.
Conclusion: Health economics has been overlooked in the past regarding delivery of dental care and resource allocation. Economic appraisal is a crucial part of generating an effective and efficient dental care system. It is becoming increasingly evident that there is a need for economic evaluation in all dental fields.

Relevance: 10.00%

Abstract:

Introduction
The use of video capture of lectures in Higher Education is not a recent occurrence, with web-based learning technologies, including digital recording of live lectures, becoming increasingly commonly offered by universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade the growth in technical infrastructural provision, including the availability of high-speed broadband, has increased the potential and use of video lecture capture. This has led to a variety of lecture-capture formats, including podcasting, live streaming, and delayed broadcasting of whole or part lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the simulated recording of lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of videoing lectures.
The clustered areas of research in video lecture capture have the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the body of the research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al, 2012).
Generally, the research has reviewed the benefits of lecture capture positively for both students and faculty. This perception, coupled with technical infrastructure improvements and student demand, may well mean that the use of video lecture capture will continue to increase in tertiary education in the coming years. However, there is relatively little research on the effects of lecture capture specifically in the area of computer programming, with the work of Watkins et al (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and iterative nature of the actual code-development process (Watkins et al 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and the video of on-screen projected content were recorded) impacted the delivery and learning of a programme of study on the MSc Software Development course at Queen's University Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates from non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. It also analyses and reports on empirical data from attendance records and various video-viewing statistics. In addition, qualitative data was collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 post graduate students taking a compulsory module in Java programming in the first semester of a one year MSc in Software Development. A pre-course survey of students found that 58% preferred to have available videos of “key moments” of lectures rather than whole lectures. A large scale study carried out by Guo concluded that “shorter videos are much more engaging” (Guo 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of each lecture directly before or after its physical delivery, in a quiet environment, and then uploaded the video directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by related coding demonstrations, and faithfully replicated the face-to-face lecture. Generally, each lecture was supported by two to three videos of 20–30 minutes each.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one. To assess the effect on attendance for the programming module, a control was established: a Database module taken by the same students in the same semester.
Access engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported on the following areas for all videos and for each individual video: views (hits), audience retention, viewing devices / operating systems used, and minutes watched.
Student attitudes
Three surveys were conducted to investigate student attitudes towards the videoing of lectures: the first before the start of the programming module, the second at the mid-point, and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The midpoint survey gathered data on how students were individually using the system up to that point. This included feedback on how many videos an individual had watched, viewing duration, primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the midpoint survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, showing that there is little or no effect on attendance at lectures. The videos appear to help promote continual learning, but they are particularly accessed by students during assessment periods. Students respond positively to the ability to access lectures digitally, as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There are also significant benefits to recording parts of lectures rather than whole lectures. The viewing-behaviour analytics suggest that, despite the increasing popularity of online learning via MOOCs and the promotion of video learning on mobile devices, in this study the vast majority of students accessed the online videos at home on laptops or desktops. However, this is likely due in part to the nature of the taught subject, programming.
The research involved pre-recording the lecture in smaller timed units and then uploading them for distribution, to counteract existing quality issues with recording entire live lectures. However, advances in, and the consequent improvement in quality of, in-situ lecture-capture equipment may well negate the need to record elsewhere. The research has also highlighted a potentially very significant use for performance analysis and improvement that could have major implications for the quality of teaching: a study of the analytics of video viewings could provide a rapid formative-feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or later, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.

Relevance: 10.00%

Abstract:

The worldwide scarcity of women studying or employed in ICT, or in computing-related disciplines, continues to be a topic of concern for industry, the education sector and governments. Within Europe, while females make up 46% of the workforce, only 17% of IT staff are female. A similar gender-divide trend is repeated worldwide, with top technology employers in Silicon Valley, including Facebook, Google, Twitter and Apple, reporting that only 30% of their workforce is female (Larson 2014). Previous research into this gender divide suggests that young women in Secondary Education display a more negative attitude towards computing than their male counterparts. It would appear that this negative perception of computing has led to representatively low numbers of women studying ICT at tertiary level and, consequently, an under-representation of females within the ICT industry. The aims of this study are to 1) establish a baseline understanding of the attitudes and perceptions of Secondary Education pupils in regard to computing and 2) statistically establish whether young females in Secondary Education really do have a more negative attitude towards computing.

Relevance: 10.00%

Abstract:

We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
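The iso-QoS comparison can be sketched as follows. The platform names mirror the abstract, but the throughput and energy numbers are illustrative assumptions, not measurements from the paper.

```python
def efficiency(options_priced, energy_joules):
    """Workload-specific efficiency: option pricings completed per joule."""
    return options_priced / energy_joules

def best_iso_qos(platforms, demand):
    """Among platforms meeting the throughput demand (i.e. 100% QoS),
    pick the one that consumes the least energy."""
    feasible = [p for p in platforms if p["throughput"] >= demand]
    return min(feasible, key=lambda p: p["energy"])

# Illustrative numbers only: a scaled-out ARM microserver cluster versus a
# power-equivalent two-socket Sandy Bridge server at equal throughput.
platforms = [
    {"name": "ARM microserver (scaled out)", "throughput": 1000, "energy": 55.0},
    {"name": "Sandy Bridge server", "throughput": 1000, "energy": 100.0},
]
winner = best_iso_qos(platforms, demand=1000)  # the ARM cluster, at 55% of the energy
```

Comparing at equal QoS, rather than comparing raw throughput or raw power alone, is what allows a cluster of slower cores to win on energy.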

Relevance: 10.00%

Abstract:

Medicines reconciliation is a way to identify and act on discrepancies in patients' medical histories, and it plays a key role in patient safety. This review focuses on discrepancies and medication errors that occurred at the point of discharge from hospital. Studies were identified through the following electronic databases: PubMed, ScienceDirect, EMBASE, Google Scholar, Cochrane Reviews and CINAHL. Each of the six databases was screened from inception to the end of January 2014. To determine eligibility, the title, abstract and full manuscript of each study were screened, yielding 15 articles that met the inclusion criteria. The median rate of discrepancies across the articles was 60%. On average, patients had between 1.2 and 5.3 discrepancies when leaving hospital. Several studies also found a relationship between the number of drugs a patient was taking and the number of discrepancies. The variation in the number of discrepancies found across the 15 studies could be due to the fact that some studies excluded patients taking more than 5 drugs at admission. Medication reconciliation would be a way to avoid the high number of discrepancies found in this literature review and thereby increase patient safety.