900 results for Google Analytics
Abstract:
The worldwide scarcity of women studying or employed in ICT, or in computing-related disciplines, continues to be a topic of concern for industry, the education sector and governments. Within Europe, while females make up 46% of the workforce, only 17% of IT staff are female. A similar gender divide is repeated worldwide, with top technology employers in Silicon Valley, including Facebook, Google, Twitter and Apple, reporting that only 30% of their workforce is female (Larson 2014). Previous research into this gender divide suggests that young women in Secondary Education display a more negative attitude towards computing than their male counterparts. It would appear that this negative female perception of computing has led to disproportionately low numbers of women studying ICT at a tertiary level and, consequently, an under-representation of females within the ICT industry. The aim of this study is to (1) establish a baseline understanding of the attitudes and perceptions of Secondary Education pupils in regard to computing and (2) establish statistically whether young females in Secondary Education really do have a more negative attitude towards computing.
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
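As a worked illustration of how an iso-QoS comparison can be computed, the Python sketch below scales each platform out to the smallest node count that meets a fixed throughput demand and compares energy per option priced. All figures (per-node throughput, power, demand) are invented placeholders, not measurements from the study.

```python
import math

# Hypothetical platform figures (illustrative only, not from the study):
# per-node sustained option-pricing throughput (options/sec) and power (W).
platforms = {
    "arm_microserver": {"throughput": 2_000, "power": 30},
    "sandy_bridge":    {"throughput": 20_000, "power": 400},
}

demand = 18_000          # assumed market throughput demand (options/sec)
qos_target = 1.0         # 100% QoS: all options priced in time

for name, p in platforms.items():
    # Scale out nodes until the QoS target on the demand is met (iso-QoS point).
    nodes = math.ceil(demand * qos_target / p["throughput"])
    energy_per_option = nodes * p["power"] / demand  # joules per option priced
    print(f"{name}: {nodes} node(s), {energy_per_option * 1e3:.2f} mJ/option")
```

With these invented numbers, the scaled-out microserver cluster prices the same option stream at a lower energy cost per option than the single larger server, which mirrors the shape of the paper's finding.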
Abstract:
Medicines reconciliation is a way to identify and act on discrepancies in patients’ medical histories, and it is found to play a key role in patient safety. This review focuses on discrepancies and medication errors that occurred at the point of discharge from hospital. Studies were identified through the following electronic databases: PubMed, ScienceDirect, EMBASE, Google Scholar, Cochrane Reviews and CINAHL. Each of the six databases was screened from inception to the end of January 2014. To determine eligibility, the title, abstract and full manuscript of each study were screened, yielding 15 articles that met the inclusion criteria. The median rate of discrepancies across the articles was 60%. On average, patients had between 1.2 and 5.3 discrepancies when leaving the hospital. Several studies also found a relationship between the number of drugs a patient was taking and the number of discrepancies. The variation in the number of discrepancies found across the 15 studies could be due to the fact that some studies excluded patients taking more than 5 drugs at admission. Medication reconciliation would be a way to avoid the high number of discrepancies found in this literature review and thereby increase patient safety.
Abstract:
As data analytics grow in importance, they are also quickly becoming one of the dominant application domains that require parallel processing. This paper investigates the applicability of OpenMP, the dominant shared-memory parallel programming model in high-performance computing, to the domain of data analytics. We contrast the performance and programmability of key data analytics benchmarks against Phoenix++, a state-of-the-art shared-memory map/reduce programming system. Our study shows that OpenMP outperforms the Phoenix++ system by a large margin for several benchmarks. In other cases, however, the programming model lacks support for this application domain.
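OpenMP and Phoenix++ both target C/C++, but the stylistic contrast the paper draws can be sketched in a language-neutral way. The hypothetical word count below shows the same computation in map/reduce form (explicit key/value emission and grouping, as in Phoenix++) and in direct form (per-worker partial results merged at the end, the shape an OpenMP parallel loop with a reduction would take).

```python
from collections import Counter

def wordcount_mapreduce(docs):
    """Phoenix++-style formulation: the map phase emits (key, value)
    pairs, then a reduce phase groups by key and combines values."""
    pairs = [(word, 1) for doc in docs for word in doc.split()]  # map
    counts = {}
    for word, one in pairs:                                      # reduce
        counts[word] = counts.get(word, 0) + one
    return counts

def wordcount_direct(docs):
    """OpenMP-style formulation: each worker builds a private partial
    result over its chunk of input, and partials are merged at the end,
    much like a parallel for-loop with a reduction clause."""
    partials = (Counter(doc.split()) for doc in docs)
    return sum(partials, Counter())

docs = ["the cat sat", "the dog sat"]
assert wordcount_mapreduce(docs) == dict(wordcount_direct(docs))
```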
Abstract:
This paper presents a new framework for multi-subject event inference in surveillance video, where measurements produced by low-level vision analytics are usually noisy, incomplete or incorrect. Our goal is to infer the composite events undertaken by each subject from noisy observations. To achieve this, we consider the temporal characteristics of event relations and propose a method to correctly associate the detected events with individual subjects. The Dempster–Shafer (DS) theory of belief functions is used to infer events of interest from the results of our vision analytics and to measure conflicts occurring during the event association. Our system is evaluated against a number of videos that present passenger behaviours on a public transport platform, namely buses, at different levels of complexity. The experimental results demonstrate that by reasoning with spatio-temporal correlations, the proposed method achieves satisfying performance when associating atomic events and recognising composite events involving multiple subjects in dynamic environments.
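Dempster's rule of combination is standard machinery; as a minimal sketch (the frame, sources and mass values below are invented for illustration, not taken from the paper's system), it shows how two sources of evidence about which subject performed an event are fused, and how the conflict mass K that the paper uses to measure disagreement arises.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame.
    Masses are dicts mapping frozenset hypotheses to belief mass.
    Returns the combined masses and the conflict K."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb      # mass falling on the empty set
    # Normalise the surviving mass by 1 - K (assumes K < 1).
    combined = {h: m / (1.0 - conflict) for h, m in combined.items()}
    return combined, conflict

# Illustrative frame: which subject performed a detected atomic event.
s1, s2 = frozenset({"subject1"}), frozenset({"subject2"})
either = s1 | s2
m_tracker = {s1: 0.6, s2: 0.1, either: 0.3}   # evidence from tracking
m_detector = {s1: 0.5, s2: 0.3, either: 0.2}  # evidence from event detector
masses, k = combine(m_tracker, m_detector)
print(masses, "conflict K =", round(k, 3))
```

A large K signals that the two evidence sources disagree about the association, which is exactly the kind of conflict measurement the abstract describes.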
Abstract:
In this book, Piotr Blumczynski explores the central role of translation as a key epistemological concept as well as a hermeneutic, ethical, linguistic and interpersonal practice. His argument is three-fold: (1) that translation provides a basis for genuine, exciting, serious, innovative and meaningful exchange between various areas of the humanities through both a concept (the WHAT) and a method (the HOW); (2) that, in doing so, it questions and challenges many of the traditional boundaries and offers a transdisciplinary epistemological paradigm, leading to a new understanding of quality, and thus also meaning, truth, and knowledge; and (3) that translational phenomena are studied by a broad range of disciplines in the humanities (including philosophy, theology, linguistics, and anthropology) using various, often seemingly unrelated concepts which nevertheless display a considerable degree of qualitative proximity. The common thread running through all these convictions and binding them together is the insistence that translational phenomena are ubiquitous. Because of its unconventional and innovative approach, this book will be of interest to translation studies scholars looking to situate their research within a broader transdisciplinary model, as well as to students of translation programs and practicing translators who seek a fuller understanding of why and how translation matters.
Abstract:
The continued use of traditional lecturing across Higher Education as the main teaching and learning approach in many disciplines must be challenged. An increasing number of studies suggest that this approach, compared to more active learning methods, is the least effective. In counterargument, the use of traditional lectures is often justified as necessary given a large student population. By analysing the implementation of a web-based broadcasting approach which replaced the traditional lecture within a programming-based module, and thereby removed the student population rationale, it was hoped that the student learning experience would become more active and ultimately enhance learning on the module. The implemented model replaces the traditional approach of students attending an on-campus lecture theatre with a web-based live broadcast approach that focuses on students being active learners rather than passive recipients. Students ‘attend’ by viewing a live broadcast of the lecturer, presented as a talking head, and the lecturer’s desktop, via a web browser. Video and audio communication is primarily from tutor to students, with text-based comments used to provide communication from students to tutor. This approach promotes active learning by allowing students to perform activities on their own computers rather than the passive viewing and listening commonly encountered in large lecture classes. Analysis of this approach over two years (n = 234 students) indicates that 89.6% of students rated it as offering a highly positive learning experience. Comparing student performance across three academic years also indicates a positive change. A small data-analytic study of student participation levels suggests that the student cohort's willingness to engage with the broadcast lecture material is high.
Abstract:
Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories to meet a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations with lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in 28 nm process technology.
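A minimal sketch of the bit-shuffling idea (the permutation and fault model below are hypothetical illustrations, not the paper's hardware scheme): if a physical memory cell is known to be faulty, storing the word under a shuffle that maps that cell to a low-significance logical bit bounds the error magnitude on read-back.

```python
WIDTH = 16

def store(word, perm):
    """Shuffle on write: physical bit i holds logical bit perm[i]."""
    return sum(((word >> perm[i]) & 1) << i for i in range(WIDTH))

def load(raw, perm):
    """Inverse shuffle on read."""
    return sum(((raw >> i) & 1) << perm[i] for i in range(WIDTH))

faulty_cell = 15                     # assume the physical MSB cell is stuck-at-0
identity = list(range(WIDTH))
# Hypothetical shuffle that maps the faulty physical cell to logical bit 0.
shuffled = identity.copy()
shuffled[faulty_cell], shuffled[0] = shuffled[0], shuffled[faulty_cell]

value = 0b1010_1100_0011_0101
for perm, label in ((identity, "no shuffle"), (shuffled, "shuffled")):
    raw = store(value, perm) & ~(1 << faulty_cell)  # fault forces the cell to 0
    error = abs(load(raw, perm) - value)
    print(f"{label}: error magnitude = {error}")
```

Without shuffling the stuck cell corrupts the most significant bit (error 32768 here); with the shuffle the same physical fault only perturbs the least significant bit (error 1), which is the error-magnitude skew the abstract describes.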
Abstract:
PURPOSE: New onset diabetes after transplantation (NODAT) is a serious complication following solid organ transplantation. There is a genetic contribution to NODAT, and we have conducted a comprehensive meta-analysis of available genetic data in kidney transplant populations.
METHODS: Relevant articles investigating the association between genetic markers and NODAT were identified by searching PubMed, Web of Science and Google Scholar. SNPs described in a minimum of three studies were included for analysis using a random effects model. The association between identified variants and NODAT was calculated at the per-study level to generate overall significance values and effect sizes.
RESULTS: Searching the literature returned 4,147 citations. Within the 36 eligible articles identified, 18 genetic variants from 12 genes were considered for analysis. Of these, three were significantly associated with NODAT by meta-analysis at the 5% level of significance: CDKAL1 rs10946398 (p = 0.006, OR = 1.43, 95% CI = 1.11-1.85; n = 696 individuals); KCNQ1 rs2237892 (p = 0.007, OR = 1.43, 95% CI = 1.10-1.86; n = 1,270 individuals); and TCF7L2 rs7903146 (p = 0.01, OR = 1.41, 95% CI = 1.07-1.85; n = 2,967 individuals).
CONCLUSION: Evaluating the cumulative evidence for SNPs associated with NODAT in kidney transplant recipients has revealed three significantly associated variants. An adequately powered, dense genome-wide association study using a carefully defined NODAT phenotype will provide more information.
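For readers unfamiliar with the random-effects model used in this kind of analysis, the sketch below implements the standard DerSimonian-Laird pooling of per-study log odds ratios. The input numbers are invented placeholders for illustration, not the NODAT study data.

```python
import math

def dersimonian_laird(log_ors, ses):
    """Pool per-study log odds ratios under a random-effects model."""
    k = len(log_ors)
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_ors))
    # Between-study variance estimate, truncated at zero.
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_star = [1 / (se**2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

# Invented per-study log ORs and standard errors, for illustration only.
or_pooled, ci = dersimonian_laird([0.41, 0.22, 0.55], [0.18, 0.15, 0.25])
print(f"pooled OR = {or_pooled:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```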
Abstract:
Background: Traffic light labelling of foods—a system that incorporates a colour-coded assessment of the level of total fat, saturated fat, sugar and salt on the front of packaged foods—has been recommended by the UK Government and is currently in use or being phased in by many UK manufacturers and retailers. This paper describes a protocol for a pilot randomised controlled trial of an intervention designed to increase the use of traffic light labelling during real-life food purchase decisions.
Methods/design: The objectives of this two-arm randomised controlled pilot trial are to assess recruitment, retention and data completion rates, to generate potential effect size estimates to inform sample size calculations for the main trial and to assess the feasibility of conducting such a trial. Participants will be recruited by email from a loyalty card database of a UK supermarket chain. Eligible participants will be over 18 and regular shoppers who frequently purchase ready meals or pizzas. The intervention is informed by a review of previous interventions encouraging the use of nutrition labelling and by the broader behaviour change literature. It is designed to impact on mechanisms affecting belief and behavioural intention formation, as well as those associated with planning and goal setting and the adoption and maintenance of the behaviour of interest, namely traffic light label use during purchases of ready meals and pizzas. Data will be collected from electronic sales records captured via supermarket loyalty cards and from web-based questionnaires, and will be used to estimate the effect of the intervention on the nutrition profile of purchased ready meals and pizzas and on the behavioural mechanisms associated with label use. Data collection will take place over 48 weeks. A process evaluation including semi-structured interviews and web analytics will be conducted to assess the feasibility of a full trial.
Discussion: The design of the pilot trial allows for efficient recruitment and data collection. The intervention could be generalised to a wider population if shown to be feasible in the main trial.
Abstract:
Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans, above all rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach; information technologies can support such methods, facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of such a framework was tested in two Italian regions. Coupling innovative technological solutions with common counting methods facilitates data collection and transcription. It also paves the way for future applications, which could support dog population management systems.
Abstract:
The World Health Organization estimates that 13 million children aged 5-15 years worldwide are visually impaired from uncorrected refractive error. School vision screening programs can identify and treat or refer children with refractive error. We concentrate on the findings of various screening studies and attempt to identify key factors in the success and sustainability of such programs in the developing world. We reviewed original and review articles describing children's vision and refractive error screening programs published in English and listed in PubMed, Medline OVID, Google Scholar, and Oxford University Electronic Resources databases. Data were abstracted on study objective, design, setting, participants, and outcomes, including accuracy of screening, quality of refractive services, barriers to uptake, impact on quality of life, and cost-effectiveness of programs. Inadequately corrected refractive error is an important global cause of visual impairment in childhood. School-based vision screening carried out by teachers and other ancillary personnel may be an effective means of detecting affected children and improving their visual function with spectacles. The need for services and the potential impact of school-based programs vary widely between areas, depending on the prevalence of refractive error, competing conditions and rates of school attendance. Barriers to acceptance of services include the cost and quality of available refractive care and mistaken beliefs that glasses will harm children's eyes. Further research is needed in areas such as the cost-effectiveness of different screening approaches and the impact of education in promoting acceptance of spectacle wear. School vision programs should be integrated into comprehensive efforts to promote the health of children and their families.
Abstract:
Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker container-based lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
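The hybrid method's combination step can be pictured with a small sketch. Everything below (the scores, the blending weight alpha, and the VM names) is hypothetical, intended only to show how container-based native scores might be blended with normalised historic whole-VM scores before ranking; it is not DocLite's actual scheme.

```python
def hybrid_ranks(native, historic, alpha=0.7):
    """Rank VMs by a weighted blend of container-based (native) scores
    and historic whole-VM benchmark scores. alpha is a hypothetical
    weighting, not a value taken from the DocLite paper."""
    def normalise(scores):
        top = max(scores.values())
        return {vm: s / top for vm, s in scores.items()}
    n, h = normalise(native), normalise(historic)
    blended = {vm: alpha * n[vm] + (1 - alpha) * h[vm] for vm in native}
    return sorted(blended, key=blended.get, reverse=True)

# Hypothetical per-VM performance scores (higher is better).
native_scores = {"vm_a": 120.0, "vm_b": 95.0, "vm_c": 140.0}
historic_scores = {"vm_a": 110.0, "vm_b": 130.0, "vm_c": 125.0}
print(hybrid_ranks(native_scores, historic_scores))  # best VM first
```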
Abstract:
In this paper, we propose a malware categorization method that models malware behavior in terms of instructions using PageRank. PageRank computes the ranks of web pages based on structural information; in the same way, it can compute ranks of instructions that capture the structural relationships among instructions in malware analysis. Our malware categorization method uses the computed ranks as features in machine learning algorithms. In the evaluation, we compare the effectiveness of different PageRank algorithms and also investigate bagging and boosting algorithms to improve the categorization accuracy.
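As a minimal sketch of the underlying computation (the instruction graph below is a toy example, not real malware data), the standard power-iteration form of PageRank ranks the nodes of a directed graph; applied to an instruction-transition graph, the resulting rank vector is the kind of feature that could feed the machine-learning step.

```python
def pagerank(graph, d=0.85, iters=100):
    """Power iteration over an adjacency-list graph {node: [successors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in nodes}
        for v, succs in graph.items():
            if succs:
                share = rank[v] / len(succs)
                for u in succs:
                    nxt[u] += d * share
            else:  # dangling node: spread its rank uniformly
                for u in nodes:
                    nxt[u] += d * rank[v] / n
        rank = nxt
    return rank

# Toy instruction-transition graph (edges: instruction -> next instruction).
graph = {"mov": ["cmp"], "cmp": ["jne", "mov"], "jne": ["mov", "call"],
         "call": []}
print({k: round(v, 3) for k, v in pagerank(graph).items()})
```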
Abstract:
In this paper I explore connections between women, art education and spatial relations drawing on the Deleuzo-Guattarian concept of machinic assemblage as a useful analytical tool for making sense of the heterogeneity and meshwork of life narratives and their social milieus. In focusing on Mary Bradish Titcomb, a fin-de-siècle Bostonian woman who lived and worked in the interface of education and art, moving in between differentiated series of social, cultural and geographical spaces, I challenge an image of narratives as unified and coherent representations of lives and subjects; at the same time I am pointing to their importance in opening up microsociological analyses of deterritorializations and lines of flight. What I argue is that an attention to space opens up paths for an analytics of becomings, and enables the theorization of open processes, multiplicities and nomadic subjectivities in the field of gender and education.