73 results for Context-aware computing and systems


Relevance:

100.00%

Publisher:

Abstract:

Pollen grains are microscopic, so their identification and quantification have, for decades, depended upon human observers using light microscopes: a labour-intensive approach. Modern improvements in computing and imaging hardware and software now bring automation of pollen analyses within reach. In this paper, we provide the first review in over 15 yr of progress towards automation of the part of palynology concerned with counting and classifying pollen, bringing together literature published from a wide spectrum of sources. We consider which attempts offer the most potential for an automated palynology system for universal application across all fields of research concerned with pollen classification and counting. We discuss what is required to make the datasets of these automated systems as acceptable as those produced by human palynologists, and present suggestions for how automation will generate novel approaches to counting and classifying pollen that have hitherto been unthinkable.

Relevance:

100.00%

Publisher:

Abstract:

Introduction and Background

This research was undertaken by an international team of academics from Queen's University Belfast, Leeds University and Penn State University (USA), who examined models of adult social care provision across thirteen jurisdictions. The aim of this research is to present the Commissioner for Older People in Northern Ireland (COPNI) with possible options for legal reform of adult social care provision for older people in Northern Ireland.

Project Objectives

The agreed objectives of this research were to provide:
• Identification of gaps and issues surrounding the current legislative framework, including policy provision, for adult social care in Northern Ireland.
• Comparison of Northern Ireland with best practice in other jurisdictions, including (but not limited to) England and Wales, the Republic of Ireland, Scotland and at least two other international examples.
• Recommendations, based on the above, as to whether there is a need for legislative reform, with suggestions other than legislative change provided where applicable.
• Recommendations or options, based on the above, on how best to change the current framework in Northern Ireland to provide better support outcomes for older people.
• Stakeholder engagement via a roundtable event to discuss outcomes and recommendations.

Structure of Report

The findings from this research are based on an international review of adult social care in local, national and international contexts. The report therefore first presents the key recommendations for Northern Ireland which have emerged from a systematic examination and review of adult social care in diverse jurisdictions. Each jurisdiction is then examined in the context of legislative and policy provision, and examples of best practice are provided. The final section of the report compares Northern Ireland to best practice from each of these jurisdictions, and this discussion provides the background to the report's final recommendations. The recommendations in this report are thus directly linked to the evidence we have gathered across different countries with contrasting systems of welfare.

Relevance:

100.00%

Publisher:

Abstract:

In the context of bipartite bosonic systems, two notions of classicality of correlations can be defined: P-classicality, based on the properties of the Glauber-Sudarshan P-function; and C-classicality, based on the entropic quantum discord. It has been shown that these two notions are maximally inequivalent in a static (metric) sense -- they coincide only on a set of states of zero measure. We quantitatively extend and reinforce this inequivalence by addressing the dynamical relation between these types of non-classicality in a paradigmatic quantum-optical setting: the linear mixing at a beam splitter of a single-mode Gaussian state with a thermal reference state. Specifically, we show that almost all P-classical input states generate outputs that are not C-classical. Indeed, for the case of zero thermal reference photons, the more P-classical the resources at the input, the less C-classicality at the output. In addition, we show that the P-classicality at the input -- as quantified by the non-classical depth -- does instead quantitatively determine the potential for generating output entanglement. This endows the non-classical depth with a new operational interpretation: it gives the maximum number of thermal reference photons that can be mixed at a beam splitter without destroying the output entanglement.
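For readers who want to experiment with this setting, the following is a minimal sketch (not taken from the paper) assuming the QuTiP library: a squeezed vacuum state, chosen here as a simple P-non-classical single-mode Gaussian input, is mixed with a thermal state at a 50:50 beam splitter and the output entanglement is quantified by the logarithmic negativity. The truncation level, squeezing parameter and thermal photon numbers are illustrative assumptions; increasing nbar shows the degradation of output entanglement described above.

```python
# Illustrative sketch: mixing a squeezed vacuum with a thermal state
# at a 50:50 beam splitter and computing the output entanglement.
import numpy as np
from qutip import (destroy, qeye, tensor, basis, ket2dm,
                   thermal_dm, squeeze, partial_transpose)

N = 12                                 # Fock-space truncation (assumed adequate)
a = tensor(destroy(N), qeye(N))        # mode operators on the two-mode space
b = tensor(qeye(N), destroy(N))

# 50:50 beam splitter unitary U = exp(theta (a†b - a b†)), theta = pi/4
U = ((np.pi / 4) * (a.dag() * b - a * b.dag())).expm()

r = 0.6                                # illustrative squeezing parameter
sq_vac = ket2dm(squeeze(N, r) * basis(N, 0))   # P-non-classical Gaussian input

for nbar in [0.0, 0.2, 0.5, 1.0]:      # thermal reference photon numbers
    rho_in = tensor(sq_vac, thermal_dm(N, nbar))
    rho_out = U * rho_in * U.dag()
    # Logarithmic negativity: log2 of the trace norm of the partial transpose
    rho_pt = partial_transpose(rho_out, [1, 0])
    log_neg = max(0.0, np.log2(rho_pt.norm('tr')))
    print(f"nbar = {nbar:.1f}  ->  log-negativity = {log_neg:.3f}")
```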

Relevance:

100.00%

Publisher:

Abstract:

Background
The power of the randomised controlled trial depends upon its capacity to operate in a closed system, whereby the intervention is the only causal force acting upon the experimental group and is absent in the control group, permitting a valid assessment of intervention efficacy. Conversely, clinical arenas are open systems, where factors relating to context, resources, and the interpretation and actions of individuals will affect the implementation and effectiveness of interventions. Consequently, the comparator (usual care) can be difficult to define and variable in multi-centre trials. Hence outcomes cannot be understood without considering usual care and the factors that may affect the implementation of, and impact on, the intervention.

Methods
Using a fieldwork approach, we describe paediatric intensive care unit (PICU) context, ‘usual’ practice in sedation and weaning from mechanical ventilation, and factors affecting implementation, prior to designing a trial involving a sedation and ventilation weaning intervention. We collected data from 23 UK PICUs between June and November 2014, using observation and individual and multi-disciplinary group interviews with staff.

Results
Pain and sedation practices were broadly similar in terms of drug usage and assessment tools. Sedation protocols linking assessment to appropriate titration of sedatives, and sedation holds, were rarely used (9% and 4% of PICUs, respectively). Ventilator weaning was primarily a medically led process, with 39% of PICUs engaging senior nurses in the process; weaning protocols were rarely used (9% of PICUs). Weaning methods varied according to clinician preference. No formal criteria or spontaneous breathing trials were used to test weaning readiness. Seventeen PICUs (74%) had prior engagement in multi-centre trials, but limited research nurse availability. Barriers to previous trial implementation were intervention complexity, lack of belief in the evidence and inadequate training. Facilitating factors were senior staff buy-in and dedicated research nurse provision.

Conclusions
We examined and identified contextual and organisational factors that may impact on the implementation of our intervention. We found usual practice relating to sedation, analgesia and ventilator weaning to be broadly similar across sites, yet distinctively different from our proposed intervention, providing assurance of our ability to evaluate intervention effects. The data will enable us to develop an implementation plan; by considering these factors, we can more fully understand their impact on study outcomes.

Relevance:

100.00%

Publisher:

Abstract:

Laughter is a ubiquitous social signal in human interactions, yet it remains understudied from a scientific point of view. The need to understand laughter and its role in human interactions has become more pressing as the ability to create conversational agents capable of interacting with humans has come closer to reality. This paper reports on three aspects of the human perception of laughter when context has been removed and only the body information from the laughter episode remains. We report on the ability to categorise the laugh type and the sex of the laugher; the relationship between personality factors and laughter categorisation and perception; and, finally, the importance of intensity in the perception and categorisation of laughter.

Relevance:

100.00%

Publisher:

Abstract:

Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions, such as embedded dynamic memories (eDRAM), can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is the need for frequent refreshes, which can reduce the availability of the memory in periods of high activity and consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but refreshes that are not performed in a timely manner can cause some cells to lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use-cases are in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
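To make the refresh/error trade-off concrete, here is a toy Monte-Carlo sketch (our illustration, not the paper's methodology). It assumes a lognormal per-cell retention-time distribution with illustrative parameters, an assumption made purely for demonstration, and finds the longest refresh period for a 16 kB array whose number of failing cells stays within the 177-error budget cited above.

```python
# Toy model: how far can the refresh period be relaxed within an error budget?
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 16 * 1024 * 8                 # 16 kB of single-bit gain cells
ERROR_BUDGET = 177                     # tolerated failing cells (from the paper)

# Assumed lognormal retention times, in microseconds (illustrative parameters)
retention_us = rng.lognormal(mean=np.log(45.0), sigma=0.35, size=N_BITS)

worst_case_us = retention_us.min()     # conventional refresh: no cell may fail

# Relaxed refresh: longest period with at most ERROR_BUDGET failing cells.
# Sorting ascending, the (ERROR_BUDGET+1)-th weakest cell sets the limit.
periods = np.sort(retention_us)
relaxed_us = periods[ERROR_BUDGET]

print(f"worst-case refresh period : {worst_case_us:8.2f} us")
print(f"relaxed refresh period    : {relaxed_us:8.2f} us "
      f"({relaxed_us / worst_case_us:.1f}x longer between refreshes)")
```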

Relevance:

100.00%

Publisher:

Abstract:

The worldwide scarcity of women studying or employed in ICT, or in computing-related disciplines, continues to be a topic of concern for industry, the education sector and governments. Within Europe, while females make up 46% of the workforce, only 17% of IT staff are female. A similar gender divide is repeated worldwide, with top technology employers in Silicon Valley, including Facebook, Google, Twitter and Apple, reporting that only 30% of their workforce is female (Larson 2014). Previous research into this gender divide suggests that young women in Secondary Education display a more negative attitude towards computing than their male counterparts. It would appear that this negative female perception of computing has led to correspondingly low numbers of women studying ICT at tertiary level and, consequently, an under-representation of females within the ICT industry. The aims of this study are to (1) establish a baseline understanding of the attitudes and perceptions of Secondary Education pupils with regard to computing and (2) establish statistically whether young females in Secondary Education really do have a more negative attitude towards computing.
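The second objective amounts to a standard two-sample comparison of attitude scores. A hedged sketch of such a test follows (this is not the study's analysis code, and the choice of test is our assumption): a two-sided Mann-Whitney U test, which suits ordinal Likert-style attitude data, applied to placeholder scores that stand in for the survey responses.

```python
# Illustrative two-sample comparison of attitude scores by group.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
female_scores = rng.integers(1, 6, size=120)   # placeholder 1-5 Likert responses
male_scores = rng.integers(1, 6, size=130)     # (real data would come from the survey)

stat, p_value = mannwhitneyu(female_scores, male_scores, alternative='two-sided')
print(f"U = {stat:.1f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Attitudes differ significantly between groups.")
else:
    print("No significant difference detected in this sample.")
```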

Relevance:

100.00%

Publisher:

Abstract:

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
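As background for the combination strategies discussed above, the following is a minimal, self-contained sketch of Dempster's original rule of combination, the baseline rule that the paper's context-dependent strategy uses or falls back on. The frame of discernment and the mass assignments are our own illustration, not taken from the paper.

```python
# Dempster's rule of combination over mass functions on frozenset focal elements.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting sources: Dempster's rule undefined")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Two sensors reporting on the frame {fault, overload, normal}
m_sensor1 = {frozenset({'fault'}): 0.7, frozenset({'fault', 'overload'}): 0.3}
m_sensor2 = {frozenset({'fault'}): 0.5, frozenset({'normal'}): 0.2,
             frozenset({'fault', 'overload', 'normal'}): 0.3}
print(dempster_combine(m_sensor1, m_sensor2))
```

As the abstract notes, this rule can behave poorly under high conflict, which is exactly the situation the LPMCS-based context-dependent strategy is designed to detect and handle differently.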

Relevance:

100.00%

Publisher:

Abstract:

Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies, resulting from operator error, resource over- or under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital for a cloud system to have automatic system monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool (LADT) for Cloud data centres which combines extended log analysis and rigorous correlation of system metrics, implemented by an efficient correlation algorithm which does not require training or complex infrastructure set-up. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation will drop significantly in the event of any performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether a correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, any potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
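The core premise can be illustrated with a short sketch (our reconstruction of the idea, not the LADT implementation): compute a sliding-window Pearson correlation between a node-level metric and a VM-level metric, flag windows where the correlation drops below a threshold, and suppress alerts that coincide with logged management activity. The window size, threshold and synthetic traces are assumptions.

```python
# Correlation-drop anomaly detection with log-based alert suppression.
import numpy as np

WINDOW, THRESHOLD = 30, 0.5            # samples per window; alert threshold

def detect_anomalies(node_metric, vm_metric, mgmt_activity_windows):
    """Return window indices whose correlation drop is not explained by logs."""
    alerts = []
    n_windows = len(node_metric) // WINDOW
    for w in range(n_windows):
        s = slice(w * WINDOW, (w + 1) * WINDOW)
        corr = np.corrcoef(node_metric[s], vm_metric[s])[0, 1]
        if corr < THRESHOLD and w not in mgmt_activity_windows:
            alerts.append((w, corr))    # unexplained correlation drop
    return alerts

# Synthetic example: the VM metric tracks the node metric, except in window 3
rng = np.random.default_rng(2)
node = rng.normal(50, 5, size=150)
vm = node + rng.normal(0, 1, size=150)
vm[90:120] = rng.normal(50, 5, size=30)        # correlation breaks here
print(detect_anomalies(node, vm, mgmt_activity_windows={1}))
```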

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a method for rational behaviour recognition that combines vision-based pose estimation with knowledge modeling and reasoning. The proposed method consists of two stages. First, RGB-D images are used to estimate body postures. Then, the estimated actions are evaluated to verify that they make sense. This method requires rational behaviour to be exhibited; to comply with this requirement, this work proposes a rational RGB-D dataset with two types of sequences, some for training and some for testing. Preliminary results show that the addition of knowledge modeling and reasoning leads to a significant increase in recognition accuracy when compared to a system based only on computer vision.
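The second, reasoning stage can be pictured with a toy sketch (our illustration of the general idea, not the paper's knowledge model): vision-stage action estimates are checked against a simple model of rational action transitions, and estimates that make no sense in context are rejected. The transition table and action labels are hypothetical.

```python
# Toy knowledge model: which actions may rationally follow which.
RATIONAL_TRANSITIONS = {
    'stand': {'walk', 'sit_down', 'stand'},
    'walk': {'walk', 'stand'},
    'sit_down': {'sit', 'sit_down'},
    'sit': {'sit', 'stand_up'},
    'stand_up': {'stand', 'stand_up'},
}

def verify_sequence(estimated_actions):
    """Keep vision estimates that are rational given the previous action."""
    verified = [estimated_actions[0]]
    for action in estimated_actions[1:]:
        if action in RATIONAL_TRANSITIONS[verified[-1]]:
            verified.append(action)
        else:                          # irrational jump: keep the previous state
            verified.append(verified[-1])
    return verified

# 'sit' straight after 'walk' (without sitting down) is rejected by the model
print(verify_sequence(['stand', 'walk', 'sit', 'walk', 'stand']))
```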

Relevance:

100.00%

Publisher:

Abstract:

Major food adulteration and contamination events occur with alarming regularity and are known to be episodic, the question being not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can make them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This can make the task of deciding which analytical methods are most suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability, that much more challenging. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, and rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, ideally as handheld and/or remote sensor devices that can be taken to, or positioned on/at-line at, points of vulnerability along complex food supply networks, and should require a minimum amount of background training to acquire information-rich data rapidly (ergo point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We offer a future perspective on how this range of detection methods in the growing sensor portfolio, along with developments in computational and information sciences such as predictive computing and the Internet of Things, will together form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains. Food fraud is a problem of systems, and therefore requires systems-level solutions and thinking.

Relevance:

100.00%

Publisher:

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W of extra power in our testbed desktop system.
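The run-time mapping policy can be sketched as follows (our reading of the approach, not the authors' code): each implementation of a detector carries per-platform power, latency and accuracy figures, placeholders below, and the scene's current anomaly measure decides whether to spend power on a faster, more accurate implementation or fall back to a power-optimised one.

```python
# Anomaly-driven selection among characterised detector implementations.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    power_w: float        # placeholder characterisation figures
    latency_ms: float
    accuracy: float

HOG_CAR = [                            # candidate mappings for one detector
    Implementation('CPU (power-optimised)', 8.0, 90.0, 0.78),
    Implementation('FPGA', 12.0, 35.0, 0.84),
    Implementation('GPU', 25.0, 20.0, 0.90),
]

def select_implementation(anomaly_score, candidates):
    """Map high-anomaly scenes to accurate (power-hungry) implementations."""
    if anomaly_score > 0.7:
        return max(candidates, key=lambda i: i.accuracy)       # spend the power
    if anomaly_score > 0.3:
        # balanced tier: minimise a power-times-latency proxy for energy/frame
        return min(candidates, key=lambda i: i.power_w * i.latency_ms)
    return min(candidates, key=lambda i: i.power_w)            # routine periods

for score in (0.1, 0.5, 0.9):
    print(score, '->', select_implementation(score, HOG_CAR).name)
```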