Abstract:
This paper describes middleware-level support for agent mobility, targeted at hierarchically structured wireless sensor and actuator network applications. Agent mobility enables dynamic deployment and adaptation of the application on top of the wireless network at runtime, while allowing the middleware to optimize the placement of agents, e.g., to reduce wireless network traffic, transparently to the application programmer. The paper presents the design of the mechanisms and protocols employed to instantiate agents on nodes and to move agents between nodes. It also evaluates a middleware prototype running on Imote2 nodes that communicate over ZigBee. The results show that our implementation is reasonably efficient and fast enough to support the envisioned functionality on top of a commodity multi-hop wireless technology. Our work is to a large extent platform-neutral and can thus inform the design of other systems that adopt a hierarchical structuring of mobile components. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
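The instantiate/move cycle described above can be sketched in miniature. This is an illustrative sketch only, not the middleware's actual protocol: the `Agent`, `Node`, and `move_agent` names are invented, and an in-process byte copy stands in for the ZigBee transfer.

```python
import pickle

class Agent:
    """Toy mobile agent: application logic plus serializable state."""
    def __init__(self, state):
        self.state = state

    def step(self):
        self.state["ticks"] = self.state.get("ticks", 0) + 1

class Node:
    """Toy sensor node hosting agents; the 'network' hop is a byte copy."""
    def __init__(self, name):
        self.name = name
        self.agents = []

    def instantiate(self, agent):
        self.agents.append(agent)

    def move_agent(self, agent, target):
        # 1. Suspend the agent and serialize its state (marshalling step).
        blob = pickle.dumps(agent.state)
        self.agents.remove(agent)
        # 2. Ship the bytes to the target node and re-instantiate there.
        target.instantiate(Agent(pickle.loads(blob)))

src, dst = Node("n1"), Node("n2")
a = Agent({"ticks": 0})
src.instantiate(a)
a.step()
src.move_agent(a, dst)
# The agent's state survives the hop: dst now hosts it, src does not.
```

A real middleware would additionally quiesce in-flight messages and re-bind the agent's communication endpoints before resuming it on the target node.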
Abstract:
Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data, but they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, where feature selection is carried out using a held-out subset of a target dataset before training a linear classifier on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when groupwise training was used, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique or a minimal searchlight (one voxel in size) is appropriate, compared with larger searchlights.
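The univariate ANOVA selection singled out above can be sketched as follows. This is a minimal pure-Python illustration; the JRFS/DJFS hold-out logic and the regularized classifier are omitted, and the toy data in the usage note are invented.

```python
def anova_f(values, labels):
    """One-way ANOVA F statistic for a single feature across class labels."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    n, k = len(values), len(groups)
    grand = sum(values) / n
    means = {y: sum(g) / len(g) for y, g in groups.items()}
    # Between-class vs. within-class variability.
    ss_between = sum(len(g) * (means[y] - grand) ** 2 for y, g in groups.items())
    ss_within = sum((v - means[y]) ** 2 for y, g in groups.items() for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_k(X, y, k):
    """Rank features (columns of X) by F score and keep the k best indices."""
    scores = [anova_f([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```

For example, with two features where only the first separates the classes, `select_top_k([[1.0, 5.0], [1.1, 4.0], [5.0, 5.1], [5.1, 4.2]], [0, 0, 1, 1], 1)` keeps index 0. In the cross-session setting described above, the scores would be computed on the held-out target subset rather than on the source data.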
Abstract:
Ultrasound has long been recognized as a means of effecting change at the cellular and tissue levels [1-3], an effect that may be enhanced in the presence of photosensitive agents [4-6]. During insonation, the presence of bubbles can also play a role, creating strong microstreaming effects in solution and, in more dramatic circumstances, leading to the formation of energetic microjets [7], plasmas [8], and the production of other highly reactive species [9]. Such sonodynamic activity has generated particular excitement in the medical community. Moreover, the dual role of microbubbles as both an adjunct to therapy and a diagnostic echogenicity enhancer has seen industry take a proactive role in their development. In the present paper we studied the role of ultrasound-driven sonoluminescent light in the degradation of a fluorescent test species (rhodamine) in the presence of an archetypal photocatalyst material, TiO2, with a view to exploring its exploitation potential for downstream medical applications. We found that the efficiency of this process is low compared with conventional ultraviolet sources; nevertheless, we advocate the further exploration of the sonoluminescent approach given its potential for non-invasive applications. A strategy for enhancing the effect is also suggested.
Abstract:
This paper presents a multi-agent system approach to address the difficulties encountered in traditional SCADA systems deployed in critical environments such as electrical power generation, transmission and distribution. The approach models uncertainty and combines multiple sources of uncertain information to deliver robust plan selection. We examine the approach in the context of a simplified power supply/demand scenario using a residential grid-connected solar system and consider the challenges of modelling and reasoning with uncertain sensor information in this environment. We discuss examples of plans and actions required for sensing, establish and discuss the effect of uncertainty on such systems, and investigate different uncertainty theories and how they can fuse uncertain information from multiple sources for effective decision making in such a complex system.
Abstract:
Cystic fibrosis (CF) is a lifelong, inflammatory multi-organ disease and the most common lethal, genetic condition in Caucasian populations, with a median survival of 41.5 years. Pulmonary disease, characterized by infective exacerbations, bronchiectasis and increasing airway insufficiency, is the most serious manifestation of this disease process, currently responsible for over 80% of CF deaths. Chronic dysregulation of the innate immune and host inflammatory response has been proposed as a mechanism central to this genetic condition, primarily driven by the nuclear factor κB (NF-κB) pathway. Chronic activation of this transcription factor complex leads to the production of pro-inflammatory cytokines and mediators such as IL-6, IL-8 and TNF-α. A20 has been described as a central and inducible negative regulator of NF-κB. This intracellular molecule negatively regulates NF-κB-driven pro-inflammatory signalling upon toll-like receptor activation at the level of TRAF6 activation. Silencing of A20 increases cellular levels of p65 and induces a pro-inflammatory state. We have previously shown that A20 expression positively correlates with lung function (FEV1%) in CF. Despite improvement in survival rates in recent years, advancements in available therapies have been incremental. We demonstrate that the experimental use of naturally occurring plant diterpenes such as gibberellin on lipopolysaccharide-stimulated cell lines reduces IL-8 release in an A20-dependent manner. We discuss how the use of a novel bioinformatics gene expression connectivity-mapping technique to identify small molecule compounds that similarly mimic the action of A20 may lead to the development of new therapeutic approaches capable of reducing chronic airway inflammation in CF.
Abstract:
Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s than the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants had levels of diagnostic accuracy of 72% at 1 s and 83% at 10 s.
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
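The Kuder-Richardson 20 coefficient reported above has a standard closed form, KR-20 = k/(k-1) · (1 - Σ pᵢqᵢ / σ²), where k is the number of items, pᵢ the proportion answering item i correctly, qᵢ = 1 - pᵢ, and σ² the variance of examinees' total scores. A minimal sketch on invented binary-scored data:

```python
def kr20(responses):
    """Kuder-Richardson 20 reliability for binary-scored (0/1) items.
    responses: one list of item scores per examinee."""
    n = len(responses)      # examinees
    k = len(responses[0])   # items
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    # Sum of per-item variances p_i * q_i.
    pq = 0.0
    for i in range(k):
        p = sum(r[i] for r in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

responses = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
# kr20(responses) == 0.75 for this toy matrix.
```

Higher values indicate that the items rank examinees consistently, which is why the 0.905 and 0.926 figures above support the tests' reliability.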
Abstract:
In many applications, and especially those where batch processes are involved, a target scalar output of interest is often dependent on one or more time series of data. With the exponential growth in data logging in modern industries, such time series are increasingly available for statistical modelling in soft sensing applications. In order to exploit time series data for predictive modelling, it is necessary to summarise the information they contain as a set of features to use as model regressors. Typically, this is done in an unsupervised fashion using simple techniques such as computing statistical moments, principal components or wavelet decompositions, often leading to significant information loss and hence suboptimal predictive models. In this paper, a functional learning paradigm is exploited in a supervised fashion to derive continuous, smooth estimates of time series data (yielding aggregated local information), while simultaneously estimating a continuous shape function yielding optimal predictions. The proposed Supervised Aggregative Feature Extraction (SAFE) methodology can be extended to support nonlinear predictive models by embedding the functional learning framework in a Reproducing Kernel Hilbert Space setting. SAFE has a number of attractive features, including a closed-form solution and the ability to explicitly incorporate first- and second-order derivative information. Using simulation studies and a practical semiconductor manufacturing case study, we highlight the strengths of the new methodology with respect to standard unsupervised feature extraction approaches.
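For contrast, the unsupervised moment-based featurisation that SAFE is designed to improve on is easily sketched. The point is that a fixed summary like this discards temporal detail regardless of the prediction target; this is illustrative only, and the paper's own SAFE estimator is not reproduced here.

```python
def moment_features(series):
    """Unsupervised baseline: summarise a batch time series by its first
    three statistical moments, independently of any prediction target."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5
    skew = 0.0 if std == 0 else sum(((x - mean) / std) ** 3 for x in series) / n
    return [mean, var, skew]
```

Any two series with the same moments map to identical regressors here, whereas a supervised shape function can weight the time instants that actually matter for the target.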
Abstract:
Answer set programming is a form of declarative programming that has proven very successful in succinctly formulating and solving complex problems. Although mechanisms for representing and reasoning with the combined answer set programs of multiple agents have already been proposed, the actual gain in expressivity when adding communication has not been thoroughly studied. We show that allowing simple programs to talk to each other results in the same expressivity as adding negation-as-failure. Furthermore, we show that the ability to focus on one program in a network of simple programs results in the same expressivity as adding disjunction in the head of the rules.
Abstract:
Social media channels, such as Facebook or Twitter, allow people to express their views and opinions about any public topic. Public sentiment related to future events, such as demonstrations or parades, indicates public attitude and may therefore be applied when estimating the level of disruption and disorder during such events. Consequently, sentiment analysis of social media content may be of interest to different organisations, especially in the security and law enforcement sectors. This paper presents a new lexicon-based sentiment analysis algorithm that has been designed with the main focus on real-time Twitter content analysis. The algorithm consists of two key components, namely sentiment normalisation and an evidence-based combination function, which are used to estimate the intensity of the sentiment rather than a positive/negative label and to support the mixed sentiment classification process. Finally, we illustrate a case study examining the relation between negative sentiment of Twitter posts related to the English Defence League and the level of disorder during the organisation's related events.
Abstract:
Analysing public sentiment about future events, such as demonstrations or parades, may provide valuable information when estimating the level of disruption and disorder during these events. Social media, such as Twitter or Facebook, provide the views and opinions of users on any public topic. Consequently, sentiment analysis of social media content may be of interest to different public sector organisations, especially in the security and law enforcement sector. In this paper we present a lexicon-based approach to sentiment analysis of Twitter content. The algorithm performs normalisation of the sentiment in an effort to provide the intensity of the sentiment rather than a positive/negative label. Following this, we evaluate an evidence-based combining function that supports the classification process in cases where positive and negative words co-occur in a tweet. Finally, we illustrate a case study examining the relation between sentiment of Twitter posts related to the English Defence League and the level of disorder during EDL-related events.
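The two ingredients shared by both abstracts, normalisation of lexicon scores to an intensity and an evidence-based combination for tweets where positive and negative words co-occur, can be sketched as follows. The mini-lexicon, function name, and particular combination rule are all invented for illustration; the papers' actual functions are not specified in the abstracts.

```python
# Hypothetical mini-lexicon; real systems draw on large sentiment resources.
LEXICON = {"good": 0.7, "great": 0.9, "bad": -0.6, "riot": -0.8, "calm": 0.4}

def tweet_sentiment(tweet):
    """Lexicon lookup, normalised to [-1, 1], so the output is an
    intensity rather than a hard positive/negative label."""
    words = tweet.lower().split()
    pos = sum(LEXICON[w] for w in words if LEXICON.get(w, 0) > 0)
    neg = sum(LEXICON[w] for w in words if LEXICON.get(w, 0) < 0)
    if pos and neg:
        # Evidence-style combination for mixed tweets: the two polarities
        # are weighed against each other instead of simply cancelling.
        return (pos + neg) / (pos - neg)
    hits = sum(1 for w in words if w in LEXICON)
    return (pos + neg) / hits if hits else 0.0
```

For example, `tweet_sentiment("great calm")` yields a positive intensity of 0.65, while a mixed tweet like `"good riot"` yields a small negative value rather than an arbitrary hard label.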
Abstract:
Publicly available, outdoor webcams continuously view the world and share images. These cameras include traffic cams, campus cams, ski-resort cams, etc. The Archive of Many Outdoor Scenes (AMOS) is a project aiming to geolocate, annotate, archive, and visualize these cameras and images to serve as a resource for a wide variety of scientific applications. The AMOS dataset has archived over 750 million images of outdoor environments from 27,000 webcams since 2006. Our goal is to utilize the AMOS image dataset and crowdsourcing to develop reliable and valid tools to improve physical activity assessment via online, outdoor webcam capture of global physical activity patterns and urban built environment characteristics.
This project’s grand scale-up of capturing physical activity patterns and built environments is a methodological step forward in advancing a real-time, non-labor-intensive assessment using webcams, crowdsourcing, and eventually machine learning. The combined use of webcams capturing outdoor scenes every 30 min and crowdsourced workers providing the labor of annotating the scenes allows for accelerated public health surveillance related to physical activity across numerous built environments. The ultimate goal of this public health and computer vision collaboration is to develop machine learning algorithms that will automatically identify and calculate physical activity patterns.
Abstract:
Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, these data centres are based on general purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework allowing end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently confined to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use-cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
Abstract:
The notion of educating the public through generic healthy eating messages has pervaded dietary health promotion efforts over the years and continues to do so through various media, despite little evidence for any enduring impact upon eating behaviour. There is growing evidence, however, that tailored interventions such as those that could be delivered online can be effective in bringing about healthy dietary behaviour change. The present paper brings together evidence from qualitative and quantitative studies that have considered the public perspective of genomics, nutrigenomics and personalised nutrition, including those conducted as part of the EU-funded Food4Me project. Such studies have consistently indicated that although the public hold positive views about nutrigenomics and personalised nutrition, they have reservations about the service providers' ability to ensure the secure handling of health data. Technological innovation has driven the concept of personalised nutrition forward and now a further technological leap is required to ensure the privacy of online service delivery systems and to protect data gathered in the process of designing personalised nutrition therapies.