82 results for Church architecture - Data processing - Spain
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence of unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertising, and a test of the frequency of unsolicited commercial email (UCE) received by customers as a consequence of registering and submitting personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveals a significant gap between EU data protection law in theory and in practice. Although a wide majority of the websites in the sample (69%) have a system in place to ask for separate consent to engage in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The UCE test shows that only about one in three websites (30.5%) respects the data subject's wish not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that UK online service providers show a severe lack of compliance with essential requirements of data protection law. In this respect, it suggests that the standard of implementation, information and supervision by the UK authorities is inadequate, especially in light of the clarifications provided at EU level.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
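As a hedged illustration of the transform-and-normalise idea above (the paper implements its pipeline as an R function; this Python sketch uses synthetic data and illustrative names, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic raw intensities: 1000 genes measured on 4 replicate arrays.
signal = rng.lognormal(mean=6.0, sigma=1.5, size=(1000, 4))

# Log2-transform, then median-centre each array so replicates share a scale.
log2_signal = np.log2(signal)
normalised = log2_signal - np.median(log2_signal, axis=0, keepdims=True)

# Inter-array variability: SD of each gene across replicate arrays
# (the paper reports around 0.5 log2 units for real replicate data).
inter_sd = normalised.std(axis=1, ddof=1)
print(f"median inter-array SD: {np.median(inter_sd):.2f} log2 units")
```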
Abstract:
At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
Abstract:
The intelligent controlling mechanism of a typical mobile robot is usually a computer system. Research is now ongoing, however, in which biological neural networks are cultured and trained to act as the brain of an interactive real-world robot, either completely replacing a computer system or operating cooperatively with one. Studying such neural systems can give distinct insight into biological neural structures, and such research therefore has immediate medical implications. The principal aims of the present research are to assess the computational and learning capacity of dissociated cultured neuronal networks, with a view to advancing network-level processing of artificial neural networks. This will be approached through the creation of an artificial hybrid system (animat) involving closed-loop control of a mobile robot by a dissociated culture of rat neurons. This paper details the components of the overall animat closed-loop system architecture and reports the evaluation of results from preliminary real-life and simulated robot experiments.
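One pass around such a sensing-stimulation-recording-actuation loop might look like the following schematic; every interface and mapping here is a hypothetical placeholder, not the authors' actual MEA or robot API:

```python
import random

def encode_stimulus(distance_cm: float) -> float:
    """Map a sonar reading to a stimulation intensity: closer obstacles
    produce stronger stimulation (illustrative mapping only)."""
    return max(0.0, 1.0 - distance_cm / 100.0)

def culture_response(stim: float) -> float:
    """Stand-in for the dissociated culture: a noisy evoked firing rate."""
    return stim + random.gauss(0.0, 0.05)

def decode_motor(rate: float) -> tuple:
    """Map evoked activity to wheel speeds: higher activity turns the
    robot away from the obstacle."""
    return 0.5 - rate, 0.5 + rate  # (left, right)

distance = 30.0  # cm to nearest obstacle, from the robot's sonar
left, right = decode_motor(culture_response(encode_stimulus(distance)))
print(f"wheel speeds: left={left:.2f}, right={right:.2f}")
```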
Abstract:
In recent years, there has been increasing interest in the adoption of emerging ubiquitous sensor network (USN) technologies for instrumentation within a variety of sustainability systems. USN is emerging as a sensing paradigm that the sustainability management field is newly considering as an alternative to traditional tethered monitoring systems. Researchers have been discovering that USN is an exciting technology that should not be viewed simply as a substitute for traditional tethered monitoring systems. In this study, we investigate how a movement monitoring system for a complex building can be developed as a research environment for USN and related decision-support technologies. To address the apparent danger of building movement, agent-mediated communication concepts have been designed to autonomously manage large volumes of exchanged information. We additionally detail the design of the proposed system, including its principles, data processing algorithms, system architecture, and user interface specifics. Results of the test and case study demonstrate the effectiveness of the USN-based data acquisition system for real-time monitoring of movement operations.
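A toy sketch of the agent-mediated filtering idea (the threshold and readings are invented for illustration; the paper's actual algorithms are more involved):

```python
# A local agent escalates only significant movement readings, so the
# decision layer is not flooded with routine data.
THRESHOLD_MM = 5.0  # hypothetical alert threshold for structural movement

readings_mm = [0.4, 0.7, 0.3, 6.2, 0.5, 7.9]  # displacement samples

for i, value in enumerate(readings_mm):
    if value > THRESHOLD_MM:
        print(f"sample {i}: movement {value} mm exceeds {THRESHOLD_MM} mm - escalate")
```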
Abstract:
The long-term stability, high accuracy, all-weather capability, high vertical resolution, and global coverage of Global Navigation Satellite System (GNSS) radio occultation (RO) make it a promising tool for global monitoring of atmospheric temperature change. With the aim of investigating and quantifying how well a GNSS RO observing system can detect climate trends, we are currently performing a (climate) observing system simulation experiment over the 25-year period 2001 to 2025, which involves quasi-realistic modeling of the neutral atmosphere and the ionosphere. We carried out two climate simulations with the general circulation model MAECHAM5 (Middle Atmosphere European Centre/Hamburg Model Version 5) of the MPI-M Hamburg, covering the period 2001–2025: one control run with natural variability only, and one run also including anthropogenic forcings due to greenhouse gases, sulfate aerosols, and tropospheric ozone. On this basis, we perform quasi-realistic simulations of RO observables for a small GNSS receiver constellation (six satellites), state-of-the-art data processing for atmospheric profile retrieval, and a statistical analysis of temperature trends in both the "observed" climatology and the "true" climatology. Here we describe the setup of the experiment and results from a test bed study conducted to obtain a basic set of realistic estimates of observational errors (instrument- and retrieval-processing-related errors) and sampling errors (due to spatial-temporal undersampling). The test bed results, obtained for a typical summer season and compared to the climatic 2001–2025 trends from the MAECHAM5 simulation including anthropogenic forcing, were encouraging for performing the full 25-year experiment. They indicate that observational and sampling errors (each contributing about 0.2 K) are consistent with recent estimates of these errors from real RO data, and that they should be sufficiently small for monitoring expected temperature trends in the global atmosphere over the next 10 to 20 years in most regions of the upper troposphere and lower stratosphere (UTLS). Inspection of the MAECHAM5 trends in different RO-accessible atmospheric parameters (microwave refractivity and pressure/geopotential height in addition to temperature) indicates complementary climate change sensitivity in different regions of the UTLS, so that optimized climate monitoring should combine information from all climatic key variables retrievable from GNSS RO data.
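A back-of-envelope check that mirrors the error budget above: independent observational and sampling errors combine in quadrature, and a trend becomes detectable once the accumulated signal clearly exceeds that noise floor (the trend rate below is illustrative, not taken from the paper):

```python
import math

obs_err_K = 0.2    # observational error, as estimated in the test bed
samp_err_K = 0.2   # sampling error from spatial-temporal undersampling
total_err_K = math.hypot(obs_err_K, samp_err_K)  # ~0.28 K in quadrature

trend_K_per_decade = 0.3  # hypothetical UTLS trend for illustration
for years in (10, 15, 20):
    signal_K = trend_K_per_decade * years / 10.0
    print(f"{years:2d} yr: signal {signal_K:.2f} K vs error {total_err_K:.2f} K "
          f"-> S/N {signal_K / total_err_K:.1f}")
```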
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music and ebooks have steadily been lowered, so that almost everyone with internet access can join the online communities that produce, consume and share media artefacts. Along with this trend, violation of personal data privacy and copyright has increased, with illegal file sharing rampant across many online communities, particularly for certain music genres and among younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined in evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. We outline the modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
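The combination step at the heart of the multi-modal idea can be sketched as follows; real systems use perceptual (robust) fingerprint functions per modality, so the cryptographic hash here is only a stand-in, and all names are illustrative:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in for a per-modality fingerprint function."""
    return hashlib.sha256(data).hexdigest()[:16]

# Hypothetical decoded streams for each modality of one media product.
modalities = {
    "audio": b"decoded audio samples",
    "video": b"decoded video frames",
    "metadata": b"title/artist/container fields",
}

# Fingerprint each modality separately, then bind them into one identifier.
combined = fingerprint("|".join(
    f"{name}:{fingerprint(blob)}" for name, blob in sorted(modalities.items())
).encode())
print("multi-modal fingerprint:", combined)
```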
Abstract:
This paper proposes a conceptual model of a context-aware group support system (GSS) to assist local council employees in performing collaborative tasks in conjunction with inter- and intra-organisational stakeholders. Most discussions of e-government focus on the use of ICT to improve the relationship between government and citizens, not the relationship between government and its employees. This paper seeks to expose the unique culture of UK local councils and to show how a GSS could support the needs of local government employers and employees.
Abstract:
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
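The interoperability mentioned above rests on open standards; as a hedged sketch, a request in the style of the OGC Web Map Service (one family of standards used for this kind of visual data access; the endpoint and layer name below are hypothetical) can be built as follows:

```python
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # dataset/variable to render
    "CRS": "CRS:84", "BBOX": "-180,-90,180,90",
    "WIDTH": 512, "HEIGHT": 256, "FORMAT": "image/png",
    "TIME": "2010-01-01T00:00:00Z",  # four-dimensional data: pick a time...
    "ELEVATION": "-5.0",             # ...and a depth level
}
print("https://example.org/wms?" + urlencode(params))
```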
Abstract:
This chapter introduces the latest practices and technologies in the interactive interpretation of environmental data. With environmental data becoming ever larger, more diverse and more complex, there is a need for a new generation of tools that provides new capabilities over and above those of the standard workhorses of science. These new tools aid the scientist in discovering interesting new features (and also problems) in large datasets by allowing the data to be explored interactively using simple, intuitive graphical tools. In this way, new discoveries are made that are commonly missed by automated batch data processing. This chapter discusses the characteristics of environmental science data, common current practice in data analysis and the supporting tools and infrastructure. New approaches are introduced and illustrated from the points of view of both the end user and the underlying technology. We conclude by speculating as to future developments in the field and what must be achieved to fulfil this vision.
Abstract:
SOA (Service-Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents challenges. Scientific research increasingly finds it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our vision for applying Cloud computing to e-Science. We believe our findings can help indicate future trends in e-Science and can inform funding and research directions on how to employ computing technologies more appropriately in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
Environment monitoring applications using Wireless Sensor Networks (WSNs) have received considerable attention in recent years. In much of this research, tasks such as sensor data processing, decision-making about environment states and events, and emergency message sending are performed by a remote server. This paper presents simulation results for a proposed cross-layer protocol for two different applications in which the reliability of delivered data, delay, and network lifetime need to be considered. A WSN designed for the proposed applications needs efficient MAC and routing protocols to guarantee the reliability of the data delivered from source nodes to the sink. A cross-layer design based on [1] has been extended and simulated for the proposed applications, with new features such as route discovery algorithms added. Simulation results show that the proposed cross-layer protocol can conserve node energy and provide the required performance in terms of network lifetime, delay and reliability.
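A schematic of the kind of trade-off such a cross-layer design must make when discovering routes (entirely illustrative; this is not the protocol from the paper or from [1]):

```python
# Neighbour table: node id -> (residual energy in J, hop count to sink).
neighbours = {
    "A": (0.9, 3),
    "B": (0.4, 2),
    "C": (0.8, 2),
}

def next_hop(table):
    """Prefer fewer hops (delay/reliability), breaking ties on residual
    energy (network lifetime) - the trade-off evaluated in the paper."""
    return min(table, key=lambda n: (table[n][1], -table[n][0]))

print("forward via", next_hop(neighbours))  # -> C: 2 hops, more energy than B
```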
Abstract:
The motion of a car is described using a stochastic model in which the driving processes are the steering angle and the tangential acceleration. The model incorporates exactly the kinematic constraint that the wheels do not slip sideways. Two filters based on this model have been implemented: the standard extended Kalman filter (EKF) and a new filter (the CUF) in which the expectation and the covariance of the system state are propagated accurately. Experiments show that (i) the CUF is better than the EKF at predicting future positions of the car, and (ii) the filter outputs can be used to control the measurement process, leading to an improved ability to recover from errors in predictive tracking.
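A minimal sketch of the no-side-slip (kinematic bicycle) car model that such filters are typically built on, assuming the state is position, heading and speed, driven by steering angle and tangential acceleration; parameter values are illustrative:

```python
import math

def step(x, y, theta, v, steer, accel, wheelbase=2.5, dt=0.05):
    """One Euler step: the rear axle moves only along the heading,
    so the wheels cannot slip sideways."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steer) * dt  # turn rate from steering
    v += accel * dt
    return x, y, theta, v

state = (0.0, 0.0, 0.0, 10.0)  # start at origin, heading east at 10 m/s
for _ in range(100):           # 5 s of gentle, constant left steering
    state = step(*state, steer=0.1, accel=0.0)
print(f"x={state[0]:.1f} m, y={state[1]:.1f} m, heading={state[2]:.2f} rad")
```

In an EKF, a step function like this would serve as the process model, linearised about the current estimate; the abstract's point is that the CUF instead propagates the mean and covariance of the nonlinear model accurately.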
Abstract:
Pair programming is a technique from the software development method eXtreme Programming (XP) whereby two programmers work closely together to develop a piece of software. A similar approach has been used to develop a set of Assessment Learning Objects (ALOs). Three members of academic staff developed ALOs for three different modules (two with overlapping content); in each case a pair programming approach was taken to the development. In addition to demonstrating the efficiency of this approach in terms of staff time spent developing the ALOs, a statistical analysis of the outcomes for students who made use of the ALOs demonstrates the effectiveness of the ALOs produced via this method.
Abstract:
One of the main tasks of the mathematical knowledge management community must surely be to enhance access to mathematics on digital systems. In this paper we present a spectrum of approaches to solving the various problems inherent in this task, arguing that a variety of approaches is both necessary and useful. The main ideas presented concern the differences between digitised mathematics, digitally represented mathematics and formalised mathematics; each has its part to play in managing mathematical information in a connected world. Digitised material is that which is embodied in a computer file, accessible and displayable locally or globally. Represented material is digital material in which there is some structure (usually syntactic in nature) that maps to the mathematics contained in the digitised information. Formalised material is that in which both the syntax and the semantics of the represented material are automatically accessible. Given the range of mathematical information to which access is desired, and the limited resources available for managing that information, we must ensure that these resources are applied to digitising, representing or formalising existing and new mathematical information in such a way as to extract the most benefit from the least expenditure of resources. We also analyse some of the social and legal issues that surround these practical tasks.
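As a concrete illustration of the three levels (our example, not the authors'), consider the statement that the square root of 2 is irrational:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
% Represented mathematics: machine-processable syntax, but the meaning
% of "irrational" is not itself formalised.
\begin{theorem}
  $\sqrt{2} \notin \mathbb{Q}$.
\end{theorem}
% A digitised form of this statement would be a mere page image; a
% formalised form would state it in a proof assistant (e.g. Lean's
% mathlib proves irrational_sqrt_two : Irrational (Real.sqrt 2)).
\end{document}
```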