524 results for Data protection.
Abstract:
This paper presents an input-orientated data envelopment analysis (DEA) framework which allows the measurement and decomposition of economic, environmental and ecological efficiency levels in agricultural production across different countries. Economic, environmental and ecological optimisations search for optimal input combinations that minimise total costs, the total amount of nutrients, and the total amount of cumulative exergy contained in inputs, respectively. The application of the framework to an agricultural dataset of 30 OECD countries revealed that (i) there was significant scope to make their agricultural production systems more environmentally and ecologically sustainable; (ii) the improvement in environmental and ecological sustainability could be achieved by being more technically efficient and, even more significantly, by changing the input combinations; (iii) the rankings of sustainability varied significantly across OECD countries within frontier-based environmental and ecological efficiency measures and between frontier-based measures and indicators.
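As a worked illustration of the kind of optimisation involved, a minimal sketch of the standard cost-minimising (input-orientated) DEA model is given below; the symbols are generic and not taken from the paper. For an evaluated country with observed inputs x_{i0}, outputs y_{r0} and input prices w_i, the economic optimisation solves

\[
\begin{aligned}
\min_{\lambda,\;x^{*}}\quad & \sum_{i=1}^{m} w_i x_i^{*} \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le x_i^{*}, \qquad i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{r0}, \qquad r = 1,\dots,s,\\
& \lambda_j \ge 0, \qquad j = 1,\dots,n.
\end{aligned}
\]

Replacing the prices w_i with nutrient contents or cumulative exergy coefficients would give the environmental and ecological optimisations, respectively; the ratio of the minimised to the observed aggregate input then decomposes into a technical component and an input-mix (allocative) component, which is what finding (ii) refers to.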
Abstract:
The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated for training an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network is capable of estimating future survival probabilities when a series of asset condition readings is input. The output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model can predict more accurately, as well as further ahead, than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis utilising the available information more fully and accurately.
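A minimal sketch of a Kaplan–Meier estimate computed over both failed and suspended histories, of the kind that could supply such survival-probability training targets, is given below; the function name and interface are illustrative, and the paper's degradation-based failure density variant is not reproduced here.

import numpy as np

def kaplan_meier(times, failed):
    """Kaplan-Meier survival estimate from failure and suspension (censored) times.
    times: age of each historical item at failure or suspension.
    failed: 1 if the item failed, 0 if its history was suspended."""
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=int)
    fail_times = np.unique(times[failed == 1])                        # distinct failure times
    at_risk = np.array([(times >= t).sum() for t in fail_times])      # items still at risk at each failure time
    deaths = np.array([((times == t) & (failed == 1)).sum() for t in fail_times])
    survival = np.cumprod(1.0 - deaths / at_risk)                     # S(t) at each failure time
    return fail_times, survival

# Example: three failures and two suspensions; the resulting S(t) values can serve as
# multi-step-ahead targets for a feed-forward network fed with condition readings.
t, s = kaplan_meier([5, 8, 8, 12, 15], [1, 1, 0, 1, 0])               # s = [0.8, 0.6, 0.3]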
Abstract:
Our paper approaches Twitter through the lens of “platform politics” (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise the different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter’s Terms of Service and application programming interface (API) as material instantiations of the regulatory instruments used by the platform provider, and argue for stronger promotion of data rights and literacy to strengthen the position of end users.
Abstract:
The deployment of new emerging technologies, such as cooperative systems, allows the traffic community to foresee relevant improvements in terms of traffic safety and efficiency. Vehicles are able to communicate information on the local traffic state in real time, which could result in an automatic and therefore better reaction to the mechanism of traffic jam formation. An upstream single-hop radio broadcast network can improve the perception of each cooperative driver within radio range and hence the traffic stability. The impact of a cooperative law on the appearance of traffic congestion is investigated, analytically and through simulation. NGSIM field data are used to calibrate the Optimal Velocity with Relative Velocity (OVRV) car-following model, and the MOBIL lane-changing model is implemented. Assuming that congestion can be triggered either by a perturbation in the instability domain or by a critical lane-changing behavior, the calibrated car-following behavior is used to assess the impact of a microscopic cooperative law on abnormal lane-changing behavior. The cooperative law helps reduce and delay traffic congestion as it increases traffic flow stability.
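As a hedged sketch of what such a calibrated car-following law looks like, one common form of the OVRV model is shown below; the parameter values and the optimal-velocity function are illustrative, not the NGSIM-calibrated values used in the paper.

import math

def ovrv_acceleration(gap, speed, rel_speed, kappa=0.6, lam=0.5,
                      v_max=30.0, s0=2.0, d=15.0):
    """Optimal Velocity with Relative Velocity (OVRV) car-following model, one common form:
    acceleration = kappa * (V_opt(gap) - speed) + lam * rel_speed,
    where rel_speed is the leader's speed minus the follower's speed."""
    v_opt = 0.5 * v_max * (math.tanh((gap - s0) / d) + math.tanh(s0 / d))   # illustrative optimal-velocity function
    return kappa * (v_opt - speed) + lam * rel_speed

# Example: a 25 m gap, following at 20 m/s while the leader is 2 m/s slower
a = ovrv_acceleration(gap=25.0, speed=20.0, rel_speed=-2.0)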
Abstract:
Background: Accumulated biological research outcomes show that biological functions do not depend on individual genes, but on complex gene networks. Microarray data are widely used to cluster genes according to their expression levels across experimental conditions. However, functionally related genes generally do not show coherent expression across all conditions, since any given cellular process is active only under a subset of conditions. Biclustering finds gene clusters that have similar expression levels across a subset of conditions. This paper proposes a seed-based algorithm that identifies coherent genes in an exhaustive, but efficient, manner. Methods: In order to find the biclusters in a gene expression dataset, we exhaustively select combinations of genes and conditions as seeds to create candidate bicluster tables. The tables have two columns: (a) a gene set, and (b) the conditions on which the gene set has expression levels dissimilar to the seed. First, the genes with fewer than the maximum number of dissimilar conditions are identified and a table of these genes is created. Second, the rows that have the same dissimilar conditions are grouped together. Third, the table is sorted in ascending order based on the number of dissimilar conditions. Finally, beginning with the first row of the table, a test is run repeatedly to determine whether the cardinality of the gene set in the row is greater than the minimum threshold number of genes in a bicluster. If so, a bicluster is output and the corresponding row is removed from the table. Repeating this process, all biclusters in the table are systematically identified until the table becomes empty. Conclusions: This paper presents a novel biclustering algorithm for the identification of additive biclusters. Because it exhaustively tests combinations of genes and conditions, the additive biclusters can be found more readily.
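A highly simplified Python sketch of the table-building and scanning idea for a single seed gene is given below; the dissimilarity rule, names and thresholds are illustrative and this is not the authors' implementation.

import numpy as np
from collections import defaultdict

def biclusters_for_seed(expr, seed_gene, delta=1.0, min_genes=3):
    """Schematic seed-based search for additive biclusters.
    expr: genes x conditions expression matrix; seed_gene: row index of the seed."""
    shift = expr - expr[seed_gene]                        # per-gene difference from the seed profile
    centered = shift - shift.mean(axis=1, keepdims=True)  # an additive bicluster keeps this near zero
    dissimilar = np.abs(centered) > delta                 # conditions where a gene deviates from the seed pattern
    table = defaultdict(set)                              # candidate table: rows keyed by their dissimilar-condition set
    for g in range(expr.shape[0]):
        table[frozenset(np.flatnonzero(dissimilar[g]))].add(g)
    found = []
    # scan rows in ascending order of the number of dissimilar conditions
    for bad_conditions, genes in sorted(table.items(), key=lambda row: len(row[0])):
        if len(genes) >= min_genes:                       # enough genes to report a bicluster
            good_conditions = [c for c in range(expr.shape[1]) if c not in bad_conditions]
            found.append((sorted(genes), good_conditions))
    return found

# Example (toy data): biclusters_for_seed(np.random.rand(50, 10), seed_gene=0, min_genes=5)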
Abstract:
miRDeep and its variants are widely used to quantify known and novel microRNA (miRNA) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modeled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphic interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to the confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are coded purely in Java. This application can be executed on a normal personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
Abstract:
The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for the comparison of results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test the various techniques developed to conduct reliability studies. The development of reliability assessment techniques and programs is heavily dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another utility. The development and the utilization of a reliability program are, therefore, greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved to be extremely valuable in highlighting and comparing the capabilities (or limitations) of programs used in reliability studies, the differences in the perception of various power utilities and the differences in the solution techniques. The IEEE-RTS contains a reasonably large power network which can be difficult to use for initial studies in an educational environment.
Abstract:
The IEEE Reliability Test System (RTS) developed by the Application of Probability Methods Subcommittee has been used to compare and test a wide range of generating capacity and composite system evaluation techniques and the subsequent digital computer programs. A basic reliability test system is presented here which has evolved from the reliability education and research programs conducted by the Power System Research Group at the University of Saskatchewan. The basic system data necessary for adequacy evaluation at the generation and composite generation and transmission system levels are presented, together with the fundamental data required to conduct reliability-cost/reliability-worth evaluation.
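A minimal sketch of the kind of generation adequacy calculation such test systems are used to benchmark is given below: a capacity outage probability table built from two-state units and a loss-of-load expectation evaluated against daily peaks. The unit data and function names are illustrative, not RTS values.

def capacity_outage_table(units):
    """Capacity outage probability table built by convolving independent two-state units.
    units: list of (capacity_MW, forced_outage_rate) pairs."""
    table = {0.0: 1.0}                                    # outage in MW -> probability
    for capacity, q in units:
        updated = {}
        for outage, p in table.items():
            updated[outage] = updated.get(outage, 0.0) + p * (1.0 - q)                  # unit available
            updated[outage + capacity] = updated.get(outage + capacity, 0.0) + p * q    # unit on forced outage
        table = updated
    return dict(sorted(table.items()))

def lole_days(units, daily_peaks_MW):
    """Loss-of-load expectation: expected number of days on which available capacity falls short of the daily peak."""
    installed = sum(capacity for capacity, _ in units)
    copt = capacity_outage_table(units)
    return sum(p for peak in daily_peaks_MW
                 for outage, p in copt.items() if installed - outage < peak)

# Example: five 100 MW units with a 2% forced outage rate, evaluated against three daily peaks
risk = lole_days([(100.0, 0.02)] * 5, [460.0, 430.0, 380.0])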
Abstract:
Buildings are key mediators between human activity and the environment around them, but details of energy usage and activity in buildings are often poorly communicated and understood. ECOS is an Eco-Visualization project that aims to contextualize the energy generation and consumption of a green building in a variety of different climates. The ECOS project is being developed for a large public interactive space installed in the new Science and Engineering Centre of the Queensland University of Technology, which is dedicated to delivering interactive science education content to the public. This paper focuses on how design can develop ICT solutions from large data sets to create meaningful engagement with environmental data.
Abstract:
The adequacy of the UV Index (UVI), a simple measure of ambient solar ultraviolet (UV) radiation, has been questioned on the basis of recent scientific data on the importance of vitamin D for human health, the mutagenic capacity of radiation in the UVA wavelength, and limitations in the behavioral impact of the UVI as a public awareness tool. A working group convened by ICNIRP and WHO met to assess whether modifications of the UVI were warranted and to discuss ways of improving its effectiveness as a guide to healthy sun-protective behavior. A UV Index greater than 3 was confirmed as indicating ambient UV levels at which harmful sun exposure and sunburns could occur and hence as the threshold for promoting preventive messages. There is currently insufficient evidence about the quantitative relationship of sun exposure, vitamin D, and human health to include vitamin D considerations in sun protection recommendations. The role of UVA in sunlight-induced dermal immunosuppression and DNA damage was acknowledged, but the contribution of UVA to skin carcinogenesis could not be quantified precisely. As ambient UVA and UVB levels mostly vary in parallel in real life situations, any minor modification of the UVI weighting function with respect to UVA-induced skin cancer would not be expected to have a significant impact on the UV Index. Though it has been shown that the UV Index can raise awareness of the risk of UV radiation to some extent, the UVI does not appear to change attitudes to sun protection or behavior in the way it is presently used. Changes in the UVI itself were not warranted based on these findings, but rather research testing health behavior models, including the roles of self-efficacy and self-affirmation in relation to intention to use sun protection among different susceptible groups, should be carried out to develop more successful strategies toward improving sun protection behavior. Health Phys. 103(3):301-306; 2012
Abstract:
QUT’s new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administrative system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the reuse of existing research datasets. The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers; unlike metadata for publications, which are often made publicly available with the official publication, metadata for datasets are not typically available and need to be created. Research Data Finder uses a hybrid self-deposit and mediated deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key “trigger points” in the research cycle. These include: research grant proposals; ethics applications; Data Management Plans; Liaison Librarian data interviews; and thesis submissions. These ingested records can be supplemented with related metadata, including links to related publications, such as those in QUT ePrints. Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS’ discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA) and these records are then published in Trove (the NLA’s digital information portal). By contributing records to the national infrastructure, QUT data will become more visible. Within Australia and internationally, many funding bodies have already mandated open access to publications produced from publicly funded research projects, such as those supported by the Australian Research Council (ARC) or the National Health and Medical Research Council (NHMRC). QUT will be well placed to respond to the rapidly evolving climate of research data management. This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.
Abstract:
This paper describes the use of property graphs for mapping data between AEC software tools, which are not linked by common data formats and/or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined by way of supporting the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
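A minimal sketch of the kind of directed, attributed, multi-relational graph that could carry data between two tools is shown below; the node names, attributes and edge types are illustrative, and networkx is used here rather than any tool named in the paper.

import networkx as nx

# Directed, attributed, multi-relational graph: parallel edges carry different relation types.
G = nx.MultiDiGraph()
G.add_node("wall_01", source_tool="BIM_model", height_mm=3000, area_m2=12.5)
G.add_node("surface_07", source_tool="energy_model")
G.add_edge("wall_01", "surface_07", key="maps_to", rule="shared geometry")   # relation type 1
G.add_edge("wall_01", "surface_07", key="feeds", attribute="area_m2")        # relation type 2, in parallel

# A simple transformation pass: copy each attribute named on a "feeds" edge
# from its source node to its target node.
for u, v, k, data in G.edges(keys=True, data=True):
    if k == "feeds":
        G.nodes[v][data["attribute"]] = G.nodes[u][data["attribute"]]

print(G.nodes["surface_07"])   # {'source_tool': 'energy_model', 'area_m2': 12.5}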
Abstract:
Climate change and land use pressures are making environmental monitoring increasingly important. As environmental health is degrading at an alarming rate, ecologists have tried to tackle the problem by monitoring the composition and condition of the environment. However, traditional monitoring methods using experts are manual and expensive; to address this issue, government organisations have designed a simpler and faster surrogate-based assessment technique for consultants, landholders and ordinary citizens. However, it remains complex, subjective and error-prone, which makes the collected data difficult to interpret and compare. In this paper we describe a work-in-progress mobile application designed to address these shortcomings through the use of augmented reality and multimedia smartphone technology.
Abstract:
Citizen Science projects are initiatives in which members of the general public participate in scientific research projects and perform or manage research-related tasks such as data collection and/or data annotation. Citizen Science is technologically possible and scientifically significant. However, although research teams can save time and money by recruiting citizens to volunteer their time and skills to help with data analysis, the reliability of contributed data varies widely. Data reliability issues are significant in Citizen Science due to the quantity and diversity of the people and devices involved. Participants may submit low-quality, misleading, inaccurate, or even malicious data. Therefore, finding ways to improve data reliability has become an urgent need. This study aims to investigate techniques to enhance the reliability of data contributed by citizens in scientific research projects, especially acoustic sensing projects. In particular, we propose to design a reputation framework to enhance data reliability and to investigate some critical elements that should be taken into account when developing and designing new reputation systems.
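A minimal sketch of one common building block for such a framework, a Beta reputation score that weights a contributor's record of verified versus rejected submissions, is shown below; it illustrates the general technique, not the framework proposed in this study.

def beta_reputation(verified, rejected, prior=1.0):
    """Expected reliability of a contributor under a Beta(prior, prior) prior,
    given counts of verified and rejected submissions."""
    return (verified + prior) / (verified + rejected + 2.0 * prior)

def accept_submission(reputation, threshold=0.6):
    """Example policy: only pass data from sufficiently reputable contributors to analysis."""
    return reputation >= threshold

# Example: a volunteer with 8 verified and 2 rejected acoustic annotations
score = beta_reputation(8, 2)        # 0.75
usable = accept_submission(score)    # True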