990 results for CENSORED DATA


Abstract:

Mandatory data breach notification laws are a novel statutory solution in relation to organizational protections of personal information. They require organizations that have suffered a breach of security involving personal information to notify those persons whose information may have been affected. These laws originated in the state-based legislatures of the United States during the last decade and have subsequently garnered worldwide legislative interest. Despite their perceived utility, mandatory data breach notification laws have several conceptual and practical concerns that limit the scope of their applicability, particularly in relation to existing information privacy law regimes. We outline these concerns and, in doing so, contend that while mandatory data breach notification laws have many useful facets, their utility as an 'add-on' to remedy the failings of current information privacy law frameworks should not necessarily be taken for granted.

Abstract:

We report on a longitudinal study of data modelling across grades 1-3. The activity engaged children in designing, implementing, and analysing a survey about their new playground. Data modelling involves investigating meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. The core components of data modelling addressed here are children’s structuring and representing of data, with a focus on their display of metarepresentational competence (diSessa, 2004). Such competence includes students’ abilities to invent or design a variety of new representations, explain their creations, understand the role they play, and critique and compare the adequacy of representations. Reported here are the ways in which the children structured and represented their data, the metarepresentational competence they displayed, and the links between their metarepresentational competence and conceptual competence.

Abstract:

Background: Cancer outlier profile analysis (COPA) has proven to be an effective approach to analyzing cancer expression data, leading to the discovery of the TMPRSS2 and ETS family gene fusion events in prostate cancer. However, the original COPA algorithm did not identify down-regulated outliers, and the currently available R package implementing the method is similarly restricted to the analysis of over-expressed outliers. Here we present a modified outlier detection method, mCOPA, which contains refinements to the outlier-detection algorithm, identifies both over- and under-expressed outliers, is freely available, and can be applied to any expression dataset.

Results: We compare our method to other feature-selection approaches and demonstrate that mCOPA frequently selects more informative features than differential-expression or variance-based feature selection approaches, and is able to recover observed clinical subtypes more consistently. We demonstrate the application of mCOPA to prostate cancer expression data, and explore the use of outliers in clustering, pathway analysis, and the identification of tumour suppressors. We analyse the under-expressed outliers to identify known and novel prostate cancer tumour suppressor genes, validating these against data in Oncomine and the Cancer Gene Index. We also demonstrate how a combination of outlier analysis and pathway analysis can identify molecular mechanisms disrupted in individual tumours.

Conclusions: We demonstrate that mCOPA offers advantages, compared to differential expression or variance, in selecting outlier features, and that the features so selected are better able to assign samples to clinically annotated subtypes. Further, we show that the biology explored by outlier analysis differs from that uncovered in differential expression or variance analysis. mCOPA is an important new tool for the exploration of cancer datasets and the discovery of new cancer subtypes, and can be combined with pathway and functional analysis approaches to discover mechanisms underpinning heterogeneity in cancers.
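To make the outlier transformation concrete, the sketch below median-centres each gene, scales by the median absolute deviation, and scores over- and under-expressed outliers by upper and lower percentiles. It is a minimal COPA-style illustration with assumed cut-offs (90th/10th percentiles), not the published mCOPA code.

```python
# Minimal COPA-style outlier scoring (illustrative only; not the mCOPA package).
import numpy as np

def copa_scores(expr, upper_q=90, lower_q=10):
    """expr: genes x samples expression matrix.
    Returns per-gene scores for over- and under-expressed outliers."""
    med = np.median(expr, axis=1, keepdims=True)
    mad = np.median(np.abs(expr - med), axis=1, keepdims=True)
    mad[mad == 0] = 1e-9                                # guard against zero spread
    centred = (expr - med) / (1.4826 * mad)             # median-centre, MAD-scale
    over = np.percentile(centred, upper_q, axis=1)      # over-expressed outlier score
    under = np.percentile(centred, lower_q, axis=1)     # under-expressed outlier score
    return over, under

# Example: rank genes by under-expression to nominate candidate tumour suppressors
expr = np.random.default_rng(0).normal(size=(100, 30))
over, under = copa_scores(expr)
candidate_suppressors = np.argsort(under)[:10]          # most strongly under-expressed
```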

Abstract:

In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
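The sketch below illustrates the kind of confidence-driven, greedy scheduling described here; the query formulation, the confidence score and the execute_query callback are hypothetical placeholders, not WebPut's actual components.

```python
# Hypothetical greedy, confidence-driven imputation loop (illustrative only).
import math

def best_query(record, attribute):
    """Placeholder: build a web search query for a missing attribute from the
    record's known values and attach an assumed confidence score."""
    evidence = {k: v for k, v in record.items() if v is not None and k != attribute}
    confidence = 1 - math.exp(-len(evidence))        # assumed: more evidence, higher confidence
    query = " ".join(f'"{v}"' for v in evidence.values()) + f" {attribute}"
    return query, confidence

def impute_greedily(table, execute_query):
    """Repeatedly fill the missing cell whose query currently has the highest
    confidence; newly imputed values become evidence for later queries.
    execute_query is assumed to return an extracted value for each query."""
    while True:
        missing = [(i, a) for i, rec in enumerate(table)
                   for a, v in rec.items() if v is None]
        if not missing:
            break
        scored = [(i, a, *best_query(table[i], a)) for i, a in missing]
        i, a, query, _conf = max(scored, key=lambda t: t[3])
        table[i][a] = execute_query(query)           # web extraction happens here
```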

Abstract:

This paper presents an input-orientated data envelopment analysis (DEA) framework which allows the measurement and decomposition of economic, environmental and ecological efficiency levels in agricultural production across different countries. Economic, environmental and ecological optimisations search for optimal input combinations that minimise total costs, the total amount of nutrients, and the total amount of cumulative exergy contained in inputs, respectively. The application of the framework to an agricultural dataset of 30 OECD countries revealed that (i) there was significant scope to make their agricultural production systems more environmentally and ecologically sustainable; (ii) the improvement in environmental and ecological sustainability could be achieved by being more technically efficient and, even more significantly, by changing the input combinations; (iii) the rankings of sustainability varied significantly across OECD countries within frontier-based environmental and ecological efficiency measures and between frontier-based measures and indicators.
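As a sketch of the building block behind such a framework, the code below computes an input-orientated technical efficiency score for one decision-making unit in the constant-returns-to-scale (CCR) form; the data shapes, made-up example and use of scipy are illustrative assumptions, and the economic, environmental and ecological variants would replace the objective with cost, nutrient or cumulative exergy totals.

```python
# Input-orientated DEA (CCR form) for one DMU -- an illustrative sketch, not the paper's model.
import numpy as np
from scipy.optimize import linprog

def input_oriented_efficiency(X, Y, o):
    """X: inputs (n_dmus x n_inputs), Y: outputs (n_dmus x n_outputs).
    Returns the efficiency score theta for DMU index o (1.0 = efficient)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # decision variables: [theta, lambda_1..lambda_n]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

# Example with made-up data: 5 units, 2 inputs, 1 output
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.ones((5, 1))
print([round(input_oriented_efficiency(X, Y, o), 3) for o in range(len(X))])
```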

Abstract:

The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated to train an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network is capable of estimating future survival probabilities when a series of asset condition readings is provided as input. The output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model can predict more accurately, as well as further ahead, than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis that utilises the available information more fully and accurately.
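The sketch below shows how survival probabilities of the kind used as training targets can be estimated from complete (failure) and suspended (censored) histories with a Kaplan–Meier product-limit step; the data are made up and the neural-network stage is not shown.

```python
# Kaplan-Meier estimation over failures and suspensions (illustrative data only).
import numpy as np

def kaplan_meier(times, failed):
    """times: observed lifetimes; failed: 1 for a failure, 0 for a suspension.
    Returns (event_times, survival_probabilities)."""
    times, failed = np.asarray(times, float), np.asarray(failed, int)
    n_at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = np.sum((times == t) & (failed == 1))   # failures at time t
        if d > 0:
            surv *= 1 - d / n_at_risk              # product-limit step
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= np.sum(times == t)            # failures and suspensions leave the risk set
    return np.array(out_t), np.array(out_s)

# Example: pump lifetimes in hours; 0 marks a suspended (still running) unit
t = [1200, 1500, 1500, 1800, 2100, 2400]
f = [1, 0, 1, 1, 0, 1]
print(kaplan_meier(t, f))    # the estimated survival curve used as a training target
```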

Abstract:

Our paper approaches Twitter through the lens of “platform politics” (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise the different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter’s Terms of Service and application programming interface (API) as material instantiations of regulatory instruments used by the platform provider, and argue for greater promotion of data rights and data literacy to strengthen the position of end users.

Abstract:

The deployment of new emerging technologies, such as cooperative systems, allows the traffic community to foresee relevant improvements in terms of traffic safety and efficiency. Vehicles are able to communicate the local traffic state in real time, which could result in an automatic, and therefore better, reaction to the mechanism of traffic jam formation. An upstream single-hop radio broadcast network can improve the perception of each cooperative driver within radio range and hence the traffic stability. The impact of a cooperative law on the appearance of traffic congestion is investigated, analytically and through simulation. NGSIM field data are used to calibrate the Optimal Velocity with Relative Velocity (OVRV) car-following model, and the MOBIL lane-changing model is implemented. Assuming that congestion can be triggered either by a perturbation in the instability domain or by a critical lane-changing behavior, the calibrated car-following behavior is used to assess the impact of a microscopic cooperative law on abnormal lane-changing behavior. The cooperative law helps reduce and delay traffic congestion as it increases traffic flow stability.
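For illustration, the sketch below advances an OVRV-style car-following model by one explicit Euler step; the optimal-velocity function and the parameter values are assumptions for demonstration, not the NGSIM-calibrated values used in the study.

```python
# One Euler step of an Optimal Velocity with Relative Velocity (OVRV) model (illustrative parameters).
import numpy as np

def optimal_velocity(gap, v_max=30.0, s0=2.0, width=10.0):
    """Assumed optimal-velocity function of the spacing to the leader (0 at gap 0, v_max for large gaps)."""
    return v_max * (np.tanh((gap - s0) / width) + np.tanh(s0 / width)) / (1 + np.tanh(s0 / width))

def ovrv_step(x, v, dt=0.1, kappa=0.6, lam=0.5):
    """x, v: positions and speeds ordered from last follower (index 0) to leader (last index).
    Returns updated positions and speeds after one time step."""
    gap = np.diff(x)                                   # spacing to the vehicle ahead
    dv = np.diff(v)                                    # relative speed (leader minus follower)
    acc = kappa * (optimal_velocity(gap) - v[:-1]) + lam * dv
    v_new, x_new = v.copy(), x.copy()
    v_new[:-1] = np.maximum(v[:-1] + acc * dt, 0.0)    # followers update; no reversing
    x_new += v_new * dt                                # leader keeps its current speed
    return x_new, v_new

# Example: a platoon of 5 vehicles, 20 m apart, all travelling at 20 m/s
x = np.arange(0.0, 100.0, 20.0)
v = np.full(5, 20.0)
x, v = ovrv_step(x, v)
```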

Abstract:

Background: Accumulated biological research outcomes show that biological functions depend not on individual genes but on complex gene networks. Microarray data are widely used to cluster genes according to their expression levels across experimental conditions. However, functionally related genes generally do not show coherent expression across all conditions, since any given cellular process is active only under a subset of conditions. Biclustering finds gene clusters that have similar expression levels across a subset of conditions. This paper proposes a seed-based algorithm that identifies coherent genes in an exhaustive, but efficient, manner.

Methods: In order to find the biclusters in a gene expression dataset, we exhaustively select combinations of genes and conditions as seeds to create candidate bicluster tables. The tables have two columns: (a) a gene set, and (b) the conditions on which the gene set has expression levels dissimilar to the seed. First, the genes with fewer than the maximum number of dissimilar conditions are identified and a table of these genes is created. Second, the rows that have the same dissimilar conditions are grouped together. Third, the table is sorted in ascending order by the number of dissimilar conditions. Finally, beginning with the first row of the table, a test is run repeatedly to determine whether the cardinality of the gene set in the row is greater than the minimum threshold number of genes in a bicluster. If so, a bicluster is output and the corresponding row is removed from the table. Repeating this process, all biclusters in the table are systematically identified until the table becomes empty.

Conclusions: This paper presents a novel biclustering algorithm for the identification of additive biclusters. Since it exhaustively tests combinations of genes and conditions, the additive biclusters can be found more readily.
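A simplified sketch of the candidate-table idea is given below for a single gene seed; the dissimilarity test, thresholds and additive distance are assumptions made for illustration, not the published algorithm.

```python
# Seed-based candidate-table biclustering, heavily simplified (illustrative only).
from collections import defaultdict
import numpy as np

def seeded_biclusters(expr, seed_gene, delta=1.0, max_dissim=2, min_genes=3):
    """expr: genes x conditions matrix. Group genes by the set of conditions on
    which they differ from the seed, then report sufficiently large groups."""
    diff = np.abs(expr - expr[seed_gene])              # additive-model distance to the seed row
    table = defaultdict(list)                          # dissimilar-condition set -> gene list
    for g in range(expr.shape[0]):
        dissim = frozenset(np.flatnonzero(diff[g] > delta))
        if len(dissim) <= max_dissim:                  # keep genes coherent with the seed
            table[dissim].append(g)
    biclusters = []
    for dissim in sorted(table, key=len):              # ascending number of dissimilar conditions
        genes = table[dissim]
        if len(genes) >= min_genes:                    # minimum bicluster size test
            conditions = sorted(set(range(expr.shape[1])) - set(dissim))
            biclusters.append((genes, conditions))
    return biclusters
```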

Abstract:

miRDeep and its variants are widely used to quantify known and novel microRNA (miRNA) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modeled on miRDeep but improves the precision of novel miRNA detection by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphical interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to the confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are purely Java coded. The tool can be executed on a normal personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star

Abstract:

The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for the comparison of results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test the various techniques developed for reliability studies. The development of reliability assessment techniques and programs is very dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another utility. The development and utilization of a reliability program are, therefore, greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved to be extremely valuable in highlighting and comparing the capabilities (or incapabilities) of programs used in reliability studies, the differences in the perception of various power utilities, and the differences in the solution techniques. The IEEE-RTS contains a reasonably large power network, which can be difficult to use for initial studies in an educational environment.

Abstract:

The IEEE Reliability Test System (RTS) developed by the Application of Probability Methods Subcommittee has been used to compare and test a wide range of generating capacity and composite system evaluation techniques and subsequent digital computer programs. A basic reliability test system is presented which has evolved from the reliability education and research programs conducted by the Power System Research Group at the University of Saskatchewan. The basic system data necessary for adequacy evaluation at the generation and composite generation and transmission system levels are presented, together with the fundamental data required to conduct reliability-cost/reliability-worth evaluation.

Abstract:

Buildings are key mediators between human activity and the environment around them, but details of energy usage and activity in buildings are often poorly communicated and understood. ECOS is an Eco-Visualization project that aims to contextualize the energy generation and consumption of a green building in a variety of different climates. The ECOS project is being developed for a large public interactive space installed in the new Science and Engineering Centre of the Queensland University of Technology that is dedicated to delivering interactive science education content to the public. This paper focuses on how design can develop ICT solutions from large data sets to create meaningful engagement with environmental data.

Abstract:

QUT’s new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administrative system, ResearchMaster, and QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximize the reuse of existing research datasets.

The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers; unlike metadata for publications, which are often made publicly available with the official publication, metadata for datasets are not typically available and need to be created. Research Data Finder uses a hybrid self-deposit and mediated deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key “trigger points” in the research cycle. These include: research grant proposals; ethics applications; Data Management Plans; Liaison Librarian data interviews; and thesis submissions. These ingested records can be supplemented with related metadata, including links to related publications, such as those in QUT ePrints.

Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS’ discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA), and these records are then published in Trove (the NLA’s digital information portal). By contributing records to the national infrastructure, QUT data will become more visible. Within Australia and internationally, many funding bodies have already mandated open access to publications produced from publicly funded research projects, such as those supported by the Australian Research Council (ARC) or the National Health and Medical Research Council (NHMRC). QUT will be well placed to respond to the rapidly evolving climate of research data management.

This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.