886 results for computer based experiments


Relevance: 40.00%

Abstract:

User requirements for multimedia authentication vary. In some cases, the user requires an authentication system to monitor a set of specific areas, each with its own sensitivity, while ignoring other modifications. Most existing fragile watermarking schemes are mixed systems that cannot satisfy such precise requirements. Therefore, in this paper we design a sensor-based multimedia authentication architecture. The system consists of sensor combinations and a fuzzy response logic system. Each sensor is designed to respond strictly to tampering of a certain type within a given area. With this scheme, any complicated authentication requirement can be satisfied, and problems such as error-tolerant detection of the tampering method become easy to resolve. We also provide experiments demonstrating an implementation of the sensor-based system.
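
As a rough illustration of the architecture described above, the sketch below pairs region-specific tamper sensors with a simple aggregate response. The class names, the pixel-difference test, and the averaging step are hypothetical stand-ins for the paper's sensor design and fuzzy response logic.

```python
# Hypothetical sketch of region-specific tamper sensors; the paper's actual
# sensor construction and fuzzy response logic are not reproduced here.
import numpy as np

class RegionSensor:
    """Responds only to changes inside its assigned image region."""
    def __init__(self, region, sensitivity):
        self.region = region            # (row0, row1, col0, col1)
        self.sensitivity = sensitivity  # tolerated mean absolute change

    def respond(self, original, received):
        r0, r1, c0, c1 = self.region
        diff = np.abs(original[r0:r1, c0:c1].astype(float)
                      - received[r0:r1, c0:c1].astype(float))
        return diff.mean() > self.sensitivity  # True = tamper alarm

def aggregate_response(sensors, original, received):
    # Crude stand-in for the fuzzy response logic: the fraction of alarms.
    alarms = [s.respond(original, received) for s in sensors]
    return sum(alarms) / len(alarms)    # 0.0 = authentic, 1.0 = fully tampered
```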

Relevance: 40.00%

Abstract:

Sharing data among organizations often leads to mutual benefit. Recent technology in data mining has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effect on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study such a problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. To preserve the non-sensitive frequent itemsets, we propose a border-based approach to efficiently evaluate the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effect. Experimental results are also reported to show the effectiveness of the proposed approach. © 2005 IEEE
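
A minimal sketch of the greedy hiding loop follows. It recounts support from scratch rather than using the paper's border-based impact evaluation, and the function names and inputs (a list of transaction sets, a sensitive itemset, and a list of non-sensitive itemsets to protect) are illustrative assumptions.

```python
# Greedy itemset-hiding sketch; recomputes support naively instead of the
# paper's border-based evaluation, purely for illustration.
def support(db, itemset):
    return sum(1 for t in db if itemset <= t)

def hide_itemset(db, sensitive, nonsensitive, min_sup):
    db = [set(t) for t in db]
    while support(db, sensitive) >= min_sup:
        best = None
        for i, t in enumerate(db):
            if not sensitive <= t:
                continue
            for item in sensitive:
                trial = db[:i] + [t - {item}] + db[i+1:]
                # Side effect: non-sensitive itemsets pushed below min_sup.
                damage = sum(1 for ns in nonsensitive
                             if support(trial, ns) < min_sup)
                if best is None or damage < best[0]:
                    best = (damage, i, item)
        _, i, item = best
        db[i] -= {item}           # apply the least damaging modification
    return db

# Example: hide {'a','b'} while keeping {'b','c'} frequent (min_sup = 2).
db = [{'a','b','c'}, {'a','b'}, {'b','c'}, {'a','b','c'}]
print(hide_itemset(db, {'a','b'}, [{'b','c'}], min_sup=2))
```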

Relevance: 40.00%

Abstract:

Experiments with simulators allow psychologists to better understand the causes of human errors and to build models of cognitive processes for use in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. A model-checking technique is then used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete, and a new behavioural pattern is identified which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
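
The completeness check at the heart of this approach can be pictured with a toy, trace-level analogue: if some trace of the model matches none of the proposed behavioural patterns, a pattern is missing. The patterns, event names, and predicate encoding below are invented for illustration; the paper works with CSP models and a model checker, not Python.

```python
# Toy trace-level analogue of the completeness check; the paper uses CSP
# and model checking, not this enumeration. Patterns and events are invented.
patterns = {
    "perceive-then-act": lambda tr: "perceive" in tr and "act" in tr
                                    and tr.index("perceive") < tr.index("act"),
    "act-without-perceiving": lambda tr: "act" in tr and "perceive" not in tr,
}

def uncovered(traces):
    # The decomposition is complete iff every trace matches some pattern.
    return [tr for tr in traces if not any(p(tr) for p in patterns.values())]

traces = [["perceive", "decide", "act"],
          ["act", "wait"],
          ["decide", "act", "perceive"]]  # matches neither pattern
print(uncovered(traces))  # a non-empty result flags a missing pattern
```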

Relevance: 40.00%

Abstract:

There has been increasing demand for characterizing user access patterns using web mining techniques, since the knowledge extracted from web server log files can benefit not only web site structure improvement but also the understanding of user navigational behavior. In this paper, we present a web usage mining method which utilizes web usage and page linkage information to capture user access patterns based on the Probabilistic Latent Semantic Analysis (PLSA) model. A specific probabilistic inference algorithm, the Expectation-Maximization (EM) algorithm, is applied to the integrated usage data to infer the latent semantic factors and to generate user session clusters that reveal user access patterns. Experiments have been conducted on a real-world data set to validate the effectiveness of the proposed approach. The results show that the presented method is capable of characterizing the latent semantic factors and of generating user profiles in terms of weighted page vectors, which may reflect the common access interests exhibited by users within the same session cluster.
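
The EM fitting step for PLSA is easy to sketch. Below is a minimal, self-contained version on a synthetic session-by-page count matrix; the matrix, the factor count, and the iteration budget are assumptions, not the paper's experimental setup.

```python
# Minimal PLSA-EM sketch on a session-by-page count matrix N (synthetic
# data; real usage data would come from parsed server logs).
import numpy as np

rng = np.random.default_rng(0)
N = rng.integers(0, 5, size=(8, 6)).astype(float)  # 8 sessions x 6 pages
S, P, Z = N.shape[0], N.shape[1], 3                # 3 latent factors

Pz  = np.full(Z, 1.0 / Z)                          # P(z)
Psz = rng.random((S, Z)); Psz /= Psz.sum(axis=0)   # P(session | z)
Ppz = rng.random((P, Z)); Ppz /= Ppz.sum(axis=0)   # P(page | z)

for _ in range(50):
    # E-step: responsibility of each factor for each (session, page) pair.
    joint = Pz[None, None, :] * Psz[:, None, :] * Ppz[None, :, :]  # S x P x Z
    post = joint / joint.sum(axis=2, keepdims=True)
    # M-step: re-estimate the multinomials from expected counts.
    weighted = N[:, :, None] * post
    Psz = weighted.sum(axis=1); Psz /= Psz.sum(axis=0)
    Ppz = weighted.sum(axis=0); Ppz /= Ppz.sum(axis=0)
    Pz  = weighted.sum(axis=(0, 1)); Pz /= Pz.sum()

# Cluster sessions by their dominant latent factor (a crude user profile);
# the columns of Ppz are the weighted page vectors per factor.
print(np.argmax(Psz * Pz, axis=1))
```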

Relevance: 40.00%

Abstract:

Genome sequences from many organisms, including humans, have been completed, and high-throughput analyses have produced burgeoning volumes of 'omics' data. Bioinformatics is crucial for the management and analysis of such data and is increasingly used to accelerate progress in a wide variety of large-scale and object-specific functional analyses. Refined algorithms enable biotechnologists to follow 'computer-aided strategies' based on experiments driven by high-confidence predictions. In order to address compound problems, current efforts in immuno-informatics and reverse vaccinology are aimed at developing and tuning integrative approaches and user-friendly, automated bioinformatics environments. This will herald a move to 'computer-aided biotechnology': smart projects in which time-consuming and expensive large-scale experimental approaches are progressively replaced by prediction-driven investigations.

Relevance: 40.00%

Abstract:

This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution toward a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures for investigatory purposes, particularly those adopted in corpus linguistics. One important methodological requirement was the establishment of two comparable and statistically adequate corpora, as well as the design of software and the use of existing packages to carry out the basic analysis. Each corpus comprises ca 250,000 words of newspaper material sampled in accordance with a specific set of criteria and assembled in machine-readable form prior to the computer-assisted analysis. A suite of programmes was written in SPITBOL to accomplish a variety of analytical tasks, and in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and some word lists were produced using OCP. The results confirm the existence of fundamental differences in text organisation between Arabic and English. This manifests itself in the way textual operations of grouping and sequencing are performed, and in the intensity of the textual role of connectives in imposing linearity and continuity and in maintaining overall stability. Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favourable choices for text organisation are made and implemented.
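
The core measurement, counting connectives and normalising by running words, is simple to illustrate. The word list and tokeniser below are invented placeholders; the thesis itself used purpose-written SPITBOL programs and OCP concordances.

```python
# Illustrative recount of connective density (hypothetical word list; the
# thesis used SPITBOL programs and OCP concordances, not this sketch).
import re
from collections import Counter

CONNECTIVES = {"however", "therefore", "moreover", "thus", "and", "but"}

def connective_profile(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in CONNECTIVES)
    density = sum(counts.values()) / max(len(tokens), 1)  # per running word
    return counts, density

counts, density = connective_profile("Thus the results differ; however, "
                                     "the overall tendency is stable.")
print(counts, f"{density:.3f}")
```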

Relevance: 40.00%

Abstract:

We investigated which evoked response component occurring in the first 800 ms after stimulus presentation was most suitable for use in a classical P300-based brain-computer interface speller protocol. Data were acquired from 275 magnetoencephalographic (MEG) sensors in two subjects and from 61 electroencephalographic (EEG) sensors in four. To better characterize the evoked physiological responses and minimize the effect of response overlap, a 1000 ms inter-stimulus interval was preferred to the short (
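
To make the epoching and averaging behind such evoked-response analyses concrete, here is a small synthetic sketch; the sampling rate, the data, and the onset schedule are assumptions and are unrelated to the study's actual recordings.

```python
# Sketch of epoch averaging used to expose evoked components such as the
# P300 (synthetic data; real recordings came from MEG/EEG sensors).
import numpy as np

fs = 250                                   # sampling rate (Hz), assumed
epoch = int(0.8 * fs)                      # first 800 ms after each stimulus
rng = np.random.default_rng(1)

def average_epochs(signal, onsets):
    trials = np.stack([signal[o:o + epoch] for o in onsets])
    return trials.mean(axis=0)             # noise averages out, ERP remains

signal = rng.normal(0, 1, fs * 60)         # one minute of noisy "EEG"
onsets = np.arange(0, fs * 60 - epoch, fs)  # 1000 ms inter-stimulus interval
erp = average_epochs(signal, onsets)
print(erp.shape)                           # (200,) samples = 800 ms at 250 Hz
```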

Relevance: 40.00%

Abstract:

Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Exploratory research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety-critical software. It was found that typical categorization approaches produce predictions that are too similar for combining them to be effective, since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented, which suggest that combining text categorizers can be successful, provided the essential element of ‘difference’ is considered.
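
A toy voting combiner makes the failure mode visible: when two of three categorizers make the same mistake, the vote inherits it. The labels and the coincident-failure measure below are illustrative assumptions, not the paper's probabilistic model.

```python
# Majority-vote combiner plus a coincident-failure check (toy labels; the
# paper's probabilistic model of 'difference' is not reproduced here).
import numpy as np

def majority_vote(preds):
    # preds: (n_classifiers, n_docs) array of integer class labels
    return np.array([np.bincount(col).argmax() for col in preds.T])

def all_fail_rate(preds, truth):
    wrong = preds != truth             # (n_classifiers, n_docs) error mask
    return wrong.all(axis=0).mean()    # fraction of docs where every one fails

truth = np.array([0, 1, 1, 0, 1])
preds = np.array([[0, 1, 0, 0, 1],
                  [0, 1, 0, 0, 1],     # too similar: same mistake as above
                  [0, 0, 1, 0, 1]])
print(majority_vote(preds))            # wrong on doc 2: shared error wins
print(all_fail_rate(preds, truth))
```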

Relevance: 40.00%

Abstract:

The paper presents basic notions and scientific achievements in the field of program transformations, and describes the use of these achievements both in professional practice (when developing optimizing and parallelizing compilers) and in higher education. It also analyzes the main problems in this area. The concept of managing program-transformation information is introduced in the form of a specialized knowledge bank on computer program transformations, intended to support scientific research, education, and professional activity in the field. The tasks solved by the knowledge bank are formulated. The paper is intended for experts in artificial intelligence and optimizing compilation, as well as postgraduates and senior students of the corresponding specialties; it may also be of interest to university lecturers and instructors.

Relevance: 40.00%

Abstract:

An agent-based approach to building an intelligent intrusion detection system is proposed. The system detects known types of attacks as well as anomalies in user activity and computer system behavior. The system includes different types of intelligent agents, the most important being the user agent, which is based on a neural network model of user behavior. The proposed approach is verified by experiments in the real intranet of the Institute of Physics and Technologies of the National Technical University of Ukraine "Kiev Polytechnic Institute".
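
As a simplified stand-in for the neural-network user agent, the sketch below profiles per-session behaviour statistically and raises an alarm on large deviations; the feature set, the threshold, and the class are invented, and a real implementation would use the trained user-behaviour network described in the abstract.

```python
# Simplified stand-in for the user-behaviour agent (statistical profile
# instead of the paper's neural network; feature names are hypothetical).
import numpy as np

class UserAgent:
    def fit(self, sessions):
        # sessions: (n, d) matrix of per-session features, e.g. login hour,
        # commands per minute, bytes transferred (all hypothetical).
        self.mu = sessions.mean(axis=0)
        self.sigma = sessions.std(axis=0) + 1e-9

    def anomaly_score(self, session):
        # Largest per-feature z-score against the learned user profile.
        return np.abs((session - self.mu) / self.sigma).max()

agent = UserAgent()
agent.fit(np.random.default_rng(2).normal([9, 20, 5], 1.0, size=(200, 3)))
print(agent.anomaly_score(np.array([3, 90, 50])) > 3.0)  # True: raise alarm
```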

Relevance: 40.00%

Abstract:

Computer simulators of real-world processes are often computationally expensive and require many inputs. The problem of computational expense can be handled using emulation technology; however, highly multidimensional input spaces may require more simulator runs to train and validate the emulator. We aim to reduce the dimensionality of the problem by screening the simulator's inputs for nonlinear effects on the output, rather than distinguishing between negligible and active effects. Our proposed method is built upon the elementary effects (EE) method for screening and uses a threshold value to separate the inputs with linear and nonlinear effects. The technique is simple to implement and acts in a sequential way to keep the number of simulator runs to a minimum while identifying the inputs that have nonlinear effects. The algorithm is applied to a set of simulated examples and a rabies disease simulator, where we observe run savings ranging between 28% and 63% compared with the batch EE method. Supplementary materials for this article are available online.
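
The idea that a constant elementary effect signals a linear (or inert) input can be demonstrated in a few lines. The toy simulator, step size, replication count, and threshold below are arbitrary choices, and the batch loop ignores the paper's sequential, run-saving design.

```python
# Sketch of nonlinearity screening via elementary effects (illustrative;
# the paper's sequential design is replaced by a simple batch here).
import numpy as np

def elementary_effects(f, d, r=20, delta=0.1, seed=3):
    rng = np.random.default_rng(seed)
    ee = np.empty((r, d))
    for k in range(r):
        x = rng.uniform(0, 1 - delta, size=d)    # random base point
        for i in range(d):
            step = np.zeros(d); step[i] = delta
            ee[k, i] = (f(x + step) - f(x)) / delta
    return ee

def f(x):  # toy simulator: x0 linear, x1 nonlinear, x2 inert
    return 2.0 * x[0] + np.sin(6 * x[1])

ee = elementary_effects(f, d=3)
nonlinear = ee.std(axis=0) > 0.05  # constant EE across points => linear
print(nonlinear)                   # only x1 is flagged as nonlinear
```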

Relevance: 40.00%

Abstract:

In this paper, we propose a new edge-based matching kernel for graphs using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation, and each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph whose state space is the vertex set of the line graph, i.e., the edges of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (derived from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at the vertex, i.e., we compute the depth-based representations for the edges of the original graphs through their directed line graphs. Based on the new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is then computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
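
The directed line graph construction itself is easy to reproduce, for instance with networkx, as below; the quantum walk, entropic signatures, and depth-based alignment from the abstract are not implemented here, and the example graph is arbitrary.

```python
# Directed line graph construction only; the quantum walk, entropic
# signatures, and matching step from the paper are not implemented here.
import networkx as nx

G = nx.cycle_graph(4)                  # toy undirected graph with 4 edges
DL = nx.line_graph(G.to_directed())    # each edge becomes two opposite arcs

print(DL.number_of_nodes())            # 8: one line-graph vertex per arc of G
print(sorted(DL.successors((0, 1))))   # [(1, 0), (1, 2)]: arcs leaving vertex 1
```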