969 results for Common Scrambling Algorithm Stream Cipher
Abstract:
Combinatorial chemistry has become an invaluable tool in medicinal chemistry for the identification of new drug leads. For example, libraries of predetermined sequences and head-to-tail cyclized peptides are routinely synthesized in our laboratory using the IRORI approach. Such libraries are used as molecular toolkits that enable the development of pharmacophores that define activity and specificity at receptor targets. These libraries can be quite large and difficult to handle, due to the physical and chemical constraints imposed by their size. Therefore, smaller sub-libraries are often targeted for synthesis. The number of coupling reactions required can be greatly reduced if peptides sharing common amino acids are grouped into the same sub-library (batching). This paper describes a schedule optimizer that minimizes the number of coupling reactions by rotating and aligning sequences while simultaneously batching, using a gradient descent method. We show that the algorithm yields a 75% reduction in the number of coupling reactions for a typical cyclic peptide library.
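As an illustration of the rotation-and-alignment idea behind the batching (a minimal sketch, not the authors' IRORI scheduler; the three-peptide toy library and the greedy search below are hypothetical):

```python
def rotations(seq):
    """All cyclic rotations of a peptide sequence (tuple of residues)."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]

def couplings_needed(batch):
    """Coupling reactions for a batch: one per distinct residue per position."""
    return sum(len(set(col)) for col in zip(*batch))

def greedy_align(peptides):
    """Greedily pick a rotation for each cyclic peptide so that residues shared
    across sequences line up, reducing couplings within a single batch.
    (Illustrative only; the paper's optimizer also forms sub-libraries and
    uses a gradient-descent-style search.)"""
    batch = [peptides[0]]
    for pep in peptides[1:]:
        best = min(rotations(pep), key=lambda r: couplings_needed(batch + [r]))
        batch.append(best)
    return batch

library = [tuple("AGFLV"), tuple("GFLVA"), tuple("AGWLV")]
aligned = greedy_align(library)
print(couplings_needed(library), "->", couplings_needed(aligned))
```

Counting one coupling per distinct residue per position, aligning the rotations cuts the count from 11 to 6 for this toy library.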
Abstract:
In many online applications, we need to maintain quantile statistics over a sliding window on a data stream. In its natural form, a sliding window is defined as the most recent N data items. In this paper, we study the problem of estimating quantiles over other types of sliding windows. We present a uniform framework to process quantile queries for time-constrained and filter-based sliding windows. Our algorithm makes one pass over the data stream and maintains an ε-approximate summary. It uses O((1/ε²) log²(εN)) space, where N is the number of data items in the window. We extend this framework to process generalized constrained sliding-window queries and prove that our technique is applicable to flexible window settings. Our performance study indicates that the space required in practice is much less than the theoretical bound and that the algorithm supports high-speed data streams.
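A naive exact baseline clarifies the semantics of a time-constrained window query. Assuming a simple (timestamp, value) stream, the sketch below stores every item in the window, whereas the paper's one-pass summary answers ε-approximate queries in roughly O((1/ε²) log²(εN)) space:

```python
import bisect
from collections import deque

class TimeWindowQuantile:
    """Exact quantiles over a time-constrained sliding window (naive baseline)."""

    def __init__(self, horizon):
        self.horizon = horizon          # window length in time units
        self.items = deque()            # (timestamp, value), in arrival order
        self.sorted_vals = []           # same values, kept sorted

    def insert(self, timestamp, value):
        self.items.append((timestamp, value))
        bisect.insort(self.sorted_vals, value)
        # expire items older than the time horizon
        while self.items and self.items[0][0] <= timestamp - self.horizon:
            _, old = self.items.popleft()
            self.sorted_vals.remove(old)

    def quantile(self, phi):
        """Return the phi-quantile (0 < phi <= 1) of the current window."""
        rank = max(0, int(phi * len(self.sorted_vals)) - 1)
        return self.sorted_vals[rank]

w = TimeWindowQuantile(horizon=10)
for t, v in enumerate([5, 1, 9, 3, 7, 2, 8, 4, 6, 0, 11, 13]):
    w.insert(t, v)
print(w.quantile(0.5))
```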
Abstract:
Online multimedia data needs to be encrypted for access control. To run on mobile devices such as Pocket PCs and mobile phones, lightweight video encryption algorithms are required. The two major problems with existing algorithms are that they are either not fast enough or unable to work on highly compressed data streams. In this paper, we propose a new lightweight encryption algorithm based on Huffman error diffusion. It is a selective algorithm that works on compressed data. By carefully choosing the most significant parts (MSP), high performance is achieved with adequate security. Experimental results show the algorithm to be fast, secure, and compression-compatible.
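To illustrate the selective principle only (not the Huffman-error-diffusion scheme itself), the sketch below encrypts chosen byte ranges of a compressed stream and leaves the rest untouched; the XOR keystream is a toy placeholder, not a secure cipher, and the MSP ranges are hypothetical:

```python
import hashlib

def keystream(key, nonce, length):
    """Toy keystream from SHA-256 in counter mode (placeholder only, NOT a
    secure cipher; a real system would use a vetted stream cipher)."""
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selective_encrypt(compressed, msp_ranges, key, nonce):
    """Encrypt only the 'most significant parts' (byte ranges such as headers
    or DC coefficients) and leave the rest of the compressed stream untouched,
    so the bitstream stays decodable and compression-compatible."""
    data = bytearray(compressed)
    for start, end in msp_ranges:
        ks = keystream(key, nonce + start.to_bytes(4, "big"), end - start)
        for i, k in zip(range(start, end), ks):
            data[i] ^= k
    return bytes(data)

blob = bytes(range(64))                       # stand-in for a compressed frame
enc = selective_encrypt(blob, [(0, 8), (32, 40)], b"key", b"nonce")
dec = selective_encrypt(enc, [(0, 8), (32, 40)], b"key", b"nonce")
assert dec == blob                            # XOR encryption is its own inverse
```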
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, both in distribution and in magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One approach, model-based geostatistics, places a Gaussian process prior over the (latent) process being studied and incorporates the sensor model in the likelihood term. One problem with this type of approach is that the corresponding posterior distribution is non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected-process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
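A much-simplified numpy sketch of kriging with sensor-specific Gaussian noise variances hints at why heterogeneous sensors matter; the paper goes further, handling arbitrary (non-Gaussian) likelihoods with sequential approximate Bayesian inference in a projected-process framework. The kernel, data, and noise levels below are invented for illustration:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# observations from two sensor types with different Gaussian noise levels;
# the paper's framework would instead plug in each sensor's own likelihood
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.sin(x) + np.array([0.01, 0.01, 0.01, 0.3, -0.3, 0.2])
noise_var = np.array([0.01, 0.01, 0.01, 0.25, 0.25, 0.25])   # per-sensor variances

x_star = np.linspace(0, 3, 7)
K = rbf(x, x) + np.diag(noise_var)         # heteroscedastic noise on the diagonal
K_s = rbf(x_star, x)
mean = K_s @ np.linalg.solve(K, y)          # GP predictive mean
cov = rbf(x_star, x_star) - K_s @ np.linalg.solve(K, K_s.T)
print(np.round(mean, 2))
print(np.round(np.sqrt(np.maximum(np.diag(cov), 0)), 2))
```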
Abstract:
Intermittent photic stimulation (IPS) is a common procedure performed in the electroencephalography (EEG) laboratory in children and adults to detect abnormal epileptogenic sensitivity to flickering light (i.e., photosensitivity). In practice, substantial variability in outcome is anecdotally found due to the many different methods used per laboratory and country. We believe that standardization of the procedure, based on scientific and clinical data, should permit reproducible identification and quantification of photosensitivity. We hope that the use of our new algorithm will help standardize the IPS procedure, which in turn may help to identify and monitor patients with epilepsy and photosensitivity more clearly. Our algorithm goes far beyond that published in 1999 (Epilepsia, 1999a, 40, 75; Neurophysiol Clin, 1999b, 29, 318): it has substantially increased content, detailing technical and logistical aspects of IPS testing and the rationale for many of the steps in the IPS procedure. Furthermore, our latest algorithm incorporates the consensus of repeated scientific meetings of European experts in this field over a period of 6 years, with feedback from general neurologists and epileptologists to improve its validity and utility. Accordingly, our European group has provided herein updated algorithms for two different levels of methodology: (1) requirements for defining photosensitivity in patients and in family members of known photosensitive patients, and (2) requirements for tailored studies in patients with a clear history of visually induced seizures or complaints, and in those already known to be photosensitive.
Abstract:
Gastroesophageal reflux disease (GERD) is a common cause of chronic cough. For the diagnosis and treatment of GERD, it is desirable to quantify the temporal correlation between cough and reflux events. Cough episodes can be identified on esophageal manometric recordings as short-duration, rapid pressure rises. The present study aims at facilitating the detection of coughs by proposing an algorithm for the classification of cough events using manometric recordings. The algorithm detects cough episodes based on digital filtering, slope and amplitude analysis, and duration of the event. The algorithm has been tested on in vivo data acquired using a single-channel intra-esophageal manometric probe that comprises a miniature white-light interferometric fiber optic pressure sensor. Experimental results demonstrate the feasibility of using the proposed algorithm for identifying cough episodes based on real-time recordings using a single channel pressure catheter. The presented work can be integrated with commercial reflux pH/impedance probes to facilitate simultaneous 24-hour ambulatory monitoring of cough and reflux events, with the ultimate goal of quantifying the temporal correlation between the two types of events.
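A hedged sketch of the filtering, slope/amplitude, and duration criteria described above; the thresholds, filter settings, and simulated trace are illustrative placeholders, not the published parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_cough_events(pressure, fs, amp_thresh=50.0, slope_thresh=200.0,
                        min_dur=0.2, max_dur=1.0):
    """Flag candidate cough episodes in an esophageal pressure trace (mmHg)."""
    # low-pass filter to suppress high-frequency sensor noise
    b, a = butter(2, 15.0 / (fs / 2.0), btype="low")
    p = filtfilt(b, a, pressure)
    slope = np.gradient(p) * fs            # pressure change in mmHg per second
    above = p > amp_thresh                 # amplitude criterion
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            dur = (i - start) / fs
            # keep only short, steep pressure rises (cough-like events)
            if min_dur <= dur <= max_dur and np.abs(slope[start:i]).max() > slope_thresh:
                events.append((round(start / fs, 2), round(i / fs, 2)))
            start = None
    return events

fs = 100.0
t = np.arange(0, 10, 1 / fs)
trace = 5 * np.sin(0.5 * t)                # slow baseline variation
trace[300:340] += 80.0                     # simulated cough pressure spike
print(detect_cough_events(trace, fs))
```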
Abstract:
Reading scientific articles is more time-consuming than reading news because readers need to search for and read many citations. This paper proposes a citation-guided method for summarizing multiple scientific papers. We observe that citation sentences in one paragraph or section usually discuss a common fact, typically represented as a set of noun phrases co-occurring in the citation texts, and that this fact is discussed from different aspects. We design a multi-document summarization system based on common fact detection. One challenge is that citations may not use the same terms to refer to a common fact. We therefore use a term-association discovery algorithm to expand terms based on a large set of scientific article abstracts. Citations can then be clustered based on common facts, and each common fact is used as a salient term set to retrieve relevant sentences from the corresponding cited articles to form a summary. Experiments show that our method outperforms three baseline methods on the ROUGE metric.
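A simplified sketch of the pipeline, substituting TF-IDF plus k-means for the paper's term-association expansion over a large abstract corpus; the citation sentences and cited-article sentences below are invented examples:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

citations = [
    "Smith et al. introduced a graph-based ranking model for sentence salience.",
    "The graph ranking model of Smith et al. scores sentences by centrality.",
    "Jones et al. trained a neural sequence model for abstractive summaries.",
    "A neural sequence-to-sequence model was proposed by Jones et al.",
]
cited_sentences = [
    "We propose a graph-based ranking model that scores sentence centrality.",
    "Our neural sequence-to-sequence model generates abstractive summaries.",
    "The dataset contains 10,000 documents collected from news sources.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(citations)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

terms = np.array(vec.get_feature_names_out())
summary = []
for c in set(labels):
    # salient terms of the "common fact" shared by this citation cluster
    idx = np.where(labels == c)[0]
    centroid = np.asarray(X[idx].mean(axis=0)).ravel()
    salient = set(terms[centroid.argsort()[-5:]])
    # pull the cited-article sentence that overlaps most with those terms
    best = max(cited_sentences,
               key=lambda s: len(salient & set(s.lower().split())))
    summary.append(best)
print(summary)
```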
Abstract:
An approach is proposed for inferring implicative logical rules from examples. The concept of a good diagnostic test for a given set of positive examples forms the basis of this approach. The process of inferring good diagnostic tests is treated as a process of inductive common-sense reasoning. An incremental learning approach is implemented in the algorithm DIAGaRa for inferring implicative rules from examples.
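A minimal sketch of the underlying notion, under the simplifying assumption that a "good diagnostic test" is an attribute set covering many positive examples and no negative ones; this is not the incremental DIAGaRa algorithm, and the toy examples are hypothetical:

```python
from itertools import combinations

positives = [
    {"wings", "feathers", "beak"},
    {"wings", "feathers", "small"},
    {"wings", "feathers", "beak", "small"},
]
negatives = [
    {"wings", "fur", "small"},
    {"fur", "snout"},
]

def good_tests(positives, negatives, max_size=2):
    """Attribute sets occurring in some positives and in no negative example;
    each yields an implicative rule 'attributes -> positive class'."""
    attrs = sorted(set().union(*positives))
    tests = []
    for r in range(1, max_size + 1):
        for combo in combinations(attrs, r):
            s = set(combo)
            cover = sum(s <= p for p in positives)
            if cover and not any(s <= n for n in negatives):
                tests.append((combo, cover))
    # prefer tests covering the most positives (the 'good' ones)
    return sorted(tests, key=lambda t: -t[1])

for attrs, cover in good_tests(positives, negatives):
    print(f"IF {' & '.join(attrs)} THEN positive   (covers {cover} positives)")
```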
Abstract:
The problem of finding the optimal join ordering for executing a query against a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive search unacceptable for queries with a great number of joined relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments have been performed optimizing several random queries against a randomly generated data dictionary. The proposed adaptive genetic algorithm with a probabilistic selection operator outperforms, in a number of test runs, the canonical genetic algorithm with elitist selection as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
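A bare-bones genetic algorithm over join-order permutations with roulette (probabilistic) selection gives the flavour; the cost model, selectivity constant, and parameters are invented, and the adaptive, dynamic population sizing of the proposed algorithm is not modelled:

```python
import random

random.seed(0)
N = 8                                                   # number of joined relations
card = [random.randint(10, 1000) for _ in range(N)]     # toy relation cardinalities

def cost(order):
    """Toy left-deep join cost: accumulate intermediate result sizes."""
    size, total = card[order[0]], 0
    for r in order[1:]:
        size = max(1, size * card[r] // 500)            # crude 1/500 selectivity
        total += size
    return total

def roulette(population, fitness):
    """Probabilistic selection: pick proportionally to fitness (inverse cost)."""
    return random.choices(population, weights=fitness, k=1)[0]

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + [r for r in b if r not in a[:cut]]  # keep a valid permutation

def mutate(order, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(100):
    fit = [1.0 / (1 + cost(p)) for p in pop]
    pop = [mutate(crossover(roulette(pop, fit), roulette(pop, fit)))
           for _ in range(len(pop))]
best = min(pop, key=cost)
print(best, cost(best))
```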
An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement
Abstract:
Unwanted spike noise in a digital signal is a common problem in digital filtering. Sometimes, however, the spikes are wanted and other, superimposed, signals are unwanted. Linear, time-invariant (LTI) filtering is then ineffective because the spikes are wideband, overlapping with independent noise in the frequency domain, so no LTI filter can separate them, and nonlinear filtering is necessary. There are, however, applications in which the noise includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter, formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise while enhancing superimposed spikes. Making use of known analytic solutions, a novel, approximate path-following algorithm is given that provides a good filtered output with reduced computational effort compared to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.
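For the pure sparsity-plus-ridge part of the problem (dropping the band-limiting operator described in the paper), the elastic net denoiser has a closed-form soft-thresholding solution, which is what makes sweeping a regularization path cheap. A sketch with synthetic spikes, drift, and noise:

```python
import numpy as np

def elastic_net_denoise(y, lam1, lam2):
    """Closed-form minimiser of 0.5*||y - x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    (identity design): soft-thresholding followed by ridge shrinkage."""
    return np.sign(y) * np.maximum(np.abs(y) - lam1, 0.0) / (1.0 + lam2)

rng = np.random.default_rng(0)
n = 2000
spikes = np.zeros(n)
spikes[rng.choice(n, 20, replace=False)] = rng.uniform(5.0, 10.0, 20)
drift = np.sin(np.linspace(0, 6 * np.pi, n))          # smooth unwanted signal
y = spikes + drift + 0.3 * rng.standard_normal(n)     # observed recording

# sweep a decreasing lam1 path; each point costs only O(n) thanks to the
# analytic solution, which is the source of the speed-up over a generic solver
for lam1 in np.linspace(3.0, 0.5, 6):
    x = elastic_net_denoise(y, lam1, lam2=0.1)
    print(f"lam1={lam1:.2f}: {np.count_nonzero(x):4d} nonzero samples retained")
```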
Abstract:
In 1917, Pell (1) and Gordon used sylvester2, Sylvester's little known and hardly ever used matrix of 1853, to compute (2) the coefficients of a Sturmian remainder, obtained by applying Sturm's algorithm in Q[x] to two polynomials f, g ∈ Z[x] of degree n, in terms of the determinants (3) of the corresponding submatrices of sylvester2. Thus, they solved a problem that had eluded both J. J. Sylvester, in 1853, and E. B. Van Vleck, in 1900. (4) In this paper we extend the work by Pell and Gordon and show how to compute (2) the coefficients of a Euclidean remainder, obtained in computing in Q[x] the greatest common divisor of f, g ∈ Z[x] of degree n, in terms of the determinants (5) of the corresponding submatrices of sylvester1, Sylvester's widely known and used matrix of 1840. (1) See http://en.wikipedia.org/wiki/Anna_Johnson_Pell_Wheeler for her biography. (2) Both for complete and incomplete sequences, as defined in the sequel. (3) Also known as modified subresultants. (4) Using determinants, Sylvester and Van Vleck were able to compute the coefficients of Sturmian remainders only for the case of complete sequences. (5) Also known as (proper) subresultants.
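A small sympy sketch constructs Sylvester's 1840 matrix (sylvester1) for an example pair f, g, confirms that its determinant is the resultant, and computes the Euclidean remainder in Q[x]; the paper's determinant formulas for the remainder coefficients themselves are not reproduced here:

```python
from sympy import symbols, Poly, resultant, rem, zeros

x = symbols('x')
f = Poly(x**4 + 2*x**3 - 3*x**2 + x - 1, x)        # example f, g in Z[x]
g = Poly(x**3 - x**2 + 2*x + 5, x)

def sylvester1(f, g):
    """Sylvester's 1840 matrix: deg(g) shifted rows of f's coefficients
    followed by deg(f) shifted rows of g's coefficients."""
    m, n = f.degree(), g.degree()
    M = zeros(m + n, m + n)
    for i in range(n):
        for j, c in enumerate(f.all_coeffs()):
            M[i, i + j] = c
    for i in range(m):
        for j, c in enumerate(g.all_coeffs()):
            M[n + i, i + j] = c
    return M

S = sylvester1(f, g)
print(S.det(), resultant(f.as_expr(), g.as_expr(), x))   # determinant vs resultant
print(rem(f.as_expr(), g.as_expr(), x))                  # Euclidean remainder in Q[x]
```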
Abstract:
Lifelong surveillance is not cost-effective after endovascular aneurysm repair (EVAR), but is required to detect aortic complications which are fatal if untreated (type 1/3 endoleak, sac expansion, device migration). Aneurysm morphology determines the probability of aortic complications and therefore the need for surveillance, but existing analyses have proven incapable of identifying patients at sufficiently low risk to justify abandoning surveillance. This study aimed to improve the prediction of aortic complications through the application of machine-learning techniques. Patients undergoing EVAR at 2 centres were studied from 2004–2010. Aneurysm morphology had previously been studied to derive the SGVI Score for predicting aortic complications. Bayesian Neural Networks were designed using the same data, to dichotomise patients into groups at low or high risk of aortic complications. Network training was performed only on patients treated at centre 1. External validation was performed by assessing network performance, independently of network training, on patients treated at centre 2. Discrimination was assessed by Kaplan-Meier analysis to compare aortic complications in predicted low-risk versus predicted high-risk patients. 761 patients aged 75 ± 7 years underwent EVAR in 2 centres. Mean follow-up was 36 ± 20 months. Neural networks were created incorporating neck angulation/length/diameter/volume; AAA diameter/area/volume/length/tortuosity; and common iliac tortuosity/diameter. A 19-feature network predicted aortic complications with excellent discrimination and external validation (5-year freedom from aortic complications in predicted low-risk vs predicted high-risk patients: 97.9% vs 63%; p < 0.0001). A Bayesian Neural Network algorithm can identify patients in whom it may be safe to abandon surveillance after EVAR. This proposal requires prospective study.
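As a generic, hypothetical sketch of the train-on-one-centre, validate-on-another workflow (synthetic data and a plain sklearn MLP, not the authors' Bayesian Neural Network or patient data):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synth_centre(n):
    """Synthetic stand-in for 19 morphology features (neck angulation, lengths,
    diameters, volumes, tortuosity, ...) and a complication label."""
    X = rng.normal(size=(n, 19))
    risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.8 * X[:, 7])))
    y = (rng.random(n) < risk).astype(int)
    return X, y

X_train, y_train = synth_centre(500)     # "centre 1": used for training only
X_test, y_test = synth_centre(261)       # "centre 2": external validation only

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print("external validation AUC:", round(auc, 3))
```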
Abstract:
The Three-Layer distributed mediation architecture, designed by the Secure System Architecture laboratory, employs a layered framework of presence, integration, and homogenization mediators. The architecture does not have any central component that might affect system reliability. A distributed search technique was adopted in the system to increase its reliability. An Enhanced Chord-like algorithm (E-Chord) was designed and deployed in the integration layer. The E-Chord is a skip-list algorithm based on a Distributed Hash Table (DHT), which is a distributed but structured architecture. A DHT is distributed in the sense that no central unit is required to maintain indexes, and it is structured in the sense that indexes are distributed over the nodes in a systematic manner. Each node maintains three kinds of routing information: a frequency list, a successor/predecessor list, and a finger table. No node in the system maintains all indexes, and each node knows about some other nodes in the system. These nodes, also called composer mediators, are connected in a P2P fashion. A special composer mediator called the global mediator initiates the keyword-based matching decomposition of the request using the E-Chord. It generates an Integrated Data Structure Graph (IDSG) on the fly, creates association and dependency relations between nodes in the IDSG, and then generates a Global IDSG (GIDSG). The GIDSG is a plan that guides the global mediator in integrating the data. It is also used to stream data from the mediators in the homogenization layer, which are connected to the data sources. The connectors start sending data to the global mediator just after the global mediator creates the GIDSG and just before it sends the answer to the presence mediator. Using the E-Chord and the GIDSG makes the mediation system more scalable than using a central global schema repository, since all the composers in the integration layer are capable of handling and routing requests. Also, when a composer fails, it only minimally affects the entire mediation system.
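A minimal Chord-style finger-table lookup conveys how requests can be routed without a central index; the E-Chord's frequency list and keyword-based decomposition are not modelled here, and the identifier-space size and node names are arbitrary:

```python
import hashlib

M = 8                                   # identifier bits; the ring has 2**M slots
RING = 2 ** M

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING

def in_interval(x, a, b):
    """x in (a, b] going clockwise around the ring."""
    return (a < x <= b) if a < b else (x > a or x <= b)

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = None
        self.fingers = []               # finger k ~ successor of id + 2**k

    def find_successor(self, key_id):
        """Chord-style lookup: answer locally if the key lies between this node
        and its successor, otherwise forward via the closest preceding finger."""
        if key_id == self.id:
            return self
        if in_interval(key_id, self.id, self.successor.id):
            return self.successor
        for f in reversed(self.fingers):
            if in_interval(f.id, self.id, key_id):
                return f.find_successor(key_id)
        return self.successor.find_successor(key_id)

def owner_of(target, ring):
    """First node clockwise from target (the node responsible for target)."""
    return next((n for n in ring if n.id >= target), ring[0])

ring = [Node(i) for i in sorted({h(f"mediator{i}") for i in range(8)})]
for i, n in enumerate(ring):
    n.successor = ring[(i + 1) % len(ring)]
    n.fingers = [owner_of((n.id + 2 ** k) % RING, ring) for k in range(M)]

key = h("customer.orders")              # a keyword hashed onto the ring
print("key", key, "is served by node", ring[0].find_successor(key).id)
```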
Abstract:
The purpose of this research is to investigate emerging data security methodologies that will work with the most suitable applications in academic, industrial, and commercial environments. Of the several methodologies considered for the Advanced Encryption Standard (AES), MARS, a block cipher developed by IBM, was selected. Its design takes advantage of the powerful capabilities of modern computers to allow a much higher level of performance than can be obtained from less optimized algorithms such as the Data Encryption Standard (DES). MARS is unique in combining virtually every design technique known to cryptographers in one algorithm. The thesis presents the performance of a flexible 128-bit cipher, which is a scaled-down version of the MARS algorithm. The cryptosystem showed performance in speed, flexibility, and security comparable to that of the original algorithm. The algorithm is considered to be very secure and robust and is expected to be implemented for most of these applications.
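MARS itself is not available in common Python cryptographic libraries, so the hedged sketch below benchmarks AES (the cipher ultimately selected as the standard) with the cryptography package, merely to illustrate the kind of speed comparison reported in the thesis:

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def throughput_mb_s(key_bits=128, megabytes=32):
    """Encrypt a random buffer with AES-<key_bits>-CTR and report MB/s.
    (AES is a stand-in here; MARS is not provided by common libraries.)"""
    key = os.urandom(key_bits // 8)
    nonce = os.urandom(16)
    data = os.urandom(megabytes * 1024 * 1024)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    start = time.perf_counter()
    enc.update(data)
    enc.finalize()
    return megabytes / (time.perf_counter() - start)

for bits in (128, 192, 256):
    print(f"AES-{bits}-CTR: {throughput_mb_s(bits):.0f} MB/s")
```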