195 results for False confession


Relevance: 10.00%

Publisher:

Abstract:

We present a text watermarking scheme that embeds a bitstream watermark Wi in a text document P while preserving the meaning, context, and flow of the document. The document is viewed as a set of paragraphs, each paragraph being a set of sentences. The sequence of paragraphs and sentences used to embed watermark bits is permuted using a secret key. English-language sentence transformations are then used to modify sentence lengths, thus embedding watermark bits in the Least Significant Bits (LSB) of the sentences' cardinalities. The embedding and extraction algorithms are public, while the secrecy and security of the watermark depend on a secret key K. The probability of false positives is extremely small, which avoids incidental occurrences of our watermark in random text documents. Majority voting provides security against text addition, deletion, and swapping attacks, further reducing the probability of false positives. The scheme is secure against the general attacks on text watermarks such as reproduction (photocopying, fax), reformatting, synonym substitution, text addition, text deletion, text swapping, paragraph shuffling and collusion attacks.
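As a rough illustration of the embedding idea described above (not the authors' implementation), the sketch below treats a sentence's "cardinality" as its word count, selects the carrier sentences with a key-seeded permutation, and reads watermark bits back from the least significant bit of each selected sentence's length. The naive sentence splitting, the use of word count as cardinality, and the function names are assumptions made for the example.

```python
import random
import re

def extract_watermark_bits(text: str, key: int, n_bits: int) -> list[int]:
    """Recover watermark bits from the LSB of sentence lengths (word counts).

    Assumes the embedder has already transformed the selected sentences so
    that each word count's parity equals the intended watermark bit.
    """
    # Naive sentence split; a real system would use a proper NLP tokenizer.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]

    # Key-seeded permutation decides which sentences carry bits, and in what order.
    order = list(range(len(sentences)))
    random.Random(key).shuffle(order)

    bits = []
    for idx in order[:n_bits]:
        word_count = len(sentences[idx].split())
        bits.append(word_count & 1)  # LSB of the sentence "cardinality"
    return bits

# Example: read 8 watermark bits with secret key K = 2024.
sample = "This is one sentence. Here is another sentence with more words in it. " * 8
print(extract_watermark_bits(sample, key=2024, n_bits=8))
```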

Relevance: 10.00%

Publisher:

Abstract:

This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to help the Mahalanobis squared distance–based damage identification method cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as test objects and are shown to require enhancement to consolidate their condition. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme adaptable to any type of input data with any (original) distributional condition.
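For readers unfamiliar with the underlying detector, the sketch below shows the generic Mahalanobis squared distance (MSD) novelty check that the scheme supports: baseline (healthy-state) feature vectors define a mean and covariance, and a test vector is flagged when its MSD exceeds a chi-square threshold. This is a textbook illustration, not the article's controlled Monte Carlo procedure; the threshold choice and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def mahalanobis_sq(baseline: np.ndarray, test: np.ndarray) -> np.ndarray:
    """Mahalanobis squared distance of each test vector from the baseline cloud.

    baseline: (n_samples, n_features) healthy-state data
    test:     (m_samples, n_features) vectors to score
    """
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = test - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(0)
healthy = rng.normal(size=(500, 4))            # baseline features
candidate = rng.normal(loc=0.8, size=(10, 4))  # possibly damaged state

msd = mahalanobis_sq(healthy, candidate)
threshold = stats.chi2.ppf(0.99, df=4)  # 1% false-positive rate under multinormality
print(msd > threshold)                  # True where damage is indicated
```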

Relevance: 10.00%

Publisher:

Abstract:

The quick detection of an abrupt unknown change in the conditional distribution of a dependent stochastic process has numerous applications. In this paper, we pose a minimax robust quickest change detection problem for cases where there is uncertainty about the post-change conditional distribution. Our minimax robust formulation is based on the popular Lorden criterion of optimal quickest change detection. Under a condition on the set of possible post-change distributions, we show that the widely known cumulative sum (CUSUM) rule is asymptotically minimax robust under our Lorden minimax robust formulation as the false alarm constraint becomes stricter. We also establish general asymptotic bounds on the detection delay of misspecified CUSUM rules (i.e. CUSUM rules designed with post-change distributions that differ from those of the observed sequence). We exploit these bounds to compare the delay performance of asymptotically minimax robust, asymptotically optimal, and other misspecified CUSUM rules. In simulation examples, we illustrate that asymptotically minimax robust CUSUM rules can provide better detection delay performance at greatly reduced computation effort compared to competing generalised likelihood ratio procedures.
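To make the CUSUM rule concrete, here is a minimal sketch for detecting a shift in the mean of i.i.d. Gaussian observations: the statistic accumulates log-likelihood ratios, is clipped at zero, and raises an alarm once it crosses a threshold h. The Gaussian model, the assumed pre- and post-change means, and the threshold value are illustrative assumptions, not the paper's robust or misspecified variants.

```python
import numpy as np

def cusum_alarm_time(x, mu0, mu1, sigma, h):
    """Return the first index at which the CUSUM statistic exceeds h (or None)."""
    s = 0.0
    for t, xt in enumerate(x):
        # Log-likelihood ratio of post-change vs pre-change Gaussian models.
        llr = ((xt - mu0) ** 2 - (xt - mu1) ** 2) / (2.0 * sigma ** 2)
        s = max(0.0, s + llr)
        if s >= h:
            return t
    return None

rng = np.random.default_rng(1)
pre = rng.normal(0.0, 1.0, 300)   # pre-change observations
post = rng.normal(1.0, 1.0, 100)  # mean shifts from 0 to 1 at t = 300
data = np.concatenate([pre, post])

print(cusum_alarm_time(data, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0))
```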

Relevance: 10.00%

Publisher:

Abstract:

Web servers are accessible by anyone who can access the Internet. Although this universal accessibility is attractive for all kinds of Web-based applications, Web servers are exposed to attackers who may want to alter their contents. Alterations range from humorous additions or changes, which are typically easy to spot, to more sinister tampering, such as providing false or damaging information.

Relevance: 10.00%

Publisher:

Abstract:

Ever since Cox et al. published their paper, "A Secure, Robust Watermark for Multimedia", in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged after Agrawal and Kiernan published "Watermarking Relational Databases" in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion and distortion. Low false positives and high capacity provide additional strength to the scheme. These claims are backed by the experimental results provided in the paper.

Relevance: 10.00%

Publisher:

Abstract:

Acoustic recordings of the environment are an important aid to ecologists monitoring biodiversity and environmental health. However, rapid advances in recording technology, storage and computing make it possible to accumulate thousands of hours of recordings, of which ecologists can listen to only a small fraction. The big-data challenge addressed in this paper is to visualize the content of long-duration audio recordings on multiple scales, from hours and days to months and years. The visualization should facilitate navigation and yield ecologically meaningful information. Our approach is to extract (at one-minute resolution) acoustic indices which reflect content of ecological interest. An acoustic index is a statistic that summarizes some aspect of the distribution of acoustic energy in a recording. We combine indices to produce false-color images that reveal acoustic content and facilitate navigation through recordings that are months or even years in duration.
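The following sketch illustrates the general idea of mapping three per-minute acoustic indices onto the red, green and blue channels of a false-color image (one pixel column per minute, one row per day). The specific indices, the min-max normalization and the array shapes are assumptions for the example rather than the paper's exact recipe.

```python
import numpy as np

def false_color_strip(index_r, index_g, index_b):
    """Combine three per-minute acoustic index series into an RGB image row.

    Each input is a 1-D array with one value per minute (e.g. 1440 values per day).
    Returns an array of shape (1, n_minutes, 3) with values in [0, 1].
    """
    def normalize(v):
        v = np.asarray(v, dtype=float)
        lo, hi = np.nanmin(v), np.nanmax(v)
        return np.zeros_like(v) if hi == lo else (v - lo) / (hi - lo)

    rgb = np.stack([normalize(index_r), normalize(index_g), normalize(index_b)], axis=-1)
    return rgb[np.newaxis, :, :]  # stack one such row per day to build a long-duration image

# Toy example: 1440 one-minute values for three hypothetical indices.
minutes = np.arange(1440)
acoustic_complexity = np.sin(minutes / 90.0) + 1.0
temporal_entropy = np.random.default_rng(2).random(1440)
event_count = (minutes % 60) / 60.0

image = false_color_strip(acoustic_complexity, temporal_entropy, event_count)
print(image.shape)  # (1, 1440, 3)
```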

Relevance: 10.00%

Publisher:

Abstract:

DNA methylation at promoter CpG islands (CGI) is an epigenetic modification associated with inappropriate gene silencing in multiple tumor types. In the absence of a human pituitary tumor cell line, small interfering RNA-mediated knockdown of the maintenance methyltransferase DNA methyltransferase (cytosine 5)-1 (Dnmt1) was used in the murine pituitary adenoma cell line AtT-20. Sustained knockdown induced reexpression of the fully methylated and normally imprinted gene neuronatin (Nnat) in a time-dependent manner. Combined bisulfite restriction analysis (COBRA) revealed that reexpression of Nnat was associated with partial CGI demethylation, which was also observed at the H19 differentially methylated region. Subsequent genome-wide microarray analysis identified 91 genes that were significantly differentially expressed in Dnmt1 knockdown cells (10% false discovery rate). The analysis showed that genes associated with the induction of apoptosis, signal transduction, and developmental processes were significantly overrepresented in this list (P < 0.05). Following validation by reverse transcription-PCR and detection of inappropriate CGI methylation by COBRA, four genes (ICAM1, NNAT, RUNX1, and S100A10) were analyzed in primary human pituitary tumors, each displaying significantly reduced mRNA levels relative to normal pituitary (P < 0.05). For two of these genes, NNAT and S100A10, decreased expression was associated with increased promoter CGI methylation. Induced expression of Nnat in stable transfected AtT-20 cells inhibited cell proliferation. To our knowledge, this is the first report of array-based "epigenetic unmasking" in combination with Dnmt1 knockdown and reveals the potential of this strategy toward identifying genes silenced by epigenetic mechanisms across species boundaries.
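The differential-expression call at a 10% false discovery rate is typically made with a procedure such as Benjamini-Hochberg; the sketch below shows that generic step on a vector of per-gene p-values. The p-values, gene counts and the choice of Benjamini-Hochberg specifically are assumptions for illustration; the abstract does not state which FDR procedure was used.

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.10):
    """Return a boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k / m) * q; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        significant[order[: k + 1]] = True
    return significant

# Toy example: 1,000 gene-level p-values, a few of them genuinely small.
rng = np.random.default_rng(3)
pvals = np.concatenate([rng.uniform(0, 0.001, 20), rng.uniform(0, 1, 980)])
print(benjamini_hochberg(pvals, q=0.10).sum(), "genes called at 10% FDR")
```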

Relevance: 10.00%

Publisher:

Abstract:

Background: Detection of outbreaks is an important part of disease surveillance. Although many algorithms have been designed for detecting outbreaks, few have been specifically assessed against diseases that have distinct seasonal incidence patterns, such as those caused by vector-borne pathogens. Methods: We applied five previously reported outbreak detection algorithms to Ross River virus (RRV) disease data (1991-2007) for the four local government areas (LGAs) of Brisbane, Emerald, Redland and Townsville in Queensland, Australia. The methods used were the Early Aberration Reporting System (EARS) C1, C2 and C3 methods, the negative binomial CUSUM (NBC), the historical limits method (HLM), the Poisson outbreak detection (POD) method and the purely temporal SaTScan analysis. Seasonally adjusted variants of the NBC and SaTScan methods were developed. Some of the algorithms were applied using a range of parameter values, resulting in 17 variants of the five algorithms. Results: The 9,188 RRV disease notifications that occurred in the four selected regions over the study period showed marked seasonality, which adversely affected the performance of some of the outbreak detection algorithms. Most of the methods examined were able to detect the same major events; the exception was the seasonally adjusted NBC methods, which detected an excess of short signals. The NBC, POD and temporal SaTScan algorithms were the only methods that consistently had high true positive rates and low false positive and false negative rates across the four study areas. The timeliness of outbreak signals generated by each method was also compared, but there was no consistency across outbreaks and LGAs. Conclusions: This study has highlighted several issues associated with applying outbreak detection algorithms to seasonal disease data. In lieu of a true gold standard, a quantitative comparison is difficult and caution should be taken when interpreting true positives, false positives, sensitivity and specificity.
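As an illustration of the kind of syndromic-surveillance statistic referenced above, here is a rough EARS-C2-style check: each day's count is compared against the mean and standard deviation of a 7-day baseline separated from the current day by a 2-day guard band, and a signal is raised when the standardized excess exceeds 3. The baseline length, guard band and threshold are my assumptions about a common parameterization, not necessarily the ones used in the study.

```python
import numpy as np

def ears_c2_signals(counts, baseline_len=7, guard=2, threshold=3.0):
    """Flag days whose count exceeds baseline mean + threshold * baseline SD."""
    counts = np.asarray(counts, dtype=float)
    flags = np.zeros(counts.size, dtype=bool)
    for t in range(baseline_len + guard, counts.size):
        baseline = counts[t - guard - baseline_len : t - guard]
        mu, sd = baseline.mean(), baseline.std(ddof=1)
        sd = max(sd, 1e-9)  # avoid division by zero on flat baselines
        flags[t] = (counts[t] - mu) / sd > threshold
    return flags

# Toy daily notification counts with a spike near the end.
daily = np.array([2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 2, 3, 2, 12, 15, 3])
print(np.nonzero(ears_c2_signals(daily))[0])  # indices of signalled days
```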

Relevance: 10.00%

Publisher:

Abstract:

Background: Rapid diagnostic tests (RDTs) for detection of Plasmodium falciparum infection that target P. falciparum histidine-rich protein 2 (PfHRP2), a protein that circulates in the blood of patients infected with this species of malaria, are widely used to guide case management. Understanding the determinants of PfHRP2 availability in circulation is therefore essential to understanding the performance of PfHRP2-detecting RDTs. Methods: The possibility that pre-formed host anti-PfHRP2 antibodies may block detection of the target antigen, thereby causing false negative test results, was investigated in this study. Results: Anti-PfHRP2 antibodies were detected in 19/75 (25%) of plasma samples collected from patients with acute malaria from Cambodia, Nigeria and the Philippines, as well as in 3/28 (10.7%) asymptomatic Solomon Islands residents. Pre-incubation of plasma samples from subjects with high-titre anti-PfHRP2 antibodies with soluble PfHRP2 blocked detection of the target antigen on two of the three brands of RDTs tested, leading to false negative results. Pre-incubation of the plasma with intact parasitized erythrocytes resulted in a reduction of band intensity at the highest parasite density, and a ten-fold reduction of the lower detection threshold on all three brands of RDTs tested. Conclusions: These observations indicate possibly reduced sensitivity of PfHRP2-detecting RDTs for diagnosis of P. falciparum malaria among people with high levels of specific antibodies and low-density infection, as well as possible interference with tests configured to detect soluble PfHRP2 in saliva or urine samples. Further investigations are required to assess the impact of pre-formed anti-PfHRP2 antibodies on RDT performance in different transmission settings.

Relevance: 10.00%

Publisher:

Abstract:

Aim: The aim of this survey was to assess registered nurses' perceptions of alarm setting and management in an Australian regional critical care unit. Background: The setting and management of alarms within the critical care environment is one of the key responsibilities of the nurse in this area. However, with up to 99% of alarms potentially being false positives, it is easy for the nurse to become desensitised or fatigued by incessant alarms, in some cases up to 400 per patient per day. Inadvertently ignoring, silencing or disabling alarms can have deleterious implications for the patient and the nurse. Method: A total population sample of 48 nursing staff from a 13-bed ICU/HDU/CCU in regional Australia was asked to participate. A 10-item open-ended and multiple-choice questionnaire was distributed to determine their perceptions of, and attitudes towards, alarm setting and management within this clinical area. Results: Two key themes were identified from the open-ended questions: attitudes towards inappropriate alarm settings and annoyance at delayed responses to alarms. A significant proportion of respondents (93%) agreed that alarm fatigue can result in alarm desensitisation and the disabling of alarms, while 81% suggested the key factors are those associated with false-positive alarms and inappropriately set alarms.

Relevance: 10.00%

Publisher:

Abstract:

The identification of cognates between two distinct languages has recently started to attract the attention of NLP research, but there has been little research into using semantic evidence to detect cognates. The approach presented in this paper aims to detect English-French cognates within monolingual texts (texts that are not accompanied by aligned translated equivalents), by integrating word shape similarity approaches with word sense disambiguation techniques in order to account for context. Our implementation is based on BabelNet, a semantic network that incorporates a multilingual encyclopedic dictionary. Our approach is evaluated on two manually annotated datasets. The first one shows that across different types of natural text, our method can identify the cognates with an overall accuracy of 80%. The second one, consisting of control sentences with semi-cognates acting as either true cognates or false friends, shows that our method can identify 80% of semi-cognates acting as cognates but also identifies 75% of the semi-cognates acting as false friends.
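The "word shape similarity" component mentioned above is commonly instantiated as a normalized edit-distance score between the two word forms; the sketch below shows that generic measure with an arbitrary acceptance threshold. It is an illustrative stand-in, not the paper's actual measure, and it ignores the word sense disambiguation step entirely.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def shape_similarity(english: str, french: str) -> float:
    """1.0 for identical forms, 0.0 for completely different ones."""
    longest = max(len(english), len(french)) or 1
    return 1.0 - edit_distance(english.lower(), french.lower()) / longest

# Candidate cognate pairs; 0.6 is an arbitrary illustrative threshold.
for en, fr in [("table", "table"), ("library", "librairie"), ("dog", "chien")]:
    score = shape_similarity(en, fr)
    print(f"{en!r:12} {fr!r:12} similarity={score:.2f} cognate-shaped={score >= 0.6}")
```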

Relevance: 10.00%

Publisher:

Abstract:

Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity - past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens "from all angles" and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. ("Natural-colour" is used to contrast with "false-colour", i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data for research, education, art, entertainment, biodiversity assessment and biosecurity control. © 2014 Nguyen et al.

Relevance: 10.00%

Publisher:

Abstract:

Background: The existence of an ecstasy dependence syndrome is controversial. We examined whether the acute after-effects of ecstasy use (i.e., the “come-down”) falsely lead to the identification of ecstasy withdrawal and the subsequent diagnosis of ecstasy dependence. Methods: The Structured Clinical Interview for DSM-IV-TR Disorders: Research Version (SCID-RV) was administered to 214 Australian ecstasy users. Ecstasy withdrawal was operationalized in three contrasting ways: (i) as per DSM-IV criteria; (ii) as the expected after effects of ecstasy (a regular come-down); or (iii) as a substantially greater or longer come-down than on first use (intense come-down). These definitions were validated against frequency of ecstasy use, readiness to change and ability to resist the urge to use ecstasy. Confirmatory factor analyses were used to see how they aligned with the overall dependence syndrome. Results: Come-down symptoms increased the prevalence of withdrawal from 1% (DSM-IV criterion) to 11% (intense come-downs) and 75% (regular come-downs). Past year ecstasy dependence remained at 31% when including the DSM-IV withdrawal criteria and was 32% with intense come-downs, but increased to 45% with regular come-downs. Intense come-downs were associated with lower ability to resist ecstasy use and loaded positively on the dependence syndrome. Regular come-downs did not load positively on the ecstasy dependence syndrome and were not related to other indices of dependence. Conclusion: The acute after-effects of ecstasy should be excluded when assessing ecstasy withdrawal as they can lead to a false diagnosis of ecstasy dependence. Worsening of the ecstasy come-down may be a marker for dependence.

Relevance: 10.00%

Publisher:

Abstract:

Due to their unobtrusive nature, vision-based approaches to tracking sports players have been preferred over wearable sensors, as they do not require the players to be instrumented for each match. Unfortunately, however, due to heavy occlusion between players, variation in resolution and pose, and fluctuating illumination conditions, tracking players continuously is still an unsolved vision problem. For tasks like clustering and retrieval, having noisy data (i.e. missing and false player detections) is problematic, as it generates discontinuities in the input data stream. One method of circumventing this issue is to use an occupancy map, where the field is discretised into a series of zones and a count of player detections in each zone is obtained. A series of frames can then be concatenated to represent a set-play or an example of team behaviour. A problem with this approach, though, is that the compressibility is low (i.e. the variability in the feature space is incredibly high). In this paper, we propose a bilinear spatiotemporal basis model with a role representation, operating in a low-dimensional space, to clean up the noisy detections. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labeled data.
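The occupancy-map representation described above is easy to sketch: the pitch is divided into a fixed grid of zones, each frame's player detections are binned into that grid, and consecutive frames are concatenated into one feature vector. The grid size, pitch dimensions and data layout below are assumptions chosen for the example, not the paper's configuration.

```python
import numpy as np

def occupancy_map(detections, pitch_size=(91.4, 55.0), grid=(10, 6)):
    """Count player detections per zone for one frame.

    detections: iterable of (x, y) positions in metres on the pitch
    returns:    array of shape `grid` with one count per zone
    """
    counts = np.zeros(grid, dtype=int)
    for x, y in detections:
        gx = min(int(x / pitch_size[0] * grid[0]), grid[0] - 1)
        gy = min(int(y / pitch_size[1] * grid[1]), grid[1] - 1)
        counts[gx, gy] += 1
    return counts

def sequence_feature(frames):
    """Concatenate per-frame occupancy maps into a single set-play descriptor."""
    return np.concatenate([occupancy_map(f).ravel() for f in frames])

# Toy example: two frames of noisy detections (some players missed).
frame_1 = [(10.0, 20.0), (45.0, 30.0), (80.0, 5.0)]
frame_2 = [(12.0, 21.0), (46.0, 29.0)]
print(sequence_feature([frame_1, frame_2]).shape)  # (120,) for a 10x6 grid over 2 frames
```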

Relevance: 10.00%

Publisher:

Abstract:

Corner detection is of great importance in many computer vision tasks. However, in real-world applications, noise in the image strongly affects the performance of corner detectors. To date, few corner detectors have been designed to be robust to heavy noise, partly because noise can be reduced by a denoising procedure. In this paper, we present a corner detector that can find discriminative corners in images contaminated by noise of different levels, without any denoising procedure. Candidate corners (i.e., features) are first detected by a modified SUSAN approach, and false corners caused by noise are then rejected based on their local characteristics. Features in flat regions are removed based on their intensity centroid, and features on edge structures are removed using the Harris response. The detector is self-adaptive to noise, since the image signal-to-noise ratio (SNR) is automatically estimated to choose an appropriate threshold for refining features. Experimental results show that our detector locates discriminative corners in images with strong noise better than other widely used corner and keypoint detectors.
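The Harris response used above to reject edge-like features is a standard quantity computed from the local gradient structure tensor; the sketch below shows that generic computation, R = det(M) - k * trace(M)^2. The Gaussian window, the value k = 0.04 and the library calls are conventional choices and my assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma=1.5, k=0.04):
    """Per-pixel Harris corner response; large positive values indicate corners,
    strongly negative values indicate edge structures."""
    img = np.asarray(image, dtype=float)
    iy, ix = np.gradient(img)  # gradients along rows (y) and columns (x)

    # Structure tensor entries smoothed by a Gaussian window.
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# Toy example: a bright square on a dark background gives strong responses at its corners.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
response = harris_response(img)
print(np.unravel_index(np.argmax(response), response.shape))
```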