998 results for Digital watermark
Abstract:
This master's thesis investigates techniques for embedding a watermark into a spectral image and methods for recognizing and detecting watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The watermark was embedded into the spectral image in the transform space. According to the proposed model, a component of the transform space was replaced with a linear combination of the watermark and another transform-space component. The set of parameters used in the embedding was studied. The quality of the watermarked images was measured and analyzed, and recommendations for watermark embedding are given. Several methods were used for watermark recognition, and the recognition results were analyzed. The ability of the watermarks to withstand various attacks was verified. A set of detection experiments was carried out in the thesis, taking into account the parameters used in watermark embedding. The ICA (Independent Component Analysis) method is considered one possible alternative for watermark detection.
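The embedding model just described (one transform-space component replaced by a linear combination of the watermark and another component) can be illustrated with a minimal Python sketch using scikit-learn's PCA; the parameters alpha, k and ref are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def embed_watermark_pca(spectral_image, watermark, k=2, ref=0, alpha=0.05):
    """Embed `watermark` into principal component `k` of a spectral image.

    spectral_image: (H, W, B) array with B spectral bands.
    watermark:      (H, W) array.
    Component plane k is replaced by a linear combination of the watermark
    and the `ref`-th component plane; alpha controls the mixture.
    """
    H, W, B = spectral_image.shape
    pixels = spectral_image.reshape(-1, B)            # one spectrum per row
    pca = PCA(n_components=B).fit(pixels)
    planes = pca.transform(pixels).reshape(H, W, B)   # transform-space image
    planes[:, :, k] = alpha * watermark + (1 - alpha) * planes[:, :, ref]
    return pca.inverse_transform(planes.reshape(-1, B)).reshape(H, W, B)
```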
Abstract:
The rapid development of data transfer over the internet has made it easier to send data to its destination quickly and accurately. There are many transmission media for delivering data, such as e-mail; at the same time, valuable information can easily be modified and misused through hacking. To transfer data securely to its destination, without modification, there are several approaches, such as cryptography and steganography. This paper deals with image steganography as well as with different security issues, and gives a general overview of cryptography, steganography and digital watermarking approaches. The problem of copyright violation of multimedia data has grown with the enormous expansion of computer networks, which provide fast and error-free transmission of any unauthorized duplicate, and possibly manipulated, copy of multimedia information. To be effective for copyright protection, a digital watermark must be robust, i.e. difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, we use invisible watermarking to embed it with the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel; the contribution of the proposed scheme is to embed the message, guided by a hint, only in the image edges. Even if a hacker knows that the system uses the LSB technique, the correct message cannot be recovered. To make the system robust and secure, we add a cryptographic algorithm, the Vigenère square, so the message is transmitted as ciphertext, an added advantage of the proposed system. The standard Vigenère square algorithm works with either lower-case or upper-case letters only; the proposed algorithm extends the Vigenère square with numbers, so the crypto key can combine characters and digits. With these modifications to the existing algorithm and the combination of cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against image quality for a large amount of data. The proposed encryption achieves a high PSNR of 89 dB with a small MSE of 0.0017. The proposed watermarking system thus appears secure and robust for hiding sensitive information in any digital system, because it combines the properties of both steganography and cryptography.
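Two of the building blocks named above can be sketched in a few lines of Python; this is a minimal, hedged illustration of a Vigenère square extended with digits and of the PSNR/MSE quality measure, not the paper's exact implementation (the edge-detection hint for LSB embedding is omitted).

```python
import string
import numpy as np

ALPHABET = string.ascii_uppercase + string.digits  # Vigenère square extended with 0-9
M = len(ALPHABET)                                  # 36 symbols instead of 26

def vigenere(text, key, decrypt=False):
    """Encrypt or decrypt over the extended alphanumeric Vigenère square.

    `key` is assumed to contain only letters and digits; characters of
    `text` outside the alphabet pass through unchanged.
    """
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        if ch not in ALPHABET:
            out.append(ch)
            continue
        shift = ALPHABET.index(key[i % len(key)].upper())
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * shift) % M])
    return "".join(out)

def psnr(original, stego):
    """PSNR in dB from the MSE between two 8-bit images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(stego, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

For example, vigenere("HELLO 2024", "KEY9") produces ciphertext that the same call with decrypt=True inverts, keys mixing letters and digits being exactly the extension the paper proposes.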
Abstract:
Graduate Program in Information Science - FFC
Abstract:
With the increasing use of digital media, methods for multimedia protection have become extremely important. The number of solutions to the problem, from encryption to watermarking, is large and growing every year. In this work digital image watermarking is considered, specifically a novel method for the digital watermarking of color and spectral images. An overview of existing methods for watermarking color and grayscale images is given in the paper. Methods using independent component analysis (ICA) for detection, and those using the discrete wavelet transform (DWT) and discrete cosine transform (DCT), are considered in more detail. The novel watermarking method proposed in this paper allows a color or spectral watermark image to be embedded into a color or spectral image, respectively, and the watermark to be successfully extracted from the resulting watermarked image. A number of experiments have been performed on the quality of extraction depending on the parameters of the embedding procedure. Another set of experiments tested the robustness of the proposed algorithm. Three techniques were chosen for that purpose: the median filter, the low-pass filter (LPF) and the discrete cosine transform (DCT), which are part of the widely known StirMark image watermarking robustness test. The study shows that the proposed watermarking technique is fragile, i.e. the watermark is altered by simple image-processing operations. Moreover, we have found that the contents of the image to be watermarked do not affect the quality of the extraction. The mixing coefficients, which determine the amount of the key and watermark image in the result, should not exceed 1% of the original. The proposed algorithm has proven successful in the task of watermark embedding and extraction.
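The finding that the mixing coefficients should stay at or below 1% suggests a simple additive mixing model; the sketch below is an assumed linear-mixing illustration (the abstract does not spell out the exact embedding formula), with a_w and a_k standing in for the watermark and key coefficients.

```python
import numpy as np

def embed(host, watermark, key, a_w=0.01, a_k=0.01):
    """Additive mixing: host plus small fractions of watermark and key.

    a_w and a_k are the mixing coefficients; per the abstract, they
    should not exceed 1% of the original.
    """
    return host + a_w * watermark + a_k * key

def extract(watermarked, host, key, a_w=0.01, a_k=0.01):
    """Invert the mixing when the original host and key are available."""
    return (watermarked - host - a_k * key) / a_w
```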
Abstract:
A domain-independent, ICA-based approach to watermarking is presented. This approach can be used on images, music or video to embed either a robust or a fragile watermark. In the case of robust watermarking, the method shows a high information rate and robustness against malicious and non-malicious attacks, while keeping a low induced distortion. The fragile watermarking scheme, on the other hand, shows high sensitivity to tampering attempts while keeping the requirements for a high information rate and low distortion. The improved performance is achieved by employing a set of statistically independent sources (the independent components) as the feature space, together with principled statistical decoding methods. The performance of the suggested method is compared to other state-of-the-art approaches. The paper focuses on applying the method to digitized images, although the same approach can be used for other media, such as music or video.
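Building a feature space of statistically independent components can be illustrated with scikit-learn's FastICA; this is a minimal sketch under assumed parameters (8x8 patches, 16 components), and it constructs only the feature space, not the full encoder/decoder described in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_feature_space(image, patch=8, n_components=16, seed=0):
    """Learn independent components from non-overlapping image patches."""
    H, W = image.shape
    rows = [image[i:i + patch, j:j + patch].ravel()
            for i in range(0, H - patch + 1, patch)
            for j in range(0, W - patch + 1, patch)]
    X = np.asarray(rows, dtype=float)
    ica = FastICA(n_components=n_components, random_state=seed)
    sources = ica.fit_transform(X)   # feature-space coordinates per patch
    return ica, sources
```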
Abstract:
This paper addresses the security of a specific class of common watermarking methods based on dither-modulation quantisation index modulation (DM-QIM), focusing on watermark-only attacks (WOA). The vulnerabilities of, and probable attacks on, lattice-structure-based watermark embedding methods have been presented in the literature. DM-QIM is one of the best-known lattice-structure-based watermarking techniques. In this paper, the authors discuss a watermark-only attack scenario (the attacker has access to a single watermarked content only). In the literature, it has been assumed that DM-QIM methods are secure against WOA. However, the authors show that the DM-QIM-based embedding method is vulnerable to a guided key-guessing attack that exploits subtle statistical regularities in the feature-space embeddings for time series and images. Using a distribution-free algorithm, this paper presents an analysis of the attack and numerical results for multiple examples of image and time-series data.
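For reference, textbook DM-QIM embeds a bit by quantising a feature onto one of two dithered lattices; the sketch below is a standard scalar illustration (the step size delta is an assumption), not the authors' exact lattice configuration.

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantise feature x onto the lattice for `bit`.

    Bit 0 uses the base lattice (dither 0); bit 1 uses the lattice
    shifted by delta/2.
    """
    d = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((x - d) / delta) + d

def qim_detect(y, delta=8.0):
    """Minimum-distance decoding: choose the lattice closer to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)
```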
Abstract:
This paper presents an up-to-date review of digital watermarking (WM) from a VLSI designer's point of view. The reader is introduced to the basic principles and terms of image watermarking. The paper gives a brief survey of WM theory, laying out common classification criteria and discussing important design considerations and trade-offs. Elementary WM properties such as robustness and computational complexity, and their influence on image quality, are discussed. Common attacks and testing benchmarks are also briefly mentioned. It is shown that WM design must take the intended application into account. The difference between software and hardware implementations is explained through the introduction of a general scheme of a WM system and two examples from previous works. A versatile methodology to aid in a reliable and modular design process is suggested. Relating to mixed-signal VLSI design and testing, the proposed methodology allows the efficient development of a CMOS image sensor with WM capabilities.
Abstract:
Several medical and dental schools have described their experience in the transition from conventional to digital microscopy in the teaching of general pathology and histology; however, this transition has scarcely been reported in the teaching of oral pathology. The objective of the current study is therefore to report the transition from conventional glass slides to virtual microscopy in oral pathology teaching, a unique experience in Latin America. An Aperio ScanScope® scanner was used to digitize the histological slides used in practical lectures on oral pathology. The challenges and benefits observed by the group of professors from the Piracicaba Dental School (Brazil) are described, and a questionnaire was applied to evaluate the students' compliance with this new methodology. The professors described an improvement in the classes, as they mainly dealt with questions related to pathological changes instead of technical problems; greater interaction with the students was also reported. The simplicity of the software used and the high quality of the virtual slides, which required less time to identify microscopic structures, were considered important for a better teaching process. Virtual microscopy used to teach oral pathology represents a useful educational methodology, with excellent acceptance by dental students.
Abstract:
Remotely sensed imagery has been widely used for land use/cover classification, thanks to periodic data acquisition and the widespread use of digital image-processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic resulted in good to superior indicators (best results - K-means: k=0.68, k=0.77, k=0.64 and MaxVer: k=0.71, k=0.89, k=0.70, respectively, for the three areas mentioned). The results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among the various spectral signatures under different landscape patterns and a cost/benefit analysis of the different steps the operator performs to produce a land cover/use map. A more systematic assessment of the several implementation options for a specific project is suggested before beginning a land use/cover mapping job.
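The kappa values quoted above are typically computed from a confusion matrix of classified versus reference labels; a minimal Python sketch of Cohen's kappa (assuming rows are reference classes and columns are mapped classes) follows.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)
```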
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
OBJECTIVES: This study assessed bone density gain and its relationship with clinical periodontal parameters in a case series of a regenerative therapy procedure. MATERIAL AND METHODS: Using a split-mouth study design, 10 pairs of infrabony defects from 15 patients were treated with a pool of bovine bone morphogenetic proteins associated with a collagen membrane (test sites) or a collagen membrane only (control sites). Periodontal healing was monitored clinically and radiographically for six months. Standardized pre-surgical and 6-month postoperative radiographs were digitized for digital subtraction analysis, which showed relative bone density gains in both groups of 0.034 ± 0.423 and 0.105 ± 0.423 in the test and control groups, respectively (p>0.05). RESULTS: As regards the area of bone density change, the influence of the therapy was detected in 2.5 mm² in the test group and 2 mm² in the control group (p>0.05). Additionally, no correlation was observed between the favorable clinical results and the bone density gain measured by digital subtraction radiography (p>0.05). CONCLUSIONS: The findings of this study suggest that the clinical benefit of the regenerative therapy observed did not come with significant bone density gains. Long-term evaluation may lead to different conclusions.
Abstract:
This in vivo study evaluated the quality of dissociation of maxillary premolar roots under combined variations of vertical and horizontal angulation using X-ray holders (Rinn-XCP), and compared two types of intraoral radiography systems: conventional film (Kodak Insight, Rochester, USA) and digital radiography (Kodak RVG 6100, Kodak, Rochester, USA). The study sample comprised 20 patients, with a total of 20 maxillary premolars radiographed using the paralleling technique (GP), a 20º variation of the horizontal angle (GM), and a 25º variation of the horizontal angle combined with a 15º vertical angle (GMV). Each image was analyzed independently by two experienced examiners, who assigned a score to the diagnostic capability of root dissociation and measured the distance between the apexes. Statistical analysis used the Wilcoxon signed-rank, Friedman and t tests. The mean measured distances between the buccal and lingual root apexes were greater for the GMV, ranging from 2.3 mm to 3.3 mm. A statistically significant difference was found between GM and GMV when compared to GP, with p < 0.01. The best diagnostic image for root dissociation was obtained in the GMV. These results support the use of anterior X-ray holders, which offer a better combined deviation (GMV) for dissociating maxillary premolar roots in both radiography systems.
Abstract:
The aim of this study was to determine the reproducibility, reliability and validity of measurements on digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all teeth present, the intercanine, interpremolar and intermolar distances, and the overjet and overbite. The plaster models were measured using a digital vernier caliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The mean differences for the digital models were 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in orthodontics.
Abstract:
Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An Artificial Neural Network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN was able to produce a classification of digital scores comparable to the optical ones, correctly classifying the majority of the follicles and reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity reached a maximum of 92%. These results demonstrate that digital image analysis combined with an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of the error in diagnosis and classification to be measured, making comparisons between databases feasible.
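The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions; a minimal sketch is given below (the counts tp, fp, tn, fn are generic names, not values from the paper).

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity
```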
Abstract:
Much has been discussed about Digital Literacy, but the identification of the skills required to develop such a process remains obscure. This study was carried out to integrate the Digital Literacy process with the specific informational skills a person must master to search, retrieve and use information efficiently in professional, academic or personal life. The main objective of this work is to propose methodological parameters for training in informational skills; the specific objectives concern the assumption and identification of the skills desired of Digital Literacy program participants. The methodological procedures applied in the research are exploratory in character, using two tools: literature research and case studies. Besides structuring a methodology for informational competence, the study points out that the country is far from what is desired concerning the development and deployment of Digital Literacy programs consistent enough to support the teaching and learning of searching, retrieving and using information. It is therefore essential to create programs that provide not only machinery but also motivate individuals to develop the informational skills that help in the learning process.