814 results for Gravimetric techniques
Abstract:
In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, wavelets were evolved here with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, the cropped images outperformed the resized images and were on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256×256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvements in average PSNR were observed for other compression ratios (CR) and for degraded images as well. The proposed technique also gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases.
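The two operations the comparison above rests on, PSNR between an original and a reconstructed image and centre-cropping a 256×256 training window, can be sketched as follows (a minimal illustration; the evolutionary search for the lifting coefficients itself is omitted):

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def centre_crop(img, size=256):
    """Extract a size x size window from the image centre, as used here
    to build small training images from full-size fingerprints."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```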
Abstract:
Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data. The term data mining refers to the process of exploratory analysis of the data and of building models on it. To infer patterns from data, data mining involves approaches such as association rule mining, classification and clustering. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between the data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform one. This research study explores various techniques to convert mixed datasets into a numerical equivalent, so as to make them amenable to statistical and similar algorithms. The results of clustering mixed-category data after conversion to a numeric type are demonstrated using a crime dataset. The thesis also proposes an extension to a well-known algorithm for handling mixed data types, so that it can deal with datasets having only categorical data; the proposed conversion has been validated on a breast cancer dataset. A further issue with the clustering process is the visualization of the output: geometric techniques such as scatter plots or projection plots are available, but none of them displays the result over the whole database, instead demonstrating attribute-pairwise analysis.
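A common baseline for the mixed-to-numeric conversion described above is to one-hot encode the categorical attributes and z-score the numerical ones, after which any distance-based clustering algorithm applies. The sketch below is that generic baseline, not the thesis's specific conversion:

```python
import numpy as np

def mixed_to_numeric(rows, categorical_cols):
    """Convert records with mixed attributes to a purely numeric matrix:
    categorical columns become one-hot indicator columns, numeric
    columns are z-scored so no attribute dominates the distances."""
    rows = [list(r) for r in rows]
    n_cols = len(rows[0])
    features = []
    for j in range(n_cols):
        col = [r[j] for r in rows]
        if j in categorical_cols:
            for level in sorted(set(col)):  # one indicator per category level
                features.append([1.0 if v == level else 0.0 for v in col])
        else:
            col = np.asarray(col, dtype=float)
            std = col.std() or 1.0  # guard against constant columns
            features.append(((col - col.mean()) / std).tolist())
    return np.asarray(features).T  # rows = records, columns = features
```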
Abstract:
This paper describes a novel framework for the automatic segmentation of primary tumors and their boundaries from brain MRIs using morphological filtering techniques. The method uses T2-weighted and T1 FLAIR images. The approach is very simple, more accurate and less time consuming than existing methods. It was tested on fifty patients with different tumor types, shapes, image intensities and sizes, and produced better results, which were validated against ground-truth images provided by the radiologist. Segmentation of the tumor and detection of its boundary are important because they can be used for surgical planning, treatment planning, textural analysis, 3-dimensional modeling and volumetric analysis.
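The core morphological filtering step can be illustrated with a minimal binary opening (erosion followed by dilation with a 3×3 structuring element), which removes small specks from a segmentation mask while preserving larger regions. This is a generic sketch of the operation, not the paper's full pipeline:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element."""
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):  # pixel survives only if all 9 neighbours are set
            out &= padded[1 + di:1 + di + mask.shape[0],
                          1 + dj:1 + dj + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):  # pixel is set if any of the 9 neighbours is set
            out |= padded[1 + di:1 + di + mask.shape[0],
                          1 + dj:1 + dj + mask.shape[1]]
    return out

def open_mask(mask):
    """Morphological opening: erosion then dilation, removing isolated specks."""
    return dilate(erode(mask))
```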
Abstract:
Super-resolution is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves up-sampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed and shown to be better than conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing are also eliminated by this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, the lifting scheme is used for the implementation of directionlets.
The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey-scale images, is extended to colour images and noisy images.
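The lifting scheme mentioned above can be illustrated with its simplest instance, one level of the Haar wavelet built from split, predict and update steps; the same pattern underlies the directionlet implementation, which is not reproduced here. The sketch assumes an even-length signal:

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split into even/odd
    samples, predict each odd from its even neighbour, then update the
    evens so the approximation preserves the local average."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict step
    approx = even + detail / 2.0   # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Perfect reconstruction by undoing the lifting steps in reverse."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is trivially invertible, the inverse transform is exact regardless of the predict/update filters chosen, which is what makes lifting cheap to implement.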
Abstract:
The thesis explores the area of still-image compression. Image compression techniques can be broadly classified into lossless and lossy compression. The most common lossy compression techniques are based on transform coding, vector quantization and fractals. Transform coding is the simplest of these and generally employs reversible transforms such as the DCT and DWT. The Mapped Real Transform (MRT) is an evolving integer transform based on real additions alone. The present research work aims at developing new image compression techniques based on MRT. Most transform coding techniques employ fixed-block-size image segmentation, usually 8×8. Hence, a fixed-block-size transform coder is implemented using MRT, and its merits and demerits are analyzed for both 8×8 and 4×4 blocks. The N² unique MRT coefficients for each block are computed using templates. Considering the merits and demerits of fixed-block-size transform coding, a hybrid form of these techniques is implemented to improve compression performance. The hybrid coder is found to perform better than the fixed-block-size coders, suggesting that making the block size adaptive can improve performance further. In adaptive-block-size coding, the block size may vary from the size of the image down to 2×2, so computing the MRT using templates becomes impractical due to memory requirements. An adaptive transform coder based on the Unique MRT (UMRT), a compact form of MRT, is therefore implemented to get better performance in terms of PSNR and HVS. The suitability of MRT for vector quantization of images is then examined, and a UMRT-based Classified Vector Quantization (CVQ) is implemented, in which the edges in the images are identified and classified using a UMRT-based criterion. Based on the above experiments, a new technique named "MRT-based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ)" is developed.
Its performance is evaluated and compared against existing techniques. A comparison with standard JPEG and Shapiro's well-known Embedded Zerotree Wavelet (EZW) coder shows that the proposed technique gives better performance for the majority of images.
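The MRT itself is not reproduced here, but the adaptive-block-size idea (blocks ranging from the full image down to 2×2) can be sketched with a variance-driven quadtree split. The variance criterion is an assumption for illustration, not the thesis's actual split rule, and the sketch assumes a square image whose side is a power of two:

```python
import numpy as np

def quadtree_blocks(img, threshold, min_size=2):
    """Recursively split an image into square blocks: a block is divided
    into four quadrants while its pixel variance exceeds `threshold` and
    it is still larger than `min_size`. Returns (row, col, size) tuples,
    so smooth regions end up as large blocks and busy regions as small ones."""
    blocks = []
    def split(r, c, size):
        block = img[r:r + size, c:c + size]
        if size > min_size and block.var() > threshold:
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    split(r + dr, c + dc, half)
        else:
            blocks.append((r, c, size))
    split(0, 0, img.shape[0])
    return blocks
```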
Abstract:
HINDI
Abstract:
This thesis is divided into nine chapters and deals with the modification of TiO2 for various applications, including photocatalysis, thermal reactions, photovoltaics and non-linear optics. Chapter 1 gives a brief introduction to the topic of study: the applications of modified titania systems in various fields are discussed concisely, along with the scope and objectives of the present work. Chapter 2 explains the strategy adopted for the synthesis of the metal/non-metal co-doped TiO2 systems. A hydrothermal technique was employed for the preparation of the co-doped TiO2 systems, with Ti[OCH(CH3)2]4, urea and metal nitrates used as the sources of TiO2, N and the metals respectively. In all the co-doped systems, urea and Ti[OCH(CH3)2]4 were taken in a 1:1 molar ratio and the metal concentration was varied; five different co-doped catalytic systems were prepared, with three versions of each catalyst at different metal concentrations. A brief explanation of the physico-chemical techniques used for the characterization of the materials is also presented in this chapter. These include X-ray Diffraction (XRD), Raman Spectroscopy, FTIR analysis, Thermogravimetric Analysis, Energy Dispersive X-ray Analysis (EDX), Scanning Electron Microscopy (SEM), UV-Visible Diffuse Reflectance Spectroscopy (UV-Vis DRS), Transmission Electron Microscopy (TEM), BET surface-area measurements and X-ray Photoelectron Spectroscopy (XPS). Chapter 3 contains the results and discussion of the characterization techniques used for analyzing the prepared systems. Characterization is an inevitable part of materials research: determining the physico-chemical properties of the prepared materials with suitable characterization techniques is crucial for finding their exact field of application.
It is clear from the XRD patterns that the photocatalytically active anatase phase dominates in the calcined samples, with peaks at 2θ values around 25.4°, 38°, 48.1°, 55.2° and 62.7° corresponding to the (101), (004), (200), (211) and (204) crystal planes (JCPDS 21-1272) respectively. In the Pr-N-Ti sample, however, a new peak was observed at 2θ = 30.8°, corresponding to the (121) plane of the brookite polymorph. There are no visible peaks corresponding to the dopants, which may be due to their low concentration or may indicate good dispersion of the dopants in the TiO2. The crystallite size of the samples was calculated from the Scherrer equation by using the full width at half maximum (FWHM) of the (101) peak of the anatase phase. The crystallite size of all the co-doped TiO2 samples was found to be lower than that of bare TiO2, which indicates that doping metal ions of higher ionic radius into the TiO2 lattice causes some lattice distortion that suppresses the growth of the TiO2 nanoparticles. The structural identity obtained from the XRD patterns is further confirmed by Raman spectral measurements; anatase has six Raman-active modes. The band gap of the co-doped systems was calculated using the Kubelka-Munk equation and found to be lower than that of pure TiO2. The stability of the prepared systems was assessed by thermogravimetric analysis. FT-IR was performed to identify the functional groups and to study the surface changes that occurred during modification. EDX was used to determine the impurities present in the system: the EDX spectra of all the co-doped samples show signals directly related to the dopants, with O and Ti as the main components and low concentrations of the doped elements. The morphologies of the prepared systems were obtained from SEM and TEM analysis, and the average particle size was drawn from histogram data. The electronic structures of the samples were identified from XPS measurements.
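The crystallite-size calculation from the (101) FWHM follows the Scherrer equation D = Kλ/(β cos θ). A minimal sketch is below; the Cu Kα wavelength (0.15406 nm) and shape factor K = 0.9 are assumed defaults, since the abstract does not state them:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size in nm from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), where beta is the peak FWHM
    (given here in degrees of 2-theta) converted to radians and theta
    is half the diffraction angle."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))
```

A broader peak gives a smaller crystallite size, which is the trend the abstract reports for the co-doped samples relative to bare TiO2.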
Chapter 4 describes the photocatalytic degradation of the herbicides atrazine and metolachlor using the metal/non-metal co-doped titania systems. The percentage of degradation was analyzed by HPLC. Parameters such as the effect of different catalysts, irradiation time and catalyst amount, as well as reusability, were studied. Chapter 5 deals with the photo-oxidation of some anthracene derivatives by the co-doped catalytic systems. These anthracene derivatives come under the category of polycyclic aromatic hydrocarbons (PAH). Due to the presence of stable benzene rings, most PAH show strong resistance to biological degradation and to the common methods employed for their removal, and according to the Environmental Protection Agency most PAH are highly toxic in nature. TiO2 photochemistry has been extensively investigated as a method for the catalytic conversion of such organic compounds, highlighting its potential in green chemistry. There are essentially two ways to remove pollutants from the ecosystem: complete mineralization, or conversion of toxic compounds into compounds less toxic than the starting material. This chapter concentrates on the second approach. The catalysts used were Gd(1wt%)-N-Ti, Pd(1wt%)-N-Ti and Ag(1wt%)-N-Ti. All the PAH were successfully converted to anthraquinone, a compound with diverse applications in industrial as well as medical fields. Substitution of the 10th position of the PAH by a phenyl ring reduces the feasibility of the photoreaction and produced 9-hydroxy-9-phenylanthrone (9H9PA) as an intermediate species. The products were separated and purified by column chromatography using a 70:30 hexane/DCM mixture as the mobile phase, and the resulting products were characterized thoroughly by 1H NMR, IR spectroscopy and GC-MS analysis.
Chapter 6 elucidates the heterogeneous Suzuki coupling reaction over Cu/Pd bimetallic catalysts supported on TiO2. Sol-gel synthesis followed by impregnation was adopted for the preparation of Cu/Pd-TiO2. The prepared system was characterized by XRD, TG-DTG, SEM, EDX, BET surface area and XPS. The product was separated and purified by column chromatography using hexane as the mobile phase. A maximum isolated yield of biphenyl of around 72% was obtained in DMF using Cu(2wt%)-Pd(4wt%)-Ti as the catalyst; the most effective solvent, base and catalyst were found to be DMF, K2CO3 and Cu(2wt%)-Pd(4wt%)-Ti respectively. Chapter 7 gives an idea of the photovoltaic (PV) applications of TiO2-based thin films. Because of the energy crisis, the whole world is looking for new sustainable energy sources, and harnessing solar energy is one of the most promising ways to tackle this issue. The presently dominant PV technologies are based on inorganic materials, but high material and manufacturing costs and low power conversion efficiency limit their popularization. A lot of research has been conducted towards the development of low-cost PV technologies, of which organic photovoltaic (OPV) devices are among the most promising. Here, two TiO2 thin films of different thicknesses were prepared by spin coating. The films were characterized by XRD, AFM and conductivity measurements, and their thickness was measured with a stylus profiler. This chapter mainly concentrates on the fabrication of an inverted heterojunction solar cell using the conducting polymer MEH-PPV as the photoactive layer, with TiO2 as the electron transport layer. Thin films of MEH-PPV were also prepared by spin coating. Two fullerene derivatives, PCBM and ICBA, were introduced into the device in order to improve the power conversion efficiency. Effective charge transfer between the conducting polymer and ICBA was inferred from fluorescence quenching studies.
The fabricated inverted heterojunction exhibited a maximum power conversion efficiency of 0.22% with ICBA as the acceptor molecule. Chapter 8 describes the third-order nonlinear optical properties of bare and noble-metal-modified TiO2 thin films. The thin films were fabricated by spray pyrolysis. Sol-gel-derived Ti[OCH(CH3)2]4 in CH3CH2OH/CH3COOH was used as the precursor for TiO2; the precursors for Au, Ag and Pd were aqueous solutions of HAuCl4, AgNO3 and Pd(NO3)2 respectively. The prepared films were characterized by XRD, SEM and EDX. The nonlinear optical properties of the prepared materials were investigated by the Z-scan technique using an Nd:YAG laser (532 nm, 7 ns, 10 Hz), and the nonlinear coefficients were obtained by fitting the experimental Z-scan plots with theoretical plots. Nonlinear absorption is a nonlinear change (increase or decrease) in absorption with increasing intensity, and can be divided into two types: saturable absorption (SA) and reverse saturable absorption (RSA). Depending on the pump intensity and the absorption cross-section at the excitation wavelength, most molecules show nonlinear absorption. With increasing intensity, if the excited states saturate owing to their long lifetimes, the transmission shows SA characteristics: absorption decreases as intensity increases. If, however, the excited state absorbs more strongly than the ground state, the transmission shows RSA characteristics. In this work most of the materials show SA behaviour and some exhibit RSA. Both properties depend purely on the nature of the materials and the alignment of energy states within them, and both SA and RSA have immense applications in electronic devices. The important results obtained from the various studies are presented in Chapter 9.
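The SA and RSA behaviours described above can be summarised by the standard textbook intensity-dependent absorption models; these are generic forms, not coefficients fitted to the films in this work:

```python
def absorption_sa(intensity, alpha0, i_sat):
    """Saturable absorption: the effective absorption coefficient falls
    as intensity rises, alpha(I) = alpha0 / (1 + I / I_sat)."""
    return alpha0 / (1.0 + intensity / i_sat)

def absorption_rsa(intensity, alpha0, beta):
    """Reverse saturable absorption: absorption grows with intensity,
    alpha(I) = alpha0 + beta * I (beta > 0)."""
    return alpha0 + beta * intensity
```

Fitting the Z-scan transmittance against models of this shape is what yields the nonlinear coefficients mentioned in the abstract.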
Abstract:
The aim of the thesis was to design and develop spatially adaptive denoising techniques with edge and feature preservation, for images corrupted with additive white Gaussian noise and for SAR images affected by speckle noise. Image denoising is a well-researched topic and has found multifaceted applications in day-to-day life. Image denoising based on multi-resolution analysis using the wavelet transform has received considerable attention in recent years. The directionlet-based denoising schemes presented in this thesis are effective in preserving image-specific features such as edges and contours. The scope of this research remains open in areas such as further optimization for speed and extension of the techniques to related areas such as colour-image and video denoising; such studies would further augment the practical use of these techniques.
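A common building block for wavelet-domain denoising of additive white Gaussian noise is soft thresholding of the detail coefficients; a minimal sketch follows (the thesis's directionlet-specific, spatially adaptive rules are not reproduced here):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold detail coefficients: zero anything with magnitude
    below t and shrink the rest toward zero by t - the classic
    wavelet-shrinkage denoising rule."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(detail_coeffs, n):
    """Donoho's universal threshold sigma * sqrt(2 log n), with the noise
    level sigma estimated from the median absolute deviation of the
    finest-scale detail coefficients."""
    sigma = np.median(np.abs(detail_coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(n))
```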
Abstract:
We present a new algorithm called TITANIC for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.
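The frequent-itemset machinery TITANIC builds on can be illustrated with a generic level-wise (Apriori-style) search; this is the borrowed pruning idea only, not the TITANIC algorithm itself:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise search for all itemsets whose support (fraction of
    transactions containing the set) reaches min_support. Candidates at
    each level are unions of frequent sets one item smaller, so any set
    with an infrequent subset is pruned without counting it."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    result = {}
    level = [frozenset([i]) for i in items]
    while level:
        frequent = [s for s in level if support(s) >= min_support]
        result.update({s: support(s) for s in frequent})
        # next level: unions of frequent sets that are exactly one item larger
        level = list({a | b for a, b in combinations(frequent, 2)
                      if len(a | b) == len(a) + 1})
    return result
```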
Abstract:
The increasing interconnection of information and communication systems leads to ever greater complexity and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a highly effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volumes of network data and in the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events; these are further analysed and converted into connection vectors.
To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is investigated intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to check whether they are in fact normal. However, network traffic changes constantly because of the concept-drift phenomenon, which produces non-stationary network data in real time; this phenomenon is handled by the update model. The EGHSOM model can detect new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluations, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy. In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely.
The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all the others. This can be attributed to the following key points: the processing of the collected network data, the best overall performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
Abstract:
An English teaching unit built around a cross-curricular theme: racism. This theme creates a positive climate of respect and collaboration that facilitates teamwork. The unit highlights the role of the foreign language as an instrument of communication and cooperation between different countries and peoples. It covers the four communicative skills (listening, speaking, reading and writing) for the upper levels of secondary education, through authentic audiovisual materials not created by the teacher for the occasion (the audiovisual publication Speak Up, TIME magazine, etc.).
Abstract:
Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and the circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee that a chip has constant output current. This transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at a smaller cost.
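The abstract does not name the specific code it proposes; one classic scheme with exactly this effect is bus-invert coding, shown here as an assumed illustration. At the cost of a single extra invert line, it halves the worst-case number of simultaneous bus transitions, and thus the worst-case supply-current spike:

```python
def bus_invert(prev, word, width=8):
    """Bus-invert coding: if sending `word` would flip more than half the
    bus lines relative to the previous value `prev`, send its complement
    and set the invert flag instead. The receiver undoes the inversion
    using the flag, so at most width/2 lines ever switch at once."""
    transitions = bin(prev ^ word).count("1")  # lines that would flip
    if transitions > width // 2:
        return (~word) & ((1 << width) - 1), 1  # complemented data, flag set
    return word, 0
```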
Abstract:
This paper presents an image-based rendering system using algebraic relations between different views of an object. The system uses pictures of an object taken from known positions. Given three such images, it can generate "virtual" ones as the object would look from any position near the ones the input images were taken from. The extrapolation from the example images can be up to about 60 degrees of rotation. The system is based on the trilinear constraints that bind any three views of an object. As a side result, we propose two new methods for camera calibration; we developed and used one of them. We implemented the system and tested it on real images of objects and faces. We also show experimentally that even when only two images taken from unknown positions are given, the system can be used to render the object from other viewpoints, as long as we have a good estimate of the internal parameters of the camera used and are able to find good correspondences between the example images. In addition, we present the relation between these algebraic constraints and a factorization method for shape and motion estimation, and as a result we propose a method for motion estimation in the special case of orthographic projection.
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items: items that require the respondent to make between-scale comparisons within each item, with the selected option determining the scale to which the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the same total score, which each can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this item format, since the resulting data are marked by the associated unfavourable statistical properties; nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data-collection methods, with the ultimate objective of formulating a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of this second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
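Compositional data analysis typically works through log-ratio transforms that map sum-constrained scores into unconstrained coordinates where parametric statistics apply. A minimal sketch of the centred log-ratio (clr) transform is below; whether the study uses clr specifically is an assumption on our part:

```python
import math

def clr(composition):
    """Centred log-ratio transform of a composition of strictly positive
    parts: the log of each part relative to the geometric mean. The
    result is unconstrained (its coordinates sum to zero), so standard
    parametric tests can be applied to it."""
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [lx - g for lx in logs]
```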