Abstract:
This paper describes a novel framework for the automatic segmentation of primary tumors and their boundaries from brain MRIs using morphological filtering techniques. The method uses T2-weighted and T1 FLAIR images. The approach is simpler, more accurate and less time-consuming than existing methods. It was tested on fifty patients with tumors of different types, shapes, image intensities and sizes, and produced better results; the results were validated against ground-truth images provided by the radiologist. Segmentation of the tumor and detection of its boundary are important because they can be used for surgical planning, treatment planning, textural analysis, 3-dimensional modeling and volumetric analysis.
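As a rough illustration of this kind of pipeline (not the authors' exact method), the sketch below isolates a bright region in a normalized 2-D MRI slice with a threshold followed by morphological filtering and boundary tracing; the threshold value, structuring-element size and use of scikit-image are assumptions made for the example.

```python
# Illustrative sketch only: threshold + morphological filtering to isolate a
# bright region and trace its boundary. Parameter values are assumptions,
# not the authors' settings.
import numpy as np
from skimage import morphology, measure

def segment_bright_region(slice_2d, threshold=0.7, min_size=100):
    """Segment the largest bright region of a normalized MRI slice."""
    mask = slice_2d > threshold                                   # crude intensity threshold
    mask = morphology.binary_opening(mask, morphology.disk(3))    # remove speckle
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    mask = morphology.binary_closing(mask, morphology.disk(3))    # fill small gaps
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask, []
    largest = labels == (np.argmax(np.bincount(labels.ravel())[1:]) + 1)
    boundary = measure.find_contours(largest.astype(float), 0.5)  # tumor boundary
    return largest, boundary
```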
Abstract:
Super-resolution is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed, and it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated by this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, a lifting scheme is used for the implementation of directionlets. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey-scale images, is extended to colour images and noisy images.
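As a minimal sketch of the wavelet-domain idea (a simple zero-padding baseline, not the learning-based method of the thesis), the code below treats the LR image as the approximation band of the unknown HR image and inverts the 2-D DWT; the wavelet choice and the PyWavelets dependency are assumptions.

```python
# Minimal sketch of wavelet-domain 2x super-resolution by detail-band
# estimation ("wavelet zero padding" baseline), not the thesis algorithm.
import numpy as np
import pywt

def wavelet_sr_2x(lr_image, wavelet="db4"):
    """Upscale a grey-scale image by roughly 2x in the wavelet domain."""
    lr = lr_image.astype(float)
    # Unknown high-frequency bands are set to zero here; learning-based
    # methods would instead predict them from a training database.
    zeros = np.zeros_like(lr)
    # The factor of 2 compensates the DC gain of a 2-D orthonormal DWT, so
    # treating the LR image as the approximation band keeps the brightness.
    hr = pywt.idwt2((2.0 * lr, (zeros, zeros, zeros)), wavelet)
    return hr
```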
Abstract:
The thesis explores the area of still-image compression. Image compression techniques can be broadly classified into lossless and lossy compression. The most common lossy compression techniques are based on transform coding, vector quantization and fractals. Transform coding is the simplest of these and generally employs reversible transforms such as the DCT, DWT, etc. The Mapped Real Transform (MRT) is an evolving integer transform based on real additions alone. The present research work aims at developing new image compression techniques based on the MRT. Most transform coding techniques employ fixed block-size image segmentation, usually 8×8. Hence, fixed block-size transform coding is implemented using the MRT, and the merits and demerits are analyzed for both 8×8 and 4×4 blocks. The N² unique MRT coefficients for each block are computed using templates. Considering the merits and demerits of fixed block-size transform coding techniques, a hybrid form of these techniques is implemented to improve compression performance. The performance of the hybrid coder is found to be better than that of the fixed block-size coders; thus, if the block size is made adaptive, the performance can be improved further. In adaptive block-size coding, the block size may vary from the size of the image down to 2×2, so computing the MRT using templates is impractical due to memory requirements. An adaptive transform coder based on the Unique MRT (UMRT), a compact form of the MRT, is therefore implemented to obtain better performance in terms of PSNR and HVS quality. The suitability of the MRT for vector quantization of images is then investigated, and a UMRT-based Classified Vector Quantization (CVQ) scheme is implemented; the edges in the images are identified and classified using a UMRT-based criterion. Based on the above experiments, a new technique named "MRT-based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ)" is developed. Its performance is evaluated and compared against existing techniques. A comparison with standard JPEG and Shapiro's well-known Embedded Zerotree Wavelet (EZW) coder shows that the proposed technique gives better performance for the majority of images.
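The sketch below shows fixed 8×8 block-size transform coding in its generic form; the DCT stands in for the MRT (whose template-based computation is not reproduced here), and the block size and quantization step are assumptions made for illustration.

```python
# Sketch of fixed 8x8 block-size transform coding. The DCT stands in for the
# MRT described in the thesis; block size and quantization step are assumed.
import numpy as np
from scipy.fftpack import dctn, idctn

def block_transform_code(image, block=8, q_step=16.0):
    """Quantize each block in the transform domain and reconstruct."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    # Edge regions that do not fill a whole block are left zero in this sketch.
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = image[i:i + block, j:j + block].astype(float)
            coeffs = dctn(blk, norm="ortho")             # forward transform
            coeffs = np.round(coeffs / q_step) * q_step  # uniform quantization
            out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
    return out
```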
Abstract:
HINDI
Abstract:
The aim of the thesis was to design and develop spatially adaptive denoising techniques, with edge and feature preservation, for images corrupted with additive white Gaussian noise and for SAR images affected by speckle noise. Image denoising is a well-researched topic and has found multifaceted applications in our day-to-day life. Image denoising based on multiresolution analysis using the wavelet transform has received considerable attention in recent years. The directionlet-based denoising schemes presented in this thesis are effective in preserving image-specific features such as edges and contours during denoising. The scope of this research remains open in areas such as further optimization in terms of speed and extension of the techniques to related areas such as colour-image and video denoising. Such studies would further augment the practical use of these techniques.
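For a concrete baseline of the multiresolution approach (standard wavelet soft thresholding with the universal threshold, not the spatially adaptive directionlet schemes of the thesis), a sketch follows; the wavelet, number of levels and PyWavelets dependency are assumptions.

```python
# Illustrative wavelet soft-thresholding denoiser (VisuShrink-style), shown
# only to make the general multiresolution denoising scheme concrete.
import numpy as np
import pywt

def wavelet_denoise(noisy, wavelet="db8", levels=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    # Estimate the noise standard deviation from the finest diagonal band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))   # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```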
Abstract:
Magnetism and magnetic materials have been playing a leading role in improving the quality of life. They are increasingly being used in a wide variety of applications ranging from compasses to modern technological devices. Metallic glasses occupy an important position among magnetic materials. They are important both from a scientific and from an application point of view, since they represent an amorphous form of condensed matter with significant deviation from thermodynamic equilibrium. Metallic glasses with good soft magnetic properties are widely used in tape-recorder heads, cores of high-power transformers and metallic shields. Superconducting metallic glasses are used to produce high magnetic fields and the magnetic levitation effect. Upon heat treatment, they undergo structural relaxation leading to subtle rearrangements of the constituent atoms. This leads to densification of the amorphous phase and subsequent nanocrystallisation. The short-range structural relaxation phenomenon gives rise to significant variations in physical, mechanical and magnetic properties. Magnetic amorphous alloys of Co-Fe exhibit excellent soft magnetic properties, which make them promising candidates for applications as transformer cores, sensors and actuators. With the advent of microminiaturization and nanotechnology, thin-film forms of these alloys are sought after as soft underlayers for perpendicular recording media. The thin-film forms of these alloys can also be used for the fabrication of magnetic micro-electro-mechanical systems (magnetic MEMS). In bulk, they are drawn in the form of ribbons, often by melt spinning. The main constituents of these alloys are Co, Fe, Ni, Si, Mo and B. Mo acts as the grain-growth inhibitor, and Si and B facilitate the amorphous nature of the alloy structure. The ferromagnetic phases such as Co-Fe and Fe-Ni in the alloy composition determine the soft magnetic properties. The grain correlation length, a measure of the grain size, often determines the soft magnetic properties of these alloys. Amorphous alloys can be restructured into their nanocrystalline counterparts by different techniques. The structure of a nanocrystalline material consists of nanosized ferromagnetic crystallites embedded in an amorphous matrix. When the amorphous phase is ferromagnetic, it facilitates exchange coupling between the nanocrystallites. This exchange coupling results in the vanishing of magnetocrystalline anisotropy, which improves the soft magnetic properties. From a fundamental perspective, the exchange correlation length and the grain size are the deciding factors that determine the magnetic properties of these nanocrystalline materials. In thin films, surfaces and interfaces predominantly decide the bulk properties, and hence tailoring the surface roughness and morphology of the film can result in modified magnetic properties. Surface modifications can be achieved by thermal annealing at various temperatures. Ion irradiation is an alternative tool for modifying the surface and structural properties. The surface evolution of a thin film under swift heavy ion (SHI) irradiation is the outcome of different competing mechanisms: sputtering induced by the SHI, followed by a surface-roughening process, and a material-transport-induced smoothening process.
The impingement of ions at different fluences on the alloy is bound to produce systematic microstructural changes, and this can effectively be used for tailoring magnetic parameters, namely the coercivity, saturation magnetization, magnetic permeability and remanence of these materials. Swift heavy ion irradiation is a novel and ingenious tool for surface modification, which eventually leads to changes in the bulk as well as the surface magnetic properties. SHI has been widely used as a method for the creation of latent tracks in thin films. The bombardment of SHI modifies the surfaces or interfaces or creates defects, which induce strain in the film. These changes have a profound influence on the magnetic anisotropy and the magnetisation of the specimen. Inducing structural and morphological changes by thermal annealing and swift heavy ion irradiation, which in turn induce changes in the magnetic properties of these alloys, is thus one of the motivations of this study. Multiferroics and magnetoelectrics are a class of functional materials with wide application potential and are of great interest to materials scientists and engineers. Magnetoelectric materials combine both magnetic and ferroelectric properties in a single specimen. The dielectric properties of such materials can be controlled by the application of an external magnetic field, and the magnetic properties by an electric field. Composites with magnetic and piezo/ferroelectric individual phases are found to have a strong magnetoelectric (ME) response at room temperature and hence are preferred to single-phase multiferroic materials. Current research in this class of materials is directed towards optimization of the ME coupling by tailoring the piezoelectric and magnetostrictive properties of the two individual components of ME composites. The magnetoelectric coupling constant (MECC), α_ME, is the parameter that decides the extent of interdependence of the magnetic and electric responses of the composite structure. Extensive investigations have been carried out on bulk composites possessing giant ME coupling. These materials are fabricated either by gluing the individual components to each other or by mixing the magnetic material into a piezoelectric matrix. The most extensively investigated material combinations use Lead Zirconate Titanate (PZT) or Lead Magnesium Niobate-Lead Titanate (PMN-PT) as the piezoelectric phase and Terfenol-D as the magnetostrictive phase, and the coupling is measured in different configurations such as transverse, longitudinal and in-plane longitudinal. The fabrication of a lead-free multiferroic composite with a strong ME response is the need of the hour from a device-application point of view. A multilayer structure is expected to be far superior to bulk composites in terms of ME coupling, since the piezoelectric (PE) layer can easily be poled electrically to enhance the piezoelectricity and hence the ME effect. The giant magnetostriction reported in Co-Fe thin films makes them an ideal candidate for the ferromagnetic component, and BaTiO3, a well-known ferroelectric material with improved piezoelectric properties, serves as the ferroelectric component. The multilayer structure BaTiO3-CoFe-BaTiO3 is an ideal system for understanding the fundamental physics behind the ME coupling mechanism. A giant magnetoelectric coupling coefficient is anticipated for these BaTiO3-CoFe-BaTiO3 multilayer structures, which makes them ideal candidates for cantilever applications in magnetic MEMS/NEMS devices.
SrTiO3 is an incipient ferroelectric material which remains paraelectric down to 0 K in its pure, unstressed form. Recently, a few studies have shown that ferroelectricity can be induced by the application of stress or by chemical/isotopic substitution. The search for room-temperature magnetoelectric coupling in SrTiO3-CoFe-SrTiO3 multilayer structures is of fundamental interest. Yet another motivation of the present work is to fabricate multilayer structures consisting of CoFe/BaTiO3 and CoFe/SrTiO3 for possible giant ME coupling coefficient (MECC) values. These structures are lead-free and hence promising candidates for MEMS applications. Elucidating the mechanism behind the giant MECC is also part of the objective of this investigation.
Abstract:
We present a new algorithm called TITANIC for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.
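To make the connection the algorithm exploits concrete: the intents of a formal context are exactly its closed attribute sets, and each intent's support is the number of objects that share it. The brute-force enumeration below illustrates that correspondence on an invented toy context; it is not the TITANIC algorithm itself.

```python
# Sketch of the link between concept lattices and (closed) itemsets: intents
# are closed attribute sets, and their support is the size of their extent.
# Brute-force enumeration for illustration only, not the TITANIC algorithm.
from itertools import combinations

def closure(attrs, context):
    """Closure of an attribute set: attributes common to all objects having attrs."""
    extent = [obj for obj, row in context.items() if attrs <= row]
    if not extent:
        return frozenset().union(*context.values())   # closure of empty extent
    return frozenset.intersection(*(context[obj] for obj in extent))

def concept_intents(context):
    """Enumerate all intents (closed attribute sets) with their supports."""
    all_attrs = frozenset().union(*context.values())
    intents = {}
    for r in range(len(all_attrs) + 1):
        for cand in combinations(sorted(all_attrs), r):
            c = closure(frozenset(cand), context)
            intents[c] = sum(1 for row in context.values() if c <= row)
    return intents

# Toy formal context: objects mapped to their attribute sets.
ctx = {"o1": frozenset("ab"), "o2": frozenset("abc"), "o3": frozenset("bc")}
print(concept_intents(ctx))
```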
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusion attempts into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volume of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, Tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially further developed. In this dissertation, different approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growing topology is increased by novel approaches for the initialization of the weight vectors and by strengthening the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. Furthermore, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, network traffic data change constantly because of the concept-drift phenomenon, which in real time leads to the generation of non-stationary network data. This phenomenon is better controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model adapts optimally to the changes in the network data. In the experimental investigations the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the huge volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all the other approaches. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (e.g. overall accuracy), detecting unknown connections, and developing a real-time intrusion detection model.
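As a rough illustration of the self-organizing-map mechanism underlying GHSOM/EGHSOM-style classifiers (not the dissertation's EGHSOM, NNB or update model), the sketch below trains a plain SOM on "normal" connection vectors and flags a connection as unknown when its quantization error exceeds a threshold, echoing the classification-confidence margin idea; the grid size, learning schedule, threshold and synthetic data are assumptions.

```python
# Minimal SOM anomaly detector: connection vectors are mapped to their best-
# matching unit, and a large quantization error flags them as unknown.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0):
    n_units, dim = grid[0] * grid[1], data.shape[1]
    weights = rng.random((n_units, dim))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))      # neighbourhood
            weights += lr * h[:, None] * (x - weights)
        lr *= 0.9
        sigma = max(0.5, sigma * 0.9)
    return weights

def anomaly_score(weights, x):
    """Quantization error of a connection vector: distance to its BMU."""
    return np.min(np.linalg.norm(weights - x, axis=1))

# Usage: train on normal connection vectors, flag large quantization errors.
normal = rng.random((500, 10))
som = train_som(normal)
threshold = np.quantile([anomaly_score(som, x) for x in normal], 0.99)
print(anomaly_score(som, rng.random(10) * 5) > threshold)  # likely True
```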
Abstract:
A didactic unit for English built around a cross-curricular topic: racism. This topic creates a positive climate of respect and collaboration that facilitates teamwork. It highlights the role of the foreign language as an instrument of communication and cooperation between different countries and peoples. The unit covers the four communicative skills (listening, speaking, reading and writing) for the upper levels of secondary education, using authentic audiovisual materials not created by the teachers for the occasion (the audiovisual publication Speak-up, TIME magazine, etc.).
Abstract:
Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee a chip has constant output current. Transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at smaller costs.
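One classic coding scheme of this kind, shown here purely as an illustration (the specific code used in the paper is not reproduced), is a constant-weight code: every codeword drives the same number of lines high, so the total output current stays constant as with differential signalling, but fewer extra wires are needed. The sketch below maps 8 data bits onto 12-wire codewords of weight 6.

```python
# Illustrative constant-weight coding sketch: 8 data bits map to 12-wire
# weight-6 codewords (C(12,6) = 924 >= 256), versus 16 wires for full
# differential signalling; every codeword drives exactly 6 lines high.
from itertools import combinations

N_WIRES, WEIGHT, DATA_BITS = 12, 6, 8

# Enumerate weight-6 codewords in a fixed order and keep the first 2^8.
codewords = []
for ones in combinations(range(N_WIRES), WEIGHT):
    codewords.append(sum(1 << i for i in ones))
    if len(codewords) == 1 << DATA_BITS:
        break
decode_table = {w: i for i, w in enumerate(codewords)}

def encode(byte):
    return codewords[byte]          # 12-bit word, always exactly 6 ones

def decode(word):
    return decode_table[word]

assert all(bin(encode(b)).count("1") == WEIGHT for b in range(256))
assert all(decode(encode(b)) == b for b in range(256))
```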
Abstract:
This paper presents an image-based rendering system using algebraic relations between different views of an object. The system uses pictures of an object taken from known positions. Given three such images, it can generate "virtual" ones showing how the object would look from any position near those from which the input images were taken. The extrapolation from the example images can be up to about 60 degrees of rotation. The system is based on the trilinear constraints that bind any three views of an object. As a side result, we propose two new methods for camera calibration; we developed and used one of them. We implemented the system and tested it on real images of objects and faces. We also show experimentally that even when only two images taken from unknown positions are given, the system can be used to render the object from other viewpoints, as long as we have a good estimate of the internal parameters of the camera used and we are able to find good correspondences between the example images. In addition, we present the relation between these algebraic constraints and a factorization method for shape and motion estimation. As a result, we propose a method for motion estimation in the special case of orthographic projection.
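For the factorization mentioned at the end, a minimal rank-3 sketch in the Tomasi-Kanade style is given below as a generic illustration, not the authors' exact formulation; the input is assumed to be a 2F×P matrix of tracked image points under orthographic projection.

```python
# Sketch of rank-3 factorization for shape and motion under orthographic
# projection: centred measurements factor into motion (2F x 3) and shape (3 x P).
import numpy as np

def factor_shape_motion(W):
    """Factor measurements W (2F x P) into motion and shape up to an affine ambiguity."""
    # Remove the per-row mean (translation) so W becomes approximately rank 3.
    W0 = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    # Keep the three dominant components; the remaining affine ambiguity would
    # normally be resolved with metric (orthonormality) constraints.
    motion = U[:, :3] * np.sqrt(s[:3])
    shape = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return motion, shape
```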
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items require the respondent to make between-scale comparisons within each item; the selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the total score, which each respondent can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle, it also allows for parametric testing in the evaluation.
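As a small illustration of the compositional-data machinery such a comparison relies on (a generic sketch, not the study's actual analysis), the centred log-ratio transform below maps fixed-total scale scores, in which only the ratios carry information, to unconstrained coordinates; the example scores and the zero-replacement constant are assumptions.

```python
# Illustrative centred log-ratio (clr) transform, a standard compositional
# data analysis tool: ipsative scale scores sum to a fixed total, so the clr
# maps them to coordinates suitable for ordinary statistical methods.
import numpy as np

def clr(scores):
    """Centred log-ratio transform of one respondent's compositional scores."""
    x = np.asarray(scores, dtype=float)
    x = np.where(x == 0, 0.5, x)          # simple zero replacement for the sketch
    g = np.exp(np.mean(np.log(x)))        # geometric mean
    return np.log(x / g)

# Two respondents with the same fixed total (e.g. 30 points over 3 scales):
print(clr([10, 15, 5]))
print(clr([20, 5, 5]))
```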