938 results for Computer forensic analysis
Abstract:
A method for computer-aided diagnosis of microcalcification clusters in mammogram images is presented. Microcalcification clusters, which are an early sign of breast cancer, appear as isolated bright spots in mammograms and therefore correspond to local maxima of the image. The local maxima of the image are first detected and then ranked according to a higher-order statistical test performed over the subband domain data.
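The detect-then-rank idea described above can be illustrated with a short sketch. The code below is only a simplified illustration, not the paper's method: `peak_local_max` finds candidate bright spots, a one-level wavelet decomposition (via `pywt`) supplies the subband data, and the kurtosis of the detail coefficients around each candidate stands in for the higher-order statistical test; the wavelet, window size and statistic are assumptions.

```python
# Sketch: detect local maxima and rank them by a higher-order statistic
# (kurtosis) of wavelet subband coefficients. Illustrative only; the
# wavelet ('db2'), window size and statistic are assumptions.
import numpy as np
import pywt
from scipy.stats import kurtosis
from skimage.feature import peak_local_max

def rank_candidates(image, window=4):
    # Candidate microcalcifications = local maxima of the mammogram.
    peaks = peak_local_max(image, min_distance=5)

    # One-level 2-D wavelet decomposition; keep the diagonal detail subband.
    _, (_, _, hh) = pywt.dwt2(image, 'db2')

    scores = []
    for r, c in peaks:
        # The subband is roughly half resolution, so halve the coordinates.
        rr, cc = r // 2, c // 2
        patch = hh[max(rr - window, 0): rr + window,
                   max(cc - window, 0): cc + window]
        # Higher-order statistic of the subband neighbourhood.
        scores.append(kurtosis(patch, axis=None))
    order = np.argsort(scores)[::-1]            # highest kurtosis first
    return peaks[order], np.asarray(scores)[order]
```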
Abstract:
Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress predictions within the element domain, severe stress oscillations and poor convergence. An approach that can potentially overcome many of these problems is to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element not only has nodal equilibrium of forces but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than existing elements.

In this thesis, a new family of finite elements in which the assumed displacement functions satisfy the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using them in the generation of element stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated.

For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations of the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons with theoretical closed-form solutions as well as with results obtained from existing finite elements have also been made. The new elements are found to perform well in all the situations considered, and solutions in all cases converge correctly to the exact values. In many cases, convergence is faster than with other existing finite elements. The behaviour of these field consistent elements should generate considerable interest amongst users of finite elements.
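As a small illustration of the central idea of displacement functions that satisfy the stress field equilibrium equations, the sketch below uses SymPy to verify that a classical pure-bending polynomial displacement field satisfies the two plane-stress equilibrium equations. The field and constitutive relations are textbook values, not the SFCNQ/SFCSS/SFCFP formulations of the thesis.

```python
# Sketch: symbolic check that a polynomial displacement field satisfies
# the plane-stress equilibrium equations (no body forces). The field is
# the classical pure-bending solution, not the thesis' element fields.
import sympy as sp

x, y, E, nu, a = sp.symbols('x y E nu a')

# Assumed polynomial displacement field (pure bending)
u = a * x * y
v = -sp.Rational(1, 2) * a * (x**2 + nu * y**2)

# Strains
eps_x = sp.diff(u, x)
eps_y = sp.diff(v, y)
gam_xy = sp.diff(u, y) + sp.diff(v, x)

# Plane-stress constitutive relations
k = E / (1 - nu**2)
sig_x = k * (eps_x + nu * eps_y)
sig_y = k * (eps_y + nu * eps_x)
tau_xy = E / (2 * (1 + nu)) * gam_xy

# Equilibrium: d(sig_x)/dx + d(tau_xy)/dy = 0 and d(tau_xy)/dx + d(sig_y)/dy = 0
eq1 = sp.simplify(sp.diff(sig_x, x) + sp.diff(tau_xy, y))
eq2 = sp.simplify(sp.diff(tau_xy, x) + sp.diff(sig_y, y))
print(eq1, eq2)   # both simplify to 0
```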
Abstract:
This thesis deals with the use of simulation as a problem-solving tool for a number of logistic-system-related problems, more specifically studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems; there is therefore a need for methodologies for conceptual modelling which help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port and an airport terminal, were selected as cases for this study. The standard methodology for simulation development, consisting of system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results and reporting of findings, was carried out. We found that the systems could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to three important logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic categories and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a workable approach which reduces the number of models needed to study different terminal-related problems.
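The kind of discrete-event terminal model referred to above can be sketched in a few lines. The toy example below uses SimPy rather than Extend (an assumption made purely for illustration): trucks arrive at a container terminal, queue for one of a few cranes and are served; the arrival and service times are invented placeholders, not data from the thesis.

```python
# Sketch: a toy discrete-event model of a container terminal in SimPy
# (the thesis models were built in Extend; all parameters here are
# invented placeholders for illustration).
import random
import simpy

RANDOM_SEED, SIM_TIME, NUM_CRANES = 42, 8 * 60, 2   # minutes, cranes

def truck(env, name, cranes, served):
    with cranes.request() as req:            # wait for a free crane
        yield req
        yield env.timeout(random.expovariate(1 / 12.0))  # ~12 min service
        served.append((name, env.now))

def arrivals(env, cranes, served):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1 / 8.0))   # ~8 min headway
        i += 1
        env.process(truck(env, f'truck-{i}', cranes, served))

random.seed(RANDOM_SEED)
env = simpy.Environment()
cranes = simpy.Resource(env, capacity=NUM_CRANES)
served = []
env.process(arrivals(env, cranes, served))
env.run(until=SIM_TIME)
print(f'{len(served)} trucks served in {SIM_TIME} minutes')
```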
Abstract:
Computational biology is a research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. Data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organised in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions and their respective contributions to the same pathways. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. Data mining techniques are essential to identify such patterns from gene expression data. Clustering is an important data mining technique for the analysis of gene expression data; to overcome the problems associated with clustering, biclustering has been introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix: clustering is a global model whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the gene expression data matrix, and biclusters are not disjoint. Computing biclusters is costly because one has to consider all combinations of rows and columns in order to find all the biclusters; the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.

All these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters, validated against the Gene Ontology database, are identified by all these algorithms. All the algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance achieved by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
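Since all of the proposed algorithms score candidate biclusters by their mean squared residue (MSR), a compact sketch of that measure is given below; the example matrix and the threshold value are arbitrary placeholders for illustration.

```python
# Sketch: mean squared residue (MSR) of a bicluster, the coherence
# measure used to accept or reject candidate submatrices.
import numpy as np

def mean_squared_residue(sub):
    """MSR of a bicluster given as a 2-D array (rows = genes, cols = conditions)."""
    row_mean = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_mean = sub.mean(axis=0, keepdims=True)   # a_Ij
    all_mean = sub.mean()                        # a_IJ
    residue = sub - row_mean - col_mean + all_mean
    return np.mean(residue ** 2)

# A perfectly additive (coherent) bicluster has MSR == 0.
bic = np.array([[1.0, 2.0, 4.0],
                [3.0, 4.0, 6.0],
                [0.5, 1.5, 3.5]])
print(mean_squared_residue(bic))        # 0.0
print(mean_squared_residue(bic) < 300)  # e.g. accept against a placeholder MSR threshold
```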
Abstract:
This thesis is entitled "Journal Productivity in Fishery Science: An Informetric Analysis". The format of the thesis was determined by the analyses carried out and the way the results of the study are presented; the thesis is divided into the chapters mentioned below. Chapter 1 gives an overview of the topic of research: the introduction establishes the relevance of the topic, defines the problem and sets out the objectives of the study, the hypotheses, the methods of data collection and analysis, and the layout of the thesis. Chapter 2 provides a detailed account of the subject of fishery science and its development; a comprehensive outline is given along with its definition, scope, classification, development and sources of information. Chapter 3 covers the method of study used in this research and the related literature review. Chapter 4 gives details of the method adopted for collecting samples for the study, the data collection and the organisation of the data; the methods are based on the availability of data, the period covered and the objectives of the research undertaken. The description, analyses and results of the study are also covered in this chapter.
Abstract:
The application of computer-vision-based quality control has been slowly but steadily gaining importance, mainly due to the speed with which results are obtained and to the non-destructive nature of the testing; in food applications it also does not contribute to contamination. However, computer vision applications in quality control require appropriate software for image analysis. Even though computer-vision-based quality control has several advantages, its application has limitations as to the type of work that can be done, particularly in the food industries; selective applications, however, can be highly advantageous and very accurate. Computer-vision-based image analysis can be used for morphometric measurements of fish with the same accuracy as the existing conventional method. The method is non-destructive and non-contaminating, which is an advantage in seafood processing, and the images can be stored in archives and retrieved at any time for biologists to carry out morphometric studies. Computer vision and subsequent image analysis can also be used for measurements of various food products to assess uniformity of size. One product, namely cutlets, and product ingredients, namely coating materials such as bread crumbs and rava, were selected for the study. Computer-vision-based image analysis was used to measure the length, width and area of cutlets, and the width of coating materials such as bread crumbs was also measured. Computer imaging and subsequent image analysis can be used very effectively in quality evaluation of product ingredients in food processing; measurement of the width of coating materials can establish the uniformity of particles or the lack of it. The application of image analysis to bacteriological work was also carried out.
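A minimal sketch of the kind of image-based size measurement described above (length, width and area of an object such as a cutlet) is shown below. It assumes a scanned colour image with a known pixel-to-millimetre calibration, and uses Otsu thresholding plus region properties as stand-ins for whatever procedure the thesis actually used; the file name and calibration value are hypothetical.

```python
# Sketch: non-destructive size measurement of a food item from an image.
# Otsu thresholding + region properties are illustrative choices; the
# pixel-to-millimetre scale (px_per_mm) is an assumed calibration value.
from skimage import io, color, filters, measure

def measure_item(path, px_per_mm=10.0):
    gray = color.rgb2gray(io.imread(path))
    mask = gray < filters.threshold_otsu(gray)       # item darker than background
    labels = measure.label(mask)
    regions = measure.regionprops(labels)
    item = max(regions, key=lambda r: r.area)        # largest object = the item
    return {
        'length_mm': item.major_axis_length / px_per_mm,
        'width_mm':  item.minor_axis_length / px_per_mm,
        'area_mm2':  item.area / px_per_mm ** 2,
    }

# Example (hypothetical file name):
# print(measure_item('cutlet.png'))
```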
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which allows high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates, and for the reversible logic implementation the speed-up factor of the hybrid adder rises above 10 when the number of decimal digits is more than 25. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible conservative Fredkin gates, which makes them suitable for VLSI circuits.
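For readers unfamiliar with the reversible conservative Fredkin (controlled-swap) gate on which these adders are built, the short sketch below shows its behaviour in plain Python: applying the gate twice returns the inputs (reversibility) and the number of 1s is preserved (conservativeness). It illustrates only the gate itself, not the carry-select or hybrid adder circuits of the paper.

```python
# Sketch: the Fredkin (controlled-swap) gate used as the building block
# of reversible conservative logic. Only the gate itself is shown here.
from itertools import product

def fredkin(c, a, b):
    """Swap a and b when the control c is 1; identity otherwise."""
    return (c, b, a) if c else (c, a, b)

for bits in product((0, 1), repeat=3):
    out = fredkin(*bits)
    assert fredkin(*out) == bits          # reversible: the gate is its own inverse
    assert sum(out) == sum(bits)          # conservative: number of 1s preserved
    print(bits, '->', out)
```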
Abstract:
The objective of the study is to develop a handwritten character recognition system that can recognise all the characters in the modern script of the Malayalam language at a high recognition rate.
Abstract:
This paper presents a writer identification scheme for Malayalam documents. As the success rate of such a scheme is highly dependent on the features extracted from the documents, the process of feature selection and extraction is highly relevant. The paper describes a set of novel features developed exclusively for the Malayalam language. The features were studied in detail, resulting in a comparative study of all the features. The features are fused to form the feature vector, or knowledge vector, which is then used in all phases of the writer identification scheme. The scheme has been tested on a test bed of 280 writers, of which 50 writers have only one page, 215 have at least 2 pages and 15 have at least 4 pages. For a comparative evaluation, the test was also conducted using the WD-LBP method. A recognition rate of around 95% was obtained for the proposed approach.
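The overall flow of such a scheme (extract per-document features, fuse them into a knowledge vector, match against enrolled writers) can be sketched as below. The two feature extractors are generic placeholders rather than the Malayalam-specific features of the paper, and nearest-neighbour matching is only one possible decision rule.

```python
# Sketch: fusing document features into a "knowledge vector" and matching
# a questioned document to enrolled writers. The feature extractors are
# generic placeholders, not the Malayalam-specific features of the paper.
import numpy as np

def ink_density_zones(img, grid=4):
    """Fraction of ink pixels in each cell of a grid x grid partition of a binary image."""
    h, w = img.shape
    cells = [img[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.asarray(cells)

def row_profile(img, bins=16):
    """Normalised horizontal projection profile, resampled to a fixed length."""
    prof = img.sum(axis=1).astype(float)
    idx = np.linspace(0, len(prof) - 1, bins).astype(int)
    prof = prof[idx]
    return prof / (prof.sum() + 1e-9)

def knowledge_vector(img):
    # Feature fusion: concatenate the individual feature vectors.
    return np.concatenate([ink_density_zones(img), row_profile(img)])

def identify(query_img, enrolled):
    """enrolled: dict mapping writer_id -> list of binary page images."""
    q = knowledge_vector(query_img)
    dist = {wid: min(np.linalg.norm(q - knowledge_vector(p)) for p in pages)
            for wid, pages in enrolled.items()}
    return min(dist, key=dist.get)        # nearest-neighbour decision
```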
Abstract:
Analysis by reduction is a linguistically motivated method for checking the correctness of a sentence. It can be modelled by restarting automata. In this paper we propose a method for learning restarting automata which are strictly locally testable (SLT-R-automata). The method is based on the concept of identification in the limit from positive examples only. We also characterize the class of languages accepted by SLT-R-automata with respect to the Chomsky hierarchy.
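The underlying idea of identification in the limit from positive examples only can be illustrated with the classical learner for strictly k-testable word languages: collect the short prefixes, suffixes and length-k factors occurring in the sample and accept exactly the strings built from them. The sketch below is this word-language analogue, not the SLT-R-automata construction proposed in the paper; the sample strings and k are arbitrary.

```python
# Sketch: learning a strictly k-testable language from positive examples
# only (word-language analogue of the SLT idea; not the SLT-R-automata
# construction of the paper).
def learn_k_testable(sample, k):
    prefixes, suffixes, factors = set(), set(), set()
    for w in sample:
        prefixes.add(w[:k - 1])
        suffixes.add(w[-(k - 1):] if len(w) >= k - 1 else w)
        factors.update(w[i:i + k] for i in range(len(w) - k + 1))
    return prefixes, suffixes, factors

def accepts(w, model, k):
    prefixes, suffixes, factors = model
    return (w[:k - 1] in prefixes
            and (w[-(k - 1):] if len(w) >= k - 1 else w) in suffixes
            and all(w[i:i + k] in factors for i in range(len(w) - k + 1)))

# With more and more positive examples the hypothesis converges (in the limit).
model = learn_k_testable(['ab', 'aab', 'aaab'], k=2)
print(accepts('aaaab', model, 2), accepts('ba', model, 2))   # True False
```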
Abstract:
This thesis work is dedicated to using a computer-algebraic approach for dealing with group symmetries and studying the symmetry properties of molecules and clusters. The Maple package Bethe, created to extract and manipulate group-theoretical data and to simplify some symmetry applications, is introduced. First of all, the advantages of using Bethe to generate group-theoretical data are demonstrated. In the current version, the data of 72 frequently applied point groups can be used, together with the data for all of the corresponding double groups. The emphasis of this work is placed on the applications of this package in the physics of molecules and clusters. Apart from the analysis of the spectral activity of molecules with point-group symmetry, it is demonstrated how Bethe can be used to understand field splitting in crystals or to construct the corresponding wave functions. Several examples are worked out to display some of the present features of the Bethe program. While we cannot show all the details explicitly, these examples certainly demonstrate the great potential of applying computer-algebraic techniques to study the symmetry properties of molecules and clusters. Special attention is paid in this thesis work to the flexibility of the Bethe package, which makes it possible to implement further applications of symmetry. Such implementations are quite feasible, because some of the most complicated steps of possible future applications are already realized within Bethe. For instance, the vibrational coordinates in terms of internal displacement vectors for Wilson's method, the same coordinates in terms of Cartesian displacement vectors, and the Clebsch-Gordan coefficients for the Jahn-Teller problem are generated in the present version of the program. For the Jahn-Teller problem, moreover, the use of a computer-algebraic tool seems even inevitable, because this problem demands analytical access to the adiabatic potential and therefore cannot be handled by a numerical algorithm. However, the capabilities of the Bethe package are not exhausted by the applications mentioned in this thesis work. There are various directions in which the Bethe program could be developed in the future: apart from (i) studying the magnetic properties of materials and (ii) optical transitions, interest can be pointed out for (iii) vibronic spectroscopy, among others. Implementation of these applications into the package can make Bethe a much more powerful tool.
Abstract:
Consumers are becoming more concerned about food quality, especially regarding how, when and where foods are produced (Haglund et al., 1999; Kahl et al., 2004; Alföldi et al., 2006). Therefore, during recent years there has been growing interest in methods for food quality assessment, especially in picture-developing methods as a complement to traditional chemical analysis of single compounds (Kahl et al., 2006). Biocrystallization, one of the picture-developing methods, is based on the crystallographic phenomenon that when aqueous solutions of CuCl2 dihydrate are crystallized with the addition of organic solutions originating, e.g., from crop samples, biocrystallograms with reproducible crystal patterns are generated (Kleber & Steinike-Hartung, 1959). Its output is a crystal pattern on glass plates from which different variables (numbers) can be calculated using image analysis. However, there is a lack of a standardized evaluation method to quantify the morphological features of the biocrystallogram image. Therefore, the main aims of this research are (1) to optimize an existing statistical model in order to describe all the effects that contribute to the experiment, (2) to investigate the effect of image parameters on the texture analysis of the biocrystallogram images, i.e., region of interest (ROI), color transformation and histogram matching, on samples from the project 020E170/F financed by the Federal Ministry of Food, Agriculture and Consumer Protection (BMELV); the samples are wheat and carrots from controlled field and farm trials, and (3) to relate the strongest texture-parameter effect to the visual evaluation criteria developed by a group of researchers (University of Kassel, Germany; Louis Bolk Institute (LBI), Netherlands; and Biodynamic Research Association Denmark (BRAD), Denmark) in order to clarify how the texture parameters relate to the visual characteristics of an image. The refined statistical model was implemented as an lme model with repeated measurements via crossed effects, programmed in R (version 2.1.0). The validity of the F and P values was checked against the SAS program: while the ANOVA yields the same F values, the P values are larger in R because of its more conservative approach, and the refined model yields more significant P values. The optimization of the image analysis deals with the following parameters: ROI (region of interest, the area around the geometrical center), color transformation (calculation of the one-dimensional gray-level value from the three-dimensional color information of the scanned picture, which is necessary for the texture analysis) and histogram matching (normalization of the histogram of the picture to enhance the contrast and to minimize errors from lighting conditions). The samples were wheat from the DOC trial with 4 field replicates for the years 2003 and 2005, "market samples" (organic and conventional neighbours with the same variety) for 2004 and 2005, carrots obtained from the University of Kassel (2 varieties, 2 nitrogen treatments) for the years 2004, 2005 and 2006, and "market samples" of carrots for the years 2004 and 2005. The criterion for the optimization was the repeatability of the differentiation of the samples over the different harvests (years). Different ROIs were found for different samples, which reflects the different pictures.

The color transformation that shows the differentiation most efficiently relies on the gray scale, i.e., an equal color transformation. The second dimension of the color transformation only appeared in some years, as an effect of color wavelength (hue) for carrots treated with different nitrate fertilizer levels. The best histogram matching is to a Gaussian distribution. The approach was to find a connection between the variables from the textural image analysis and the different visual criteria. The relation between the texture parameters and the visual evaluation criteria was examined only for the carrot samples, especially as these could be well differentiated by the texture analysis. It was possible to connect groups of variables of the texture analysis with groups of criteria from the visual evaluation. These selected variables were able to differentiate the samples but not to classify the samples according to the treatment. In contrast, with the visual criteria, which describe the picture as a whole, a classification was possible in 80% of the sample cases. This clearly shows the limits of the single-variable approach of the image analysis (texture analysis).
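A compact sketch of the image-analysis chain discussed above (crop an ROI around the geometric centre, convert to gray level, match the histogram to a reference, then compute texture variables) is given below. GLCM texture features are used here as representative texture variables, and the concrete parameters (ROI size, reference image, gray levels, distances, angles) are assumptions for illustration, not the settings used in the study.

```python
# Sketch: ROI crop, gray-level transformation, histogram matching and
# GLCM texture variables for a biocrystallogram image. Parameter values
# (ROI size, levels, distances, angles) are illustrative assumptions.
import numpy as np
from skimage import color, exposure
from skimage.feature import graycomatrix, graycoprops

def texture_variables(rgb_image, reference_gray, roi=512):
    gray = color.rgb2gray(rgb_image)                       # color transformation
    h, w = gray.shape
    cy, cx = h // 2, w // 2                                # geometrical centre
    patch = gray[cy - roi // 2: cy + roi // 2,
                 cx - roi // 2: cx + roi // 2]             # ROI
    patch = exposure.match_histograms(patch, reference_gray)  # histogram matching
    levels = 64
    img8 = (np.clip(patch, 0, 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```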
Abstract:
During recent years, quantum information processing and the study of N-qubit quantum systems have attracted a lot of interest, both in theory and experiment. Apart from the promise of performing efficient quantum information protocols, such as quantum key distribution, teleportation or quantum computation, these investigations have also revealed a great deal of difficulties which still need to be resolved in practice. Quantum information protocols rely on the application of unitary and non-unitary quantum operations that act on a given set of quantum mechanical two-state systems (qubits) to form (entangled) states in which the information is encoded. The overall system of qubits is often referred to as a quantum register. Today the entanglement in a quantum register is known as the key resource for many protocols of quantum computation and quantum information theory. However, despite the successful demonstration of several protocols, such as teleportation or quantum key distribution, there are still many open questions of how entanglement affects the efficiency of quantum algorithms or how it can be protected against noisy environments. To facilitate the simulation of such N-qubit quantum systems and the analysis of their entanglement properties, we have developed the Feynman program. The program package provides all necessary tools in order to define and to deal with quantum registers, quantum gates and quantum operations. Using an interactive and easily extendible design within the framework of the computer algebra system Maple, the Feynman program is a powerful toolbox not only for teaching the basic and more advanced concepts of quantum information but also for studying their physical realization in the future. To this end, the Feynman program implements a selection of algebraic separability criteria for bipartite and multipartite mixed states as well as the most frequently used entanglement measures from the literature. Additionally, the program supports work with quantum operations and their associated (Jamiolkowski) dual states. Based on the implementation of several popular decoherence models, we provide tools especially for the quantitative analysis of quantum operations. As an application of the developed tools, we further present two case studies in which the entanglement of two atomic processes is investigated. In particular, we have studied the change of the electron-ion spin entanglement in atomic photoionization and the photon-photon polarization entanglement in the two-photon decay of hydrogen. The results show that both processes are, in principle, suitable for the creation and control of entanglement. Apart from process-specific parameters like the initial atom polarization, it is mainly the process geometry which offers a simple and effective instrument to adjust the final-state entanglement. Finally, for the case of the two-photon decay of hydrogen-like systems, we study the difference between nonlocal quantum correlations, as given by the violation of the Bell inequality, and the concurrence as a true entanglement measure.
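As a small illustration of the entanglement measures mentioned above, the sketch below computes the concurrence of a two-qubit state directly in NumPy using the Wootters construction; the Feynman program itself is a Maple package, so this is only a stand-alone numerical analogue, not its implementation.

```python
# Sketch: concurrence of a two-qubit density matrix (Wootters), as a
# stand-alone numerical analogue of one of the entanglement measures
# provided by the (Maple-based) Feynman program.
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
YY = np.kron(SY, SY)

def concurrence(rho):
    rho_tilde = YY @ rho.conj() @ YY
    eigvals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(eigvals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2): concurrence 1.
phi_plus = np.zeros(4)
phi_plus[[0, 3]] = 1 / np.sqrt(2)
print(concurrence(np.outer(phi_plus, phi_plus)))      # -> 1.0

# Maximally mixed two-qubit state: separable, concurrence 0.
print(concurrence(np.eye(4) / 4))                     # -> 0.0
```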
Abstract:
The development of conceptual knowledge systems specifically requires knowledge acquisition tools within the framework of formal concept analysis. In this paper, the existing tools are presented and further developments are discussed.