993 results for PACS: mathematical techniques
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms for implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling this non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analysed in the time and frequency domains simultaneously. However, apart from the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in speech processing, image processing and biomedical applications, but few in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each application.
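As an illustration of the kind of time-frequency analysis referred to above (not the specific algorithms developed in the thesis), the following minimal Python sketch computes an STFT of a synthetic chirp; the sampling rate and signal parameters are hypothetical.

```python
import numpy as np
from scipy.signal import chirp, stft

# Hypothetical non-stationary test signal: a linear chirp sweeping 50 -> 400 Hz.
fs = 2000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
x = chirp(t, f0=50.0, t1=2.0, f1=400.0, method="linear")

# Short-Time Fourier Transform: a joint time-frequency view of the signal.
f, tau, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
magnitude = np.abs(Zxx)

# The ridge of the spectrogram approximates the instantaneous frequency law,
# something a plain Fourier spectrum of the whole record cannot show.
ridge = f[np.argmax(magnitude, axis=0)]
print("estimated frequency near start/end (Hz):", ridge[1], ridge[-2])
```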
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes have been identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the optimum machining parameters were obtained from the experiments using S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively generate and check new design solutions within the search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated using published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) are applied to optimize the machining parameters for dry turning of SS420. All the above algorithms were tested for efficiency, robustness and accuracy, and it was observed that they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB, and for each evolutionary method the optimum cutting conditions for better surface finish are provided. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial-population scheme, was developed to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures that nature uses to optimize its own systems.
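For context, the Taguchi S/N ratio referred to above has a standard "smaller-the-better" form for responses such as surface roughness, S/N = -10 log10(mean(y^2)). The sketch below computes it for purely illustrative roughness replicates; the trial runs and values are hypothetical, not the thesis data.

```python
import numpy as np

def sn_smaller_the_better(responses):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response
    such as surface roughness: S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical surface-roughness replicates (Ra, in micrometres) for three
# trial runs of an orthogonal-array experiment; values are illustrative only.
trials = {
    "run 1 (low speed, low feed)":   [1.82, 1.90, 1.78],
    "run 2 (mid speed, mid feed)":   [1.35, 1.41, 1.30],
    "run 3 (high speed, high feed)": [2.10, 2.25, 2.05],
}

for name, ra in trials.items():
    print(f"{name}: S/N = {sn_smaller_the_better(ra):.2f} dB")
# The run with the highest S/N ratio indicates the preferred level combination.
```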
Abstract:
The increasing tempo of construction activity the world over creates heavy pressure on existing land space. The quest for new and competent sites often points to the need to improve existing sites that are otherwise deemed unsuitable for conventional foundations. This is accomplished by ground improvement methods, which are employed to improve the quality of soil that is incompetent in its natural state. Among construction activities, a well-connected road network is one of the basic infrastructure requirements and plays a vital role in the fast and comfortable movement of inter-regional traffic in countries like India. One of the innovative ground improvement techniques practised all over the world is the use of geosynthetics, which include geotextiles, geomembranes, geogrids, etc. They offer advantages such as space saving, environmental sensitivity, material availability, technical superiority, cost savings, less construction time, etc. Because of its fundamental properties, such as tensile strength, filtering and water permeability, a geotextile inserted between the base material and the subgrade can function as reinforcement, a filter medium, a separation layer and a drainage medium. Though polymeric geotextiles are used in abundant quantities, the use of natural geotextiles (like coir, jute, etc.) has yet to gain momentum. This is primarily due to the lack of research work on natural geotextiles for ground improvement, particularly in the area of unpaved roads. Coir geotextiles are best suited for low-cost applications because of their availability at low prices compared to their synthetic counterparts. The proper utilisation of coir geotextiles in various applications demands large quantities of the product, which in turn can create a boom in the coir industry. The present study aims at exploring the possibilities of utilising coir geotextiles for unpaved roads and embankments. The properties of the coir geotextiles used have been evaluated, including mass per unit area, puncture resistance, tensile strength, secant modulus, etc. The interfacial friction between soils and the three types of coir geotextiles used was also evaluated. It was found that though the parameters evaluated for coir geotextiles have low values compared to polymeric geotextiles, they are sufficient for use in unpaved roads and embankments. The frictional characteristics of coir geotextile - soil interfaces are extremely good and satisfy the conditions set by the International Geosynthetics Society for varied applications. The performance of coir geotextile reinforced subgrade was studied by conducting California Bearing Ratio (CBR) tests. Studies were made with coir geotextiles placed at different levels and also in multiple layers. The results have shown that the coir geotextile enhances the subgrade strength. A regression analysis was performed and a mathematical model was developed to predict the CBR of the coir geotextile reinforced subgrade soil as a function of the soil properties, the coir geotextile properties and the placement depth of the reinforcement. The effects of coir geotextiles on bearing capacity were studied by performing plate load tests in a test tank. This helped to understand the functioning of the geotextile as reinforcement in unpaved roads and embankments. The performance of different types of coir geotextiles with respect to placement depth in dry and saturated conditions was studied.
The results revealed that the bearing capacity of coir-reinforced soil increases irrespective of the type of coir geotextile and the saturation condition. The rut behaviour of unreinforced and coir-reinforced unpaved road sections was compared by conducting model static load tests in a test tank and also under repetitive loads in a wheel track test facility. The results showed that coir geotextiles could fulfil the functions of reinforcement and separation, both under static and repetitive loads. The rut depth was very much reduced when coir geotextiles were placed between the subgrade and the sub-base. In order to study the use of coir geotextiles in improving settlement characteristics, two types of prefabricated coir geotextile vertical drains were developed and their time-settlement behaviour was studied. Three different dispositions were tried. It was found that the coir geotextile drains were very effective in reducing consolidation time due to radial drainage, with circular drains in triangular disposition giving the maximum benefit. In the long run, degradation of the coir geotextile is expected, resulting in a soil-fibre matrix. Hence, studies pertaining to the strength and compressibility characteristics of soil - coir fibre composites were conducted. Experiments were done using coir fibres with different aspect ratios and in different proportions. The results revealed that the strength of the soil increased by 150% to 200% when mixed with 2% of fibre approximately 12 mm in length, at all compaction conditions. Also, the coefficient of consolidation increased and the compression index decreased with the addition of coir fibre. Typical design charts were prepared for the design of coir geotextile reinforced unpaved roads, and some illustrative examples are given. The results demonstrate that a considerable saving in sub-base/base thickness can be achieved with the use of coir geotextiles, which in turn would save large quantities of natural aggregates.
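As an illustration of the kind of regression model mentioned above for predicting the CBR of reinforced subgrade, the sketch below fits an ordinary least-squares model on hypothetical data; the predictors and values are assumptions for demonstration, not the study's actual variables or results.

```python
import numpy as np

# Hypothetical dataset: each row is one CBR test on coir-geotextile-reinforced
# subgrade.  Columns: geotextile tensile strength (kN/m), placement depth ratio
# (depth of geotextile / specimen height), unreinforced soil CBR (%).
X = np.array([
    [8.0,  0.2, 2.1],
    [8.0,  0.4, 2.1],
    [12.5, 0.2, 2.1],
    [12.5, 0.4, 3.0],
    [15.0, 0.2, 3.0],
    [15.0, 0.4, 3.0],
])
cbr = np.array([4.2, 3.6, 5.1, 4.4, 5.8, 4.9])   # measured reinforced CBR (%)

# Ordinary least squares fit of CBR = b0 + b1*T + b2*(d/H) + b3*CBR_unreinforced.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, cbr, rcond=None)
print("fitted coefficients (b0..b3):", np.round(coeffs, 3))

# Predict CBR for a new, hypothetical configuration.
new = np.array([1.0, 10.0, 0.3, 2.5])
print("predicted reinforced CBR (%):", round(float(new @ coeffs), 2))
```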
Abstract:
International School of Photonics, Cochin University of Science and Technology
Abstract:
Identification and control of non-linear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behaviour. Most real-world systems are nonlinear in nature, and there are wide applications for nonlinear system identification and modelling. The basic approach to analysing a nonlinear system is to build a model from its known behaviour, manifest in the form of the system output. The problem of modelling boils down to computing a suitably parameterized model representing the process; the parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified model output. While linear system identification is well established, with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and the thesis concentrates on such problems. The capability of Artificial Neural Networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model, but a comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement of practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches; comprehensiveness is achieved by utilizing well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available, modified and newly introduced techniques for nonlinear blind system modelling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison will be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported should be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for a particular context.
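As a minimal sketch of output-only (blind) identification with a neural network, the following example trains an MLP to make one-step-ahead predictions from lagged samples of a synthetic time series; the generating system, lag order and network size are assumptions, not the thesis's algorithms.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical output-only time series from an unknown nonlinear system
# (here a noisy logistic-map-like recursion, used purely for illustration).
rng = np.random.default_rng(0)
x = np.zeros(600)
x[0] = 0.3
for k in range(1, len(x)):
    x[k] = 3.7 * x[k - 1] * (1.0 - x[k - 1]) + 0.01 * rng.standard_normal()

# Build a nonlinear autoregressive (NAR) regression problem: predict x[k]
# from the previous `lags` samples of the series itself.
lags = 3
X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
y = x[lags:]
X_train, X_test = X[:450], X[450:]
y_train, y_test = y[:450], y[450:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("one-step-ahead test score (R^2):", round(model.score(X_test, y_test), 3))
```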
Abstract:
Wind energy has emerged as a major sustainable source of energy. The efficiency of wind power generation by wind turbines has improved considerably during the last three decades, yet there is still scope for maximising the conversion of wind energy into mechanical energy. In this context, wind turbine rotor dynamics has great significance. The present work aims at a comprehensive study of Horizontal Axis Wind Turbine (HAWT) aerodynamics by numerically solving the fluid dynamic equations with the help of a finite-volume Navier-Stokes CFD solver. As a more general goal, the study aims at demonstrating the capabilities of modern numerical techniques for the complex fluid dynamic problems of HAWTs; the main purpose is hence to better exploit the physics of power extraction by wind turbines. This research demonstrates the potential of an incompressible Navier-Stokes CFD method for the aerodynamic power performance analysis of a horizontal axis wind turbine. The National Renewable Energy Laboratory, USA (NREL; Technical Report NREL/CP-500-28589) had carried out experimental work aimed at the real-time performance prediction of a horizontal axis wind turbine. In addition to a comparison between the results reported by NREL and the CFD simulations, comparisons are made for the local flow angle at several stations ahead of the wind turbine blades. The comparison has shown that fairly good predictions can be made for pressure distribution and torque. Subsequently, the wind-field effects on the blade aerodynamics, as well as the blade/tower interaction, were investigated. The selected case corresponded to a 12.5 m/s upwind HAWT at zero degrees of yaw and a rotational speed of 25 rpm. The results obtained suggest that the present method can cope well with the flows encountered around wind turbines. The aerodynamic performance of the turbine and the flow details near and off the turbine blades and tower can be analysed using these results. The aerodynamic performance of airfoils differs from one another; it mainly depends on the coefficient of performance, coefficient of lift, coefficient of drag, fluid velocity and angle of attack. This study shows that the velocity is not constant for all angles of attack of different airfoils. The performance parameters are calculated analytically and compared with standardized performance tests. For different angles of attack, the stall velocity is determined for better performance of the system with respect to velocity. The research also addresses the effect of the surface roughness factor on the blade surface at various sections; the numerical results were found to be in agreement with the experimental data. A relative advantage of the theoretical aerofoil design method is that it allows many different concepts to be explored economically; such efforts are generally impractical in wind tunnels because of time and money constraints. Thus, the need for a theoretical aerofoil design method is threefold: first, for the design of aerofoils that fall outside the range of applicability of existing catalogues; second, for the design of aerofoils that more exactly match the requirements of the intended application; and third, for the economic exploration of many aerofoil concepts. From the results obtained for the different aerofoils, the velocity is not constant for all angles of attack; the results obtained for an aerofoil mainly depend on the angle of attack and velocity. The vortex generator technique was meticulously studied, with the formulation of the specification for right-angle-shaped vortex generators (VGs). The results were validated against the primary analysis phase and were found to be in good agreement with the power curve. The introduction of correctly sized VGs at appropriate locations over the blades of the selected HAWT was found to increase the power generation by about 4%.
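A standard quantity underlying such power performance analysis is the power coefficient Cp = P / (0.5 ρ A V³), the ratio of extracted power to the power available in the wind passing through the rotor disc. The sketch below evaluates it for a hypothetical operating point; the rotor size, wind speed and extracted power are illustrative assumptions, not NREL or thesis values.

```python
import math

def power_coefficient(power_w, rho, rotor_radius_m, wind_speed_ms):
    """Cp = P / (0.5 * rho * A * V^3) for a horizontal-axis wind turbine."""
    swept_area = math.pi * rotor_radius_m ** 2
    available = 0.5 * rho * swept_area * wind_speed_ms ** 3
    return power_w / available

# Hypothetical operating point: 10 m rotor diameter, 12.5 m/s wind,
# 10 kW of shaft power extracted, sea-level air density.
cp = power_coefficient(power_w=10_000.0, rho=1.225,
                       rotor_radius_m=5.0, wind_speed_ms=12.5)
print(f"Cp = {cp:.3f} (Betz limit is 16/27 = {16 / 27:.3f})")
```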
Abstract:
The thesis mainly discusses the isolation and identification of a probiotic Lactobacillus plantarum, the fermentative production of exopolysaccharide (EPS) by the strain, its purification and structural characterisation, and possible applications in the food industry and therapeutics. The studies on probiotic characterization explored the tolerance of the isolated LAB cultures to acid, bile, phenol and salt, and their mucin binding; these are some of the key factors that satisfy the criteria for probiotic strains. The important factors required for high EPS production in submerged fermentation were investigated with a collection of statistical and mathematical approaches. Chapter 5 of the thesis explains the structural elucidation of the EPS employing spectroscopic and chromatographic techniques; the studies helped in the exploration of the hetero-polysaccharide sequence from L. plantarum MTCC 9510. The thesis also explored the bioactivities of the EPS from L. plantarum. As the majority of chemical compounds identified as anti-cancerous are toxic to normal cells, the discovery and identification of new safe drugs has become an important goal of research in the biomedical sciences. The thesis explored the anti-oxidant, anti-tumour and immunomodulating properties of the EPS purified from Lactobacillus plantarum; the presence of (1,3) linkages and its molecular weight endowed the EPS with anti-oxidant, anti-tumour and immunomodulating properties under in vitro conditions.
Abstract:
After skin cancer, breast cancer accounts for the second greatest number of cancer diagnoses in women. Currently the etiologies of breast cancer are unknown, and there is no generally accepted therapy for preventing it; therefore, the best way to improve the prognosis for breast cancer is early detection and treatment. Computer aided detection (CAD) systems for detecting masses or micro-calcifications in mammograms have already been used and proven to be a potentially powerful tool, so radiologists are attracted by the effectiveness of the clinical application of CAD systems. Fractal geometry is well suited for describing the complex physiological structures that defy traditional Euclidean geometry, which is based on smooth shapes. The major contributions of this research include the development of:
• A new fractal feature to accurately classify mammograms into normal and abnormal: (i) with masses (benign or malignant), (ii) with microcalcifications (benign or malignant).
• A novel fast fractal modeling method to identify the presence of microcalcifications by fractal modeling of mammograms and then subtracting the modeled image from the original mammogram.
The performances of these methods were evaluated using different standard statistical analysis methods. The results obtained indicate that the developed methods are highly beneficial for assisting radiologists in making diagnostic decisions. The mammograms for the study were obtained from two online databases, namely MIAS (Mammographic Image Analysis Society) and DDSM (Digital Database for Screening Mammography).
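As a generic illustration of a fractal feature (not the specific feature or fast fractal modeling method developed in this work), the sketch below estimates a box-counting fractal dimension of a binary image; the test pattern is synthetic.

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Estimate the fractal (box-counting) dimension of a 2-D binary image by
    counting occupied boxes at a series of box sizes and fitting a line to
    log(count) versus log(1/size)."""
    img = np.asarray(binary_image, dtype=bool)
    sizes, counts = [], []
    box = min(img.shape) // 2
    while box >= 2:
        h = (img.shape[0] // box) * box
        w = (img.shape[1] // box) * box
        blocks = img[:h, :w].reshape(h // box, box, w // box, box)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(box)
        counts.append(max(occupied, 1))
        box //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Hypothetical test pattern: a filled disc (expected dimension close to 2).
yy, xx = np.mgrid[0:256, 0:256]
disc = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
print("estimated box-counting dimension:", round(box_counting_dimension(disc), 2))
```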
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Learning disabled children are neither slow nor mentally retarded; an affected child can have normal or above-average intelligence. The disorder can, however, make it problematic for a child to learn as quickly, or in the same way, as a child who is not affected by a learning disability. Affected children may have difficulty paying attention, with reading or letter recognition, or with mathematics; this does not mean that they are less intelligent, and in fact many children who have learning disabilities are more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long, but children with LD can be high achievers and can be taught ways to work around the learning disability. In this research work, data mining using machine learning techniques is used to analyse the symptoms of LD, establish interrelationships between them and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning and data mining techniques, with high accuracy according to the knowledge obtained from clinical information, is proposed. The basic idea of the developed knowledge-based tool is to increase the accuracy of learning disability assessment and to reduce the time required for it. Different statistical machine learning techniques in data mining are used in the study. Identifying the important parameters of LD prediction using data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom of LD are also among the objectives of this research work. The developed tool has many advantages compared to the traditional method of using checklists to determine learning disabilities. To improve the performance of the various classifiers, preprocessing methods were developed for the LD prediction system. A new system based on fuzzy and rough set models was also developed for LD prediction, and here too the importance of pre-processing is studied. A Graphical User Interface (GUI) is designed to provide an integrated knowledge-based tool for predicting LD as well as its degree; the tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed, based on various machine learning techniques: it identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general and social areas. The results are found to be very beneficial to parents, teachers and institutions, who are able to diagnose a child's problem at an early stage and arrange the proper treatment or counselling at the correct time so as to avoid academic and social losses.
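As a minimal sketch of the kind of statistical machine learning used for such symptom-based prediction (not the developed tool itself), the example below trains a random-forest classifier on hypothetical binary symptom-checklist data and reports feature importances as one estimate of the relative weight of each symptom.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical checklist data: each row is a child, each column a binary
# symptom indicator (e.g. difficulty with reading, attention, arithmetic).
rng = np.random.default_rng(1)
n_children, n_symptoms = 200, 8
X = rng.integers(0, 2, size=(n_children, n_symptoms))
# Illustrative label rule: LD flagged when several key symptoms co-occur.
y = (X[:, 0] + X[:, 2] + X[:, 5] >= 2).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      np.round(cross_val_score(clf, X, y, cv=5).mean(), 3))

clf.fit(X, y)
# Feature importances give one estimate of each symptom's relative relevance.
for i, imp in enumerate(clf.feature_importances_):
    print(f"symptom_{i}: importance = {imp:.3f}")
```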
Abstract:
The present work deals with a study of morphological operators with applications. Morphology is now a necessary tool for engineers involved with imaging applications. Morphological operations have been viewed as filters whose properties have been well studied (Heijmans, 1994). Another well-known class of non-linear filters is the class of rank order filters (Pitas and Venetsanopoulos, 1990). Soft morphological filters are a combination of morphological and weighted rank order filters (Koskinen et al., 1991; Kuosmanen and Astola, 1995). They have been introduced to improve the behaviour of traditional morphological filters in noisy environments; the idea was to slightly relax the typical morphological definitions in such a way that a degree of robustness is achieved while most of the desirable properties of typical morphological operations are maintained. Soft morphological filters are less sensitive to additive noise and to small variations in object shape than typical morphological filters: they can remove positive and negative impulse noise while preserving small details in images. Currently, mathematical morphology allows processing images to enhance fuzzy areas, segment objects, detect edges and analyse structures. The techniques developed for binary images are a major step forward in the application of this theory to gray level images. One of these techniques is based on fuzzy logic and on the theory of fuzzy sets. Fuzzy sets have proved to be strongly advantageous when representing inaccuracies, not only regarding the spatial localization of objects in an image but also the membership of a certain pixel to a given class. Such inaccuracies are inherent to real images, either because of the presence of indefinite limits between the structures or objects to be segmented within the image, because of noisy acquisitions, or because they are inherent to the image formation methods.
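As a small, generic illustration of the filters discussed above (standard grayscale morphology and a rank-order filter, the two ingredients that soft morphology blends, rather than a full soft morphological filter), the sketch below cleans impulse noise from a synthetic image; the image and noise levels are assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical noisy test image: a smooth ramp corrupted with salt-and-pepper
# (positive and negative impulse) noise.
rng = np.random.default_rng(0)
image = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = image.copy()
mask = rng.random(image.shape)
noisy[mask < 0.02] = 0          # negative impulses
noisy[mask > 0.98] = 255        # positive impulses

# Classical grayscale morphology: opening removes bright impulses,
# closing removes dark ones.
cleaned = ndimage.grey_closing(ndimage.grey_opening(noisy, size=3), size=3)

# A rank-order filter; with the rank near the middle of the 3x3 window
# it behaves like a median filter.
rank_cleaned = ndimage.rank_filter(noisy, rank=4, size=3)

for name, out in [("open-close", cleaned), ("rank filter", rank_cleaned)]:
    print(name, "mean abs error vs clean image:",
          round(float(np.mean(np.abs(out - image))), 2))
```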
Abstract:
The application of computer vision based quality control has been slowly but steadily gaining importance, mainly due to its speed in achieving results and its non-destructive nature of testing; in food applications it also does not contribute to contamination. However, computer vision applications in quality control require appropriate software for image analysis. Even though computer vision based quality control has several advantages, its application has limitations as to the type of work to be done, particularly in the food industries; selective applications, however, can be highly advantageous and very accurate. Computer vision based image analysis can be used for morphometric measurements of fish with the same accuracy as the existing conventional method. The method is non-destructive and non-contaminating, thus providing an advantage in seafood processing, and the images can be stored in archives and retrieved at any time for biologists to carry out morphometric studies. Computer vision and subsequent image analysis can also be used in measurements of various food products to assess uniformity of size. One product, namely cutlet, and product ingredients, namely coating materials such as bread crumbs and rava, were selected for the study. Computer vision based image analysis was used for measuring the length, width and area of cutlets, and the width of coating materials like bread crumbs was also measured. Computer imaging and subsequent image analysis can be very effectively used in quality evaluations of product ingredients in food processing; measurement of the width of coating materials can establish uniformity of particles or the lack of it. The application of image analysis to bacteriological work was also carried out.
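As an illustrative sketch of image-based morphometric measurement (not the software used in the study), the example below measures the length, width and area of a segmented object with scikit-image region properties; the object mask and the pixels-per-millimetre calibration are hypothetical.

```python
import numpy as np
from skimage.measure import label, regionprops

# Hypothetical segmented image: an ellipse standing in for one thresholded
# object (e.g. a cutlet) after background removal.
yy, xx = np.mgrid[0:300, 0:400]
mask = ((xx - 200) / 150.0) ** 2 + ((yy - 150) / 60.0) ** 2 < 1.0

pixels_per_mm = 4.0                       # assumed spatial calibration
for region in regionprops(label(mask.astype(int))):
    length_mm = region.major_axis_length / pixels_per_mm
    width_mm = region.minor_axis_length / pixels_per_mm
    area_mm2 = region.area / pixels_per_mm ** 2
    print(f"length = {length_mm:.1f} mm, width = {width_mm:.1f} mm, "
          f"area = {area_mm2:.1f} mm^2")
```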
Abstract:
The present study is intended to provide a new scientific approach to the solution of cost engineering problems encountered in the chemical industries of our nation. The problem addressed is that of cost estimation of equipment, especially pressure vessels, when setting up chemical industries. The present study attempts to develop a model for such cost estimation, which in turn is hoped to go a long way towards solving this and related problems in forecasting the cost of setting up chemical plants.
Abstract:
Spike disease in sandal is generally diagnosed by the manifestation of external symptoms. Attempts have been made to detect diseased plants by determining the length/breadth ratio of leaves (Iyengar, 1961) and by histochemical tests using Mann's stain (Parthasarathi et al., 1966), Dienes' stain (Ananthapadmanabha et al., 1973), aniline blue and Hoechst 33258 (Ghosh et al., 1985; Rangaswamy, 1995). But most of these techniques are insensitive, indirect detection methods leading to misinterpretation of results. Moreover, to identify disease-resistant sandal trees, highly sensitive techniques are needed to detect the presence of the pathogen. In sandal forests, several host plants of sandal, like Zizyphus oenoplea, also exhibit yellows-type disease symptoms; immunological and molecular assays have to be developed to confirm the presence of sandal spike phytoplasma in such hosts. The major objectives of the present work include: in situ detection of sandal spike phytoplasma by epifluorescence microscopy and scanning electron microscopy; purification of sandal spike phytoplasma and production of polyclonal antibodies; amino acid and total protein estimation of sandal spike phytoplasma; immunological detection of sandal spike phytoplasma; molecular detection of sandal spike phytoplasma; and screening for phytoplasma in host plants of spike-disease-affected sandal using immunological and molecular techniques.
Abstract:
Magnetic Resonance Imaging (MRI) is a multi-sequence medical imaging technique in which stacks of images are acquired with different tissue contrasts. Simultaneous observation and quantitative analysis of normal brain tissues and small abnormalities from these large numbers of different sequences is a great challenge in clinical applications. Multispectral MRI analysis can simplify the job considerably by combining an unlimited number of available co-registered sequences in a single suite. However, the poor performance of the multispectral system with conventional image classification and segmentation methods makes it inappropriate for clinical analysis. Recent works in multispectral brain MRI analysis have attempted to resolve this issue by improved feature extraction approaches, such as transform based methods, fuzzy approaches, algebraic techniques and so forth. Transform based feature extraction methods like Independent Component Analysis (ICA) and its extensions have been effectively used in recent studies to improve the performance of multispectral brain MRI analysis. However, these global transforms were found to be inefficient and inconsistent in identifying less frequently occurring features, like small lesions, from large amounts of MR data. The present thesis focuses on improving ICA based feature extraction techniques to enhance the performance of multispectral brain MRI analysis. Methods using spectral clustering and wavelet transforms are proposed to resolve the inefficiency of ICA in identifying small abnormalities and the problems due to ICA over-completeness. The effectiveness of the new methods in brain tissue classification and segmentation is confirmed by a detailed quantitative and qualitative analysis with synthetic and clinical, normal and abnormal, data. In comparison to conventional classification techniques, the proposed algorithms provide better performance in the classification of normal brain tissues and significant small abnormalities.
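As a minimal sketch of ICA-based feature extraction in this multispectral setting (not the improved methods proposed in the thesis), the example below unmixes three synthetic "sequences" with FastICA; the mixing matrix and source statistics are assumptions used only for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-in for three co-registered MR sequences (e.g. T1, T2,
# FLAIR): each voxel is modelled as a mixture of underlying tissue-related
# sources.
rng = np.random.default_rng(0)
n_voxels = 5000
sources = np.column_stack([
    rng.laplace(size=n_voxels),           # e.g. gray-matter-like component
    rng.laplace(size=n_voxels),           # e.g. white-matter-like component
    rng.laplace(size=n_voxels),           # e.g. CSF / lesion-like component
])
mixing = np.array([[1.0, 0.5, 0.2],
                   [0.3, 1.0, 0.4],
                   [0.2, 0.6, 1.0]])
observed = sources @ mixing.T             # rows: voxels, columns: sequences

# ICA unmixes the observed sequences into statistically independent
# components, which can then be fed to a tissue classifier.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(observed)
print("recovered component matrix shape:", components.shape)
```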
Abstract:
Microarray data analysis is a data mining tool used to extract meaningful information hidden in biological data. One of the major focuses of microarray data analysis is the reconstruction of gene regulatory networks, which may be used to provide a broader understanding of the functioning of complex cellular systems. Since cancer is a genetic disease arising from abnormal gene function, the identification of cancerous genes and the regulatory pathways they control will provide a better platform for understanding tumour formation and development. The major focus of this thesis is to understand the regulation of genes responsible for the development of cancer, particularly colorectal cancer, by analysing microarray expression data. In this thesis, four computational algorithms, namely a fuzzy logic algorithm, a modified genetic algorithm, a dynamic neural fuzzy network and a Takagi-Sugeno-Kang-type recurrent neural fuzzy network, are used to extract cancer-specific gene regulatory networks from a plasma RNA dataset of colorectal cancer patients. Plasma RNA is highly attractive for cancer analysis since it requires only a small amount of blood and can be obtained at any time in a repetitive fashion, allowing the analysis of disease progression and treatment response.