12 results for Differences-in-Differences method

at Cochin University of Science


Relevance: 90.00%

Publisher:

Abstract:

HIV/AIDS is one of the most destructive epidemics ever recorded, claiming an estimated 2.4-3.3 million lives every year. Although there is still no cure for the pandemic, the ELISA and Western blot tests are currently the only methods available for detecting HIV/AIDS. This article proposes a new method of detecting HIV/AIDS based on the measurement of the dielectric properties of blood at microwave frequencies. The measurements were made in the S-band using the rectangular cavity perturbation technique, with blood samples from healthy donors as well as from HIV/AIDS patients. An appreciable change is observed in the dielectric properties of the patient samples compared with the normal healthy samples, and these measurements were in good agreement with clinical results. This measurement offers an alternative in vitro method of diagnosing HIV/AIDS using microwaves.
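The cavity perturbation analysis behind such measurements can be sketched with the standard small-sample approximation; the function below and the S-band numbers in the usage example are illustrative assumptions, not values from the article.

```python
def complex_permittivity(f_empty, f_sample, q_empty, q_sample, v_cavity, v_sample):
    """Standard small-perturbation estimate of a sample's complex relative
    permittivity from the shift in resonant frequency (f) and quality
    factor (q) of a resonant cavity, given cavity and sample volumes."""
    # Real part: proportional to the fractional downward frequency shift.
    eps_real = 1.0 + (f_empty - f_sample) / (2.0 * f_sample) * (v_cavity / v_sample)
    # Imaginary part (dielectric loss): proportional to the drop in Q.
    eps_imag = (v_cavity / (4.0 * v_sample)) * (1.0 / q_sample - 1.0 / q_empty)
    return eps_real, eps_imag

# Illustrative S-band numbers (assumed, not the article's measured data):
er, ei = complex_permittivity(2.44e9, 2.43e9, 2000.0, 1500.0, 1e-3, 1e-6)
```

A sample whose permittivity differs from normal blood shifts both the resonant frequency and the quality factor, which is what makes the dielectric comparison between patient and healthy samples possible.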

Relevance: 90.00%

Publisher:

Abstract:

This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with user-interactive control, replacing the manual segmentation required by state-of-the-art methods. The most common primary brain tumors are gliomas, which evolve from the cerebral supportive cells. For clinical follow-up, evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modeled with the software package. The 3D modeled tumor consists of the gray-level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume by manual segmentation of its outlines is a time-consuming process and is prone to error; these defects are overcome in this method. We verified the performance of our method on several sets of MRI scans. For verification purposes, the 3D modeling was also done from the segmented 2D slices with the help of a medical software package called 3D-DOCTOR. The results were validated against ground-truth models by the radiologist.
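As a rough sketch of the segmentation and volumetry steps, morphological opening (erosion followed by dilation) of thresholded slices can be written in pure Python, with the surviving voxel count scaled to a volume. The threshold, voxel size and toy data below are assumptions for illustration, not the study's images or parameters.

```python
def binarize(img, thresh):
    """Threshold a 2D grey-level slice into a binary mask."""
    return [[1 if v >= thresh else 0 for v in row] for row in img]

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(mask[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any pixel in its neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(any(mask[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out

def tumour_volume(slices, thresh, voxel_mm3):
    """Open each thresholded slice (removing speckle), count voxels, scale."""
    voxels = 0
    for img in slices:
        opened = dilate(erode(binarize(img, thresh)))
        voxels += sum(sum(row) for row in opened)
    return voxels * voxel_mm3
```

Opening removes isolated bright speckle while (approximately) preserving the bright tumour region, which is why a morphological filter can stand in for manual outlining in a first pass.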

Relevance: 80.00%

Publisher:

Abstract:

This article reports a new in vitro bile analysis based on the measurement of dielectric properties at microwave frequencies. The measurements were made using the rectangular cavity perturbation technique in the S-band with different samples of bile obtained from healthy persons as well as from patients. An appreciable change is observed in the dielectric properties of the patients' samples compared with the normal healthy samples, and these measurements were in good agreement with clinical analysis. These results establish an alternative in vitro method of detecting bile abnormalities, based on the measurement of the dielectric properties of bile samples using microwaves, without a surgical procedure.

Relevance: 80.00%

Publisher:

Abstract:

This article reports a new method of analyzing pericardial fluid based on the measurement of dielectric properties at microwave frequencies. The microwave measurements were performed by the rectangular cavity perturbation method in the S-band with pericardial fluid from healthy persons as well as from patients suffering from pericardial effusion. A remarkable change is observed in the dielectric properties of the patient samples compared with the normal healthy samples, and these measurements were in good agreement with clinical analysis. Both this measurement technique and the method of extraction of pericardial fluid are simple. These results point to an alternative in vitro method of diagnosing the onset of pericardial effusion using microwaves, without a surgical procedure.

Relevance: 80.00%

Publisher:

Abstract:

Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress prediction within the element domain, violent stress oscillation and poor convergence. An approach that can probably overcome many of these problems is to consider elements in which the assumed displacement functions satisfy the equations of stress-field equilibrium. In this method, the finite element not only has nodal equilibrium of forces but also inner stress-field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than existing elements.

In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress-field equilibrium is proposed. A general procedure for constructing the displacement functions, and for using these functions in the generation of elemental stiffness matrices, has been developed. The approach to developing field-equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress-field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated.

For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations into the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons with theoretical closed-form solutions as well as with results obtained from existing finite elements have also been made. It is found that the new elements perform well in all the situations considered. Solutions in all cases converge correctly to the exact values, and in many cases convergence is faster than with other existing finite elements. The behaviour of these field-consistent elements should generate a great deal of interest among users of finite elements.
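The defining property of such elements, that the assumed displacement polynomial satisfies the stress-field equilibrium equations pointwise, can be checked numerically. The quadratic field below is a hypothetical example satisfying the 2D plane-stress (Navier) equilibrium equations with no body forces; it is not one of the thesis's actual element displacement functions.

```python
NU = 0.3  # assumed Poisson's ratio for the illustration

# A displacement field constructed to satisfy plane-stress equilibrium
# exactly: u = x^2 forces v_xy = -4/(1+nu), hence v = -4/(1+nu) * x*y.
def u(x, y):
    return x * x

def v(x, y):
    return -4.0 / (1.0 + NU) * x * y

def d2(f, x, y, wrt, h=1e-4):
    """Central-difference second derivatives: 'xx', 'yy' or mixed 'xy'."""
    if wrt == "xx":
        return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    if wrt == "yy":
        return (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)

def navier_residuals(x, y):
    """Residuals of the two plane-stress equilibrium equations at (x, y):
    u_xx + (1-nu)/2 u_yy + (1+nu)/2 v_xy = 0, and symmetrically for v."""
    r1 = (d2(u, x, y, "xx") + (1 - NU) / 2 * d2(u, x, y, "yy")
          + (1 + NU) / 2 * d2(v, x, y, "xy"))
    r2 = (d2(v, x, y, "yy") + (1 - NU) / 2 * d2(v, x, y, "xx")
          + (1 + NU) / 2 * d2(u, x, y, "xy"))
    return r1, r2
```

A field-equilibrium element would build its interpolation functions from such equilibrium-satisfying polynomials, so the residuals vanish at every interior point, not just at the nodes.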

Relevance: 80.00%

Publisher:

Abstract:

The primary objective of this investigation has been to develop more efficient and low-cost adhesives for bonding various elastomer combinations, particularly NR to NR, NR/PB to NR/PB, CR to CR, NR to CR and NR to NBR.

A significant achievement of the investigation was the development of solventless and environment-friendly solid adhesives for NR-to-NR and NR/PB-to-NR/PB bonding, particularly for precured retreading. Conventionally used adhesives in this area are mostly NR-based adhesive strips applied in the presence of a dough. The study has shown that an ultra-accelerator can be added to the dough just before applying it to the tire, which can significantly bring down the retreading time, resulting in prolonged tire service and lower energy consumption. Further, latex reclaim has been used for the preparation of the solid strip, which can reduce the cost considerably.

Another significant finding was that, by proper selection of the RF resin, the efficiency and shelf life of the RFL adhesive used for nylon and rayon tire-cord dipping can be improved. In the conventionally used RFL adhesive, the resin once prepared has to be added to the latex within 30 minutes, the RFL has to be used after a maturation time of 4 hours, and the maximum shelf life of the RFL dip solution is 72 hours. In this study a formaldehyde-deficient resin was used, so more flexibility was available for mixing with latex and maturing, and the dip had a much longer shelf life. In the method suggested here, formaldehyde donors were added only to the rubber compound, to make up the formaldehyde deficiency in the RFL. The results of this investigation show that the pull-through loads obtained with this method and with the conventional method are comparable. The study has also shown that the amount of RF resin in the RFL adhesive can be partially replaced by other modifying agents for cost reduction. Cashew nut shell liquid (CNSL) resin can be employed for improving the bonding of dipped nylon and rayon cord to NR. Since CNSL resin is not soluble in water it cannot be added to the dip solution, so it was added to the rubber compound instead. The amount of wood rosin in the rubber compound can also be reduced by using CNSL resin.

Another interesting result of the investigation was the use of a CR-based adhesive modified with chlorinated natural rubber for CR-to-CR bonding. Addition of chlorinated natural rubber was found to improve the sea-water resistance of the CR-based adhesive. For bonding a polar rubber such as nitrile rubber or polychloroprene rubber to a non-polar rubber such as natural rubber, an adhesive based on polychloroprene rubber was found to be effective.

Relevance: 80.00%

Publisher:

Abstract:

The theme of the thesis centres on one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found energy-efficient: clusters are formed, data from the nodes of each cluster are collected by its cluster head, and the cluster head forwards them to the base station. An appropriate cluster-head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work two such schemes were developed for static wireless sensor networks.

The first scheme addresses the energy wasted in rebuilding clusters from all the nodes: a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, the scheme is compared with an existing cluster-based scheme, and the simulation study confirmed the energy savings. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbor-node information in the cluster-head selection process, the number of nodes joining a cluster is restricted so as to obtain equally sized, optimum clusters, and multi-hop communication among the cluster heads is introduced to reduce energy consumption. The simulation study has shown that the scheme produces balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion from the study was that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency.

Cluster-based protocols have been extended from the static to the mobile scenario by various authors, but none of the proposals addresses cluster-head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving. The scheme has been simulated and compared with a similar approach.

The proliferation of sensor networks provides users with large sets of sensor information to exploit in various applications, yet sensor-network programming is inherently difficult for various reasons; there should be an elegant way to collect the data gathered by a sensor network without worrying about its underlying structure. The final work presented addresses a way to collect data from a sensor network and present it to users flexibly: a service-oriented-architecture-based application is built, and the data-collection task is exposed as a web service. This enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
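The benefit of multi-hop communication among cluster heads can be illustrated with the first-order radio model widely used in cluster-based WSN studies; the constants and distances below are the model's conventional textbook values, not parameters from this thesis.

```python
import math

E_ELEC = 50e-9       # J/bit, transceiver electronics energy
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier coefficient
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance (~87.7 m)

def tx_energy(bits, d):
    """First-order radio model: cost of transmitting over distance d,
    with d^2 path loss below the crossover distance and d^4 above it."""
    if d < D0:
        return E_ELEC * bits + EPS_FS * bits * d ** 2
    return E_ELEC * bits + EPS_MP * bits * d ** 4

def rx_energy(bits):
    """Cost of receiving is electronics-only in this model."""
    return E_ELEC * bits

# Sending 4000 bits 200 m directly vs. relaying through a head half-way:
direct = tx_energy(4000, 200.0)
relayed = tx_energy(4000, 100.0) + rx_energy(4000) + tx_energy(4000, 100.0)
```

Because amplifier cost grows as d^4 at long range, two 100 m hops (plus one reception) cost far less than one 200 m hop, which is the quantitative case for multi-hop inter-cluster-head routing.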

Relevance: 80.00%

Publisher:

Abstract:

Among the large number of photothermal techniques available, photoacoustics assumes a very significant place because of its essential simplicity and the variety of applications it finds in science and technology. The photoacoustic (PA) effect is the generation of an acoustic signal when a sample, kept inside an enclosed volume, is irradiated by an intensity-modulated beam of radiation. The radiation absorbed by the sample is converted into thermal waves by non-radiative de-excitation processes. The propagating thermal waves cause a corresponding expansion and contraction of the gas medium surrounding the sample, which in turn can be detected as sound waves by a sensitive microphone. These sound waves have the same frequency as the initial modulation frequency of the light, and lock-in detection gives a sufficiently high signal-to-noise ratio for the detected signal. The PA signal amplitude depends on the optical absorption coefficient of the sample and its thermal properties, while the PA signal phase is a function of the thermal diffusivity of the sample. Measurement of the PA amplitude and phase therefore yields valuable information about the thermal and optical properties of the sample, and any variation in these properties is reflected in the PA signal. If the PA signal is collected from various points on a sample surface, it gives a profile of the variations in the optical and thermal properties across the surface. Since the optical and thermal properties are affected by the presence of defects, interfaces, changes of material etc., these too are reflected in the PA signal, and by varying the modulation frequency we can also obtain information about subsurface features. This is the basic principle of PA imaging, or PA depth profiling, a quickly expanding field with potential applications in thin-film technology, chemical engineering, biology, medical diagnosis etc.

Since it is a non-destructive method, PA imaging has advantages over some other imaging techniques. A major part of the work presented in this thesis is concerned with the development of a PA imaging setup that can be used to detect the presence of surface and subsurface defects in solid samples. Determination of thermal transport properties such as thermal diffusivity, effusivity, conductivity and heat capacity of materials is another application of the photothermal effect. There are various methods, depending on the nature of the sample, to determine these properties, but only a few determine them all simultaneously. Even though a few techniques to determine these thermal properties individually for a coating can be found in the literature, no technique is available for the simultaneous measurement of these parameters for a coating. We have developed a scanning photoacoustic technique that can be used to determine all the above thermal transport properties simultaneously in the case of opaque coatings such as paints. A further contribution of this thesis is the determination of the thermal effusivity of many bulk solids by a scanning photoacoustic technique, one of the very few methods developed to determine thermal effusivity directly.
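The depth-profiling principle rests on the thermal diffusion length, mu = sqrt(alpha / (pi * f)): lowering the modulation frequency f makes the thermal wave probe deeper below the surface. A minimal sketch, with an assumed representative diffusivity rather than a value measured in the thesis:

```python
import math

def thermal_diffusion_length(alpha, f):
    """Depth probed by a thermal wave: mu = sqrt(alpha / (pi * f)),
    for thermal diffusivity alpha (m^2/s) and modulation frequency f (Hz)."""
    return math.sqrt(alpha / (math.pi * f))

# Representative diffusivity, on the order of 1e-6 m^2/s for many solids:
alpha = 1.0e-6
mu_10hz = thermal_diffusion_length(alpha, 10.0)  # deeper probing
mu_40hz = thermal_diffusion_length(alpha, 40.0)  # shallower probing
```

Quadrupling the modulation frequency halves the probed depth, which is exactly the knob a PA depth-profiling scan turns to separate surface from subsurface features.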

Relevance: 80.00%

Publisher:

Abstract:

Biometrics deals with the physiological and behavioural characteristics of an individual in order to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology, and minutiae-based fingerprint identification offers a reasonable identification rate. The minutiae feature map, however, consists of about 70-100 minutia points, and matching accuracy drops as the database grows. It is therefore essential to keep the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel fingerprint representation based on global singularities is proposed. The fingerprint baseline, the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline, and the feature vector consists of the polygonal angles, sides, area and type, together with the ridge counts between the singularities. A 100% recognition rate is achieved with this method, which is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature-vector length.

Speech is a behavioural biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm, and a backpropagation-based artificial neural network is trained to identify the clustered speech code. The performance of the neural-network classifier is compared with that of a VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger, feature-level-fusion fingerprint recognition system is therefore developed, and its performance is measured in terms of the ROC curve. Finally, score-level fusion of the fingerprint- and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
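The polygonal part of such a feature vector (angles, sides, area) can be computed with the standard shoelace and vertex-angle formulas; the coordinates below are hypothetical singularity positions chosen for illustration, not real fingerprint data.

```python
import math

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def interior_angle(prev_pt, vertex, next_pt):
    """Angle in degrees at `vertex` between its two polygon edges."""
    ax, ay = prev_pt[0] - vertex[0], prev_pt[1] - vertex[1]
    bx, by = next_pt[0] - vertex[0], next_pt[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Hypothetical core/delta/baseline polygon:
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
```

A feature vector built from such geometric quantities is only a handful of numbers, which is the point of the singularity-based code compared with 70-100 minutia points.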

Relevance: 80.00%

Publisher:

Abstract:

DNA sequence representation methods are used to denote a gene structure effectively and to help in the analysis of similarities and dissimilarities between coding sequences. Many different kinds of representation have been proposed in the literature; they can be broadly classified into numerical, graphical, geometrical and hybrid methods. Graphical and geometrical representation methods ease DNA structure and function analysis, since they give a visual representation of the DNA structure. In numerical methods, numerical values are assigned to the sequence and digital signal processing techniques are used to analyze it. Hybrid approaches to DNA sequence analysis are also reported in the literature. This paper reviews the latest developments in DNA sequence representation methods, presents a taxonomy of the various methods, and compares them wherever possible.
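As a concrete instance of the numerical class of representations, the well-known Voss mapping turns a sequence into four binary indicator signals, one per base, which digital signal processing techniques can then analyze:

```python
def voss(seq):
    """Voss representation: one binary indicator sequence per nucleotide,
    with a 1 wherever that base occurs in the sequence."""
    seq = seq.upper()
    return {base: [1 if c == base else 0 for c in seq] for base in "ACGT"}

# Each position is 1 in exactly one of the four indicator sequences:
signals = voss("ATGGCA")
```

The four indicator signals can be Fourier-transformed to expose periodicities (such as the period-3 component of coding regions), which is the typical use of this representation.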

Relevance: 80.00%

Publisher:

Abstract:

Optimum conditions and experimental details for the formation of γ-Fe₂O₃ from goethite have been worked out. In another method, a cheap complexing medium of starch was employed to precipitate acicular ferrous oxalate, which on decomposition in nitrogen and subsequent oxidation yielded acicular γ-Fe₂O₃. On the basis of thermal decomposition in dry and moist nitrogen, DTA, XRD, GC and thermodynamic arguments, the mechanism of decomposition was elucidated. New materials obtained by doping γ-Fe₂O₃ with 1-16 atomic percent magnesium, cobalt, nickel and copper were synthesised and characterized.

Relevance: 80.00%

Publisher:

Abstract:

Super-resolution is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available; the advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image.

A new super-resolution method based on the wavelet transform is developed, and it performs better than conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are then developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur while capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values, and artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. Since the conventional directionlet transform is computationally complex, the lifting scheme is used for its implementation; the new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey images, is extended to colour images and noisy images.
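The "SNR values" used for such comparisons are conventionally peak signal-to-noise ratios between the super-resolved image and a ground-truth HR image; a minimal pure-Python sketch, assuming 8-bit grey images stored as nested lists:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized grey images:
    10 * log10(peak^2 / MSE)."""
    n = len(ref) * len(ref[0])
    mse = sum((a - b) ** 2
              for ref_row, test_row in zip(ref, test)
              for a, b in zip(ref_row, test_row)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR means the super-resolved image is closer to the reference, which is how interpolation, wavelet and directionlet methods are ranked quantitatively.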