903 results for Signal Authentication
Abstract:
Extensive use of the Internet, coupled with rapid growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Socket Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand: it provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have long been used to achieve security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have called confidence in these algorithms into question. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to suggest them as dependable alternatives for providing the security services needed in SSL. The performance evaluation has been carried out through practical implementation.
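For reference, the HMAC construction that the proposed MACJER-320 would stand in for can be exercised with Python's standard library. HMAC-SHA-256, the key, and the message below are purely illustrative choices, not the parameters evaluated in the work:

```python
import hashlib
import hmac

# Message authentication with HMAC: a keyed hash over the record data.
# Key, message, and the SHA-256 digest are illustrative assumptions.
key = b"shared-secret-key"
message = b"application record data"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification recomputes the tag and compares in constant time,
# which avoids timing side channels during the comparison.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
valid = hmac.compare_digest(tag, expected)
print(valid)  # True
```

Any party holding the same key can verify the tag; an attacker who tampers with the message cannot produce a matching tag without the key.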
Abstract:
Thermal lens signals in solutions of rhodamine B laser dye in methanol are measured using the dual-beam pump-probe technique. The variation of signal strength with concentration is found to differ between 514 nm and 488 nm Ar+ laser excitation. However, both pump wavelengths produce an oscillatory variation of thermal lens signal amplitude with the concentration of the dye solution. Probable reasons for this peculiar behaviour (which is absent in the fluorescence intensity) are discussed.
Abstract:
Pulsed photoacoustic studies of solutions of C70 in toluene are made using 532-nm radiation from a frequency-doubled Nd:YAG laser. Contrary to expectation, there is no photoacoustic (PA) signal enhancement in the power-limiting range of laser fluences; instead, the PA signal tends to saturate during optical power limiting. This could be due to enhanced optical absorption from the photoexcited state and the consequent depletion of the ground-state population. PA measurements also ruled out the possibility of multiphoton absorption in the C70 solution. We demonstrate that the nonlinear absorption leading to optical limiting is mainly due to reverse saturable absorption.
Abstract:
Machine tool chatter is an unfavorable phenomenon in metal cutting that results in heavy vibration of the cutting tool. As the depth of cut increases, the cutting regime changes from chatter-free cutting to cutting with chatter. In this paper, we propose the use of permutation entropy (PE), a conceptually simple and computationally fast measure, to detect the onset of chatter from the time series of a sound signal recorded with a unidirectional microphone. PE can efficiently distinguish the regular and complex nature of any signal and extract information about the dynamics of the process by indicating sudden changes in its value. In situations where the data sets are huge and there is no time for preprocessing and fine-tuning, PE can still detect dynamical changes of the system effectively. This makes PE an ideal choice for online detection of chatter, which is not possible with other conventional nonlinear methods. In the present study, the variation of PE under two cutting conditions is analyzed. An abrupt variation in the value of PE with increasing depth of cut indicates the onset of chatter vibrations. The results are verified using frequency spectra of the signals and a nonlinear measure, the normalized coarse-grained information rate (NCIR).
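The permutation-entropy computation is compact enough to sketch. The following is a minimal Bandt-Pompe-style implementation; the embedding order, delay and test signals are our illustrative choices, not the settings used in the paper:

```python
import math
import random
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D sequence in [0, 1]."""
    patterns = Counter()
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = [x[i + j * delay] for j in range(order)]
        # Ordinal pattern: the argsort of the samples in the window.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    probs = [c / n for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)          # Shannon entropy of patterns
    return h / math.log(math.factorial(order))        # normalize by log(order!)

# A regular (periodic) signal yields low PE; a random one yields PE near 1.
random.seed(0)
regular = [math.sin(0.2 * i) for i in range(2000)]
noisy = [random.random() for _ in range(2000)]
print(permutation_entropy(regular), permutation_entropy(noisy))
```

A sudden jump of PE toward 1 in successive windows of the sound signal is the kind of change the paper uses to flag chatter onset.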
Abstract:
The standard models for statistical signal extraction assume that the signal and noise are generated by linear Gaussian processes, and the optimum filter weights for those models are derived by the method of minimum mean square error. In the present work we study the properties of signal extraction models under the assumption that the signal and noise are generated by symmetric stable processes. The optimum filter is then obtained by the method of minimum dispersion. The performance of the new filter is compared with its Gaussian counterpart by simulation.
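A toy illustration of why the optimality criterion must change (this is not the filter derived in the work): for Cauchy noise, a symmetric stable law with α = 1 and infinite variance, the least-squares estimate of a constant level is the sample mean, which does not converge, while a minimum-ℓ1-dispersion estimate is the sample median, which does:

```python
import math
import random
import statistics

random.seed(42)
true_value = 5.0

# Standard Cauchy noise via its quantile function tan(pi*(u - 1/2)):
# a symmetric alpha-stable law (alpha = 1) with infinite variance.
samples = [true_value + math.tan(math.pi * (random.random() - 0.5))
           for _ in range(10001)]

mmse_estimate = statistics.fmean(samples)       # minimizes sum of squared errors
min_disp_estimate = statistics.median(samples)  # minimizes sum of absolute errors

print(abs(mmse_estimate - true_value), abs(min_disp_estimate - true_value))
```

The median's error shrinks with sample size, while the mean stays erratic because a few enormous outliers dominate any mean-square criterion; minimum-dispersion filtering generalizes this robustness to the filtering setting.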
Abstract:
Biometrics uses the physiological and behavioral characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate; however, the minutiae map consists of about 70-100 minutia points, and matching accuracy drops as the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline, and the feature vector comprises the polygonal angles, sides, area, type and the ridge counts between the singularities. A 100% recognition rate is achieved with this method, which is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm, and a backpropagation-based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with a VQ-based minimum-Euclidean-distance classifier. Biometric systems that use a single modality are usually affected by problems such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multi-finger feature-level fusion-based fingerprint recognition is therefore developed, and its performance is measured in terms of the ROC curve.
Score-level fusion of the fingerprint- and speech-based recognition systems is then performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
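Score-level fusion of the kind described can be sketched as a weighted sum of normalized matcher scores. The score ranges, weights and decision threshold below are illustrative assumptions, not the values used in the thesis:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(fp_score, sp_score, w_fp=0.6, w_sp=0.4):
    """Weighted-sum score-level fusion of fingerprint and speech matchers.
    Score ranges and weights are hypothetical, for illustration only."""
    fp_n = min_max_normalize(fp_score, lo=0.0, hi=100.0)  # assumed fingerprint range
    sp_n = min_max_normalize(sp_score, lo=-1.0, hi=1.0)   # assumed speech range
    return w_fp * fp_n + w_sp * sp_n

def authenticate(fp_score, sp_score, threshold=0.7):
    """Accept the claim when the fused score clears the matching threshold."""
    return fuse_scores(fp_score, sp_score) >= threshold

print(authenticate(92.0, 0.8))   # genuine-like scores -> accept
print(authenticate(35.0, -0.2))  # impostor-like scores -> reject
```

Sweeping the threshold over the fused score traces out the ROC curve; a wide gap between genuine and impostor fused-score distributions is what allows 100% accuracy over a range of thresholds.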
Abstract:
Interfacing between various subjects generates new fields of study and research that help advance human knowledge. One of the latest such fields is neurotechnology, an amalgamation of neuroscience, physics, biomedical engineering and computational methods. Neurotechnology provides a platform for physicists, neurologists and engineers to interact and to break down methodology- and terminology-related barriers. Advances in computational capability and the widening scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics; however, an effective dialogue among physicists, neurologists and engineers is still needed. Applications of computer-based technology in medicine, through signal and image processing, the creation of clinical databases to help clinicians, etc., are widely acknowledged. Such synergy between widely separated disciplines may help enhance the effectiveness of existing diagnostic methods. One recent method in this direction is the analysis of the electroencephalogram (EEG) using techniques from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the EEG. The algorithms and related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal, so that ultimately they can be used as a diagnostic tool.
Abstract:
Biometrics is an efficient technology with great potential for security system development in official and commercial applications, and it has recently become a significant part of any efficient person-authentication solution. The advantage of using biometric traits is that they cannot be stolen, shared or forgotten. The thesis addresses one of the emerging topics in authentication systems, viz., the implementation of an improved biometric authentication system using multimodal cue integration, since operator-assisted identification turns out to be tedious, laborious and time-consuming. In order to derive the best performance from the authentication system, an appropriate feature selection criterion has been evolved, because selecting too many features leads to deterioration in authentication performance and efficiency. In the work reported in this thesis, various judiciously chosen components of the biometric traits and their feature vectors are used to realize the newly proposed biometric authentication system using multimodal cue integration. The feature vectors generated from the noisy biometric traits are compared with the feature vectors available in the knowledge base, and the best-matching pattern is identified for the purpose of user authentication. In an attempt to improve the success rate of the feature-vector-based authentication system, the proposed system has been augmented with a user-dependent weighted fusion technique.
Abstract:
The rapid growth in high-data-rate communication systems has introduced new, highly spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high-power amplifier (HPA) of the communication system base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement forces the HPA to operate at large power back-off at the expense of power efficiency. Consequently, high-power devices are fundamental in HPAs for high linearity and efficiency. Recent developments in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, offer higher power levels with a superior linearity-efficiency trade-off in microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) models of the AlGaN/GaN HEMT are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including the trapping and self-heating dispersion that accounts for nonlinear drain current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current model has been enhanced by addressing several issues, such as trapping and self-heating characterization. The thermal profile of the HEMT structure has also been investigated, and the corresponding thermal resistance extracted through thermal simulation together with chuck-controlled temperature pulsed I(V) and static DC measurements.
A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements; the resulting time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain current implementation to predict the dynamics of complex communication signals. Verification of this table-based, large-size, large-signal electrothermal model implementation shows high accuracy in output power, gain, efficiency and nonlinearity prediction for standard large-signal test signals.
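The PAR problem motivating this work can be made concrete with a toy multicarrier symbol. The sketch below (subcarrier count, QPSK modulation and the random seed are our assumptions, not LTE-A parameters) computes the peak-to-average power ratio the HPA must accommodate:

```python
import cmath
import math
import random

random.seed(7)

# QPSK symbols on N subcarriers; the inverse DFT forms one OFDM-style
# multicarrier time-domain symbol of the kind used in LTE-A downlinks.
N = 64
symbols = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
           for _ in range(N)]

# Direct unitary inverse DFT (O(N^2); fine for a demonstration).
time_signal = [
    sum(symbols[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
    / math.sqrt(N)
    for n in range(N)
]

# Peak-to-average power ratio in dB: the headroom (back-off) the HPA needs.
power = [abs(x) ** 2 for x in time_signal]
par_db = 10 * math.log10(max(power) / (sum(power) / N))
print(round(par_db, 2), "dB")
```

Because many independently modulated subcarriers occasionally add in phase, the peak power can far exceed the average, which is why the HPA must either back off or be modeled accurately enough, as in this thesis, to predict its nonlinear response near compression.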
Abstract:
Besides spreading dangerous diseases, insects are responsible for enormous agricultural damage. A large proportion of insect behaviour is governed by the sense of smell, which therefore represents a possible point of attack for controlling insect pests. This, however, requires a detailed understanding of the mechanisms of olfactory signal transduction. Besides the odorant-binding olfactory receptors, a conserved co-receptor (Orco) plays a decisive role here. The extent to which ionotropic or metabotropic processes are involved in these proteins has not yet been fully clarified. To resolve further details, single-sensillum recordings were therefore performed on the tobacco hawkmoth Manduca sexta. Orco agonists and antagonists were applied to better understand the function of the co-receptor. With the Orco agonist VUAA1, no amplification of the pheromone responses or sensitization was observed, as would have been expected in the case of ionotropic signal transduction. An ionotropic pathway via the OR/Orco complex in M. sexta is therefore unlikely. The Orco antagonist OLC15 affected the same parameters as VUAA1 and blocked the spontaneous activity generated by VUAA1; it is therefore likely a specific Orco blocker. Both VUAA1 and OLC15 had a strong effect on the long-lasting pheromone response, suggesting that Orco modulates the sensitivity of the nerve cell. Effects of the tested amilorides HMA and MIA on the pheromone response that deviated from those of OLC15 do not indicate a specific action of these agents on Orco, and additional sites of action must be assumed. To test the hypothesis of a metabotropic pathway, the G-protein blocker GDP-β-S was also applied.
All parameters of the pheromone response analysed within the first milliseconds showed a reduction in sensitivity. In contrast, GDP-β-S had no effect on the long-lasting pheromone response. Thus only the fast pheromone response appears to be controlled via a ligand-binding, G-protein-coupled receptor.
Abstract:
Surface (Lambertian) color is a useful visual cue for analyzing the material composition of scenes. This thesis adopts a signal processing approach to color vision: it represents color images as fields of 3D vectors, from which we extract region and boundary information. The first problem we face is that of secondary imaging effects that make image color differ from surface color; we demonstrate a simple but effective polarization-based technique that corrects for these effects. We then propose a systematic approach to scalarizing color that allows us to augment classical image processing tools and concepts for multi-dimensional color signals.
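The scalarization idea in miniature: project each 3D color vector onto a reference direction so that classical scalar tools (here a finite-difference gradient) apply. The gray-axis projection direction and the synthetic image below are illustrative assumptions, not the operator developed in the thesis:

```python
def scalarize(image, direction=(1/3, 1/3, 1/3)):
    """Project each RGB pixel onto `direction`, yielding a scalar field."""
    return [[sum(c * d for c, d in zip(pixel, direction)) for pixel in row]
            for row in image]

def horizontal_gradient(field):
    """Simple finite differences along each row of a scalar field."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in field]

# Two flat color regions separated by a vertical boundary.
red, blue = (200, 30, 30), (30, 30, 120)
image = [[red] * 4 + [blue] * 4 for _ in range(3)]

grad = horizontal_gradient(scalarize(image))
print(grad[0])  # nonzero only at the boundary column
```

A single fixed projection can miss boundaries between colors that happen to project to the same scalar value, which is one reason a systematic approach to scalarization, rather than an ad hoc direction, is needed.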
Abstract:
Describes different approaches to authentication for wireless networks, and the evolution of eduroam.
Abstract:
The main objective of this thesis was the integration of microstructure information into synoptic descriptors of turbulence that reflect the mixing processes. Turbulent patches are intermittent in space and time, but they represent the dominant process for mixing. In this work, the properties of turbulent patches were considered the potential input for integrating the physical microscale measurements. Developing a method for integrating the properties of the turbulent patches required solving three main questions: a) how can the turbulent patches be detected from the microstructure measurements?; b) which are the most relevant properties of the turbulent patches?; and c) once an interval of time has been selected, what kind of synoptic parameters best reflect the occurrence and properties of the turbulent patches? Answering these questions constituted the final specific objectives of this thesis.
Abstract:
The purpose of this study was to examine objective and subjective distortion present when frequency modulation (FM) systems were coupled with four digital signal processing (DSP) hearing aids. Electroacoustic analysis and subjective listening tests by experienced audiologists revealed that distortion levels varied across hearing aids and channels.