930 results for High speed machining (HSM)
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. The tracked finger trajectories were post-processed and analysed using various filtering and smoothing methods, and position derivatives of the trajectories, i.e. speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves from the tracking data; Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Taken together, the results show that the studied tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
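As an illustration of the trajectory post-processing described above, the following minimal sketch derives speed and acceleration from a tracked 2-D trajectory; a Savitzky-Golay filter stands in for the Local Regression and Unscented Kalman smoothing used in the thesis, and the frame rate, window length and synthetic data are assumed, illustrative values.

```python
# Minimal sketch (not the thesis code): derive speed and acceleration from a
# tracked 2-D finger trajectory sampled by a high-speed camera. A Savitzky-Golay
# filter is used here as a simple stand-in smoother.
import numpy as np
from scipy.signal import savgol_filter

def trajectory_derivatives(xy, fps=500, window=21, poly=3):
    """xy: (N, 2) array of tracked pixel positions; fps: camera frame rate."""
    dt = 1.0 / fps
    smoothed = savgol_filter(xy, window_length=window, polyorder=poly, axis=0)
    velocity = np.gradient(smoothed, dt, axis=0)        # px/s per axis
    acceleration = np.gradient(velocity, dt, axis=0)    # px/s^2 per axis
    speed = np.linalg.norm(velocity, axis=1)            # scalar speed
    accel_mag = np.linalg.norm(acceleration, axis=1)
    return smoothed, speed, accel_mag

# Example with synthetic data: a finger oscillating at 5 Hz plus tracker noise.
t = np.arange(0, 1, 1 / 500)
xy = np.column_stack([100 * np.sin(2 * np.pi * 5 * t), 50 * t])
xy += np.random.normal(0, 0.5, xy.shape)
_, speed, accel = trajectory_derivatives(xy)
```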
Abstract:
Producing a pulsed MIG/MAG welding arc with top-class usability requires extensive knowledge of the various pulse parameters and their effects on welding. Scientific research on these effects has been rather scarce; in particular, knowledge of the effects of pulse shape on welding sound has been based mainly on intuition gained through experience. This master's thesis studied the effect of pulse shape on arc usability in pulsed MIG/MAG welding. Usability here covers the welding sound, weld geometry and welding characteristics. The study began with a review of the literature and the most recent research, after which different pulse shapes were compared with each other through welding experiments. The experiments concerning welding sound and weld geometry were carried out using mechanized welding. The welding sound measurements were performed with a class 1 sound level meter, and the results were analysed with computer software. The weld geometries were compared by macrosection examination. Welding characteristics were studied with a high-speed camera and an oscilloscope, and finally with manual welding tests. In all test phases an oscilloscope was used to examine the pulse shape, and a second oscilloscope was used to examine the spectrum of the welding current. A separate computer program was used to modify the pulse shape. The experimental part of the work focused on pulsed MAG welding. By modifying the pulse shape, a more pleasant welding sound was achieved. It was also found that modifying the pulse shape produces a narrower weld, which yields greater root penetration. The manual welding tests showed that the modified pulse shape was also the best pulse shape in terms of usability from the welder's point of view; in particular, the stability and directionality of the arc, as well as tolerance of high welding speeds, were advantages of the modified pulse shape. No clear adverse effects of modifying the pulse shape were found.
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features, with low-level Local Binary Pattern (LBP) based features playing the main role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this context by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
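For reference, a minimal sketch of the basic 3×3 LBP operator on which such features are built follows; this is the textbook formulation in plain NumPy, not the MIPA4k focal-plane implementation demonstrated in the thesis.

```python
# Minimal sketch of the basic 3x3 Local Binary Pattern (LBP) operator: each
# pixel is encoded by thresholding its 8 neighbours against the centre value
# and packing the comparison bits into one 8-bit code.
import numpy as np

def lbp_3x3(image):
    """image: 2-D array; returns the LBP code image (1-pixel border skipped)."""
    # Neighbour offsets in a fixed clockwise order; each index is a bit weight.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    centre = image[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes += (neighbour >= centre).astype(np.int32) << bit
    return codes.astype(np.uint8)

# A feature vector is typically the histogram of LBP codes over an image region.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
hist, _ = np.histogram(lbp_3x3(img), bins=256, range=(0, 256))
```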
Abstract:
Bearing performance significantly affects the dynamic behavior and estimated working life of a rotating system. A common bearing type is the ball bearing, which has been investigated in numerous published studies. The complexity of the ball bearing models described in the literature varies, and model complexity is naturally related to computational burden. In particular, the inclusion of centrifugal forces and gyroscopic moments significantly increases the system degrees of freedom and lengthens the solution time. On the other hand, for low or moderate rotating speeds these effects can be neglected without significant loss of accuracy. The objective of this paper is to present guidelines for the appropriate selection of a suitable bearing model for three case studies. To this end, two ball bearing models were implemented: one considers high-speed forces and the other neglects them. Both models were used to study three structures, and the simulation results were compared.
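To illustrate when high-speed effects matter, the following rough sketch estimates the ball centrifugal force from standard bearing kinematics; the geometry, speed and density are assumed, illustrative values, not parameters of the case studies.

```python
# Rough sketch (illustrative values, not from the paper): estimate the ball
# centrifugal force in a ball bearing to judge whether high-speed effects can be
# neglected. Standard bearing kinematics (rotating inner ring, stationary outer
# ring) give the cage speed; F_c = 1/2 * m * d_m * w_m^2.
import math

def ball_centrifugal_force(n_rpm, d_ball, d_pitch, contact_angle_deg, rho=7850.0):
    """n_rpm: shaft speed; d_ball, d_pitch in metres; rho: steel density (kg/m^3)."""
    gamma = d_ball * math.cos(math.radians(contact_angle_deg)) / d_pitch
    omega_shaft = 2 * math.pi * n_rpm / 60.0
    omega_cage = 0.5 * omega_shaft * (1.0 - gamma)      # orbital (cage) speed
    m_ball = rho * math.pi * d_ball**3 / 6.0            # ball mass
    return 0.5 * m_ball * d_pitch * omega_cage**2

# At a few thousand rpm the centrifugal force is only a few newtons, a small
# fraction of a typical load per ball, so neglecting it is usually acceptable.
print(ball_centrifugal_force(n_rpm=3000, d_ball=0.012, d_pitch=0.065, contact_angle_deg=0.0))
```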
Abstract:
The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting technology was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s, and since then laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods applied in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review described the advantages, disadvantages and challenges of laser technology applied to cutting paper materials, with particular attention to cutting a moving paper web. Based on the studied laser cutting capabilities and the problems identified with conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising application area. This assessment was based on the rate of web break occurrence: up to 64 % of all web breaks were found to occur in the wet end, particularly at the so-called open draws, where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects at the paper web edge were the main cause of tearing initiation and the consequent web break. It was assumed that laser cutting could improve the tensile strength of the cut edge owing to the high cutting quality and the sealing effect on the edge after laser cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended cutting location, and the calculated linear cutting energy was verified with a series of laser cutting trials. The laser energy needed for cutting obtained in practice deviated from the calculated values, which could be explained by differences in radiative heat transfer during laser cutting and by the different absorption characteristics of dry and moist paper. Laser-cut samples, both dry and moist (dry matter content about 25-40 %), were tested for strength properties. Their tensile strength and strain at break were shown to be similar to the corresponding values of non-laser-cut samples. The chosen method, however, did not address the tensile strength of the laser-cut edge in particular, so the assumption that laser cutting improves strength properties was not fully proved. The effect of laser cutting on possible contamination of mill broke (when the trimmed edge is recycled) was also studied: laser-cut samples, both dry and moist, were tested for dirt particle content. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residues after cutting were also evaluated.
Edge trimming with a laser would result in 0.25 kg/h of solid residues and 2.5 kg/h of material lost to evaporation. Schemes for implementing laser cutting and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam delivery and focusing optics, and cutting heads. To increase the reliability of the system, it was suggested that each laser source have double capacity, which would allow cutting to be performed with one laser source working at full capacity for both cutting zones. Laser technology is currently at the required level and does not need additional development; moreover, the potential for increasing cutting speed is high owing to the availability of high-power laser sources, which can support the trend towards higher paper making machine speeds. The laser cutting system would require a special roll to support cutting; a scheme for such a roll and its integration into the paper making machine was proposed. Laser cutting can be performed at the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the paper making machine. The economic performance of laser cutting was assessed by comparing a laser cutting system and water jet cutting operating under the same conditions. Laser cutting would still be about two times more expensive than water jet cutting, mainly because of the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers; another factor is that laser cutting causes material loss through evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial, the crucial factor being the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capability of laser cutting to maintain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
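As a hedged illustration of the linear-energy reasoning mentioned above, the following back-of-the-envelope sketch relates laser power, web speed and the mass removed in the kerf; all numbers are assumed for illustration and are not the values used in the work.

```python
# Back-of-the-envelope sketch (illustrative numbers only): the linear cutting
# energy E' (J per metre of cut) relates laser power P to web speed v as
# P = E' * v, and E' can be estimated from the mass removed in the kerf.
def required_laser_power(grammage, kerf_width, specific_energy, web_speed):
    """grammage: kg/m^2 of the web; kerf_width: m; specific_energy: J/kg assumed
    to heat and evaporate the removed material; web_speed: m/s. Returns watts."""
    mass_per_metre = grammage * kerf_width               # kg removed per metre of cut
    linear_energy = mass_per_metre * specific_energy     # J per metre of cut
    return linear_energy * web_speed

# e.g. 300 g/m^2 moist web, 0.3 mm kerf, ~5 MJ/kg assumed specific energy, 25 m/s
print(required_laser_power(0.3, 0.3e-3, 5e6, 25.0))     # ~11 kW, rough order only
```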
Abstract:
The non-idealities in a rotor-bearing system may cause undesirable subcritical superharmonic resonances, which occur when the rotating speed of the rotor is a fraction of the natural frequency of the system. These resonances arise partly from the non-idealities of the bearings. This study introduces a novel simulation approach that can be used to study the superharmonic vibrations of rotor-bearing systems. The superharmonic vibrations of complex rotor-bearing systems can be studied in an accurate manner by combining a detailed rotor and bearing model in a multibody simulation approach. The research looks at the theoretical background of multibody formulations that can be used in the dynamic analysis of flexible rotors. The multibody formulations currently in use are suitable for linear deformation analysis only; however, the need for a nonlinear formulation may arise in high-speed rotor dynamics applications due to the centrifugal stiffening effect. For this reason, finite element formulations that can describe nonlinear deformation are also introduced in this work. The description of the elastic forces in the absolute nodal coordinate formulation is studied and improved. A ball bearing model that includes localized and distributed defects is developed in this study. This bearing model could be used in rotor dynamics or multibody codes as an interface element between the rotor and the supporting structure. The model includes descriptions of the nonlinear Hertzian contact deformation and the elastohydrodynamic fluid film. The simulation approaches and models developed here are applied in the analysis of two example rotor-bearing systems. The first example is an electric motor supported by two ball bearings, and the second is a roller test rig that consists of the tube roll of a paper machine supported by a hard-bearing-type balancing machine. The simulation results are compared to results available in the literature as well as to those obtained by measuring the existing structure. In both practical examples, the comparison shows that the simulation model is capable of predicting the realistic responses of a rotor system. The simulation approaches developed in this work can be used in the analysis of the superharmonic vibrations of general rotor-bearing systems.
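For context, a minimal sketch of the nonlinear Hertzian point-contact relation that such ball bearing models typically use follows; the stiffness coefficient, ball count, clearance and displacement are assumed, illustrative values rather than parameters identified in the study.

```python
# Minimal sketch of the nonlinear Hertzian point-contact relation commonly used
# per ball in bearing models of this kind: F = K * delta^1.5 for positive
# deflection, zero otherwise. K and the geometry below are illustrative.
import numpy as np

def hertz_contact_force(delta, k_contact=1.0e10):
    """delta: elastic deflection at a ball contact (m); k_contact: N/m^1.5."""
    delta = np.maximum(np.asarray(delta, dtype=float), 0.0)   # no tension transmitted
    return k_contact * delta**1.5

def total_bearing_force(displacement, clearance, n_balls=9, k_contact=1.0e10):
    """Sum radial contact forces over the balls for an inner-ring displacement
    along +x; a ball carries load only where the clearance is taken up."""
    angles = 2 * np.pi * np.arange(n_balls) / n_balls          # ball angular positions
    deflection = displacement * np.cos(angles) - clearance
    forces = hertz_contact_force(deflection, k_contact)
    return np.sum(forces * np.cos(angles))                     # x-component resultant

print(total_bearing_force(displacement=20e-6, clearance=5e-6))
```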
Abstract:
This work gives a general presentation of the ORC process, its operating principle and its applications. The aim of the work was to verify the performance of a micro-ORC energy converter that converts the thermal energy of diesel engine exhaust gases into electrical energy. The performance was to be established by calculating the electricity production efficiency of the test plant from laboratory measurement data and comparing it with the efficiency calculated in the modelling. A proposed practice for verifying the performance is also part of the work. The performance of the test plant could not be established because of problems with the turbogenerator. What remained to be examined in this work was the operation of the plant components essential to the performance of the test plant, through characteristic figures calculated from the measurement data. The heat exchangers used in the test plant were found to be capable of transferring enough thermal energy from the exhaust gas heat of a diesel engine operating at 130 kW brake power for the production of electrical energy. The commercialization of the plant was examined from the customer's and the manufacturer's points of view. The examination included a review of the features of a commercial version, of subcontracting, and of the regulations that the plant must meet in order to enter the market.
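As an illustration of how such an electricity production efficiency could be computed from measurement data, the following minimal sketch divides net electrical output by the heat recovered from the exhaust gas; all values are assumed and do not come from the test plant.

```python
# Minimal sketch (illustrative values, not measured data from the test plant):
# the electricity production efficiency of an ORC unit recovering exhaust heat
# is eta = P_el / Q_exhaust, with Q_exhaust estimated from the exhaust gas flow
# and its temperature drop across the evaporator.
def orc_electrical_efficiency(p_el_kw, m_dot_exhaust, cp_exhaust, t_in_c, t_out_c):
    """p_el_kw: net electrical power (kW); m_dot_exhaust: exhaust flow (kg/s);
    cp_exhaust: specific heat (kJ/kg.K); t_in_c/t_out_c: exhaust temperatures (C)."""
    q_exhaust_kw = m_dot_exhaust * cp_exhaust * (t_in_c - t_out_c)   # recovered heat
    return p_el_kw / q_exhaust_kw

# e.g. 3 kW net electrical output from exhaust cooled 350 C -> 180 C at 0.2 kg/s
print(orc_electrical_efficiency(3.0, 0.2, 1.05, 350.0, 180.0))       # roughly 0.08
```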
Abstract:
An in vitro investigation of some important factors controlling the activity of chitin synthase in cell-free extracts of two Mortierella species has been carried out. Mixed membrane fractions from mycelial homogenates of Mortierella candelabrum and Mortierella pusilla were found to catalyse the transfer of N-acetylglucosamine from UDP-N-acetylglucosamine into an insoluble product characterized as chitin by its insolubility in weak acid and alkali, and the release of glucosamine and diacetylchitobiose on hydrolysis with a strong acid and chitinase, respectively. Apparent Km values for UDP-GlcNAc were 1.8 mM and 2.0 mM for M. pusilla and M. candelabrum, respectively. Polyoxin D was found to be a very potent competitive inhibitor, with values of the inhibition constant, Ki, for both species about three orders of magnitude lower than the Km for UDP-GlcNAc. A divalent cation, Mg2+, Mn2+ or Co2+, was required for activity. N-acetylglucosamine, the monomer of chitin, stimulated the activity of the enzyme. The crude enzyme preparation of M. candelabrum, unlike that of M. pusilla, showed an absolute requirement for both Mg2+ and N-acetylglucosamine. Large differences in response to exogenous proteases were noted in the ratio of active to inactive chitin synthase of the two species. A fifteen-fold or greater increase was obtained after treatment with acid protease (from Aspergillus saitoi), as compared to a two- to four-fold activation of the M. pusilla membrane preparation treated similarly. During storage at 4°C over 48 hours, an endogenous activation of chitin synthase of M. pusilla was achieved, comparable to that obtained by exogenous protease treatment. The high-speed supernatant of both species inhibited the chitin synthase activity of the mixed membrane fractions; the inhibitor of M. pusilla was effective against the pre-activated enzyme, whereas that of M. candelabrum inhibited the activated enzyme. Several possibilities are discussed as to the role of the different factors regulating the enzyme activity. The properties of chitin synthase in the two species suggest that in vivo a delicate balance exists between the activation and inactivation of the enzyme, which is responsible for the pattern of wall growth of each fungus.
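To illustrate what a Ki roughly three orders of magnitude below the Km for UDP-GlcNAc implies, the following sketch evaluates the standard competitive-inhibition rate law; the Km is taken from the abstract, while Vmax, Ki and the concentrations are assumed, illustrative values.

```python
# Sketch of the standard competitive-inhibition rate law,
# v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S]),
# showing why a Ki about three orders of magnitude below Km makes polyoxin D
# so potent. Km is from the abstract; Vmax, Ki and concentrations are illustrative.
def competitive_inhibition_rate(s_mM, i_uM, km_mM=1.8, ki_uM=1.8, vmax=1.0):
    """s_mM: substrate (UDP-GlcNAc) concentration; i_uM: inhibitor concentration."""
    km_apparent = km_mM * (1.0 + i_uM / ki_uM)   # apparent Km rises with inhibitor
    return vmax * s_mM / (km_apparent + s_mM)

# Even a low-micromolar inhibitor concentration strongly suppresses the rate.
print(competitive_inhibition_rate(s_mM=2.0, i_uM=0.0))    # ~0.53 of Vmax
print(competitive_inhibition_rate(s_mM=2.0, i_uM=10.0))   # ~0.15 of Vmax
```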
Abstract:
Infrared thermography is a non-invasive technique that measures the mid- to long-wave infrared radiation emanating from all objects and converts this to temperature. As an imaging technique, the value of modern infrared thermography is its ability to produce a digitized image or high-speed video rendering a thermal map of the scene in false colour. Since temperature is an important environmental parameter influencing animal physiology, and metabolic heat production is an energetically expensive process, measuring temperature and energy exchange in animals is critical to understanding physiology, especially under field conditions. As a non-contact approach, infrared thermography provides a non-invasive complement to physiological data gathering. One caveat, however, is that only surface temperatures are measured, which guides much research to those thermal events occurring at the skin and insulating regions of the body. As an imaging technique, infrared thermal imaging is also subject to certain uncertainties that require physical modeling, which is typically handled via built-in software approaches. Infrared thermal imaging has enabled different insights into the comparative physiology of phenomena ranging from thermogenesis, peripheral blood flow adjustments and evaporative cooling to respiratory physiology. In this review, I provide background and guidelines for the use of thermal imaging, aimed primarily at field physiologists and biologists interested in thermal biology. I also discuss some of the better-known approaches and discoveries revealed by thermal imaging, with the objective of encouraging more quantitative assessment.
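As a worked illustration of turning a thermographic surface temperature into an energy-exchange estimate, the following sketch applies the standard emissivity-corrected Stefan-Boltzmann relation; the temperatures, emissivity and surface area are assumed, illustrative values.

```python
# Minimal sketch (standard radiative heat-transfer relation, illustrative values):
# once thermography gives a surface temperature, radiative heat loss from an
# animal's surface can be estimated as q = eps * sigma * (Ts^4 - Ta^4) per unit area.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_heat_loss(surface_temp_c, ambient_temp_c, emissivity=0.97, area_m2=1.0):
    """Returns radiative heat loss in W; emissivity ~0.95-0.98 is typical of skin/fur."""
    ts = surface_temp_c + 273.15
    ta = ambient_temp_c + 273.15
    return emissivity * SIGMA * area_m2 * (ts**4 - ta**4)

# e.g. a 0.05 m^2 bare skin region at 34 C in a 20 C environment
print(radiative_heat_loss(34.0, 20.0, area_m2=0.05))   # on the order of 4 W
```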
Abstract:
This research project concerns the study of design and planning problems for a long-distance optical network, also called a core network (OWAN, Optical Wide Area Network). This is a network that carries aggregated flows in circuit-switched mode. An OWAN connects different sites by means of optical fibres linked by optical and/or electrical switches/routers. An OWAN is meshed at the scale of a country or a continent and allows data to transit at very high bit rates. In the first part of the thesis project, we address the problem of designing agile optical networks. The agility problem is motivated by the growth in bandwidth demand and by the dynamic nature of traffic. The equipment deployed by network operators must provide more powerful and more flexible configuration tools to better manage the complexity of the connections between clients and to take into account the evolving nature of traffic. Often, the network design problem consists of provisioning the bandwidth needed to carry a given traffic. Here, we additionally seek to choose the best nodal configuration, with a level of agility capable of guaranteeing an optimal assignment of network resources. We also study two other types of problems faced by a network operator. The first is the assignment of network resources. Once the network architecture in terms of equipment has been chosen, the remaining question is how to dimension and optimize this architecture so that it achieves the best possible level of agility to satisfy all the demand. Defining the routing topology is a complex optimization problem: it consists of defining a set of logical optical paths and choosing the physical routes they follow, as well as the wavelengths they use, so as to optimize the quality of the obtained solution with respect to a set of metrics measuring network performance. In addition, we must define the best network dimensioning strategy so that it is suited to the dynamic nature of the traffic. The second problem is optimizing the capital expenditure (CAPEX) and operating expenditure (OPEX) of the proposed transport architecture. For the type of dimensioning architecture considered in this thesis, CAPEX includes the costs of routing, installing and commissioning all the network equipment installed at the connection endpoints and in the intermediate nodes, while OPEX corresponds to all the expenses related to operating the transport network. Given the symmetric nature and the exponential number of variables in most of the mathematical formulations developed for these types of problems, we particularly explored solution approaches based on column generation and greedy algorithms, which are well suited to solving large optimization problems.
A comparative study of several resource allocation strategies and solution algorithms, over different data sets and OWAN transport networks, shows that the best network cost is obtained in two cases: an anticipative dimensioning strategy combined with a column-generation solution method, in the cases where disturbance of already established connections is allowed or forbidden. A good distribution of network resource usage is also observed in the scenarios using a myopic dimensioning strategy combined with a resource allocation approach solved with column-generation techniques. The results of this work also show that considerable savings are possible in capital and operating expenditure. Indeed, an intelligent and heterogeneous distribution of network resources across the nodes allows a substantial reduction of the network cost compared with a classical resource allocation solution that adopts a homogeneous architecture using the same nodal configuration in all nodes. We have shown that it is possible to reduce the number of photonic switches while satisfying the traffic demand and keeping the overall cost of network resource allocation unchanged with respect to the classical architecture, which implies a substantial reduction in CAPEX and OPEX. In our computational experiments, the cost reduction reaches up to 65 % for certain data sets and networks.
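To give a flavour of the greedy heuristics mentioned above, the following minimal sketch performs first-fit wavelength assignment for lightpaths on fixed routes; it is a generic textbook heuristic, not the algorithm developed in the thesis, and the routes and wavelength count are assumed.

```python
# Minimal sketch of a greedy, first-fit wavelength assignment for lightpaths on
# fixed physical routes, illustrating the flavour of greedy resource allocation;
# this is not the thesis algorithm.
def first_fit_wavelengths(routes, n_wavelengths):
    """routes: list of lightpaths, each a list of fibre link ids.
    Returns one wavelength index per lightpath, or None if the request is blocked."""
    used = {}          # (link, wavelength) -> True when occupied
    assignment = []
    for route in routes:
        chosen = None
        for wl in range(n_wavelengths):
            # Wavelength-continuity constraint: same wavelength free on every link.
            if all((link, wl) not in used for link in route):
                chosen = wl
                break
        if chosen is not None:
            for link in route:
                used[(link, chosen)] = True
        assignment.append(chosen)
    return assignment

# Three lightpath requests over shared links, with 2 wavelengths per fibre.
print(first_fit_wavelengths([["A-B", "B-C"], ["B-C", "C-D"], ["A-B", "B-C"]], 2))
# -> [0, 1, None]: the third request is blocked on both wavelengths.
```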
Abstract:
In machine learning, classification is the process of assigning a new observation to a given category. Classifiers, which implement classification algorithms, have been widely studied over the past decades. Traditional classifiers are based on algorithms such as SVMs and neural networks and are generally run as software on CPUs, which leaves the system short on performance and with high energy consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system lighter, classifiers should be able to run on more compact hardware instead of a group of CPUs or GPUs, and the classifiers themselves should be optimized for this hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables that form tree-shaped circuits which perform the classification tasks. The FPGA seems tailor-made to implement this classifier, with its rich lookup-table resources and highly parallel architecture. Our work shows that FPGAs can implement several classifiers and perform classification on high-definition images at very high speed.
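As a generic illustration of how small lookup tables can be composed into a tree that maps input bits to a class decision (the kind of structure that maps naturally onto FPGA LUT resources), a minimal sketch follows; it is not Alain Tapp's classifier, and the table contents are arbitrary.

```python
# Generic illustration only (not the classifier studied in the thesis): a tiny
# tree of lookup tables that maps input bits to a class bit.
def lut(table, bits):
    """Evaluate a lookup table: 'bits' index into 'table' as a binary number."""
    index = 0
    for b in bits:
        index = (index << 1) | (b & 1)
    return table[index]

def lut_tree_classify(pixels):
    """Two 4-input LUTs feed a 2-input LUT that outputs the class bit."""
    left = lut([0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1], pixels[0:4])
    right = lut([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0], pixels[4:8])
    return lut([0, 1, 1, 1], [left, right])   # root LUT combines the two branches

# Classify an 8-bit binarized input patch.
print(lut_tree_classify([1, 0, 1, 1, 0, 0, 1, 0]))
```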
Abstract:
Nonlinear dynamics of laser systems has become an interesting area of research in recent times. Lasers are good examples of nonlinear dissipative systems showing many kinds of nonlinear phenomena, such as chaos, multistability and quasiperiodicity. The study of these phenomena in lasers has fundamental scientific importance, since investigations of these effects reveal many interesting features of nonlinear behaviour in practical systems. Further, an understanding of the instabilities in lasers is helpful in detecting and controlling such effects. Chaos is one of the most interesting phenomena shown by nonlinear deterministic systems. It is found that, like many nonlinear dissipative systems, lasers also show chaos for certain ranges of parameters. Many investigations of laser chaos have been carried out in the last two decades. The earlier studies in this field concentrated on the dynamical aspects of laser chaos, whereas recent developments in this area mainly concern the control and synchronization of chaos. A number of attempts at controlling or suppressing chaos in lasers have been reported, since lasers are practical systems intended to operate in a stable or periodic mode. On the other hand, laser chaos has been found to be applicable to high-speed secure communication based on synchronization of chaos. Thus, chaos in laser systems also has technological importance. Among the various kinds of lasers, semiconductor lasers are the most applicable in the field of optical communications for many reasons, such as their compactness, reliability, modest cost and the possibility of direct current modulation. They show chaos and other instabilities under various physical conditions, such as direct modulation and optical or optoelectronic feedback. It is desirable for semiconductor lasers to have stable and regular operation, so understanding chaos and other instabilities in semiconductor lasers and their control is highly important in photonics. We address the problem of controlling chaos produced by direct modulation of laser diodes. We consider delay feedback control methods for this purpose and study their performance using numerical simulation. Besides the control of chaos, the control of other nonlinear effects such as quasiperiodicity and bistability using delay feedback methods is also investigated. A number of secure communication schemes based on synchronization of chaos in semiconductor lasers have been successfully demonstrated theoretically and experimentally. Current investigations in this field include the study of practical issues in the implementation of such encryption schemes. We theoretically study the effects of issues such as channel delay, phase mismatch and frequency detuning on the synchronization of chaos in directly modulated laser diodes. This would be helpful for designing and implementing chaotic encryption schemes using synchronization of chaos in modulated semiconductor lasers.
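As a generic illustration of the time-delayed (Pyragas) feedback idea referred to above, the following sketch applies the control term K[x(t − τ) − x(t)] to the Rössler system rather than to the laser-diode rate equations studied here; the gain and delay are assumed, illustrative values.

```python
# Generic illustration of Pyragas time-delayed feedback, F(t) = K*(x(t-tau) - x(t)),
# applied to the Rossler system for simplicity (not the laser model of the thesis).
# When tau matches the period of an unstable periodic orbit, the feedback term
# vanishes on that orbit and can stabilize it; K and tau below are illustrative.
import numpy as np

def rossler_with_delay_feedback(K=0.2, tau=5.9, dt=0.001, t_end=200.0):
    a, b, c = 0.2, 0.2, 5.7
    n = int(t_end / dt)
    delay_steps = int(tau / dt)
    x = np.zeros(n); y = np.zeros(n); z = np.zeros(n)
    x[0], y[0], z[0] = 1.0, 1.0, 1.0
    for i in range(n - 1):
        x_delayed = x[i - delay_steps] if i >= delay_steps else x[0]
        control = K * (x_delayed - x[i])          # delayed feedback on x only
        dx = -y[i] - z[i] + control
        dy = x[i] + a * y[i]
        dz = b + z[i] * (x[i] - c)
        x[i + 1] = x[i] + dt * dx                 # simple explicit Euler step
        y[i + 1] = y[i] + dt * dy
        z[i + 1] = z[i] + dt * dz
    return x, y, z

x, y, z = rossler_with_delay_feedback()
```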
Abstract:
In recent years there has been a visible trend towards products and services which demand seamless integration of cellular networks, WLANs and WPANs. This is a strong indication of the inclusion of high-speed short-range wireless technology in future applications. In this context, UWB radio has a significant role to play as an extension and complement to existing cellular/access technology. In the present work we have investigated two major types of wideband planar antennas: monopoles and slots. Four novel compact broadband antennas suitable for portable applications are designed and characterized, namely 1) an elliptical monopole, 2) an inverted cone monopole, 3) a Koch fractal slot and 4) a wideband slot. The performance of these designs has been studied using standard simulation tools used in industry and academia, and it has been experimentally verified. Antenna design guidelines are also deduced by accounting for the resonances in each structure. In addition to compact size, high efficiency and broad bandwidth, one of the major criteria in the design of impulse-UWB systems has been the transmission of narrow pulses with minimum distortion. The key challenge is not only to design a broadband antenna with constant and stable gain, but also to maintain a flat group delay or linear phase response in the frequency domain, or an excellent transient response in the time domain. One of the major contributions of the thesis lies in the analysis of the frequency- and time-domain responses of the designed UWB antennas to confirm their suitability for portable pulsed-UWB systems. Techniques to avoid narrowband interference by engraving narrow slot resonators on the antenna are also proposed, and their effect on a nanosecond pulse has been investigated.
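One way to check the flat group delay requirement mentioned above is to differentiate the unwrapped transmission phase with respect to angular frequency, τg = −dφ/dω; the following sketch does this on synthetic data, not on measurements from the thesis.

```python
# Minimal sketch (synthetic data, not thesis measurements): the group delay of
# an antenna pair can be obtained by differentiating the unwrapped transmission
# phase with respect to angular frequency, tau_g = -d(phi)/d(omega). A flat
# tau_g across the UWB band indicates low pulse distortion.
import numpy as np

def group_delay(freq_hz, s21_complex):
    """freq_hz: frequency points; s21_complex: measured/simulated transmission."""
    phase = np.unwrap(np.angle(s21_complex))
    omega = 2 * np.pi * freq_hz
    return -np.gradient(phase, omega)          # seconds

# Synthetic ideal case: a pure 1 ns delay gives a constant 1 ns group delay.
f = np.linspace(3.1e9, 10.6e9, 501)
s21 = np.exp(-1j * 2 * np.pi * f * 1e-9)
print(group_delay(f, s21)[:3])                 # ~1e-9 s at every frequency
```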
Abstract:
The authors apply the theory of photothermal lens formation, together with that of pure optical nonlinearity, to account for the phase modulation of a beam as it traverses a nonlinear medium. The approach is used to determine the nonlinear optical refraction and the thermo-optic coefficient simultaneously. They demonstrate the technique using some metal phthalocyanines dissolved in dimethyl sulfoxide, irradiated by a Q-switched Nd:YAG laser with a 10 Hz repetition rate and a pulse width of 8 ns. The mechanism for reverse saturable absorption in these materials is also discussed.
Abstract:
The wavelength dependence of saturable absorption (SA) and reverse saturable absorption (RSA) of zinc phthalocyanine was studied using 10 Hz, 8 ns pulses from a tunable laser in the wavelength range of 520–686 nm, which includes the rising edge of the Q band in the electronic absorption spectrum. The nonlinear response is wavelength dependent, and switching from RSA to SA is observed as the excitation wavelength moves from the low-absorption window region to the higher-absorption regime near the Q band. The SA changes back to RSA on moving further towards the infrared region. Values of the imaginary part of the third-order susceptibility are calculated for various wavelengths in this range. This study is important in identifying the spectral range over which the nonlinear material acts as an RSA-based optical limiter.