925 results for Signature Verification, Forgery Detection, Fuzzy Modeling
Abstract:
This thesis presents a detailed numerical analysis, fabrication method and experimental investigation of 45º tilted fiber gratings (45º-TFGs) and excessively tilted fiber gratings (Ex-TFGs), and their applications in fiber laser and sensing systems. One of the most significant contributions of the work reported in this thesis is that 45º-TFGs with high polarization extinction ratio (PER) have been fabricated in single-mode telecom and polarization-maintaining (PM) fibers, with spectral responses covering the three prominent optical communication wavelength ranges centered at 1060 nm, 1310 nm and 1550 nm. The highest PERs achieved for the 45º-TFGs are in the range of 35-50 dB, matching and even exceeding many commercial in-fiber polarizers. It has been proposed that 45º-TFGs of high PER can be used as ideal in-fiber polarizers for a wide range of fiber systems and applications. In addition, detailed theoretical models and analysis have been developed, and systematic experimental evaluation has been conducted, producing results in excellent agreement with the theoretical modeling. Another important outcome of the research is the proposal and demonstration of all-fiber Lyot filters (AFLFs), implemented using two (for a single-stage type) or more (for multi-stage types) 45º-TFGs in a PM fiber cavity structure. Detailed theoretical analysis and modelling of such AFLFs have also been carried out, giving design guidance for practical implementation. The unique functional advantages of 45º-TFG-based AFLFs have been revealed, showing high-finesse multi-wavelength transmission of single polarization and a wide tuning range. Temperature tuning results have shown that the AFLFs have ~60 times higher thermal sensitivity than normal FBGs, permitting a thermal tuning rate of ~8 nm/10 ºC.
By using an intra-cavity AFLF, an all-fiber soliton mode-locked laser with almost total suppression of soliton sidebands, single-polarization output and single/multi-wavelength switchable operation has been demonstrated. The final significant contribution is the theoretical analysis and experimental verification of the design, fabrication and sensing applications of Ex-TFGs. A sensitivity model of the Ex-TFG to the surrounding-medium refractive index (SRI) has been developed for the first time, and the factors that affect the thermal and SRI sensitivity in relation to the wavelength range, tilt angle and cladding size have been investigated. As a practical SRI sensor, an 81º-TFG UV-inscribed in a fiber with a small (40 μm) cladding radius has shown an SRI sensitivity of up to 1180 nm/RIU around an index of 1.345. Finally, to ensure single-polarization detection in such an SRI sensor, a hybrid configuration formed by UV-inscribing a 45º-TFG and an 81º-TFG close together on the same piece of fiber has been demonstrated as a more advanced SRI sensing system.
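The single-stage Lyot filter behaviour described above follows a simple birefringent-interference relation; the sketch below illustrates the resulting transmission comb (the birefringence and length values are illustrative assumptions, not the thesis's actual parameters):

```python
import numpy as np

def lyot_transmission(wl, delta_n=5e-4, length=1.0):
    """Intensity transmission of a single-stage Lyot filter between
    aligned polarizers: T = cos^2(pi * delta_n * L / wl).

    wl: wavelength in metres; delta_n: PM-fibre birefringence;
    length: birefringent fibre length in metres (assumed values).
    """
    return np.cos(np.pi * delta_n * length * np.asarray(wl) ** -1) ** 2
```

Transmission peaks occur wherever `delta_n * length` equals an integer number of wavelengths; temperature changes `delta_n * length` and sweeps the comb, which is the thermal tuning mechanism exploited in the thesis.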
Abstract:
This paper aims at the development of procedures and algorithms for the application of artificial intelligence tools to acquire, process and analyze various types of knowledge. The proposed environment integrates techniques of knowledge and decision-process modeling such as neural networks and fuzzy-logic-based reasoning methods. The problem of identifying complex processes with the use of neuro-fuzzy systems is solved. The proposed classifier has been successfully applied to building a decision support system for solving a managerial problem.
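As an illustration of the kind of fuzzy-logic reasoning component mentioned above, here is a generic zero-order Takagi-Sugeno inference sketch (a textbook scheme, not the paper's actual classifier; all rule values are made up):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno(x, rules):
    """Zero-order Takagi-Sugeno inference: each rule is (membership
    function, crisp output); result is the firing-strength-weighted
    average of the rule outputs."""
    weights = [mf(x) for mf, _ in rules]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * y for w, (_, y) in zip(weights, rules)) / total
```

For example, with rules `[(lambda x: tri(x, 0, 0.5, 1), 10), (lambda x: tri(x, 0.5, 1, 1.5), 20)]`, an input of 0.75 fires both rules equally and yields 15.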
Abstract:
During the last decade, microfabrication of photonic devices by means of intense femtosecond (fs) laser pulses has emerged as a novel technology. A common requirement for the production of these devices is that the refractive index modification pitch size should be smaller than the inscribing wavelength. This can be achieved by exploiting the nonlinear propagation of intense fs laser pulses, which is an extremely complicated phenomenon featuring complex multiscale spatiotemporal dynamics. We have utilized a first-principles approach based on finite-difference time-domain (FDTD) modeling of the full set of Maxwell's equations coupled to the conventional Drude model for the generated plasma. Nonlinear effects such as self-phase modulation and multiphoton absorption are included. Such an approach resolves most problems related to the inscription of subwavelength structures, where the paraxial approximation is not applicable to correctly describe the creation of, and scattering on, the structures. In a representative simulation of the inscription process, the signature of degenerate four-wave mixing has been found. © 2012 Optical Society of America.
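For orientation, the core of any such FDTD solver is the leapfrog Yee update of the coupled field components. A minimal 1-D vacuum sketch in normalised units follows; the paper's full model additionally couples in the Drude plasma current and the nonlinear polarisation terms, which are omitted here:

```python
import numpy as np

def fdtd_1d(steps=200, n=400):
    """Minimal 1-D Yee-scheme FDTD in vacuum (normalised units,
    Courant number 0.5). Returns the final electric field profile."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])   # update H from the curl of E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])    # update E from the curl of H
        ez[n // 4] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The staggered half-step updates are what make the scheme second-order accurate; a Courant number at or below 1 keeps the 1-D update stable.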
Abstract:
This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system are obtained from an experimental, fully operational pipeline distribution system. The system is equipped with data logging for three variables: inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed such that multiple operational conditions of the distribution system, including multiple pressure and flow levels, can be obtained. We then statistically tested and showed that the pressure and flow variables can be used as signatures of a leak under the designed multi-operational conditions. It is then shown that leak detection based on training and testing the proposed multi-model decision system with prior data clustering, under multi-operational conditions, produces better recognition rates than training based on a single-model approach. The decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
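The pipeline of such a decision system — cluster the operating conditions, fit one model per cluster, and flag a leak when a residual leaves the cluster's confidence band — can be sketched as follows. Ordinary least squares stands in for the paper's generalized linear models, and all names and thresholds are illustrative:

```python
import numpy as np

def fit_cluster_models(X, y, k=2, iters=20):
    """k-means on the operating conditions X, then a least-squares
    model y ~ X.w + b per cluster (a stand-in for the paper's GLMs)."""
    cent = X[np.linspace(0, len(X) - 1, k, dtype=int)]  # spread initial centroids
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - cent) ** 2).sum(-1), axis=1)
        cent = np.array([X[lab == j].mean(0) if (lab == j).any() else cent[j]
                         for j in range(k)])
    models = []
    for j in range(k):
        A = np.c_[X[lab == j], np.ones((lab == j).sum())]
        w, *_ = np.linalg.lstsq(A, y[lab == j], rcond=None)
        resid = y[lab == j] - A @ w
        models.append((w, 3.0 * resid.std() + 1e-6))  # +-3 sigma confidence limit
    return cent, models

def is_leak(x, y_obs, cent, models):
    """Flag a leak when the residual leaves the nearest cluster's band."""
    j = int(np.argmin(((cent - x) ** 2).sum(-1)))
    w, limit = models[j]
    return abs(y_obs - np.r_[x, 1.0] @ w) > limit
```

Per-cluster models capture the fact that the pressure-flow relationship differs between operating regimes, which is why pre-clustering improves the recognition rate over a single global model.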
Abstract:
Formal grammars can be used for describing complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used for modeling genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene. Detection of promoters in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition by the induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. L-grammar rules are derived automatically from Drosophila and vertebrate promoter datasets using a genetic programming technique, and their fitness is evaluated using a Support Vector Machine (SVM) classifier. The artificial promoter sequences generated using the derived L-grammar rules are analyzed and compared with natural promoter sequences.
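The parallel-rewriting semantics that distinguishes L-grammars from ordinary context-free grammars — every symbol is rewritten simultaneously at each step — can be sketched in a few lines. The rule set below is a toy example over a DNA-like alphabet, not the rules induced in the paper:

```python
import random

def l_system(axiom, rules, steps, rng):
    """Stochastic L-system derivation: rewrite ALL symbols in parallel
    at each step. 'rules' maps a symbol to a list of (probability,
    replacement) pairs; symbols without rules are copied unchanged."""
    s = axiom
    for _ in range(steps):
        out = []
        for ch in s:
            if ch in rules:
                r, acc = rng.random(), 0.0
                for p, rep in rules[ch]:
                    acc += p
                    if r <= acc:           # pick a production stochastically
                        out.append(rep)
                        break
            else:
                out.append(ch)             # terminal: copied as-is
        s = "".join(out)
    return s
```

For instance, `l_system('A', {'A': [(0.5, 'AT'), (0.5, 'GA')]}, 3, random.Random(0))` generates a different artificial sequence for each random seed, which is how derived rules can be sampled to produce artificial promoter-like sequences.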
Abstract:
This paper investigates power management issues in a mobile solar energy storage system. A multi-converter-based energy storage system is proposed, in which solar power is the primary source while the grid or a diesel generator is selected as the secondary source. The secondary source facilitates battery state-of-charge detection by providing a constant battery charging current. Converter modeling, multi-converter control system design, digital implementation and experimental verification are introduced and discussed in detail. The prototype experiment indicates that the converter system can provide a constant charging current during solar converter maximum power tracking operation, especially during large variations in solar power output, which proves the feasibility of the proposed design. © 2014 IEEE.
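The steady-state power balance behind the constant-current charging described above can be sketched as follows. This is a simplified, lossless balance with illustrative names; the paper's actual scheme is a multi-converter feedback control design:

```python
def secondary_power(p_solar, i_charge, v_batt, p_sec_max):
    """Power the secondary source (grid or diesel generator) must supply
    so the battery still sees a constant charging current while the
    solar converter tracks its maximum power point.

    All quantities in SI units; converter losses are neglected here."""
    p_needed = i_charge * v_batt - p_solar     # shortfall after solar
    return min(max(p_needed, 0.0), p_sec_max)  # clamp to source limits
```

For example, with a 48 V battery charged at 10 A, a drop in solar output from 480 W to 100 W raises the secondary contribution from 0 W to 380 W, keeping the charging current constant.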
Abstract:
Decision making and technical decision analysis demand computer-aided techniques and, therefore, increasing support by formal techniques. In recent years, fuzzy decision analysis and related techniques have gained importance as efficient methods for planning and optimization in fields such as production planning, financial and economic modeling, forecasting and classification. Hierarchical modeling of the decision situation is also one of the most popular modeling methods. We show how to use the fuzzy hierarchical model in combination with other methods of Multiple Criteria Decision Making, and propose a novel approach to overcome the inherent limitations of hierarchical methods by exploiting multiple-criteria decision making.
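The hierarchical aggregation at the heart of such models can be sketched as a recursive weighted sum over a criteria tree. Crisp weights and ratings are used here for brevity; the fuzzy variant described above would attach fuzzy values to them:

```python
def score(node, ratings):
    """Aggregate a criteria hierarchy bottom-up: a leaf (criterion name)
    looks up its rating; an inner node is a list of (weight, child)
    pairs whose weights sum to 1."""
    if isinstance(node, str):
        return ratings[node]
    return sum(w * score(child, ratings) for w, child in node)
```

For example, with the tree `[(0.6, 'cost'), (0.4, [(0.5, 'quality'), (0.5, 'time')])]` and ratings cost = 0.8, quality = 0.6, time = 1.0, the alternative scores 0.8; comparing such scores across alternatives is the basic hierarchical MCDM step.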
Abstract:
The paper is dedicated to questions of modeling and validating super-resolution measuring-computing systems in the context of the concept "device + PC = new possibilities". The authors have developed a new mathematical method for the solution of multi-criteria optimization problems, based on the physico-mathematical formalism of reduction of fuzzy distorted measurements. It is shown that a decisive role is played by the mathematical properties of the physical models of the measured object, the surroundings, and the measuring components of the measuring-computing system, together with their interaction, as well as by the developed mathematical method of processing and interpreting the measurement-problem solution.
Abstract:
Magdalina Vasileva Todorova - The article describes an approach to the verification of procedural programs through the construction of models of them defined by generalized nets. The approach integrates the "design by contract" concept with verification approaches of the theorem-proving and model-consistency-checking types. To this end, the functions that make up the program are verified separately against specifications corresponding to their purpose. A generalized net model is constructed that specifies the relations between the functions in the form of correct sequences of calls. A generalized net model is also built for the main function of the program, and it is checked whether it corresponds to the net model of the relations between the program's functions. Each function of the program that uses other functions is likewise verified against the specification given by the net model of the relations between the program's functions.
Abstract:
This paper describes work carried out to develop methods of verifying that machine tools are capable of machining parts to within specification, immediately before carrying out critical material removal operations, and with negligible impact on process times. A review of machine tool calibration and verification technologies identified that current techniques were not suitable, owing to their requirements for significant time and skilled human intervention. A 'solution toolkit' is presented, consisting of a selection of circular tests and artefact probing which are able to rapidly verify the kinematic errors, and in some cases also the dynamic errors, of different types of machine tool, as well as supplementary methods for tool and spindle error detection. A novel artefact probing process is introduced which simplifies data processing so that the process can be readily automated using only the native machine tool controller. Laboratory testing and industrial case studies are described which demonstrate the effectiveness of this approach.
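A basic building block of such circular tests and artefact probing is fitting a reference circle to the probed points and measuring the radial deviation, which reflects the machine's kinematic error. A minimal sketch using the Kåsa least-squares circle fit (a standard fit, not necessarily the paper's exact data processing) is:

```python
import numpy as np

def circle_deviation(pts):
    """Kåsa least-squares circle fit to an (N, 2) array of probed
    points; returns the centre, radius, and peak-to-peak radial
    deviation, a simple circular-test error metric."""
    x, y = pts[:, 0], pts[:, 1]
    # Linearised circle equation: 2*cx*x + 2*cy*y + c = x^2 + y^2,
    # with c = r^2 - cx^2 - cy^2.
    A = np.c_[2 * x, 2 * y, np.ones(len(x))]
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    dev = np.hypot(x - cx, y - cy) - r
    return (cx, cy), r, dev.max() - dev.min()
```

A near-zero deviation range indicates a healthy circular interpolation; a large or patterned deviation points at kinematic errors such as squareness or backlash.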
Abstract:
Software product line modeling aims at capturing a set of software products in an economic yet meaningful way. We introduce a class of variability models that capture the sharing between the software artifacts forming the products of a software product line (SPL) in a hierarchical fashion, in terms of commonalities and orthogonalities. Such models are useful when analyzing and verifying all products of an SPL, since they provide a scheme for divide-and-conquer-style decomposition of the analysis or verification problem at hand. We define an abstract class of SPLs for which variability models can be constructed that are optimal w.r.t. the chosen representation of sharing. We show how the constructed models can be fed into a previously developed algorithmic technique for compositional verification of control-flow temporal safety properties, so that the properties to be verified are iteratively decomposed into simpler ones over orthogonal parts of the SPL, and are not re-verified over the shared parts. We provide tool support for our technique, and evaluate our tool on a small but realistic SPL of cash desks.
Abstract:
This chapter explores ways in which rigorous mathematical techniques, termed formal methods, can be employed to improve the predictability and dependability of autonomic computing. Model checking, formal specification, and quantitative verification are presented in the contexts of conflict detection in autonomic computing policies and of the implementation of goal and utility-function policies in autonomic IT systems. Each of these techniques is illustrated using a detailed case study and analysed to establish its merits and limitations. The analysis is then used as a basis for discussing the challenges and opportunities of this endeavour to transition the development of autonomic IT systems from the current practice of ad-hoc methods and heuristics towards a more principled approach. © 2012, IGI Global.
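At its simplest, conflict detection in condition-action policies amounts to checking whether two policies can be enabled in the same state yet demand different values for the same parameter. The toy sketch below illustrates that idea over explicitly enumerated states; the chapter's model-checking approach works on formal policy models instead, and all names here are invented:

```python
def find_conflicts(policies):
    """Each policy is (name, states, target, value): 'states' is the set
    of system states in which it fires. Two policies conflict when they
    can fire in a common state but set the same target differently."""
    conflicts = []
    for i, (n1, s1, t1, v1) in enumerate(policies):
        for n2, s2, t2, v2 in policies[i + 1:]:
            if s1 & s2 and t1 == t2 and v1 != v2:
                conflicts.append((n1, n2))
    return conflicts
```

For example, a policy setting `replicas` to 4 under overload conflicts with a power-saving policy that sets `replicas` to 1 in a state set that also contains the overload state.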
Abstract:
Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI₂ detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The packages used included an EG&G HgI₂ crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library, its FPGA implementation tool, and a high-density device (XC4003A). The results show: (1) reduced cycle time between design and hardware implementation; (2) unlimited re-design and implementation using static RAM technology; (3) customer-based design, verification, and system construction; (4) suitability for intelligent systems. These advantages surpass those of conventional chip design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.
Abstract:
Shape memory alloys are a special class of metals that can undergo large deformation yet still recover their original shape through the mechanism of phase transformations. However, when they experience plastic slip, their ability to recover their original shape is reduced. This is due to the presence of dislocations generated by plastic flow that interfere with shape recovery through the shape memory effect and the superelastic effect. A one-dimensional model that captures the coupling between the shape memory effect, the superelastic effect and plastic deformation is introduced. The shape memory alloy is assumed to have only three phases: austenite, positive-variant martensite and negative-variant martensite. If the SMA flows plastically, each phase will exhibit a dislocation field that permanently prevents a portion of it from being transformed back to the other phases; hence, less of the phase is available for subsequent phase transformations. A constitutive model was developed to depict this phenomenon and simulate the effect of plasticity on both the shape memory effect and the superelastic effect in shape memory alloys. In addition, experimental tests were conducted to characterize the phenomenon in shape memory wire and superelastic wire. The constitutive model was then implemented within a finite element context as a UMAT (User MATerial subroutine) for the commercial finite element package ABAQUS. The model is phenomenological in nature and is based on the construction of a stress-temperature phase diagram. The model has been shown to be capable of capturing the qualitative and quantitative aspects of the coupling between plasticity and the shape memory effect, and between plasticity and the superelastic effect, within acceptable limits. As a verification case, a simple truss structure was built, tested and then simulated using the FEA constitutive model. The results were found to be close to the experimental data.
Abstract:
The use of canines as a method of detection of explosives is well established worldwide, and those applying this technology range from police forces and law enforcement to humanitarian agencies in the developing world. Despite the recent surge in publication of novel instrumental sensors for explosives detection, canines are still regarded by many as the most effective real-time field method of explosives detection. However, unlike for instrumental methods, it is currently difficult to determine detection levels, perform calibration of the canines' ability or produce scientifically valid quality control checks. Accordingly, amid increasingly strict requirements regarding forensic evidence admission such as Frye and Daubert, there is a need for a better scientific understanding of the process of canine detection. When translated to the field of canine detection, just as for any instrumental technique, peer-reviewed publication of the reliability, success and error rates is required for admissibility. Training is commonly focused on high explosives such as TNT and Composition 4, while low explosives such as black and smokeless powders are often added only for completeness. Headspace analyses of explosive samples, performed by Solid Phase Microextraction (SPME) paired with Gas Chromatography - Mass Spectrometry (GC-MS) and Gas Chromatography - Electron Capture Detection (GC-ECD), were conducted, highlighting common odour chemicals. The odour chemicals detected were then presented to previously trained and certified explosives detection canines, and the activity or inactivity of each odour was determined through field trials and experiments. It was demonstrated that TNT and cast explosives share a common odour signature, and the same may be said for plasticized explosives such as Composition C-4 and Deta Sheet. Conversely, smokeless powders were demonstrated not to share common odours. An evaluation of the effectiveness of commercially available pseudo-aids found limited success. The implications of the explosive odour studies for canine training then led to the development of novel inert training aids based upon the active odours determined.