881 results for Signature Verification, Forgery Detection, Fuzzy Modeling
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
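As a sketch of the estimation step discussed above, the following Python fragment fits the Hurst exponent to the low-frequency end of the periodogram, assuming the anomaly series has already had its deterministic and periodic signals filtered out; the cutoff `f_max` and the fractional-Gaussian-noise scaling S(f) ∝ f^(1−2H) are assumptions of the sketch, not details taken from the study.

```python
import numpy as np

def hurst_from_spectrum(anomalies, f_max=0.1):
    """Estimate the Hurst exponent H from the low-frequency part of the
    periodogram, assuming S(f) ~ f^(1-2H) as f -> 0 (fractional
    Gaussian noise) and a deseasonalized, detrended input series."""
    x = np.asarray(anomalies, dtype=float)
    x -= x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0)      # cycles per time step
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    low = (freqs > 0) & (freqs <= f_max)        # low-frequency limit only
    slope, _ = np.polyfit(np.log(freqs[low]), np.log(power[low]), 1)
    return (1.0 - slope) / 2.0                  # slope = 1 - 2H
```

Restricting the fit to `freqs <= f_max` mirrors the point made above: only the low-frequency limit of the spectrum is relevant to long-term correlations, and fitting the whole spectrum would mix in high-frequency behaviour and bias the estimate.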
Abstract:
An unusually strong and prolonged stratospheric sudden warming (SSW) in January 2006 was the first major SSW for which globally distributed long-lived trace gas data are available covering the upper troposphere through the lower mesosphere. We use Aura Microwave Limb Sounder (MLS), Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS) data, the SLIMCAT Chemistry Transport Model (CTM), and assimilated meteorological analyses to provide a comprehensive picture of transport during this event. The upper tropospheric ridge that triggered the SSW was associated with an elevated tropopause and layering in trace gas profiles in conjunction with stratospheric and tropospheric intrusions. Anomalous poleward transport (with corresponding quasi-isentropic troposphere-to-stratosphere exchange at the lowest levels studied) in the region over the ridge extended well into the lower stratosphere. In the middle and upper stratosphere, the breakdown of the polar vortex transport barrier was seen in a signature of rapid, widespread mixing in trace gases, including CO, H2O, CH4 and N2O. The vortex broke down slightly later and more slowly in the lower than in the middle stratosphere. In the middle and lower stratosphere, small remnants with trace gas values characteristic of the pre-SSW vortex lingered through the weak and slow recovery of the vortex. The upper stratospheric vortex quickly reformed, and, as enhanced diabatic descent set in, CO descended into this strong vortex, echoing the fall vortex development. Trace gas evolution in the SLIMCAT CTM agrees well with that in the satellite trace gas data from the upper troposphere through the middle stratosphere. In the upper stratosphere and lower mesosphere, the SLIMCAT simulation does not capture the strong descent of mesospheric CO and H2O values into the reformed vortex; this poor CTM performance in the upper stratosphere and lower mesosphere results primarily from biases in the diabatic descent in assimilated analyses.
Abstract:
We employ a numerical model of cusp ion precipitation and proton aurora emission to fit variations of the peak Doppler-shifted Lyman-α intensity observed on 26 November 2000 by the SI-12 channel of the FUV instrument on the IMAGE satellite. The major features of this event appeared in response to two brief swings of the interplanetary magnetic field (IMF) toward a southward orientation. We reproduce the observed spatial distributions of this emission on newly opened field lines by combining the proton emission model with a model of the response of ionospheric convection. The simulations are based on the observed variations of the solar wind proton temperature and concentration and the interplanetary magnetic field clock angle. They also allow for the efficiency, sampling rate, integration time and spatial resolution of the FUV instrument. The good match (correlation coefficient 0.91, significant at the 98% level) between observed and modeled variations confirms the time constant (about 4 min) for the rise and decay of the proton emissions predicted by the model for southward IMF conditions. The implications for the detection of pulsed magnetopause reconnection using proton aurora are discussed for a range of interplanetary conditions.
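The roughly 4-minute rise and decay behaves like a first-order response. A minimal sketch, assuming a hypothetical driver series `drive` sampled every `dt` seconds (the full cusp precipitation and emission model is far richer than this):

```python
import numpy as np

def first_order_response(drive, dt, tau=240.0):
    """Integrate dI/dt = (S(t) - I) / tau, a first-order rise and decay
    with time constant tau in seconds (240 s ~ the 4 min quoted above)."""
    intensity = np.zeros(len(drive))
    for i in range(1, len(drive)):
        intensity[i] = intensity[i - 1] + dt * (drive[i - 1] - intensity[i - 1]) / tau
    return intensity

# hypothetical comparison with an observed series on the same time grid:
# r = np.corrcoef(first_order_response(drive, dt), observed)[0, 1]
```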
Abstract:
Multibiometrics aims at improving biometric security in the presence of spoofing attempts, but it also exposes a larger number of points of attack. Standard fusion rules have been shown to be highly sensitive to spoofing, even when only a single instance is faked. This paper presents a novel spoofing-resistant fusion scheme that detects and eliminates anomalous fusion input in an ensemble of evidence with liveness information. The approach aims at making multibiometric systems more resistant to presentation attacks by modeling the typical behaviour of human surveillance operators detecting anomalies, as employed in many decision support systems. It is shown to improve security while retaining the high accuracy of standard fusion approaches on the latest Fingerprint Liveness Detection Competition (LivDet) 2013 dataset.
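A toy version of the idea, assuming each modality supplies a match score and a liveness score; the fixed threshold below is an illustrative stand-in for the paper's anomaly-detection stage, not the published scheme:

```python
def spoof_resistant_fuse(match_scores, liveness_scores, liveness_min=0.5):
    """Drop any modality whose liveness score looks anomalous, then
    average the surviving match scores (standard sum-rule fusion on
    the remainder). Returns 0.0, i.e. reject, if everything is flagged."""
    kept = [m for m, live in zip(match_scores, liveness_scores)
            if live >= liveness_min]
    return sum(kept) / len(kept) if kept else 0.0
```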
Abstract:
Nonsyndromic cleft lip and palate (NSCL/P) is a complex disease resulting from failure of fusion of the facial primordia, a complex developmental process that includes the epithelial-mesenchymal transition (EMT). Detection of differential gene transcription between NSCL/P patients and control individuals offers an interesting alternative for investigating pathways involved in disease manifestation. Here we compared the transcriptome of 6 dental pulp stem cell (DPSC) cultures from NSCL/P patients and 6 controls. Eighty-seven differentially expressed genes (DEGs) were identified. The most significant putative gene network comprised 13 of the 87 DEGs, of which 8 encode extracellular proteins: ACAN, COL4A1, COL4A2, GDF15, IGF2, MMP1, MMP3 and PDGFa. Through clustering analyses we also observed that MMP3, ACAN, COL4A1 and COL4A2 exhibit co-regulated expression. Interestingly, MMP3 is known to cleave a wide range of extracellular proteins, including collagens IV, V, IX and X, proteoglycans, fibronectin and laminin. It is also capable of activating other MMPs. Moreover, MMP3 had previously been associated with NSCL/P. The same general pattern was observed in a further sample, confirming the involvement of synchronized gene expression patterns that differ between NSCL/P patients and controls. These results show the robustness of our methodology for the detection of differentially expressed genes using the RankProd method. In conclusion, DPSCs from NSCL/P patients exhibit gene expression signatures involving genes associated with mechanisms of extracellular matrix modeling and palatal EMT processes which differ from those observed in controls. This comparative approach should lead to a more rapid identification of gene networks predisposing to this complex malformation than conventional gene mapping technologies.
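For reference, a minimal sketch of the rank product statistic underlying the RankProd method mentioned above; the permutation test used to assign significance is omitted:

```python
import numpy as np

def rank_products(fold_changes):
    """fold_changes: genes x comparisons array of expression ratios.
    Within each comparison, rank 1 = most up-regulated gene; the rank
    product per gene is the geometric mean of its ranks across
    comparisons (small values suggest consistent up-regulation)."""
    fc = np.asarray(fold_changes, dtype=float)
    ranks = np.argsort(np.argsort(-fc, axis=0), axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))
```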
Abstract:
The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in more applied research fields as well, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning, a scenario based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve this we first reduce the original on-line learning problem to a batch learning problem where the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are considered clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients, batch learning and clutter detection, the NMF mechanism was able to infer the correct object-word mapping perfectly.
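A minimal sketch of the batch reduction described above, with a crude recurrence threshold playing the role of the clutter detector; the NMF dynamics themselves are not reproduced here, and `min_support` is an illustrative parameter:

```python
from collections import Counter

def cross_situational_map(situations, min_support=0.9):
    """situations: list of (objects, words) pairs observed together.
    Every object-word association licensed by a scene is counted; for
    each word the best-supported object is kept only if it recurs in
    nearly all uses of that word, discarding the rest as clutter."""
    pair_counts, word_counts = Counter(), Counter()
    for objects, words in situations:
        for w in words:
            word_counts[w] += 1
            for o in objects:
                pair_counts[(o, w)] += 1
    best = {}
    for (o, w), c in pair_counts.items():
        if c > best.get(w, (None, 0))[1]:
            best[w] = (o, c)
    return {w: o for w, (o, c) in best.items()
            if c >= min_support * word_counts[w]}
```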
Abstract:
Colour segmentation is the most commonly used method in road sign detection. Road signs contain several basic colours, such as red, yellow, blue and white, which depend on the country. The objective of this thesis is to evaluate four colour segmentation algorithms: the Dynamic Threshold Algorithm, a modification of de la Escalera's Algorithm, the Fuzzy Colour Segmentation Algorithm, and the Shadow and Highlight Invariant Algorithm. Processing time and segmentation success rate are used as the criteria to compare the performance of the four algorithms, and red is selected as the target colour for the comparison. All the test images were selected randomly by category from the Traffic Signs Database of Dalarna University [1]; these road sign images were taken with a digital camera mounted in a moving car in Sweden. Experiments show that the Fuzzy Colour Segmentation Algorithm and the Shadow and Highlight Invariant Algorithm are more accurate and stable in detecting the red colour of road signs. The method could also be used in the analysis of other colours; for an evaluation of the four algorithms on yellow, see the Master's thesis of Yumei Liu.
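To give a flavour of the fuzzy approach, here is a small sketch of fuzzy red segmentation in HSV space; the membership functions are illustrative and are not the parameters of the algorithms evaluated in the thesis:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c (vectorized)."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_red_mask(hsv, cut=0.5):
    """hsv: array with hue in degrees [0, 360) and saturation in [0, 1].
    Red wraps around 0 degrees, so hue membership is the max of two
    triangles; a pixel is kept when the combined membership passes cut."""
    hue, sat = hsv[..., 0], hsv[..., 1]
    mu_hue = np.maximum(tri(hue, -20, 0, 20), tri(hue, 340, 360, 380))
    mu_sat = tri(sat, 0.2, 1.0, 1.8)      # favour strongly saturated pixels
    return np.minimum(mu_hue, mu_sat) >= cut
```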
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery, illegal transactions, etc. has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of a person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of the applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
Abstract:
The article reviews the modelling of District Metered Areas (DMAs) with relatively high leakage rates. As there is no generally recognised approach to modelling leakage, engineers and other researchers usually model it by dividing the whole leakage rate evenly among all available nodes of the model. In this article, a new methodology is proposed to determine nodal leakage using a hydraulic model. The proposed methodology takes into consideration the IWA water balance methodology, Minimum Night Flow (MNF) analysis, the number of connections related to each node and the material of the pipes. In addition, the model is illustrated by a real case study, as applied in Kalipoli's DMA. Results show that the proposed model gives reliable results.
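A minimal sketch of the allocation idea, weighting each node by its number of connections and a pipe-material factor rather than spreading the leakage evenly; the material factors and node data below are hypothetical, not the paper's calibrated values:

```python
def allocate_leakage(total_leakage, nodes, material_factor):
    """Distribute a DMA leakage rate (e.g. from MNF analysis and the
    IWA water balance) to model nodes in proportion to a weight built
    from connection counts and pipe material."""
    weights = {nid: info["connections"] * material_factor[info["material"]]
               for nid, info in nodes.items()}
    total_weight = sum(weights.values())
    return {nid: total_leakage * w / total_weight
            for nid, w in weights.items()}

# hypothetical usage:
nodal = allocate_leakage(
    12.5,                                               # DMA leakage, L/s
    {"n1": {"connections": 40, "material": "PVC"},
     "n2": {"connections": 10, "material": "steel"}},
    {"PVC": 1.0, "steel": 1.6})
```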
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization applied to a large class of water distribution networks. In general, techniques of this type, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the direct modeling approach mentioned above, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the sizes of the leaks. It is an inverse/direct modeling method which tries to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms to estimate the water loss hotspots and leak sizes; on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas around the estimated hotspots. The application of the resulting method to a DMA of the Barcelona water distribution network is presented and discussed. The results show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following a simple procedure.
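A sketch of the inverse step under stated assumptions: a plain genetic algorithm searches over (node, leak size) pairs, with `simulate` standing in for one run of the hydraulic network model against pressure sensor readings; population size, mutation scale and leak-size bounds are illustrative:

```python
import random

def ga_locate_leaks(nodes, simulate, observed, n_leaks=2,
                    pop_size=40, generations=60):
    """Each individual is a list of n_leaks (node, size) pairs; fitness
    is the negative squared error between simulated and observed
    pressures, so the best individual explains the sensor data best."""
    def fitness(ind):
        return -sum((s - o) ** 2 for s, o in zip(simulate(ind), observed))

    def spawn():
        return [(random.choice(nodes), random.uniform(0.1, 5.0))
                for _ in range(n_leaks)]

    def mutate(ind):
        out = list(ind)
        i = random.randrange(n_leaks)
        _, size = out[i]
        out[i] = (random.choice(nodes), max(0.1, size + random.gauss(0, 0.5)))
        return out

    population = [spawn() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 4]       # keep the best quarter
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)
```

The direct step would then re-run the hydraulic model around the returned hotspots to confirm and refine the localization.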
Abstract:
This thesis explores the possibility of directly detecting blackbody emission from Primordial Black Holes (PBHs). A PBH might form when a cosmological density fluctuation with wavenumber k, which was once stretched to scales much larger than the Hubble radius during inflation, reenters the Hubble radius at some later epoch. By modeling these fluctuations with a running-tilt power-law spectrum (n(k) = n0 + a1(k)n1 + a2(k)n2 + a3(k)n3; n0 = 0.951; n1 = -0.055; n2 and n3 unknown), each pair (n2, n3) gives a different n(k) curve with a maximum value (n+) located at some instant (t+). The (n+, t+) parameter space [(1.20, 10^-23 s) to (2.00, 10^9 s)] has t+ = 10^-23 s–10^9 s and n+ = 1.20–2.00 in order to encompass the formation of PBHs in the mass range 10^15 g–10^10 M⊙ (from the ones exploding at present to the most massive known). It was evenly sampled: n+ every 0.02; t+ every order of magnitude. We thus have 41 × 33 = 1353 different cases. However, 820 of these (≈61%) are excluded (because they would provide a PBH population large enough to close the Universe) and we are left with 533 cases for further study. Although only sub-stellar PBHs (≲1 M⊙) are hot enough to be detected at large distances, we studied PBHs with 10^15 g–10^10 M⊙ and determined how many might have formed and still exist in the Universe. Thus, for each of the 533 (n+, t+) pairs we determined the fraction of the Universe going into PBHs at each epoch (β), the PBH density parameter (Ω_PBH), the PBH number density (n_PBH), the total number of PBHs in the Universe (N), and the distance to the nearest one (d). As a first result, 14% of these (72 cases) give at least one PBH within the observable Universe, one-third being sub-stellar and the remainder splitting evenly into stellar, intermediate mass and supermassive. Secondly, we found that the nearest stellar mass PBH might be at ∼32 pc, while the nearest intermediate mass and supermassive PBHs might be 100 and 1000 times farther, respectively. Finally, for ≈6% of the cases (four in 72) we might have sub-stellar mass PBHs within 1 pc. One of these cases implies a population of ∼10^5 PBHs, with a mass of ∼10^18 g (similar to Halley's comet), within the Oort cloud, which means that the nearest PBH might be as close as ∼10^3 AU. Such a PBH could be directly detected with a probability of ∼10^-21 (cf. 10^-32 for low-energy neutrinos). We speculate on this possibility.
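The sampling grid quoted above is easy to reproduce; a few lines of Python confirm the 41 × 33 = 1353 case count:

```python
import numpy as np

n_plus = np.round(np.arange(1.20, 2.00 + 1e-9, 0.02), 2)   # 41 peak values
t_plus = 10.0 ** np.arange(-23, 10)                        # 33 epochs in seconds
print(len(n_plus), len(t_plus), len(n_plus) * len(t_plus)) # 41 33 1353
```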
Abstract:
COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A. Specification and Runtime Verification of Java Card Programs. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2009.
Abstract:
This work presents a proposal to detect the interface in atmospheric oil tanks by installing a differential pressure level transmitter to infer the oil-water interface. The main goal of the project is to maximize the quantity of free water delivered to the drainage line by controlling the interface. A fuzzy controller has been implemented using the interface transmitter as the process variable. Two ladder routines were generated to perform the control: one calculates the error and the error variation, and the other implements the fuzzy controller itself. Using rules, the fuzzy controller maps these variables to the output, which is the position variation of the drainage valve. Although the ladder routines were implemented on an Allen-Bradley PLC of the ControlLogix family, they could be implemented on any brand of PLC.
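A Python transcription of the control idea (the original runs as two ladder routines on the PLC): the error and its variation are fuzzified, a nine-rule table is evaluated, and a weighted average yields the valve-position change. Membership limits and rule outputs here are illustrative, not the tuned PLC values:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_valve_step(error, d_error):
    """error and d_error are assumed normalized to [-1, 1]; the return
    value is the change to apply to the drainage valve position."""
    neg = lambda x: tri(x, -1.5, -1.0, 0.0)
    zero = lambda x: tri(x, -1.0, 0.0, 1.0)
    pos = lambda x: tri(x, 0.0, 1.0, 1.5)
    # rule table: (membership of error, membership of d_error, output)
    rules = [(neg, neg, -1.0), (neg, zero, -0.5), (neg, pos, 0.0),
             (zero, neg, -0.5), (zero, zero, 0.0), (zero, pos, 0.5),
             (pos, neg, 0.0), (pos, zero, 0.5), (pos, pos, 1.0)]
    weights = [min(fe(error), fd(d_error)) for fe, fd, _ in rules]
    outputs = [out for _, _, out in rules]
    return sum(w * o for w, o in zip(weights, outputs)) / (sum(weights) + 1e-9)
```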
Abstract:
Fuzzy logic admits infinitely many logical values intermediate between false and true. Building on this principle, this work develops a fuzzy rule-based system that indicates the body mass index of ruminant animals, with the goal of identifying the best moment for slaughter. The fuzzy system takes the variables mass and height as inputs and outputs a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a system for detecting the slaughter moment of cattle, comparing animals with one another through the linguistic variables Very Low, Low, Medium, High and Very High. To demonstrate and apply the system, 147 Nelore cows were analyzed, determining the Fuzzy BMI value for each animal and indicating the body mass situation of the whole herd. The system was validated by a statistical analysis using the Pearson correlation coefficient, which at 0.923 represents a high positive correlation and indicates that the proposed method is adequate. The method thus makes it possible to evaluate the herd, comparing each animal with its peers in the group, and so provides the farmer with a quantitative decision-making tool. It can also be concluded that this work established a computational method based on fuzzy logic capable of imitating part of human reasoning and interpreting the body mass index of any bovine breed in any region of the country.
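A minimal sketch of such a two-input fuzzy system: mass and height are fuzzified into three terms each, a rule table maps each combination to one of the output classes (Very Low .. Very High, encoded 1..5), and a weighted average defuzzifies. All membership ranges are illustrative assumptions, not the calibration used in the study:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_bmi(mass_kg, height_m):
    """Heavier and shorter animals score a higher Fuzzy BMI class."""
    m = (tri(mass_kg, 200, 300, 420),      # low mass
         tri(mass_kg, 300, 420, 540),      # medium mass
         tri(mass_kg, 420, 540, 660))      # high mass
    h = (tri(height_m, 1.10, 1.25, 1.40),  # short
         tri(height_m, 1.25, 1.40, 1.55),  # medium
         tri(height_m, 1.40, 1.55, 1.70))  # tall
    # rule table indexed by (mass term, height term) -> class 1..5
    classes = [[2, 2, 1],                  # low mass: short..tall
               [4, 3, 2],                  # medium mass
               [5, 4, 3]]                  # high mass
    weights, outputs = [], []
    for i in range(3):
        for j in range(3):
            weights.append(min(m[i], h[j]))
            outputs.append(classes[i][j])
    return sum(w * c for w, c in zip(weights, outputs)) / (sum(weights) + 1e-9)
```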