951 results for finite-difference time-domain (FDTD) method


Relevance: 100.00%

Abstract:

This paper presents a novel approach to recognizing Grantha, an ancient script of South India, and converting it to Malayalam, a prevalent language of South India, using an online character recognition mechanism. The motivation behind this work is twofold: (i) developing a mechanism to recognize Grantha script in the modern world, and (ii) affirming the strong connection between Grantha and Malayalam. A framework for the recognition of Grantha script using online character recognition is designed and implemented. The features extracted from the Grantha script comprise mainly time-domain features based on writing direction and curvature. The recognized characters are mapped to the corresponding Malayalam characters. The framework was tested on a bed of medium-length manuscripts containing 9-12 sample lines and on printed pages of the book Soundarya Lahari, written in Grantha by Sri Adi Shankara, to recognize words and sentences. The manuscript recognition rates of the system are 92.11% for Grantha, 90.82% for Old Malayalam and 89.56% for new Malayalam script. The corresponding recognition rates on pages of the printed book are 96.16% for Grantha, 95.22% for Old Malayalam script and 92.32% for new Malayalam script. These results show the efficiency of the developed system.
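The writing-direction and curvature features mentioned above can be sketched for an online pen stroke as follows; the discrete-difference definitions used here are illustrative assumptions, not the paper's exact feature set.

```python
import math

def direction_and_curvature(points):
    """Per-segment writing-direction angles and a simple discrete curvature
    (wrapped change in direction) for an online pen stroke.

    `points` is a list of (x, y) pen coordinates in time order; the feature
    definitions are a generic sketch, not the paper's exact formulas.
    """
    # Writing direction: angle of each consecutive pen-position segment.
    angles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angles.append(math.atan2(y1 - y0, x1 - x0))
    # Curvature: difference between consecutive angles, wrapped to (-pi, pi].
    curvature = []
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        curvature.append(d)
    return angles, curvature
```

For a straight horizontal stroke the direction stays at 0 and the curvature vanishes; a right-angle turn yields a curvature sample of pi/2.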

Relevance: 100.00%

Abstract:

A printed compact coplanar waveguide-fed triangular slot antenna for ultra-wideband (UWB) communication systems is presented. The antenna comprises a triangular-slot-loaded ground plane with a T-shaped strip radiator to enhance the bandwidth and radiation. This compact antenna has a dimension of 26 mm × 26 mm when printed on a substrate of dielectric constant 4.4 and thickness 1.6 mm. Design equations are implemented and validated for different substrates. The pulse distortion is insignificant, as verified by the measured antenna performance, which shows high signal fidelity and virtually steady group delay. Simulation and experiment reveal that the proposed antenna exhibits good impedance match, stable radiation patterns, and constant gain and group delay over the entire operating band.

Relevance: 100.00%

Abstract:

Experimental investigations of microstrip-line-excited dielectric resonator antenna configurations suitable for mobile communication applications are reported. High-permittivity (εrd = 48) resonator samples with different aspect ratios are employed for the study. A theoretical analysis performed using the FDTD method is also presented.

Relevance: 100.00%

Abstract:

A compact coplanar waveguide (CPW)-fed monopole antenna for ultra-wideband wireless communication is presented. The proposed antenna comprises a CPW-fed beveled rectangular patch with a modified slotted ground. The overall size of the antenna is 30 mm × 27 mm × 1.6 mm. The lower edge of the band is attained by properly decoupling the resonant frequencies due to the extended ground plane and the beveled rectangular patch of the antenna. The upper edge of the radiating band is enhanced by beveling the ground-plane corners near the feed point. Experimental results show that the designed antenna operates in the 2.7–12 GHz band for S11 < −10 dB, with a gain of 2.7–5 dBi. Both the frequency-domain and time-domain characteristics of the antenna are investigated using the antenna transfer function. It is observed that the antenna exhibits identical radiation patterns and reasonable transient characteristics over the entire operating band.
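The abstract does not reproduce its design equations, but the lower band-edge of planar monopoles of this kind is often estimated with the textbook empirical rule f_L ≈ 7.2 / (l + r + p) GHz, with dimensions in cm. The sketch below encodes that rule of thumb only; it is not the paper's own design procedure, and the sample dimensions are arbitrary.

```python
def monopole_lower_edge_ghz(l_cm, r_cm, p_cm):
    """Empirical lower band-edge estimate for a planar monopole antenna:
    f_L ~= 7.2 / (l + r + p) GHz, where l is the monopole height, r the
    radius of an equivalent cylindrical monopole and p the feed gap, all
    in cm. A common rule of thumb, not the paper's design equations."""
    return 7.2 / (l_cm + r_cm + p_cm)
```

For example, an equivalent height of 2.0 cm with r = 0.3 cm and a 0.1 cm feed gap gives an estimated lower edge of 3.0 GHz.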

Relevance: 100.00%

Abstract:

A coplanar-waveguide-fed compact planar monopole antenna with a modified ground plane is presented. Measured and simulated results reveal that the antenna operates in the ultra-wide band with almost constant group delay throughout the band. Design equations developed for the antenna are validated for different substrates. The time-domain performance of the antenna is also discussed in order to assess its suitability for impulse-radio applications.

Relevance: 100.00%

Abstract:

The real-time dynamics of multiphoton ionization and fragmentation of the molecules Na_2 and Na_3 has been studied in molecular-beam experiments employing ion and electron spectroscopy together with femtosecond pump-probe techniques. Experiments with Na_2 and Na_3 reveal unexpected features of the dynamics of the absorption of several photons, as seen in the one- and three-dimensional vibrational wave-packet motion on different potential surfaces and in high laser fields: in Na_2 a second major resonance-enhanced multiphoton ionization (REMPI) process is observed, involving the excitation of two electrons and subsequent electronic autoionization. The possibility of controlling a reaction by controlling the duration of propagation of a wave packet on an electronically excited surface is demonstrated. In high laser fields, the contributions from direct photoionization and from the second REMPI process to the total ion yield change, due to different populations in the electronic states participating in the multiphoton ionization (MPI) processes. In addition, a vibrational wave-packet motion in the electronic ground state is induced through stimulated emission pumping by the pump laser. The 4^1Σ^+_g shelf state of Na_2 is given as an example of performing frequency spectroscopy of high-lying electronic states in the time domain. Pure wave-packet effects, such as the spreading and the revival of a vibrational wave packet, are investigated. The three-dimensional wave-packet motion in Na_3 reflects the normal modes in the X and B states and, in addition, shows the pseudorotational motion in the B state in real time.

Relevance: 100.00%

Abstract:

The presence of pathogenic microorganisms in food is one of the essential problems in public health, and the diseases they cause are among the most important causes of illness. The application of microbiological controls within quality-assurance programmes is therefore a premise for minimizing the risk of infection of consumers. Classical microbiological methods generally require non-selective pre-enrichment, selective enrichment, isolation on selective media and subsequent confirmation using tests based on the morphology, biochemistry and serology of each of the microorganisms under study. These methods are therefore laborious, require a long process to obtain definitive results and, moreover, cannot always be carried out. To overcome these drawbacks, several alternative methodologies have been developed for the detection, identification and quantification of foodborne pathogenic microorganisms, most notably immunological and molecular methods. In the latter category, the technique based on the polymerase chain reaction (PCR) has become the most popular diagnostic technique in microbiology, and recently the introduction of an improvement of it, real-time PCR, has produced a second revolution in molecular diagnostic methodology, as can be seen from the growing number of scientific publications and the continuous appearance of new commercial kits. Real-time PCR is a highly sensitive technique, capable of detecting down to a single molecule, that allows the exact quantification of DNA sequences specific to foodborne pathogenic microorganisms.
Furthermore, other advantages that favour its potential adoption in food-analysis laboratories are its speed, its simplicity and its closed-tube format, which can avoid post-PCR contamination and favours automation and high throughput. In this work, sensitive and reliable molecular techniques (PCR and NASBA) have been developed for the detection, identification and quantification of foodborne pathogenic bacteria (Listeria spp., Mycobacterium avium subsp. paratuberculosis and Salmonella spp.). Specifically, methods based on real-time PCR have been designed and optimized for each of these agents: L. monocytogenes, L. innocua, Listeria spp. and M. avium subsp. paratuberculosis; a previously developed method for Salmonella spp. has also been optimized and evaluated in different centres. In addition, a method based on the NASBA technique has been designed and optimized for the specific detection of M. avium subsp. paratuberculosis. The potential application of the NASBA technique for the specific detection of viable forms of this microorganism was also evaluated. All methods showed 100% specificity, with a sensitivity adequate for their potential application to real food samples. Furthermore, sample-preparation procedures have been developed and evaluated for meat products, fishery products, milk and water. In this way, fully specific and highly sensitive real-time PCR methods have been developed for the quantitative determination of L. monocytogenes in meat products and in salmon and derived products such as smoked salmon, and of M. avium subsp. paratuberculosis in water and milk samples. The latter method has also been applied to assess the presence of this microorganism in the intestine of patients with Crohn's disease, using colonoscopy biopsies obtained from affected volunteers.
In conclusion, this study presents selective and sensitive molecular assays for the detection of pathogens in food (Listeria spp., Mycobacterium avium subsp. paratuberculosis) and for the rapid and unambiguous identification of Salmonella spp. The relative accuracy of the assays, compared with the reference microbiological methods, has been excellent, and they can be used for the quantification of both genomic DNA and cell suspensions. Moreover, their combination with pre-amplification treatments has proved highly efficient for the analysis of the bacteria under study. They may therefore constitute a useful strategy for the rapid and sensitive detection of pathogens in food, and should be an additional tool in the range of diagnostic tools available for the study of foodborne pathogens.
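Quantification by real-time PCR conventionally relies on a standard curve relating the threshold cycle Ct to log10 of the starting copy number (an ideal 100%-efficient reaction gives a slope of about -3.32 cycles per decade). The sketch below is that generic textbook procedure with illustrative numbers, not the assays developed in this work.

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(N0) + intercept from
    dilution-series standards (generic real-time PCR standard curve)."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate the starting copy number."""
    return 10 ** ((ct - intercept) / slope)
```

With an ideal curve (slope -3.32, intercept 38), a sample crossing threshold at cycle 24.72 would be estimated at about 10^4 starting copies.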

Relevance: 100.00%

Abstract:

Under anthropogenic climate change it is possible that the increased radiative forcing and associated changes in mean climate may affect the “dynamical equilibrium” of the climate system, leading to a change in the relative dominance of different modes of natural variability, the characteristics of their patterns or their behavior in the time domain. Here we use multi-century integrations of version three of the Hadley Centre atmosphere model coupled to a mixed-layer ocean to examine potential changes in atmosphere-surface ocean modes of variability. After first evaluating the simulated modes of Northern Hemisphere winter surface temperature and geopotential height against observations, we examine their behavior under an idealized equilibrium doubling of atmospheric CO2. We find no significant changes in the order of dominance, the spatial patterns or the associated time series of the modes. Having established that the dynamic equilibrium is preserved in the model on doubling of CO2, we go on to examine the temperature pattern of mean climate change in terms of the modes of variability, the motivation being that the pattern of change might be explicable in terms of changes in the amount of time the system resides in a particular mode. In addition, if the two are closely related, we might be able to assess the relative credibility of different spatial patterns of climate change from different models (or model versions) by assessing their representation of variability. Significant shifts do appear to occur in the mean position of residence when examining a truncated set of the leading-order modes. However, on examining the complete spectrum of modes, it is found that the mean climate change pattern is close to orthogonal to all of the modes, and the large shifts are a manifestation of this orthogonality. The results suggest that care should be exercised in using a truncated set of variability EOFs to evaluate climate change signals.
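The orthogonality argument can be made concrete: with orthonormal mode patterns, the fraction of a change pattern not captured by the modes follows from simple dot products. The helper below is a minimal sketch of that projection, assuming orthonormal EOFs on a flat grid; it is not the paper's analysis code.

```python
def project_onto_modes(change, modes):
    """Project a climate-change pattern onto orthonormal mode patterns
    (EOFs) and return the projection coefficients plus the fraction of the
    pattern's squared amplitude left unexplained. A fraction near 1 means
    the change pattern is close to orthogonal to every supplied mode."""
    coeffs = [sum(c * m for c, m in zip(change, mode)) for mode in modes]
    norm2 = sum(c * c for c in change)
    explained = sum(a * a for a in coeffs)
    return coeffs, 1.0 - explained / norm2
```

A change pattern lying entirely outside the span of the modes returns an unexplained fraction of 1.0; one lying inside it returns 0.0.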

Relevance: 100.00%

Abstract:

We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations, and no evidence of storms growing in size. The 4 km resolution model shows realistic timing and growth evolution, although the dissipation mechanism still differs from the observed data.
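The standard-score construction can be sketched as follows, assuming storm counts indexed by lengthscale bin and time of day; the exact normalization used in the paper is not specified here, so this follows the usual z-score convention.

```python
import math

def standard_score_array(counts):
    """Convert storm-size counts, indexed as counts[lengthscale][time],
    into standard scores per lengthscale bin: z = (x - mean) / std, so the
    diurnal cycle of each lengthscale is shown on a common scale."""
    z = []
    for row in counts:
        n = len(row)
        mean = sum(row) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in row) / n)
        # A bin with no variation carries no diurnal signal: map it to zeros.
        z.append([0.0 if std == 0 else (x - mean) / std for x in row])
    return z
```

Peaks in the resulting array then mark, for each lengthscale, the time of day at which storms of that size are anomalously frequent.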

Relevance: 100.00%

Abstract:

Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the System Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, due to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
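A simple underdamped form of such a time-domain response function, written as a sum of exponentially damped sinusoids, can be sketched as follows; the parameterization (amplitude, damping rate, angular frequency per mode) is an illustrative assumption, not the talk's exact model.

```python
import math

def multimode_response(t, modes):
    """Causal impulse response
        h(t) = sum_i A_i * exp(-gamma_i * t) * sin(omega_i * t),
    a simple underdamped sketch of a multimode Brownian oscillator
    time-domain response; each mode is an (A, gamma, omega) tuple."""
    if t < 0:
        return 0.0  # causal: no response before the excitation
    return sum(a * math.exp(-g * t) * math.sin(w * t) for a, g, w in modes)
```

Summing several such modes with Lorentzian-, Debye- or Drude-like parameters then approximates the linear-combination picture described in the abstract.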

Relevance: 100.00%

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility for changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred.
Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
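The two classical group-sequential boundary shapes compared in work of this kind can be sketched as follows. The critical constants used here are illustrative round numbers only; in practice they are computed numerically for the chosen number of looks and the target type I error rate.

```python
import math

def pocock_boundaries(n_looks, c=2.41):
    """Pocock-style boundary: the same critical value at every interim look
    (c ~ 2.41 is roughly right for 3 looks at two-sided alpha = 0.05, but
    is an illustrative placeholder here)."""
    return [c] * n_looks

def obrien_fleming_boundaries(n_looks, c=2.04):
    """O'Brien-Fleming-style boundary: c * sqrt(K / k) at look k of K,
    i.e. very conservative early looks that relax towards the final
    analysis; c is again an illustrative placeholder."""
    return [c * math.sqrt(n_looks / k) for k in range(1, n_looks + 1)]
```

The contrast in shape is the point: Pocock boundaries are flat, while O'Brien-Fleming boundaries start high and decrease monotonically to the final look.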

Relevance: 100.00%

Abstract:

We discuss the use of pulse shaping for the optimal excitation of samples in time-domain THz spectroscopy. Pulse shaping can be performed in a 4f optical system to specifications derived from state-space models of the system's dynamics. Subspace algorithms may be used for the identification of the state-space models.

Relevance: 100.00%

Abstract:

We model the large-scale fading of wireless THz communication links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous linear dynamic system in state space, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.
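A Wiener model of the kind identified here, linear dynamics followed by a static nonlinearity, can be simulated with a minimal single-state sketch; all coefficients below are arbitrary placeholders, not the identified THz-link model.

```python
def wiener_model(u, a=0.8, b=0.5, poly=(0.0, 1.0, 0.0, -0.1)):
    """Simulate a single-state Wiener model: linear state update
    x[k+1] = a * x[k] + b * u[k], followed by a static polynomial
    nonlinearity y = sum_j poly[j] * x**j applied to the state.
    Coefficients are illustrative placeholders only."""
    x = 0.0
    y = []
    for uk in u:
        x = a * x + b * uk                      # linear (state-space) part
        y.append(sum(c * x ** j for j, c in enumerate(poly)))  # static map
    return y
```

With the identity polynomial the model reduces to its linear part, which is a convenient sanity check before fitting the nonlinearity by polynomial regression.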

Relevance: 100.00%

Abstract:

This work compares classification results for lactose, mandelic acid and DL-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures and so generate the data sets presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample and background spectra; (b) a subspace identification algorithm; and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1–1.0 THz using the above algorithms.
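The artificial degradation step can be sketched as adding zero-mean white Gaussian noise scaled to a target SNR relative to the signal's mean power; the function below is a generic implementation, not the authors' exact procedure.

```python
import math
import random

def degrade_snr(signal, snr_db, seed=0):
    """Add zero-mean white Gaussian noise to a time-domain signature so
    that the resulting signal-to-noise ratio (in dB, relative to the mean
    signal power) equals snr_db, emulating a reduced integration time."""
    rng = random.Random(seed)  # seeded for reproducible data sets
    power = sum(s * s for s in signal) / len(signal)
    sigma = math.sqrt(power / 10 ** (snr_db / 10.0))
    return [s + rng.gauss(0.0, sigma) for s in signal]
```

Sweeping snr_db downwards then produces the progressively noisier interferogram sets used for classifier training and validation.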

Relevance: 100.00%

Abstract:

This work compares and contrasts the results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input, and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. Linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised, and 95.5% with db6, wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network.
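The two-level wavelet decomposition that drives the dimensionality reduction can be illustrated with a Haar transform, used here for brevity in place of the db6 wavelets of the study; the structure (approximation plus detail bands, with the coarse band a quarter of the original length) is the same.

```python
import math

def haar_step(x):
    """One Haar DWT level: orthonormal pairwise averages and differences."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def dwt_two_level(x):
    """Two-level discrete wavelet transform (Haar stands in for db6):
    returns the level-2 approximation plus both detail bands. Keeping only
    the coarse coefficients is what shrinks the ECG feature vector before
    classification."""
    a1, d1 = haar_step(x)
    a2, d2 = haar_step(a1)
    return a2, d2, d1
```

For a length-N beat, the level-2 approximation has N/4 coefficients, so feeding only a2 (or a small selection of coefficients) to the classifier gives the order-of-magnitude reduction reported above.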