879 results for Computation time delay
Abstract:
This thesis is divided into three chapters. The first explains how to use the level-set method rigorously to simulate forest fires, using Richards' ellipse model as the physical model for fire spread. The second presents a new semi-implicit scheme, with a proof of convergence, for solving an anisotropic Hamilton-Jacobi equation; the main advantage of this method is that it allows solutions of "nearby" problems to be reused to speed up the computation. Another application of this scheme is homogenization. The third chapter shows how the numerical methods of the first two chapters can be used, together with homogenization theory, to study the influence of small-scale variations in wind speed on the spread of a forest fire.
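For context, the level-set formulation referred to above typically takes the following generic form; this is a schematic sketch, and the thesis' exact Hamiltonian built from Richards' ellipse model is not given in the abstract.

% Level-set description of a propagating fire front
\begin{equation*}
  \partial_t \varphi(x,t)
  + F\!\left(x,\tfrac{\nabla\varphi}{|\nabla\varphi|}\right)\,|\nabla\varphi| = 0,
  \qquad
  \Gamma(t) = \{\, x : \varphi(x,t) = 0 \,\},
\end{equation*}
% i.e. an anisotropic Hamilton-Jacobi equation $\partial_t\varphi + H(x,\nabla\varphi) = 0$
% with $H(x,p) = F(x, p/|p|)\,|p|$, where the fire front is the zero level set
% $\Gamma(t)$ and $F$ is the direction-dependent spread rate (for instance,
% derived from Richards' ellipse model).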
Abstract:
A non-invasive technique is implemented to measure a parameter closely related to the distensibility of large arteries, using the second derivative of the infrared photoplethysmographic waveform. Thirty subjects in the age group of 20-61 years were involved in this pilot study. Two new parameters, namely the area of the photoplethysmographic waveform under the systolic peak, and the ratio of the time delay between the systolic and diastolic peaks to the time period of the waveform (ΔT/T), were studied as a function of age. It was found that while the parameter regarded as a marker of distensibility of large arteries and the ΔT/T values correlate negatively with age, the area under the systolic peak correlates positively with age. The results suggest that the derived parameters could provide a simple, non-invasive means of studying the changes in the elastic properties of the vascular system as a function of age.
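As a rough illustration of the ΔT/T parameter, the sketch below estimates it from one sampled PPG beat. The function name and the peak-picking heuristic (in particular, treating the first three detected peaks as the systolic, diastolic and next systolic peaks) are simplifications introduced here, not part of the study.

import numpy as np
from scipy.signal import find_peaks

def delta_t_over_t(ppg, fs):
    """Estimate the systolic-to-diastolic delay normalised by the pulse
    period for one beat of a PPG signal sampled at fs Hz (illustrative)."""
    peaks, _ = find_peaks(np.asarray(ppg, dtype=float), distance=int(0.2 * fs))
    if len(peaks) < 3:
        raise ValueError("need at least one full beat containing two peaks")
    systolic, diastolic, next_systolic = peaks[0], peaks[1], peaks[2]
    delta_t = (diastolic - systolic) / fs        # systolic-to-diastolic delay (s)
    period = (next_systolic - systolic) / fs     # beat-to-beat period (s)
    return delta_t / period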
Abstract:
The emission features of a laser-ablated graphite plume generated in a helium ambient atmosphere have been investigated with time- and space-resolved plasma diagnostic techniques. Time-resolved optical emission spectroscopy is employed to reveal the velocity distribution of the different species ejected during ablation. At lower laser fluences only a slowly propagating component of C2 is seen. At high fluences the emission from C2 shows a twin-peak distribution in time. The formation of an emission peak with diminished time delay, giving an energetic peak at higher laser fluences, is attributed to many-body recombination. It is also observed that these double peaks evolve into a triple-peak time-of-flight distribution at distances greater than 16 mm from the target. The occurrence of multiple peaks in the C2 emission is mainly due to delays arising from the different formation mechanisms of the C2 species. The velocity distribution of the faster peak exhibits an oscillating character with distance from the target surface.
Abstract:
A laser-produced plasma from the multielement solid target YBa2Cu3O7 is generated using 1.06 μm, 9 ns pulses from a Q-switched Nd:YAG laser in air at atmospheric pressure. A time-resolved analysis of the profile of the 4554.03 Å resonance line emission from Ba II at various laser power densities has been carried out. It has been found that the line profile is strongly self-reversed. It is also observed that at laser power densities equal to or exceeding 1.6×10¹¹ W cm⁻², a third peak begins to develop at the centre of the self-reversed profile, and this has been interpreted as due to anisotropic resonance scattering (fluorescence). The number densities of singly ionized barium ions, evaluated from the width of the resonance line as a function of time delay with respect to the beginning of the laser pulse, give typical values of the order of 10¹⁹ cm⁻³. The higher ion concentrations existing at smaller time delays are seen to decrease rapidly. The Ba II ions in the ground state resonantly absorb the radiation, and this absorption is maximum around 120 ns after the laser pulse.
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming, etc., are efficient in handling complex cost functions, but are limited in handling the stochastic data existing in a practical system. Also, the learning steps have to be repeated for each load demand, which increases the computation time.

Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation; it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning, but its applications in the field of power systems have been few. The objective here is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions on one of the existing power systems.

The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. Unit Commitment is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up-time / down-time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods such as Genetic Algorithms.

The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using a Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle a continuous state space, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods such as the Partition Approach Algorithm, Simulated Annealing, etc.

As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also considered. Reinforcement Learning has already been applied to the Automatic Generation Control loop; the RL solution is extended to adopt a common frequency for all the interconnected areas, closer to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller.

In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which provides a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control centre. Power scheduling of systems with different sources such as hydro, thermal, etc. can also be investigated and Reinforcement Learning solutions achieved.
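As a rough sketch of the tabular Q-learning update underlying the scheduling solutions described above, the toy code below assumes a hypothetical environment object exposing reset() and step(action), the latter returning a state index, a stage cost and a termination flag. It is illustrative only and omits the state aggregation, unit constraints and load models of the actual formulation.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Learn a cost-minimising action-value table by interacting with env."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration; the greedy action minimises expected cost
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmin(Q[state]))
            next_state, cost, done = env.step(action)
            target = cost + (0.0 if done else gamma * Q[next_state].min())
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q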
Abstract:
The main goal of this thesis is to study the dynamics of Josephson junction systems in the presence of an external rf biasing. A system of two chaotically synchronized Josephson junctions is studied, and the change in the dynamics of the system in the presence of a phase difference between the applied fields is considered. Control of chaos is very important from an application point of view, and the role of the phase difference in controlling chaos is discussed. An array of three Josephson junctions is studied for the effect of the phase difference on chaos and synchronization, and the argument is extended to a system of N Josephson junctions. In the presence of a phase difference between the external fields, the system exhibits periodic behavior with a definite phase relationship between all three junctions. The thesis also deals with an array of three Josephson junctions with a time delay in the coupling term. It is observed that only the outer systems synchronize, while the middle system remains uncorrelated with the other two. The effect of the phase difference between the applied fields and of the time delay on the system dynamics and synchronization is also studied. We study the influence of an applied ac biasing on a semiannular Josephson junction. It is found that the magnetic field, along with the biasing, induces creation and annihilation of fluxons in the junction. The I-V characteristics of the junction are studied by including the surface loss term in the model equation. The system is found to exhibit chaotic behavior in the presence of ac biasing.
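For context, a single rf-biased junction is commonly described by the dimensionless RCSJ equation below; this is a generic textbook form, not necessarily the exact coupled-junction model (with phase-shifted drives and delay coupling) used in the thesis.

% Resistively and capacitively shunted junction (RCSJ) model, rf-biased
\begin{equation*}
  \beta_{c}\,\ddot{\phi} + \dot{\phi} + \sin\phi
  \;=\; i_{\mathrm{dc}} + i_{\mathrm{ac}}\sin(\Omega t + \theta),
\end{equation*}
% where $\phi$ is the junction phase, $\beta_{c}$ the McCumber (damping)
% parameter, and $\theta$ the phase offset of the applied rf field whose role
% in controlling chaos and synchronization is discussed above.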
Abstract:
Biometrics deals with the physiological and behavioral characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate; however, the minutiae feature map consists of about 70-100 minutiae points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed with the singularities and the fingerprint baseline. The feature vector comprises the polygonal angles, sides, area, type and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristics (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for identification of a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based Artificial Neural Network is trained to identify the clustered speech code, and the performance of the neural network classifier is compared with a VQ-based minimum-Euclidean-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multi-finger, feature-level fusion based fingerprint recognition is therefore developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint and speech based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
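A minimal sketch of the polygon-geometry part of such a feature vector (side lengths, interior angles and shoelace area) is given below. Vertex ordering, singularity detection, the polygon type and the ridge counts are outside its scope, and the function name is introduced here for illustration only.

import numpy as np

def polygon_features(vertices):
    """Side lengths, interior angles (degrees) and area of a simple polygon
    whose vertices (e.g. singularities plus a baseline point) are given in order."""
    pts = np.asarray(vertices, dtype=float)                 # shape (n, 2)
    sides = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    prev_vec = pts - np.roll(pts, 1, axis=0)                # edge arriving at each vertex
    next_vec = np.roll(pts, -1, axis=0) - pts               # edge leaving each vertex
    cosang = np.sum(-prev_vec * next_vec, axis=1) / (
        np.linalg.norm(prev_vec, axis=1) * np.linalg.norm(next_vec, axis=1))
    angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    x, y = pts[:, 0], pts[:, 1]                             # shoelace formula
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return {"sides": sides, "angles": angles, "area": area}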
Abstract:
Thermodynamic parameters of the atmosphere form part of the input to numerical forecasting models. Usually these parameters are evaluated from a thermodynamic diagram. Here, a technique is developed to evaluate these parameters quickly and accurately using a Fortran program. The technique is tested with four sets of randomly selected data, and the results agree with those from the conventional method. The technique is superior to the conventional method in three respects: greater accuracy, less computation time, and evaluation of additional parameters. The computation time for all the parameters on a PC AT 286 machine is 11 s. The software, with appropriate modifications, can also be used for verifying the various lines on a thermodynamic diagram.
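As one concrete example of the kind of parameter such a program evaluates, the sketch below computes potential temperature from the standard Poisson relation; it is an illustrative stand-in, not code from the Fortran program described above.

# Potential temperature: theta = T * (p0 / p) ** (R_d / c_p)
def potential_temperature(t_kelvin, p_hpa, p0_hpa=1000.0, kappa=0.2854):
    """Potential temperature (K) of air at temperature t_kelvin and pressure p_hpa."""
    return t_kelvin * (p0_hpa / p_hpa) ** kappa

# Example: air at 290 K and 850 hPa has a potential temperature of about 303.8 K
print(potential_temperature(290.0, 850.0))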
Abstract:
The super-resolution problem is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image corresponding to an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are then developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size, but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values, and artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence a lifting scheme is used for the implementation of the directionlets. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby the computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method implemented on grey-scale images is extended to colour images and noisy images.
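For reference, the conventional wavelet baseline that such methods are compared against can be sketched as wavelet zero-padding with PyWavelets: the LR image is treated as the approximation subband, the missing detail subbands are set to zero, and the inverse DWT doubles each dimension. This is not the proposed directionlet/lifting method, and the rescaling factor of 2 assumes the orthonormal Haar filter.

import numpy as np
import pywt

def wavelet_zero_padding_sr(lr_image):
    """2x upscaling by inverse Haar DWT with zeroed detail subbands (baseline sketch)."""
    lr = np.asarray(lr_image, dtype=float)
    zeros = np.zeros_like(lr)
    hr = pywt.idwt2((lr, (zeros, zeros, zeros)), "haar")
    # With the orthonormal Haar filter and zero details, reconstruction halves
    # the intensity level, so rescale to keep the original brightness.
    return 2.0 * hr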
Abstract:
During the last 20 years the periodic table has been extended up to elements 114 and 116. These have been identified by nuclear physics, so their chemical investigation now comes to the fore. Since the periodic table behaves as expected up to element 108, this work investigates the chemistry of element 112. The focus is on the adsorption energy on a gold surface, because this is the physico-chemical process used in the analysis. The method applied in this work is the relativistic density functional method. The first part treats the many-body problem in general form, and the second the basic properties and formulations of density functional theory. The work describes two fundamentally different approaches for calculating the adsorption energy. The first is the so-called cluster method, in which an atom is placed on a relatively small cluster and its adsorption energy is calculated. If convergence with respect to the cluster size can be achieved, this should yield a value for the adsorption energy. Unfortunately, the calculations show that, because of the computational effort required, convergence is not reached for the cluster calculations. The three different adsorption sites, the top, bridge and hollow positions, are calculated in great detail. Much more success is achieved with the embedding method, in which a small cluster is surrounded by many additional atoms at the positions they occupy in the solid, so that their influence on the adsorption energy is accounted for well enough to obtain physically and chemically sound results. All the calculations mentioned here, with both the cluster and the embedding method, require very long computation times, which, as mentioned above, were not sufficient to reach convergence for the cluster calculations. In all calculations, the dependence on the possible basis sets, which also contribute decisively to the length and quality of the calculations, is discussed in great detail. The converged calculations are analysed in the form of potential curves, density of states (DOS), overlap populations and partial crystal overlap populations. The result shows that the adsorption energy of element 112 on a gold surface is about 0.2 eV lower than that of mercury on the same surface. With this result, the experimental nuclear chemists have a value at hand that indicates where in their measurements the few expected events are to be found.
Abstract:
One of the most effective techniques for offering QoS routing is minimum interference routing. However, it is complex in terms of computation time and is not oriented toward improving the network protection level. In order to include better levels of protection, new minimum interference routing algorithms are necessary. Minimizing the failure recovery time is also a complex process involving different failure recovery phases, some of which depend completely on correct routing selection, such as minimizing the failure notification time. The level of protection also involves other aspects, such as the amount of resources used; in this case shared backup techniques should be considered. Therefore, minimum interference techniques should also be modified in order to include resource sharing for protection among their objectives. These aspects are reviewed and analyzed in this article, and a new proposal combining minimum interference with fast protection using shared segment backups is introduced. Results show that our proposed method improves both the minimization of the request rejection ratio and the percentage of bandwidth allocated to backup paths in networks with low and medium protection requirements.
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage, where the algorithms deal with a large amount of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Owing to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach based on the parallel organisation of every processor in the architecture is proposed.
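A plain NumPy sketch of the normalised correlation criterion, and of the exhaustive correspondence search it is used in, is given below; the mapping onto the parallel architecture is not reproduced, and the function names are introduced here for illustration.

import numpy as np

def normalised_correlation(patch, window):
    """Zero-mean normalised correlation between two equally sized grey-level blocks."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p * p).sum() * (w * w).sum())
    return float((p * w).sum() / denom) if denom > 0 else 0.0

def best_match(patch, search_area):
    """Exhaustively locate `patch` inside `search_area` (low-level correspondence step)."""
    ph, pw = patch.shape
    best_score, best_pos = -1.0, (0, 0)
    for i in range(search_area.shape[0] - ph + 1):
        for j in range(search_area.shape[1] - pw + 1):
            score = normalised_correlation(patch, search_area[i:i + ph, j:j + pw])
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score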
Abstract:
Road traffic injuries constitute a worldwide public health problem. The most frequent injuries are limb fractures (84.3%). Fractures carry a high risk of infection, sequelae and permanent disability. Objective: To determine whether factors associated with the pathology (fracture site, fracture classification, patient comorbidities) and/or factors related to medical care (use of antibiotic prophylaxis differing from the institutional protocol, prolonged referral time, delays in surgical management) are associated with a higher probability of infection of open fractures in patients older than 15 years treated for road traffic accidents at a third-level clinic in Bogotá specialized in SOAT care, during the period October 2012 to October 2013. Methodology: Unmatched case-control study, 1:3 ratio, comprising 43 cases (infected open fractures) and 129 controls (non-infected open fractures). Results: The mean age of the cases was 39.42 +/- 16.82 years (median = 36 years) and the mean age of the controls was 33.15 +/- 11.78 years (median = 30 years). 83.7% of the cases and 78.3% of the controls were male. Motorcycle accidents predominated, accounting for 81.4% of the cases and 86% of the controls. In the bivariate analysis, age over 50 years (p=0.042), a fracture classification of grade IIIB or IIIC (p=0.02), compliance with the institutional antibiotic protocol according to fracture grade (p=0.014) and a time greater than 24 hours from the moment of the accident to arrival at the specialized trauma centre (p=0.035) were significantly associated with infection of the open fracture. In the multivariate analysis, only a fracture classification of grade IIIB or IIIC was associated with fracture infection, OR 2.6, 95% CI (1.187 – 5.781) (p=0.017). The length of hospitalization was greater in the cases (32.37 +/- 22.92 days, median = 26 days) than in the controls (8.81 +/- 7.52 days, median = 6 days) (p<0.001). The average number of surgical washouts was higher in the cases (4.85 ± 4.1, median = 4.0) than in the control group (1.94 ± 1.26, median = 2) (p<0.001). Conclusions: Infection after an open fracture entails high costs of care, with prolonged hospitalizations and a higher frequency of surgical interventions, as evidenced in the present study. The referral and counter-referral system should be strengthened to shorten the time to the start of specialized management of patients with open fractures. Compliance with antibiotic prophylaxis protocols according to fracture grade should be promoted within institutions to reduce the risk of infectious complications.
Abstract:
Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, where asset value and population density are greatest, the model spatial resolution required to represent flows through a typical street network (i.e. < 10 m) often results in impractical computational cost at the whole-city scale. Explicit diffusive storage cell models become very inefficient at such high resolutions, relative to shallow water models, because the stable time step in such schemes scales as a quadratic of the resolution. This paper presents the calibration and evaluation of a recently developed new formulation of the LISFLOOD-FP model, where stability is controlled by the Courant–Friedrichs–Lewy condition for the shallow water equations, such that the stable time step instead scales linearly with resolution. The case study used is based on observations during the summer 2007 floods in Tewkesbury, UK. Aerial photography is available for model evaluation on three separate days from the 24th to the 31st of July. The model covered a 3.6 km by 2 km domain and was calibrated using gauge data from high flows during the previous month. The new formulation was benchmarked against the original version of the model at 20 m and 40 m resolutions, demonstrating equally accurate performance given the available validation data but at 67× faster computation time. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This resulted in a significantly more accurate simulation of the drying dynamics compared to that simulated by the coarse-resolution models, although estimates of peak inundation depth were similar.
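The resolution scaling mentioned above can be made explicit with the generic stability limits of the two scheme families; these are schematic forms, not the exact LISFLOOD-FP expressions.

% Stable time-step limits (schematic)
\begin{align*}
  \text{explicit diffusive storage-cell scheme:} \quad
    \Delta t &\lesssim \frac{\Delta x^{2}}{4D},\\[4pt]
  \text{shallow-water scheme (CFL condition):} \quad
    \Delta t &\le \alpha\,\frac{\Delta x}{\sqrt{g\,h_{\max}}}, \qquad 0 < \alpha \le 1,
\end{align*}
% so halving the grid spacing $\Delta x$ quarters the stable time step in the
% diffusive scheme (quadratic scaling) but only halves it under the CFL
% condition (linear scaling), which is why the new formulation remains
% tractable at 2 m resolution.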
Abstract:
Stephens and Donnelly have introduced a simple yet powerful importance sampling scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood, and can be used directly in inference. The aim of this note is to demonstrate the high level of accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to a negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, both the sampling variance of the likelihood estimates and the computation time are lower than those obtained with importance sampling over the whole range of theta. It follows that this approach represents an efficient substitute for IS algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics.
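Schematically, the PAC likelihood described above has the form below; the notation is introduced here for illustration.

% Product of approximate conditionals (PAC) likelihood
\begin{equation*}
  L_{\mathrm{PAC}}(\theta)
  \;=\; \prod_{i=1}^{n} \hat{\pi}\!\left(a_{i} \mid a_{1},\dots,a_{i-1};\,\theta\right)
  \;\approx\; \Pr\!\left(a_{1},\dots,a_{n} \mid \theta\right),
\end{equation*}
% where $\hat{\pi}$ approximates the conditional probability of the allelic
% type of an additional gene given those already sampled, and $\theta$ is the
% scaled mutation parameter.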