985 results for Time marching schemes
Abstract:
The requirement to provide multimedia services with QoS support in mobile networks has led to standardization and deployment of high speed data access technologies such as the High Speed Downlink Packet Access (HSDPA) system. HSDPA improves downlink packet data and multimedia services support in WCDMA-based cellular networks. As is the trend in emerging wireless access technologies, HSDPA supports end-user multi-class sessions comprising parallel flows with diverse Quality of Service (QoS) requirements, such as real-time (RT) voice or video streaming concurrent with non real-time (NRT) data service being transmitted to the same user, with differentiated queuing at the radio link interface. Hence, in this paper we present and evaluate novel radio link buffer management schemes for QoS control of multimedia traffic comprising concurrent RT and NRT flows in the same HSDPA end-user session. The new buffer management schemes—Enhanced Time Space Priority (E-TSP) and Dynamic Time Space Priority (D-TSP)—are designed to improve radio link and network resource utilization as well as optimize end-to-end QoS performance of both RT and NRT flows in the end-user session. Both schemes are based on a Time-Space Priority (TSP) queuing system, which provides joint delay and loss differentiation between the flows by queuing (partially) loss tolerant RT flow packets for higher transmission priority but with restricted access to the buffer space, whilst allowing unlimited access to the buffer space for the delay-tolerant NRT flow but with queuing for lower transmission priority. Experiments by means of extensive system-level HSDPA simulations demonstrate that with the proposed TSP-based radio link buffer management schemes, significant end-to-end QoS performance gains accrue to end-user traffic with simultaneous RT and NRT flows, in addition to improved resource utilization in the radio access network.
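The TSP queuing discipline described in this abstract can be pictured with a small simulation model. The sketch below is illustrative only: the class name, parameters and admission rule are our assumptions, not the exact E-TSP/D-TSP mechanisms of the paper.

```python
import collections

class TimeSpacePriorityQueue:
    """Sketch of a Time-Space Priority (TSP) radio link buffer. The class name,
    parameters and admission rule below are illustrative assumptions, not the
    exact E-TSP/D-TSP mechanisms of the paper."""

    def __init__(self, capacity, rt_limit):
        self.capacity = capacity        # total buffer space
        self.rt_limit = rt_limit        # restricted space for RT packets
        self.rt = collections.deque()   # real-time flow (head-of-line priority)
        self.nrt = collections.deque()  # non-real-time flow

    def enqueue(self, packet, is_rt):
        total = len(self.rt) + len(self.nrt)
        if is_rt:
            # Space restriction: loss-tolerant RT packets are dropped beyond rt_limit.
            if len(self.rt) >= self.rt_limit or total >= self.capacity:
                return False
            self.rt.append(packet)
        else:
            # Delay-tolerant NRT packets may use the whole remaining buffer space.
            if total >= self.capacity:
                return False
            self.nrt.append(packet)
        return True

    def dequeue(self):
        # Time priority: RT packets are always transmitted ahead of NRT.
        if self.rt:
            return self.rt.popleft()
        return self.nrt.popleft() if self.nrt else None
```

The two levers of the scheme are visible directly: `rt_limit` trades RT loss against RT delay, while the shared `capacity` protects NRT throughput.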
Abstract:
The finite difference time domain (FDTD) method has direct applications in musical instrument modeling, simulation of environmental acoustics, room acoustics and sound reproduction paradigms, all of which benefit from auralization. However, rendering binaural impulse responses from simulated data is not straightforward to accomplish, as the calculated pressure at FDTD grid nodes does not contain any directional information. This paper addresses this issue by introducing a spherical array to capture sound pressure on a finite difference grid, and decomposing it into a plane-wave density function. Binaural impulse responses are then constructed in the spherical harmonics domain by combining the decomposed grid data with free-field head-related transfer functions. The effects of designing a spherical array in a Cartesian grid are studied, and emphasis is given to the relationships between array sampling and the spatial and spectral design parameters of several finite-difference schemes.
Abstract:
Collisions are an innate part of the function of many musical instruments. Due to the nonlinear nature of contact forces, special care has to be taken in the construction of numerical schemes for simulation and sound synthesis. Finite difference schemes and other time-stepping algorithms used for musical instrument modelling purposes are normally arrived at by discretising a Newtonian description of the system. However, because impact forces are non-analytic functions of the phase space variables, algorithm stability can rarely be established this way. This paper presents a systematic approach to deriving energy conserving schemes for frictionless impact modelling. The proposed numerical formulations follow from discretising Hamilton's equations of motion, generally leading to an implicit system of nonlinear equations that can be solved with Newton's method. The approach is first outlined for point mass collisions and then extended to distributed settings, such as vibrating strings and beams colliding with rigid obstacles. Stability and other relevant properties of the proposed approach are discussed and further demonstrated with simulation examples. The methodology is exemplified through a case study on tanpura string vibration, with the results confirming the main findings of previous studies on the role of the bridge in sound generation with this type of string instrument.
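The core idea, discretising the Hamiltonian so that the numerical energy is conserved exactly and solving the resulting implicit update iteratively, can be illustrated on the simplest case: a point mass hitting a rigid barrier. This is a sketch under assumed parameters, using a discrete-gradient force and a plain fixed-point iteration in place of the Newton solve described in the paper; the quadratic penalty potential is also our choice.

```python
K = 1000.0  # penalty stiffness of the rigid barrier at x = 0 (assumed value)

def phi(x):
    """Quadratic penalty potential: active only when the mass penetrates x < 0."""
    pen = min(x, 0.0)
    return 0.5 * K * pen * pen

def step(x, p, dt, m=1.0, iters=60):
    """One step of a discrete-gradient scheme for H = p^2/(2m) + phi(x).
    The contact force uses the difference quotient (phi(x') - phi(x))/(x' - x),
    which makes the discrete energy exactly conserved; a plain fixed-point
    iteration stands in here for the Newton solve used in the paper."""
    a = x  # initial guess for the new position x'
    for _ in range(iters):
        dx = a - x
        if abs(dx) < 1e-14:
            p_new = p + dt * (-K * min(x, 0.0))   # fall back to -phi'(x)
        else:
            p_new = p - dt * (phi(a) - phi(x)) / dx
        a = x + dt * (p + p_new) / (2.0 * m)
    return a, p_new

# A unit mass moving towards the barrier bounces back; the discrete Hamiltonian
# is conserved to round-off through the impact.
x, p, dt = 0.5, -1.0, 1e-3
H0 = 0.5 * p * p + phi(x)
for _ in range(2000):
    x, p = step(x, p, dt)
H1 = 0.5 * p * p + phi(x)
```

The exact conservation follows from the telescoping identity (p' + p)(p' - p)/2m = -(phi(x') - phi(x)) built into the update, irrespective of the stiffness or step size.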
Abstract:
Social signals, and the interpretation of the information they carry, are of high importance in Human Computer Interaction. Often used for affect recognition, the cues within these signals are displayed in various modalities. Fusion of multi-modal signals is a natural and interesting way to improve automatic classification of emotions transported in social signals. In most present studies, of uni-modal affect recognition as well as multi-modal fusion, decisions are forced onto fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus.
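One way to picture event-driven accumulation in a vector space is the sketch below, where asynchronous per-modality events are weighted down by age and summed into a single class-score vector. The exponential decay rule and all names are our assumptions for illustration, not the paper's actual fusion operator.

```python
def fuse_events(events, t_now, half_life=2.0):
    """Event-driven fusion sketch: each event is (timestamp, class_scores) from
    any modality; events are weighted down exponentially with age and summed
    into one normalized score vector. The decay-and-sum rule and all names are
    illustrative assumptions, not the paper's actual fusion operator."""
    n_classes = len(events[0][1])
    fused = [0.0] * n_classes
    for t, scores in events:
        weight = 0.5 ** ((t_now - t) / half_life)  # halves every half_life seconds
        for i, s in enumerate(scores):
            fused[i] += weight * s
    total = sum(fused)
    return [f / total for f in fused] if total else fused
```

Because events from different modalities need not be aligned to shared segments, a prediction can be read out at any instant, which matches the real-time motivation in the abstract.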
Abstract:
Tanpura string vibrations have been investigated previously using numerical models based on energy conserving schemes derived from a Hamiltonian description in one-dimensional form. Such time-domain models have the property that, for the lossless case, the numerical Hamiltonian (representing the total energy of the system) can be proven to be constant from one time step to the next, irrespective of any of the system parameters; in practice the Hamiltonian can be shown to be conserved within machine precision. Models of this kind can reproduce the jvari effect, which results from the bridge-string interaction. However, the one-dimensional formulation has recently been shown to fail to replicate the jvari's strong dependence on the thread placement. As a first step towards simulations which accurately emulate this sensitivity to the thread placement, a two-dimensional model is proposed, incorporating coupling of controllable level between the two string polarisations at the string termination opposite from the barrier. In addition, a friction force acting when the string slides across the bridge in the horizontal direction is introduced, thus effecting a further damping mechanism. In this preliminary study, the string is terminated at the position of the thread. As in the one-dimensional model, an implicit scheme has to be used to solve the system, employing Newton's method to calculate the updated positions and momenta of each string segment. The two-dimensional model is proven to be energy conserving when the loss parameters are set to zero, irrespective of the coupling constant. Both frequency-dependent and frequency-independent losses are then added to the string, so that the model can be compared to analogous instruments. The influence of the coupling and the bridge friction are investigated.
Abstract:
This paper discusses compact-stencil finite difference time domain (FDTD) schemes for approximating the 2D wave equation in the context of digital audio. Stability, accuracy, and efficiency are investigated and new ways of viewing and interpreting the results are discussed. It is shown that if a tight accuracy constraint is applied, implicit schemes outperform explicit schemes. The paper also discusses the relevance to digital waveguide mesh modelling, and highlights the optimally efficient explicit scheme.
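A minimal instance of the explicit five-point scheme for the 2D wave equation discussed above, with the familiar stability limit λ = c·Δt/Δx ≤ 1/√2 for this stencil. This is a toy sketch: grid size and excitation are arbitrary choices, and it is not the paper's optimized or implicit scheme.

```python
def fdtd_2d(n=32, steps=20, courant=0.7):
    """Explicit five-point FDTD scheme for the 2D wave equation on an n-by-n
    grid with fixed (Dirichlet) boundaries. courant = c*dt/dx must satisfy
    courant <= 1/sqrt(2) (about 0.7071) for stability of this stencil.
    Grid size and excitation are arbitrary toy choices."""
    lam2 = courant ** 2
    u = [[0.0] * n for _ in range(n)]
    u[n // 2][n // 2] = 1.0                # initial displacement impulse
    u_prev = [row[:] for row in u]         # zero initial velocity
    for _ in range(steps):
        u_next = [[0.0] * n for _ in range(n)]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                lap = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                       - 4.0 * u[i][j])
                u_next[i][j] = 2.0 * u[i][j] - u_prev[i][j] + lam2 * lap
        u_prev, u = u, u_next
    return u
```

Running the same function with `courant` above the limit makes the highest spatial mode grow geometrically, which is the instability the accuracy/stability trade-off in the paper revolves around.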
Abstract:
We present an improved, biologically inspired and multiscale keypoint operator. Models of single- and double-stopped hypercomplex cells in area V1 of the mammalian visual cortex are used to detect stable points of high complexity at multiple scales. Keypoints represent line and edge crossings, junctions and terminations at fine scales, and blobs at coarse scales. They are detected by applying first and second derivatives to responses of complex cells in combination with two inhibition schemes to suppress responses along lines and edges. A number of optimisations make our new algorithm much faster than previous biologically inspired models, achieving real-time performance on modern GPUs and competitive speeds on CPUs. In this paper we show that the keypoints exhibit state-of-the-art repeatability in standardised benchmarks, often yielding best-in-class performance. This makes them interesting both in biological models and as a useful detector in practice. We also show that keypoints can be used as a data selection step, significantly reducing the complexity in state-of-the-art object categorisation. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
In competitive electricity markets with deep concerns for the efficiency level, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. This paper presents DemSi, a demand response simulator that allows studying demand response actions and schemes in distribution networks. It undertakes the technical validation of the solution using realistic network simulation based on PSCAD. The use of DemSi by a retailer in a situation of energy shortage is presented. Load reduction is obtained using a consumer-based price elasticity approach supported by real-time pricing. Non-linear programming is used to maximize the retailer's profit, determining the optimal solution for each envisaged load reduction. The solution determines the price variations considering two different approaches, with prices determined either for each individual consumer or for each consumer type, showing that the approach used does not significantly influence the retailer's profit. The paper presents a case study in a 33-bus distribution network with 5 distinct consumer types. The obtained results and conclusions show the adequacy of the used methodology and its importance for supporting retailers' decision making.
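The price-elasticity mechanism can be sketched in a few lines: a linear elasticity model gives the consumers' load response to a real-time price, and a grid search over candidate prices stands in for the non-linear program that maximizes the retailer's profit under an energy-shortage cap. All parameter values and the linear response model are assumptions; DemSi's network-validated model is far richer.

```python
def optimal_price(L0=100.0, P0=0.10, eps=-0.8, cost=0.08, L_max=90.0):
    """Toy real-time-pricing sketch. L0: base load, P0: base price, eps: price
    elasticity (negative), cost: retailer's supply cost, L_max: load cap in an
    energy-shortage situation. A grid search stands in for the non-linear
    program of the paper; all numbers are illustrative assumptions."""
    best_price, best_profit = None, float("-inf")
    for k in range(1001):
        P = P0 * (1.0 + k / 1000.0)              # candidate prices in [P0, 2*P0]
        L = L0 * (1.0 + eps * (P - P0) / P0)     # elastic load response
        if L > L_max:
            continue                             # energy-shortage cap violated
        profit = L * (P - cost)
        if profit > best_profit:
            best_price, best_profit = P, profit
    return best_price, best_profit
```

With these numbers the profit-maximizing price lies strictly inside the feasible range, so the shortage constraint shapes the feasible set without binding at the optimum.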
Abstract:
Network survivability is a very interesting technical field of study as well as a critical concern in network design. Given that more and more data is carried over communication networks, a single failure can interrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using this available capacity. This thesis deals with the design of survivable optical networks using p-cycle-based protection schemes. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths of all requests are defined a priori. Most existing work relies on heuristics or on solution methods that struggle with large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods able to tackle larger problem instances than those already presented in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on column generation, a technique well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles.
We first propose formulations for the master and pricing problems, as well as a first column generation algorithm, for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, within a reasonable time, than those obtained by existing methods. Subsequently, a more compact formulation is proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that manage to find good solutions. We also undertake a systematic comparison between p-cycles and classical shared protection schemes, carrying out a precise comparison using unified, column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection, under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are employed in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks in the presence of availability requirements and obtain first lower bounds for this problem.
Abstract:
Financial securities are often modeled by stochastic differential equations (SDEs). These equations can describe the behavior of the asset, and sometimes also of certain model parameters. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behavior of the asset and of its variance. The Heston model is very attractive since it admits semi-analytical formulas for certain derivatives, as well as a certain realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These new algorithms aim to accelerate the well-known Broadie and Kaya (2006) algorithm; to do so, we use, among other things, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to accelerate it. Instead of using the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm is an improvement of the first. Instead of using the true density of the integrated variance, we use the approximation of Smith (2007). This improvement reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method. However, we still seek to accelerate the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance random variable. According to Stewart et al. (2007), it is possible to approximate a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) if the time step is small) by a single gamma random variable.
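The gamma moment-matching step mentioned at the end has a closed form: a gamma variable with shape k and scale θ has mean kθ and variance kθ², so matching a target mean m and variance v gives k = m²/v and θ = v/m. A small sketch (function names are ours, and the target moments are illustrative rather than those of an actual integrated-variance law):

```python
import random

def matched_gamma_params(mean, var):
    """Moment-match a gamma(shape k, scale theta) distribution to a target mean
    and variance: mean = k*theta and var = k*theta**2 give k = mean**2/var and
    theta = var/mean."""
    k = mean * mean / var
    theta = var / mean
    return k, theta

def sample_matched_gamma(mean, var, rng=random):
    """Draw one sample from the moment-matched gamma distribution."""
    k, theta = matched_gamma_params(mean, var)
    return rng.gammavariate(k, theta)
```

In the thesis's setting, `mean` and `var` would be the conditional moments of the time-integrated variance, so each draw replaces the expensive exact simulation of that quantity.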
Abstract:
Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extension and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed for comparisons between the simulated and observed data with the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective schemes. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of time, intensity and region of occurrence of the events, compared to the other convective schemes and the explicit scheme.
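The verification statistics named in the abstract have standard definitions, sketched here for two equal-length series. These are the generic formulas, not tied to any WRF output format.

```python
import math

def verification_stats(obs, sim):
    """Mean absolute error, root mean square error and Pearson correlation
    between an observed and a simulated series of equal length."""
    n = len(obs)
    mae = sum(abs(o - s) for o, s in zip(obs, sim)) / n
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / n)
    mean_o = sum(obs) / n
    mean_s = sum(sim) / n
    cov = sum((o - mean_o) * (s - mean_s) for o, s in zip(obs, sim))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs))
    sd_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim))
    return mae, rmse, cov / (sd_o * sd_s)
```

Note that a simulation with a constant bias has perfect correlation but non-zero MAE and RMSE, which is why the study reports all three.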
Abstract:
In [4], Guillard and Viozat propose a finite volume method for the simulation of inviscid steady as well as unsteady flows at low Mach numbers, based on a preconditioning technique. The scheme satisfies the results of a single scale asymptotic analysis in a discrete sense and has the advantage that it can be derived by a slight modification of the dissipation term within the numerical flux function. Unfortunately, it can be observed in numerical experiments that the preconditioned approach combined with an explicit time integration scheme turns out to be unstable if the time step Δt does not satisfy the requirement of being O(M²) as the Mach number M tends to zero, whereas the corresponding standard method remains stable up to Δt = O(M) as M → 0, which results from the well-known CFL condition. We present a comprehensive mathematical substantiation of this numerical phenomenon by means of a von Neumann stability analysis, which reveals that, in contrast to the standard approach, the dissipation matrix of the preconditioned numerical flux function possesses an eigenvalue growing like M⁻² as M tends to zero, thus causing the diminishment of the stability region of the explicit scheme. Thereby, we present statements for both the standard preconditioner used by Guillard and Viozat [4] and the more general one due to Turkel [21]. The theoretical results are afterwards confirmed by numerical experiments.
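The mechanics of a von Neumann analysis can be shown on a much simpler model problem than the preconditioned flux of the paper: first-order upwind for the advection equation u_t + a·u_x = 0. Substituting the Fourier mode u_jⁿ = gⁿ·e^{ijθ} into the scheme gives the amplification factor g(θ) = 1 − c·(1 − e^{−iθ}) with CFL number c = a·Δt/Δx, and stability requires max|g| ≤ 1, i.e. c ≤ 1. This toy stand-in is ours; the paper applies the same machinery to the full dissipation matrix.

```python
import cmath
import math

def max_amplification(c, n_modes=256):
    """Largest |g(theta)| of first-order upwind for u_t + a*u_x = 0, sampled
    over n_modes Fourier angles. Stable iff the return value is <= 1, which
    happens exactly for CFL numbers 0 <= c <= 1."""
    gmax = 0.0
    for k in range(n_modes):
        theta = 2.0 * math.pi * k / n_modes
        g = 1.0 - c * (1.0 - cmath.exp(-1j * theta))
        gmax = max(gmax, abs(g))
    return gmax
```

The paper's result is the analogue of watching this maximum blow up: for the preconditioned flux, an eigenvalue of the dissipation matrix scales like M⁻², so keeping max|g| ≤ 1 forces Δt = O(M²).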
Abstract:
Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled with a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov’s method yields considerably better results than either one of the two methods alone.
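The two-stage structure can be demonstrated on a toy control problem: match an unknown "optimal pulse" on a time grid, with a quadratic cost standing in for the gate infidelity. Stage one optimizes only two shape parameters (a coarse grid search stands in for the Nelder-Mead simplex); stage two treats the result as a free function on the time grid and refines it with plain gradient descent (standing in for Krotov's method). The target pulse and all numbers are illustrative assumptions, not the transmon model of the paper.

```python
import math

T, N = 1.0, 50
ts = [i * T / N for i in range(N)]
# Hidden optimum: mostly two low harmonics, plus a component the stage-1
# parametrization cannot represent.
target = [0.8 * math.sin(2 * math.pi * t) + 0.1 * math.sin(6 * math.pi * t)
          + 0.05 * math.sin(10 * math.pi * t) for t in ts]

def cost(f):
    """Toy stand-in for the infidelity: mean squared distance to the target."""
    return sum((a - b) ** 2 for a, b in zip(f, target)) / N

# Stage 1: search a pruned 2-parameter space (stands in for Nelder-Mead).
stage1_cost, f = None, None
for k1 in range(21):
    for k3 in range(21):
        g = [k1 / 10.0 * math.sin(2 * math.pi * t)
             + k3 / 10.0 * math.sin(6 * math.pi * t) for t in ts]
        c = cost(g)
        if stage1_cost is None or c < stage1_cost:
            stage1_cost, f = c, g

# Stage 2: refine the control pointwise by gradient descent (stands in for
# Krotov's method); the gradient of the quadratic cost is proportional to
# (f_i - target_i), with constants absorbed into the step size.
for _ in range(200):
    f = [fi - 0.1 * (fi - ti) for fi, ti in zip(f, target)]
stage2_cost = cost(f)
```

The qualitative behaviour mirrors the paper: stage one lands quickly in the right basin but its pruned parametrization leaves a residual, which the unconstrained gradient stage then drives down by many orders of magnitude.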
Abstract:
This thesis deals with the so-called Basis Set Superposition Error (BSSE) from both a methodological and a practical point of view. The purpose of the present thesis is twofold: (a) to contribute a step ahead in the correct characterization of weakly bound complexes and (b) to shed light on the understanding of the actual implications of the basis set extension effects in ab initio calculations and contribute to the BSSE debate. The existing BSSE-correction procedures are deeply analyzed, compared, validated and, if necessary, improved. A new interpretation of the counterpoise (CP) method is used in order to define counterpoise-corrected descriptions of the molecular complexes. This novel point of view allows for a study of the BSSE effects not only on the interaction energy but also on the potential energy surface and, in general, on any property derived from the molecular energy and its derivatives. A program has been developed for the calculation of CP-corrected geometry optimizations and vibrational frequencies, also using several counterpoise schemes for the case of molecular clusters. The method has also been implemented in the Gaussian98 revA10 package. The Chemical Hamiltonian Approach (CHA) methodology has also been implemented at the RHF and UHF levels of theory for an arbitrary number of interacting systems, using an algorithm based on block-diagonal matrices. Along with the methodological development, the effects of the BSSE on the properties of molecular complexes have been discussed in detail. The CP and CHA methodologies are used for the determination of BSSE-corrected molecular complex properties related to the potential energy surfaces and the molecular wavefunction, respectively. First, the behaviour of both BSSE-correction schemes is systematically compared at different levels of theory and basis sets for a number of hydrogen-bonded complexes.
The Complete Basis Set (CBS) limit of both uncorrected and CP-corrected molecular properties like stabilization energies and intermolecular distances has also been determined, showing the capital importance of the BSSE correction. Several controversial topics of the BSSE correction are addressed as well. The counterpoise method is applied to internal rotational barriers, and the importance of the nuclear relaxation term is pointed out. The viability of the CP method for dealing with charged complexes and the BSSE effects on the double-well PES of blue-shifted hydrogen bonds are also studied in detail. In the case of molecular clusters, the effect of the high-order BSSE terms introduced with the hierarchical counterpoise scheme is also determined. The effect of the BSSE on electron density-related properties is also addressed. The first-order electron density obtained with the CHA/F and CHA/DFT methodologies was used to assess, both graphically and numerically, the redistribution of the charge density upon BSSE correction. Several tools like the Atoms in Molecules topological analysis, density difference maps, Quantum Molecular Similarity, and Chemical Energy Component Analysis were used to deeply analyze, for the first time, the BSSE effects on the electron density of several hydrogen-bonded complexes of increasing size. The indirect effect of the BSSE on intermolecular perturbation theory results is also pointed out. It is shown that for a BSSE-free SAPT study of hydrogen fluoride clusters, the use of a counterpoise-corrected PES is essential in order to determine the proper molecular geometry at which to perform the SAPT analysis.
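The counterpoise recipe referred to throughout can be stated compactly: each monomer is recomputed in the full dimer basis (its partner replaced by ghost functions), the BSSE is the spurious stabilization each fragment gains from the partner's basis functions, and the CP-corrected interaction energy removes it. The sketch below uses illustrative energies in hartree, not computed results.

```python
def cp_interaction_energy(E_AB, E_A_mono, E_B_mono, E_A_dimer, E_B_dimer):
    """Boys-Bernardi counterpoise correction for a dimer AB.
    E_A_mono / E_B_mono: monomers in their own basis sets;
    E_A_dimer / E_B_dimer: monomers recomputed in the full dimer basis.
    Returns the raw interaction energy, the BSSE, and the CP-corrected value."""
    E_int_raw = E_AB - E_A_mono - E_B_mono
    # Each monomer is variationally lowered by the extra (ghost) functions,
    # so both BSSE contributions are non-negative.
    bsse = (E_A_mono - E_A_dimer) + (E_B_mono - E_B_dimer)
    return E_int_raw, bsse, E_int_raw + bsse
```

Since the BSSE is non-negative, the CP-corrected interaction energy is always less negative than the raw one, i.e. the apparent binding is weakened, which is the effect the CBS-limit comparisons in the abstract quantify.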
Abstract:
The major technical objectives of the RC-NSPES are to provide a framework for the concurrent operation of reactive and pro-active security functions to deliver efficient and optimised intrusion detection schemes, as well as enhanced and highly correlated rule sets for more effective alert management and root-cause analysis. The design and implementation of the RC-NSPES solution include a number of innovative features in terms of real-time programmable embedded hardware (FPGA) deployment as well as in the integrated management station. These have been devised so as to deliver enhanced detection of attacks and contextualised alerts against threats that can arise from both the network layer and the application layer protocols. The resulting architecture represents an efficient and effective framework for the future deployment of network security systems.