882 results for Regularization scheme
Abstract:
The European Union Emissions Trading Scheme (EU ETS) is a cornerstone of the European Union's policy to combat climate change and its key tool for reducing industrial greenhouse gas emissions cost-effectively. The purpose of the present work is to evaluate the influence of the CO2 opportunity cost on the Spanish wholesale electricity price. Our sample covers all of Phase II of the EU ETS and the first year of Phase III, from January 2008 to December 2013. A vector error correction model (VECM) is applied to estimate not only long-run equilibrium relations but also short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The four commodity prices are modeled as jointly endogenous variables, with air temperature and renewable energy as exogenous variables. We find a long-run relationship (cointegration) between the electricity price, the carbon price, and fuel prices. By estimating the dynamic pass-through of the carbon price into the electricity price for different periods of our sample, we observe a weakening of the link between carbon and electricity prices as a result of the collapse in CO2 prices, which compromises the efficacy of the system in reaching the proposed environmental goals. This conclusion is in line with the need to shape new policies within the framework of the EU ETS that prevent excessively low carbon prices over extended periods of time.
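The long-run pass-through idea can be illustrated with a minimal Engle-Granger-style sketch: ordinary least squares on synthetic random-walk prices. The coefficients, noise levels and sample size below are invented, and plain OLS is a much simpler stand-in for the paper's VECM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily prices; the "true" pass-through coefficients 0.8 (carbon)
# and 0.5 (gas) are invented for the demonstration.
n = 1500
carbon = 15.0 + np.cumsum(rng.normal(0, 0.2, n))   # random-walk carbon price
gas = 25.0 + np.cumsum(rng.normal(0, 0.3, n))      # random-walk gas price
elec = 0.8 * carbon + 0.5 * gas + rng.normal(0, 1.0, n)

# Engle-Granger step 1: OLS estimate of the long-run (cointegrating)
# relation; the carbon coefficient is the long-run pass-through estimate.
X = np.column_stack([np.ones(n), carbon, gas])
beta, *_ = np.linalg.lstsq(X, elec, rcond=None)
resid = elec - X @ beta                            # should be stationary
print(f"long-run carbon pass-through ~ {beta[1]:.2f}")
```

A full replication would test the residuals for stationarity and then fit the VECM on the differenced series; this sketch only shows the cointegrating regression.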
Abstract:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, since vehicles themselves have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can have disastrous consequences, including loss of life or financial fraud. To address these issues, a learning-automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of settings, a multimedia-based healthcare application is taken for illustration. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one member of each group as a cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, and each automaton learns from the environment and takes adaptive decisions to identify malicious activity in the network. The stochastic environment in which an automaton performs its actions issues rewards and penalties, and the automaton updates its action probability vector after receiving the reinforcement signal from the environment. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO.
The results obtained indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes compared with existing schemes.
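The action-probability update described above can be sketched with a standard linear reward-penalty (L_R-P) automaton. The learning rates, the two-action environment and its reward probabilities are assumptions for illustration, not the paper's configuration.

```python
import random

def update(p, i, reward, a=0.1, b=0.05):
    # Linear reward-penalty (L_R-P) update of the action probability
    # vector p after taking action i; learning rates a, b are assumed.
    r = len(p)
    if reward:
        return [p[j] + a * (1 - p[j]) if j == i else (1 - a) * p[j]
                for j in range(r)]
    return [(1 - b) * p[j] if j == i else b / (r - 1) + (1 - b) * p[j]
            for j in range(r)]

# Two-action stochastic environment: action 0 is rewarded more often,
# so its probability should grow (reward probabilities are invented).
random.seed(1)
reward_prob = [0.9, 0.3]
p = [0.5, 0.5]
for _ in range(5000):
    i = 0 if random.random() < p[0] else 1
    p = update(p, i, random.random() < reward_prob[i])
print(f"P(action 0) ~ {p[0]:.2f}")
```

Both branches keep the probabilities summing to one, which is what allows the automaton to keep sampling and adapting indefinitely.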
Abstract:
The shifted Legendre orthogonal polynomials are used for the numerical solution of a new formulation of the multi-dimensional fractional optimal control problem (M-DFOCP) with a quadratic performance index. The fractional derivatives are described in the Caputo sense. The Lagrange multiplier method for the constrained extremum and the operational matrix of fractional integrals are used together with the properties of the shifted Legendre orthonormal polynomials. The method reduces the M-DFOCP to a simpler problem that consists of solving a system of algebraic equations. To confirm the efficiency and accuracy of the proposed scheme, several test problems are implemented and their approximate solutions reported.
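The shifted Legendre basis at the heart of such schemes can be generated with the standard three-term recurrence. The snippet below only builds the basis and checks its orthogonality on [0,1]; it does not reproduce the operational matrix of fractional integrals.

```python
import numpy as np

def shifted_legendre(n, x):
    # Shifted Legendre polynomials P~_0..P~_n on [0,1] via the recurrence
    # (k+1) P~_{k+1} = (2k+1)(2x-1) P~_k - k P~_{k-1}.
    x = np.asarray(x, dtype=float)
    P = np.empty((n + 1, x.size))
    P[0] = 1.0
    if n >= 1:
        P[1] = 2 * x - 1
    for k in range(1, n):
        P[k + 1] = ((2 * k + 1) * (2 * x - 1) * P[k] - k * P[k - 1]) / (k + 1)
    return P

# Orthogonality check: integral over [0,1] of P~_m P~_n = delta_mn / (2n+1).
x = np.linspace(0.0, 1.0, 20001)
P = shifted_legendre(4, x)
dx = x[1] - x[0]
w = np.full(x.size, dx)
w[0] = w[-1] = dx / 2                     # trapezoid quadrature weights
G = (P * w) @ P.T                         # Gram matrix, should be diagonal
print(np.round(np.diag(G), 4))
```

The diagonal of G approximates 1/(2n+1), i.e. [1, 1/3, 1/5, 1/7, 1/9], and the off-diagonal entries vanish to quadrature accuracy.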
Abstract:
Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013
Abstract:
Waveform tomographic imaging of crosshole georadar data is a powerful method to investigate the shallow subsurface because of its ability to provide images of pertinent petrophysical parameters with extremely high spatial resolution. All current crosshole georadar waveform inversion strategies are based on the assumption of frequency-independent electromagnetic constitutive parameters. However, in reality, these parameters are known to be frequency-dependent and complex and thus recorded georadar data may show significant dispersive behavior. In this paper, we evaluate synthetically the reconstruction limits of a recently published crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. Our results indicate that, when combined with a source wavelet estimation procedure that provides a means of partially accounting for the frequency-dependent effects through an "effective" wavelet, the inversion algorithm performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework treats the registration task as a weighted minimal surface problem: the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of non-flat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. We compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. The proposed FastGAF method compares favorably against Demons, both in registration speed and quality, and over the range of example applications it consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
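The splitting idea, two copies of the unknown tied together by an augmented Lagrangian, can be illustrated on a 1-D toy problem. A quadratic smoothness term stands in for the TV-like regularizer, and all sizes and weights below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, rho = 100, 5.0, 1.0                  # problem size and weights (assumed)

# Noisy 1-D signal and forward-difference operator D.
f = np.sign(np.sin(np.linspace(0, 6, n))) + 0.3 * rng.normal(size=n)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# Splitting: data term acts on u, smoothness term on v; the constraint
# u = v is enforced via an augmented Lagrangian with scaled dual y (ADMM).
u = v = y = np.zeros(n)
A = 2 * lam * D.T @ D + rho * np.eye(n)
for _ in range(500):
    u = (2 * f + rho * (v - y)) / (2 + rho)  # data-term update (closed form)
    v = np.linalg.solve(A, rho * (u + y))    # regularity-term update
    y = y + u - v                            # dual ascent on the constraint

# Direct solution of the coupled problem for comparison.
u_direct = np.linalg.solve(np.eye(n) + lam * D.T @ D, f)
print(f"max deviation from direct solve: {np.abs(u - u_direct).max():.2e}")
```

For this quadratic toy the split iterates converge to the same minimizer as the direct solve; the benefit of the splitting shows up when the regularity update is a nonsmooth (TV-like) proximal step instead.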
Abstract:
Solid-state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as 13C-13C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
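The Laplacian-based Tikhonov step can be sketched in one dimension; the Ramachandran case is 2-D with the analogous five-point stencil, and the forward model A, the sizes and the regularization weight below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined toy problem: 20 noisy measurements of a 50-point
# smooth distribution (far fewer data than unknowns, as in the text).
m, n = 20, 50
A = rng.normal(size=(m, n))                      # hypothetical forward model
g_true = np.exp(-0.5 * ((np.arange(n) - 25) / 5.0) ** 2)
b = A @ g_true + 0.01 * rng.normal(size=m)

# Second-difference (Laplacian) penalty: penalize the curvature of g
# rather than its norm, as in the extended Tikhonov functional.
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lam = 1e-2
g_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print(f"data misfit: {np.linalg.norm(A @ g_hat - b):.3f}")
```

The normal-equations matrix is invertible because the Laplacian penalty fills in the null space of the underdetermined A, which is exactly the stabilizing role regularization plays here.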
Abstract:
Thesis (Master of Science in Electrical Engineering), UANL, 2014.
Abstract:
This thesis surveys the advantages and disadvantages of using the dynamic functional programming language Scheme for video game development. The method is first based on a more theoretical approach: a study of the programming needs raised by this type of development, together with a detailed description of the Scheme features relevant to video game development, is given to put the subject in context. A practical approach is then taken through the development of two video games of increasing complexity: Space Invaders and Lode Runner. Developing these games led to extending Scheme with several domain-specific languages and libraries, notably an object-oriented programming system and a coroutine system. The experience gained through this development is finally compared with that of other video game developers in the industry who have used Scheme to create commercial titles. In summary, using this language made it possible to reach a high level of abstraction that favors the modularity of the developed games without affecting their performance.
Abstract:
To optimize the in-memory representation of Scheme records in the Gambit compiler, we introduced a type-annotation system and vectors containing an abbreviated representation of records. The latter omit the reference to the type descriptor and the header normally present on every record, and instead use a typing tree covering the whole memory to locate the vector containing a reference. These new features are implemented through changes to the Gambit runtime. We introduce new primitives into the language and modify the existing architecture to handle the new data types correctly. The garbage collector must be modified to account for records containing heterogeneous values with irregular alignments, and for references contained in other objects. Management of the typing tree must also be automatic. We then run a series of performance tests to determine whether these new primitives yield gains. We observe a major performance improvement in allocation and in garbage-collector behavior for large typed records and for vectors of records, typed or not. Slight overheads are nevertheless incurred on field accesses and, in the case of vectors of records, on access to the type descriptor.
Abstract:
In recent years, protection of information in digital form has become more important. Image and video encryption has applications in various fields, including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels, and various encryption schemes are available today to deal with these information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an "attack on confidentiality". Secure transmission through insecure channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in the successive transmission of information in real time. To minimize this latency, efficient encryption algorithms are needed: an encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric-key block ciphers such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the Escrowed Encryption Standard (EES) are not efficient when the data size is large. With fast computing tools and communication networks available at relatively low cost today, these encryption standards are not as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data.
It has been shown that public-key algorithms are not a substitute for symmetric-key algorithms: public-key algorithms are slow, whereas symmetric-key algorithms generally run much faster, and public-key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric-key encryption scheme entitled "Matrix Array Symmetric Key (MASK) encryption", based on matrix and array manipulations, has been conceived and developed. Fast conversion is achieved through the matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric-key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher that operates on plaintext message (or image) blocks of 128 bits with a 128-bit secret key, producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler and consequently much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
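The key avalanche effect mentioned above is measured by flipping one key bit, encrypting the same block, and counting how many ciphertext bits change. MASK itself is not reproduced here; SHA-256 of key||block stands in as a generic keyed transform purely to illustrate the metric.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

block = b"0123456789abcdef"             # 128-bit plaintext block
key = bytearray(b"fedcba9876543210")    # 128-bit key

# "Encrypt" with the stand-in keyed transform, truncated to 128 bits.
c1 = hashlib.sha256(bytes(key) + block).digest()[:16]
key[0] ^= 0x01                          # flip a single key bit
c2 = hashlib.sha256(bytes(key) + block).digest()[:16]
print(f"{bit_diff(c1, c2)} of 128 ciphertext bits changed")
```

An ideal avalanche flips about half of the 128 output bits; averaging this count over many random keys and blocks gives the kind of statistic used to compare a cipher against AES.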
Abstract:
Health insurance has become a necessity for the common man, next to food, clothing and shelter. Financing health expenses, whether for catastrophic or even frequently contracted illnesses, is a major cause of mental agony for the common man. The cost of care may sometimes result in the complete erosion of family savings or may even lead to indebtedness, as many studies on the causes of rural indebtedness bear testimony (Jayalakshmi, 2006). A suitable cover by way of health insurance is all that is required to cope with such situations. Health care insurance rightly provides the mechanism for both individuals and families to mitigate the financial burden of medical expenses in the present context. Hence a well-designed, affordable health insurance policy is the need of the hour. It is therefore significant to study the extent to which beneficiaries in Kerala make use of the benefits provided by a social health insurance scheme like RSBY-CHIS. Based on the above points, this study assumes national relevance even though its geographical area is limited to two districts of Kerala. The findings of the study will bring forth valuable inputs on the services availed by the beneficiaries of RSBY-CHIS, help in taking appropriate measures to improve the effectiveness of the scheme so that maximum quality benefit can be availed by the poorest of the poor, and help develop the scheme into a real dawn of a new era of health for them.
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. Including mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster-head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on metrics such as the number of cluster-head transitions, average residual energy, number of alive nodes, and number of messages lost.
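A mobility-factor-based cluster-head choice might look like the following sketch. The fitness formula, its weights and the node fields are hypothetical illustrations, not the paper's algorithm.

```python
# Hypothetical node records: speed (m/s) and normalized residual energy.
nodes = [
    {"id": 1, "speed": 2.0, "energy": 0.9},
    {"id": 2, "speed": 9.0, "energy": 1.0},
    {"id": 3, "speed": 1.0, "energy": 0.4},
]

def fitness(node, max_speed=10.0):
    # Mobility factor: slower (more stable) nodes score higher; combine
    # with residual energy. The 0.6/0.4 weighting is an assumption.
    mobility = 1.0 - node["speed"] / max_speed
    return 0.6 * mobility + 0.4 * node["energy"]

head = max(nodes, key=fitness)
print(f"cluster head: node {head['id']}")
```

Favoring slow, energy-rich nodes as cluster heads reduces the re-clustering churn that purely stochastic selection (as in LEACH-style protocols) tends to produce under mobility.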
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system as convex functions of the subcarrier powers for a given channel coder, interleaver and channel response. The power optimization is therefore a convex optimization problem that can be solved numerically with great efficiency. With the proposed scheme, a near-optimum power allocation that minimizes the FER or BER under a constant total transmission power constraint is obtained for a given coded OFDM system and channel response.
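The convex power-allocation step can be illustrated with a Chernoff-style surrogate: a per-subcarrier error bound exp(-g_i p_i) minimized subject to a total power budget. The exponential objective and the channel gains below are assumptions, not the paper's exact FER/BER bounds.

```python
import numpy as np

g = np.array([0.2, 1.0, 2.5, 4.0])      # assumed sub-channel gains
P = 4.0                                  # total power budget

def alloc(mu):
    # KKT stationarity g_i * exp(-g_i p_i) = mu gives
    # p_i = ln(g_i / mu) / g_i, clipped at zero (inactive subcarriers).
    return np.maximum(0.0, np.log(g / mu) / g)

# Total allocated power is decreasing in mu: bisect for the budget P.
lo, hi = 1e-9, g.max()
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if alloc(mid).sum() > P else (lo, mid)
p = alloc((lo + hi) / 2)
print("allocation:", np.round(p, 3), "objective:", np.exp(-g * p).sum())
```

Weak subcarriers receive little or no power, mirroring the abstract's point that power spent on deeply faded subcarriers is wasted; the same bisection-on-the-multiplier pattern solves any separable convex allocation of this form.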