970 results for Piecewise Convex Curves


Relevance: 20.00%

Abstract:

It is generally challenging to determine end-to-end delays of applications when maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
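The distributed decomposition described above can be illustrated with a toy sketch: each stage of a pipeline picks its own intermediate deadline from a local utility, while a shared Lagrange multiplier prices the end-to-end deadline constraint. This is a minimal illustration under assumptions, not the paper's algorithm; the logarithmic utility, step size, and iteration count are all illustrative choices.

```python
def assign_deadlines(D, n, steps=2000, lr=0.01):
    """Dual-decomposition sketch: stage i picks its intermediate deadline
    d_i to maximize log(d_i) - lam * d_i locally (closed form d_i = 1/lam),
    while the price lam follows subgradient ascent on the end-to-end
    coupling constraint sum(d_i) <= D."""
    lam = 1.0
    d = [1.0] * n
    for _ in range(steps):
        d = [1.0 / lam] * n          # local per-stage optimum
        lam += lr * (sum(d) - D)     # dual (price) update
        lam = max(lam, 1e-6)         # keep the price positive
    return d
```

With identical logarithmic utilities the stages converge to the fair split d_i = D/n; heterogeneous utilities would yield contention-aware, unequal splits by the same mechanism.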

Relevance: 20.00%

Abstract:

The increase of electricity demand in Brazil, the absence of new major hydroelectric reservoirs, and the growth of environmental concerns lead utilities to seek improved system planning to meet these energy needs. The great diversity of economic, social, climatic, and cultural conditions in the country makes planning of the power system more difficult. The work presented in this paper concerns the development of an algorithm that aims at studying the influence of the issues mentioned above on load curves. Focus is given to residential consumers. The consumption device with the highest influence on the load curve is also identified. The methodology developed gains increasing importance in system planning and operation, namely in the smart grids context.
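As a rough sketch of the kind of analysis described, the device with the highest influence on the load curve can be identified as the largest contributor at the aggregate peak. The device set and hourly numbers below are purely illustrative assumptions, not the paper's data.

```python
# Hypothetical hourly demand profiles (kW, 24 entries each) for common
# residential devices; names and values are illustrative only.
profiles = {
    "electric_shower": [0]*7 + [2.0] + [0]*10 + [0.0, 4.4, 0.5] + [0]*3,
    "refrigerator":    [0.15]*24,
    "lighting":        [0.1]*6 + [0.3]*2 + [0.05]*10 + [0.6]*5 + [0.2],
    "television":      [0]*18 + [0.2]*5 + [0],
}

def peak_driver(profiles):
    """Build the aggregate load curve, find the system peak hour, and
    return the device contributing the most at that hour."""
    total = [sum(p[h] for p in profiles.values()) for h in range(24)]
    peak_hour = max(range(24), key=lambda h: total[h])
    device = max(profiles, key=lambda name: profiles[name][peak_hour])
    return peak_hour, device
```

With these numbers, the evening shower peak dominates the aggregate curve, which mirrors the well-known role of electric showers in Brazilian residential load.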

Relevance: 20.00%

Abstract:

The atomic mean square displacement (MSD) and the phonon dispersion curves (PDCs) of a number of face-centred cubic (fcc) and body-centred cubic (bcc) materials have been calculated from the quasiharmonic (QH) theory, the lowest-order (λ²) perturbation theory (PT), and a recently proposed Green's function (GF) method by Shukla and Hübschle. The latter method includes certain anharmonic effects to all orders of anharmonicity. In order to determine the effect of the range of the interatomic interaction upon the anharmonic contributions to the MSD, we have carried out our calculations for a Lennard-Jones (L-J) solid in the nearest-neighbour (NN) and next-nearest-neighbour (NNN) approximations. These results can be presented in dimensionless units, but if the NN and NNN results are to be compared with each other they must be converted to those of a real solid. When this is done for Xe, the QH MSDs for the NN and NNN approximations are found to differ from each other by about 2%. For the λ² and GF results this difference amounts to 8% and 7%, respectively. For the NN case we have also compared our PT results, which have been calculated exactly, with PT results calculated using a frequency-shift approximation. We conclude that the frequency-shift approximation is a poor one. We have calculated the MSD of five alkali metals, five bcc transition metals, and seven fcc transition metals. The model potentials we have used include the Morse, modified Morse, and Rydberg potentials. In general the results obtained from the Green's function method are in the best agreement with experiment. However, this improvement is mostly qualitative, and the values of the MSD calculated from the Green's function method are not in much better agreement with the experimental data than those calculated from the QH theory. We have calculated the phonon dispersion curves of Na and Cu, using the four-parameter modified Morse potential. In the case of Na, our results for the PDCs are in poor agreement with experiment. In the case of Cu, the agreement between theory and experiment is much better; in addition, the PDCs calculated from the GF method are in better agreement with experiment than those obtained from the QH theory.
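The per-mode quasiharmonic MSD underlying these comparisons is the standard expression (ħ/2mω)·coth(ħω/2k_BT). A minimal sketch that averages this over a set of sampled phonon frequencies (an Einstein-like average; the frequency, mass, and temperature values are illustrative, roughly Xe-like, not taken from the thesis):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K

def msd_qh(freqs, mass, temp):
    """Quasiharmonic mean square displacement: average over sampled
    phonon frequencies (rad/s) of (hbar / 2 m w) * coth(hbar w / 2 kB T)."""
    acc = 0.0
    for w in freqs:
        x = HBAR * w / (2.0 * KB * temp)
        acc += HBAR / (2.0 * mass * w) / math.tanh(x)
    return acc / len(freqs)
```

In the high-temperature limit coth(x) ≈ 1/x and each mode's contribution reduces to the classical k_BT/(mω²); the quantum correction is always positive.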

Relevance: 20.00%

Abstract:

One of the most important problems in the theory of cellular automata (CA) is determining the proportion of cells in a specific state after a given number of time iterations. We approach this problem using patterns in preimage sets - that is, the sets of blocks which iterate to the desired output. This allows us to construct a response curve - the proportion of cells in state 1 after n iterations as a function of the initial proportion. We derive response curve formulae for many two-dimensional deterministic CA rules with L-neighbourhood. For all remaining rules, we find experimental response curves. We also use preimage sets to classify surjective rules. In the last part of the thesis, we consider a special class of one-dimensional probabilistic CA rules. We find response surface formulae for these rules and experimental response surfaces for all remaining rules.
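An experimental response curve of the sort described can be estimated by direct simulation: start from random configurations of density p, iterate, and measure the final density of 1s. The sketch below uses elementary (one-dimensional, radius-1, 2-state) rules for brevity rather than the two-dimensional L-neighbourhood rules of the thesis.

```python
import random

def step(cells, rule):
    """One synchronous update of an elementary CA on a circular lattice;
    `rule` is the Wolfram rule number (0-255)."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def response(rule, p, n_iter=50, width=2000, trials=5, seed=0):
    """One point of an experimental response curve: mean density of 1s
    after n_iter iterations, from Bernoulli(p) initial configurations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cells = [1 if rng.random() < p else 0 for _ in range(width)]
        for _ in range(n_iter):
            cells = step(cells, rule)
        total += sum(cells) / width
    return total / trials
```

Sweeping p from 0 to 1 traces the full curve; the identity rule 204 gives the diagonal, while rule 0 collapses every initial density to zero.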

Relevance: 20.00%

Abstract:

UANL

Relevance: 20.00%

Abstract:

Deep learning is a rapidly growing research area within machine learning that has achieved impressive results in tasks ranging from image classification to speech recognition and language modelling. Recurrent neural networks, a subclass of deep architectures, appear particularly promising. Recurrent networks can capture the temporal structure in data. They potentially have the ability to learn correlations between events that are far apart in time and to store information indefinitely in their internal memory. In this work, we first attempt to understand why depth is useful. Similarly to other works in the literature, our results show that deep models can be more efficient at representing certain families of functions than shallow models. Unlike those works, we carry out our theoretical analysis on feedforward networks with piecewise-linear activation functions, since this type of model is currently the state of the art in various classification tasks. The second part of this thesis concerns the learning process. We analyse some recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent, and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks.
We first study the concept of memory and attempt to answer the following questions: Can recurrent networks exhibit unbounded memory? Can this behaviour be learned? We show that this is possible if hints are provided during training. We then explore two problems specific to training recurrent networks, namely the vanishing and exploding gradient problems. Our analysis ends with a solution to the exploding gradient problem that involves bounding the norm of the gradient. We also propose a regularization term designed specifically to reduce the vanishing gradient problem. On a synthetic dataset, we show empirically that these mechanisms can allow recurrent networks to learn autonomously to memorize information for an indefinite period of time. Finally, we explore the notion of depth in recurrent neural networks. Compared to feedforward networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
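The exploding-gradient fix mentioned above, rescaling the gradient whenever its norm exceeds a threshold, can be sketched in a few lines; the threshold value is a tunable hyperparameter.

```python
import math

def clip_gradient(grad, threshold):
    """Rescale `grad` (a flat list of floats) so its L2 norm is at most
    `threshold`; the direction is preserved, only the step length is
    bounded, which tames exploding gradients in recurrent training."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > threshold:
        return [g * threshold / norm for g in grad]
    return grad
```

Gradients already inside the ball pass through untouched, so the method only intervenes on the rare pathological updates.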

Relevance: 20.00%

Abstract:

The present study is on some infinite convex invariants. The origin of convexity can be traced back to the period of Archimedes and Euclid. At the turn of the nineteenth century, convexity became an independent branch of mathematics with its own problems, methods, and theories. Convexity can be sorted into two kinds: the first deals with generalizations of particular problems, such as separation of convex sets [EL], extremality [FA], [DAV], or continuous selection, Michael [M1]; the second is concerned with a multi-purpose system of axioms. The theory of convex invariants has grown out of the classical results of Helly, Radon, and Caratheodory in Euclidean spaces. Levi gave the first general definition of the invariants Helly number and Radon number. The notion of a convex structure was introduced by Jamison [JA4], and that of generating degree was introduced by Van de Vel [VAD8]. We prove that for a non-coarse convex structure, the rank is less than or equal to the generating degree, and we also generalize Tverberg's theorem using infinite partition numbers. We also compare the transfinite topological and transfinite convex dimensions.
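Radon's classical result, from which the Radon number invariant mentioned above derives, states that any d+2 points in R^d can be partitioned into two sets whose convex hulls intersect. A small exact-arithmetic sketch (assuming the points affinely span R^d, so fixing the last coefficient of the affine dependence to 1 is safe) finds such a partition:

```python
from fractions import Fraction

def solve(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def radon_partition(points):
    """Split d+2 points in R^d into two sets whose convex hulls meet,
    via an affine dependence: sum a_i = 0, sum a_i * p_i = 0."""
    d = len(points[0])
    assert len(points) == d + 2
    A = [[Fraction(1)] * (d + 1)]          # sum of the first d+1 coeffs
    b = [Fraction(-1)]                     # ... equals -a_last = -1
    for k in range(d):
        A.append([Fraction(p[k]) for p in points[:-1]])
        b.append(Fraction(-points[-1][k]))
    coeffs = solve(A, b) + [Fraction(1)]
    pos = [p for p, a in zip(points, coeffs) if a > 0]
    neg = [p for p, a in zip(points, coeffs) if a <= 0]
    return pos, neg
```

For the four points (0,0), (4,0), (0,4), (1,1) in the plane, the partition isolates (1,1), which indeed lies in the triangle spanned by the other three.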

Relevance: 20.00%

Abstract:

The concept of convex extendability is introduced to answer the problem of finding the smallest distance convex simple graph containing a given tree. A problem of similar type with respect to minimal path convexity is also discussed.

Relevance: 20.00%

Abstract:

Communication is the process of transmitting data across a channel. Whenever data is transmitted across a channel, errors are likely to occur. Coding theory is a branch of science that deals with finding efficient ways to encode and decode data, so that any likely errors can be detected and corrected. There are many methods to achieve coding and decoding. One among them is algebraic geometric codes, which can be constructed from curves. Cryptography is the science of securely transmitting messages from a sender to a receiver. The objective is to encrypt the message in such a way that an eavesdropper would not be able to read it. A cryptosystem is a set of algorithms for encrypting and decrypting messages. Public-key cryptosystems such as RSA and DSS have traditionally been preferred for secure communication through the channel. However, elliptic curve cryptosystems have become a viable alternative, since they provide greater security and use keys of smaller length than other existing cryptosystems. Elliptic curve cryptography is based on the group of points on an elliptic curve over a finite field. This thesis deals with algebraic geometric codes and their relation to cryptography using elliptic curves. Here Goppa codes are used, and the curves used are elliptic curves over a finite field. We relate algebraic geometric codes to cryptography by developing a cryptographic algorithm which includes the processes of encryption and decryption of messages. We make use of fundamental properties of elliptic curve cryptography in developing the algorithm, and it is used here to relate the two.
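The group law on elliptic-curve points over a finite field, on which such cryptosystems rest, can be sketched in a few lines. The curve below is a toy textbook example, far too small for real security; the parameters and secret scalars are illustrative only.

```python
def ec_add(P, Q, a, p):
    """Add affine points on y^2 = x^3 + a*x + b over F_p; None is the
    point at infinity (b only matters for curve membership)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P)
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy Diffie-Hellman-style exchange on y^2 = x^3 + 2x + 2 over F_17
# with base point G = (5, 1).
a, p, G = 2, 17, (5, 1)
alice_pub = ec_mul(3, G, a, p)    # Alice's secret scalar is 3
bob_pub = ec_mul(7, G, a, p)      # Bob's secret scalar is 7
shared = ec_mul(3, bob_pub, a, p) # equals ec_mul(7, alice_pub, a, p)
```

Both parties arrive at the same shared point because scalar multiplication commutes: 3·(7·G) = 7·(3·G) = 21·G.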

Relevance: 20.00%

Abstract:

Total energy SCF calculations were performed for noble gas difluorides in a relativistic procedure and compared with analogous non-relativistic calculations. The discrete variational method with numerical basis functions was used. Rather smooth potential energy curves could be obtained. The theoretical Kr - F and Xe - F bond distances were calculated to be 3.5 a.u. and 3.6 a.u. which should be compared with the experimental values of 3.54 a.u. and 3.7 a.u. Although the dissociation energies are off by a factor of about five it was found that ArF_2 may be a stable molecule. Theoretical ionization energies for the outer levels reproduce the experimental values for KrF_2 and XeF_2 to within 2 eV.

Relevance: 20.00%

Abstract:

An LCAO-MO (linear combination of atomic orbitals - molecular orbitals) relativistic Dirac-Fock-Slater program is presented, which allows one to calculate accurate total energies for diatomic molecules. Numerical atomic Dirac-Fock-Slater wave functions are used as basis functions. All integrations, as well as the solution of the Poisson equation, are done fully numerically, with a relative accuracy of 10^{-5} - 10^{-6}. The details of the method as well as first results are presented here.
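A fully numerical radial Poisson solve of the kind mentioned above can be illustrated by direct quadrature of the standard two-integral formula for a spherical density. This is a sketch under assumptions (the function name and the simple trapezoid scheme are illustrative, not the program's actual method); it is checked against the analytic Hartree potential of a hydrogenic 1s density.

```python
import math

def hartree_potential(rho, r_grid):
    """Radial Poisson solve by direct quadrature, in atomic units:
    V(r) = (1/r) * int_0^r 4*pi*s^2 rho(s) ds + int_r^inf 4*pi*s rho(s) ds
    for a spherical density rho(r), evaluated on an increasing grid."""
    n = len(r_grid)
    f1 = [4.0 * math.pi * r * r * rho(r) for r in r_grid]
    f2 = [4.0 * math.pi * r * rho(r) for r in r_grid]
    inner = [0.0] * n   # cumulative trapezoid integral of f1
    outer = [0.0] * n   # cumulative trapezoid integral of f2
    for i in range(1, n):
        h = r_grid[i] - r_grid[i - 1]
        inner[i] = inner[i - 1] + 0.5 * h * (f1[i] + f1[i - 1])
        outer[i] = outer[i - 1] + 0.5 * h * (f2[i] + f2[i - 1])
    return [inner[i] / r_grid[i] + (outer[-1] - outer[i]) for i in range(n)]
```

For the normalized 1s density ρ(r) = e^{-2r}/π the exact potential is V(r) = 1/r - e^{-2r}(1 + 1/r), which the quadrature reproduces to well below the 10^{-3} level on a fine grid.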

Relevance: 20.00%

Abstract:

Ab initio fully relativistic SCF molecular calculations of energy eigenvalues as well as coupling-matrix elements are used to calculate the 1s_\sigma excitation differential cross section for Ne-Ne and Ne-O in ion-atom collisions. A relativistic perturbation treatment which allows a direct comparison with analogous non-relativistic calculations is also performed.

Relevance: 20.00%

Abstract:

Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies along a number of dimensions. Examples would be where the mixing proportions vary with time or distance, or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distances from the source origins.
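A minimal sketch of the mixing construction, using the three-river picture above: a convex linear combination of compositions with spatially varying weights is again a composition. The inverse-distance weighting below is an illustrative stand-in for the paper's logistic Normal model, and the numbers are made up.

```python
def mix(compositions, weights):
    """Convex linear combination of compositions: with non-negative
    weights summing to 1, the result is again a composition."""
    k = len(compositions[0])
    return [sum(w * c[i] for w, c in zip(weights, compositions))
            for i in range(k)]

def distance_weights(distances):
    """Illustrative spatially varying mixing proportions: weight each
    source by inverse distance, then normalize."""
    inv = [1.0 / d for d in distances]
    s = sum(inv)
    return [v / s for v in inv]

# Hypothetical pollutant compositions (three shares) carried by three rivers.
rivers = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
# Composition at a loch location 1, 2, and 4 units from the river mouths.
sample = mix(rivers, distance_weights([1.0, 2.0, 4.0]))
```

Moving the sampling location changes the distances, hence the weights, hence the mixed composition, which is exactly the "mixing proportions vary with distance" situation described.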

Relevance: 20.00%

Abstract:

In several computer graphics areas, a refinement criterion is often needed to decide whether to continue or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and no further refinement is needed; otherwise, more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, that meet this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. A discrimination measure based on the entropy of the samples for refinement in ray tracing is also introduced. The recursive decomposition of entropy provides a natural method to deal with the adaptive subdivision of the sampling region.
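A refinement oracle of the kind described can be sketched with one concrete f-divergence, the Kullback-Leibler divergence (f(t) = t log t): normalize the sampled values into a discrete distribution and compare it with the uniform one. Homogeneous samples give divergence zero (stop sampling); heterogeneous samples give a large divergence (refine). The threshold value is an illustrative assumption.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence, the f-divergence with f(t) = t*log(t)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def needs_refinement(samples, threshold=0.01):
    """Adaptive-sampling oracle sketch: normalise the non-negative
    sampled values into a distribution and measure how far it is from
    uniform; refine when the samples are too heterogeneous."""
    total = sum(samples)
    if total == 0.0:
        return False                 # all-zero region: homogeneous
    n = len(samples)
    p = [s / total for s in samples]
    q = [1.0 / n] * n
    return kl_divergence(p, q) > threshold
```

Any other f-divergence (Hellinger, chi-square, ...) slots into the same oracle by replacing `kl_divergence`, which is precisely why a whole family of discrimination measures is useful here.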