125 results for Iterated
Abstract:
The ALRED construction is a lightweight strategy for constructing message authentication algorithms from an underlying iterated block cipher. Even though the construction's original analyses show that it is secure against certain attacks, the absence of formal security proofs in a strong security model still leaves uncertainty about its robustness. In this paper, aiming to give a better understanding of the security level provided by different authentication algorithms based on this design strategy, we formally analyze two ALRED variants, the MARVIN message authentication code and the LETTERSOUP authenticated-encryption scheme, bounding their security as a function of the attacker's resources and of the underlying cipher's characteristics.
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, partially applying graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate for every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
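For orientation, the matrix-tree construction of the first Symanzik polynomial mentioned above can be written compactly; the notation (edge variables x_e, spanning-tree set \mathcal{T}(G), reduced weighted Laplacian \widehat{L}) is generic textbook notation, not taken from the dissertation:

    \[
      \mathcal{U}(x) \;=\; \sum_{T \in \mathcal{T}(G)} \prod_{e \notin T} x_e
      \;=\; \Big(\prod_{e \in E} x_e\Big)\, \det \widehat{L}(1/x_1,\dots,1/x_n),
    \]

where \(\mathcal{T}(G)\) is the set of spanning trees of the graph \(G\), \(\widehat{L}(x)\) is the weighted Laplacian \(L_{ij}(x) = \delta_{ij}\sum_{e \ni i} x_e - \sum_{e=\{i,j\}} x_e\) with one row and the corresponding column deleted, and, by the matrix-tree theorem, \(\det \widehat{L}(x) = \sum_{T \in \mathcal{T}(G)} \prod_{e \in T} x_e\) is the Kirchhoff polynomial.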
Abstract:
The thesis discusses the benefits obtainable from the use of automatic parameter tuning techniques, applying an implementation of iterated racing to an innovative self-organizing traffic light control system inspired by swarm intelligence concepts.
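For readers unfamiliar with the technique, the following is a minimal, hypothetical Python sketch of the loop at the heart of iterated racing: sample candidate configurations, evaluate them instance by instance, discard clearly inferior ones, and resample around the survivors. The function names, the three-elite cutoff and the mean-plus-spread elimination rule are illustrative assumptions; the actual irace procedure relies on statistical tests and a sampling model and is not reproduced here.

    import random
    import statistics

    def race(candidates, instances, evaluate, min_results=4):
        """Race: evaluate candidates instance by instance, dropping clearly worse ones."""
        results = {id(c): [] for c in candidates}
        alive = list(candidates)
        for k, inst in enumerate(instances, start=1):
            for c in alive:
                results[id(c)].append(evaluate(c, inst))
            if k >= min_results and len(alive) > 1:
                # illustrative elimination rule: drop candidates whose mean cost
                # exceeds the best mean by more than the spread of the means
                means = {id(c): statistics.mean(results[id(c)]) for c in alive}
                best = min(means.values())
                spread = statistics.pstdev(means.values()) or 1e-9
                alive = [c for c in alive if means[id(c)] <= best + spread]
        return sorted(alive, key=lambda c: statistics.mean(results[id(c)]))

    def iterated_race(sample, perturb, evaluate, instances, iterations=5, pool=10):
        """Iterated racing: race a pool of configurations, then resample around survivors."""
        elites = [sample() for _ in range(pool)]
        for _ in range(iterations):
            candidates = elites + [perturb(random.choice(elites))
                                   for _ in range(pool - len(elites))]
            elites = race(candidates, instances, evaluate)[:3]
        return elites[0]

Here `sample`, `perturb` and `evaluate` are user-supplied callables (e.g. drawing traffic-controller parameter vectors, jittering them, and returning a simulated delay); they are placeholders for whatever tuning target is at hand.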
Abstract:
The present study is part of a research effort whose objective is the formulation of design criteria aimed at optimizing the energy performance of wine cellars of small to medium-sized wineries. Specifically, the research aims to identify indicators that can assess the influence of the main design variables on the building's energy demand and on the temperature trends inside the rooms used for wine storage and ageing. These indicators provide information on the building's energy performance and on the suitability of non-air-conditioned rooms for wine conservation. Since design is a complex multidisciplinary activity, the research involved the development of a calculation program capable of managing and processing data from different fields (engineering, architecture, agro-industrial production, etc.) and of returning synthetic results through indicators identified for this purpose. The program was applied to a company case study representative of the production sector. The effects of two harvesting methods and of four different architectural solutions were examined. The building solutions derive from the combination of different thermal insulation levels and the presence or absence of underground rooms. The analyses of the case study relied on dynamic energy simulations, supported and validated by thermal and meteorological monitoring campaigns within the company under study. The results showed that the calculation program conceived within this study identifies the building's critical issues in terms of energy and of the "thermal comfort" of the wine, and allows an iterative revision of the design variables investigated. It is therefore a computerized assessment tool supporting design, aimed at optimizing the design process and capable of combining, in an integrated manner, the objectives of product quality, production efficiency, and economic and environmental sustainability.
Abstract:
Background: Although evolutionary models of cooperation build on the intuition that costs to the donor and benefits to the receiver are the most general and fundamental parameters, it is largely unknown how they affect the decision of animals to cooperate with an unrelated social partner. Here we test experimentally whether costs to the donor and the need of the receiver determine the amount of help provided by unrelated rats in an iterated prisoner's dilemma game. Results: Fourteen unrelated Norway rats were alternately presented with a cooperative or a defecting partner for whom they could provide food via a mechanical apparatus. The direct costs of this task and the need of the receiver were manipulated in two separate experiments. Rats provided more food to cooperative partners than to defectors (direct reciprocity). The propensity to discriminate between helpful and non-helpful social partners was contingent on costs: experimentally increasing, in one-newton steps, the resistance against which food had to be pulled for the social partner reduced the help provided to defectors more strongly than the help returned to cooperators. Furthermore, test rats provided more help to hungry receivers that were light or in poor condition, which might suggest empathy, whereas this relationship was reversed when experimental partners were satiated. Conclusions: In a prisoner's dilemma situation, rats seem to take into account their own costs and the potential benefits to a receiver when deciding whether to help a social partner, which confirms the predictions of reciprocal cooperation. Thus, factors that had been believed to be largely confined to human social behaviour apparently influence the behaviour of other social animals as well, despite widespread scepticism. Our results therefore shed new light on the biological basis of reciprocity.
Abstract:
Goal: The Halex is an indicator of health status that combines self-rated health and activity limitations, which has been used by NCHS to predict future years of healthy life. The scores for each health state were developed based on strong assumptions, notably that a person in excellent health with ADL disabilities is as healthy as a person in poor health with no disabilities. Our goal was to examine the performance of the Halex as a longitudinal measure of health for older adults, and to improve the scoring if necessary. Methods: We used data from the Cardiovascular Health Study (CHS) to compare the relationship of baseline health to health 2 years later. Subject ages ranged from 65 to 103 (mean age 75). A total of 40,827 transitions were available for analysis. We examined whether Halex scores at time 0 were related monotonically to scores two years later, and iterated the original scores to improve the fit over time. Findings: The original Halex scores were not consistent over time. Persons in excellent health with ADL limitations were much healthier 2 years later than people in poor health with no limitations, even though they had been assumed to have identical health. People with ADL limitations had higher scores than predicted. The assumptions made in creating the Halex were not upheld in the data. Conclusions: The new iterated scores are specific to older adults, are appropriate for longitudinal data, and are relatively assumption-free. We recommend the use of these new scores for longitudinal studies of older adults that use the Halex health states.
Abstract:
In manual order picking systems, order pickers walk or drive through a distribution warehouse in order to collect items requested by (internal or external) customers. To perform these operations efficiently, customer orders usually have to be combined into (more substantial) picking orders of limited size. The Order Batching Problem considered in this paper deals with the question of how a given set of customer orders should be combined such that the total length of all tours necessary to collect all items is minimized. The authors introduce two metaheuristic approaches for the solution of this problem: the first is based on Iterated Local Search, the second on Ant Colony Optimization. In a series of extensive numerical experiments, the newly developed approaches are benchmarked against classic solution methods. It is demonstrated that the proposed methods are not only superior to existing methods but also provide solutions that may allow distribution warehouses to be operated significantly more efficiently.
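To make the first of the two metaheuristics concrete, here is a minimal, hypothetical Python sketch of Iterated Local Search applied to order batching: a local search that moves single orders between batches, a random perturbation of the incumbent, and an acceptance test on total tour length. The routing function `tour_length` (which should return 0 for an empty batch), `order_size` and `capacity` are placeholders, not the authors' implementation.

    import copy
    import random

    def total_length(batches, tour_length):
        """Total picking-tour length over all batches."""
        return sum(tour_length(batch) for batch in batches)

    def local_search(batches, tour_length, capacity, order_size):
        """Move single orders between batches as long as the total tour length improves."""
        improved = True
        while improved:
            improved = False
            for src in batches:
                for order in list(src):
                    for dst in batches:
                        if dst is src:
                            continue
                        if sum(order_size(o) for o in dst) + order_size(order) > capacity:
                            continue
                        old = tour_length(src) + tour_length(dst)
                        src.remove(order)
                        dst.append(order)
                        if tour_length(src) + tour_length(dst) < old:
                            improved = True
                            break  # the order now lives in dst
                        dst.remove(order)
                        src.append(order)
        return [b for b in batches if b]

    def perturb(batches, capacity, order_size, moves=2):
        """Randomly move a few orders to other batches that still have room."""
        batches = copy.deepcopy(batches)
        if len(batches) < 2:
            return batches
        for _ in range(moves):
            src, dst = random.sample(range(len(batches)), 2)
            if batches[src]:
                order = random.choice(batches[src])
                if sum(order_size(o) for o in batches[dst]) + order_size(order) <= capacity:
                    batches[src].remove(order)
                    batches[dst].append(order)
        return [b for b in batches if b]

    def iterated_local_search(initial, tour_length, capacity, order_size, iters=100):
        """ILS: local search, perturb the incumbent, search again, keep improvements."""
        best = local_search(copy.deepcopy(initial), tour_length, capacity, order_size)
        for _ in range(iters):
            candidate = local_search(perturb(best, capacity, order_size),
                                     tour_length, capacity, order_size)
            if total_length(candidate, tour_length) < total_length(best, tour_length):
                best = candidate
        return best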
Abstract:
The thesis MEDIDAS AUTOSEMEJANTES EN EL PLANO, MOMENTOS Y MATRICES DE HESSENBERG (Self-similar Measures on the Plane, Moments and Hessenberg Matrices) is framed within geometric measure theory, the theory of orthogonal polynomials and operator theory. The work studies measures with compact support on the complex plane from the point of view of the infinite moment and Hessenberg matrices that represent them in the theory of orthogonal polynomials. More precisely, it concentrates on the study of self-similar measures, that is, the equilibrium measures defined by an iterated function system (IFS). Self-similar sets have the geometric property of decomposing into a union of pieces similar to the whole set. These pieces may overlap: when the overlap is small, Hutchinson's theory [Hut81] works well, but without restrictions it fails. The overlapping problem consists in controlling the measure of this overlap. The complexity of this problem is exemplified by the infinite convolutions of Bernoulli distributions, which turn out to be an example of self-similar measures in the real case. As early as 1935, Jessen and Wintner [JW35] posed this problem; far from being simple, it has been studied for more than seventy-five years, and the main questions already raised by Garsia [Gar62] in 1962 remain unsolved. The interest this problem has attracted, as well as its complexity, is demonstrated by the numerous publications dealing with related questions; see, for example, [JW35], [Erd39], [PS96], [Ma00], [Ma96], [Sol98], [Mat95], [Sim05], [JKS07], [JKS11]. In the first chapter we introduce in detail self-similar measures on the complex plane and iterated function systems, together with the concepts from measure theory needed to describe them. We then introduce the necessary tools from the theory of orthogonal polynomials, infinite matrices and operators. In the second and third chapters we translate the geometric properties of self-similar measures to the moment and Hessenberg matrices, respectively. From these results, algorithms are described to compute these matrices from the corresponding IFS. Concretely, explicit formulas and approximation algorithms for the moments and moment matrices of fractal measures are obtained from a fixed point theorem for matrices. Moreover, using techniques from operator theory, the results obtained by G. Mantica [Ma00, Ma96] in the real case are extended to the complex plane. This result is the basis for defining a stable algorithm that approximates the Hessenberg matrix associated with a fractal measure, and for obtaining exact finite sections of Hessenberg matrices associated with a sum of measures. In the last chapter, more general measures μ are considered and the asymptotic behaviour of the eigenvalues of a Hermitian moment matrix is studied, together with its impact on the properties of the associated measure. The central result shows that if the associated polynomials are dense in L2(μ), then the minimum eigenvalue of the finite sections of the moment matrix of the measure necessarily tends to zero.
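For orientation, the central objects can be written down in standard notation (chosen here for illustration, not taken from the thesis): an iterated function system of contractive similarities \(\varphi_1,\dots,\varphi_k\) on the complex plane with probability weights \(p_1,\dots,p_k\) determines a unique self-similar (equilibrium) measure \(\mu\), and the Hermitian moment matrix built from its complex moments is the object whose finite sections and eigenvalues are studied:

    \[
      \mu \;=\; \sum_{i=1}^{k} p_i \,\bigl(\mu \circ \varphi_i^{-1}\bigr),
      \qquad
      M(\mu) = \bigl(c_{mn}\bigr)_{m,n \ge 0},
      \quad
      c_{mn} = \int_{\mathbb{C}} z^{m}\,\overline{z}^{\,n}\, d\mu(z).
    \]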
Abstract:
A basic evolutionary problem posed by the Iterated Prisoner’s Dilemma game is to understand when the paradigmatic cooperative strategy Tit-for-Tat can invade a population of pure defectors. Deterministically, this is impossible. We consider the role of demographic stochasticity by embedding the Iterated Prisoner’s Dilemma into a population dynamic framework. Tit-for-Tat can invade a population of defectors when their dynamics exhibit short episodes of high population densities with subsequent crashes and long low density periods with strong genetic drift. Such dynamics tend to have reddened power spectra and temporal distributions of population size that are asymmetric and skewed toward low densities. The results indicate that ecological dynamics are important for evolutionary shifts between adaptive peaks.
Abstract:
Many problems in human society reflect the inability of selfish parties to cooperate. The “Iterated Prisoner’s Dilemma” has been used widely as a model for the evolution of cooperation in societies. Axelrod’s computer tournaments and the extensive simulations of evolution by Nowak and Sigmund and others have shown that natural selection can favor cooperative strategies in the Prisoner’s Dilemma. Rigorous empirical tests, however, lag behind the progress made by theorists. Clear predictions differ depending on the players’ capacity to remember previous rounds of the game. To test whether humans use the kind of cooperative strategies predicted, we asked students to play the iterated Prisoner’s Dilemma game either continuously or interrupted after each round by a secondary memory task (i.e., playing the game “Memory”) that constrained the students’ working-memory capacity. When playing without interruption, most students used “Pavlovian” strategies, as predicted for greater memory capacity, and the rest used “generous tit-for-tat” strategies. The proportion of generous tit-for-tat strategies increased when games of Memory interfered with the subjects’ working memory, as predicted. Students who continued to use complex Pavlovian strategies were less successful in the Memory game, but more successful in the Prisoner’s Dilemma, which indicates a trade-off in memory capacity for the two tasks. Our results suggest that the set of strategies predicted by game theorists approximates human reality.
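For readers less familiar with the strategy names, the following minimal Python sketch spells out the memory-one rules discussed above in a repeated Prisoner's Dilemma: Tit-for-Tat, generous Tit-for-Tat with a forgiveness probability, and the Pavlovian "win-stay, lose-shift" rule. The payoff values and the 0.3 forgiveness probability are illustrative choices, not parameters of the experiment.

    import random

    # Standard Prisoner's Dilemma payoffs (T > R > P > S): illustrative values
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_last, opp_last):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if opp_last is None else opp_last

    def generous_tit_for_tat(my_last, opp_last, forgive=0.3):
        """Like Tit-for-Tat, but forgive a defection with some probability."""
        if opp_last in (None, "C"):
            return "C"
        return "C" if random.random() < forgive else "D"

    def pavlov(my_last, opp_last):
        """Win-stay, lose-shift: repeat the last move after a good outcome
        (mutual cooperation or successful defection), switch otherwise."""
        if my_last is None:
            return "C"
        won = (my_last, opp_last) in (("C", "C"), ("D", "C"))
        return my_last if won else ("D" if my_last == "C" else "C")

    def play(strategy_a, strategy_b, rounds=20):
        """Iterate the game and return cumulative payoffs."""
        a_last = b_last = None
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strategy_a(a_last, b_last), strategy_b(b_last, a_last)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            a_last, b_last = a, b
        return score_a, score_b

    print(play(pavlov, generous_tit_for_tat))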
Abstract:
The iterated Prisoner's Dilemma has become the paradigm for the evolution of cooperation among egoists. Since Axelrod's classic computer tournaments and Nowak and Sigmund's extensive simulations of evolution, we know that natural selection can favor cooperative strategies in the Prisoner's Dilemma. According to recent developments of theory, the latest champion strategy of "win-stay, lose-shift" ("Pavlov") is the winner only if the players act simultaneously. In the more natural situation of players alternating the roles of donor and recipient, a strategy of "Generous Tit-for-Tat" wins computer simulations of short-term memory strategies. We show here by experiments with humans that cooperation dominated in both the simultaneous and the alternating Prisoner's Dilemma. Subjects were consistent in their strategies: 30% adopted a Generous Tit-for-Tat-like strategy, whereas 70% used a Pavlovian strategy in both the alternating and the simultaneous game. As predicted for unconditional strategies, Pavlovian players appeared to be more successful in the simultaneous game, whereas Generous Tit-for-Tat-like players achieved higher payoffs in the alternating game. However, the Pavlovian players were smarter than predicted: they suffered less from defectors and exploited cooperators more readily. Humans appear to cooperate either with a Generous Tit-for-Tat-like strategy or with a strategy that appreciates Pavlov's advantages but minimizes its handicaps.
Abstract:
The evolutionary stability of cooperation is a problem of fundamental importance for the biological and social sciences. Different claims have been made about this issue: whereas Axelrod and Hamilton's [Axelrod, R. & Hamilton, W. (1981) Science 211, 1390-1398] widely recognized conclusion is that cooperative rules such as "tit for tat" are evolutionarily stable strategies in the iterated prisoner's dilemma (IPD), Boyd and Lorberbaum [Boyd, R. & Lorberbaum, J. (1987) Nature (London) 327, 58-59] have claimed that no pure strategy is evolutionarily stable in this game. Here we explain why these claims are not contradictory by showing in what sense strategies in the IPD can and cannot be stable and by creating a conceptual framework that yields the type of evolutionary stability attainable in the IPD and in repeated games in general. Having established the relevant concept of stability, we report theorems on some basic properties of strategies that are stable in this sense. We first show that the IPD has "too many" such strategies, so that being stable does not discriminate among behavioral rules. Stable strategies differ, however, on a property that is crucial for their evolutionary survival: the size of the invasion they can resist. This property can be interpreted as a strategy's evolutionary robustness. Conditionally cooperative strategies such as tit for tat are the most robust. Cooperative behavior supported by these strategies is the most robust evolutionary equilibrium: the easiest to attain, and the hardest to disrupt.
Abstract:
We examine the teleportation of an unknown spin-1/2 quantum state along a quantum spin chain with an even number of sites. Our protocol, using a sequence of Bell measurements, may be viewed as an iterated version of the 2-qubit protocol of C. H. Bennett et al. [Phys. Rev. Lett. 70, 1895 (1993)]. A decomposition of the Hilbert space of the spin chain into 4 vector spaces, called Bell subspaces, is given. It is established that any state from a Bell subspace may be used as a channel to perform unit fidelity teleportation. The space of all spin-0 many-body states, which includes the ground states of many known antiferromagnetic systems, belongs to a common Bell subspace. A channel-dependent teleportation parameter O is introduced, and a bound on the teleportation fidelity is given in terms of O.
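The 2-qubit building block referred to above is the standard teleportation identity of Bennett et al.; writing the unknown state as \(|\psi\rangle = \alpha|0\rangle + \beta|1\rangle\) and the channel as the Bell state \(|\Phi^{+}\rangle = (|00\rangle + |11\rangle)/\sqrt{2}\), a Bell measurement on qubits 1 and 2 leaves qubit 3 in the input state up to a known Pauli correction (the Bell-subspace and fidelity-bound results of the paper itself are not reproduced here):

    \[
      |\psi\rangle_{1}\,|\Phi^{+}\rangle_{23}
      \;=\; \tfrac{1}{2}\Big(
          |\Phi^{+}\rangle_{12}\,|\psi\rangle_{3}
        + |\Phi^{-}\rangle_{12}\,\sigma_{z}|\psi\rangle_{3}
        + |\Psi^{+}\rangle_{12}\,\sigma_{x}|\psi\rangle_{3}
        + |\Psi^{-}\rangle_{12}\,\sigma_{x}\sigma_{z}|\psi\rangle_{3}
      \Big),
    \]

with \(|\Phi^{\pm}\rangle = (|00\rangle \pm |11\rangle)/\sqrt{2}\) and \(|\Psi^{\pm}\rangle = (|01\rangle \pm |10\rangle)/\sqrt{2}\).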
Abstract:
We have developed a way to represent Mohr-Coulomb failure within a mantle-convection fluid dynamics code. We use a viscous model of deformation with an orthotropic viscoplasticity (a different viscosity is used for pure shear than for simple shear) to define a preferred plane on which slip occurs given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with the maximum shear strain-rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. We also allow for strain weakening of the material: the material can remember both the accumulated failure history and the direction of failure. We have included this capability in a Lagrangian-integration-point finite element code and show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion. The formulation itself is general and applies to 2- and 3-dimensional problems.
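As a rough, scalar illustration of the kind of viscosity iteration described above (a minimal sketch, not the orthotropic pure-shear/simple-shear formulation of the paper; the cohesion, friction angle, background viscosity and input values below are assumptions chosen only for the example), the effective viscosity can be capped so that the viscous stress never exceeds a Mohr-Coulomb-type yield stress:

    import math

    def mohr_coulomb_yield(pressure, cohesion=1.0e7, friction_angle_deg=30.0):
        """Mohr-Coulomb-type yield stress, tau_y = C*cos(phi) + p*sin(phi)."""
        phi = math.radians(friction_angle_deg)
        return cohesion * math.cos(phi) + pressure * math.sin(phi)

    def yield_limited_viscosity(strain_rate_II, pressure, eta=1.0e22, max_iter=50):
        """Cap the effective viscosity so that 2*eta_eff*edot_II never exceeds tau_y.

        In a real convection code the strain rate would be recomputed from the
        flow solution after each viscosity update, which is what makes this an
        iteration rather than a single rescaling.
        """
        eta_eff = eta
        for _ in range(max_iter):
            stress = 2.0 * eta_eff * strain_rate_II
            tau_y = mohr_coulomb_yield(pressure)
            if stress <= tau_y:
                break
            eta_eff = tau_y / (2.0 * strain_rate_II)
        return eta_eff

    # Example: a point at roughly 10 km depth deforming at 1e-14 1/s
    print(yield_limited_viscosity(strain_rate_II=1.0e-14, pressure=2.7e8))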
Abstract:
A novel algorithm for performing registration of dynamic contrast-enhanced (DCE) MRI data of the breast is presented. It is based on an algorithm known as iterated dynamic programming, originally devised to solve the stereo matching problem. Using artificially distorted DCE-MRI breast images, it is shown that the proposed algorithm is able to correct for movement and distortions over a larger range than is likely to occur during routine clinical examination. In addition, using a clinical DCE-MRI data set with an expertly labeled suspicious region, it is shown that the proposed algorithm significantly reduces the variability of the enhancement curves at the pixel level, yielding more pronounced uptake and washout phases.
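To give a flavour of the dynamic-programming matching that iterated dynamic programming builds on, here is a hypothetical Python sketch of a generic 1-D alignment of two intensity profiles with a skip (occlusion) penalty; the function name `match_scanlines`, the squared-difference cost and the skip cost are illustrative assumptions, not the registration algorithm of the paper:

    import numpy as np

    def match_scanlines(left, right, skip_cost=0.5):
        """Align two 1-D intensity profiles by dynamic programming.

        Returns a list of (i, j) index pairs matching samples of `left` to `right`,
        minimising squared intensity differences plus a penalty for skipped samples.
        """
        n, m = len(left), len(right)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, :] = skip_cost * np.arange(m + 1)
        cost[:, 0] = skip_cost * np.arange(n + 1)
        back = np.zeros((n + 1, m + 1), dtype=int)  # 0 = match, 1 = skip left, 2 = skip right
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                choices = (cost[i - 1, j - 1] + (left[i - 1] - right[j - 1]) ** 2,
                           cost[i - 1, j] + skip_cost,
                           cost[i, j - 1] + skip_cost)
                back[i, j] = int(np.argmin(choices))
                cost[i, j] = choices[back[i, j]]
        # Trace back the optimal alignment
        i, j, pairs = n, m, []
        while i > 0 and j > 0:
            if back[i, j] == 0:
                pairs.append((i - 1, j - 1))
                i, j = i - 1, j - 1
            elif back[i, j] == 1:
                i -= 1
            else:
                j -= 1
        return pairs[::-1]

    # Example: the second profile is the first one shifted by two samples
    a = np.sin(np.linspace(0, 3, 60))
    b = np.roll(a, 2)
    print(match_scanlines(a, b)[:5])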