997 results for LATTICE-QCD


Relevance: 10.00%

Abstract:

In the present paper, we study the geometric discrepancy with respect to families of rotated rectangles. The well-known extremal cases are the axis-parallel rectangles (logarithmic discrepancy) and rectangles rotated in all possible directions (polynomial discrepancy). We study several intermediate situations: lacunary sequences of directions, lacunary sets of finite order, and sets with small Minkowski dimension. In each of these cases, extensions of a lemma due to Davenport allow us to construct appropriate rotations of the integer lattice which yield small discrepancy.
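
For orientation, the quantity at issue is the extremal discrepancy of an N-point set P in the unit square with respect to a family A of rectangles; in standard notation (assumed here, not quoted from the paper) it reads

    D(P;\mathcal{A}) \;=\; \sup_{A\in\mathcal{A}} \bigl|\,\#(P\cap A) - N\cdot\operatorname{area}(A)\,\bigr|,

so the dichotomy described above is between families for which D grows only logarithmically in N (axis-parallel rectangles) and families for which it grows polynomially (all rotations), with the lacunary and small-Minkowski-dimension direction sets lying in between.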

Relevance: 10.00%

Abstract:

A multiple-partners assignment game with heterogeneous sales and multiunit demands consists of a set of sellers that own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence by using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with all optimal assignments, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.
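
As an illustration of the linear-programming route to equilibrium prices, the sketch below (Python with scipy; a hypothetical toy example covering only the classical unit-demand Shapley-Shubik special case, not the multiunit model of the paper) computes an optimal assignment and then reads a competitive price vector off the dual LP:

    # Equilibrium prices for a unit-demand assignment game via LP duality.
    # Toy valuations; illustrative only.
    import numpy as np
    from scipy.optimize import linprog, linear_sum_assignment

    v = np.array([[5.0, 8.0, 2.0],
                  [7.0, 9.0, 6.0],
                  [2.0, 3.0, 0.0]])      # buyer i's value for seller j's unit
    m, n = v.shape

    # Primal: optimal assignment maximising total value.
    rows, cols = linear_sum_assignment(-v)          # negate to maximise
    print("optimal matching:", list(zip(rows, cols)))

    # Dual LP: min sum(u) + sum(p)  s.t.  u_i + p_j >= v_ij,  u, p >= 0.
    c = np.ones(m + n)
    A_ub = np.zeros((m * n, m + n))
    b_ub = np.zeros(m * n)
    for i in range(m):
        for j in range(n):
            A_ub[i * n + j, i] = -1.0        # -u_i
            A_ub[i * n + j, m + j] = -1.0    # -p_j
            b_ub[i * n + j] = -v[i, j]       # -u_i - p_j <= -v_ij
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (m + n), method="highs")
    u, p = res.x[:m], res.x[m:]
    print("buyer utilities u:", u.round(2))
    print("equilibrium prices p:", p.round(2))

Any optimal dual solution (u, p) gives equilibrium buyer utilities and seller prices supporting every optimal assignment, which is the Cartesian-product structure referred to above.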

Relevance: 10.00%

Abstract:

Objective: to assess the rate of simple or clinically significant recurrence of the different types of anterior stromal dystrophies in corneal grafts after keratoplasty surgery. Material and methods: A retrospective review covering 1954 to 2008 was carried out at the Centre d'Oftalmologia Barraquer, identifying all patients diagnosed with an anterior stromal or Bowman's membrane corneal dystrophy (Reis-Bücklers dystrophy, granular dystrophy, lattice dystrophy, macular dystrophy) who had undergone penetrating keratoplasty (PK) or lamellar keratoplasty (LK). Descriptive statistics and Kaplan-Meier survival curves were used to analyse the sample. Results: The total sample of our study comprised 109 eyes of 66 patients with anterior stromal dystrophies operated on by keratoplasty: 8 cases of Bowman's layer dystrophies (6 eyes with CDRB or CDB-I (Reis-Bücklers dystrophy) and 2 eyes with CDTB or CDB-II (Thiel-Behnke dystrophy)), 19 cases of granular dystrophy (CDG), 53 cases of lattice dystrophy (CDL) and 29 cases of macular dystrophy (CDM). Over a long follow-up period (mean of 75-180 months), simple recurrence occurred in 33% of the CDRB cases, with a mean survival time of 46 months; in 58.8% of the CDG cases, with 74.6 months of survival; in 41.5% of the CDL cases, with 106 months of survival; and in 10% of the CDM cases, with a mean survival time of 96 months. Clinically significant recurrence appeared mostly in the CDL and CDG cases, with mean survival times of 127.6 and 124.1 months respectively. The main symptom of these clinical manifestations was a decrease in visual acuity. Conclusion: Penetrating keratoplasty is an effective classic treatment for symptomatic anterior stromal dystrophies with deep stromal opacities or previous failure of less invasive therapies such as PTK. As expected, given their genetic origin, recurrence of the dystrophy after transplantation does occur and increases with longer follow-up. This recurrence tends to present initially in the epithelial corneal plane and requires a long evolution time to become symptomatic, especially in the CDG, CDL and CDM cases.

Relevance: 10.00%

Abstract:

Research project carried out during a stay at the Center for European Integration of the Freie Universität Berlin, Germany, between 2007 and 2009. The central topic of the project is the mathematical description of spatio-temporal processes by means of the theory of Continuous-Time Random Walks. The most significant contribution of our work in this field is to consider, for the first time, the interaction between several processes acting in a coupled manner, since existing models had so far been limited to the study of individual or independent processes. This idea makes it possible, for example, to consider a transport process in space together with a reaction process (a chemical reaction, for instance) and to study statistically how each one can alter the behaviour of the other. This represents an important qualitative step forward in the description of reaction-dispersal processes, since our models allow dispersal patterns and temporal behaviours (life cycles) that are considerably more realistic than those of conventional models. To complete this theoretical work it has also been necessary to develop some numerical tools (lattice models) to facilitate the implementation of the models. On the applied side, we have used these ideas to study the dynamics between viruses and the immune system that takes place when an infection occurs in the organism. Several experimental studies carried out in recent years show that the immune response of higher organisms exhibits a rather complex temporal dynamics (for example, in the case of the programmed response). For this reason, our mathematical techniques are particularly useful for the analysis of these systems. Finally, other possible applications of the models, such as the study of biological invasions, have also been considered.
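
To fix ideas, the basic (decoupled) continuous-time random walk that the project generalises to coupled reaction-transport dynamics can be simulated in a few lines; this is an illustrative Python sketch, not the authors' code, and all parameters are arbitrary:

    # Decoupled 1-D CTRW: exponential waiting times and Gaussian jump lengths.
    import numpy as np

    rng = np.random.default_rng(0)

    def ctrw(n_steps=1000, rate=1.0, jump_std=1.0):
        """Return (event times, positions) of a single CTRW trajectory."""
        waits = rng.exponential(1.0 / rate, size=n_steps)   # waiting times
        jumps = rng.normal(0.0, jump_std, size=n_steps)     # jump lengths
        return np.cumsum(waits), np.cumsum(jumps)

    t, x = ctrw()
    print(f"final time {t[-1]:.1f}, final position {x[-1]:.2f}")

Coupling, in the sense discussed above, would replace the independent draws of waiting time and jump (or reaction event) with draws from a joint distribution, so that transport and reaction can influence one another.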

Relevance: 10.00%

Abstract:

Recrystallization rims are a common feature of zircon crystals that underwent metamorphism. We present a microstructural and microchemical study of partially recrystallized zircon grains collected in polymetamorphic migmatites (Valle d'Arbedo, Ticino, Switzerland). The rims are bright in cathodo-luminescence (CL), with sharp and convex contacts characterized by inward-penetrating embayments transgressing igneous zircon cores. Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) data and transmission electron microscopy (TEM) imaging indicate that the rims are chemically and microstructurally different from the cores. The rims are strongly depleted in REE, with concentrations up to two orders of magnitude lower than in the cores, indicating a significant loss of REE during zircon recrystallization. Enrichment in non-formula elements, such as Ca, has not been observed in the rims. The microstructure of zircon cores shows a dappled intensity at and below the 100 nm scale, possibly due to radiation damage. Other defects such as pores and dislocations are absent in the core except at healed cracks. Zircon rims are mostly dapple-free, but contain nanoscale pores and strain centers, interpreted as fluid inclusions and chemical residues, respectively. Sensitive high-resolution ion microprobe (SHRIMP) U-Pb ages show that the recrystallization of the rims took place >200 Ma ago when the parent igneous zircon was not metamict. The chemical composition and the low-Ti content of the rims indicate that they form at sub-solidus temperatures (550-650 degrees C). Recrystallization rims in Valle d'Arbedo zircon are interpreted as the result of the migration of chemical reaction fronts in which fluid triggered in situ and contemporaneous interface-coupled dissolution-reprecipitation mechanisms. This study indicates that strong lattice strain resulting from the incorporation of a large amount of impurities and structural defects is not a necessary condition for zircon to recrystallize. Our observations suggest that the early formation of recrystallization rims played a major role in preserving zircon from the more recent Alpine metamorphic overprint.

Relevance: 10.00%

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop through standard methodology, such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
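
The 'staying-in-the-simplex' tool mentioned above, the compositional singular value decomposition, amounts to an ordinary SVD of centred log-ratio (clr) transformed and centred data; a minimal Python sketch on simulated compositions (not the Scottish limestone data) is:

    # Compositional SVD: SVD of centred clr-transformed compositions.
    import numpy as np

    def closure(x):
        """Rescale positive parts so each row sums to 1."""
        return x / x.sum(axis=-1, keepdims=True)

    def clr(x):
        """Centred log-ratio transform (log parts minus their row mean)."""
        lx = np.log(x)
        return lx - lx.mean(axis=-1, keepdims=True)

    X = closure(np.random.default_rng(1).uniform(0.1, 1.0, size=(20, 4)))  # toy data
    Z = clr(X) - clr(X).mean(axis=0)        # centre: perturb by the inverse of the centre
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    print("proportion of variability by component:", (s**2 / (s**2).sum()).round(3))

The singular values then decompose total compositional variability, exactly as in an ordinary principal component analysis but without leaving the simplex interpretation.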

Relevance: 10.00%

Abstract:

In standard multivariate statistical analysis common hypotheses of interest concern changes in mean vectors and subvectors. In compositional data analysis it is now well established that compositional change is most readily described in terms of the simplicial operation of perturbation and that subcompositions replace the marginal concept of subvectors. To motivate the statistical developments of this paper we present two challenging compositional problems from food production processes. Against this background the relevance of perturbations and subcompositions can be clearly seen. Moreover we can identify a number of hypotheses of interest involving the specification of particular perturbations or differences between perturbations and also hypotheses of subcompositional stability. We identify the two problems as being the counterpart of the analysis of paired comparison or split plot experiments and of separate sample comparative experiments in the jargon of standard multivariate analysis. We then develop appropriate estimation and testing procedures for a complete lattice of relevant compositional hypotheses.
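
For readers less familiar with the simplicial operations, the perturbation of a D-part composition x by p, and a typical 'no change' hypothesis for paired compositions (x, X), can be written in standard notation (not quoted from the paper) as

    x \oplus p \;=\; \mathcal{C}\bigl(x_1 p_1,\, x_2 p_2,\, \dots,\, x_D p_D\bigr),
    \qquad
    H_0:\; X \ominus x \;=\; \mathcal{C}(1, 1, \dots, 1),

where C denotes closure (division of the parts by their sum) and the second relation says the perturbation taking x to X is the identity. A hypothesis of subcompositional stability similarly asserts that the subcomposition formed from a chosen subset of parts is left unchanged by the perturbation.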

Relevance: 10.00%

Abstract:

All bodies spontaneously emit light when heated. The radiation spectrum is a function of temperature and of the material. Most materials, however, radiate over a broad spectral band. Some materials, by contrast, are able to concentrate their thermal radiation into a much narrower spectral band. These materials are known as selective emitters, and their use has a profound impact on the efficiency of systems such as lighting and thermophotovoltaic energy conversion. Selective emitters are expected to operate at high temperatures and to emit in a very narrow spectral band. One of the most promising methods for controlling and designing the thermal emission spectrum is the use of photonic crystals. Photonic crystals are artificial periodic structures capable of controlling and confining light in unprecedented ways. However, producing such structures over large areas and able to withstand high temperatures remains a difficult task. This work is devoted to the study of the thermal emission properties of 3D macroporous silicon structures in the mid-IR spectral range (2-30 µm). In particular, it focuses on reducing the high emissivity of crystalline silicon. The samples studied here have a periodicity of 4 µm, which limits the results obtained to the mid-infrared band, although much smaller structures are technologically feasible with the fabrication method used. We have shown that 3D macroporous silicon can completely inhibit thermal emission at its surface. Moreover, this band can be tuned over a wide range through small changes during macropore formation. We have also shown that both the width and the frequency of the inhibition band can be doubled by applying suitable post-processing techniques. Finally, we have shown that arbitrarily wide low-emissivity bands can be created using aperiodic macroporous structures.
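
The quantitative link between the measured emission and the structure's optical response, assumed as standard background here rather than stated in the abstract, is Kirchhoff's law of thermal radiation: the spectral emissivity equals the spectral absorptivity, so the emitted spectral radiance is the Planck black-body radiance weighted by what the sample absorbs,

    L(\lambda, T) \;=\; \varepsilon(\lambda)\, L_{\mathrm{BB}}(\lambda, T),
    \qquad
    \varepsilon(\lambda) \;=\; \alpha(\lambda) \;=\; 1 - R(\lambda) - T_{r}(\lambda),
    \qquad
    L_{\mathrm{BB}}(\lambda, T) \;=\; \frac{2hc^{2}}{\lambda^{5}}\,\frac{1}{e^{hc/\lambda k_{B} T} - 1},

with R and T_r the reflectance and transmittance. A photonic band gap suppresses the absorptivity, and hence the emissivity, over the gap, which is the emission inhibition reported above.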

Relevance: 10.00%

Abstract:

One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
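
A toy version of the two-stage construction can be sketched as follows (illustrative Python; the parameterisation is invented for the example and is not the independent or hierarchical binomial conditional logistic-normal model of the paper): first draw the incidence pattern of non-zero parts, then distribute the unit among those parts by closing log-normal weights, which yields a logistic-normal composition on the non-zero subvector.

    # Two-stage simulation of compositions with essential zeros.
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(n, incidence_prob, mu, sigma):
        """Stage 1: Bernoulli incidence of non-zero parts.
           Stage 2: logistic-normal composition on the non-zero parts."""
        D = len(incidence_prob)
        out = np.zeros((n, D))
        for k in range(n):
            z = rng.random(D) < incidence_prob       # which parts are present
            if not z.any():
                z[rng.integers(D)] = True            # keep at least one part
            w = np.exp(rng.normal(mu[z], sigma))     # log-normal weights
            out[k, z] = w / w.sum()                  # close to the unit simplex
        return out

    X = simulate(5, incidence_prob=np.array([0.9, 0.7, 0.5, 0.3]),
                 mu=np.zeros(4), sigma=1.0)
    print(X.round(3))

The rows of X, together with the implied zero/non-zero pattern, are exactly the two ingredients (incidence matrix plus conditional compositional matrix) that the models above work with.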

Relevance: 10.00%

Abstract:

Differential scanning calorimetry (DSC) was used to study the dehydrogenation processes that take place in three hydrogenated amorphous silicon materials: nanoparticles, polymorphous silicon, and conventional device-quality amorphous silicon. Comparison of DSC thermograms with evolved gas analysis (EGA) has led to the identification of four dehydrogenation processes arising from polymeric chains (A), SiH groups at the surfaces of internal voids (A'), SiH groups at interfaces (B), and in the bulk (C). All of them are slightly exothermic, with enthalpies below 50 meV/H atom, indicating that, after dissociation of any SiH group, most dangling bonds recombine. The kinetics of the three low-temperature processes [with DSC peak temperatures at around 320 (A), 360 (A'), and 430 °C (B)] exhibit a kinetic-compensation effect characterized by a linear relationship between the activation entropy and enthalpy, which constitutes their signature. Their Si-H bond-dissociation energies have been determined to be E(Si-H) = 3.14 eV (A), 3.19 eV (A'), and 3.28 eV (B). In these cases it was possible to extract the formation energy E(DB) of the dangling bonds that recombine after Si-H bond breaking [0.97 eV (A), 1.05 eV (A'), and 1.12 eV (B)]. It is concluded that E(DB) increases with the degree of confinement and that E(DB) > 1.10 eV for the isolated dangling bond in the bulk. After Si-H dissociation, and for the low-temperature processes, hydrogen is transported in molecular form and only limited relaxation of the silicon network is promoted. This is in contrast to the high-temperature process, for which the diffusion of H in atomic form induces a substantial lattice relaxation that, for the conventional amorphous sample, releases energy of around 600 meV per H atom. It is argued that the density of sites in the Si network for H trapping diminishes during atomic diffusion.
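
The kinetic-compensation effect invoked above is conventionally expressed through a transition-state (Eyring-type) rate in which the activation entropy depends linearly on the activation enthalpy across the family of processes; these are standard relations recalled only as background, not results of the paper:

    k(T) \;=\; \frac{k_{B} T}{h}\,
    \exp\!\left(\frac{\Delta S^{\ddagger}}{k_{B}}\right)
    \exp\!\left(-\frac{\Delta H^{\ddagger}}{k_{B} T}\right),
    \qquad
    \Delta S^{\ddagger} \;=\; a + \frac{\Delta H^{\ddagger}}{T_{\mathrm{iso}}},

so that the enthalpy dependence cancels at the isokinetic temperature T_iso and all compensated processes proceed at the same rate there.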

Relevance: 10.00%

Abstract:

Atomic force microscopy (AFM) in situ has been used to observe the cold disassembly dynamics of microtubules at a previously unrealised spatial resolution. Microtubules either electrostatically or covalently bound to aminosilane surfaces disassembled at room temperature under buffer solutions with no free tubulin present. This process was followed by taking sequential tapping-mode AFM images and measuring the change in the microtubule end position as a function of time, with a spatial accuracy down to ±20 nm and a temporal accuracy of ±1 s. As well as giving average disassembly rates on the order of 1-10 tubulin monomers per second, large fluctuations in the disassembly rate were revealed, indicating that the process is far from smooth and linear under these experimental conditions. The surface-bound rates measured here are comparable to the rates for GMPCPP-tubulin microtubules free in solution, suggesting that inhibition of tubulin curvature through steric hindrance controls the average, relatively low disassembly rate. The large fluctuations in this rate are thought to be due to multiple pathways in the kinetics of disassembly with differing rate constants and/or stalling due to defects in the microtubule lattice. Microtubules that were covalently bound to the surface left behind, during the disassembly process, the protofilaments covalently cross-linked to the aminosilane via glutaraldehyde. Further work is needed to quantitatively assess the effects of surface binding on protofilament disassembly rates, to reveal any differences in disassembly rates between the plus and minus ends, and to enable assembly as well as disassembly to be imaged in the microscope fluid cell in real time.

Relevance: 10.00%

Abstract:

A series of InxAl1-xAs samples (0.51 < x < 0.55) coherently grown on InP was studied in order to measure the band-gap energy of the lattice-matched composition. As the substrate is opaque to the relevant photon energies, a method is developed to calculate the optical absorption coefficient from the photoluminescence excitation spectra. The effect of strain on the band-gap energy has been taken into account. For x = 0.532, at 14 K we have obtained Eg0 = 1549 ± 6 meV.

Relevance: 10.00%

Abstract:

The Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than that of the Golden code, but it is fast-decodable, i.e., it allows reduced-complexity maximum-likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
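
The design target behind the set partitioning is the standard determinant criterion for space-time codes (recalled here for context, not quoted from the paper): for a full-rank codebook C the coding gain is governed by the minimum determinant

    \delta_{\min}(\mathcal{C}) \;=\; \min_{X \neq X' \in \mathcal{C}} \bigl|\det(X - X')\bigr|^{2},

and 'non-vanishing' means that this quantity stays bounded away from zero as the size of the underlying constellation grows, so partitioning the Silver Code lattice into subsets with larger intra-subset minimum determinant directly increases the coding gain of the concatenated trellis scheme.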

Relevance: 10.00%

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals on a regular lattice nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures.
Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
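
The Nowak and May (1992) mechanism described above, namely that spatial structure alone can sustain cooperation, is easy to reproduce; the sketch below is illustrative Python, not the thesis code, and the grid size, temptation parameter b and synchronous imitate-the-best update rule are arbitrary choices:

    # Weak prisoner's dilemma (R=1, T=b, S=P=0) on a 2-D lattice, Nowak-May style.
    import numpy as np

    rng = np.random.default_rng(3)
    L, b, steps = 50, 1.8, 50                  # grid size, temptation payoff, rounds
    coop = rng.random((L, L)) < 0.5            # True = cooperator

    def payoffs(c):
        """Total payoff of each site against its 8 neighbours."""
        nb_coop = sum(np.roll(np.roll(c, dx, 0), dy, 1)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
        return np.where(c, nb_coop * 1.0, nb_coop * b)   # C earns R, D earns T per cooperating neighbour

    for _ in range(steps):
        p = payoffs(coop)
        best_strat, best_pay = coop.copy(), p.copy()
        for dx in (-1, 0, 1):                  # each site imitates its best-scoring neighbour
            for dy in (-1, 0, 1):
                q = np.roll(np.roll(p, dx, 0), dy, 1)
                s = np.roll(np.roll(coop, dx, 0), dy, 1)
                better = q > best_pay
                best_pay = np.where(better, q, best_pay)
                best_strat = np.where(better, s, best_strat)
        coop = best_strat

    print(f"final fraction of cooperators: {coop.mean():.2f}")

For moderate temptation values the cooperators survive in compact clusters, whereas a well-mixed (random-matching) population drives cooperation to extinction, which is the contrast the thesis then revisits on empirical social-network structures.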

Relevance: 10.00%

Abstract:

One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…