898 results for Conformal Field Models in String Theory
Abstract:
Ecosystems consist of complex dynamic interactions among species and the environment, the understanding of which has implications for predicting the environmental response to changes in climate and biodiversity. However, with the recent adoption of more explorative tools, such as Bayesian networks, in predictive ecology, few assumptions about the data need to be made and complex, spatially varying interactions can be recovered from collected field data. In this study, we compare Bayesian network modelling approaches that account for latent effects to reveal species dynamics for 7 geographically and temporally varied areas within the North Sea. We also apply structure learning techniques to identify functional relationships, such as prey–predator relationships, between trophic groups of species that vary across space and time. We examine whether a general hidden variable can reflect overall changes in the trophic dynamics of each spatial system and whether the inclusion of a specific hidden variable can model unmeasured groups of species. The general hidden variable appears to capture changes in the variance of the biomass of different groups of species. Models that include both general and specific hidden variables identified structure consistent with the underlying food-web dynamics and captured unmeasured spatial effects. We predict the biomass of the trophic groups and find that predictive accuracy varies with the models' features and across the different spatial areas; we therefore propose a model that allows for spatial autocorrelation and two hidden variables. Our proposed model produced novel insights into this ecosystem's dynamics and ecological interactions, mainly because we account for the heterogeneous nature of the driving factors within each area and their changes over time. Our findings demonstrate that accounting for additional sources of variation, by combining structure learning from data with expert knowledge in the model architecture, has the potential to yield deeper insights into the structure and stability of ecosystems. Finally, we were able to discover meaningful functional networks that were spatially and temporally differentiated, with the particular mechanisms ranging from trophic associations to interactions with climate and commercial fisheries.
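Illustrative aside (not part of the abstract, and not the authors' latent-variable method): a minimal score-based structure-learning sketch in Python, assuming a hypothetical table of trophic-group biomass with made-up column names. It shows the kind of greedy search over linear-Gaussian BIC scores that underlies many Bayesian-network structure learners.

```python
import numpy as np
import pandas as pd

def gaussian_bic(y, X):
    """BIC-style score of a linear-Gaussian regression of y on the columns of X."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X]) if X.size else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = max(float(resid @ resid) / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik - 0.5 * Xd.shape[1] * np.log(n)

def greedy_parents(df, child, candidates, max_parents=2):
    """Greedily add the candidate parent that most improves the child's score."""
    parents = []
    best = gaussian_bic(df[child].to_numpy(), df[parents].to_numpy())
    while len(parents) < max_parents:
        scores = {c: gaussian_bic(df[child].to_numpy(), df[parents + [c]].to_numpy())
                  for c in candidates if c != child and c not in parents}
        if not scores:
            break
        top = max(scores, key=scores.get)
        if scores[top] <= best:
            break
        parents.append(top)
        best = scores[top]
    return parents

# Hypothetical biomass table: one row per haul/year, one column per trophic group.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.lognormal(size=(200, 4)),
                  columns=["phytoplankton", "zooplankton", "forage_fish", "piscivores"])
print(greedy_parents(df, "piscivores", list(df.columns)))
```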
Abstract:
This article presents the main results of a Delphi study carried out with politicians and technical staff from the Department of Social Policy of the County Council of Gipuzkoa, concerning the possibility of cooperativizing the provision of social services in this historical territory. With this in mind, the article is structured in two parts. The first part develops the theoretical framework that inspires the empirical work, noting the main theoretical proposals that bear on the collective dimension of citizen participation in the management of public services. Among the various models, those which prioritise public participation through social and solidarity economy entities stand out. The second part presents the field research results. To this end, the methodological notes on the preparation of the Delphi analysis are presented first, followed by a synthesis of the main results obtained in the study. The article ends with a section of conclusions and future lines of action.
Abstract:
We have conducted a broad survey of switching behavior in thin films of a range of ferroelectric materials, including some materials that are not typically considered for FeRAM applications and are hence less studied. The materials studied include strontium bismuth tantalate (SBT), barium strontium titanate (BST), lead zirconate titanate (PZT), and potassium nitrate (KNO3). Switching in ferroelectric thin films is typically considered to occur by domain nucleation and growth. We discuss two models of the frequency dependence of the coercive field: the Ishibashi-Orihara theory, in which the limiting step is domain growth, and the model of Du and Chen, in which the limiting step is nucleation. While both models fit the data fairly well, the temperature dependence of our results on PZT and BST suggests that the nucleation model of Du and Chen is more appropriate for the experimental results that we have obtained.
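Illustrative aside (not from the abstract): the Ishibashi-Orihara picture leads to a power-law frequency dependence of the coercive field, E_c ∝ f^beta. A minimal sketch of fitting that form to hypothetical (made-up) coercive-field data is shown below; the Du-Chen nucleation-limited form is not reproduced here.

```python
import numpy as np

# Hypothetical coercive-field values E_c (arbitrary units) at frequencies f (Hz);
# the numbers are made up purely for illustration.
f = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
Ec = np.array([35.0, 42.0, 50.0, 61.0, 73.0])

# Ishibashi-Orihara-type power law E_c = a * f**beta, fitted on log-log axes.
beta, log_a = np.polyfit(np.log(f), np.log(Ec), 1)
print(f"beta = {beta:.3f}, a = {np.exp(log_a):.2f}")
```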
Abstract:
In an effort to contribute to greater understanding of norms and identity in the theory of planned behaviour, an extended model was used to predict residential kerbside recycling, with self-identity, personal norms, neighbourhood identification, and injunctive and descriptive social norms as additional predictors. Data from a field study (N = 527) using questionnaire measures of predictor variables and an observational measure of recycling behaviour supported the theory. Intentions predicted behaviour, while attitudes, perceived control, and the personal norm predicted intention to recycle. The interaction between neighbourhood identification and injunctive social norms in turn predicted personal norms. Self-identity and the descriptive social norm significantly added to the original theory in predicting intentions as well as behaviour directly. A replication survey on the self-reported recycling behaviours of a random residential sample (N = 264) supported the model obtained previously. These findings offer a useful extension of the theory of planned behaviour and some practicable suggestions for pro-recycling interventions. It may be productive to appeal to self-identity by making people feel like recyclers, and to stimulate both injunctive and descriptive norms in the neighbourhood.
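Illustrative aside (not part of the abstract): a hedged sketch of the regression structure the extended theory-of-planned-behaviour model implies, using statsmodels on entirely hypothetical, randomly generated data with illustrative column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the survey data; in practice the columns would be
# questionnaire scales (attitude, perceived control, norms, self-identity, ...).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame(rng.normal(size=(n, 7)),
                  columns=["attitude", "perceived_control", "personal_norm",
                           "self_identity", "descriptive_norm",
                           "neighbourhood_id", "injunctive_norm"])
df["intention"] = (0.4 * df.attitude + 0.3 * df.perceived_control
                   + 0.2 * df.self_identity + rng.normal(size=n))

# Original theory-of-planned-behaviour predictors, then the extended model.
base = smf.ols("intention ~ attitude + perceived_control + personal_norm", data=df).fit()
extended = smf.ols("intention ~ attitude + perceived_control + personal_norm"
                   " + self_identity + descriptive_norm", data=df).fit()
# Interaction of neighbourhood identification and injunctive norms as a
# predictor of personal norms, as in the abstract.
norms = smf.ols("personal_norm ~ neighbourhood_id * injunctive_norm", data=df).fit()
print(base.rsquared, extended.rsquared)
```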
Abstract:
The random walk of magnetic field lines in the presence of magnetic turbulence in plasmas is investigated from first principles. An isotropic model is employed for the magnetic turbulence spectrum. An analytical investigation of the asymptotic behavior of the field-line mean-square displacement ⟨(Δx)²⟩ is carried out in terms of the position variable z. It is shown that ⟨(Δx)²⟩ varies as ∼ z ln z for large distance z. This result corresponds to a superdiffusive behavior of field-line wandering. This investigation complements previous work, which relied on a two-component model for the turbulence spectrum. Contrary to that model, quasilinear theory appears to provide an adequate description of the field-line random walk for isotropic turbulence.
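Restated in symbols (the notation ⟨(Δx)²⟩ for the field-line mean-square displacement is assumed here, not taken from the abstract), contrasting ordinary diffusion with the quoted superdiffusive scaling:

```latex
\[
  \langle (\Delta x)^2 \rangle \propto z \quad \text{(normal diffusion)}
  \qquad\text{vs.}\qquad
  \langle (\Delta x)^2 \rangle \sim z \ln z \quad \text{(isotropic turbulence, this work)}
\]
```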
Abstract:
We investigate the entanglement spectrum near criticality in finite quantum spin chains. Using finite-size scaling we show that when approaching a quantum phase transition, the Schmidt gap, i.e., the difference between the two largest eigenvalues λ₁ and λ₂ of the reduced density matrix, signals the critical point and scales with universal critical exponents related to the relevant operators of the corresponding perturbed conformal field theory describing the critical point. Such scaling behavior allows us to identify explicitly the Schmidt gap as a local order parameter.
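Illustrative aside (not part of the abstract): a minimal numerical sketch of the Schmidt gap λ₁ − λ₂ for a pure state of a small chain, assuming the state vector is already available (e.g. from exact diagonalization, which is not shown here).

```python
import numpy as np

def schmidt_gap(psi, dim_left, dim_right):
    """Schmidt gap lambda_1 - lambda_2 of a pure state under a left/right bipartition.

    The squared singular values of the reshaped state vector are the eigenvalues
    of the reduced density matrix of either half.
    """
    m = psi.reshape(dim_left, dim_right)
    s = np.linalg.svd(m, compute_uv=False)
    lam = np.sort(s**2)[::-1]
    return lam[0] - lam[1]

# Example: 8-site spin-1/2 chain split 4|4; here psi is just a normalized random
# vector standing in for a ground state.
psi = np.random.default_rng(0).normal(size=2**8)
psi /= np.linalg.norm(psi)
print(schmidt_gap(psi, 2**4, 2**4))
```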
Abstract:
Current conceptual models of the reciprocal interactions linking soil structure, plants and arbuscular mycorrhizal fungi emphasise positive feedbacks among the components of the system. However, dynamical systems with high dimensionality and several positive feedbacks (i.e. mutualism) are prone to instability. Further, organisms such as arbuscular mycorrhizal fungi (AMF) are obligate biotrophs of plants and are considered major biological agents in soil aggregate stabilization. With these considerations in mind, we developed dynamical models of soil ecosystems that reflect the main features of current conceptual models and empirical data, especially positive feedbacks and linear interactions among plants, AMF and the component of soil structure dependent on aggregates. We found that systems become increasingly unstable the more positive effects with a Type I functional response (i.e., the growth rate of a mutualist is modified by the density of its partner through linear proportionality) are added to the model, to the point that increasing the realism of models by adding linear effects produces the most unstable systems. The present theoretical analysis thus offers a framework for modelling and suggests new directions for experimental studies on the interrelationship between soil structure, plants and AMF. Non-linearity in functional responses, spatial and temporal heterogeneity, and indirect effects can be invoked on a theoretical basis and tested in laboratory and field experiments in order to account for and buffer the local instability of the simplest current scenarios. The first model presented here may generate interest in more explicitly representing the role of biota in soil physical structure, a phenomenon that is typically viewed in a more process- and management-focused context.
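Illustrative aside (not the authors' model): a minimal sketch of why stacking linear (Type I) positive feedbacks destabilizes such systems. The community matrix below is hypothetical, with made-up coefficients; a positive dominant eigenvalue signals local instability of the equilibrium.

```python
import numpy as np

# Minimal linear-feedback sketch: plant P, AMF M, soil aggregation A.
# Weak self-limitation on the diagonal, purely positive (Type I) feedbacks
# off the diagonal; all values are illustrative only.
J = np.array([
    [-0.1,  0.4,  0.3],   # plant: self-limitation, boosted by AMF and aggregates
    [ 0.5, -0.1,  0.2],   # AMF: obligate on plants, boosted by soil structure
    [ 0.4,  0.3, -0.1],   # aggregates: stabilized by roots and hyphae
])
eig = np.linalg.eigvals(J)
print(eig.real.max())   # > 0 here: the equilibrium is locally unstable
```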
Abstract:
Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of these conditions are rarely satisfied and the regression models become ill-posed, thereby preventing the application of the traditional estimation methods. This work presents some contributions to maximum entropy theory for the estimation of ill-posed models, in particular for the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows a better performance of the maximum entropy estimators compared with the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression the greatest challenge is the estimation of the ridge parameter. Although numerous procedures are available in the literature, none outperforms all the others. In this work a new estimator of the ridge parameter is proposed, combining ridge trace analysis with maximum entropy estimation. The results of the simulation studies suggest that this new estimator is one of the best procedures available for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy and concepts from quantum electrodynamics. This estimator overcomes the main criticism levelled at the generalized maximum entropy estimator, since it dispenses with the supports for the parameters and errors of the regression model. This work presents new contributions to maximum entropy theory for the estimation of ill-posed models, based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational code for maximum entropy estimation is presented, thus contributing to the scarce computational resources currently available.
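Illustrative aside (not part of the abstract, and not the proposed maximum-entropy estimator): a minimal sketch of the ridge-trace step that the new ridge-parameter estimator builds on, using scikit-learn on a small, artificially collinear data set.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

# Hypothetical small, collinear design (n = 20, p = 5) and response y.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X[:, 4] = X[:, 3] + 0.01 * rng.normal(size=20)   # near-exact collinearity
y = X @ np.array([1.0, 0.5, 0.0, 2.0, -2.0]) + rng.normal(scale=0.5, size=20)

Xs = StandardScaler().fit_transform(X)
ridge_params = np.logspace(-4, 2, 50)
# Ridge trace: coefficient paths as the ridge parameter grows; one inspects where
# they stabilise. The thesis's estimator combines this information with a
# maximum-entropy criterion, which is not implemented here.
trace = np.array([Ridge(alpha=k).fit(Xs, y).coef_ for k in ridge_params])
print(trace.shape)
```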
Abstract:
Neural stem cells have been proposed as a new and promising treatment modality in various pathologies of the central nervous system, including malignant brain tumors. However, the underlying mechanism by which neural stem cells target tumor areas remains elusive. Monitoring of these cells is currently done with various modes of molecular imaging, such as optical imaging, magnetic resonance imaging and positron emission tomography, a novel technology for visualizing processes ranging from metabolism and signal transduction to gene expression. In this new context, the microenvironment of (malignant) brain tumors and the blood-brain barrier gain increased interest. The authors of this review give a unique overview of the current molecular-imaging techniques used in different therapeutic experimental brain tumor models in relation to neural stem cells. Such methods for molecular imaging of gene-engineered neural stem/progenitor cells are currently used to trace the location and temporal level of expression of therapeutic and endogenous genes in malignant brain tumors, closing the gap between in vitro and in vivo integrative biology of disease in neural stem cell transplantation.
Abstract:
The purpose of this study was to conduct a comparative textual analysis of the role of movement in 3 texts in Drama in Education in Canada. As the subject is holistic and encourages creative, active participation, movement was expected to appear, even inadvertently, in both theory and practice. It was hoped that guidelines for the use of movement within Drama in Education would emerge from the texts and that these guidelines would serve as models for others to use. A total of 26 Drama in Education experts in Canada were each asked to list the 10 most important texts in the field. Those who answered were assigned numbers and charted according to age, gender, and geography. An objective colleague helped narrow the group to 16 participants. A frequency count was used, assigning 10 points to the first text on each list and descending to 1 point for the tenth text listed. Based on the highest number of points calculated, the 5 most frequently listed texts were identified. These were compared to ascertain the widest representation of the authors' geographic location and gender, as well as differences in theory and practice. The final selection included 3 texts that represented differing approaches in their presentation and discussion of Drama in Education theories and practices. Analysis involved applying 5 levels of commitment to determine if, how, why, when, and with what results movement was explicitly or implicitly addressed in the 3 texts. The analysis yielded several unexpected findings about each of the 3 texts. The study also provided suggestions for extending and clarifying the role of movement in teaching and learning in general, as well as in Drama in Education in particular.
Abstract:
Lattice models such as the percolation, Ising and Potts models are used to describe phase transitions in two dimensions. The search for their analytical solution proceeds through the computation of the partition function and the diagonalization of transfer matrices. At the critical point, these two-dimensional statistical models are invariant under conformal transformations, and the construction of rational conformal field theories, the continuum limits of the statistical models, allows the partition function to be computed at the critical point. Many researchers believe, however, that the paradigm of rational conformal field theories can be enlarged to include statistical models with non-diagonalizable transfer matrices. In the scaling limit, these models would then be described by logarithmic conformal field theories, and the representations of the Virasoro algebra entering the description of the physical observables would be indecomposable. The loop transfer matrix D_N(λ, u), an element of the Temperley-Lieb algebra, appears in physical theories through the link representations ρ (link modules). The vector space on which this representation acts decomposes into sectors labelled by a physical parameter, the number d of defects. The action of this representation can only decrease this number or leave it unchanged. The thesis is devoted to identifying the Jordan structure of D_N(λ, u) in these representations. The parameter β = 2 cos λ = −(q + 1/q) fixes the theory: β = 1 for percolation and √2 for the Ising model, for example. On the strip geometry, we show that D_N(λ, u) has the same Jordan blocks as F_N, its highest Fourier coefficient. We study the non-diagonalizability of F_N through the divergences of certain components of its eigenvectors, which appear at the critical values of λ. We prove in ρ(D_N(λ, u)) the existence of intersectorial Jordan cells of rank 2, coupling sectors d and d′ when certain constraints on λ, d, d′ and N are satisfied. For the critical dense polymer model (β = 0) on the strip, the eigenvalues of ρ(D_N(λ, u)) were known, but their degeneracies only conjectured. By constructing an isomorphism between the link modules and a subspace of the spin modules of the XXZ model at q = i, we prove this conjecture. We also show that the restriction of the loop Hamiltonian to a given sector is diagonalizable, and we find the exact Jordan form of the XX Hamiltonian, which is non-trivial for even N only. Finally, we study the Jordan structure of the transfer matrix T_N(λ, ν) for periodic boundary conditions. The matrix T_N(λ, ν) has intrasectorial and intersectorial Jordan blocks when λ = πa/b, with a, b ∈ Z×. The approach via F_N admits a generalization that detects intersectorial cells whose rank exceeds 2 in certain cases and can grow indefinitely with N. For the intrasectorial Jordan blocks, we show that the link representations on the cylinder and those of the XXZ model are isomorphic except for certain specific values of q and of the twist parameter v. Using the behaviour of the transformation i_N^d in a neighbourhood of the critical values (q_c, v_c), we construct explicitly rank-2 generalized Jordan vectors and discuss the existence of intrasectorial Jordan blocks of higher rank.
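Illustrative aside (not part of the abstract): a one-line check of the relation β = 2 cos λ for the two examples quoted above; the λ values are inferred from the quoted β values, not stated in the abstract.

```python
import numpy as np

# beta = 2*cos(lambda): lambda = pi/3 gives beta = 1 (percolation),
# lambda = pi/4 gives beta = sqrt(2) (Ising), consistent with the abstract.
for lam, name in [(np.pi / 3, "percolation"), (np.pi / 4, "Ising")]:
    print(name, 2 * np.cos(lam))
```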
Abstract:
This thesis deals with the critical phenomena arising in two-dimensional lattice models. The results are the subject of two articles: the first concerns the measurement of critical exponents describing geometric objects on the lattice, and the second the construction of idempotents projecting onto indecomposable modules of the Temperley-Lieb algebra for the XXZ spin chain. The first article presents Monte Carlo numerical experiments carried out for a family of loop models in the dilute phase. Named "dilute loop models (DLM)", they are inspired by the O(n) model introduced by Nienhuis (1990). The family is labelled by the coprime integers p and p' and by an anisotropy parameter. In the thermodynamic limit, the model DLM(p,p') is expected to be described by a logarithmic conformal field theory of central charge c(\kappa)=13-6(\kappa+1/\kappa), where \kappa=p/p' is related to the loop-gas fugacity \beta=-2\cos\pi/\kappa, for any value of the anisotropy parameter. The measurements concern the critical exponents governing the scaling of the following geometric objects: the interface, the external perimeter and the red bonds. The Metropolis-Hastings algorithm employed, for which we introduced numerous improvements specific to dilute models, is described in detail. A rigorous statistical treatment of the data yields extrapolations that coincide with the theoretical predictions to three or four significant digits, despite steep extrapolation curves. The second article deals with the decomposition of the Hilbert space \otimes^nC^2 on which the XXZ chain of n spins 1/2 acts. The version studied here (Pasquier and Saleur (1990)) is described by a Hamiltonian H_{XXZ}(q) depending on a parameter q\in C^\times and expressed as a sum of elements of the Temperley-Lieb algebra TL_n(q). As for the dilute models, the spectrum of the continuum limit of H_{XXZ}(q) appears to be related to conformal field theories, with the parameter q determining the central charge. The primitive idempotents of End_{TL_n}\otimes^nC^2 are obtained, for any q, in terms of elements of the quantum algebra U_qsl_2 (or of an extension) through quantum Schur-Weyl duality. These idempotents allow the indecomposable TL_n-modules of \otimes^nC^2 to be constructed explicitly. They are all irreducible, except when q is a root of unity. This exception is treated separately from the case of generic q. The problems solved in these articles require a wide variety of results and tools. For this reason, the thesis contains several preparatory chapters. Its structure is as follows. The first chapter introduces concepts common to the two articles, notably a description of critical phenomena and of conformal field theory. The second chapter briefly covers logarithmic fields, Schramm-Loewner evolution and the Metropolis-Hastings algorithm. These topics are needed for reading the article "Geometric Exponents of Dilute Loop Models" in Chapter 3. The fourth chapter presents the algebraic tools used in the second article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2", which constitutes Chapter 5. The thesis concludes with a summary of the important results and proposals for research avenues that follow from them.
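Illustrative aside (not part of the abstract): a minimal evaluation of the quoted central-charge and fugacity formulas. The sample values of \kappa are arbitrary and are not tied to any particular DLM(p,p') model.

```python
import numpy as np

def central_charge(kappa):
    # c(kappa) = 13 - 6*(kappa + 1/kappa), as quoted in the abstract
    return 13 - 6 * (kappa + 1 / kappa)

def loop_fugacity(kappa):
    # beta = -2*cos(pi/kappa), as quoted in the abstract
    return -2 * np.cos(np.pi / kappa)

# Illustrative values only; kappa = p/p' depends on the DLM(p,p') model chosen.
for kappa in (3 / 2, 4 / 3):
    print(kappa, central_charge(kappa), loop_fugacity(kappa))
```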
Abstract:
In this thesis, we study some fundamental problems in financial and actuarial mathematics, together with their applications. The thesis consists of three contributions, dealing mainly with the theory of risk measures, the capital allocation problem and fluctuation theory. In Chapter 2, we construct new coherent risk measures and study capital allocation within the framework of collective risk theory. To this end, we introduce the family of "Cumulative Entropic Risk Measures". Chapter 3 studies the optimal portfolio problem for the Entropic Value at Risk when returns are modelled by a jump-diffusion process. In Chapter 4, we generalize the notion of "natural risk statistics" to the multivariate setting. This non-trivial extension produces multivariate risk measures built from financial and insurance data. Chapter 5 introduces the concepts of "drawdown" and of the "speed of depletion" in ruin theory. We study these concepts for risk models described by a family of spectrally negative Lévy processes.
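Illustrative aside (not part of the abstract, and distinct from the Cumulative Entropic Risk Measures introduced in the thesis): a minimal sketch of the standard Entropic Value at Risk in the Gaussian case, using the dual form EVaR_alpha(X) = inf_{z>0} z^{-1} (ln M_X(z) - ln alpha) and comparing it with the known closed form mu + sigma*sqrt(-2 ln alpha); the sign/loss convention used here is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def evar_gaussian(mu, sigma, alpha):
    """Entropic Value at Risk of X ~ N(mu, sigma^2) at tail level alpha,
    via the dual representation inf_{z>0} (ln M_X(z) - ln alpha) / z."""
    def objective(z):
        return (mu * z + 0.5 * sigma**2 * z**2 - np.log(alpha)) / z
    res = minimize_scalar(objective, bounds=(1e-6, 1e3), method="bounded")
    return res.fun

mu, sigma, alpha = 0.0, 1.0, 0.05
print(evar_gaussian(mu, sigma, alpha))            # numerical infimum
print(mu + sigma * np.sqrt(-2 * np.log(alpha)))   # closed form for the Gaussian case
```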
Abstract:
Designing is a heterogeneous, fuzzily defined, floating field of various activities and chunks of ideas and knowledge. Available theories about the foundations of designing, as presented in "the basic PARADOX" (Jonas and Meyer-Veden 2004), have evoked the impression of Babylonian confusion. We located the reasons for this "mess" in the "non-fit", that is, the problematic relation between theories and their subject field. There seems to be a comparable interface problem in theory-building as in designing itself. "Complexity" sounds promising but turns out to be a problematic and not really helpful concept. I will argue instead for a more precise application of systemic and evolutionary concepts, which, in my view, are able to model the underlying generative structures and processes that produce the visible phenomenon of complexity. It does not make sense to introduce a new fashionable meta-concept and to hope for a panacea before having clarified the more basic and still equally problematic older meta-concepts. This paper takes one step away from "theories of what" towards practice and doing, and tries instead to take a closer look at existing process models, or "theories of how" to design. Doing this from a systemic perspective leads to an evolutionary view of the process, which finally allows us to specify more clearly the "knowledge gaps" inherent in the design process. This aspect has to be taken into account as constitutive of any attempt at theory-building in design, which can be characterized as a "practice of not-knowing". I conclude that comprehensive "unified" theories, methods, or process models run aground on the identified knowledge gaps, which allow neither reliable models of the present nor reliable projections into the future. Consolation may be found in shifting from the effort of adaptation towards strategies of exaptation, that is, the development of stocks of alternatives for coping with unpredictable situations in the future.
Abstract:
This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. The study presented in this first part includes the computation of the distribution of each process, the means and variances, as well as the moment generating functions, among other properties. Using these properties, the second part studies option-pricing models based on telegraph processes with jumps. This part describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model and, finally, computes the prices of European call and put options.
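Illustrative aside (not part of the abstract): a minimal simulation sketch of a plain telegraph process, i.e., motion at constant speed with direction reversals at the jump times of a Poisson process. The jump component and the risk-neutral pricing step studied in the thesis are not included.

```python
import numpy as np

def simulate_telegraph(T, c, lam, rng):
    """Simulate a telegraph process on [0, T]: position moving at speed +c or -c,
    reversing direction at the jump times of a Poisson process with rate lam.
    Returns the switching times and the positions at those times."""
    t, x = 0.0, 0.0
    v = c if rng.random() < 0.5 else -c
    times, path = [0.0], [0.0]
    while True:
        dt = rng.exponential(1.0 / lam)
        if t + dt >= T:
            times.append(T)
            path.append(x + v * (T - t))
            return np.array(times), np.array(path)
        t += dt
        x += v * dt
        v = -v
        times.append(t)
        path.append(x)

rng = np.random.default_rng(1)
times, path = simulate_telegraph(T=1.0, c=1.0, lam=5.0, rng=rng)
print(times, path)
```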