997 results for Supersymmetric models
Abstract:
By using the Y(gl(m|n)) super Yangian symmetry of the SU(m|n) supersymmetric Haldane-Shastry spin chain, we show that the partition function of this model satisfies a duality relation under the exchange of bosonic and fermionic spin degrees of freedom. As a byproduct of this study of the duality relation, we find a novel combinatorial formula for the super Schur polynomials associated with some irreducible representations of the Y(gl(m|n)) Yangian algebra. Finally, we reveal an intimate connection between the global SU(m|n) symmetry of a spin chain and the boson-fermion duality relation.
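A schematic way to express the boson-fermion duality described above is sketched below; the identification of the maximum energy E_max and the overall prefactor are illustrative assumptions, not necessarily the exact relation derived in the paper.

```latex
% Schematic boson-fermion duality for the SU(m|n) Haldane-Shastry chain:
% exchanging bosonic and fermionic spin degrees of freedom is assumed to map
% the spectrum as E -> E_max - E, so that the partition function obeys
Z^{(m|n)}(q) \;=\; q^{\,E_{\mathrm{max}}}\, Z^{(n|m)}\!\left(q^{-1}\right),
% where q is the Boltzmann-type variable in which Z is written.
```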
Genetic analysis of structural brain connectivity using DICCCOL models of diffusion MRI in 522 twins
Abstract:
Genetic and environmental factors affect white matter connectivity in the normal brain, and they also influence diseases in which brain connectivity is altered. Little is known about genetic influences on brain connectivity, despite wide variations in the brain's neural pathways. Here we applied the 'DICCCOL' framework to analyze structural connectivity in 261 twin pairs (522 participants, mean age 21.8 ± 2.7 years). We encoded connectivity patterns by projecting the white matter (WM) bundles of all 'DICCCOLs' as a tracemap (TM). Next we fitted an A/C/E structural equation model to estimate additive genetic (A), common environmental (C), and unique environmental/error (E) components of the observed variations in brain connectivity. We found 44 'heritable DICCCOLs' whose connectivity was genetically influenced (a² > 1%); half of them showed significant heritability (a² > 20%). Our analysis of genetic influences on WM structural connectivity suggests high heritability for some WM projection patterns, yielding new targets for genome-wide association studies.
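As a rough illustration of how the A/C/E decomposition yields the heritability estimates quoted above, the sketch below uses Falconer-style formulas based on monozygotic (MZ) and dizygotic (DZ) twin correlations; the input correlations are invented placeholders, and the study itself fits a full structural equation model rather than this shortcut.

```python
def falconer_ace(r_mz, r_dz):
    """Rough A/C/E decomposition from MZ/DZ twin correlations:
    a2 = 2*(r_mz - r_dz)   additive genetic variance (heritability)
    c2 = 2*r_dz - r_mz     common (shared) environment
    e2 = 1 - r_mz          unique environment and measurement error
    """
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Hypothetical correlations of one DICCCOL trace-map feature (illustration only)
a2, c2, e2 = falconer_ace(r_mz=0.62, r_dz=0.41)
print(f"a2={a2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
# A DICCCOL would count as 'heritable' here if a2 > 0.01 and as showing
# significant heritability if a2 > 0.20, mirroring the thresholds above.
```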
Abstract:
In the world today there are many ways in which we measure, count and determine whether something is worth the effort or not. In Australia and many other countries, new government legislation is requiring government-funded entities to become more transparent in their practice and to develop a more cohesive narrative about the worth, or impact, for the betterment of society. This places the executives of such entities in a position of needing evaluative thinking and practice to guide how they may build the narrative that documents and demonstrates this type of impact. In thinking about where to start, executives, project and program managers may consider this workshop as a professional development opportunity to explore both the intended and unintended consequences of performance models as tools of evaluation. This workshop will offer participants an opportunity to unpack the place of performance models as an evaluative tool through the following:
· What shape does an ethical, sound and valid performance measure for an organization or personnel take?
· What role does cultural specificity play in the design and development of a performance model for an organization or for personnel?
· How are stakeholders able to identify risk during the design and development of such models?
· When and where will dissemination strategies be required?
· And so what? How can you determine that your performance model implementation has made a difference now or in the future?
Abstract:
The ground state and low energy excitations of the SU(m|n) supersymmetric Haldane–Shastry spin chain are analyzed. In the thermodynamic limit, it is found that the ground state degeneracy is finite only for the SU(m|0) and SU(m|1) spin chains, while the dispersion relation for the low energy and low momentum excitations is linear for all values of m and n. We show that the low energy excitations of the SU(m|1) spin chain are described by a conformal field theory of m non-interacting Dirac fermions which have only positive energies; the central charge of this theory is m/2. Finally, for n ≥ 1, the partition functions of the SU(m|n) Haldane–Shastry spin chain and the SU(m|n) Polychronakos spin chain are shown to be related in a simple way in the thermodynamic limit at low temperatures.
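The quoted central charge can be connected to the low-temperature partition function through the standard conformal field theory relation sketched below; the velocity v and the free energy per site f are generic notation, not symbols taken from the paper.

```latex
% Linear low-energy, low-momentum dispersion with some velocity v:
\epsilon(k) \simeq v\,|k|, \qquad k \to 0 .
% Standard CFT low-temperature behaviour of the free energy per site,
% with central charge c = m/2 for the SU(m|1) chain described above:
f(T) \;\simeq\; f(0) \;-\; \frac{\pi c\,(k_B T)^2}{6\,\hbar v} .
```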
Abstract:
We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11 based network carrying TCP controlled file downloads or full-duplex packet telephone calls. In each case the analytical models utilize the attempt probabilities from a well known fixed-point based saturation analysis. For TCP controlled file downloads, following Bruno et al. (In Networking '04, LNCS 2042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, analysis of the evolution between the embedded instants is done by using saturation analysis to provide state dependent attempt probabilities. We show that in spite of its simplicity, our model works well, by comparing various simulated quantities, such as collision probability, with values predicted from our model. Next we consider N constant bit rate VoIP calls terminating at N STAs. We model the number of STAs that have an up-link voice packet as a Markov renewal process embedded at so called channel slot boundaries. Analysis of the evolution over a channel slot is done using saturation analysis as before. We find that again the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of codecs. We consider N1 calls originating from Type 1 codecs and N2 calls originating from Type 2 codecs. For the G.711 and G.729 voice codecs, we show that the analytical model again provides accurate results in comparison with simulations.
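The AP-bottleneck capacity criterion described above (the arrival rate into the AP must stay below its average service rate) can be illustrated with the toy calculation below; the packetization interval and service rate are invented placeholder numbers, and the paper's actual capacity comes from the Markov renewal analysis and a delay-probability bound, not from this simple rate balance.

```python
def max_calls(mu_ap_pkts_per_s, packetization_interval_s):
    """Largest N such that N / T_pkt <= mu_ap.

    Each of the N calls sends one downlink packet to the AP every T_pkt seconds,
    so N / T_pkt is the aggregate arrival rate into the AP.  This is only a
    necessary condition for stability, not the delay-based call capacity.
    """
    return int(mu_ap_pkts_per_s * packetization_interval_s)

# Assumed numbers for illustration only: a 20 ms packetization interval and an
# effective AP service rate of ~600 packets/s on an 802.11b channel.
print("Rate-balance upper bound on calls:", max_calls(600.0, 0.020))
```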
Abstract:
The electrical conduction in insulating materials is a complex process and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, apart from claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations on using different empirical conductivity models suggested in the literature for HV DC cable applications. It has been expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
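One commonly quoted family of empirical conductivity models for DC cable insulation takes an exponential dependence on temperature and field; the sketch below implements such a form with placeholder coefficients. The functional form and the numbers are illustrative assumptions; the paper compares several such models rather than endorsing this one.

```python
import math

def sigma_exp_model(E, T, sigma0=1e-16, alpha=0.10, beta=3e-8):
    """Empirical DC conductivity of the exponential type:

        sigma(E, T) = sigma0 * exp(alpha * T + beta * |E|)

    with T in degrees C and E in V/m.  sigma0, alpha and beta are hypothetical
    placeholders; in practice they are fitted to measurements on the insulation.
    """
    return sigma0 * math.exp(alpha * T + beta * abs(E))

# Conductivity near the (hot) conductor versus near the (cooler) screen of a
# loaded DC cable, for assumed field and temperature values:
print(sigma_exp_model(E=1.5e7, T=70.0))   # conductor side
print(sigma_exp_model(E=2.0e7, T=40.0))   # screen side
# Because sigma varies strongly with E and T, the steady-state field profile can
# invert so that the stress at the screen exceeds that at the conductor.
```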
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is particularly important, especially if the inner meso-scale model domain is small.
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches however intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
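As a minimal, self-contained illustration of the kind of frequency information that heuristic process discovery starts from, the sketch below builds a directly-follows graph from an event log and filters infrequent relations; the tiny hard-coded log and the threshold are invented, and the approach described in the paper additionally transforms the discovered model into a block-structured one.

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b
    across all traces of the event log."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# Hypothetical event log: each trace is an ordered list of activity labels.
log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "notify"],
]

df = directly_follows(log)
threshold = 2  # heuristic filtering: keep only sufficiently frequent relations
edges = {pair: n for pair, n in df.items() if n >= threshold}
print(edges)  # {('register', 'check'): 3, ('check', 'approve'): 2, ('approve', 'notify'): 2}
```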
Abstract:
We study quench dynamics and defect production in the Kitaev and the extended Kitaev models. For the Kitaev model in one dimension, we show that in the limit of slow quench rate, the defect density n ∼ 1/√τ, where 1/τ is the quench rate. We also compute the defect correlation function by providing an exact calculation of all independent nonzero spin correlation functions of the model. In two dimensions, where the quench dynamics takes the system across a critical line, we elaborate on the results of earlier work [K. Sengupta, D. Sen, and S. Mondal, Phys. Rev. Lett. 100, 077204 (2008)] to discuss the unconventional scaling of the defect density with the quench rate. In this context, we outline a general proof that for a d-dimensional quantum model, where the quench takes the system through a d−m dimensional gapless (critical) surface characterized by correlation length exponent ν and dynamical critical exponent z, the defect density n ∼ 1/τ^{mν/(zν+1)}. We also discuss the variation of the shape and spatial extent of the defect correlation function with both the rate of quench and the model parameters and compute the entropy generated during such a quenching process. Finally, we study the defect scaling law, entropy generation and defect correlation function of the two-dimensional extended Kitaev model.
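As a consistency check, the general scaling law quoted above reproduces the one-dimensional result; the short derivation below only restates exponents already given in the abstract.

```latex
% Defect density for a quench through a (d-m)-dimensional gapless surface:
n \;\sim\; \tau^{-\,m\nu/(z\nu+1)} .
% One-dimensional Kitaev chain: the quench crosses a gapless point, so
% d = 1, m = 1, and with \nu = z = 1 one recovers
n \;\sim\; \tau^{-1/2} \;=\; 1/\sqrt{\tau},
% in agreement with the slow-quench result stated above.
```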
Abstract:
This paper presents the results of shaking table tests on model reinforced soil retaining walls in the laboratory. The influence of backfill relative density on the seismic response was studied through a series of laboratory model tests on retaining walls. Construction of the model retaining walls in a laminar box mounted on the shaking table, the instrumentation, and the results from the shaking table tests are described in detail. Three types of walls (wrap-faced and rigid-faced reinforced soil walls, and unreinforced rigid-faced walls), constructed to different densities, were tested at a relatively small excitation. Wrap-faced walls were further tested at higher base excitation at different frequencies and relative densities. It is observed from these tests that the effect of backfill density on the seismic performance of reinforced retaining walls is pronounced only at very low relative density and at the higher base excitation. The walls constructed with higher backfill relative density showed smaller face deformations and larger acceleration amplifications compared to the walls constructed with lower densities when tested at higher base excitation. The response of wrap- and rigid-faced retaining walls is not much affected by the backfill relative density when tested at smaller base excitation. The effects of facing rigidity were evaluated to a limited extent. Displacements in wrap-faced walls are many times higher than in rigid-faced walls. The results obtained from this study are helpful in understanding the relative performance of reinforced soil retaining walls when subjected to smaller and higher base excitations, for the range of relative densities employed in the testing program.
Abstract:
Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, where it has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse at the very first moments in its history. At the same time, they require the theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What kind of implications do the various models of physics beyond the Standard Model have on the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on one hand by reviewing our current knowledge of the history of the early universe, and on the other hand by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct the attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin or very early stages of the universe. They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.
Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis. Typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, full CMS detector description, and event reconstruction. Using the ROOT data analysis framework we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
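A minimal sketch of neural-network based signal/background separation of the kind described is given below, using scikit-learn on synthetic two-dimensional features; the feature distributions, network size, and data are invented for illustration, whereas the thesis work itself used a ROOT-based ANN on fully simulated CMS events.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in features (e.g. impact-parameter-like and secondary-vertex-
# like discriminants); a real analysis would use reconstructed event quantities.
n = 4000
signal = rng.normal(loc=[1.0, 0.8], scale=[0.6, 0.5], size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=[0.6, 0.5], size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
# A cut on clf.predict_proba(X_te)[:, 1] plays the role of the NN output
# threshold used to separate signal-like jets from background.
```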