998 results for Consistency conditions
Abstract:
We derive a one-parameter family of consistency conditions for braneworlds in Brans-Dicke gravity. The General Relativity case is recovered by taking the appropriate limit of the Brans-Dicke parameter. We show that a multiple AdS brane scenario can be built in a six-dimensional bulk only if the brane tensions are negative. Moreover, in the five-dimensional case, it is shown that no fine tuning is necessary between the bulk cosmological constant and the brane tensions, in contrast to the Randall-Sundrum model. Copyright © owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence.
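For context, the Randall-Sundrum fine tuning referenced above relates the bulk cosmological constant to the brane tensions; in the conventions of the original Randall-Sundrum paper (quoted here only for illustration) it reads:

```latex
V_{\rm hidden} = -V_{\rm visible} = 24 M^3 k, \qquad
\Lambda = -24 M^3 k^2
\;\;\Longrightarrow\;\;
\Lambda = -\frac{V_{\rm hidden}^2}{24 M^3},
```

where $M$ is the five-dimensional fundamental scale and $k$ the curvature of the warp factor. It is this rigid relation between $\Lambda$ and the tensions that the abstract reports can be avoided in the Brans-Dicke setting.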
Abstract:
A twisted generalized Weyl algebra A of degree n depends on a base algebra R, n commuting automorphisms sigma(i) of R, n central elements t(i) of R, and on some additional scalar parameters. In a paper by Mazorchuk and Turowska, it is claimed that certain consistency conditions for sigma(i) and t(i) are sufficient for the algebra to be nontrivial. However, in this paper we give an example which shows that this is false. We also correct the statement by finding a new set of consistency conditions and prove that the old and new conditions together are necessary and sufficient for the base algebra R to map injectively into A. In particular, they are sufficient for the algebra A to be nontrivial. We speculate that these consistency relations may play a role in other areas of mathematics, analogous to the role played by the Yang-Baxter equation in the theory of integrable systems.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this note we describe the most general coupling of abelian vector and tensor multiplets to six-dimensional (1,0) supergravity. As was recently pointed out, it is of interest to consider more general Chern-Simons couplings to abelian vectors of the type H^r = dB^r - (1/2) c^r_{ab} A^a dA^b, with c^r matrices that may not be simultaneously diagonalized. We show that these couplings can be related to Green-Schwarz terms of the form B^r c^r_{ab} F^a F^b, and how the complete local Lagrangian, which embodies factorized gauge and supersymmetry anomalies (to be disposed of by fermion loops), is uniquely determined by Wess-Zumino consistency conditions, aside from an arbitrary quartic coupling for the gauginos. (C) 2000 Elsevier Science B.V.
Abstract:
Graduate Program in Physics - IFT
Abstract:
In this work we study extra-dimensional theories, with emphasis on braneworld models generated by real scalar fields. First, we review the Randall-Sundrum models and discuss some thick braneworld scenarios already considered in the literature. We introduce a new thick brane model in order to address the Standard Model hierarchy problem. Furthermore, we show that there exists a class of scalar field models which is very interesting for analytical studies of thick brane scenarios. Finally, we analyze the braneworld consistency conditions in the context of f(R) and Brans-Dicke gravities, where we show that it is possible to evade a no-go theorem regarding thick brane scenarios.
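The mechanism behind the hierarchy-problem discussion above is the exponential warping of mass scales; the standard Randall-Sundrum expressions (quoted for illustration) are:

```latex
ds^2 = e^{-2k|y|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + dy^2, \qquad
m_{\rm eff} = e^{-k\pi r_c}\, m_0,
```

where $k$ is the AdS curvature scale and $r_c$ the compactification radius: a modest value of $k\pi r_c \sim 35$ suffices to redshift a fundamental mass $m_0$ of Planck size down to the electroweak scale.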
Abstract:
We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we re-derive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are based solely on symmetry arguments and are therefore universal. Then, we explore the existence of equal-time relations in the soft limit which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literature: the time-flow formalism, and a background method where the soft mode is absorbed into a locally curved cosmology. The latter has recently been used to set up (angle-averaged) 'equal-time consistency relations'. We explicitly demonstrate that the time-flow relations and 'equal-time consistency conditions' are only fulfilled at the linear level, and fail at next-to-leading order for an Einstein-de Sitter universe. While both proposals break down beyond leading order when applied to the velocities, we find that the 'equal-time consistency conditions' quantitatively approximate the perturbative results for the density contrast. Thus, we generalize the background method to properly incorporate the effect of curvature on the density and velocity fluctuations on short scales, and discuss the reasons behind this discrepancy. We conclude with a few comments on practical implementations and future directions.
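Schematically, the unequal-time consistency conditions discussed above take the familiar squeezed-limit form (notation assumed here: $P_L$ is the linear power spectrum, $D$ the linear growth factor):

```latex
\lim_{q\to 0}\,
\langle \delta_{\mathbf q}(\tau)\,\delta_{\mathbf k_1}(\tau_1)\cdots\delta_{\mathbf k_n}(\tau_n)\rangle'
= -\,P_L(q,\tau)\sum_{i=1}^{n}\frac{D(\tau_i)}{D(\tau)}\,
\frac{\mathbf k_i\cdot\mathbf q}{q^2}\,
\langle \delta_{\mathbf k_1}(\tau_1)\cdots\delta_{\mathbf k_n}(\tau_n)\rangle',
```

where the prime denotes dropping the momentum-conserving delta function. At equal times ($\tau_i=\tau$) the right-hand side sums to zero by momentum conservation, which is precisely why any nontrivial equal-time relation must probe the soft-hard mode coupling rather than symmetry alone.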
Abstract:
We consider a flux formulation of Double Field Theory in which fluxes are dynamical and field-dependent. Gauge consistency imposes a set of quadratic constraints on the dynamical fluxes, which can be solved by truly double configurations. The constraints are related to generalized Bianchi Identities for (non-)geometric fluxes in the double space, sourced by (exotic) branes. Following previous constructions, we then obtain generalized connections, torsion and curvatures compatible with the consistency conditions. The strong constraint-violating terms needed to make contact with gauged supergravities containing duality orbits of non-geometric fluxes systematically arise in this formulation.
Abstract:
Guest editorial

Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial board of several international journals and a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector

This special issue focuses on holistic, applied research on performance measurement in energy sector management, and on publishing relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimations and confidence intervals for those estimates. The author reveals that, in the sample, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of the oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm’s private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated into the evaluation process through an interactive means with verbal, numerical, and visual representation of their preferences. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic impact, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm’s efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in a firm’s efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Moreover, parent companies appear not to influence a subsidiary’s efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
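The consistency verification step of AHP-style pairwise comparison, mentioned above, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the RI values are Saaty's commonly tabulated random indices.

```python
import numpy as np

# Saaty's random consistency index (RI) for matrix sizes 1..10,
# as commonly tabulated (assumed here; the editorial does not list them).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """Consistency ratio CR = CI / RI for a pairwise comparison matrix A.
    A judgment matrix is conventionally accepted when CR < 0.1."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)              # consistency index
    return ci / RI[n]

# A perfectly consistent matrix (a_ij = w_i / w_j for weights w = (4, 2, 1))
# has lam_max = n, hence CR = 0.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(consistency_ratio(A))  # approximately 0 for a consistent matrix
```

For an inconsistent matrix (e.g. one where a_13 differs from a_12 * a_23) the principal eigenvalue exceeds n and CR becomes positive, flagging the judgments for revision.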
Abstract:
We suggest a variant of the nonlinear σ model for the description of disordered superconductors. The main distinction from existing models lies in the fact that the saddle point equation is solved nonperturbatively in the superconducting pairing field. This allows one to use the model both in the vicinity of the metal-superconductor transition and well below its critical temperature, with full account of the self-consistency conditions. We show that the model reproduces a set of known results in different limiting cases, and apply it to a self-consistent description of the proximity effect at the superconductor-metal interface.
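For orientation, the self-consistency condition in question is the usual gap equation; in the diffusive (Usadel) angle parametrization it reads schematically (standard form, quoted here only for illustration):

```latex
\Delta(\mathbf r) = 2\pi T \lambda \sum_{\omega_n > 0} \sin\theta(\mathbf r, \omega_n),
```

where $\theta$ is the pairing angle, $\omega_n = \pi T (2n+1)$ are the Matsubara frequencies, and $\lambda$ is the coupling constant. In a homogeneous bulk $\sin\theta = \Delta/\sqrt{\omega_n^2 + \Delta^2}$, and the BCS gap equation is recovered.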
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and is still undergoing, like statistical analysis in general, a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or the more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
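The classical CUSUM statistic and change point estimator underlying the theory above can be sketched, for a single real-valued series, as follows (an illustrative toy version for the univariate »change in the mean« problem, not the thesis's Hilbert-space or panel machinery):

```python
import numpy as np

def cusum_statistic(x):
    """Unweighted CUSUM: max_k |S_k - (k/n) S_n| / sqrt(n),
    where S_k is the k-th partial sum. Large values indicate
    a change in the mean."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return np.max(np.abs(s - k / n * s[-1])) / np.sqrt(n)

def change_point_estimate(x):
    """Classical CUSUM estimator: argmax of the centered partial sums."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return int(np.argmax(np.abs(s - k / n * s[-1]))) + 1

# Toy example: mean shifts from 0 to 1 at observation 200.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
print(cusum_statistic(x), change_point_estimate(x))
```

The statistic is compared against a critical value from the limiting distribution (Brownian bridge, or Gumbel after Darling-Erdős-type weighting); the estimator localizes the break where the centered partial-sum process peaks.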
Abstract:
Time-domain reflectometry (TDR) is an important technique for obtaining series of soil water content measurements in the field. Diode-segmented probes represent an improvement in TDR applicability, allowing measurements of the soil water content profile with a single probe. In this paper we explore an extensive soil water content dataset obtained by tensiometry and TDR from internal drainage experiments in two consecutive years in a tropical soil in Brazil. Comparisons between the variation patterns of the water content estimated by both methods exhibited evidence of deterioration of the TDR system during this two-year period under field conditions. The results showed consistency in the variation pattern of the tensiometry data, whereas the TDR estimates were inconsistent, with sensitivity decreasing over time. This suggests that difficulties may arise in the long-term use of this TDR system under tropical field conditions. (c) 2008 Elsevier B.V. All rights reserved.
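TDR infers water content from the apparent dielectric permittivity of the soil. A common conversion, assumed here purely for illustration (tropical soils such as the one studied often require a site-specific calibration instead), is the Topp et al. (1980) polynomial:

```python
def topp_theta(eps):
    """Volumetric water content (m3/m3) from apparent dielectric
    permittivity eps, via the widely used Topp et al. (1980)
    calibration. Illustrative only: not the calibration of the
    study summarized above."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

# Dry soil (eps ~ 3) versus wet soil (eps ~ 25):
print(topp_theta(3.0), topp_theta(25.0))
```

Because the relation is monotonic over the relevant range, a drift in the measured permittivity (e.g. from diode deterioration) translates directly into a bias in the inferred water content, which is the kind of long-term inconsistency the study reports.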
Abstract:
The importance of disturbance and of the subsequent rate and pattern of recovery as drivers of community structure has long been recognised. Community recovery is affected by processes operating at local and regional scales, yet the examination of community-level responses to a standardised disturbance at regional scales (i.e. among regions under different environmental conditions) has seldom been attempted. Here, we mechanically disturbed rocky intertidal lower-shore algal-dominated assemblages at three locations within each of three different regions of the Lusitanian biogeographical province (Azores, northern Portugal and the Canary Islands). All organisms were cleared from experimental plots and succession was followed over a period of 12 months, at which time we formally compared the assemblage structure to that of unmanipulated controls. Early patterns of recovery of disturbed communities varied among regions and were positively influenced by temperature, but not by regional species richness. Different components of the assemblage responded differently to disturbance. Regional differences in the relative abundance and identity of species had a key influence on the overall assemblage recovery. This study highlights how regional-scale differences in environmental conditions and species pools are important determinants of the recovery of disturbed communities.
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far, such analysis was typically based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at an average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. By conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level.
Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it offers the possibility to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding behaviour and brain activity interdependencies at both the single-subject and the group level. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and inter-individual variability.