30 results for Large detector systems for particle and astroparticle physics
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Recent results in the literature concerning holography indicate that the thermodynamics of quantum gravity (at least with a negative cosmological constant) can be modeled by the large N thermodynamics of quantum field theory. We emphasize that this suggests a completely unitary evolution of processes in quantum gravity, including black hole formation and decay, and even more extreme examples involving topology change. As concrete examples which show that this correspondence holds even when the space-time is only locally asymptotically AdS, we compute the thermodynamical phase structure of the AdS-Taub-NUT and AdS-Taub-bolt spacetimes, and compare them to a (2+1)-dimensional conformal field theory (at large N) compactified on a squashed three-sphere and on the twisted plane.
Abstract:
The scalar sector of the effective low-energy six-dimensional Kaluza-Klein theory is seen to represent an anisotropic fluid composed of two perfect fluids if the extra space metric has a Euclidean signature, or a perfect fluid of geometric strings if it has an indefinite signature. The Einstein field equations with such fluids can be explicitly integrated when the four-dimensional space-time has two commuting Killing vectors.
Abstract:
The report presents a grammar capable of analyzing the process of production of electricity in modular elements for different power-supply systems, defined using semantic and formal categories. In this way it becomes possible to individuate similarities and differences in the process of production of electricity, and then measure and compare “apples” with “apples” and “oranges” with “oranges”. For instance, when comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference relates to the characteristics of the process associated with the generation of heat, which are completely different in the two systems. As a matter of fact, the performance of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. By adopting this approach, it becomes possible to compare the performance of the two power-supply systems by comparing their relative biophysical requirements for the phases that both nuclear energy power plants and fossil energy power plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. This report presents the evaluation of the biophysical requirements for the two power-supply systems: nuclear energy and fossil energy. In particular, the report focuses on the following requirements: (i) electricity; (ii) fossil fuels; (iii) labor; and (iv) materials.
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
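The threshold policies whose optimality the abstract explains can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the Chen-Yao model itself: an M/M/1 queue where arrivals are admitted only while WIP is below a threshold K. Sweeping K traces out throughput-WIP pairs and shows the diminishing returns mentioned in point (d): each extra unit of WIP buys less additional throughput.

```python
import random

def simulate_threshold(lmbda, mu, K, horizon=100_000, seed=0):
    """Simulate an M/M/1 queue admitting arrivals only while WIP < K
    (a threshold policy).  Returns (throughput, time-average WIP)."""
    rng = random.Random(seed)
    t, n = 0.0, 0          # simulation clock, current WIP
    served = 0
    area = 0.0             # time-integral of WIP, for the time average
    while t < horizon:
        rate = lmbda + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)
        area += n * dt
        t += dt
        if rng.random() < lmbda / rate:   # next event is an arrival
            if n < K:                     # admit only below the threshold
                n += 1
        else:                             # next event is a service completion
            n -= 1
            served += 1
    return served / t, area / t

for K in (1, 2, 5, 20):
    thr, wip = simulate_threshold(0.8, 1.0, K)
    print(f"K={K:2d}  throughput={thr:.3f}  avg WIP={wip:.3f}")
```

The printed pairs are (estimates of) the performance pairs that, in the framework above, correspond to vertices of the threshold polygon.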
Abstract:
We experimentally question the assertion of Prospect Theory that people display risk attraction in choices involving high-probability losses. Indeed, our experimental participants tend to avoid fair risks for large (up to €90), high-probability (80%) losses. Our research hinges on a novel experimental method designed to alleviate the house-money bias that pervades experiments with real (not hypothetical) losses. Our results vindicate Daniel Bernoulli's view that risk aversion is the dominant attitude. But, contrary to the Bernoulli-inspired canonical expected utility theory, we do find frequent risk attraction for small amounts of money at stake. In any event, we attempt neither to test expected utility versus nonexpected utility theories, nor to contribute to the important literature that estimates value and weighting functions. The question that we ask is more basic, namely: do people display risk aversion when facing large losses, or large gains? And, at the risk of oversimplifying, our answer is yes.
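The Bernoulli-style risk aversion the abstract vindicates can be made concrete with a toy expected-utility computation. The numbers below are hypothetical (a 50/50 fair gamble for simplicity, whereas the experiment used 80%-probability losses), and log utility stands in for any concave utility function:

```python
import math

# Hypothetical illustration: wealth w = 200, fair 50/50 gamble to lose or
# gain 90.  Under Bernoulli's log utility, a risk-averse agent rejects the
# fair gamble because expected utility is below the utility of refusing.
w, stake = 200.0, 90.0
u = math.log
eu_gamble = 0.5 * u(w - stake) + 0.5 * u(w + stake)
eu_sure = u(w)                      # utility of refusing the gamble
print(eu_gamble < eu_sure)          # True: concavity implies risk aversion
```

For small stakes the two expected utilities nearly coincide, which is consistent with the frequent risk attraction the authors observe for small amounts of money.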
Abstract:
Two main coal-bearing sequences developed during the Oligocene in the Tertiary Ebro Basin, the Calaf (early Oligocene) and Mequinenza (late Oligocene) coal basins. Coal deposition took place in shallow marsh-swamp-lake complexes which sometimes became closed and evolved under warm climatic conditions with fluctuating humidity. These shallow lacustrine systems are closely interrelated with the terminal parts of the distributive fluvial systems which spread from the tectonically active Ebro basin margins. Laterally extensive lignite-bearing sequences, including rather thin, lenticular autochthonous and/or hypautochthonous coal seams with high ash and sulphur contents, characterized coal deposition in the shallow lacustrine systems. Coal seam geometry, which makes them nearly subeconomic, resulted from the tectonic instability during basin margin evolution and the sometimes closed, arid conditions under which the lacustrine systems evolved. High ash and sulphur contents resulted from the inadequate isolation of peat forming environments from clastic influx and from the very low acidity and sometimes high sulphate contents of the lacustrine waters. Coal exploration in shallow lacustrine sequences similar to those described here must take into account that the spread of coal-forming environments and maxima of coal deposition are usually coincident with lake expansions and retraction or shifting of the terminal fluvial zones interrelated with the lacustrine areas.
Abstract:
Artificial reefs have barely been used in Neotropical reservoirs (about five studies in three reservoirs), despite their potential as a fishery management tool to create new habitats and also to understand fish ecology. We experimentally assessed how reef material (ceramic, concrete, and PVC) and time modulated fish colonization of artificial reefs deployed in Itaipu Reservoir, a large reservoir of the mainstem Paraná River, Brazil. Fish richness, abundance, and biomass were significantly greater in the reef treatments than at control sites. Among the experimental reefs, ceramic followed by the concrete treatments were the materials most effectively colonized, harboring the majority of the 13 fish species recorded. Although dependent on material type, many of the regularities of ecological successions were also observed in the artificial reefs, including decelerating increases in species richness, abundance, mean individual size, and species loss rates with time and decelerating decreases of species gain and turnover rates. Species composition also varied with material type and time, together with suites of life history traits: more equilibrium species (i.e., fishes of intermediate size that often exhibit parental care and produce fewer but larger offspring) of the Winemiller-Rose model of fish life histories prevailed in later successional stages. Overall, our study suggests that experimental reefs are a promising tool to understand ecological succession of fish assemblages, particularly in tropical ecosystems given their high species richness and low seasonality.
Abstract:
We study particle dispersion advected by a synthetic turbulent flow from a Lagrangian perspective and focus on the two-particle and cluster dispersion by the flow. It has been recently reported that Richardson's law for the two-particle dispersion can stem from different dispersion mechanisms, and can be dominated by either diffusive or ballistic events. The nature of the Richardson dispersion depends on the parameters of our flow and is discussed in terms of the values of a persistence parameter expressing the relative importance of the two above-mentioned mechanisms. We support this analysis by studying the distribution of interparticle distances, the relative velocity correlation functions, as well as the relative trajectories.
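The two mechanisms mentioned above have distinct signatures in the growth of the mean-square pair separation: purely diffusive relative motion gives ⟨R²(t)⟩ ~ t and purely ballistic motion gives ⟨R²(t)⟩ ~ t², while Richardson dispersion corresponds to ⟨R²(t)⟩ ~ t³. A minimal 1-D Monte Carlo sketch (not the synthetic flow of the study) recovers the two limiting exponents:

```python
import random, math

def msd_exponent(mode, pairs=1000, steps=1000, seed=1):
    """Estimate the growth exponent p in <R^2(t)> ~ t^p for two idealized
    relative-motion mechanisms, from the MSD at two well-separated times."""
    rng = random.Random(seed)
    t1, t2 = steps // 10, steps
    msd = {t1: 0.0, t2: 0.0}
    for _ in range(pairs):
        r = 0.0
        v = rng.gauss(0.0, 1.0)           # ballistic: frozen relative velocity
        for t in range(1, steps + 1):
            if mode == "diffusive":
                r += rng.gauss(0.0, 1.0)  # independent kick at every step
            else:
                r += v                    # constant relative velocity
            if t in msd:
                msd[t] += r * r / pairs
    return math.log(msd[t2] / msd[t1]) / math.log(t2 / t1)

print("diffusive exponent ~", round(msd_exponent("diffusive"), 2))  # near 1
print("ballistic exponent ~", round(msd_exponent("ballistic"), 2))  # near 2
```

In the study's framework, the persistence parameter controls which of these two limiting behaviors dominates the events that build up the Richardson t³ law.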
Abstract:
Thermal systems interchanging heat and mass by conduction, convection, and radiation (solar and thermal) occur in many engineering applications, such as energy storage by solar collectors, window glazing in buildings, refrigeration of plastic moulds, and air handling units. Often these thermal systems are composed of various elements, for example a building with walls, windows, rooms, etc. It would be of particular interest to have a modular thermal system formed by connecting different modules for the elements, with the flexibility to use and change the models for individual elements and to add or remove elements without changing the entire code. A numerical approach to handling the heat transfer and fluid flow in such systems saves the time and cost of full-scale experiments and also aids optimisation of the system parameters. The subsequent sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress, and the future work.
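The modular idea described above can be sketched in a few lines. The values and module set below are hypothetical (a composite building wall in steady 1-D conduction with convective films), chosen only to show how elements become interchangeable modules that the solver composes without modification:

```python
# Each module reports its thermal resistance; the "solver" just composes
# a series network, so modules can be added, removed, or swapped freely.

def wall(thickness, conductivity, area=1.0):
    """Conduction module: R = L / (k * A)."""
    return thickness / (conductivity * area)

def convection(h, area=1.0):
    """Convective film module: R = 1 / (h * A)."""
    return 1.0 / (h * area)

# Hypothetical assembly: inside air film, brick, insulation, outside film.
modules = [convection(h=8.0), wall(0.10, 0.7), wall(0.05, 0.04), convection(h=25.0)]
R_total = sum(modules)                  # series thermal network
q = (20.0 - (-5.0)) / R_total           # heat flow for 20 C inside, -5 C outside
print(f"R_total = {R_total:.3f} K/W, q = {q:.1f} W")
```

Adding a radiation module, or replacing the brick model with a finer one, only changes the `modules` list, which is the flexibility the text argues for.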
Abstract:
Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels and keep costs low. However, CRTE systems require increasing performance to satisfy the needs of existing and new features. Such performance can only be achieved by means of more aggressive hardware architectures, which are much harder to analyze from a WCET perspective. The main features considered include cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is the fact that hardware behavior is deterministic but unknown and, therefore, the worst-case behavior must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs together with WCET analysis tools able to provide tight and safe WCET estimations. In order to do so, those pieces of hardware whose behavior is not easily analyzable due to a lack of accurate information during WCET analysis will be enhanced to produce a probabilistically analyzable behavior. Thus, even if the worst-case behavior cannot be removed, its probability can be bounded and, hence, a safe and tight WCET can be provided for a particular safety level in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyze cache memories. During the second year those techniques were evaluated, and new purely-software techniques were developed.
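The probabilistically analyzable behavior mentioned above can be illustrated with a toy model. The sketch below is an assumption (a fully associative cache with random replacement and invented hit/miss latencies, with a plain empirical quantile standing in for the statistical machinery real measurement-based probabilistic timing analysis uses); it shows how randomized hardware turns execution time into a random variable whose tail can be bounded at a chosen exceedance probability:

```python
import random

def access_time(seq, ways=4, hit=1, miss=100, rng=None):
    """Total latency of an address sequence on a small fully associative
    cache that evicts a random line on a miss (time-randomized hardware)."""
    rng = rng or random.Random()
    cache = []
    total = 0
    for addr in seq:
        if addr in cache:
            total += hit
        else:
            total += miss
            if len(cache) >= ways:
                cache[rng.randrange(ways)] = addr   # random eviction
            else:
                cache.append(addr)
    return total

# Run the same path many times: random replacement makes the latency a
# random variable, so a timing bound can be stated per exceedance level.
rng = random.Random(42)
path = [0, 1, 2, 3, 4] * 200        # 5 hot addresses on a 4-way cache
samples = sorted(access_time(path, rng=rng) for _ in range(1000))
print("observed max   :", samples[-1])
print("99.9% quantile :", samples[int(0.999 * len(samples)) - 1])
```

With deterministic replacement this path could systematically thrash and force the absolute worst case; randomization bounds the probability of such pathological behavior instead of having to assume it always happens.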
Abstract:
CSCL applications are complex distributed systems that pose special requirements towards achieving success in educational settings. Flexible and efficient design of collaborative activities by educators is a key precondition for providing tailorable CSCL systems, capable of adapting to the needs of each particular learning environment. Furthermore, some parts of those CSCL systems should be reused as often as possible in order to reduce development costs. In addition, it may be necessary to employ special hardware devices or computational resources that reside in other organizations, or that even exceed the possibilities of one specific organization. Therefore, the proposal of this paper is twofold: collecting collaborative learning designs (scripting) provided by educators, based on well-known best practices (collaborative learning flow patterns) in a standard way (IMS-LD), in order to guide the tailoring of CSCL systems by selecting and integrating reusable CSCL software units; and implementing those units in the form of grid services offered by third-party providers. More specifically, this paper outlines a grid-based CSCL system having these features and illustrates its potential scope and applicability by means of a sample collaborative learning scenario.
Abstract:
We investigate the influence of the driving mechanism on the hysteretic response of systems with athermal dynamics. In the framework of local mean-field theory at finite temperature (but neglecting thermally activated processes), we compare the rate-independent hysteresis loops obtained in the random field Ising model when controlling either the external magnetic field H or the extensive magnetization M. Two distinct behaviors are observed, depending on disorder strength. At large disorder, the H-driven and M-driven protocols yield identical hysteresis loops in the thermodynamic limit. At low disorder, when the H-driven magnetization curve is discontinuous (due to the presence of a macroscopic avalanche), the M-driven loop is reentrant while the induced field exhibits strong intermittent fluctuations and is only weakly self-averaging. The relevance of these results to the experimental observations in ferromagnetic materials, shape memory alloys, and other disordered systems is discussed.
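The H-driven protocol discussed above can be sketched with the standard zero-temperature random field Ising model dynamics (a 1-D ring here for brevity; the study works in local mean-field theory, so this is an illustration of the driving protocol, not of their geometry). Each spin aligns with its local field, the external field is swept up and then down, and the two magnetization branches enclose a rate-independent hysteresis loop:

```python
import random

def hysteresis_loop(N=2000, J=1.0, sigma=1.5, dH=0.05, seed=0):
    """Zero-temperature RFIM on a ring: spin i aligns with its local field
    J*(s[i-1]+s[i+1]) + h[i] + H.  Sweep H up, then down, recording the
    magnetization branch at each field value."""
    rng = random.Random(seed)
    h = [rng.gauss(0.0, sigma) for _ in range(N)]  # quenched random fields
    s = [-1] * N

    def relax(H):
        # flip unstable spins until every spin aligns with its local field
        unstable = True
        while unstable:
            unstable = False
            for i in range(N):
                f = J * (s[i - 1] + s[(i + 1) % N]) + h[i] + H
                if s[i] * f < 0:
                    s[i] = -s[i]
                    unstable = True
        return sum(s) / N

    Hs = [i * dH for i in range(-80, 81)]
    up = [relax(H) for H in Hs]                    # ascending branch
    down = [relax(H) for H in reversed(Hs)][::-1]  # descending branch
    return Hs, up, down

Hs, up, down = hysteresis_loop()
area = sum(d - u for u, d in zip(up, down)) * 0.05
print(f"loop area ~ {area:.2f}")   # positive: the branches enclose a loop
```

Driving M instead of H, as in the abstract's second protocol, would mean finding the field that pins the magnetization at each target value, which is where the reentrant loops and intermittent field fluctuations appear at low disorder.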