Resumo:
We study analytically a thermal Brownian motor model and calculate the Onsager coefficients exactly. We show how the reciprocity relation holds and that the determinant of the Onsager matrix vanishes. Such a condition implies that the device is built with tight coupling. This explains why Carnot's efficiency can be achieved in the limit of infinitely slow velocities. We also prove that the efficiency at maximum power has the maximum possible value, which corresponds to the Curzon-Ahlborn bound. Finally, we discuss the model acting as a Brownian refrigerator.
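For readers unfamiliar with the formalism, the statements can be made concrete with the standard linear-response setup (a generic sketch, not the paper's specific model): with thermodynamic fluxes $J_1, J_2$ and conjugate forces $X_1, X_2$,

$J_1 = L_{11} X_1 + L_{12} X_2, \qquad J_2 = L_{21} X_1 + L_{22} X_2,$

reciprocity is $L_{12} = L_{21}$, and tight coupling is $\det L = L_{11}L_{22} - L_{12}L_{21} = 0$, i.e. the two fluxes become proportional. Under tight coupling the efficiency at maximum power reaches $\eta^{*} = \eta_C/2$, which is precisely the leading term of the Curzon-Ahlborn value $\eta_{CA} = 1 - \sqrt{T_c/T_h} = \eta_C/2 + \eta_C^{2}/8 + \dots$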
Resumo:
The aim of this article is to prove that it is genuinely possible to travel intellectually to the Platonic image of the cave through different films. In this sense, one can speak of explicit references, as in The Conformist by B. Bertolucci, in Shadowlands by R. Attenborough (if one bears in mind The Chronicles of Narnia by C. S. Lewis), or in The Picture of Dorian Gray (if one bears in mind O. Wilde's well-known novel). On other occasions, although a Platonic influence cannot be proved, as in The Truman Show, A Room with a View or Brideshead Revisited, one can still think of these films as a way of guiding contemporary audiences to that Platonic image, since Plato himself affirms that it is an image which can be easily applied, first and foremost, to his idealistic philosophy.
Resumo:
Determining the age of a document is a very frequent request, yet it is also one of the most challenging and controversial areas of questioned document examination. Several approaches have been defined to address this problem. The first is based on the date at which raw constituents (such as paper or ink) were introduced on the market. The second is based on the aging of documents, which unfortunately is influenced not only by the passage of time but also by storage conditions and document composition. The third considers the relative age of documents and aims at reconstructing their chronology. The three approaches are equally complex to develop and encounter a number of non-negligible problems. This article aims to expose the potential applications and limitations of current ink dating methods. Method development, validation and the interpretation of evidence prove to be essential criteria for the dating of documents.
Resumo:
Although the sport of triathlon provides an opportunity to research the effect of multi-disciplinary exercise on health across the lifespan, much remains to be done. The literature has failed to consistently or adequately report subject age group, sex, ability level, and/or event-distance specialization. The demands of training and racing remain largely unquantified. Multiple definitions and reporting methods for injury and illness have been implemented. In general, risk factors for maladaptation have not been well described. The data collected thus far indicate that the sport of triathlon is relatively safe for the well-prepared, well-supplied athlete. Most injuries 'causing cessation or reduction of training or seeking of medical aid' are not serious. However, as the extent to which they recur may be high and is undocumented, injury outcome is unclear. The sudden death rate for competition is 1.5 (0.9-2.5) occurrences, mostly swim-related, per 100,000 participations. The sudden death rate for training is unknown, although stroke risk may be increased in the long term in genetically susceptible athletes. During heavy training and for up to 5 days post-competition, host protection against pathogens may also be compromised. The incidence of illness seems low, but its outcome is unclear. More prospective investigation of the immunological, oxidative-stress-related and cardiovascular effects of triathlon training and competition is warranted. Training diaries may prove to be a promising method of monitoring negative adaptation and its potential risk factors. More longitudinal, medical-tent-based studies of the aetiology and treatment demands of race-related injury and illness are needed.
Resumo:
Classical cryptography is based on mathematical functions, and the robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof establishing that the inverse of a given one-way function is impossible to find; such schemes are therefore at the mercy of advances in computing power and of the discovery of algorithms that invert certain mathematical functions in a "reasonable" time. For critical exchanges (banking systems, governments, etc.) it is therefore essential to use a cryptosystem whose security is scientifically proven. Quantum cryptography answers this need: its security rests on the laws of quantum physics, which guarantee unconditionally secure operation. But how can quantum cryptography be used and integrated into existing solutions? This thesis justifies the need for quantum cryptography and shows that the cost incurred by deploying it is justified. It proposes a simple and practicable mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec and 802.11i, with application scenarios illustrating the feasibility and estimated cost of these solutions. Finally, directives and checkpoints are given for evaluating and certifying quantum-cryptography-based solutions according to the Common Criteria.
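To make the integration idea concrete, here is a minimal sketch (entirely illustrative; it does not reproduce the thesis's mechanism, and QKDKeySource is a hypothetical interface): a quantum-key source hands fresh key material to a protocol's key-derivation step in place of a classically negotiated secret.

    import os, hmac, hashlib

    class QKDKeySource:
        """Hypothetical stand-in for a quantum key distribution link.
        A real system would return key bits produced by BB84 or a
        similar protocol; here we only simulate the interface."""
        def get_key(self, n_bytes: int) -> bytes:
            return os.urandom(n_bytes)  # placeholder for quantum-generated key

    def derive_session_keys(master: bytes, context: bytes):
        """Derive directional keys from the QKD master secret, in the
        spirit of a classical KDF step (assumed, not from the thesis)."""
        k_enc = hmac.new(master, b"enc" + context, hashlib.sha256).digest()
        k_auth = hmac.new(master, b"auth" + context, hashlib.sha256).digest()
        return k_enc, k_auth

    qkd = QKDKeySource()
    master = qkd.get_key(32)              # replaces the DH-negotiated secret
    k_enc, k_auth = derive_session_keys(master, b"SA-identifier")

A real deployment would splice such keys into the protocol's own key hierarchy (e.g. an IPSec security association or an 802.11i pairwise key), which is the kind of integration the thesis formalizes.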
Resumo:
The Monte San Giorgio (Southern Alps, Ticino, Switzerland) is the most important locality in the world for vertebrates dating back to the Middle Triassic. For this reason it was registered in 2003 as a UNESCO World Heritage Site. One of the objectives of this doctoral thesis was to fill some of the cognitive gaps regarding the Ladinian succession, including in particular the San Giorgio Dolomite and the Meride Limestone. In order to achieve this, the entire succession, more than 600 metres thick, was measured and sampled. Biostratigraphic research based on new finds of fossil invertebrates and microfossils and on the palynological analysis of the entire section was integrated with single-zircon U-Pb dating of volcanic ash layers intercalated in the carbonate succession. This enabled a redefinition of the bio-chronostratigraphic and geochronologic framework of the succession, which encompasses a significantly shorter time interval than previously assumed. The Ladinian section extends from the E. curionii Ammonoid Zone (Early Fassanian) to the P. archelaus Ammonoid Zone (Early Longobardian). The age of the classic fossiliferous levels of the Meride Limestone, rich in organic matter and containing vertebrate fossils known all over the world, was defined in both biostratigraphic and geochronologic terms. The presumed stratigraphic significance of the pachypleurosaurid reptiles found in such levels is called into question by new finds. These fossiliferous horizons were found to correspond to the main volcanoclastic intervals of the Buchenstein Formation (Middle and Upper Pietra Verde); thus, a correlation with the Bagolino section (Italy), containing the GSSP for the base of the Ladinian, was proposed. Bulk sedimentation rates in the studied succession average 200 m/Myr and therefore prove to be 20 times higher than those of the South-Alpine pelagic basins. These values reflect, on the one hand, high carbonate productivity from the surrounding platforms and, on the other, marked subsidence of the basin. Only in the intervals consisting of laminated limestones did the sedimentation rates drop, to average values of around 30 m/Myr. The distribution of organic and inorganic facies appears to be the consequence of relative variations in sea level. The laminated, organic-matter-rich intervals of the Meride Limestone are linked to a relative sea-level drop which favoured dysoxic to anoxic bottom-water conditions, coupled with an increase in runoff, perhaps due to recurrent explosive volcanic activity. The transient development under dysoxic conditions of monospecific benthic meio-/macrofaunas was documented. The organic matter appears to originate predominantly from benthic bacterial activity, as witnessed by alveolar structures typical of the exopolymeric substances secreted by bacteria within microbial mats. A microbial contribution to the (peloidal) carbonate precipitation was documented. The protective effect exerted by these microbial mats is also identified as the main taphonomic factor contributing to the excellent preservation of the vertebrate fossils. A radiolarian assemblage discovered in the lower part of the section (earliest Ladinian, E. curionii Zone) suggests the transient existence of open-marine, but not deep-water, connections with the Tethyan pelagic basins. It shows marked similarities to faunas typical of the late Anisian, suggesting that radiolarian biostratigraphy has low resolving power for recognizing the Anisian/Ladinian boundary.
The present thesis describes a new species of conifer (Elatocladus cassinae), a new species of insect (Dasyleptus triassicus) and seven new species of radiolarians (Eptingium danieli, Eptingium neriae, Parentactinosphaera eoladinica, Sepsagon ticinensis, Sepsagon? valporinae, Novamuria wirzi and Pessagnollum? hexaspinosum). In addition, following revision of the type material of already existent taxa, four new genera of radiolarians are introduced: Bernoulliella, Eohexastylus, Ticinosphaera and Lahmosphaera.
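As a rough consistency check (our arithmetic, not a figure quoted in the thesis), the average bulk rate implies a short duration for the measured succession:

$\text{duration} \approx \frac{\text{thickness}}{\text{rate}} = \frac{600\ \text{m}}{200\ \text{m/Myr}} = 3\ \text{Myr},$

in line with the conclusion that the Ladinian section spans a significantly shorter time interval than previously assumed; the laminated intervals, accumulating at roughly $30\ \text{m/Myr}$, record comparatively condensed deposition.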
Resumo:
This study was designed to evaluate the potential of gas-filled microbubbles (MB) to be internalized by antigen-presenting cells (APC). Fluorescently labeled MB were prepared, permitting their binding to, and internalization in, APC to be tracked. Both human and mouse cells, including monocytes and dendritic cells (DC), proved capable of phagocytosing MB in vitro. Observation by confocal laser scanning microscopy showed that the interaction between MB and target cells resulted in rapid internalization into cellular compartments and, to a lesser extent, into the cytoplasm. Capture of MB by APC resulted in phagolysosomal targeting, as verified by double staining with an anti-lysosome-associated membrane protein-1 monoclonal antibody and by the decrease of internalization in the presence of phagocytosis inhibitors. Fluorescent MB injected subcutaneously (s.c.) in mice were found to be associated with CD11c(+) DC in lymph nodes draining the injection sites 24 h after administration. Altogether, our study demonstrates that MB can successfully target APC both in vitro and in vivo, and may thus serve as a potent antigen delivery system without any requirement for ultrasound-based sonoporation. This adds to the potential applications of MB, which are already extensively used for diagnostic imaging in humans.
Resumo:
We show that the solution published by Senovilla [Phys. Rev. Lett. 64, 2219 (1990)] is geodesically complete and singularity-free. We also prove that the solution satisfies stronger energy and causality conditions, such as global hyperbolicity, the strong energy condition, causal symmetry, and causal stability. We discuss in detail which assumptions of the singularity theorems are not satisfied, and we show explicitly that the solution is in accordance with those theorems. A brief discussion of the results is given.
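For reference, two of the conditions named here have compact standard statements (textbook definitions, not results of this paper): the strong energy condition requires

$R_{\mu\nu}\, u^{\mu} u^{\nu} \ge 0 \quad \text{for every timelike } u^{\mu},$

and global hyperbolicity requires the existence of a Cauchy surface, a spacelike hypersurface crossed exactly once by every inextendible causal curve. Since the singularity theorems combine such energy and causality assumptions with an additional boundary or trapping condition, a geodesically complete spacetime satisfying the former must fail the latter; identifying which hypothesis fails is exactly the detailed discussion the abstract refers to.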
Resumo:
We present a general class of solutions to Einstein's field equations with two spacelike commuting Killing vectors by assuming the separation of variables of the metric components. The solutions can be interpreted as inhomogeneous cosmological models. We show that the singularity structure of the solutions varies depending on the different particular choices of the parameters and metric functions. There exist solutions with a universal big-bang singularity, solutions with timelike singularities in the Weyl tensor only, solutions with singularities in both the Ricci and the Weyl tensors, and also singularity-free solutions. We prove that the singularity-free solutions have a well-defined cylindrical symmetry and that they are generalizations of other singularity-free solutions obtained recently.
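Schematically (an illustrative ansatz consistent with the abstract's description, not the paper's exact metric), with commuting spacelike Killing vectors $\partial_y$ and $\partial_z$ the separable line element takes the form

$ds^{2} = T(t)\,X(x)\left(-dt^{2} + dx^{2}\right) + A(t)\,B(x)\,dy^{2} + C(t)\,D(x)\,dz^{2},$

where every metric component is a product of a function of $t$ and a function of $x$; the singularity structure (universal big-bang, timelike Weyl, Ricci plus Weyl, or none) then depends on the particular choice of these functions and of the parameters they contain.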
Resumo:
Density-driven instabilities in porous media are of interest for a wide range of applications, for instance geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to solve these problems efficiently with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational cost. The quality of the multiscale solution can be iteratively improved to avoid large errors in the case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, limit the additional computational cost of the iterations. In the case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations including the solution of transport problems as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers, so that iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This makes it possible to account for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We prove that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers. The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. The results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. - Aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In such media a wide variety of flows can be observed, the most common being the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale characterizing these flows is set by the interplay of geological heterogeneity and physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients, which may be induced by local temperature changes or by the dissolution of a chemical compound. Density-driven instabilities are of particular interest because they can compromise water quality; a striking example is the salinization of fresh groundwater when denser salt water penetrates into deep regions. For density-driven flows, the characteristic scales range from the pore scale, where instability growth takes place, up to the aquifer scale, over which long-term phenomena occur. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks linked to density-driven instabilities. A correct description of these phenomena rests on describing all scales of the flow, a range that can span eight to ten orders of magnitude for large aquifers. The result is numerical problems of very large size that are expensive to solve, so sophisticated numerical schemes are needed to perform accurate simulations of large-scale hydrodynamic instabilities. In this work we present numerical methods, based on multiscale finite volumes, that simulate density-driven instabilities efficiently and accurately. The idea is to project the original problem onto a larger scale, where it is cheaper to solve, and then to map the coarse solution back to the original scale. This technique is particularly suited to problems in which a wide range of scales is present and evolves in space and time, as it limits the detailed description of the problem to the regions containing a moving concentration front. The achievements are illustrated by the simulation of phenomena such as salt-water intrusion and carbon dioxide sequestration.
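To make the two-grid idea concrete, here is a minimal sketch (a generic two-level correction scheme in the spirit of iterative multiscale methods, not the actual MsFV operators, which use localized basis functions): a fine-scale system is approximated on a coarse grid via restriction and prolongation, and the coarse correction is combined with fine-scale smoothing.

    import numpy as np

    # Fine-scale 1-D diffusion system A x = b (stand-in for the fine-grid problem)
    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)

    # Aggregation-based restriction: each coarse cell averages 'ratio' fine cells
    ratio = 8
    nc = n // ratio
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, i * ratio:(i + 1) * ratio] = 1.0 / ratio
    P = R.T * ratio            # piecewise-constant prolongation
    Ac = R @ A @ P             # coarse-grid operator (Galerkin-type product)

    x = np.zeros(n)
    for it in range(50):
        r = b - A @ x                        # fine-scale residual
        x += P @ np.linalg.solve(Ac, R @ r)  # cheap coarse correction
        x += (b - A @ x) / np.diag(A)        # Jacobi smoothing of fine-scale error
    print(np.linalg.norm(b - A @ x))

The adaptive variants described in the thesis would, in this picture, restrict the expensive fine-scale update to the cells where concentration gradients (i.e., forming fingers) are large.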
Resumo:
We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden-variable models with underlying metric spaces is able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.
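Degree-thresholding renormalization itself is simple to state; the sketch below (our illustration, run on a synthetic scale-free graph rather than the real networks studied in the paper) keeps only nodes whose degree exceeds an increasing threshold and tracks how the mean clustering of the surviving subgraph behaves.

    import networkx as nx

    # Synthetic scale-free test graph (the paper analyses real networks instead;
    # note that this model lacks the strong clustering of hidden-metric-space models)
    G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)

    def degree_threshold_renormalize(G, k_T):
        """Subgraph induced by nodes whose degree in the original graph exceeds k_T."""
        keep = [v for v, d in G.degree() if d > k_T]
        return G.subgraph(keep).copy()

    for k_T in (0, 3, 6, 12):
        H = degree_threshold_renormalize(G, k_T)
        print(k_T, H.number_of_nodes(), round(nx.average_clustering(H), 3))

Self-similarity in the paper's sense would manifest as degree distributions and clustering spectra that remain essentially invariant across such thresholded subgraphs.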
Resumo:
Fifty years after the clinical introduction of total parenteral nutrition (TPN), the Arvid Wretlind lecture is an opportunity to critically analyse the evolution and changes that have marked its development and clinical use. The standard crystalline amino acid solutions, while devoid of side effects, remain incomplete in their composition (e.g. glutamine). Lipid emulsions have evolved tremendously and are now included in bi- and tri-compartmental feeding bags, enabling a true "total" PN provided daily micronutrients are prescribed. The question of exact individual energy, macro- and micronutrient requirements is still unsolved. Many complications attributed to TPN are in fact the consequence of under- or over-feeding: the historical hyperalimentation concept is the main cause, along with the use of fixed weight-based predictive equations (incorrect in 70% of critically ill patients). In the late 1980s many complications (hyperglycemia, sepsis, fatty liver, exacerbation of inflammation, mortality) were attributed to TPN, leading to its near abandonment in favour of enteral nutrition (EN). Enteral feeding, although desirable for many reasons, is difficult, and insufficient feed delivery has caused a worldwide recurrence of malnutrition. TPN indications have evolved towards its use either alone or in combination with EN: several controversial trials published in 2011-2013 have investigated the timing of TPN, an issue which is not yet resolved. The initiation time varies by country between admission (Australia and Israel), day 4 (Switzerland) and day 7 (Belgium, USA). The most important issue may prove to be an individualized and time-dependent prescription of feeding route, energy and substrates.
Resumo:
In this paper we address the problem of consistently constructing Langevin equations to describe fluctuations in nonlinear systems. Detailed balance severely restricts the choice of the random force, but we prove that this property, together with the macroscopic knowledge of the system, is not enough to determine all the properties of the random force. If the cause of the fluctuations is weakly coupled to the fluctuating variable, then the statistical properties of the random force can be completely specified. For variables odd under time reversal, microscopic reversibility and weak coupling impose symmetry relations on the variable-dependent Onsager coefficients. We then analyze the fluctuations in two cases: Brownian motion in position space and an asymmetric diode, for which the analysis based on the master-equation approach is known. We find that, to the order of validity of the Langevin equation proposed here, the phenomenological theory is in agreement with the results predicted by more microscopic models.
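As a point of reference (the standard phenomenological form, written schematically and not the paper's specific construction), a nonlinear Langevin equation with variable-dependent Onsager coefficients reads

$\dot{a}_i = -\sum_j L_{ij}(a)\,\frac{\partial \Phi}{\partial a_j} + \xi_i(t), \qquad \langle \xi_i(t)\,\xi_j(t')\rangle = 2\,k_B\,L^{s}_{ij}(a)\,\delta(t-t'),$

where $\Phi$ is the thermodynamic potential and $L^{s}_{ij}$ the symmetric part of the coefficients. Detailed balance ties the noise strength to $L^{s}_{ij}$ but, as the abstract notes, leaves other statistical properties of $\xi_i$ undetermined unless the weak-coupling assumption is added.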
Resumo:
We present exact equations and expressions for the first-passage-time statistics of dynamical systems that are a combination of a diffusion process and a random external force modeled as dichotomous Markov noise. We prove that the mean first passage time for this system does not show any resonantlike behavior.
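Schematically (our notation, not necessarily that of the paper), the systems considered are of the form

$\dot{x} = f(x) + F(t) + \sqrt{2D}\,\xi(t),$

where $\xi(t)$ is Gaussian white noise and $F(t)$ is dichotomous Markov noise switching between two levels at a given rate. Because $F(t)$ has only two states, the mean first-passage time satisfies a pair of coupled backward equations, one per noise state, which is what makes exact expressions attainable.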
Resumo:
We prove that Brownian market models with random diffusion coefficients provide an exact measure of the leverage effect [J.-P. Bouchaud et al., Phys. Rev. Lett. 87, 228701 (2001)]. This empirical fact asserts that past returns are anticorrelated with the future diffusion coefficient. Several models with random diffusion have been suggested, but without a quantitative study of the leverage effect. Our analysis allows us to fully estimate all parameters involved and permits a deeper study of correlated random-diffusion models that may have practical implications for many aspects of financial markets.
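For concreteness (paraphrasing the standard definition used in the cited empirical work), the leverage effect is quantified by the correlation

$L(\tau) = \frac{\big\langle \delta x(t)\,[\delta x(t+\tau)]^{2} \big\rangle}{\big\langle [\delta x(t)]^{2}\big\rangle^{2}},$

which is found empirically to be negative and exponentially decaying for $\tau > 0$ (past returns anticorrelated with future volatility) and approximately zero for $\tau < 0$.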