42 results for large deviation theory
in CentAUR: Central Archive University of Reading - UK
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results immediately yield the corresponding results for pathwise solutions to stochastic differential equations driven by such a Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such a Gaussian process.
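For orientation, an LDP for a family of capacities Cap_ε with good rate function I takes the standard two-bound form below; this is the generic formulation, not the paper's precise theorem:

```latex
% Generic LDP: upper bound on closed sets, lower bound on open sets.
\limsup_{\varepsilon \to 0} \varepsilon \log \mathrm{Cap}_\varepsilon(F)
  \le -\inf_{x \in F} I(x) \quad \text{for every closed set } F,
\qquad
\liminf_{\varepsilon \to 0} \varepsilon \log \mathrm{Cap}_\varepsilon(G)
  \ge -\inf_{x \in G} I(x) \quad \text{for every open set } G.
```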
Abstract:
Recent radar and rain-gauge observations from the island of Dominica, which lies in the eastern Caribbean Sea at 15°N, show a strong orographic enhancement of trade-wind precipitation. The mechanisms behind this enhancement are investigated using idealized large-eddy simulations with a realistic representation of the shallow trade-wind cumuli over the open ocean upstream of the island. The dominant mechanism is found to be the rapid growth of convection by the bulk lifting of the inhomogeneous impinging flow. When rapidly lifted by the terrain, existing clouds and other moist parcels gain buoyancy relative to rising dry air because of their different adiabatic lapse rates. The resulting energetic, closely packed convection forms precipitation readily and brings frequent heavy showers to the high terrain. Despite this strong precipitation enhancement, only a small fraction (1%) of the impinging moisture flux is lost over the island. However, an extensive rain shadow forms in the lee of Dominica due to convective stabilization, forced descent, and wave breaking. A linear model is developed to explain the convective enhancement over the steep terrain.
Abstract:
Mega-scale glacial lineations (MSGLs) are longitudinally aligned corrugations (ridge-groove structures 6-100 km long) in sediment produced subglacially. They are indicators of fast flow and a common signature of ice-stream beds. We develop a qualitative theory that accounts for their formation, and use numerical modelling and observations of ice-stream beds to provide supporting evidence. Ice in contact with a rough (on scales of 10-10³ m) bedrock surface will mimic the form of the bed. Because of flow acceleration and convergence in ice-stream onset zones, the ice-base roughness elements experience transverse strain, transforming them from irregular bumps into longitudinally aligned keels of ice protruding downwards. Where such keels slide across a soft sedimentary bed, they plough through the sediments, carving elongate grooves and deforming material up into intervening ridges. This explains MSGLs and has important implications for ice-stream mechanics. Groove ploughing provides the means to acquire new lubricating sediment and to transport large volumes of it downstream. Keels may provide basal drag in the force budget of ice streams, thereby playing a role in flow regulation and stability. We speculate that groove ploughing permits significant ice-stream widening, thus facilitating high-magnitude ice discharge.
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
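The roles of the observability matrix and of Tikhonov regularization described above can be sketched in a toy setting. The linear advection model, window length, observation operator, and noise level below are all assumptions for illustration, not the paper's configuration:

```python
import numpy as np

# Toy setup: a linear advection model on a periodic 1-D domain, observed only
# over the first 5 grid points at each of 4 assimilation times.
n, k = 20, 4
M = np.roll(np.eye(n), 1, axis=0)      # one-step model: shifts the state by one point
H = np.eye(n)[:5]                      # observe grid points 0..4 only

# Observability matrix: stack H, HM, HM^2, ... over the assimilation window.
blocks, Mj = [], np.eye(n)
for _ in range(k):
    blocks.append(H @ Mj)
    Mj = M @ Mj
G = np.vstack(blocks)

# The singular values show how many state directions the observations
# constrain: advection sweeps information from unobserved points into the
# observed region, so more of the initial state becomes recoverable over time.
U, s, Vt = np.linalg.svd(G)
observable = int(np.sum(s > 1e-10))

# Tikhonov-regularized reconstruction of the initial state from noisy data.
rng = np.random.default_rng(1)
x0 = rng.standard_normal(n)
y = G @ x0 + 0.01 * rng.standard_normal(G.shape[0])
alpha = 1e-4                           # regularization parameter (assumed)
x_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ y)
```

With this shift model, 8 of the 20 state components are observable over the window; the regularized estimate recovers the initial state accurately within that observable subspace and leaves the rest unconstrained.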
Abstract:
Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and also by four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and allows us to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable cloud, with an ice-water path approximately two thirds of that observed, with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis.
The LEM captures the increase with height in the standard deviation of Doppler velocities (and so of vertical winds), but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s⁻¹, the standard deviation in Doppler velocities provides an almost unbiased estimate of the standard deviation in vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled ice-water content (IWC), giving an ice-water path (IWP) 1.6 times that observed.
Abstract:
Calculations are reported of the magnetic anisotropy energy of two-dimensional (2D) Co nanostructures on a Pt(111) substrate. The perpendicular magnetic anisotropy (PMA) of the 2D Co clusters strongly depends on their size and shape, and rapidly decreases with increasing cluster size. The calculated PMA is in reasonable agreement with experimental results. The sensitivity of the results to the Co-Pt spacing at the interface is also investigated and, in particular, for a complete Co monolayer we note that the value of the spacing at the interface determines whether PMA or in-plane anisotropy occurs. We find that the PMA can be greatly enhanced by the addition of Pt adatoms to the top surface of the 2D Co clusters. A single Pt atom can add in excess of 5 meV to the anisotropy energy of a cluster. In the absence of the Pt adatoms the PMA of the Co clusters falls below 1 meV/Co atom for clusters of about 10 atoms whereas, with Pt atoms added to the surface of the clusters, a PMA of 1 meV/Co atom can be maintained for clusters as large as about 40 atoms. The effect of placing Os atoms on top of the Co clusters is also considered. The addition of 5d atoms and clusters on top of ferromagnetic nanoparticles may provide an approach to tune the magnetic anisotropy and moment separately.
Abstract:
We compare laboratory observations of equilibrated baroclinic waves in the rotating two-layer annulus, with numerical simulations from a quasi-geostrophic model. The laboratory experiments lie well outside the quasi-geostrophic regime: the Rossby number reaches unity; the depth-to-width aspect ratio is large; and the fluid contains ageostrophic inertia–gravity waves. Despite being formally inapplicable, the quasi-geostrophic model captures the laboratory flows reasonably well. The model displays several systematic biases, which are consequences of its treatment of boundary layers and neglect of interfacial surface tension and which may be explained without invoking the dynamical effects of the moderate Rossby number, large aspect ratio or inertia–gravity waves. We conclude that quasi-geostrophic theory appears to continue to apply well outside its formal bounds.
Abstract:
Flow and turbulence above urban terrain is more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using observations of flow from a sonic anemometer located at a height of 190.3 m in London, U.K., using about 6500 h of data. Turbulence statistics (dimensionless wind speed and temperature, standard deviations, and correlation coefficients for momentum and heat transfer) were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities might not behave dissimilarly to that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
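The quantities above (local Obukhov length Λ, stability parameter z/Λ, and σ_w/u_*) can be sketched from eddy-covariance statistics. The synthetic samples and their covariances below are illustrative assumptions standing in for real sonic-anemometer data, not the paper's measurements:

```python
import numpy as np

kappa, g = 0.4, 9.81
z = 190.3                              # sensor height (m), as in the abstract

# Synthetic correlated fluctuations of (u', w', T') via a Cholesky factor:
# momentum flux u'w' < 0, heat flux w'T' > 0 (daytime convective conditions).
rng = np.random.default_rng(2)
N = 20 * 3600                          # one hour of 20 Hz samples (assumed)
C = np.array([[1.30, -0.12, 0.00],
              [-0.12, 0.25, 0.02],
              [0.00,  0.02, 0.09]])    # assumed covariance matrix
L = np.linalg.cholesky(C)
samples = rng.standard_normal((N, 3)) @ L.T
u = 5.0 + samples[:, 0]                # streamwise wind (m/s)
w = samples[:, 1]                      # vertical wind (m/s)
T = 288.0 + samples[:, 2]              # temperature (K)

uw = np.mean((u - u.mean()) * (w - w.mean()))   # kinematic momentum flux
wT = np.mean((w - w.mean()) * (T - T.mean()))   # kinematic heat flux
u_star = (uw ** 2) ** 0.25                      # local friction velocity
Lambda = -u_star ** 3 * T.mean() / (kappa * g * wT)  # local Obukhov length
stability = z / Lambda                          # < 0 in unstable conditions
sigma_w_over_ustar = w.std() / u_star           # compare to canonical ~1.35
```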
Abstract:
Slantwise convective available potential energy (SCAPE) is a measure of the degree to which the atmosphere is unstable to conditional symmetric instability (CSI). It has, until now, been defined by parcel theory in which the atmosphere is assumed to be nonevolving and balanced, that is, two-dimensional. When applying this two-dimensional theory to three-dimensional evolving flows, these assumptions can be interpreted as an implicit assumption that a timescale separation exists between a relatively rapid timescale for slantwise ascent and a slower timescale for the development of the system. An approximate extension of parcel theory to three dimensions is derived and it is shown that calculations of SCAPE based on the assumption of relatively rapid slantwise ascent can be qualitatively in error. For a case-study example of a developing extratropical cyclone, SCAPE calculated along trajectories determined without assuming the existence of the timescale separation shows large values for parcels ascending from the warm sector and along the warm front. These parcels ascend into the cloud head, within which there is some evidence consistent with the release of CSI from observational and model cross sections. This region of high SCAPE was not found for calculations along the relatively rapidly ascending trajectories determined by assuming the existence of the timescale separation.
Abstract:
A large number of processes are involved in the pathogenesis of atherosclerosis but it is unclear which of them play a rate-limiting role. One way of resolving this problem is to investigate the highly non-uniform distribution of disease within the arterial system; critical steps in lesion development should be revealed by identifying arterial properties that differ between susceptible and protected sites. Although the localisation of atherosclerotic lesions has been investigated intensively over much of the 20th century, this review argues that the factor determining the distribution of human disease has only recently been identified. Recognition that the distribution changes with age has, for the first time, allowed it to be explained by variation in transport properties of the arterial wall; hitherto, this view could only be applied to experimental atherosclerosis in animals. The newly discovered transport variations which appear to play a critical role in the development of adult disease have underlying mechanisms that differ from those elucidated for the transport variations relevant to experimental atherosclerosis: they depend on endogenous NO synthesis and on blood flow. Manipulation of transport properties might have therapeutic potential. Copyright (C) 2004 S. Karger AG, Basel.
Abstract:
Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations occurs fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
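The moment relation behind the infinite island model can be sketched as follows. In that model, subpopulation allele frequencies are approximately Beta(Theta·p, Theta·(1-p)) distributed around the migrant-pool frequency p, so the among-deme variance is p(1-p)/(1+Theta) and F_ST = 1/(1+Theta). This is a simple method-of-moments illustration with assumed parameter values, not the paper's MCMC coalescent inference:

```python
import numpy as np

rng = np.random.default_rng(3)
p_bar = 0.3            # allele frequency in the migrant pool (assumed)
theta_true = 5.0       # migration/drift parameter Theta (assumed)
n_demes = 2000         # number of sampled subpopulations (assumed)

# Stationary island-model distribution of subpopulation allele frequencies.
freqs = rng.beta(theta_true * p_bar, theta_true * (1 - p_bar), size=n_demes)

# Recover Theta from the among-deme frequency variance:
#   Var(freq) = p*(1-p)/(1+Theta)  =>  Theta = p*(1-p)/Var - 1
theta_hat = p_bar * (1 - p_bar) / freqs.var() - 1
fst_hat = 1.0 / (1.0 + theta_hat)      # equivalently F_ST = 1/(1+Theta)
```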
Abstract:
We have favoured the variational (secular equation) method for the determination of the (ro-)vibrational energy levels of polyatomic molecules. We use predominantly the Watson Hamiltonian in normal coordinates and an associated given potential in the variational code 'Multimode'. The dominant cost is the construction and diagonalization of matrices of ever-increasing size. Here we address this problem, using perturbation theory to select dominant expansion terms within the Davidson-Liu iterative diagonalization method. Our chosen example is the twelve-mode molecule methanol, for which we have an ab initio representation of the potential which includes the internal rotational motion of the OH group relative to CH3. Our new algorithm allows us to obtain converged energy levels for matrices of dimensions in excess of 100 000.
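The core Davidson idea, iteratively expanding a small subspace instead of diagonalizing the full matrix, can be sketched as below. This is plain Davidson with a diagonal preconditioner on a toy diagonally dominant matrix; the paper's perturbation-based selection of expansion terms is not reproduced:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=100):
    """Plain Davidson iteration for the lowest eigenpair of a symmetric matrix."""
    n = A.shape[0]
    diag = np.diag(A)
    # start from the unit vector on the smallest diagonal element
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0
    theta, u = diag.min(), V[:, 0]
    for _ in range(max_iter):
        # Rayleigh-Ritz step: diagonalize the small projected matrix
        T = V.T @ A @ V
        vals, vecs = np.linalg.eigh(T)
        theta = vals[0]
        u = V @ vecs[:, 0]
        r = A @ u - theta * u                  # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        # Davidson's diagonal preconditioner for the correction vector
        denom = theta - diag
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        # orthogonalize against the current subspace and extend it
        t -= V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.hstack([V, (t / norm)[:, None]])
    return theta, u

# toy diagonally dominant matrix standing in for a large vibrational Hamiltonian
rng = np.random.default_rng(0)
n = 200
A = np.diag(np.arange(1.0, n + 1.0)) + 1e-3 * rng.standard_normal((n, n))
A = (A + A.T) / 2
theta, u = davidson_lowest(A)
```

The method converges quickly when the matrix is diagonally dominant, which is the regime where the diagonal preconditioner is a good approximation to the exact correction equation.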
Abstract:
Firms form consortia in order to win contracts. Once a project has been awarded to a consortium, each member then concentrates on his or her own contract with the client. Therefore, consortia are marketing devices, which present the impression of teamworking, but the production process is just as fragmented as under conventional procurement methods. In this way, the consortium forms a barrier between the client and the actual construction production process. Firms form consortia, not as a simple development of normal ways of working, but because the circumstances of specific projects make it a necessary vehicle. These circumstances include projects that are too large or too complex to undertake alone, or projects that require ongoing services which cannot be provided by the individual firms in-house. It is not a preferred way of working, because participants carry extra risk in the form of liability for the actions of their partners in the consortium. The behaviour of members of consortia is determined by their relative power, based on several factors, including financial commitment and ease of replacement. The level of supply chain visibility to the public sector client and to the industry is reduced by the existence of a consortium, because the consortium forms an additional obstacle between the client and the firms undertaking the actual construction work. Supply chain visibility matters to the client, who otherwise loses control over the process of construction or service provision while remaining accountable for cost overruns. To overcome this separation there is a convincing argument in favour of adopting the approach put forward in the Project Partnering Contract 2000 (PPC2000) Agreement. Members of consortia do not necessarily go on to work in the same consortia again, because members need to respond flexibly to opportunities as and when they arise. Decision-making processes within consortia tend to be on an ad hoc basis.
Construction risk is taken by the contractor and the construction supply chain, but the reputational risk is carried by all the firms associated with a consortium. There is wide variation in the manner in which consortia are formed, determined by the individual circumstances of each project: its requirements, size and complexity, and the attitude of individual project leaders. However, there are a number of close working relationships based on generic models of consortia-like arrangements for the purpose of building production, such as the Housing Corporation Guidance Notes and the PPC2000.
Abstract:
Theoretical understanding of the implementation and use of innovations within construction contexts is discussed and developed. It is argued that both the rhetoric of the 'improvement agenda' within construction and theories of innovation fail to account for the complex contexts and disparate perspectives which characterize construction work. To address this, the concept of relative boundedness is offered. Relatively unbounded innovation is characterized by a lack of a coherent central driving force or mediator with the ability to reconcile potential conflicts and overcome resistance to implementation. This is a situation not exclusive to, but certainly indicative of, much construction project work. Drawing on empirical material from the implementation of new design and coordination technologies on a large construction project, the concept is developed, concentrating on the negotiations and translations implementation mobilized. An actor-network theory (ANT) approach is adopted, which emphasizes the roles that both human actors and non-human agents play in the performance and outcomes of these interactions. Three aspects of how relative boundedness is constituted and affected are described; through the robustness of existing practices and expectations, through the delegation of interests on to technological artefacts and through the mobilization of actors and artefacts to constrain and limit the scope of negotiations over new technology implementation.
Abstract:
Chain in both its forms - common (or stud-less) and stud-link - has many engineering applications. It is widely used as a component in the moorings of offshore floating systems, where its ruggedness and corrosion resistance make it an attractive choice. Chain exhibits some interesting behaviour in that when straight and subject to an axial load it does not twist or generate any torque, but if twisted or loaded when in a twisted condition it behaves in a highly non-linear manner, with the torque dependent upon the level of twist and axial load. Clearly an understanding of the way in which chains may behave and interact with other mooring components (such as wire rope, which also exhibits coupling between axial load and generated torque) when they are in service is essential. However, the sizes of chain that are in use in offshore moorings (typical bar diameters are 75 mm and greater) are too large to allow easy testing. This paper, which is in two parts, aims to address the issues and considerations relevant to torque in mooring chain. The first part introduces a frictionless theory that predicts the resultant torques and 'lift' in the links as non-dimensionalized functions of the angle of twist. Fortran code is presented in an Appendix, which allows the reader to make use of the analysis. The second part of the paper presents results from experimental work on both stud-less (41 mm) and stud-link (20.5 and 56 mm) chains. Torsional data are presented in both 'constant twist' and 'constant load' forms, as well as considering the lift between the links.