917 results for Probabilistic constraints
Abstract:
Linear models of the bidirectional reflectance distribution function are useful tools for understanding the angular variability of surface reflectance as observed by medium-resolution sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS). These models are used operationally to normalize data to common view and illumination geometries and to calculate integral quantities such as albedo. Currently, to compensate for noise in observed reflectance, these models are inverted against data collected during some temporal window for which the model parameters are assumed to be constant. Even so, the retrieved parameters are often noisy for regions where sufficient observations are not available. This paper demonstrates the use of Lagrange multipliers to allow arbitrarily large windows and, at the same time, to produce an individual parameter set for each day, even for regions where only sparse observations are available.
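As a rough illustration of the constrained-inversion idea (a sketch only: kernels, observation counts, and the multiplier value below are all invented, and the quadratic-penalty form stands in for the paper's Lagrange-multiplier formulation, to which it is equivalent for an appropriate multiplier value):

```python
# Sketch: per-day inversion of a linear kernel-driven BRDF model with a
# temporal-smoothness constraint, so data-poor days borrow strength from
# their neighbours. Kernels, observation counts and noise are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_par = 10, 3                     # e.g. isotropic, volumetric, geometric
true = np.array([0.3, 0.1, 0.05])         # "true" kernel weights

A, y = [], []
for d in range(n_days):
    n_obs = int(rng.integers(0, 4))       # sparse sampling: 0-3 obs per day
    K = np.column_stack([np.ones(n_obs), rng.normal(size=(n_obs, n_par - 1))])
    A.append(K)
    y.append(K @ true + 0.01 * rng.normal(size=n_obs))

# Stack the normal equations for all days and couple adjacent days so that
# ||p_d - p_{d-1}||^2 is penalized with multiplier lam.
lam = 5.0
H = 1e-8 * np.eye(n_days * n_par)         # tiny ridge for numerical safety
g = np.zeros(n_days * n_par)
for d in range(n_days):
    s = slice(d * n_par, (d + 1) * n_par)
    H[s, s] += A[d].T @ A[d]
    g[s] += A[d].T @ y[d]
for d in range(n_days - 1):
    s0 = slice(d * n_par, (d + 1) * n_par)
    s1 = slice((d + 1) * n_par, (d + 2) * n_par)
    I = lam * np.eye(n_par)
    H[s0, s0] += I; H[s1, s1] += I
    H[s0, s1] -= I; H[s1, s0] -= I

params = np.linalg.solve(H, g).reshape(n_days, n_par)
print(params.round(3))                    # one parameter set per day
```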
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoiding overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and their coefficients are set to zero, providing an efficient and automatic model-reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
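A minimal sketch of the lasso idea in a logistic setting, using scikit-learn's L1-penalized logistic regression on synthetic data (all names and values are illustrative; the paper's own fitting and model-selection machinery is not reproduced here):

```python
# L1-penalized (lasso-style) logistic regression mapping forecast inputs
# to a probability forecast. Data are synthetic: only two of the ten
# inputs actually matter, and the L1 penalty should zero out the rest.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # e.g. ensemble mean, spread, members
logit = 1.5 * X[:, 0] - 0.8 * X[:, 1]     # true log-odds: a linear combination
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

# The L1 penalty drives unimportant coefficients exactly to zero,
# giving the automatic model reduction described in the abstract.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_))
print("CV log-loss:", -cross_val_score(model, X, y, scoring="neg_log_loss").mean())
```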
Abstract:
Several methods that produce forecasts for time series in the form of probability assignments are examined. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One class of models, cluster-weighted models (CWMs), receives particular attention. CWMs, originally proposed for deterministic forecasts, can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. There is an optimal solution to this problem, called the optimal filter, to which the considered time series models are compared. (The optimal filter requires the dynamical equations to be known.) The second example aims at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations. In particular, they provide more than just an approximation to the conditional mean.
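A minimal sketch of how a CWM-style predictive density can be produced: fit a Gaussian mixture to joint (x_t, x_{t+1}) pairs and condition each component on the current value. This is a generic cluster-weighted construction on synthetic data, not the authors' implementation:

```python
# Cluster-weighted-style probabilistic forecast: the conditional density
# p(x_{t+1} | x_t) is a weighted mixture of the conditional Gaussians of
# each fitted component. Test series: a noisy logistic map.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t]) + 0.01 * rng.normal()

pairs = np.column_stack([x[:-1], x[1:]])     # joint samples (x_t, x_{t+1})
gmm = GaussianMixture(n_components=8, random_state=0).fit(pairs)

def predictive_density(x_now, y_grid):
    """p(x_{t+1} = y | x_t = x_now) as a mixture of conditional Gaussians."""
    dens = np.zeros_like(y_grid)
    total = 0.0
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        # condition each 2D Gaussian component on the first coordinate
        w_x = w * norm.pdf(x_now, mu[0], np.sqrt(cov[0, 0]))
        m_y = mu[1] + cov[1, 0] / cov[0, 0] * (x_now - mu[0])
        v_y = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
        dens += w_x * norm.pdf(y_grid, m_y, np.sqrt(v_y))
        total += w_x
    return dens / total

y_grid = np.linspace(0, 1, 200)
print(predictive_density(0.5, y_grid).max())  # full density, not just a mean
```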
Abstract:
This is one of the first papers to argue that code-switching and borrowing should be treated as similar phenomena. It is argued that distinguishing the two phenomena is theoretically undesirable and empirically very problematic. A probabilistic account of code-switching and a hierarchy of switched constituents (similar to hierarchies of borrowability) are proposed, which account for the fact that some constituents are more likely to be borrowed/switched than others. It is argued that the same kinds of constraints apply to both code-switching and borrowing.
Abstract:
This volume is a serious attempt to open up the subject of European philosophy of science to real thought and to provide the structural basis for the interdisciplinary development of its specialist fields, but also to provoke reflection on the idea of ‘European philosophy of science’. This effort should foster a contemporaneous reflection on what might be meant by philosophy of science in Europe and European philosophy of science, and how awareness of it could help philosophers interpret and motivate their research through a stronger collective identity. The overarching aim is to set the background for a collaborative project organising, systematising, and ultimately forging an identity for, European philosophy of science by creating research structures and developing research networks across Europe to promote its development.
Abstract:
The probabilistic projections of climate change for the United Kingdom (UK Climate Impacts Programme) show a trend towards hotter and drier summers. This suggests an expected increase in cooling demand for buildings, a requirement that conflicts with the need to reduce building energy use and related CO2 emissions. Although passive design can reduce the thermal loads of a building, a supplementary cooling system is often necessary. For such mixed-mode strategies, indirect evaporative cooling is investigated as a low-energy option in the context of a warmer and drier UK climate. Analysis of the climate projections shows an increase in wet-bulb depression, a good indicator of the cooling potential of an evaporative cooler. Modelling a mixed-mode building at two different locations showed that such a building was capable of maintaining adequate thermal comfort under probable future climates. Comparing the control climate to the scenario climate, an increase in the median evaporative cooling load is evident. The shift is greater for London than for Glasgow, with increases in the median annual cooling load of 71.6% and 3.3%, respectively. The study shows that evaporative cooling should continue to function as an effective low-energy cooling technique in future, warmer climates.
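For concreteness, wet-bulb depression can be estimated from dry-bulb temperature and relative humidity; the sketch below uses Stull's (2011) empirical wet-bulb approximation (valid near standard pressure for roughly 5-99% relative humidity), with illustrative inputs:

```python
# Wet-bulb depression (dry-bulb minus wet-bulb temperature), the quantity
# the abstract uses to indicate evaporative cooling potential.
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Wet-bulb temperature (deg C) from dry-bulb temperature and RH (%)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

t, rh = 28.0, 40.0                        # a warm, dry summer afternoon
depression = t - wet_bulb_stull(t, rh)
print(f"wet-bulb depression: {depression:.1f} K")  # larger => more cooling potential
```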
Abstract:
The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on future climate impacts on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, thus providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections into building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results from building energy simulations when applying deterministic and probabilistic climate data. This is based on two case-study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, light-weight office building. Building (i) represents an energy-efficient design that provides passive and active measures to maintain thermal comfort. Building (ii) relies entirely on mechanical means for heating and cooling, with its light-weight construction raising concern over increased cooling loads in a warmer climate. Results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest; uncertainty is likely to be particularly marked with regard to thermal comfort in naturally ventilated buildings.
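The kind of data handling the abstract refers to can be pictured as a loop over an ensemble of climate years with a statistical summary at the end. In this sketch, `annual_overheating_hours` is a hypothetical stand-in for a real building-energy simulation, and the synthetic "weather years" stand in for probabilistic (e.g. UKCP09-derived) weather files:

```python
# Run a (stand-in) building-energy metric over an ensemble of equally
# plausible climate years and summarise the spread, rather than reporting
# a single deterministic value.
import numpy as np

rng = np.random.default_rng(42)

def annual_overheating_hours(weather_year: np.ndarray) -> float:
    """Hypothetical metric: hours with outdoor temperature above 25 C."""
    return float(np.sum(weather_year > 25.0))

# 100 synthetic weather years of hourly temperatures (illustrative only)
ensemble = rng.normal(loc=12.0, scale=7.0, size=(100, 8760))

results = np.array([annual_overheating_hours(w) for w in ensemble])
p10, p50, p90 = np.percentile(results, [10, 50, 90])
print(f"overheating hours: median {p50:.0f} (10th-90th: {p10:.0f}-{p90:.0f})")
```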
Abstract:
Long-term exposure of skylarks to a fictitious insecticide and of wood mice to a fictitious fungicide was modelled probabilistically in a Monte Carlo simulation. Within the same simulation, the consequences of exposure to pesticides for reproductive success were modelled using the toxicity-exposure-linking rules developed by R. S. Bennett et al. (2005) and the interspecies extrapolation factors suggested by R. Luttik et al. (2005). We built models to reflect a range of scenarios and, as a result, were able to show how exposure to pesticide might alter the number of individuals engaged in any given phase of the breeding cycle at any given time and to predict the numbers of new adults at the season’s end.
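A minimal Monte Carlo sketch in the same spirit (all distributions and thresholds are invented for illustration and are not taken from the cited study):

```python
# Sample daily pesticide doses for a population of birds and count how
# many individuals exceed a reproductive-effect threshold at least once.
import numpy as np

rng = np.random.default_rng(7)
n_birds, n_days = 1000, 30

# Daily dose (mg/kg bw) ~ lognormal: residue, intake and body-weight
# variability multiply, a common assumption in probabilistic exposure models.
daily_dose = rng.lognormal(mean=0.0, sigma=0.6, size=(n_birds, n_days))

threshold = 2.5                            # hypothetical effect threshold
exceed = (daily_dose > threshold).any(axis=1)
print(f"birds with at least one exceedance: {exceed.mean():.1%}")
```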
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption that is not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble as merely a source of information rather than a set of possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically acting not on individual ensemble members but on the ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
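A simplified sketch of kernel dressing with an affine ensemble transformation (the published AKD form also lets the shift depend on the ensemble mean and ties the kernel bandwidth to ensemble spread; here the parameters are fixed by hand rather than fitted by minimising a proper score on training data):

```python
# Each affinely transformed ensemble member is dressed with a Gaussian
# kernel, and the climatological distribution receives a small weight.
import numpy as np
from scipy.stats import norm

def akd_density(y, ensemble, a=0.2, b=1.0, bandwidth=0.8,
                w_clim=0.1, clim_mean=0.0, clim_std=3.0):
    """Forecast density p(y) from one ensemble via affine kernel dressing."""
    z = a + b * np.asarray(ensemble)                 # affine map of the ensemble
    kernels = norm.pdf(y, loc=z[:, None], scale=bandwidth).mean(axis=0)
    clim = norm.pdf(y, clim_mean, clim_std)          # unconditioned distribution
    return (1 - w_clim) * kernels + w_clim * clim

ens = np.array([0.8, 1.1, 1.4, 0.9, 1.6])            # one forecast ensemble
y = np.linspace(-5, 8, 400)
p = akd_density(y, ens)
print("density integrates to", round(float((p * (y[1] - y[0])).sum()), 3))
```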
Abstract:
Healthcare information systems have the potential to enhance productivity, lower costs, and reduce medication errors by automating business processes. However, various issues, such as system complexity, system capabilities in relation to user requirements, and rapid changes in business needs, affect the use of these systems. In many cases, the failure of a system to meet business process needs has pushed users to develop alternative work processes (workarounds) to fill this gap. Some research has been undertaken on why users are motivated to perform and create workarounds, but very little research has assessed the consequences for patient safety. Moreover, the impact of performing these workarounds on the organisation, and how to quantify their risks and benefits, is not well analysed. Generally, there is a lack of rigorous understanding and of qualitative and quantitative studies of healthcare IS workarounds and their outcomes. This project applies A Normative Approach for Modelling Workarounds to develop A Model of Motivation, Constraints, and Consequences. It aims to understand the phenomenon in depth and to provide guidelines to organisations on how to deal with workarounds. Finally, the method is demonstrated on a case-study example and its relative merits are discussed.
Abstract:
Parameterization schemes for the drag due to atmospheric gravity waves are discussed and compared in the context of a simple one-dimensional model of the quasi-biennial oscillation (QBO). A number of fundamental issues are examined in detail, with the goal of providing a better understanding of the mechanism by which gravity wave drag can produce an equatorial zonal wind oscillation. The gravity wave–driven QBOs are compared with those obtained from a parameterization of equatorial planetary waves. In all gravity wave cases, it is seen that the inclusion of vertical diffusion is crucial for the descent of the shear zones and the development of the QBO. An important difference between the schemes for the two types of waves is that in the case of equatorial planetary waves, vertical diffusion is needed only at the lowest levels, while for the gravity wave drag schemes it must be included at all levels. The question of whether there is downward propagation of influence in the simulated QBOs is addressed. In the gravity wave drag schemes, the evolution of the wind at a given level depends on the wind above, as well as on the wind below. This is in contrast to the parameterization for the equatorial planetary waves in which there is downward propagation of phase only. The stability of a zero-wind initial state is examined, and it is determined that a small perturbation to such a state will amplify with time to the extent that a zonal wind oscillation is permitted.
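A highly simplified sketch of the mechanism under discussion: a 1D zonal wind driven by critical-level momentum deposition from two counter-propagating waves, plus the vertical diffusion the abstract identifies as crucial for shear-zone descent. All parameters are invented, and the model is far too crude to reproduce a realistic QBO; it only illustrates the wave-drag-plus-diffusion time stepping:

```python
# Crude 1D sketch: wind u(z) forced by eastward and westward wave drag
# (WKB-style momentum-flux deposition, strongest near critical levels)
# plus vertical diffusion. Every number below is illustrative.
import numpy as np

nz, dz, dt = 60, 500.0, 3600.0            # levels, grid spacing (m), step (s)
nu = 0.3                                  # vertical diffusivity (m^2/s)
z = np.arange(nz) * dz
u = 0.01 * np.sin(2 * np.pi * z / z[-1])  # small perturbation to a zero-wind state

def wave_drag(u, c, f0=4e-3, atten=0.125):
    """Drag from a wave of phase speed c carrying momentum flux f0 (m^2/s^2)."""
    damping = atten / np.maximum((c - u) ** 2, 1.0)   # strong absorption near u ~ c
    flux = f0 * np.exp(-np.cumsum(damping) * dz)      # flux attenuates upward
    return -np.gradient(flux, dz) * np.sign(c)        # deposition accelerates toward c

for step in range(24 * 365 * 4):          # integrate a few years, hourly steps
    drag = wave_drag(u, 25.0) + wave_drag(u, -25.0)
    diff = nu * np.gradient(np.gradient(u, dz), dz)
    u += dt * (drag + diff)
    u[0] = 0.0                            # fixed lower boundary

print("wind extrema (m/s):", round(float(u.min()), 1), round(float(u.max()), 1))
```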
Abstract:
This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered, with the aim of assessing which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period to which both types of waves contribute; this is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, similar to those that apply when gravity wave drag is used without any planetary wave drag. In recent years it has been observed that, to simulate the QBO accurately, general circulation models require parameterized gravity wave drag in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for understanding how such a compensation is possible.
Abstract:
A theory of available potential energy (APE) for symmetric circulations, which includes momentum constraints, is presented. The theory is a generalization of the classical theory of APE, which includes only thermal constraints on the circulation. Physically, centrifugal potential energy is included along with gravitational potential energy. The generalization relies on the Hamiltonian structure of the conservative dynamics, although (as with classical APE) it still defines the energetics in a nonconservative framework. It follows that the theory is exact at finite amplitude, has a local form, and can be applied to a variety of fluid models. It is applied here to the f-plane Boussinesq equations. It is shown that, by including momentum constraints, the APE of a symmetrically stable flow is zero, while the energetics of a mechanically driven symmetric circulation properly reflect its causality.
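As background (standard material, not taken from the paper itself): in the small-amplitude limit, the classical, thermally constrained APE of a Boussinesq fluid reduces to the familiar quadratic form

\[
A_{\mathrm{classical}} \approx \int \frac{b'^2}{2N^2}\, dV ,
\]

where $b'$ is the buoyancy perturbation about the reference state and $N$ is the buoyancy frequency of that state. The generalization described in the abstract augments this thermal contribution with a centrifugal, momentum-constrained contribution, so that a symmetrically stable flow, which can carry nonzero classical APE, has zero generalized APE.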
Abstract:
We study two-dimensional (2D) turbulence in a doubly periodic domain driven by a monoscale-like forcing and damped by various dissipation mechanisms of the form $\nu_\mu(-\Delta)^\mu$. By "monoscale-like" we mean that the forcing is applied over a finite range of wavenumbers $k_{\min} \le k \le k_{\max}$, and that the ratio of enstrophy injection $\eta \ge 0$ to energy injection $\varepsilon \ge 0$ is bounded by $k_{\min}^2 \varepsilon \le \eta \le k_{\max}^2 \varepsilon$. Such a forcing is frequently considered in theoretical and numerical studies of 2D turbulence. It is shown that for $\mu \ge 0$ the asymptotic behaviour satisfies $\|u\|_1^2 \le k_{\max}^2 \|u\|^2$, where $\|u\|^2$ and $\|u\|_1^2$ are the energy and enstrophy, respectively. If the condition of monoscale-like forcing holds only in a time-mean sense, then the inequality holds in the time mean. It is also shown that for Navier–Stokes turbulence ($\mu = 1$), the time-mean enstrophy dissipation rate is bounded from above by $2\nu_1 k_{\max}^2$. These results place strong constraints on the spectral distribution of energy and enstrophy and of their dissipation, and thereby on the existence of energy and enstrophy cascades, in such systems. In particular, the classical dual cascade picture is shown to be invalid for forced 2D Navier–Stokes turbulence ($\mu = 1$) when it is forced in this manner. Inclusion of Ekman drag ($\mu = 0$) along with molecular viscosity permits a dual cascade, but is incompatible with the log-modified $-3$ power law for the energy spectrum in the enstrophy-cascading inertial range. In order to achieve the latter, it is necessary to invoke an inverse viscosity ($\mu < 0$). These constraints on permissible power laws apply for any spectrally localized forcing, not just for monoscale-like forcing.
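A brief note on the notation, using standard definitions consistent with the abstract (the modal-injection argument below assumes nonnegative injection in each forced mode, a simplification rather than the paper's general definition):

\[
\|u\|^2 = \sum_{\mathbf{k}} |\hat{u}_{\mathbf{k}}|^2, \qquad
\|u\|_1^2 = \sum_{\mathbf{k}} k^2 |\hat{u}_{\mathbf{k}}|^2 ,
\]

so the enstrophy weights each mode of the energy by $k^2$. If the forcing $\hat{f}_{\mathbf{k}}$ vanishes outside $k_{\min} \le k \le k_{\max}$ and each modal injection $a_{\mathbf{k}} = \mathrm{Re}\,\langle \hat{u}_{\mathbf{k}}^{*} \cdot \hat{f}_{\mathbf{k}} \rangle$ is nonnegative, then $\varepsilon = \sum a_{\mathbf{k}}$ and $\eta = \sum k^2 a_{\mathbf{k}}$ differ mode by mode only by a factor $k^2 \in [k_{\min}^2, k_{\max}^2]$, which immediately yields the defining bound $k_{\min}^2 \varepsilon \le \eta \le k_{\max}^2 \varepsilon$.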