71 results for "continuous disclosure"
Abstract:
The cheese industry has continually sought a robust method to monitor milk coagulation. Measurement of whey separation is also critical to control cheese moisture content, which affects quality. The objective of this study was to demonstrate that an online optical sensor detecting light backscatter in a vat could be applied to monitor both coagulation and syneresis during cheesemaking. A prototype sensor having a large field of view (LFV) relative to curd particle size was constructed. Temperature, cutting time, and calcium chloride addition were varied to evaluate the response of the sensor over a wide range of coagulation and syneresis rates. The LFV sensor response was related to casein micelle aggregation and curd firming during coagulation and to changes in curd moisture and whey fat contents during syneresis. The LFV sensor has potential as an online, continuous sensor technology for monitoring both coagulation and syneresis during cheesemaking.
Abstract:
Dynamic multi-user interactions in a single networked virtual environment suffer from abrupt state transition problems due to communication delays arising from network latency--an action by one user only becoming apparent to another user after the communication delay. This results in a temporal suspension of the environment for the duration of the delay--the virtual world 'hangs'--followed by an abrupt jump to make up for the time lost due to the delay so that the current state of the virtual world is displayed. These discontinuities appear unnatural and disconcerting to the users. This paper proposes a novel method of warping times associated with users to ensure that each user views a continuous version of the virtual world, such that no hangs or jumps occur despite other user interactions. Objects passed between users within the environment are parameterized, not by real time, but by a virtual local time, generated by continuously warping real time. This virtual time periodically realigns itself with real time as the virtual environment evolves. The concept of a local user dynamically warping the local time is also introduced. As a result, the users are shielded from viewing discontinuities within their virtual worlds, consequently enhancing the realism of the virtual environment.
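As a concrete illustration of the time-warping idea (the class name, the linear catch-up policy, and all parameter values below are my own assumptions, not taken from the paper), an object handed over with a communication delay can be animated from its delayed state under a virtual clock that runs slightly faster than real time until the two realign, so the viewer sees continuous, briefly accelerated motion instead of a hang followed by a jump:

```python
class VirtualTime:
    """Per-object virtual clock (illustrative sketch, not the paper's API).

    An object received with communication delay `delay` is parameterized by
    a virtual time that starts `delay` seconds behind real time and rejoins
    it linearly over a `catchup` window, i.e. it runs at rate
    1 + delay/catchup until realigned.
    """

    def __init__(self, real_now, delay, catchup=2.0):
        self.t0 = real_now        # real time at which the object arrived
        self.delay = delay        # initial lag behind real time
        self.catchup = catchup    # seconds taken to realign with real time

    def __call__(self, real_now):
        elapsed = real_now - self.t0
        if elapsed >= self.catchup:
            return real_now       # periodic realignment complete
        # The remaining lag shrinks linearly to zero over the window.
        lag = self.delay * (1.0 - elapsed / self.catchup)
        return real_now - lag


vt = VirtualTime(real_now=10.0, delay=0.5, catchup=2.0)
print(vt(10.0))   # 9.5   -- starts `delay` behind real time
print(vt(11.0))   # 10.75 -- catching up at rate 1.25
print(vt(12.5))   # 12.5  -- realigned with real time
```

The linear catch-up is the simplest choice; any monotone warp that is continuous in real time would serve the same purpose of hiding the discontinuity from the viewer.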
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
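A minimal numerical sketch of this kind of continuous description (the equations, rate constants, and grid below are my own illustrative assumptions, not the paper's model): meristem density is advected downward by elongation, root length density accumulates wherever meristems pass, and lateral initiation along that root length feeds new meristems back in, producing a downward-travelling front:

```python
import numpy as np

# Continuous root-density sketch on a 1-D soil profile (illustrative only):
#   m(z,t): meristem (root tip) density, advected downward at elongation rate e
#   r(z,t): root length density, deposited by passing meristems
# Laterals are initiated at rate b per unit root length; meristems die at rate mu.
nz, dz, dt = 200, 0.5, 0.05          # 100 cm profile, explicit time stepping
e, b, mu = 1.0, 0.02, 0.01           # elongation, initiation, mortality rates

m = np.zeros(nz)
m[0] = 1.0                           # seed: all meristems at the surface
r = np.zeros(nz)

for step in range(1000):             # CFL number e*dt/dz = 0.1, stable
    # First-order upwind advection: meristems move down the profile.
    adv = -e * np.diff(m, prepend=0.0) / dz
    m = m + dt * (adv + b * r - mu * m)
    r = r + dt * e * m               # tips leave root length behind them

print(m.argmax() * dz)               # depth of the meristem density peak (cm)
```

After 1000 steps the meristem density forms a front well below the surface, a crude analogue of the travelling-wave pattern of meristems the abstract describes; richer behaviour (gravitropism, deforming domains, interacting root systems) would need the full framework.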
Abstract:
Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered as a continuous time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique allows trajectories to be assimilated in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
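A toy, fully discrete analogue of the weakly constrained formulation (the scalar linear model x_{k+1} = a*x_k, the weights, and the noise level are illustrative assumptions, not the paper's continuous-time setting): the cost J = w_dyn * sum_k (x_{k+1} - a*x_k)^2 + w_obs * sum_k (x_k - y_k)^2 trades small dynamical error against small observational error, and, being quadratic, is minimized by a single linear least-squares solve rather than a boundary value problem:

```python
import numpy as np

rng = np.random.default_rng(0)
a, N = 0.95, 50
truth = a ** np.arange(N + 1)                    # trajectory obeying the model
y = truth + 0.1 * rng.standard_normal(N + 1)     # noisy observations of it

w_dyn, w_obs = 100.0, 1.0                        # large w_dyn: trust the model
A = np.zeros((2 * N + 1, N + 1))                 # stacked weighted residuals
b = np.zeros(2 * N + 1)
for k in range(N):                               # dynamical-error rows
    A[k, k], A[k, k + 1] = -np.sqrt(w_dyn) * a, np.sqrt(w_dyn)
for k in range(N + 1):                           # observational-error rows
    A[N + k, k] = np.sqrt(w_obs)
    b[N + k] = np.sqrt(w_obs) * y[k]

x = np.linalg.lstsq(A, b, rcond=None)[0]         # assimilated trajectory

rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print(rms(y - truth), rms(x - truth))            # analysis error < obs error
```

Taking w_dyn to infinity recovers the strongly constrained (perfect-model) limit; keeping it finite, as here, is the discrete counterpart of allowing minute deviations from the model, which regularizes the fit.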
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
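A sketch of the Newtonian nudging (synchronization) approach on a toy chaotic map (the map, gain values, and noise level are my illustrative choices, not the paper's examples): the assimilating state is relaxed toward each incoming observation before the model step, and the gain g plays the role of the sensitivity to the observations:

```python
import numpy as np

f = lambda x: 3.9 * x * (1.0 - x)                # chaotic logistic map
rng = np.random.default_rng(1)
truth = np.empty(500)
truth[0] = 0.3
for k in range(499):
    truth[k + 1] = f(truth[k])
y = truth + 0.01 * rng.standard_normal(500)      # noisy observations

def nudge(g):
    """RMS tracking error vs. truth of the nudged trajectory (after spin-up)."""
    x, err = 0.9, []                             # deliberately wrong start
    for k in range(499):
        # Relax toward the observation, then apply the model step.
        x = f(np.clip((1.0 - g) * x + g * y[k], 0.0, 1.0))
        err.append(x - truth[k + 1])
    return float(np.sqrt(np.mean(np.square(err[100:]))))

for g in (0.1, 0.5, 1.0):
    print(g, nudge(g))                           # RMS error for three gains
```

With g = 1 the analysis simply shadows the noisy observations, so its residual error is set by the observational noise propagated through f; with a gain too small to synchronize, the trajectory loses track of the truth entirely. The out-of-sample criterion discussed in the abstract is a principled way of choosing g between these extremes.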
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
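The CRPS of such a piecewise constant ensemble CDF has a closed form via the standard kernel identity CRPS(F, y) = E|X − y| − 0.5·E|X − X′|, with X, X′ independent draws from the ensemble; a minimal implementation (function name mine) is:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of the piecewise constant CDF with steps at the ensemble members,
    computed from the kernel identity CRPS = E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, dtype=float)
    return (np.mean(np.abs(x - obs))
            - 0.5 * np.mean(np.abs(x[:, None] - x[None, :])))

# One-member "ensemble": the pairwise term vanishes and the CRPS
# reduces to the absolute error.
print(crps_ensemble([3.0], 1.0))       # 2.0
# Two members at 0 and 1, observation 0: integral of (F - H(x))^2 is 0.25.
print(crps_ensemble([0.0, 1.0], 0.0))  # 0.25
```

This is exactly the 'raw' ensemble evaluation the abstract refers to; the caveat about members acting as quantiles with non-uniform levels matters when the same ensemble is reused in rank histograms or other diagnostics.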
Abstract:
We summarise the response of the EAA’s FRSC to Towards a Disclosure Framework for the Notes, a Discussion Paper (DP) issued jointly by EFRAG, ANC and FRC. While supportive of much of the DP, and in particular of the underlying aim to place disclosures on a sounder conceptual foundation, we identify two broad themes for further development. The first concerns the DP’s diagnosis of the problem, which is that existing financial reporting is characterised by, on the one hand, disclosure overload and, on the other hand, the absence of a conceptual framework for organising and communicating disclosures. Our review of the literature suggests much greater support for the second of these two factors than for the first. The second broad theme is the purpose of the proposed disclosure framework (DF), and the principles that are derived from this purpose. Here, we stress the need for the framework to better accommodate the context within which financial statement disclosures are used. In practice, this context is characterised by variation in information, incentives and enforcement, each of which has a considerable effect on the appropriate disclosure policy and practice in any given situation.
Abstract:
We consider in this paper the solvability of linear integral equations on the real line, in operator form (λ−K)φ=ψ, where λ is a complex constant and K is an integral operator. We impose conditions on the kernel, k, of K which ensure that K is bounded as an operator on X. Let Xa denote the weighted space {x∈X : x(s)=O(|s|^{−a}) as |s|→∞}. Our first result is that if, additionally, |k(s,t)|⩽κ(s−t), with κ∈L^1(ℝ) and κ(s)=O(|s|^{−b}) as |s|→∞, for some b>1, then the spectrum of K is the same on Xa as on X, for 0⩽a⩽b−1. As an example where kernels of this form occur we discuss a boundary integral equation formulation of an impedance boundary value problem for the Helmholtz equation in a half-plane.
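A minimal numerical sketch of solving such an equation (the concrete kernel, λ, right-hand side, truncation, and quadrature below are my illustrative choices, not the paper's analysis): truncate the line to [−L, L], discretize K by a Nyström-type rule, and solve the resulting linear system. The kernel k(s,t) = κ(s−t) with κ(s) = exp(−|s|) decays faster than any power |s|^{−b} and satisfies ∫κ = 2, so any |λ| > 2 lies outside the spectrum of K:

```python
import numpy as np

L, n = 30.0, 1200
s, h = np.linspace(-L, L, n, retstep=True)

# Quadrature-weighted kernel matrix: (K phi)(s_i) ~ sum_j k(s_i, s_j) phi_j h.
K = np.exp(-np.abs(s[:, None] - s[None, :])) * h

lam = 4.0                            # |lam| > ||K|| <= 2, so lam - K is invertible
psi = 1.0 / (1.0 + s ** 2)           # a right-hand side decaying like |s|^{-2}

phi = np.linalg.solve(lam * np.eye(n) - K, psi)

# Residual of the discretized operator equation on the grid.
res = float(np.max(np.abs(lam * phi - K @ phi - psi)))
print(res)
```

The residual only certifies the discrete system; convergence of the truncated, discretized solution to the solution on the whole line is precisely the kind of question the weighted-space theory in the paper addresses.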