60 results for A-PRIORI
in CentAUR: Central Archive at the University of Reading, UK
Validation of a priori CME arrival predictions made using real-time heliospheric imager observations
Abstract:
Between December 2010 and March 2013, volunteers for the Solar Stormwatch (SSW) Citizen Science project identified and analyzed coronal mass ejections (CMEs) in near real-time Solar Terrestrial Relations Observatory Heliospheric Imager observations, in order to make “Fearless Forecasts” of CME arrival times and speeds at Earth. Of the 60 predictions of Earth-directed CMEs, 20 resulted in an identifiable interplanetary CME (ICME) at Earth within 1.5–6 days, with an average error in predicted transit time of 22 h against an average transit time of 82.3 h. The average error in predicted arrival speed is 151 km s−1, against an average arrival speed of 425 km s−1. In the same period there were 44 CMEs for which there are no corresponding SSW predictions, and 600 days on which there was neither a CME predicted nor one observed. A number of metrics show that the SSW predictions do have useful forecast skill, although there is still much room for improvement. We investigate potential improvements by using SSW inputs in three models of ICME propagation: two of constant acceleration and one of aerodynamic drag. We find that taking account of interplanetary acceleration can improve the average error in transit time to 19 h and in arrival speed to 77 km s−1.
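The abstract does not spell out the propagation models; as a rough illustration of the two classes it names (constant acceleration and aerodynamic drag), a minimal transit-time sketch with invented parameter values might look like this:

```python
# Minimal sketch of the two classes of ICME transit-time model named in the
# abstract: constant interplanetary acceleration and aerodynamic drag.
# All parameter values below are illustrative, not those used in the paper.
import math

AU = 1.496e8  # Sun-Earth distance, km

def transit_constant_accel(v0, a, d=AU):
    """Transit time (s) for constant acceleration a (km/s^2) from speed v0 (km/s)."""
    if abs(a) < 1e-12:
        return d / v0
    return (-v0 + math.sqrt(v0**2 + 2.0 * a * d)) / a

def transit_drag(v0, v_sw=400.0, gamma=1e-7, d=AU):
    """Transit time (s) under aerodynamic drag dv/dt = -gamma*(v - v_sw)^2
    (for v0 > v_sw), using the analytic distance profile and a bisection solve."""
    def x(t):  # distance travelled after time t
        return v_sw * t + math.log(1.0 + gamma * (v0 - v_sw) * t) / gamma
    lo, hi = 0.0, 60 * 86400.0  # bracket the arrival between 0 and 60 days
    while hi - lo > 1.0:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if x(mid) < d else (lo, mid)
    return 0.5 * (lo + hi)

v0 = 600.0  # CME launch speed, km/s
print(f"no acceleration : {transit_constant_accel(v0, 0.0) / 3600:.1f} h")
print(f"decelerating    : {transit_constant_accel(v0, -2e-6) / 3600:.1f} h")
print(f"drag model      : {transit_drag(v0) / 3600:.1f} h")
```

Fast CMEs launched into slower solar wind decelerate in transit, so both refinements lengthen the predicted travel time relative to a constant-speed extrapolation, which is the direction of correction the abstract reports.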
Abstract:
We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method: trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell’s equations in one space dimension, we prove stability of the method, quasi-optimality, best-approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds of high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and to Maxwell’s equations in higher space dimensions. Numerical experiments confirm the theoretical results and demonstrate faster convergence than the non-Trefftz version of the scheme.
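For orientation, the defining Trefftz property in the one-dimensional Maxwell case can be sketched as follows; the notation is ours, not necessarily the paper's:

```latex
% 1D Maxwell system (normalized units) and the local Trefftz space
% (schematic; notation ours, not taken from the paper).
\[
  \partial_t E + \partial_x H = 0, \qquad
  \partial_t H + \partial_x E = 0
  \quad \text{in each space--time element } K,
\]
\[
  \mathbb{T}^p(K) = \left\{ (E,H) \in \mathbb{P}^p(K)^2 :\;
    \partial_t E + \partial_x H = 0,\;
    \partial_t H + \partial_x E = 0 \right\}.
\]
```

Because every discrete function already satisfies the PDE inside each element, the scheme only has to control the jumps across the space–time mesh skeleton through numerical fluxes, which is what drives the stability and quasi-optimality analysis.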
Abstract:
This paper explores the grounds of a common criticism of luck egalitarianism in order to make an argument about both the proper subject of theorising about justice and how to approach that subject. It draws a distinction between what it calls basic structure views and a priori baseline views, where the former take the institutional aspects of political prescriptions seriously and the latter do not. It argues that objections to luck egalitarianism on the grounds of its harshness can in part be explained by the latter's blindness to relevant features of institutions. Further, it may be that luck egalitarianism cannot regard its own enactment as just. A related objection to Dworkin’s equality of resources, which claims that it cannot pick a particular institutional background to set the costs of resources and so is radically indeterminate, is also presented. These results, I argue, give us good reason to reject all a priori baseline views.
Abstract:
We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh-dependent H1-like space.
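Schematically, an estimate of the kind described takes the shape below; the constant, the rate and the exact norms are placeholders, with the precise statement in the paper:

```latex
% Schematic shape of an optimal a priori error estimate in the norms named
% in the abstract (C and the rate k are illustrative placeholders).
\[
  \| u - u_h \|_{L^\infty(0,T;\,\mathrm{dG})}
  + \| v - v_h \|_{L^2(0,T;\,\mathrm{dG})}
  \;\le\; C\, h^{k},
\]
% with u, v the strain and velocity, h the meshwidth, and dG the
% mesh-dependent H^1-like norm.
```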
Abstract:
Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.
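As a toy illustration of the Bayesian combination described (not the paper's actual formulation), a prior on aerodynamic conductance can be updated with a noisy remotely sensed surface temperature through the bulk transfer relation; all names, closures and numbers below are invented:

```python
# Toy Bayesian update of aerodynamic conductance g_a from a remotely sensed
# surface temperature, via the bulk transfer relation H = rho_cp * g_a * (Ts - Ta).
# Illustrative sketch only: the closure, prior and numbers are invented.
import numpy as np

rho_cp = 1200.0                  # volumetric heat capacity of air, J m^-3 K^-1
Ta = 293.0                       # station air temperature, K
Ts_obs, Ts_sigma = 297.0, 0.5    # remote surface temperature and its error, K
A = 100.0                        # assumed energy available as sensible heat, W m^-2

# A priori estimate of g_a (m s^-1): lognormal prior centred on 0.02
g = np.linspace(0.001, 0.1, 2000)
dg = g[1] - g[0]
prior = np.exp(-0.5 * ((np.log(g) - np.log(0.02)) / 0.4) ** 2) / g
prior /= prior.sum() * dg

# Likelihood: a toy energy-balance closure predicting Ts from g_a
Ts_model = Ta + A / (rho_cp * g)            # lower conductance -> warmer surface
like = np.exp(-0.5 * ((Ts_obs - Ts_model) / Ts_sigma) ** 2)

post = prior * like
post /= post.sum() * dg

g_mean = (g * post).sum() * dg
H_mean = rho_cp * g_mean * (Ts_obs - Ta)
print(f"posterior mean g_a = {g_mean:.4f} m/s, H ≈ {H_mean:.0f} W/m^2")
```

The same machinery extends to a joint prior over aerodynamic and surface conductance, which is the pairing the abstract describes.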
Abstract:
Four-dimensional variational data assimilation (4D-Var) combines the information from a time sequence of observations with the model dynamics and a background state to produce an analysis. In this paper, a new mathematical insight into the behaviour of 4D-Var is gained from an extension of concepts that are used to assess the qualitative information content of observations in satellite retrievals. It is shown that the 4D-Var analysis increments can be written as a linear combination of the singular vectors of a matrix which is a function of both the observational and the forecast model systems. This formulation is used to consider the filtering and interpolating aspects of 4D-Var using idealized case-studies based on a simple model of baroclinic instability. The results of the 4D-Var case-studies exhibit the reconstruction of the state in unobserved regions as a consequence of the interpolation of observations through time. The results also exhibit the filtering of components with small spatial scales that correspond to noise, and the filtering of structures in unobserved regions. The singular vector perspective gives a very clear view of this filtering and interpolating by the 4D-Var algorithm and shows that the appropriate specification of the a priori statistics is vital to extract the largest possible amount of useful information from the observations.
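In the standard retrieval-theory notation (ours, not necessarily the paper's), with background error covariance B, observation error covariance R, generalized observation operator H acting along the model trajectory, and innovation vector d, the decomposition reads:

```latex
% 4D-Var analysis increment via the SVD of the scaled observability matrix
% (standard retrieval-theory form; notation illustrative).
\[
  \hat{H} \;=\; R^{-1/2}\, \mathbf{H}\, B^{1/2} \;=\; U \Lambda V^{\mathsf T},
  \qquad
  \delta x_a \;=\; B^{1/2} \sum_i \frac{\lambda_i}{1+\lambda_i^{2}}
  \bigl( u_i^{\mathsf T} R^{-1/2} d \bigr)\, v_i .
\]
% Modes with lambda_i >> 1 are passed through (information-rich observed
% structures); modes with lambda_i << 1 are damped, which is the filtering
% of noise and of unobserved scales described in the abstract.
```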
Abstract:
A low-resolution coupled ocean-atmosphere general circulation model (OAGCM) is used to study the characteristics of the large-scale ocean circulation and its climatic impacts in a series of global coupled aquaplanet experiments. Three configurations, designed to produce fundamentally different ocean circulation regimes, are considered. The first has no obstruction to zonal flow, the second contains a low barrier that blocks zonal flow in the ocean at all latitudes, creating a single enclosed basin, whilst the third contains a gap in the barrier to allow circumglobal flow at high southern latitudes. Warm greenhouse climates with a global average surface air temperature of around 27 °C result in all cases. Equator-to-pole temperature gradients are shallower than that of a current climate simulation. Whilst changes in the land configuration cause regional changes in temperature, winds and rainfall, heat transports within the system are little affected. Inhibition of all ocean transport on the aquaplanet leads to a reduction in global mean surface temperature of 8 °C, along with a sharpening of the meridional temperature gradient. This results from a reduction in global atmospheric water vapour content and an increase in tropical albedo, both of which act to reduce global surface temperatures. Fitting a simple radiative model to the atmospheric characteristics of the OAGCM solutions suggests that a simpler atmosphere model, with radiative parameters chosen a priori based on the changing surface configuration, would have produced qualitatively different results. This implies that studies with reduced-complexity atmospheres need to be guided by more complex OAGCM results on a case-by-case basis.
Abstract:
A recent report in Consciousness and Cognition provided evidence from a study of the rubber hand illusion (RHI) that supports the multisensory principle of inverse effectiveness (PoIE). I describe two methods of assessing the principle of inverse effectiveness ('a priori' and 'post-hoc'), and discuss how the post-hoc method is affected by the statistical artefact of 'regression towards the mean'. I identify several cases where this artefact may have affected particular conclusions about the PoIE, and relate these to the historical origins of 'regression towards the mean'. Although the conclusions of the recent report may not have been grossly affected, some of the inferential statistics were almost certainly biased by the methods used. I conclude that, unless such artefacts are fully dealt with in the future, and unless the statistical methods for assessing the PoIE evolve, strong evidence in support of the PoIE will remain lacking.
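The artefact is easy to reproduce. The following toy simulation (invented numbers, unrelated to the report's data) shows how a post-hoc 'weak responder' group appears to improve on retest purely through selection on noise:

```python
# Toy demonstration of regression towards the mean under post-hoc selection.
# Two noisy measurements of the same stable per-subject effect: selecting
# 'weak responders' on measurement 1 makes them look better on measurement 2
# even though nothing changed. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = rng.normal(1.0, 0.5, n)       # stable per-subject effect size
m1 = true_effect + rng.normal(0, 0.5, n)    # first noisy measurement
m2 = true_effect + rng.normal(0, 0.5, n)    # second, independent noise

weak = m1 < np.median(m1)                   # post-hoc split on measurement 1
print(f"weak group, measurement 1: {m1[weak].mean():.3f}")
print(f"weak group, measurement 2: {m2[weak].mean():.3f}")  # regresses upward
# The apparent 'gain' of the weak group is pure selection on measurement
# noise, which is exactly the bias a post-hoc test of inverse effectiveness
# invites.
```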
Abstract:
The study of motor unit action potential (MUAP) activity from electromyographic signals is an important stage in neurological investigations that aim to understand the state of the neuromuscular system. In this context, the identification and clustering of MUAPs that exhibit common characteristics, and the assessment of which data features are most relevant for the definition of such cluster structure, are central issues. In this paper, we propose the application of an unsupervised Feature Relevance Determination (FRD) method to the analysis of experimental MUAPs obtained from healthy human subjects. In contrast to approaches that require a priori information about the data, this FRD method is embedded in a constrained mixture model, known as Generative Topographic Mapping, which simultaneously performs clustering and visualization of MUAPs. The experimental results of the analysis of a data set consisting of MUAPs measured from the surface of the First Dorsal Interosseous, a hand muscle, indicate that the MUAP features corresponding to the hyperpolarization period in the physiological process of generation of muscle fibre action potentials are consistently estimated as the most relevant and, therefore, as those that should receive preferential attention in the interpretation of the MUAP groupings.
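Generative Topographic Mapping with embedded FRD is not part of common libraries; purely as a stand-in for the idea, the sketch below fits a plain Gaussian mixture to hypothetical MUAP features and scores per-feature relevance by how strongly the cluster means separate relative to each feature's overall variance:

```python
# Stand-in sketch for mixture-model-based feature relevance (not GTM itself):
# fit a Gaussian mixture, then score each feature by between-cluster variance
# of the component means relative to total variance. Hypothetical data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy 'MUAP feature' matrix: 300 waveforms x 4 features; only features
# 0 and 1 actually carry cluster structure.
X = np.vstack([
    rng.normal([0.0, 0.0, 0.0, 0.0], 1.0, (150, 4)),
    rng.normal([4.0, 3.0, 0.0, 0.0], 1.0, (150, 4)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
w, mu = gmm.weights_, gmm.means_

grand_mean = w @ mu
between = w @ (mu - grand_mean) ** 2      # between-cluster variance per feature
relevance = between / X.var(axis=0)       # simple per-feature saliency proxy
print(np.round(relevance, 3))             # features 0 and 1 should dominate
```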
Abstract:
In this article, an overview of some of the latest developments in the field of cerebral cortex to computer interfacing (CCCI) is given. This is posed in the more general context of Brain-Computer Interfaces in order to assess advantages and disadvantages. The emphasis is clearly placed on practical studies that have been undertaken and reported on, as opposed to those speculated, simulated or proposed as future projects. Related areas are discussed briefly, only in the context of their contribution to the studies being undertaken. The area of focus is notably the use of invasive implant technology, where a connection is made directly with the cerebral cortex and/or nervous system. Tests and experiments that do not involve human subjects are invariably carried out a priori, to indicate the eventual possibilities before human subjects themselves become involved. Some of the more pertinent animal studies from this area are discussed. The paper goes on to describe human experimentation, in which neural implants have linked the human nervous system bidirectionally with technology and the internet. A view is taken as to the prospects for the future of CCCI, in terms of its broad therapeutic role.
Abstract:
A basic principle in data modelling is to incorporate available a priori information regarding the underlying data-generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: that the underlying data-generating mechanism exhibits a known symmetry, and that the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
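A minimal sketch of the symmetry case (our own construction, with ordinary least squares standing in for the orthogonal least squares algorithms the abstract mentions): symmetrising each Gaussian kernel makes every model in the class respect an even symmetry f(−x) = f(x) exactly:

```python
# Grey-box RBF sketch: encode a known even symmetry f(-x) = f(x) directly in
# the basis, so the prior holds for any fitted weights. Centres, width and
# the target function are illustrative; ordinary least squares stands in for
# the orthogonal least squares (OLS) selection used in the paper.
import numpy as np

def sym_rbf_design(x, centres, width):
    """phi_j(x) = exp(-(x-c_j)^2/w) + exp(-(x+c_j)^2/w), even in x by construction."""
    d_minus = (x[:, None] - centres[None, :]) ** 2
    d_plus = (x[:, None] + centres[None, :]) ** 2
    return np.exp(-d_minus / width) + np.exp(-d_plus / width)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.cos(2 * x) + 0.1 * rng.normal(size=200)   # even target plus noise

centres = np.linspace(0.0, 3.0, 8)
P = sym_rbf_design(x, centres, width=0.5)
theta, *_ = np.linalg.lstsq(P, y, rcond=None)

# The symmetry prior is satisfied exactly, whatever the data:
xt = np.array([1.3])
print(float(sym_rbf_design(xt, centres, 0.5) @ theta),
      float(sym_rbf_design(-xt, centres, 0.5) @ theta))  # identical values
```

Boundary value constraints can be handled in the same spirit, for instance by solving the least squares problem subject to linear equality constraints on the weights.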
Abstract:
In this paper we consider a cooperative communication system where some a priori information about the wireless channels is available at the transmitter. Several opportunistic relaying strategies are developed to fully utilize the available channel information. An explicit expression for the outage probability is then developed for each proposed cooperative scheme, as well as the diversity-multiplexing tradeoff, by using order statistics. Our analytical results show that the more channel information is available at the transmitter, the better the performance a cooperative system can achieve. When the exact values of the source-relay channels are available, the performance loss at low SNR can be effectively suppressed. When the source node has access to the source-relay and relay-destination channels, full diversity can be achieved at the cost of only one extra channel use for the relaying transmission, and an optimal diversity-multiplexing tradeoff of d(r) = (N + 1)(1 − 2r) can be achieved, where N is the number of possible relaying nodes.
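A Monte Carlo sketch conveys the flavour of the diversity gain from opportunistic relay selection (a simplified decode-and-forward bottleneck model with invented parameters, not the schemes analysed in the paper):

```python
# Outage of opportunistic relay selection vs a fixed relay, under Rayleigh
# fading with unit-mean channel gains. Simplified min-SNR decode-and-forward
# model with illustrative parameters, not the schemes analysed in the paper.
import numpy as np

rng = np.random.default_rng(0)
N, trials, snr_db, rate = 4, 200_000, 10.0, 1.0
snr = 10 ** (snr_db / 10)
threshold = 2 ** (2 * rate) - 1          # two channel uses per message

g_sr = rng.exponential(1.0, (trials, N))  # source->relay |h|^2 gains
g_rd = rng.exponential(1.0, (trials, N))  # relay->destination |h|^2 gains
e2e = snr * np.minimum(g_sr, g_rd)        # per-relay end-to-end bottleneck SNR

p_fixed = np.mean(e2e[:, 0] < threshold)       # always use relay 0
p_opp = np.mean(e2e.max(axis=1) < threshold)   # best relay, needs channel info
print(f"fixed relay outage:   {p_fixed:.4f}")
print(f"opportunistic (N={N}): {p_opp:.6f}")   # roughly p_fixed**N
```

Selecting the best of N independent relays drives the outage to roughly the N-th power of the single-relay outage, which is the diversity-order behaviour that the tradeoff d(r) = (N + 1)(1 − 2r) quantifies once the direct link is also counted.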