Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV become increasingly important. This study presents an empirical, data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method leads to a parsimonious but very effective model that characterizes the relationship between the changes in CBF and CBV.
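The errors-in-variables point can be illustrated numerically. The sketch below uses a hypothetical first-order ARX relation (not the paper's CBF/CBV model) with noise on both input and output, and contrasts ordinary least squares with classical TLS computed from the SVD of the augmented data matrix; the regularization step that distinguishes RTLS is deliberately omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a first-order ARX relation y[t] = a*y[t-1] + b*u[t]
# (illustrative coefficients, standing in for the CBF->CBV dynamics).
a_true, b_true = 0.8, 0.5
n = 500
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + b_true * u[t]

# Noise on BOTH regressors and output: the errors-in-variables setting.
y_obs = y + 0.05 * rng.standard_normal(n)
u_obs = u + 0.05 * rng.standard_normal(n)

A = np.column_stack([y_obs[:-1], u_obs[1:]])
rhs = y_obs[1:]

# Ordinary least squares (treats the regressors as noise-free).
x_ls, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Classical total least squares via the SVD of the augmented matrix [A rhs].
C = np.column_stack([A, rhs])
_, _, Vt = np.linalg.svd(C, full_matrices=False)
v = Vt[-1]                    # right singular vector of the smallest singular value
x_tls = -v[:-1] / v[-1]

print("LS :", x_ls)
print("TLS:", x_tls)
```

At this noise level both estimators recover the coefficients closely; the paper's point is that at realistic noise levels plain LS and TLS degrade, motivating the regularized variant.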
Abstract:
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
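As a concrete illustration of the ensemble Kalman filter machinery the abstract builds on, here is a minimal stochastic (perturbed-observation) EnKF for a scalar linear model. The model, noise levels, and ensemble size are hypothetical, and the continuous-time formulations the paper develops are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """One forecast step per ensemble member (illustrative linear model)."""
    return 0.95 * x + 0.1

n_ens, n_steps = 50, 40
obs_var = 0.5 ** 2
truth = 2.0                                # fixed point of the model
ens = rng.standard_normal(n_ens)           # initial ensemble (deliberately offset)

for _ in range(n_steps):
    truth = model(truth)
    ens = model(ens) + 0.1 * rng.standard_normal(n_ens)   # forecast + model noise
    y = truth + np.sqrt(obs_var) * rng.standard_normal()  # noisy observation

    # Stochastic ensemble Kalman update with perturbed observations (H = 1).
    P = np.var(ens, ddof=1)                 # ensemble forecast variance
    K = P / (P + obs_var)                   # scalar Kalman gain
    y_pert = y + np.sqrt(obs_var) * rng.standard_normal(n_ens)
    ens = ens + K * (y_pert - ens)

print("analysis mean:", ens.mean(), "truth:", truth)
```

Each assimilation cycle shrinks the ensemble spread and pulls the ensemble mean toward the truth; the continuous formulations discussed in the paper replace this discrete update with a dynamical-systems flow.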
Abstract:
An initial validation of the Along Track Scanning Radiometer (ATSR) Reprocessing for Climate (ARC) retrievals of sea surface temperature (SST) is presented. ATSR-2 and Advanced ATSR (AATSR) SST estimates are compared to drifting buoy and moored buoy observations over the period 1995 to 2008. The primary ATSR estimates are of skin SST, whereas buoys measure SST below the surface. Adjustment is therefore made for the skin effect, for diurnal stratification and for differences in buoy–satellite observation time. With such adjustments, satellite–in situ differences are consistent between day and night within ~0.01 K. Satellite–in situ differences are correlated with differences in observation time, because of the diurnal warming and cooling of the ocean. The data are used to verify the average behaviour of physical and empirical models of the warming/cooling rates. Systematic differences between adjusted AATSR and in situ SSTs against latitude, total column water vapour (TCWV), and wind speed are less than 0.1 K for all except the most extreme cases (TCWV < 5 kg m–2, TCWV > 60 kg m–2). For all types of retrieval except the nadir-only two-channel (N2), regional biases are less than 0.1 K for 80% of the ocean. Global comparison against drifting buoys shows night-time dual-view two-channel (D2) SSTs are warm by 0.06 ± 0.23 K and dual-view three-channel (D3) SSTs are warm by 0.06 ± 0.21 K (daytime D2: 0.07 ± 0.23 K). Nadir-only results are N2: 0.03 ± 0.33 K and N3: 0.03 ± 0.19 K, showing improved inter-algorithm consistency to ~0.02 K. This represents a marked improvement over the existing operational retrieval algorithms, for which the inter-algorithm inconsistency is > 0.5 K. Comparison against tropical moored buoys, which are more accurate than drifting buoys, gives lower error estimates (N3: 0.02 ± 0.13 K, D2: 0.03 ± 0.18 K). Comparable results are obtained for ATSR-2, except that the ATSR-2 SSTs are around 0.1 K warm compared to AATSR.
Abstract:
In Britain, substantial cuts in police budgets alongside controversial handling of incidents such as politically sensitive enquiries, public disorder and relations with the media have recently triggered much debate about public knowledge and trust in the police. To date, however, little academic research has investigated how knowledge of police performance impacts citizens’ trust. We address this long-standing lacuna by exploring citizens’ trust before and after exposure to real performance data in the context of a British police force. The results reveal that being informed of performance data affects citizens’ trust significantly. Furthermore, direction and degree of change in trust are related to variations across the different elements of the reported performance criteria. Interestingly, the volatility of citizens’ trust is related to initial performance perceptions (such that citizens with low initial perceptions of police performance react more significantly to evidence of both good and bad performance than citizens with high initial perceptions), and citizens’ intentions to support the police do not always correlate with their cognitive and affective trust towards the police. In discussing our findings, we explore the implications of how being transparent with performance data can both hinder and be helpful in developing citizens’ trust towards a public organisation such as the police. From our study, we pose a number of ethical challenges that practitioners face when deciding what data to highlight, to whom, and for what purpose.
Abstract:
The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. 
Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
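The post-analysis diagnostic described above can be sketched in a few lines. This is a minimal synthetic version (identity observation operator and analysis computed with the known true covariances, both simplifying assumptions): the observation-error covariance is estimated from the cross-statistic of analysis and background departures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-channel example with correlated observation errors.
R_true = np.array([[1.0, 0.6],
                   [0.6, 1.0]])           # true observation-error covariance
B = np.eye(2) * 2.0                       # background-error covariance (H = I)
L = np.linalg.cholesky(R_true)

n = 20000
truth = rng.standard_normal((n, 2))
xb = truth + np.sqrt(2.0) * rng.standard_normal((n, 2))   # background draws
y = truth + rng.standard_normal((n, 2)) @ L.T             # correlated obs errors

# Linear analysis using the true covariances (as the diagnostic assumes).
K = B @ np.linalg.inv(B + R_true)
d_b = y - xb                              # background departures
xa = xb + d_b @ K.T
d_a = y - xa                              # analysis departures

# Departure-based diagnostic: the cross-statistic of analysis and
# background departures estimates the observation-error covariance R.
R_diag = d_a.T @ d_b / n
print(R_diag)
```

The estimate recovers both the variances and the inter-channel correlation; in the operational setting the paper describes, the same statistic is accumulated from 1D-Var or 4D-Var departures rather than synthetic draws.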
Abstract:
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflow and to support the curation of data products in the STFC Centre for Environmental Data Archival (CEDA). The initial JASMIN configuration and first experiences are reported here. Useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.
Abstract:
This document outlines a practical strategy for achieving an observationally based quantification of direct climate forcing by anthropogenic aerosols. The strategy involves a four-step program for shifting the current assumption-laden estimates to an increasingly empirical basis using satellite observations coordinated with suborbital remote and in situ measurements and with chemical transport models. Conceptually, the problem is framed as a need for complete global mapping of four parameters: clear-sky aerosol optical depth δ, radiative efficiency per unit optical depth E, fine-mode fraction of optical depth f_f, and the anthropogenic fraction of the fine mode f_af. The first three parameters can be retrieved from satellites, but correlative, suborbital measurements are required for quantifying the aerosol properties that control E, for validating the retrieval of f_f, and for partitioning fine-mode δ between natural and anthropogenic components. The satellite focus is on the "A-Train," a constellation of six spacecraft that will fly in formation from about 2005 to 2008. Key satellite instruments for this report are the Moderate Resolution Imaging Spectroradiometer (MODIS) and Clouds and the Earth's Radiant Energy System (CERES) radiometers on Aqua, the Ozone Monitoring Instrument (OMI) radiometer on Aura, the Polarization and Directionality of Earth's Reflectances (POLDER) polarimeter on the Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL) satellite, and the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) lidar on the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite. This strategy is offered as an initial framework—subject to improvement over time—for scientists around the world to participate in the A-Train opportunity.
It is a specific implementation of the Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) program, presented earlier in this journal, which identified the integration of diverse data as the central challenge to progress in quantifying global-scale aerosol effects. By designing a strategy around this need for integration, we develop recommendations for both satellite data interpretation and correlative suborbital activities that represent, in many respects, departures from current practice.
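The four-parameter framing lends itself to a one-line arithmetic check: to first order, the anthropogenic clear-sky direct forcing is the product of the four mapped quantities. The numbers below are purely illustrative placeholders, not values from the strategy document.

```python
# Illustrative arithmetic only: hypothetical global-mean values.
delta = 0.12      # clear-sky aerosol optical depth
E = -25.0         # radiative efficiency per unit optical depth, W m^-2
ff = 0.5          # fine-mode fraction of optical depth
faf = 0.6         # anthropogenic fraction of the fine mode

# Anthropogenic direct forcing as the product of the four mapped parameters.
forcing = E * delta * ff * faf
print(f"anthropogenic direct forcing ~ {forcing:.2f} W m^-2")
```

The decomposition makes explicit which factors satellites constrain (δ, E, f_f) and which require suborbital measurements (f_af).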
Abstract:
A potential problem with the ensemble Kalman filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, in which the Gaussian assumption is not made, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While common knowledge is that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only on the order of 10–100 particles for large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments we found that the ensemble mean follows the truth reliably, and the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
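For readers unfamiliar with the weight/resample machinery that the proposal density is designed to improve, a minimal bootstrap-style particle filter looks as follows. The scalar toy model and noise levels are hypothetical; the paper's filter replaces the blind proposal used here with one informed by the observations, precisely to keep the weights near-uniform.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Toy nonlinear forward model (illustrative only)."""
    return x + 0.1 * np.sin(x)

n_part, n_steps, obs_std = 100, 30, 0.3
truth = 1.0
parts = rng.standard_normal(n_part)        # initial particle cloud

for _ in range(n_steps):
    truth = model(truth)
    parts = model(parts) + 0.1 * rng.standard_normal(n_part)  # propagate
    y = truth + obs_std * rng.standard_normal()               # observation

    # Importance weights from the Gaussian observation likelihood.
    logw = -0.5 * ((y - parts) / obs_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Systematic resampling to fight weight degeneracy.
    positions = (rng.random() + np.arange(n_part)) / n_part
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_part - 1)
    parts = parts[idx]

print("posterior mean:", parts.mean(), "truth:", truth)
```

With a blind proposal the weights collapse quickly as the state dimension grows, which is exactly the degeneracy the engineered proposal density in the paper avoids.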
Abstract:
Urban land surface models (LSMs) are commonly evaluated for short periods (a few weeks to months) because of limited observational data. This makes it difficult to distinguish the impact of initial conditions on model performance or to consider the response of a model to a range of possible atmospheric conditions. Drawing on results from the first urban LSM comparison, these two issues are considered. Assessment shows that the initial soil moisture has a substantial impact on performance. Models initialised with soils that are too dry are not able to adjust their surface sensible and latent heat fluxes to realistic values until there is sufficient rainfall. Models initialised with soils that are too wet are not able to restrict their evaporation appropriately for periods in excess of a year. This has implications for short-term evaluation studies and implies the need for soil moisture measurements to improve data assimilation and model initialisation. In contrast, initial conditions influencing the thermal storage have a much shorter adjustment timescale than soil moisture. Most models partition too much of the radiative energy at the surface into the sensible heat flux, at the probable expense of the net storage heat flux.
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability, specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross-validation. The algorithms proceed in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model, with analysis-of-variance decomposition, from the input data; the second stage carries out joint weighted least-squares parameter estimation and rule selection using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the receiver operating characteristic curve, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
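The LOO-AUC criterion advocated above can be illustrated independently of the neurofuzzy machinery. The sketch below uses a nearest-class-mean scorer on hypothetical imbalanced data (standing in for the paper's rule base) to compute leave-one-out scores, and evaluates the AUC via the rank-sum statistic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Imbalanced toy data: 90 negatives, 10 positives (illustrative only).
X = np.vstack([rng.standard_normal((90, 2)),
               rng.standard_normal((10, 2)) + 1.5])
y = np.r_[np.zeros(90), np.ones(10)]

def loo_scores(X, y):
    """Leave-one-out score for each sample from a nearest-class-mean rule."""
    scores = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i            # hold sample i out
        mu0 = X[mask & (y == 0)].mean(axis=0)
        mu1 = X[mask & (y == 1)].mean(axis=0)
        # Signed score: closer to the positive-class mean gives a higher score.
        scores[i] = np.linalg.norm(X[i] - mu0) - np.linalg.norm(X[i] - mu1)
    return scores

def auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), (1 - y).sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

val = auc(loo_scores(X, y), y)
print("LOO AUC:", val)
```

Unlike raw accuracy, the AUC is insensitive to the 9:1 class imbalance, which is why the paper advocates it (or the F-measure) as the LOO selection criterion.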
Abstract:
Low self-esteem is a common, disabling, and distressing problem that has been shown to be involved in the etiology and maintenance of a range of Axis I disorders. Hence, it is a priority to develop effective treatments for low self-esteem. A cognitive-behavioral conceptualization of low self-esteem has been proposed and a cognitive-behavioral treatment (CBT) program described (Fennell, 1997, 1999). As yet there has been no systematic evaluation of this treatment with routine clinical populations. The current case report describes the assessment, formulation, and treatment of a patient with low self-esteem, depression, and anxiety symptoms. At the end of treatment (12 sessions over 6 months), and at 1-year follow-up, the treatment showed large effect sizes on measures of depression, anxiety, and self-esteem. The patient no longer met diagnostic criteria for any psychiatric disorder, and showed reliable and clinically significant change on all measures. As far as we are aware, there are no other published case studies of CBT for low self-esteem that report pre- and posttreatment evaluations, or follow-up data. Hence, this case provides an initial contribution to the evidence base for the efficacy of CBT for low self-esteem. However, further research is needed to confirm the efficacy of CBT for low self-esteem and to compare its efficacy and effectiveness to alternative treatments, including diagnosis-specific CBT protocols.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Recent studies have shown that features extracted from brain MRIs can discriminate well between Alzheimer's disease and Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem with the one-versus-one strategy. Although several approaches based on the extraction of Regions of Interest (ROIs) exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machine-based wrapper improves the accuracy of binary classification.
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least-squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
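The least-squares framing admits a compact toy illustration. For a linear observation operator the variational cost is quadratic and the minimiser is available in closed form; the two-dimensional covariances and observation below are illustrative only, standing in for the large systems the article discusses.

```python
import numpy as np

# Minimal 3D-Var sketch: quadratic cost with a linear observation operator,
# solved directly (a toy stand-in for the iterative solvers used operationally).
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # background-error covariance
R = np.eye(1) * 0.25                # observation-error covariance
H = np.array([[1.0, 0.0]])          # observe the first state component
xb = np.array([0.0, 0.0])           # background state
y = np.array([1.0])                 # observation

def cost(x):
    """Variational cost: background term plus observation term."""
    db = x - xb
    do = y - H @ x
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

# Closed-form minimiser: xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + (K @ (y - H @ xb)).ravel()

print("analysis:", xa, "J(xa):", cost(xa))
```

Note how the off-diagonal term in B spreads the single observation's influence to the unobserved second component; choosing and applying B efficiently is one of the implementation choices the article examines.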
Abstract:
This article shows how one can formulate the representation problem starting from Bayes' theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure of adding a representation-error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called 'superobbing' does not resolve the issue. A connection is made to the off-line or on-line retrieval problem, providing a new simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation-operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.