44 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
Numerical weather prediction (NWP) centres use numerical models of the atmospheric flow to forecast future weather states from an estimate of the current state. Variational data assimilation (VAR) is commonly used to determine an optimal state estimate that minimizes the errors between observations of the dynamical system and model predictions of the flow. The rate of convergence of the VAR scheme and the sensitivity of the solution to errors in the data depend on the condition number of the Hessian of the variational least-squares objective function. The traditional formulation of VAR is ill-conditioned and hence leads to slow convergence and an inaccurate solution. In practice, operational NWP centres precondition the system via a control variable transform to reduce the condition number of the Hessian. In this paper we investigate the conditioning of VAR for a single, periodic, spatially-distributed state variable. We present theoretical bounds on the condition number of the original and preconditioned Hessians and hence demonstrate the improvement produced by the preconditioning. We also investigate theoretically the effect of observation position and error variance on the preconditioned system and show that the problem becomes more ill-conditioned with increasingly dense and accurate observations. Finally, we confirm the theoretical results in an operational setting by giving experimental results from the Met Office variational system.
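For reference, in the standard VAR notation that this abstract presupposes (background state x_b, observations y, background- and observation-error covariances B and R, observation operator H; none of these symbols are defined in the abstract itself), the objective function, its Hessian and the preconditioned Hessian take the form

J(x) = \tfrac{1}{2}(x - x_b)^\top B^{-1}(x - x_b) + \tfrac{1}{2}(y - Hx)^\top R^{-1}(y - Hx), \qquad S = B^{-1} + H^\top R^{-1} H,

\hat{S} = I + B^{\top/2} H^\top R^{-1} H B^{1/2}, \qquad \kappa = \lambda_{\max} / \lambda_{\min},

where the control variable transform v = B^{-1/2}(x - x_b) produces \hat{S}, and the condition number \kappa of S and \hat{S} is the quantity bounded in the paper.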
Abstract:
The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.
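The abstract gives no implementation detail for the classifiers; purely as an illustration of the kind of neural-network classification step involved, the toy sketch below trains a one-hidden-layer network on synthetic feature vectors (the features, dimensions and labels are placeholders, not MetSite's sequence-profile and structural inputs):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # stand-in feature vectors
y = (X[:, :3].sum(axis=1) > 0).astype(float)    # synthetic site/non-site labels

W1 = rng.normal(scale=0.5, size=(10, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.5, size=8); b2 = 0.0                 # output neuron
lr = 0.1

for _ in range(500):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid site probability
    g = (p - y) / len(y)                        # d(cross-entropy)/d(logit)
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1.0 - h**2)         # backpropagate through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

print("training accuracy:", float(((p > 0.5) == y).mean()))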
Abstract:
For linear multivariable time-invariant continuous or discrete-time singular systems it is customary to use a proportional feedback control in order to achieve a desired closed loop behaviour. Derivative feedback is rarely considered. This paper examines how derivative feedback in descriptor systems can be used to alter the structure of the system pencil under various controllability conditions. It is shown that derivative and proportional feedback controls can be constructed such that the closed loop system has a given form and is also regular and has index at most 1. This property ensures the solvability of the resulting system of dynamic-algebraic equations. The construction procedures used to establish the theory are based only on orthogonal matrix decompositions and can therefore be implemented in a numerically stable way. The problem of pole placement with derivative feedback alone and in combination with proportional state feedback is also investigated. A computational algorithm for improving the “conditioning” of the regularized closed loop system is derived.
Abstract:
We study the regularization problem for linear, constant coefficient descriptor systems Ex' = Ax+Bu, y1 = Cx, y2 = Γx' by proportional and derivative mixed output feedback. Necessary and sufficient conditions are given, which guarantee that there exist output feedbacks such that the closed-loop system is regular, has index at most one and E+BGΓ has a desired rank, i.e., there is a desired number of differential and algebraic equations. To resolve the freedom in the choice of the feedback matrices we then discuss how to obtain the desired regularizing feedback of minimum norm and show that this approach leads to useful results in the sense of robustness only if the rank of E is decreased. Numerical procedures are derived to construct the desired feedback gains. These numerical procedures are based on orthogonal matrix transformations which can be implemented in a numerically stable way.
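As a minimal numerical illustration of the rank idea (a toy sketch only, using full state-derivative feedback, i.e. Γ = I, rather than the orthogonal-transformation-based construction of the paper): with u = -Fx' + v the closed-loop pencil becomes (E + BF)x' = Ax + Bv, and F can be chosen to raise the rank of the leading matrix.

import numpy as np

E = np.diag([1.0, 1.0, 0.0])             # singular leading matrix, rank 2
A = np.eye(3)                            # arbitrary dynamics matrix
B = np.array([[0.0], [0.0], [1.0]])      # input reaches the deficient direction

F = np.array([[0.0, 0.0, 1.0]])          # hypothetical derivative feedback gain
E_cl = E + B @ F                         # closed-loop leading matrix

# rank rises from 2 to 3, so this closed loop is a regular (index-0) ODE
print(np.linalg.matrix_rank(E), "->", np.linalg.matrix_rank(E_cl))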
Abstract:
The construction sector is often described as lagging behind other major industries. At first this appears fair when considering the concept of corporate social responsibility (CSR). It is argued that CSR is ill-defined, with firms struggling to make sense of and engage with it. Literature suggests that the short-termism of construction firms renders the long-term, triple-bottom-line principle of CSR untenable. This seems to be borne out by literature indicating that construction firms typically adopt a compliance-based approach to CSR instead of discretionary CSR, which is regarded as adding the most value to firms and benefiting the broadest group of stakeholders. However, this research, conducted in the UK using a regional construction firm, offers a counter-argument whereby discretionary CSR approaches are well embedded and enacted within the firm's business operations even though they are not formally articulated as CSR strategies and thus remain 'hidden'. This raises two questions in the current CSR debate: first, is ‘hidden’ CSR relevant to the long-term success of construction firms? And second, to what extent do these firms need to reinvent themselves to take formal advantage of the CSR agenda?
Abstract:
This paper examined the incidence of intrafirm causal ambiguity in management's perceptions concerning the critical drivers of their firms’ performance. Building on insights from the resource-based view, we developed and tested hypotheses that examine (1) linkage ambiguity as a discrepancy between perceived and measured resource–performance linkages, (2) characteristic ambiguity for resources and capabilities with a high degree of complexity and tacitness, and (3) the negative association between linkage ambiguity and performance. The observations based on the explicit perceptions of 356 surveyed managers were contrasted with the empirical findings on the resource–performance relationship derived by structural equation modelling from the same data sample. The findings validate the presence of linkage ambiguity, particularly in the case of resources and capabilities with a higher degree of characteristic ambiguity. The findings also provide empirical evidence in support of the advocated negative relationship between intrafirm causal ambiguity and performance. The paper discusses the potential reasons for the disparities between empirical findings and management's perceptions of the key determinants of export success and makes recommendations for future research.
Abstract:
Global climate and weather models tend to produce rainfall that is too light and too regular over the tropical ocean. This is likely because of convective parametrizations, but the problem is not well understood. Here, distributions of precipitation rates are analyzed for high-resolution UK Met Office Unified Model simulations of a 10 day case study over a large tropical domain (∼20°S–20°N and 42°E–180°E). Simulations with 12 km grid length and parametrized convection have too many occurrences of light rain and too few of heavier rain when interpolated onto a 1° grid and compared with Tropical Rainfall Measuring Mission (TRMM) data. In fact, this version of the model appears to have a preferred scale of rainfall around 0.4 mm h⁻¹ (10 mm day⁻¹), unlike observations of tropical rainfall. On the other hand, 4 km grid length simulations with explicit convection produce distributions much more similar to TRMM observations. The apparent preferred scale at lighter rain rates seems to be a feature of the convective parametrization rather than the coarse resolution, as demonstrated by results from 12 km simulations with explicit convection and 40 km simulations with parametrized convection. In fact, coarser resolution models with explicit convection tend to have even more heavy rain than observed. Implications for models using convective parametrizations, including interactions of heating and moistening profiles with larger scales, are discussed. One important implication is that the explicit convection 4 km model has temperature and moisture tendencies that favour transitions in the convective regime. Also, the 12 km parametrized convection model produces a more stable temperature profile at its extreme high-precipitation range, which may reduce the chance of very heavy rainfall. Further study is needed to determine whether unrealistic precipitation distributions are due to some fundamental limitation of convective parametrizations or whether parametrizations can be improved, in order to better simulate these distributions.
Abstract:
Considerable efforts are currently invested into the setup of a Global Climate Observing System (GCOS) for monitoring climate change over the coming decades, which is of high relevance given concerns about increasing human influence. A promising potential contribution to the GCOS is a suite of spaceborne Global Navigation Satellite System (GNSS) occultation sensors for global long-term monitoring of atmospheric change in temperature and other variables with high vertical resolution and accuracy. Besides the great importance with respect to climate change, the provision of high-quality data is essential for the improvement of numerical weather prediction and for reanalysis efforts. We review the significance of GNSS radio occultation sounding in the climate observations context. In order to investigate the climate change detection capability of GNSS occultation sensors, we are currently performing an end-to-end GNSS occultation observing system simulation experiment over the 25-year period 2001 to 2025. We report on this integrated analysis, which involves, in a realistic manner, all aspects from modeling the atmosphere, via generating a significant set of simulated measurements, to an objective statistical analysis and assessment of 2001–2025 temporal trends.
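At its very simplest, the trend-assessment step can be pictured as a least-squares fit with an uncertainty estimate; the sketch below does this on synthetic annual means (the trend magnitude and noise level are hypothetical, and the actual experiment involves a far more elaborate end-to-end analysis):

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2001, 2026)                       # the 25-year period
trend_true = 0.03                                   # K/year, hypothetical
T = trend_true * (years - years[0]) + rng.normal(0.0, 0.2, size=years.size)

# Least-squares fit T = a + b*t with a standard error for the slope b.
t = years - years.mean()                            # centred time axis
b = (t @ T) / (t @ t)
resid = T - T.mean() - b * t
se_b = np.sqrt(resid @ resid / (t.size - 2) / (t @ t))
print(f"trend = {b:.4f} +/- {2 * se_b:.4f} K/year (2-sigma)")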
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared: for example, those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is present in the classification problem on magnetic field measurements as well; this is independent of the particular working mode of the cell but is influenced by the type of faulty behavior studied. The numerical results demonstrate this ill-posedness through the exponential decay of the singular values for three examples of fault classes.
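The abstract names Fisher's linear discriminant and regularization without implementation detail; a minimal sketch of a Tikhonov-regularized Fisher discriminant on synthetic measurement vectors (the class structure, dimensions and the parameter lam are placeholders, not the FIT-simulated setup of the paper) might look as follows:

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(50, 20))   # class 1: nominal behavior
X2 = rng.normal(0.5, 1.0, size=(50, 20))   # class 2: faulty behavior

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter

lam = 1e-2                                 # Tikhonov regularization parameter
w = np.linalg.solve(Sw + lam * np.eye(Sw.shape[0]), m1 - m2)
threshold = 0.5 * w @ (m1 + m2)            # midpoint decision rule

def classify(x):
    """Class 1 if the projection w·x falls on the class-1 side."""
    return 1 if w @ x > threshold else 2

print(sum(classify(x) == 1 for x in X1), "of", len(X1), "class-1 vectors correct")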
Abstract:
Sea surface temperature (SST) measurements are required by operational ocean and atmospheric forecasting systems to constrain modeled upper ocean circulation and thermal structure. The Global Ocean Data Assimilation Experiment (GODAE) High Resolution SST Pilot Project (GHRSST-PP) was initiated to address these needs by coordinating the provision of accurate, high-resolution, SST products for the global domain. The pilot project is now complete, but activities continue within the Group for High Resolution SST (GHRSST). The pilot project focused on harmonizing diverse satellite and in situ data streams that were indexed, processed, quality controlled, analyzed, and documented within a Regional/Global Task Sharing (R/GTS) framework implemented in an internationally distributed manner. Data with meaningful error estimates developed within GHRSST are provided by services within R/GTS. Currently, several terabytes of data are processed at international centers daily, creating more than 25 gigabytes of product. Ensemble SST analyses together with anomaly SST outputs are generated each day, providing confidence in SST analyses via diagnostic outputs. Diagnostic data sets are generated and Web interfaces are provided to monitor the quality of observation and analysis products. GHRSST research and development projects continue to tackle problems of instrument calibration, algorithm development, diurnal variability, skin temperature deviation, and validation/verification of GHRSST products. GHRSST also works closely with applications and users, providing a forum for discussion and feedback between SST users and producers on a regular basis. All data within the GHRSST R/GTS framework are freely available. This paper reviews the progress of GHRSST-PP, highlighting achievements that have been fundamental to the success of the pilot project.
Abstract:
The results of coupled high-resolution global models (CGCMs) over South America are discussed. HiGEM1.2 and HadGEM1.2 simulations, with horizontal resolutions of ~90 and 135 km, respectively, are compared. Precipitation estimates from CMAP (Climate Prediction Center—Merged Analysis of Precipitation), CPC (Climate Prediction Center) and GPCP (Global Precipitation Climatology Project) are used for validation. HiGEM1.2 and HadGEM1.2 simulated seasonal mean precipitation spatial patterns similar to those of CMAP. The positioning and migration of the Intertropical Convergence Zone and of the Pacific and Atlantic subtropical highs are correctly simulated by the models. In HiGEM1.2 and HadGEM1.2, the intensity and location of the South Atlantic Convergence Zone are in agreement with the observed dataset. The simulated annual cycles are in phase with rainfall estimates for most of the six regions considered. An important result is that HiGEM1.2 and HadGEM1.2 eliminate a common problem of coarse-resolution CGCMs, namely the simulation of a semiannual cycle of precipitation due to the semiannual solar forcing. Comparatively, the use of high resolution in HiGEM1.2 reduces the dry biases in the central part of Brazil during austral winter and spring and, during most of the year, over an oceanic box east of Uruguay.
Abstract:
Flooding is a particular hazard in urban areas worldwide due to the increased risks to life and property in these regions. Synthetic Aperture Radar (SAR) sensors are often used to image flooding because of their all-weather day-night capability, and now possess sufficient resolution to image urban flooding. The flood extents extracted from the images may be used for flood relief management and improved urban flood inundation modelling. A difficulty with using SAR for urban flood detection is that, due to its side-looking nature, substantial areas of urban ground surface may not be visible to the SAR due to radar layover and shadow caused by buildings and taller vegetation. This paper investigates whether urban flooding can be detected in layover regions (where flooding may not normally be apparent) using double scattering between the (possibly flooded) ground surface and the walls of adjacent buildings. The method estimates double scattering strengths using a SAR image in conjunction with a high resolution LiDAR (Light Detection and Ranging) height map of the urban area. A SAR simulator is applied to the LiDAR data to generate maps of layover and shadow, and estimate the positions of double scattering curves in the SAR image. Observations of double scattering strengths were compared to the predictions from an electromagnetic scattering model, for both the case of a single image containing flooding, and a change detection case in which the flooded image was compared to an un-flooded image of the same area acquired with the same radar parameters. The method proved successful in detecting double scattering due to flooding in the single-image case, for which flooded double scattering curves were detected with 100% classification accuracy (albeit using a small sample set) and un-flooded curves with 91% classification accuracy. The same measures of success were achieved using change detection between flooded and un-flooded images. Depending on the particular flooding situation, the method could lead to improved detection of flooding in urban areas.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. Often the minimisation is non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning of the Hessian by changing, or preconditioning, the Hessian. The literature contains only sparse information describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and preconditioned systems in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
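In that spirit, one possible toy version of such a numerical experiment is sketched below (assumptions: a periodic domain, a hypothetical circulant exponential background-error correlation, and direct observations of every fifth grid point; this is not the configuration analysed in the paper):

import numpy as np

n, p = 100, 20                        # state dimension, number of observations
L, sigma_b, sigma_o = 5.0, 1.0, 0.1   # correlation scale, error std devs

# Circulant (periodic) exponential background-error covariance B.
d = np.minimum(np.arange(n), n - np.arange(n))
row = sigma_b**2 * np.exp(-d / L)
B = np.array([np.roll(row, i) for i in range(n)])

# Direct observation of every fifth grid point; accurate observations.
H = np.zeros((p, n)); H[np.arange(p), np.arange(p) * (n // p)] = 1.0
Rinv = np.eye(p) / sigma_o**2

S = np.linalg.inv(B) + H.T @ Rinv @ H             # unpreconditioned Hessian
Bh = np.linalg.cholesky(B)                        # one square root of B
S_hat = np.eye(n) + Bh.T @ H.T @ Rinv @ H @ Bh    # preconditioned Hessian

# Compare the two condition numbers for this configuration.
print("kappa(S)     =", np.linalg.cond(S))
print("kappa(S_hat) =", np.linalg.cond(S_hat))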