873 results for Data compression (Electronic computers)
Abstract:
A series of half-sandwich bis(phosphine) ruthenium acetylide complexes [Ru(C≡CAr)(L2)Cp'] (Ar = phenyl, p-tolyl, 1-naphthyl, 9-anthryl; L2 = (PPh3)2, Cp' = Cp; L2 = dppe, Cp' = Cp*) have been examined using electrochemical and spectroelectrochemical methods. One-electron oxidation of these complexes gave the corresponding radical cations [Ru(C≡CAr)(L2)Cp']+. Those cations based on Ru(dppe)Cp*, or which feature a para-tolyl acetylide substituent, are more chemically robust than examples featuring the Ru(PPh3)2Cp moiety, permitting good quality UV-Vis-NIR and IR spectroscopic data to be obtained using spectroelectrochemical methods. On the basis of TD-DFT calculations, the low-energy (NIR) absorption bands in the experimental electronic spectra for most of these radical cations are assigned to transitions between the β-HOSO and β-LUSO, both of which have appreciable metal d and ethynyl π character. However, the large contribution from the anthryl moiety to the frontier orbitals of [Ru(C≡CC14H9)(L2)Cp']+ suggests compounds containing this moiety should be described as metal-stabilised anthryl radical cations.
Abstract:
Stepwise electrochemical reduction of the complex fac-[Mn(Br)(CO)3(tmbp)] (tmbp = 4,4',5,5'-tetramethyl-2,2'-biphosphinine) produces the dimer [Mn(CO)3(tmbp)]2 and the five-coordinate anion [Mn(CO)3(tmbp)]−. All three members of the redox series have been characterized by single-crystal X-ray diffraction. The crystallographic data provide valuable insight into the localization of the added electrons on the (carbonyl)manganese and tmbp centers. In particular, the formulation of the two-electron-reduced anion as [Mn0(CO)3(tmbp−)]− also agrees with the analysis of its IR ν(CO) wavenumbers and with the results of density functional theoretical (DFT) MO calculations on this compound. The strongly delocalized π-bonding in the anion stabilizes its five-coordinate geometry and results in the appearance of several mixed Mn-to-tmbp charge-transfer/IL(tmbp) transitions in the near-UV-vis spectral region. A thorough voltammetric and UV-vis/IR spectroelectrochemical study of the reduction path provided evidence for a direct formation of [Mn(CO)3(tmbp)]− via a two-electron ECE mechanism involving the [Mn(CO)3(tmbp)]• radical transient. At ambient temperature [Mn(CO)3(tmbp)]− reacts rapidly with nonreduced fac-[Mn(Br)(CO)3(tmbp)] to produce [Mn(CO)3(tmbp)]2. Comparison with the analogous 2,2'-bipyridine complexes has revealed striking similarity in the bonding properties and reactivity, despite the stronger π-acceptor character of the tmbp ligand.
Abstract:
In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.
Abstract:
Four-dimensional variational data assimilation (4D-Var) is used in environmental prediction to estimate the state of a system from measurements. When 4D-Var is applied in the context of high resolution nested models, problems may arise in the representation of spatial scales longer than the domain of the model. In this paper we study how well 4D-Var is able to estimate the whole range of spatial scales present in one-way nested models. Using a model of the one-dimensional advection–diffusion equation we show that small spatial scales that are observed can be captured by a 4D-Var assimilation, but that information in the larger scales may be degraded. We propose a modification to 4D-Var which allows a better representation of these larger scales.
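For orientation, a minimal sketch of a strong-constraint 4D-Var cost function for a 1D advection-diffusion model follows (Python/NumPy). The grid size, model constants, and the assumption of a fully observed state at every step are illustrative choices only, not the nested-model configuration studied in the paper.

    import numpy as np
    from scipy.optimize import minimize

    n, nt = 50, 10                           # grid points, steps in the window (assumed)
    dx, dt, u, kappa = 1.0, 0.1, 1.0, 0.5    # assumed advection-diffusion constants

    def step(x):
        """One explicit step of dc/dt + u dc/dx = kappa d2c/dx2 on a periodic grid."""
        adv = -u * (np.roll(x, -1) - np.roll(x, 1)) / (2 * dx)
        diff = kappa * (np.roll(x, -1) - 2 * x + np.roll(x, 1)) / dx**2
        return x + dt * (adv + diff)

    def cost(x0, xb, Binv, obs, Rinv):
        """4D-Var cost: background misfit plus observation misfits over the window."""
        J = 0.5 * (x0 - xb) @ Binv @ (x0 - xb)
        x = x0.copy()
        for y in obs:                        # one (fully observed) field per step, assumed
            x = step(x)
            J += 0.5 * (x - y) @ Rinv @ (x - y)
        return J

    # Synthetic twin experiment: observe a propagated sine wave with small noise.
    xb, Binv, Rinv = np.zeros(n), np.eye(n), np.eye(n)
    x = np.sin(2 * np.pi * np.arange(n) / n)
    obs = []
    for _ in range(nt):
        x = step(x)
        obs.append(x + 0.01 * np.random.randn(n))
    # Real 4D-Var minimises with an adjoint model; a finite-difference gradient is used here.
    analysis = minimize(cost, xb, args=(xb, Binv, obs, Rinv), method="L-BFGS-B").x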
Abstract:
We present a novel algorithm for joint state-parameter estimation using sequential three-dimensional variational data assimilation (3D-Var) and demonstrate its application in the context of morphodynamic modelling using an idealised two-parameter 1D sediment transport model. The new scheme combines a static representation of the state background error covariances with a flow-dependent approximation of the state-parameter cross-covariances. For the case presented here, this involves calculating a local finite-difference approximation of the gradient of the model with respect to the parameters. The new method is easy to implement and computationally inexpensive to run. Experimental results are positive, with the scheme able to recover the model parameters to a high level of accuracy. We expect that there is potential for successful application of this new methodology to larger, more realistic models with more complex parameterisations.
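A rough sketch of the augmented-state idea, with a finite-difference estimate of the state-parameter cross-covariance, is given below (Python/NumPy). The placeholder forecast model, parameter names, and covariance shapes are assumptions made for illustration, not the sediment transport model of the paper.

    import numpy as np

    def forecast(x, p, dt=0.1):
        """Placeholder forecast model x_{k+1} = M(x_k, p) with two parameters (assumed form)."""
        return x + dt * (p[0] * np.gradient(x) + p[1])

    def augmented_B(x, p, Bx, Bp, eps=1e-4):
        """Background covariance for the augmented state z = [x, p]: static Bx and Bp,
        flow-dependent cross block built from a finite-difference model sensitivity."""
        n, m = x.size, p.size
        S = np.zeros((n, m))
        base = forecast(x, p)
        for j in range(m):
            dp = p.copy()
            dp[j] += eps
            S[:, j] = (forecast(x, dp) - base) / eps    # dM/dp_j by finite differences
        Bxp = S @ Bp                                     # state-parameter cross-covariance
        return np.block([[Bx, Bxp], [Bxp.T, Bp]])

    def threedvar_update(z_b, B, H, R, y):
        """Standard 3D-Var/BLUE analysis step applied to the augmented state."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return z_b + K @ (y - H @ z_b)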
Abstract:
Measured process data normally contain inaccuracies because the measurements are obtained using imperfect instruments. As well as random errors, one can expect systematic bias caused by miscalibrated instruments, and outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper, techniques for the detection and identification of both systematic bias and outliers in dynamic process data are presented. A novel technique for the detection and identification of systematic bias is formulated and presented. The problem of detection, identification and elimination of outliers is also treated, using a modified version of a previously available clustering technique. These techniques are then combined to provide a global dynamic data reconciliation (DDR) strategy. The algorithms presented are tested in isolation and in combination using dynamic simulations of two continuous stirred tank reactors (CSTRs).
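As a point of reference, the classical linear, steady-state form of data reconciliation can be written in a few lines (Python/NumPy). This textbook formulation with balance constraints A x = 0 only illustrates the general idea; it is not the dynamic DDR algorithm developed in the paper, and the example flows are invented.

    import numpy as np

    def reconcile(y, V, A):
        """Weighted least-squares reconciliation: minimise (x - y)' V^-1 (x - y)
        subject to the linear balance constraints A x = 0.
        y : measurements, V : measurement error covariance, A : constraint matrix."""
        adjustment = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
        x_hat = y - adjustment
        residuals = A @ y            # large normalised residuals hint at bias or outliers
        return x_hat, residuals

    # Example: three flows around a splitter must satisfy f1 = f2 + f3.
    A = np.array([[1.0, -1.0, -1.0]])
    V = np.diag([0.1, 0.1, 0.1])
    y = np.array([10.3, 6.1, 3.9])
    x_hat, r = reconcile(y, V, A)    # reconciled flows satisfy the balance exactly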
Abstract:
The background error covariance matrix, B, is often used in variational data assimilation for numerical weather prediction as a static and hence poor approximation to the fully dynamic forecast error covariance matrix, Pf. In this paper the concept of an Ensemble Reduced Rank Kalman Filter (EnRRKF) is outlined. In the EnRRKF the forecast error statistics in a subspace defined by an ensemble of states forecast by the dynamic model are found. These statistics are merged in a formal way with the static statistics, which apply in the remainder of the space. The combined statistics may then be used in a variational data assimilation setting. It is hoped that the nonlinear error growth of small-scale weather systems will be accurately captured by the EnRRKF, to produce accurate analyses and ultimately improved forecasts of extreme events.
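A highly simplified stand-in for the merging step is sketched below (Python/NumPy): ensemble statistics are used inside the subspace spanned by the ensemble perturbations and the static covariance elsewhere. The projector-based construction and all array shapes are assumptions made for illustration; rank deficiencies and the details of the actual EnRRKF formulation are ignored.

    import numpy as np

    def merged_covariance(ensemble, B_static):
        """ensemble : n x m array of forecast states (m members),
        B_static : n x n static background error covariance."""
        X = ensemble - ensemble.mean(axis=1, keepdims=True)   # perturbations
        Pf_ens = X @ X.T / (X.shape[1] - 1)                    # ensemble covariance
        Q, _ = np.linalg.qr(X)                                 # basis of the ensemble subspace
        P_sub = Q @ Q.T                                        # projector onto that subspace
        P_perp = np.eye(X.shape[0]) - P_sub
        # Ensemble statistics inside the subspace, static statistics in the remainder.
        return P_sub @ Pf_ens @ P_sub + P_perp @ B_static @ P_perp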
Abstract:
New ways of combining observations with numerical models are discussed, in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte-Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that turn out to be useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
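For context, a bare-bones sequential importance resampling (SIR) particle filter step looks as follows (Python/NumPy; the model and obs_lik callables and the resampling threshold are placeholder assumptions). The proposal-density variants discussed above replace the pure model step by a draw from a proposal that steers particles towards the observations, with the weights corrected by the ratio of transition to proposal densities.

    import numpy as np

    def sir_step(particles, weights, y, model, obs_lik, rng):
        """One cycle of the standard (pure Monte-Carlo) particle filter."""
        particles = np.array([model(p, rng) for p in particles])          # propagate each member
        weights = weights * np.array([obs_lik(y, p) for p in particles])  # weight by likelihood
        weights = weights / weights.sum()
        n = len(particles)
        if 1.0 / np.sum(weights ** 2) < 0.5 * n:                          # effective size collapsed
            idx = rng.choice(n, size=n, p=weights)                        # resample
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        return particles, weights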
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid the limitations of high-level, top-down modeling efforts and to increase result accuracy, the focus was placed on device details and data routes. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of server efficiency and utilization methods.
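The overall structure of such a calculation can be illustrated with a toy per-download footprint model (Python); every coefficient below is an assumed placeholder, not a figure from the study.

    def download_footprint(file_gb,
                           server_kwh_per_gb=0.05,    # data center share (assumed)
                           network_kwh_per_gb=0.20,   # transfer energy over all hops (assumed)
                           client_kwh_per_gb=0.02,    # browsing and downloading device (assumed)
                           grid_kgco2e_per_kwh=0.5):  # grid emission factor (assumed)
        """Energy (kWh) and emissions (kg CO2e) attributed to one download."""
        kwh = file_gb * (server_kwh_per_gb + network_kwh_per_gb + client_kwh_per_gb)
        return kwh, kwh * grid_kgco2e_per_kwh

    energy_kwh, kg_co2e = download_footprint(1.2)      # e.g. a 1.2 GB software download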
Abstract:
It is widely recognized that small businesses with fewer than 50 employees make significant contributions to the prosperity of local, regional, and national economies. They are a major source of job creation and a driving force of economic growth in developed countries such as the USA (Headd, 2005; SBA, 2005), the UK (Dixon, Thompson, & McAllister, 2002; SBS, 2005), and Europe (European Commission, 2003), and in developing countries such as China (Bo, 2005). This economic potential is further strengthened when firms collaborate with each other, for example through the formation of supply chains, strategic alliances, or the sharing of information and resources (Horvath, 2001; O’Donnell, Gilmore, Cummins, & Carson, 2001; MacGregor, 2004; Todeva & Knoke, 2005). Owing to the heterogeneity of small businesses in aspects such as firm size and business sector, a single e-business solution is unlikely to be suitable for all firms (Dixon et al., 2002; Taylor & Murphy, 2004a); however, collaboration requires individual firms to adopt standardized, simplified solutions based on open architectures and data design (Horvath, 2001). The purpose of this article is to propose a conceptual e-business framework and a generic e-catalogue that enable small businesses to collaborate through the creation of an e-marketplace. To support this, the study incorporates an analysis of data from 6,000 small businesses located in a part of Greater Manchester, England, collected in the context of an e-business portal.
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both observations and model estimates, with more weight given to data that can be more trusted. For any DA method, an estimate of the initial forecast error covariance matrix is required. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy, and more importantly the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of observations and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
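For reference, the serial ensemble square root filter update for a single scalar observation (the Whitaker-Hamill form, one common EnSRF variant) can be written as below (Python/NumPy). This generic sketch is not the idealised convective-column configuration used in the study.

    import numpy as np

    def ensrf_update(E, h, y, r):
        """E : n x m ensemble of states, h : observation operator vector (y_model = h @ x),
        y : observed value, r : observation error variance."""
        m = E.shape[1]
        xbar = E.mean(axis=1)
        X = E - xbar[:, None]                     # ensemble perturbations
        hX = h @ X                                # perturbations in observation space
        hpht = hX @ hX / (m - 1)                  # forecast variance of h x
        pht = X @ hX / (m - 1)                    # covariance of state with h x
        K = pht / (hpht + r)                      # Kalman gain
        xbar_a = xbar + K * (y - h @ xbar)        # mean update
        alpha = 1.0 / (1.0 + np.sqrt(r / (hpht + r)))
        Xa = X - alpha * np.outer(K, hX)          # deterministic perturbation update
        return Xa + xbar_a[:, None]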
Abstract:
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
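Schematically, the two cost functions being compared differ only in the regularisation term; a purely illustrative Python/NumPy sketch with a generic linear observation operator H follows.

    import numpy as np

    def cost_l2(x, xb, y, H, lam):
        """Tikhonov / L2: squared departure from the prior, which smooths sharp features."""
        return np.sum((H @ x - y) ** 2) + lam * np.sum((x - xb) ** 2)

    def cost_l1(x, xb, y, H, lam):
        """L1: absolute departure from the prior, which tolerates sharp, localised
        corrections such as fronts and shocks."""
        return np.sum((H @ x - y) ** 2) + lam * np.sum(np.abs(x - xb))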
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without adding further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. The advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
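The RAW filter itself is a small modification of leapfrog time stepping; a generic sketch follows (Python). The filter parameters nu and alpha are common literature choices, and the right-hand side f is an arbitrary placeholder rather than the SPEEDY model.

    import numpy as np

    def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
        """Leapfrog integration of dx/dt = f(x) with the Robert-Asselin-Williams filter."""
        x_prev = np.asarray(x0, dtype=float)
        x_curr = x_prev + dt * f(x_prev)                   # first step: forward Euler
        for _ in range(nsteps - 1):
            x_next = x_prev + 2 * dt * f(x_curr)           # leapfrog step
            d = 0.5 * nu * (x_prev - 2 * x_curr + x_next)  # filter displacement
            x_curr = x_curr + alpha * d                    # Robert-Asselin part
            x_next = x_next + (alpha - 1) * d              # Williams correction
            x_prev, x_curr = x_curr, x_next                # (alpha = 1 recovers the RA filter)
        return x_curr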
Abstract:
Naphthalene and anthracene transition metalates are potent reagents, but their electronic structures have remained poorly explored. A study of four Cp*-substituted iron complexes (Cp* = pentamethylcyclopentadienyl) now gives rare insight into the bonding features of such species. The highly oxygen- and water-sensitive compounds [K(18-crown-6){Cp*Fe(η4-C10H8)}] (K1), [K(18-crown-6){Cp*Fe(η4-C14H10)}] (K2), [Cp*Fe(η4-C10H8)] (1), and [Cp*Fe(η4-C14H10)] (2) were synthesized and characterized by NMR, UV-vis, and 57Fe Mössbauer spectroscopy. The paramagnetic complexes 1 and 2 were additionally characterized by electron paramagnetic resonance (EPR) spectroscopy and magnetic susceptibility measurements. The molecular structures of complexes K1, K2, and 2 were determined by single-crystal X-ray crystallography. Cyclic voltammetry of 1 and 2 and spectroelectrochemical experiments revealed the redox properties of these complexes, which are reversibly reduced to the monoanions [Cp*Fe(η4-C10H8)]− (1−) and [Cp*Fe(η4-C14H10)]− (2−) and reversibly oxidized to the cations [Cp*Fe(η6-C10H8)]+ (1+) and [Cp*Fe(η6-C14H10)]+ (2+). Reduced orbital charges and spin densities of the naphthalene complexes 1−/0/+ and the anthracene derivatives 2−/0/+ were obtained by density functional theory (DFT) methods. Analysis of these data suggests that the electronic structures of the anions 1− and 2− are best represented by low-spin FeII ions coordinated by anionic Cp* and dianionic naphthalene and anthracene ligands. The electronic structures of the neutral complexes 1 and 2 may be described by a superposition of two resonance configurations which, on the one hand, involve a low-spin FeI ion coordinated by the neutral naphthalene or anthracene ligand L, and, on the other hand, a low-spin FeII ion coordinated to a ligand radical L•−. Our study thus reveals the redox noninnocent character of the naphthalene and anthracene ligands, which effectively stabilize the iron atoms in a low formal, but significantly higher spectroscopic oxidation state.
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of increasingly complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described in separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team’s development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change’s (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.