41 results for Data-driven energy efficiency
Abstract:
This contribution introduces a new digital predistorter to compensate for the severe distortions caused by high power amplifiers (HPAs) with memory that exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inversion can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
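As a toy illustration of the inversion step described above, the sketch below uses Newton-Raphson to invert a generic saturating nonlinearity (a tanh stand-in, not the paper's B-spline model) — the operation a predistorter needs in order to find the input that yields a desired output amplitude.

```python
import math

# Hypothetical saturating amplitude nonlinearity (tanh-like compression),
# standing in for the HPA's static nonlinearity. Not from the paper.
def g(x):
    return math.tanh(x)

def dg(x):
    return 1.0 - math.tanh(x) ** 2

def invert_newton(y_target, x0=0.0, tol=1e-12, max_iter=50):
    """Solve g(x) = y_target for x via Newton-Raphson iterations."""
    x = x0
    for _ in range(max_iter):
        fx = g(x) - y_target
        if abs(fx) < tol:
            break
        x -= fx / dg(x)   # Newton step: x <- x - f(x) / f'(x)
    return x

# Predistortion idea: feed the amplifier the input whose output is the
# amplitude we actually want.
x = invert_newton(0.5)
```

Because the nonlinearity is monotonic on its useful range, the iteration converges in a handful of steps.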
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, including model-based approaches, data-driven approaches, and combinations of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem carries over to the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
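A minimal sketch of Fisher's linear discriminant on synthetic two-dimensional data (the two clusters stand in for healthy and faulty measurement vectors; all values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))  # "healthy" class
X1 = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(100, 2))  # "faulty" class

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter matrix
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
# Fisher direction: maximizes between-class over within-class scatter
w = np.linalg.solve(Sw, m1 - m0)

# Classify by projecting onto w and thresholding at the midpoint
threshold = w @ (m0 + m1) / 2.0
pred0 = X0 @ w > threshold   # should be mostly False
pred1 = X1 @ w > threshold   # should be mostly True
accuracy = (np.sum(~pred0) + np.sum(pred1)) / 200.0
```

Real magnetic-field measurement vectors are much higher-dimensional, which is where the ill-posedness discussed in the abstract enters.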
Abstract:
Effective public policy to mitigate climate change footprints should build on data-driven analysis of firm-level strategies. This article’s conceptual approach augments the resource-based view (RBV) of the firm and identifies investments in four firm-level resource domains (Governance, Information management, Systems, and Technology [GISTe]) to develop capabilities in climate change impact mitigation. The authors denote the resulting framework as the GISTe model, which frames their analysis and public policy recommendations. This research uses the 2008 Carbon Disclosure Project (CDP) database, with high-quality information on firm-level climate change strategies for 552 companies from North America and Europe. In contrast to the widely accepted myth that European firms are performing better than North American ones, the authors find a different result. Many firms, whether European or North American, do not just “talk” about climate change impact mitigation, but actually do “walk the talk.” European firms appear to be better than their North American counterparts in “walk I,” denoting attention to governance, information management, and systems. But when it comes down to “walk II,” meaning actual Technology-related investments, North American firms’ performance is equal or superior to that of the European companies. The authors formulate public policy recommendations to accelerate firm-level, sector-level, and cluster-level implementation of climate change strategies.
Abstract:
Empirical mode decomposition (EMD) is a data-driven method used to decompose data into oscillatory components. This paper examines to what extent the defined algorithm for EMD might be susceptible to the data format. Two key issues with EMD are its stability and computational speed. This paper shows that, for a given signal, there is no significant difference between results obtained with single (binary32) and double (binary64) floating-point precision. This implies that there is no benefit in increasing floating-point precision when performing EMD on devices optimised for single-precision format, such as graphics processing units (GPUs).
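The precision question can be illustrated with a small experiment (a moving-average detrend as a stand-in for one sifting-style operation; the signal is synthetic, not from the paper): the single- and double-precision results agree far below any physically meaningful level.

```python
import numpy as np

# Synthetic two-tone signal
t = np.linspace(0.0, 1.0, 1000)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 23 * t)

# Moving-average "local mean", computed in both precisions
k = 51
kernel64 = np.ones(k, dtype=np.float64) / k
kernel32 = kernel64.astype(np.float32)

m64 = np.convolve(sig.astype(np.float64), kernel64, mode="same")
m32 = np.convolve(sig.astype(np.float32), kernel32, mode="same")

# Relative disagreement between binary32 and binary64 results
rel_diff = float(np.max(np.abs(m64 - m32)) / np.max(np.abs(m64)))
```

The relative difference is on the order of single-precision round-off (~1e-7), orders of magnitude below typical signal noise.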
Abstract:
Empirical Mode Decomposition (EMD) is a data-driven technique for the extraction of oscillatory components from data. Although it was introduced over 15 years ago, its mathematical foundations are still missing, which also implies a lack of objective metrics for evaluating the decomposed set. The most common technique for assessing EMD results is visual inspection, which is highly subjective. This article provides objective measures for assessing EMD results based on the original definition of oscillatory components.
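One objective measure of this kind is an index of orthogonality between extracted components. A sketch on known synthetic components (not an actual EMD run) shows the cross-energy term that such a metric penalizes:

```python
import numpy as np

# Two known oscillatory components at distinct frequencies (illustrative)
# and their superposition.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
c1 = np.sin(2 * np.pi * 3 * t)          # slow component
c2 = 0.5 * np.sin(2 * np.pi * 40 * t)   # fast component
signal = c1 + c2

# Index of orthogonality: cross-energy relative to total signal energy;
# near zero for a well-separated decomposition.
io = float(abs(np.sum(c1 * c2)) / np.sum(signal ** 2))
```

A decomposition whose components leak into each other would show a markedly larger index.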
Abstract:
We present a data-driven mathematical model of a key initiating step in platelet activation, a central process in the prevention of bleeding following injury. In vascular disease, this process is activated inappropriately and causes thrombosis, heart attacks and stroke. The collagen receptor GPVI is the primary trigger for platelet activation at sites of injury. Understanding the complex molecular mechanisms initiated by this receptor is important for the development of more effective antithrombotic medicines. In this work we developed a series of nonlinear ordinary differential equation models that are direct representations of biological hypotheses surrounding the initial steps in GPVI-stimulated signal transduction. At each stage, model simulations were compared to our own quantitative, high-temporal-resolution experimental data that guided further experimental design, data collection and model refinement. Much is known about the linear forward reactions within platelet signalling pathways, but the roles of putative reverse reactions are poorly understood. An initial model, which includes a simple constitutively active phosphatase, was unable to explain the experimental data. Model revisions, incorporating a complex pathway of interactions (and specifically the phosphatase TULA-2), provided a good description of the experimental data, both based on observations of phosphorylation in samples from one donor and in those of a wider population. Our model was used to investigate the levels of proteins involved in regulating the pathway and the effect of the low GPVI levels that have been associated with disease. Results indicate a clear separation between healthy and GPVI-deficient states with respect to the signalling cascade dynamics associated with Syk tyrosine phosphorylation and activation.
Our approach reveals the central importance of this negative feedback pathway that results in the temporal regulation of a specific class of protein tyrosine phosphatases in controlling the rate, and therefore extent, of GPVI-stimulated platelet activation.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (biological, etc.) processes, and more complex observation operators, the data-assimilation problem becomes increasingly nonlinear. The suitability of particle filters to solve the nonlinear data assimilation problem in high-dimensional geophysical problems will be discussed. Several existing and new schemes will be presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward to solve the problem of nonlinear data assimilation in high-dimensional systems.
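For orientation, a minimal bootstrap particle filter step (not the Equivalent-Weights scheme itself; the numbers are illustrative) shows the importance weighting and resampling whose degeneracy in high dimensions is the "curse of dimensionality" at issue:

```python
import random, math

random.seed(1)

def assimilate(particles, obs, obs_std):
    """One bootstrap-filter analysis step: weight by the Gaussian
    observation likelihood, then resample with those weights."""
    w = [math.exp(-0.5 * ((x - obs) / obs_std) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resampling: duplicate likely particles, drop unlikely ones
    return random.choices(particles, weights=w, k=len(particles))

# 1-D toy problem: Gaussian prior ensemble, one observation
prior = [random.gauss(0.0, 2.0) for _ in range(5000)]
posterior = assimilate(prior, obs=1.5, obs_std=0.5)
post_mean = sum(posterior) / len(posterior)
```

In one dimension the posterior mean lands near the analytic Kalman value (~1.41 here); in high dimensions the weights collapse onto a handful of particles, which is the failure mode the schemes in the abstract are designed to avoid.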
Abstract:
Optical data are compared with EISCAT radar observations of multiple Naturally Enhanced Ion-Acoustic Line (NEIAL) events in the dayside cusp. This study uses narrow field-of-view cameras to observe small-scale, short-lived auroral features. Using multiple-wavelength optical observations, a direct link between NEIAL occurrences and low-energy (about 100 eV) optical emissions is shown. This is consistent with the Langmuir wave decay interpretation of NEIALs being driven by streams of low-energy electrons. Modelling work connected with this study shows that, for the measured ionospheric conditions and precipitation characteristics, growth of unstable Langmuir (electron plasma) waves can occur, which decay into ion-acoustic wave modes. The link with low-energy optical emissions shown here will enable future studies of the shape, extent, lifetime, grouping and motions of NEIALs.
Abstract:
Several global quantities are computed from the ERA40 reanalysis for the period 1958-2001 and explored for trends. These are discussed in the context of changes to the global observing system. Temperature, integrated water vapor (IWV), and kinetic energy are considered. The ERA40 global mean temperature in the lower troposphere has a trend of +0.11 K per decade over the period 1979-2001, which is slightly higher than the MSU measurements but within the estimated error limit. For the period 1958-2001 the warming trend is 0.14 K per decade, but this is likely to be an artifact of changes in the observing system. When this is corrected for, the warming trend is reduced to 0.10 K per decade. The global trend in IWV for the period 1979-2001 is +0.36 mm per decade. This is about twice as high as the trend determined from the Clausius-Clapeyron relation assuming conservation of relative humidity. It is also larger than results from free climate model integrations driven by the same observed sea surface temperature as used in ERA40. It is suggested that the large trend in IWV does not represent a genuine climate trend but an artifact caused by changes in the global observing system, such as the use of SSM/I and more satellite soundings in later years. Recent results are in good agreement with GPS measurements. The IWV trend for the period 1958-2001 is still higher, but is reduced to +0.16 mm per decade when corrected for changes in the observing systems. Total kinetic energy shows an increasing global trend. Results from data assimilation experiments strongly suggest that this trend is also incorrect and mainly caused by the huge changes in the global observing system in 1979. When this is corrected for, no significant change in global kinetic energy from 1958 onward can be found.
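For reference, a per-decade trend like those quoted above is simply an ordinary least-squares slope scaled to ten years; a sketch on a synthetic series (not ERA40 data):

```python
def trend_per_decade(years, values):
    """Ordinary least-squares slope of values against years, x10."""
    n = len(years)
    ym = sum(years) / n
    vm = sum(values) / n
    slope = sum((y - ym) * (v - vm) for y, v in zip(years, values)) \
            / sum((y - ym) ** 2 for y in years)
    return slope * 10.0   # per-year slope scaled to per-decade

years = list(range(1979, 2002))
# Synthetic temperatures warming at exactly 0.011 K/yr (0.11 K/decade)
temps = [14.0 + 0.011 * (y - 1979) for y in years]
trend = trend_per_decade(years, temps)
```

On real reanalysis series the residual scatter, and any observing-system discontinuities, are what make such a slope uncertain or biased.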
Abstract:
Satellite data, reanalysis products and climate models are combined to monitor changes in water vapour, clear-sky radiative cooling of the atmosphere and precipitation over the period 1979-2006. Climate models are able to simulate observed increases in column integrated water vapour (CWV) with surface temperature (Ts) over the ocean. Changes in the observing system lead to spurious variability in water vapour and clear-sky longwave radiation in reanalysis products. Nevertheless, all products considered exhibit a robust increase in clear-sky longwave radiative cooling from the atmosphere to the surface; clear-sky longwave radiative cooling of the atmosphere is found to increase with Ts at the rate of ~4 W m-2 K-1 over tropical ocean regions of mean descending vertical motion. Precipitation (P) is tightly coupled to atmospheric radiative cooling rates, and this implies an increase in P with warming at a slower rate than the observed increases in CWV. Since convective precipitation depends on moisture convergence, the above implies enhanced precipitation over convective regions and reduced precipitation over convectively suppressed regimes. To quantify this response, observed and simulated changes in precipitation rate are analysed separately over regions of mean ascending and descending vertical motion over the tropics. The observed response is found to be substantially larger than the model simulations and climate change projections. It is currently not clear whether this is due to deficiencies in model parametrizations or errors in satellite retrievals.
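The Clausius-Clapeyron expectation invoked above can be checked with a standard Magnus-type approximation for saturation vapour pressure (the coefficients are textbook values, not from the paper): at fixed relative humidity, water vapour rises by roughly 6-7% per kelvin near typical surface temperatures.

```python
import math

def e_sat(t_c):
    """Magnus-type saturation vapour pressure (hPa), t_c in Celsius."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

# Fractional increase in saturation vapour pressure per kelvin at 15 C;
# with relative humidity conserved, column water vapour scales the same way.
t = 15.0
frac_per_K = (e_sat(t + 0.5) - e_sat(t - 0.5)) / e_sat(t)
```

This faster-than-precipitation scaling of CWV is exactly why P is expected to rise more slowly than water vapour, as the abstract notes.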
Abstract:
Current changes in the tropical hydrological cycle, including water vapour and precipitation, are presented over the period 1979-2008 based on a diverse suite of observational datasets and atmosphere-only climate models. Models capture the observed variability in tropical moisture while reanalyses cannot. Observed variability in precipitation is highly dependent upon the satellite instruments employed and shows only cursory agreement with model simulations, primarily in the interannual variability associated with the El Niño Southern Oscillation. All datasets display a positive relationship between precipitation and surface temperature, but with a large spread. The tendency for wet, ascending regions to become wetter at the expense of dry, descending regimes is in general reproduced. Finally, the frequency of extreme precipitation is shown to rise with warming in the observations and for the model ensemble mean, but with large spread in the model simulations. The influence of the Earth's radiative energy balance in relation to changes in the tropical water cycle is discussed.
Abstract:
Orthogonal internal coordinates are defined which have useful properties for constructing the potential energy functions of triatomic molecules with two or three minima on the surface. The coordinates are used to obtain ground state potentials of ClOO and HOF, both of which have three minima.
Abstract:
The current energy requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiency of utilization of MEI for 1) maintenance, 2) milk production (k(l)), and 3) growth (k(g)), and the efficiency of utilization of body stores for milk production (k(t)). Traditionally, these have been determined using linear regression methods to analyze energy balance data from calorimetry experiments. Many studies have highlighted a number of concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used in their analysis. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy balance data were considered: a straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k(g) and k(t). Values of 0.83 to 0.86 and 0.66 to 0.69 were obtained for k(g) and k(t) using all the functions (with standard errors of 0.028 and 0.027, respectively), which were considerably different from previous reports of 0.60 to 0.75 for k(g) and 0.82 to 0.84 for k(t). Using the estimated values of k(g) and k(t), the data were corrected to allow for body tissue changes. Based on the definition of k(l) as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k(l) were determined. Meta-analysis of the pooled data showed that the average k(l) ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW0.75 per day.
Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW0.75 per day) are required to determine conclusively whether milk energy is related to MEI linearly or not.
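To make the linear-versus-diminishing-returns distinction concrete, the sketch below evaluates a Mitscherlich form with hypothetical parameters: unlike a straight line, its marginal efficiency (the derivative, analogous to k(l)) declines at high intakes, which is why data above 2.4 MJ/kg of BW0.75 per day would discriminate between the models.

```python
import math

# Mitscherlich (diminishing-returns) response of milk energy to intake.
# Parameters e_max, mem (intake at zero output) and c are hypothetical.
def milk_energy(mei, e_max=2.0, mem=0.5, c=1.2):
    return e_max * (1.0 - math.exp(-c * (mei - mem)))

def marginal_efficiency(mei, e_max=2.0, mem=0.5, c=1.2):
    """dE/dMEI: the marginal efficiency of converting intake to milk."""
    return e_max * c * math.exp(-c * (mei - mem))

k_low = marginal_efficiency(1.0)    # efficiency at moderate intake
k_high = marginal_efficiency(2.4)   # efficiency at high intake
# Diminishing returns: marginal efficiency falls as intake rises,
# whereas a straight-line model would keep it constant.
```

A fit to real calorimetry data would estimate these parameters by nonlinear least squares rather than fixing them.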
Abstract:
Periplasmic chaperone/usher machineries are used for assembly of filamentous adhesion organelles of Gram-negative pathogens in a process that has been suggested to be driven by folding energy. Structures of mutant chaperone-subunit complexes revealed a final folding transition (condensation of the subunit hydrophobic core) upon release of the organelle subunit from the chaperone-subunit pre-assembly complex and incorporation into the final fibre structure. However, in view of the large interface between chaperone and subunit in the pre-assembly complex and the reported stability of this complex, it is difficult to understand how final folding could release sufficient energy to drive assembly. In the present paper, we show the X-ray structure of a native chaperone-fibre complex that, together with thermodynamic data, shows that the final folding step is indeed an essential component of the assembly process. We show that completion of the hydrophobic core and incorporation into the fibre result in an exceptionally stable module, whereas the chaperone-subunit pre-assembly complex is greatly destabilized by the high-energy conformation of the bound subunit. This difference in stabilities creates a free energy potential that drives fibre formation.
Abstract:
The transition to a low-carbon economy urgently demands better information on the drivers of energy consumption. UK government policy has prioritized energy efficiency in the built stock as a means of carbon reduction, but the sector is historically information poor, particularly the non-domestic building stock. This paper presents the results of a pilot study that investigated whether and how property and energy consumption data might be combined for non-domestic energy analysis. These data were combined in a ‘Non-Domestic Energy Efficiency Database’ to describe the location and physical attributes of each property and its energy consumption. The aim was to support the generation of a range of energy-efficiency statistics for the industrial, commercial and institutional sectors of the non-domestic building stock, and to provide robust evidence for national energy-efficiency and carbon-reduction policy development and monitoring. The work has brought together non-domestic energy data, property data and mapping in a ‘data framework’ for the first time. The results show what is possible when these data are integrated, as well as the associated difficulties. A data framework offers the potential to inform energy-efficiency policy formation and to support its monitoring at a level of detail not previously possible.