121 results for Electrical load forecasting
Abstract:
A description is given of the global atmospheric electric circuit operating between the Earth’s surface and the ionosphere. Attention is drawn to the huge range of horizontal and vertical spatial scales, ranging from 10⁻⁹ m to 10¹² m, concerned with the many important processes at work. A similarly enormous range of time scales is involved, from 10⁻⁶ s to 10⁹ s, in the physical effects and different phenomena that need to be considered. The current flowing in the global circuit is generated by disturbed weather such as thunderstorms and electrified rain/shower clouds, mostly occurring over the Earth’s land surface. The profile of electrical conductivity up through the atmosphere, determined mainly by galactic cosmic ray ionization, is a crucial parameter of the circuit. Model simulation results on the variation of the ionospheric potential, ∼250 kV positive with respect to the Earth’s potential, following lightning discharges and sprites are summarized. Experimental results comparing global circuit variations with the neutron rate recorded at Climax, Colorado, are then discussed. Within the return (load) part of the circuit in the fair weather regions remote from the generators, charge layers exist on the upper and lower edges of extensive layer clouds; new experimental evidence for these charge layers is also reviewed. Finally, some directions for future research in the subject are suggested.
Abstract:
Ensemble forecasting of nonlinear systems involves the use of a model to run forward a discrete ensemble (or set) of initial states. Data assimilation techniques tend to focus on estimating the true state of the system, even though model error limits the value of such efforts. This paper argues for choosing the initial ensemble in order to optimise forecasting performance rather than estimate the true state of the system. Density forecasting and choosing the initial ensemble are treated as one problem. Forecasting performance can be quantified by some scoring rule. In the case of the logarithmic scoring rule, theoretical arguments and empirical results are presented. It turns out that, if the underlying noise dominates model error, we can diagnose the noise spread.
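The logarithmic scoring rule mentioned above can be made concrete with a small sketch: dress the ensemble members with Gaussian kernels to obtain a predictive density and evaluate minus the log of that density at the verifying observation (the ignorance score). This is an illustrative reading of the idea, not the paper's code; the kernel width sigma and the example ensembles are assumptions.

```python
# Minimal sketch (not from the paper): scoring an ensemble density forecast
# with the logarithmic (ignorance) score, using Gaussian kernel dressing to
# turn a discrete ensemble into a predictive density. The kernel width
# `sigma` is an illustrative free parameter, not a value from the study.
import numpy as np

def log_score(ensemble, observation, sigma=0.5):
    """Return -log p(observation) under a kernel-dressed ensemble density."""
    ensemble = np.asarray(ensemble, dtype=float)
    # Mixture of Gaussians centred on the ensemble members, equal weights.
    z = (observation - ensemble) / sigma
    densities = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    return -np.log(densities.mean())

# Example: two candidate initial-ensemble choices scored against one outcome;
# the lower (better) score would guide the choice of initial ensemble.
obs = 1.2
print(log_score([0.9, 1.1, 1.4, 1.6], obs))
print(log_score([-0.5, 0.0, 0.3, 2.5], obs))
```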
Abstract:
Mitochondrial DNA (mtDNA) mutations are an important cause of genetic disease and have been proposed to play a role in the ageing process. Quantification of total mtDNA mutation load in ageing tissues is difficult as mutational events are rare in a background of wild-type molecules, and detection of individual mutated molecules is beyond the sensitivity of most sequencing-based techniques. The methods currently most commonly used to document the incidence of mtDNA point mutations in ageing include post-PCR cloning, single-molecule PCR and the random mutation capture assay. The mtDNA mutation load obtained by these different techniques varies by orders of magnitude, but direct comparison of the three techniques on the same ageing human tissue has not been performed. We assess the procedures and practicalities involved in each of these three assays and discuss the results obtained by investigation of mutation loads in colonic mucosal biopsies from ten human subjects.
Abstract:
In forecasting and economic analysis, many variables are used in logarithms (logs). In time series analysis, this transformation is often considered to stabilize the variance of a series. We investigate under which conditions taking logs is beneficial for forecasting. Forecasts based on the original series are compared to forecasts based on logs. For a range of economic variables, substantial forecasting improvements from taking logs are found if the log transformation actually stabilizes the variance of the underlying series. Using logs can be damaging to forecast precision if a stable variance is not achieved.
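As a hedged illustration of the comparison described (not the paper's data or models), the sketch below fits an AR(1) to a simulated series with level-dependent noise, once on the original scale and once on logs, back-transforming the log forecasts with the usual lognormal bias correction exp(s²/2) before comparing root mean squared errors.

```python
# Illustrative sketch: one-step forecasts of a series whose noise grows with
# its level, from an AR(1) fitted to the levels and from an AR(1) fitted to
# the logs. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 400
y = np.empty(n)
y[0] = 50.0
for t in range(1, n):
    # multiplicative noise, so taking logs stabilises the variance
    y[t] = y[t - 1] ** 0.98 * np.exp(0.05) * np.exp(0.02 * rng.standard_normal())

def fit_ar1(series):
    """Fit y[t] = a + b*y[t-1] by OLS; return (a, b, residual std)."""
    x, z = series[:-1], series[1:]
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, z, rcond=None)
    return a, b, np.std(z - (a + b * x))

split = 300
# Model on the original scale
a1, b1, _ = fit_ar1(y[:split])
f_level = a1 + b1 * y[split - 1:-1]                      # forecasts for y[split:]
# Model on the log scale, back-transformed with lognormal bias correction
a2, b2, s = fit_ar1(np.log(y[:split]))
f_log = np.exp(a2 + b2 * np.log(y[split - 1:-1]) + 0.5 * s**2)

rmse = lambda f: np.sqrt(np.mean((y[split:] - f) ** 2))
print("RMSE on levels:", rmse(f_level), " via logs:", rmse(f_log))
```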
Abstract:
This paper investigates whether using natural logarithms (logs) of price indices for forecasting inflation rates is preferable to employing the original series. Univariate forecasts of annual inflation rates for a number of European countries and the USA, based on monthly seasonal consumer price indices, are considered. Stochastic seasonality and deterministic seasonality models are used. In many cases, the forecasts based on the original variables result in substantially smaller root mean squared errors than models based on logs. In turn, if forecasts based on logs are superior, the gains are typically small. This outcome casts doubt on the common practice in the academic literature of forecasting inflation rates based on differences of logs.
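A small worked example of the distinction at issue: the exact annual inflation rate computed from the original price index versus the difference-of-logs approximation. The index values below are invented purely for illustration.

```python
# The two definitions compared in the abstract: the exact annual inflation
# rate from the original price index versus the difference of logs, which
# only approximates it. Index values here are hypothetical.
import numpy as np

p_now, p_year_ago = 112.4, 104.7                        # hypothetical monthly CPI values
exact = 100 * (p_now / p_year_ago - 1)                  # inflation from the original series
log_diff = 100 * (np.log(p_now) - np.log(p_year_ago))   # log-difference version

print(f"exact: {exact:.2f}%  log-difference: {log_diff:.2f}%")
# The gap between the two grows with the inflation rate, one reason forecasts
# built on the original series and on log differences can differ.
```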
Abstract:
A variety of physical and behavioral factors determine the energy demand load profile. Attaining the optimum mix of measures and renewable energy system deployment requires a simple method suitable for use at the early design stage. A simple method of formulating load profiles (SMLP) for UK domestic buildings is presented in this paper. Domestic space heating load profiles for different types of houses have been produced using a thermal dynamic model developed with the thermal resistance network method. The method predicts the daily breakdown of the energy demand load profile into appliance, domestic hot water and space heating components, and can produce daily load profiles from the individual house up to the urban community scale. It is suitable for use at the strategic design stage of renewable energy systems.
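For illustration only, the sketch below shows the kind of lumped thermal resistance-capacitance calculation such a method might build on, stepping through one day to produce an hourly space heating load profile; the resistance, capacitance, set-point, occupancy schedule and outdoor temperatures are assumed values, not parameters from the paper.

```python
# Minimal sketch of a lumped thermal resistance-capacitance (RC) model
# producing a daily space heating load profile. All parameter values are
# illustrative assumptions.
import numpy as np

R = 0.005          # K/W, overall thermal resistance of the house envelope (assumed)
C = 2.0e7          # J/K, lumped thermal capacitance (assumed)
T_set = 20.0       # deg C indoor set-point during heated hours (assumed)
dt = 3600.0        # s, one-hour time step

hours = np.arange(24)
T_out = 5.0 + 4.0 * np.sin((hours - 15) / 24 * 2 * np.pi)   # assumed outdoor profile
heating_on = (hours >= 6) & (hours <= 22)                    # assumed occupancy schedule

T_in = 16.0                      # deg C initial indoor temperature
load = np.zeros(24)              # W, hourly space heating demand
for h in hours:
    if heating_on[h]:
        # Power needed to hold the set-point against fabric losses
        load[h] = max((T_set - T_out[h]) / R, 0.0)
        T_in = T_set
    else:
        # Free-floating indoor temperature: dT/dt = (T_out - T_in) / (R * C)
        T_in += dt * (T_out[h] - T_in) / (R * C)

print(np.round(load / 1000, 1))  # kW profile over the day
```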
Abstract:
The rapid expansion of the TMT sector in the late 1990s and the more recent growing regulatory and corporate focus on business continuity and security have raised the profile of data centres. Data centres offer a unique blend of occupational, physical and technological characteristics compared to conventional real estate assets. Limited trading and the heterogeneity of data centres also cause higher levels of appraisal uncertainty. In practice, the application of conventional discounted cash flow approaches requires information about a wide range of inputs that is difficult to derive from limited market signals or estimate analytically. This paper outlines an approach that uses pricing signals from similar traded cash flows. Based upon ‘the law of one price’, the method draws upon the premise that two identical future cash flows must have the same value now. Given the difficulties of estimating exit values, an alternative is that the expected cash flows of a data centre are analysed over the life cycle of the building, with corporate bond yields used to provide a proxy for the appropriate discount rates for lease income. Since liabilities are quite diverse, a number of proxies are suggested as discount and capitalisation rates, including index-linked, fixed interest and zero-coupon bonds. Although there are rarely assets that have identical cash flows and some approximation is necessary, the level of appraiser subjectivity is dramatically reduced.
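A minimal sketch of the valuation idea, under invented numbers: discount the expected lease cash flows over an assumed remaining life cycle at a rate proxied by a corporate bond yield, rather than relying on an estimated exit value.

```python
# Hedged sketch of the approach described above: discount a data centre's
# expected lease cash flows at a rate proxied by a corporate bond yield.
# The cash flow, term and yield are invented for illustration only.
rent = 1_200_000          # annual lease income, assumed constant
term = 15                 # remaining life cycle in years (assumed)
bond_yield = 0.055        # corporate bond yield used as the discount-rate proxy

present_value = sum(rent / (1 + bond_yield) ** t for t in range(1, term + 1))
print(f"PV of lease income: {present_value:,.0f}")
```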
Abstract:
Recursive Learning Control (RLC) has the potential to significantly reduce the tracking error in many repetitive trajectory applications. This paper presents an application of RLC to a soil testing load frame where non-adaptive techniques struggle with the highly nonlinear nature of soil. The main purpose of the controller is to apply a sinusoidal force reference trajectory on a soil sample with a high degree of accuracy and repeatability. The controller uses a feedforward control structure, recursive least squares adaptation algorithm and RLC to compensate for periodic errors. Tracking error is reduced and stability is maintained across various soil sample responses.
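As a hedged sketch of the cycle-to-cycle learning idea (not the paper's controller), the code below applies a simple P-type learning update to the feedforward command for a sinusoidal force reference, with a toy saturating spring standing in for the nonlinear soil response; the plant model and learning gain are assumptions.

```python
# Minimal sketch of repetitive learning: each cycle, the feedforward command
# at every point of the periodic trajectory is corrected by a fraction of the
# previous cycle's tracking error. Plant model and gain are assumed.
import numpy as np

N = 200                                    # samples per period
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
force_ref = 500.0 * np.sin(t)              # sinusoidal force reference (N)

def plant(u):
    """Toy nonlinear actuator/soil response: saturating spring."""
    return 650.0 * np.tanh(u / 600.0)

u = force_ref.copy()                       # initial feedforward guess
gain = 0.6                                 # learning gain (assumed)
for cycle in range(20):
    y = plant(u)
    e = force_ref - y
    u = u + gain * e                       # learning update applied cycle-to-cycle
    if cycle % 5 == 0 or cycle == 19:
        print(f"cycle {cycle:2d}  RMS tracking error = {np.sqrt(np.mean(e**2)):.2f} N")
```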
Abstract:
Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an ‘operational’ algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming turbulent kinetic energy dissipation to eddy dissipation rate, the turbulence metric that is becoming the worldwide ‘standard’. In a comparison of ULTURB with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
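Modification (3) amounts to reporting turbulence as eddy dissipation rate, EDR = ε^(1/3). The sketch below shows that conversion with rough, illustrative intensity thresholds; the thresholds are not the ones used by ULTURB or GTG2.

```python
# Illustrative conversion of turbulent kinetic energy dissipation rate
# (epsilon, m^2 s^-3) to eddy dissipation rate EDR = epsilon^(1/3)
# (m^(2/3) s^-1). The intensity thresholds below are rough illustrative
# values, not operational ones.
def to_edr(epsilon):
    return epsilon ** (1.0 / 3.0)

def intensity(edr):
    if edr < 0.15:
        return "light"
    if edr < 0.35:
        return "moderate"
    return "severe"

for eps in (1e-4, 1e-2, 0.1):
    edr = to_edr(eps)
    print(f"epsilon={eps:g}  EDR={edr:.2f}  ->  {intensity(edr)}")
```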
Abstract:
Constrained principal component analysis (CPCA) with a finite impulse response (FIR) basis set was used to reveal functionally connected networks and their temporal progression over a multistage verbal working memory trial in which memory load was varied. Four components were extracted, and all showed statistically significant sensitivity to the memory load manipulation. Additionally, two of the four components sustained this peak activity, both for approximately 3 s (Components 1 and 4). The functional networks that showed sustained activity were characterized by increased activations in the dorsal anterior cingulate cortex, right dorsolateral prefrontal cortex, and left supramarginal gyrus, and decreased activations in the primary auditory cortex and "default network" regions. The functional networks that did not show sustained activity were instead dominated by increased activation in occipital cortex, dorsal anterior cingulate cortex, sensori-motor cortical regions, and superior parietal cortex. The response shapes suggest that although all four components appear to be invoked at encoding, the two sustained-peak components are likely to be additionally involved in the delay period. Our investigation provides a unique view of the contributions made by a network of brain regions over the course of a multiple-stage working memory trial.
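A rough sketch of the CPCA-with-FIR pipeline as it is usually described: regress the data on an FIR design matrix (one indicator column per peristimulus time bin) and apply PCA/SVD to the predicted portion. The shapes and data below are synthetic, and the code is only a schematic reading of the method, not the authors' implementation.

```python
# Schematic CPCA: constrain the data to the part predictable from an FIR
# design matrix, then take principal components of that predicted part.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels, n_bins = 240, 500, 8

# FIR design matrix Z: one indicator column per post-stimulus time bin
onsets = np.arange(0, n_scans - n_bins, 30)
Z = np.zeros((n_scans, n_bins))
for onset in onsets:
    for b in range(n_bins):
        Z[onset + b, b] = 1.0

Y = rng.standard_normal((n_scans, n_voxels))        # stand-in BOLD data matrix

# Constrained step: variance in Y predictable from the FIR model
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
Y_hat = Z @ B

# PCA step: components of the predicted data
U, S, Vt = np.linalg.svd(Y_hat, full_matrices=False)
print("variance explained by first 4 components:",
      np.round(S[:4] ** 2 / (S ** 2).sum(), 3))
```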
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not entail additional computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests both at the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
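For the third part, a minimal sketch of the RAW filter inside a leapfrog integration may help fix ideas; the oscillator, time step, filter strength ν and Williams parameter α below are illustrative choices, not the SPEEDY settings.

```python
# Sketch of the Robert-Asselin-Williams (RAW) time filter inside a leapfrog
# integration of a simple oscillator dx/dt = i*omega*x, standing in for the
# model dynamics. nu and alpha are typical textbook values, not SPEEDY's.
omega, dt = 1.0, 0.2
nu, alpha = 0.2, 0.53            # filter strength and Williams parameter

def f(x):
    return 1j * omega * x

x_prev = 1.0 + 0j                # x at time level n-1 (already filtered)
x_curr = x_prev + dt * f(x_prev) # simple Euler start for time level n

for _ in range(500):
    x_next = x_prev + 2.0 * dt * f(x_curr)            # leapfrog step
    d = x_prev - 2.0 * x_curr + x_next                # filter displacement
    x_curr_f = x_curr + nu * alpha * 0.5 * d          # RAW correction at level n
    x_next_f = x_next + nu * (alpha - 1.0) * 0.5 * d  # and at level n+1
    x_prev, x_curr = x_curr_f, x_next_f

print("final amplitude (exact solution keeps it at 1):", abs(x_curr))
```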
Abstract:
Evidence from in vivo and in vitro studies suggests that the consumption of pro- and prebiotics may inhibit colon carcinogenesis; however, the mechanisms involved have, thus far, proved elusive. There are some indications from animal studies that the effects are being exerted during the promotion stage of carcinogenesis. One feature of the promotion stage of colorectal cancer is the disruption of tight junctions, leading to a loss of integrity across the intestinal barrier. We have used the Caco-2 human adenocarcinoma cell line as a model for the intestinal epithelia. Trans-epithelial electrical resistance measurements indicate Caco-2 monolayer integrity, and we recorded changes to this integrity following exposure to the fermentation products of selected probiotics and prebiotics, in the form of nondigestible oligosaccharides (NDOs). Our results indicate that NDOs themselves exert varying, but generally minor, effects upon the strength of the tight junctions, whereas the fermentation products of probiotics and NDOs tend to raise tight junction integrity above that of the controls. This effect was bacterial species and oligosaccharide specific. Bifidobacterium Bb 12 was particularly effective, as were the fermentation products of Raftiline and Raftilose. We further investigated the ability of Raftilose fermentations to protect against the negative effects of deoxycholic acid (DCA) upon tight junction integrity. We found protection to be species dependent and dependent upon the presence of the fermentation products in the media at the same time as or after exposure to the DCA. Results suggest that the Raftilose fermentation products may prevent disruption of the intestinal epithelial barrier function during damage by tumor promoters.
Abstract:
Accumulation of tephra fallout produced during explosive eruptions can cause roof collapses in areas near the volcano, when the weight of the deposit exceeds some threshold value that depends on the quality of buildings. The additional loading of water that remains trapped in the tephra deposits due to rainfall can contribute to increasing the loading of the deposits on the roofs. Here we propose a simple approach to estimate an upper bound for the contribution of rain to the load of pyroclastic deposits that is useful for hazard assessment purposes. As case study we present an application of the method in the area of Naples, Italy, for a reference eruption from Vesuvius volcano.
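A back-of-the-envelope version of such an upper bound (with invented thickness, bulk density and porosity, not the values of the Vesuvius case study) would assume the retained rainfall can at most fill the pore space of the deposit:

```python
# Illustrative upper bound on the rain contribution to roof loading: trapped
# rainwater fills, at most, the pore space of the tephra deposit. All values
# are assumed for illustration.
g = 9.81
thickness = 0.20          # m of tephra on the roof (assumed)
dry_density = 1000.0      # kg/m^3 bulk density of the dry deposit (assumed)
porosity = 0.45           # void fraction available to trapped rainwater (assumed)
water_density = 1000.0    # kg/m^3

dry_load = thickness * dry_density * g                       # Pa
max_water_load = thickness * porosity * water_density * g    # upper bound, Pa
print(f"dry load: {dry_load:.0f} Pa, rain contribution (upper bound): "
      f"{max_water_load:.0f} Pa ({100 * max_water_load / dry_load:.0f}% extra)")
```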
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and the corresponding coefficients are set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
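As a hedged illustration of the logit-lasso idea (not the paper's implementation), the sketch below fits an L1-penalized logistic regression to synthetic ensemble-derived inputs; the irrelevant input's coefficient is shrunk to exactly zero, which is the automatic model reduction referred to above.

```python
# Logit lasso sketch: logistic regression with an L1 penalty converts
# ensemble-derived inputs into a probability forecast while zeroing out the
# coefficients of unimportant inputs. Data and settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
ens_mean = rng.standard_normal(n)             # ensemble mean of the forecast variable
ens_spread = np.abs(rng.standard_normal(n))   # ensemble spread
noise_input = rng.standard_normal(n)          # an irrelevant input

# Event occurs with log-odds depending only on the ensemble mean
p = 1.0 / (1.0 + np.exp(-1.5 * ens_mean))
event = rng.random(n) < p

X = np.column_stack([ens_mean, ens_spread, noise_input])
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, event)

print("coefficients:", np.round(model.coef_[0], 2))   # irrelevant inputs shrink to 0
print("P(event) for a new case:", model.predict_proba([[1.0, 0.5, 0.0]])[0, 1])
```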
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question is discussed whether established methods of ensemble evaluation need alteration under this model, with reliability being given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high dimensional ensembles is mathematically sound.
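The rank-histogram check mentioned above is easy to sketch: for each forecast instance, count how many ensemble members fall below the verifying observation; under exchangeability the resulting ranks are uniformly distributed. The data below are synthetic and purely illustrative.

```python
# Rank histogram for a synthetic, reliable 10-member ensemble: members and
# observation are drawn from the same distribution, so ranks should be flat.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_members = 5000, 10

truth = rng.standard_normal(n_cases)
ensemble = rng.standard_normal((n_cases, n_members))

ranks = (ensemble < truth[:, None]).sum(axis=1)     # rank of the observation, 0..n_members
hist = np.bincount(ranks, minlength=n_members + 1)
print("rank histogram counts:", hist)               # should be roughly flat
```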