870 results for Time-frequency distribution
Abstract:
For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for the stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities, a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for several locations with short observation records in two mesoscale catchments of the Bode river basin in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS.
The results show good performance in reproducing average and extreme rainfall characteristics as well as in reproducing observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.
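The intensity-duration dependence modelled by 2-copulas above can be sketched with a copula-based sampler. The snippet below is a minimal illustration using a Gaussian 2-copula and exponential marginals; the paper fits seasonal univariate frequency distributions and its own copula family, so the distribution choices and all parameter values here are placeholders.

```python
import math
import numpy as np

def sample_wet_spells(n, rho, mean_dur=6.0, mean_int=1.5, seed=0):
    """Draw n dependent (duration, intensity) pairs via a Gaussian 2-copula
    with correlation rho; exponential marginals stand in for the fitted
    seasonal frequency distributions."""
    rng = np.random.default_rng(seed)
    # correlated standard normals from the Cholesky factor of the correlation matrix
    corr = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T
    # probability integral transform: normal CDF maps z to uniforms on (0, 1)
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    # inverse-CDF transform imposes the (placeholder) exponential marginals
    durations = -mean_dur * np.log(1.0 - u[:, 0])
    intensities = -mean_int * np.log(1.0 - u[:, 1])
    return durations, intensities
```

Sampling with, say, rho = 0.8 yields strongly positively correlated spells, mirroring the intensity-duration dependence the 2-copula is meant to capture.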
Abstract:
Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focused on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability, or on variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectrum analysis, considering sea surface salinity (SSS), the Brunt-Väisälä frequency (N2), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N2 over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier layer formation, and the role of salinity in destabilizing upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs.
Abstract:
The service of a critical infrastructure, such as a municipal wastewater treatment plant (MWWTP), is taken for granted until a flood or another low-frequency, high-consequence crisis brings its fragility to attention. The unique aspects of the MWWTP call for a method to quantify the flood stage-duration-frequency relationship. By developing a bivariate joint distribution model of flood stage and duration, this study adds a second dimension, time, into flood risk studies. A new parameter, inter-event time, is developed to further illustrate the effect of event separation on the frequency assessment. The method is tested on riverine, estuarine and tidal sites in the Mid-Atlantic region. Equipment damage functions are characterized by linear and step damage models. The Expected Annual Damage (EAD) of the underground equipment is further estimated by the parametric joint distribution model, which is a function of both flood stage and duration, demonstrating the application of the bivariate model in risk assessment. Flood likelihood may alter due to climate change. A sensitivity analysis method is developed to assess future flood risk by estimating flood frequency under conditions of higher sea level and stream flow response to increased precipitation intensity. Scenarios based on steady and unsteady flow analysis are generated for the current climate, future climate within this century, and future climate beyond this century, consistent with the WWTP planning horizons. The spatial extent of flood risk is visualized by inundation mapping and a GIS-Assisted Risk Register (GARR). This research will help stakeholders of critical infrastructure become aware of the flood risk, vulnerability, and the inherent uncertainty.
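The EAD estimate described above, a damage function averaged over the joint stage-duration distribution, can be sketched via Monte Carlo integration. The toy joint model and step damage function below are illustrative stand-ins with made-up parameters, not the study's fitted parametric bivariate model.

```python
import numpy as np

def expected_annual_damage(damage, stage_dur_sampler, n=100_000, seed=1):
    """Monte Carlo estimate of Expected Annual Damage (EAD): average the
    damage function over draws from the joint (stage, duration) model."""
    rng = np.random.default_rng(seed)
    stage, dur = stage_dur_sampler(rng, n)
    return damage(stage, dur).mean()

def step_damage(stage, dur, threshold=2.0, cost=50_000.0):
    """Step damage model: equipment is lost once stage exceeds a critical
    elevation (a linear damage model could be substituted)."""
    return np.where(stage > threshold, cost, 0.0)

def toy_joint(rng, n):
    """Illustrative joint model: lognormal stage (m), exponential duration (h)."""
    stage = np.exp(0.3 * rng.standard_normal(n))
    dur = rng.exponential(12.0, n)
    return stage, dur
```

With these placeholder numbers the step model exceeds its threshold only in rare events, so the EAD is a small fraction of the full equipment cost, which is exactly the kind of tail-driven quantity the bivariate model is built to capture.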
Abstract:
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
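The load-side frequency control idea in (1), loads jointly absorbing a power imbalance while minimizing total disutility, can be sketched as a dual-decomposition iteration in which a single dual variable plays the role the frequency deviation plays in the thesis. The quadratic disutilities, gains and step size below are illustrative assumptions, not the thesis's OLC formulation or network model.

```python
import numpy as np

def distributed_olc(a, imbalance, step=0.1, iters=500):
    """Dual-decomposition sketch of optimal load control (OLC): load i
    minimizes an (assumed) quadratic disutility d_i^2 / (2 a_i) subject to
    the loads jointly absorbing a power imbalance.  Each load needs only
    the shared dual variable, which stands in for the frequency deviation."""
    a = np.asarray(a, float)
    lam = 0.0
    d = np.zeros_like(a)
    for _ in range(iters):
        d = a * lam                          # each load's local best response
        lam += step * (imbalance - d.sum())  # "frequency" integrates the mismatch
    return d
```

At the fixed point the loads split the imbalance in proportion to their gains a_i, which is the closed-form minimizer of the total quadratic disutility under the balance constraint.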
Abstract:
Harmful Algal Blooms (HABs) are a worldwide problem that has been increasing in frequency and extent over the past several decades. HABs severely damage aquatic ecosystems by destroying benthic habitat, reducing invertebrate and fish populations, and affecting larger species such as dugong that rely on seagrasses for food. Few statistical models for predicting HAB occurrences have been developed, and in common with most predictive models in ecology, those that have been developed do not fully account for uncertainties in parameters and model structure. This makes management decisions based on these predictions more risky than might be supposed. We used a probit time series model and Bayesian Model Averaging (BMA) to predict occurrences of blooms of Lyngbya majuscula, a toxic cyanophyte, in Deception Bay, Queensland, Australia. We found a suite of useful predictors for HAB occurrence, with temperature figuring prominently in the models holding the majority of posterior support, and a model consisting of the single covariate average monthly minimum temperature showed by far the greatest posterior support. Two alternative model averaging strategies were compared: one using the full posterior distribution, and a simpler approach that used the majority of the posterior distribution for predictions but with vastly fewer models. Both BMA approaches showed excellent predictive performance with little difference in their predictive capacity. Applications of BMA are still rare in ecology, particularly in management settings. This study demonstrates the power of BMA as an important management tool that is capable of high predictive performance while fully accounting for both parameter and model uncertainty.
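The BMA prediction step itself is compact: the predictive bloom probability is the posterior-weighted average of each candidate model's prediction, which is how model uncertainty is carried into the forecast. The weights and per-model probabilities in the usage example are made-up illustrative numbers, not the study's fitted values.

```python
def bma_predict(model_probs, posterior_weights):
    """Bayesian Model Averaging predictive probability: average the candidate
    models' predicted probabilities, weighted by posterior model support."""
    assert abs(sum(posterior_weights) - 1.0) < 1e-9  # weights must sum to one
    return sum(w * p for w, p in zip(posterior_weights, model_probs))

# Hypothetical example: three models predicting bloom probability,
# with posterior support 0.7 / 0.2 / 0.1.
p_bloom = bma_predict([0.9, 0.4, 0.1], [0.7, 0.2, 0.1])
```

Truncating to the models holding the majority of posterior support, as the simpler strategy in the study does, just means renormalising a shorter weight list.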
Abstract:
Background Primary prevention of childhood overweight is an international priority. In Australia 20-25% of 2-8 year olds are already overweight. These children are at substantially increased risk of becoming overweight adults, with attendant increased risk of morbidity and mortality. Early feeding practices determine infant exposure to food (type, amount, frequency) and include responses (e.g. coercion) to infant feeding behaviour (e.g. food refusal). There is correlational evidence linking parenting style and early feeding practices to child eating behaviour and weight status. A focus on early feeding is consistent with the national focus on early childhood as the foundation for life-long health and well-being. The NOURISH trial aims to implement and evaluate a community-based intervention to promote early feeding practices that will foster healthy food preferences and intake and preserve the innate capacity to self-regulate food intake in young children. Methods/Design This randomised controlled trial (RCT) aims to recruit 820 first-time mothers and their healthy term infants. A consecutive sample of eligible mothers will be approached postnatally at major maternity hospitals in Brisbane and Adelaide. Initial consent will be for re-contact for full enrolment when the infants are 4-7 months old. Individual mother-infant dyads will be randomised to usual care or the intervention. The intervention will provide anticipatory guidance via two modules of six fortnightly parent education and peer support group sessions, each followed by six months of regular maintenance contact. The modules will commence when the infants are aged 4-7 and 13-16 months to coincide with the establishment of solid feeding, and autonomy and independence, respectively. Outcome measures will be assessed at baseline, with follow-up at nine and 18 months.
These will include infant intake (type and amount of foods), food preferences, feeding behaviour and growth, as well as self-reported maternal feeding practices, parenting practices and efficacy. Covariates will include sociodemographics, infant feeding mode and temperament, maternal weight status and weight concern, and child care exposure. Discussion Despite the strong rationale to focus on parents’ early feeding practices as a key determinant of child food preferences, intake and self-regulatory capacity, prospective longitudinal and intervention studies are rare. This trial will be amongst the first to provide Level II evidence regarding the impact of an intervention (commencing prior to age 12 months) on children’s eating patterns and behaviours. Trial Registration: ACTRN12608000056392
Abstract:
Extended spectrum β-lactamases, or ESBLs, which are derived from non-ESBL precursors by point mutation of β-lactamase genes (bla), are spreading rapidly all over the world and have caused considerable problems in the treatment of infections caused by bacteria which harbour them. The mechanism of this resistance is not fully understood, and a better understanding of these mechanisms might significantly influence the choice of proper diagnostic and treatment strategies. Previous work on the SHV β-lactamase gene, blaSHV, has shown that only Klebsiella pneumoniae strains which contain plasmid-borne blaSHV are able to mutate to phenotypically ESBL-positive strains, and there was also evidence of an increase in blaSHV copy number. Therefore, it was hypothesised that although specific point mutation is essential for acquisition of ESBL activity, it is not by itself sufficient: blaSHV copy number amplification is also essential for an ESBL-positive phenotype, with homologous recombination being the likely mechanism of blaSHV copy number expansion. In this study, we investigated the mutation rate of non-ESBL-expressing K. pneumoniae isolates to an ESBL-positive status using the MSS maximum likelihood method. Our data showed that the blaSHV mutation rate of a non-ESBL-expressing isolate is lower than the mutation rate of other single base changes on the chromosome, even with a plasmid-borne blaSHV gene. On the other hand, the mutation rate from a low-MIC ESBL-positive status (≤ 8 µg/mL for cefotaxime) to a high-MIC ESBL-positive status (≥ 16 µg/mL for cefotaxime) is very high. This is because only a gene copy number increase is needed, which is probably mediated by homologous recombination, which typically takes place at much higher frequencies than point mutations. Using a subinhibitory concentration of novobiocin as a homologous recombination inhibitor revealed that this is the case.
Abstract:
In the region of self-organized criticality (SOC), interdependency between multi-agent system components exists, and slight changes in near-neighbor interactions can break the balance of equally poised options, leading to transitions in system order. In this region, the frequency of events of differing magnitudes exhibits a power law distribution. The aim of this paper was to investigate whether a power law distribution characterized attacker-defender interactions in team sports. For this purpose we observed attackers and defenders in a dyadic sub-phase of rugby union near the try line. Videogrammetry was used to capture players’ motion over time as player locations were digitized. Power laws were calculated for the rate of change of players’ relative position. Data revealed that the three emergent patterns from dyadic system interactions (i.e., try; unsuccessful tackle; effective tackle) displayed a power law distribution. Results suggested that the pattern-forming dynamics of dyads in rugby union exhibited SOC. It was concluded that rugby union dyads evolve in SOC regions, suggesting that players’ decisions and actions are governed by local interaction rules.
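A quick way to check for power-law statistics of the kind described above is a least-squares fit of log-density against log-magnitude over a log-spaced histogram. This is a common first-pass diagnostic, not the paper's exact procedure, and maximum-likelihood estimators are preferred for rigorous power-law fitting.

```python
import numpy as np

def powerlaw_exponent(magnitudes, bins=20):
    """Estimate alpha in f(m) ~ m^(-alpha) by regressing log empirical
    density on log magnitude over a log-spaced histogram."""
    m = np.asarray(magnitudes, float)
    edges = np.logspace(np.log10(m.min()), np.log10(m.max()), bins + 1)
    counts, _ = np.histogram(m, edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    keep = counts > 0                           # empty bins have no log-density
    density = counts[keep] / (widths[keep] * m.size)
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(density), 1)
    return -slope
```

Fitted to data whose density really does decay as m^(-alpha), the estimate recovers alpha up to binning noise; a roughly straight log-log plot over a decade or more is the visual signature of the SOC-like statistics the paper reports.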
Abstract:
Bag sampling techniques can be used to temporarily store an aerosol and therefore provide sufficient time to utilize sensitive but slow instrumental techniques for recording detailed particle size distributions. Laboratory-based assessments of the method were conducted to examine size-dependent deposition loss coefficients for aerosols held in Velostat™ bags conforming to a horizontal cylindrical geometry. Deposition losses of NaCl particles in the range of 10 nm to 160 nm were analysed in relation to bag size, storage time, and sampling flow rate. Results of this study suggest that the bag sampling method is most useful for moderately short sampling periods of about 5 minutes.
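If wall deposition in the bag is assumed to follow first-order kinetics, the usual model for chamber and bag decay, a size-dependent loss coefficient can be recovered from two concentration readings. The concentrations and storage time in the test are hypothetical numbers, not measurements from the study.

```python
import math

def loss_coefficient(n0, nt, t_minutes):
    """First-order deposition loss coefficient beta (1/min), assuming
    N(t) = N0 * exp(-beta * t) for particles of a given size."""
    return math.log(n0 / nt) / t_minutes

def fraction_remaining(beta, t_minutes):
    """Fraction of particles of that size still airborne after storage."""
    return math.exp(-beta * t_minutes)
```

Comparing beta across size channels is what yields the size-dependent loss curves, and evaluating fraction_remaining at the planned storage time shows why only moderately short holding periods keep the measured size distribution representative.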
Abstract:
Transportation disadvantage has been recognised as a key source of social exclusion, and an appropriate process is therefore required to investigate and seek to resolve this problem. Currently, determination of transportation disadvantage is postulated based on income, poverty and mobility level. Transportation disadvantage may be best regarded from an accessibility perspective, as it represents the inability of individuals to access desired activities. This paper attempts to justify a process for determining transportation disadvantage by incorporating accessibility and social transportation conflict as the essence of a framework. The framework embeds space-time organisation within the dimension of accessibility to identify a rigorous definition of transportation disadvantage. In developing the framework, the definition, dimensions, components and measures of accessibility were scrutinised. The findings suggest that the definition and dimensions are the significant research approach for evaluating the travel experience of the disadvantaged. Concurrently, location accessibility measures will be incorporated to strengthen the determination of accessibility level. A literature review on social exclusion and mobility-related exclusion identified the dimensions and sources of transportation disadvantage. It was revealed that the appropriate approach to identifying the transportation disadvantaged is to incorporate space-time organisation within the studied components. The suggested framework is an inter-related process consisting of the components of accessibility: the individual, networking (the transport system) and activities (destinations). The integration and correlation among these components shall determine the level of transportation disadvantage. Prior findings are used to retrieve the spatial distribution of the transportation disadvantaged, and appropriate policies are developed to resolve the problems.
Abstract:
Many interesting phenomena have been observed in layers of granular materials subjected to vertical oscillations; these include the formation of a variety of standing wave patterns, and the occurrence of isolated features called oscillons, which alternately form conical heaps and craters oscillating at one-half of the forcing frequency. No continuum-based explanation of these phenomena has previously been proposed. We apply a continuum theory, termed the double-shearing theory, which has had success in analyzing various problems in the flow of granular materials, to the problem of a layer of granular material on a vertically vibrating rigid base undergoing vertical oscillations in plane strain. There exists a trivial solution in which the layer moves as a rigid body. By investigating linear perturbations of this solution, we find that at certain amplitudes and frequencies this trivial solution can bifurcate. The time dependence of the perturbed solution is governed by Mathieu’s equation, which allows stable, unstable and periodic solutions, and the observed period-doubling behaviour. Several solutions for the spatial velocity distribution are obtained; these include one in which the surface undergoes vertical velocities that have sinusoidal dependence on the horizontal space dimension, which corresponds to the formation of striped standing waves, and is one of the observed patterns. An alternative continuum theory of granular material mechanics, in which the principal axes of stress and rate-of-deformation are coincident, is shown to be incapable of giving rise to similar instabilities.
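The stability analysis described above can be reproduced numerically with a Floquet test: integrate two independent solutions of Mathieu's equation y'' + (a - 2q cos 2t) y = 0 over one period and inspect the trace of the monodromy matrix. The parameter values in the test are illustrative, chosen to land in known stable and unstable regions, not taken from the paper's vibrating-layer model.

```python
import math

def mathieu_stable(a, q, steps=4000):
    """Floquet stability test for y'' + (a - 2 q cos 2t) y = 0: RK4-integrate
    over one period (pi); |trace of monodromy matrix| < 2 means bounded
    perturbations, > 2 means exponential growth (period-doubling tongues)."""
    h = math.pi / steps
    def rhs(t, y, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * y
    def integrate(y, v):
        t = 0.0
        for _ in range(steps):
            k1y, k1v = rhs(t, y, v)
            k2y, k2v = rhs(t + h/2, y + h/2*k1y, v + h/2*k1v)
            k3y, k3v = rhs(t + h/2, y + h/2*k2y, v + h/2*k2v)
            k4y, k4v = rhs(t + h, y + h*k3y, v + h*k3v)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return y, v
    y1, _v1 = integrate(1.0, 0.0)   # solution with y(0)=1, y'(0)=0
    _y2, v2 = integrate(0.0, 1.0)   # solution with y(0)=0, y'(0)=1
    return abs(y1 + v2) < 2.0       # trace of the monodromy matrix
```

Sweeping (a, q), which correspond to the layer's forcing frequency and amplitude, maps out the instability tongues in which the rigid-body solution bifurcates, including the subharmonic tongue responsible for the observed period-doubling.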
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
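The MF-DFA procedure that runs through this thesis can be sketched compactly: build the profile, detrend a polynomial fit in each window of scale s, form the q-th order fluctuation function F_q(s), and read the generalised Hurst exponent h(q) from the log-log slope. The scales and detrending order below are common illustrative choices, not the thesis's settings.

```python
import numpy as np

def mfdfa_hurst(x, q=2.0, scales=(16, 32, 64, 128, 256), order=1):
    """Minimal MF-DFA sketch returning the generalised Hurst exponent h(q);
    q = 2 recovers standard DFA."""
    y = np.cumsum(x - np.mean(x))              # profile of the series
    fq = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = np.empty(n)
        for i, seg in enumerate(segs):
            # detrend each window with a least-squares polynomial of given order
            coef = np.polyfit(t, seg, order)
            f2[i] = np.mean((seg - np.polyval(coef, t)) ** 2)
        # q-th order fluctuation function F_q(s)
        fq.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
    h, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return h
```

For uncorrelated noise h(2) is near 0.5 and for an integrated random walk near 1.5, which is the kind of scaling contrast the thesis uses to separate short-memory stock prices from long-memory exchange rates and electricity prices; varying q probes the multifractality.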