48 results for periodicity fluctuation


Relevance:

10.00%

Publisher:

Abstract:

The flood flow in urbanised areas constitutes a major hazard to the population and infrastructure, as seen during the summer 2010-2011 floods in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, and no previous study considered the impact of turbulence in the flow. During the 12-13 January 2011 flood of the Brisbane River, turbulence measurements were conducted in an inundated urban environment in Gardens Point Road, next to Brisbane's central business district (CBD), at a relatively high frequency (50 Hz). The properties of the sediment flood deposits were characterised, and the acoustic Doppler velocimeter unit was calibrated to obtain both instantaneous velocity components and suspended sediment concentration in the same sampling volume with the same temporal resolution. While the flow motion in Gardens Point Road was subcritical, the water elevations and velocities fluctuated with a distinctive period between 50 and 80 s. The low-frequency fluctuations were linked with local topographic effects: a local choke induced by an upstream constriction between stairwells caused slow oscillations with a period close to the natural sloshing period of the car park. The instantaneous velocity data were analysed using a triple decomposition, and the same triple decomposition was applied to the water depth, velocity flux, suspended sediment concentration and suspended sediment flux data. The velocity fluctuation data showed a large energy component in the slow fluctuation range. For the first two tests at z = 0.35 m, the turbulence data suggested some isotropy; at z = 0.083 m, on the other hand, the findings indicated some flow anisotropy. The suspended sediment concentration (SSC) data presented a general trend of increasing SSC with decreasing water depth. During one test (T4), long-period oscillations were observed with a period of about 18 minutes.
The cause of these oscillations remains unknown to the authors. The last test (T5) took place in very shallow water with high suspended sediment concentrations, and it is suggested that the flow in the car park was disconnected from the main channel. Overall the flow conditions at the sampling sites corresponded to a specific momentum between 0.2 and 0.4 m², which would be near the upper end of the scale for safe evacuation of individuals in flooded areas. The authors do not, however, believe that evacuation of individuals in Gardens Point Road would have been safe, because of the intense water surges and flow turbulence. More generally, any criterion for safe evacuation based solely upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by flow turbulence, water depth fluctuations and water surges.
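The triple decomposition referred to above separates each instantaneous record into a time-mean, a slow (pseudo-periodic) fluctuation and a residual turbulent fluctuation. A minimal sketch, assuming a simple moving-average low-pass filter to isolate the slow component (the abstract does not state which filter the authors used, and the cutoff here is a hypothetical choice):

```python
import numpy as np

def triple_decompose(v, fs, cutoff_s=10.0):
    """Split a record into mean, slow fluctuation and turbulence.

    v        : 1-D array of instantaneous samples (velocity, depth, SSC...)
    fs       : sampling frequency in Hz (50 Hz in the field study)
    cutoff_s : averaging window in seconds separating the slow
               fluctuations (periods of 50-80 s here) from turbulence;
               the value used here is a hypothetical choice
    """
    mean = v.mean()                          # time-averaged component
    win = max(1, int(cutoff_s * fs))
    kernel = np.ones(win) / win              # moving-average low-pass
    slow = np.convolve(v - mean, kernel, mode="same")
    turb = v - mean - slow                   # residual turbulent part
    return mean, slow, turb

# synthetic check: a 60 s oscillation plus noise, sampled at 50 Hz
fs = 50.0
rng = np.random.default_rng(0)
t = np.arange(0, 300, 1 / fs)
v = 1.0 + 0.2 * np.sin(2 * np.pi * t / 60) + 0.05 * rng.standard_normal(t.size)
v_mean, v_slow, v_turb = triple_decompose(v, fs)
```

By construction the three parts sum back to the original signal; the slow component retains the 50-80 s oscillations while the residual carries the turbulence.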

Relevance:

10.00%

Publisher:

Abstract:

In natural estuaries, scalar diffusion and dispersion are driven by turbulence. In the present study, detailed turbulence measurements were conducted in a small subtropical estuary with semi-diurnal tides under neap tide conditions. Three acoustic Doppler velocimeters were installed mid-estuary at fixed locations close together. The units were sampled simultaneously and continuously at relatively high frequency for 50 h. The results illustrated the influence of tidal forcing in the small estuary, although low-frequency longitudinal velocity oscillations were observed and believed to be induced by external resonance. The boundary shear stress data implied that the turbulent shear in the lower flow region was one order of magnitude larger than the boundary shear itself. This observation differed from turbulence data in a laboratory channel; a key feature of the natural estuary flow was the significant three-dimensional effects associated with strong secondary currents, including transverse shear events. The velocity covariances and triple correlations, as well as the backscatter intensity and covariances, were calculated for the entire field study. The covariances of the longitudinal velocity component showed some tidal trend, while the covariances of the transverse horizontal velocity component exhibited trends that reflected changes in secondary current patterns between ebb and flood tides. The triple correlation data tended to show some differences between ebb and flood tides. The acoustic backscatter intensity data were characterised by large fluctuations during the entire study, with a dimensionless fluctuation intensity I′b/Ib between 0.46 and 0.54. An unusual feature of the field study was some moderate rainfall prior to and during the first part of the sampling period; visual observations showed some surface scars and marked channels, and some mini transient fronts were observed.
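The dimensionless backscatter fluctuation intensity quoted above is the ratio of the root-mean-square fluctuation I′b to the mean backscatter Ib. A minimal sketch on synthetic data (the numbers below are illustrative stand-ins, not the field record), also showing the third central moment used for the triple correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic backscatter amplitude record standing in for the 50 h series
Ib_series = 120 + 60 * rng.standard_normal(180_000)

I_mean = Ib_series.mean()
I_rms = Ib_series.std()              # root-mean-square fluctuation I'b
intensity_ratio = I_rms / I_mean     # dimensionless fluctuation intensity

# triple correlation of a (synthetic) velocity component: the third
# central moment computed alongside the covariances in the field study
u = rng.standard_normal(Ib_series.size)
u_fluct = u - u.mean()
triple_corr = np.mean(u_fluct ** 3)
```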

Relevance:

10.00%

Publisher:

Abstract:

The multifractal properties of two indices of geomagnetic activity, Dst (representative of low latitudes) and ap (representative of global geomagnetic activity), together with the solar X-ray brightness, Xl, during the period from 1 March 1995 to 17 June 2003, are examined using multifractal detrended fluctuation analysis (MF-DFA). The h(q) curves of Dst and ap in the MF-DFA are similar to each other, but they differ from that of Xl, indicating that the scaling properties of Xl are different from those of Dst and ap. Hence, one should not predict the magnitude of magnetic storms directly from solar X-ray observations. However, a strong relationship exists between the classes of the solar X-ray irradiance (the classes being chosen to separate solar flares of class X-M, class C, and class B or less, including no flares) in hourly measurements and the geomagnetic disturbances (large to moderate, small, or quiet) seen in Dst and ap during the active period. Each time series was converted into a symbolic sequence using three classes. The frequencies of the substrings in the symbolic sequences, yielding the measure representations, then characterize the pattern of space weather events. Using the MF-DFA method and traditional multifractal analysis, we calculate the h(q), D(q), and τ(q) curves of the measure representations. The τ(q) curves indicate that the measure representations of these three indices are multifractal. On the basis of this three-class clustering, we find that the h(q), D(q), and τ(q) curves of the measure representations of these three indices are similar to each other for positive values of q. Hence, a positive flare-storm class dependence is reflected in the scaling exponents h(q) in the MF-DFA and the multifractal exponents D(q) and τ(q). This finding indicates that the use of solar flare classes could improve the prediction of the Dst classes.
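MF-DFA estimates a generalised Hurst exponent h(q) from the scaling of q-th order fluctuation functions against segment size. A compact sketch of the standard algorithm with order-1 detrending (the study's exact scale ranges and detrending order are not given in the abstract):

```python
import numpy as np

def mfdfa_h(x, q_list, scales):
    """Minimal multifractal detrended fluctuation analysis (MF-DFA).

    Returns the generalised Hurst exponent h(q) for each q, estimated
    by least-squares fits of log F_q(s) against log s.  A simplified
    sketch with linear (order-1) detrending.
    """
    profile = np.cumsum(x - np.mean(x))          # step 1: profile
    hq = []
    for q in q_list:
        logF, logS = [], []
        for s in scales:
            n_seg = profile.size // s
            f2 = []
            for i in range(n_seg):               # steps 2-3: segment + detrend
                seg = profile[i * s:(i + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)
                res = seg - np.polyval(coef, t)
                f2.append(np.mean(res ** 2))
            f2 = np.asarray(f2)
            if q == 0:                           # step 4: q-th order average
                Fq = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq = np.mean(f2 ** (q / 2)) ** (1 / q)
            logF.append(np.log(Fq))
            logS.append(np.log(s))
        hq.append(np.polyfit(logS, logF, 1)[0])  # step 5: scaling exponent
    return np.array(hq)

# monofractal white noise should give h(q) near 0.5 for all q
rng = np.random.default_rng(0)
h = mfdfa_h(rng.standard_normal(20000), q_list=[-2, 2],
            scales=[16, 32, 64, 128, 256])
```

For a monofractal signal h(q) is flat; a spread of h(q) with q, as found for the measure representations, signals multifractality.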

Relevance:

10.00%

Publisher:

Abstract:

Failing injectors are one of the most common faults in diesel engines. The severity of these faults can have serious effects on diesel engine operation, such as engine misfire, knocking, insufficient power output, or even a complete engine breakdown. It is thus essential to prevent such faults from occurring by monitoring the condition of the injectors. In this paper, the authors present the results of an experimental investigation into identifying the signal characteristics of a simulated incipient injector fault in a diesel engine using both in-cylinder pressure and acoustic emission (AE) techniques. A time-waveform, event-driven synchronous averaging technique was used to minimize or eliminate the effect of engine speed variation and amplitude fluctuation. It was found that AE is an effective method for detecting the simulated injector fault in both the time (crank angle) and frequency (order) domains. It was also shown that the time-domain in-cylinder pressure signal is a poor indicator for condition monitoring and diagnosis of the simulated injector fault, due to the small effect of the simulated fault on the engine combustion process. Nevertheless, good correlations between the simulated injector fault and the lower-order components of the enveloped in-cylinder pressure spectrum were found at various engine loading conditions.
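Event-driven synchronous averaging resamples each engine cycle onto a common crank-angle axis before averaging, so cycle-to-cycle speed variation and random amplitude fluctuation cancel out. A simplified sketch, assuming linear interpolation onto a hypothetical 360-point crank-angle grid (the paper's resampling details are not given in the abstract):

```python
import numpy as np

def synchronous_average(signal, cycle_starts, n_bins=360):
    """Event-driven synchronous average of a signal over engine cycles.

    cycle_starts : sample indices of a once-per-cycle reference event
                   (e.g. a TDC pulse); cycle lengths may vary with speed.
    Each cycle is resampled onto a common crank-angle axis of n_bins
    points before averaging.
    """
    cycles = []
    for a, b in zip(cycle_starts[:-1], cycle_starts[1:]):
        seg = signal[a:b]
        src = np.linspace(0, 1, seg.size)   # variable-length cycle
        dst = np.linspace(0, 1, n_bins)     # fixed crank-angle grid
        cycles.append(np.interp(dst, src, seg))
    return np.mean(cycles, axis=0)

# synthetic test: same per-cycle waveform, varying cycle length + noise
rng = np.random.default_rng(3)
lengths = rng.integers(900, 1100, size=40)           # speed fluctuation
starts = np.concatenate([[0], np.cumsum(lengths)])
signal = np.concatenate(
    [np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)
     for n in lengths])
avg = synchronous_average(signal, starts)
```

The averaged waveform recovers the underlying once-per-cycle signature while the noise is suppressed by the number of cycles averaged.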

Relevance:

10.00%

Publisher:

Abstract:

Growth in productivity is the key determinant of the long-term health and prosperity of an economy. As the construction industry is one of major strategic importance, its productivity performance has a significant effect on national economic growth. The relationship between construction output and the economy has been studied intensively, but there is a lack of empirical study on the relationship between construction productivity and economic fluctuations. Fluctuations in construction output are endemic in the industry, caused in part by the boom and slump of the economy as a whole and in part by the nature of the construction product. This research aims to uncover how the productivity of the construction sector is influenced in the course of economic fluctuations in Malaysia. Malaysia has adopted three economic policies since gaining independence in 1957: the New Economic Policy (1971-1990), the National Development Policy (1991-2000) and the National Vision Policy (2001-2010); the Privatisation Master Plan was introduced in 1991. Operating within this historical context, the Malaysian construction sector has experienced four business cycles since 1960. A mixed-method design is adopted in this study. Quantitative analysis was conducted on the published official statistics of the construction industry and the overall economy in Malaysia between 1970 and 2009, while the qualitative study involved interviews with a purposive sample of 21 industry participants. The study identified a 32-year building cycle spanning 1975-2006, superimposed with three shorter construction business cycles in 1975-1987, 1987-1999 and 1999-2006. The correlations of construction labour productivity (CLP) and GDP per capita are statistically significant for the 1975-2006 building cycle and the 1987-1999 and 1999-2006 construction business cycles, but not for the 1975-1987 construction business cycle.
The Construction Industry Surveys/Census over the period from 1996 to 2007 show that the average growth rate of total output per employee expanded while value added per employee contracted, implying a high cost of bought-in materials and services and inefficient use of purchases. Construction labour productivity peaked in 2004 even though the construction sector contracted in that year. The residential subsector performed relatively better than the other subsectors on most of the productivity indicators. Improvements are found in output per employee, value added per employee, labour competitiveness and capital investment, but declines are recorded in value added content and capital productivity. Civil engineering construction is the most productive in terms of labour productivity but relatively poor in capital productivity. Labour cost is more competitive in larger establishments, and value added per unit of labour cost is higher in larger establishments, attributable to more efficient utilisation of capital. The interviews with industry participants reveal that the productivity of the construction sector is influenced by the economic environment, construction methods, contract arrangements, the payment chain and regulatory policies. Fluctuations in construction demand have caused companies to switch to a defensive strategy during economic downturns, ensuring short-term survival rather than making a profit for long-term survival and growth. This leads companies to take drastic measures to curb expenses: downsizing, contract employment, diversification and venturing into overseas markets. There is no empirical evidence supporting downsizing as a necessary step in a process of reviving productivity, and productivity does not correlate with firm size: a relatively smaller, focused firm is more productive than a larger, diversified organisation.
However, diversified companies experienced less fluctuation in both labour and capital productivity. To improve the productivity of the construction sector, it is necessary to remove the negatives and flaws of past practices. The recommended measures include long-term strategic planning and coordinated approaches by government agencies in planning infrastructure development, and the provision of regulatory environments which encourage competition and facilitate productivity improvement.
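The cycle-by-cycle significance claims above rest on the correlation between the CLP and GDP-per-capita series over each window. A minimal sketch of the Pearson correlation coefficient, using hypothetical annual figures (not the Malaysian data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2)))

# hypothetical annual series over one cycle window (illustrative only)
clp = [10.2, 10.8, 11.5, 11.1, 12.0, 12.6, 13.1]   # output per worker
gdp = [3.1, 3.3, 3.6, 3.5, 3.8, 4.0, 4.2]          # GDP per capita
r = pearson_r(clp, gdp)
```

A significance test on r (e.g. a t-test with n-2 degrees of freedom) would then decide whether the correlation in each window is statistically significant.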

Relevance:

10.00%

Publisher:

Abstract:

In urbanised areas, flood flows constitute a hazard to populations and infrastructure, as illustrated during major floods in 2011. During the 2011 Brisbane River flood, turbulent velocity data were collected using acoustic Doppler velocimetry in an inundated street. The field deployment showed some unusual features of flood flow in the urban environment: the water elevations and velocities fluctuated with distinctive periods between 50 and 100 s, linked with local topographic effects. The instantaneous velocity data were analysed using a triple decomposition. The velocity fluctuations included a large energy component in the slow fluctuation range, while the turbulent motion components were much smaller. The suspended sediment data showed a significant longitudinal flux. Altogether the results highlighted that the triple decomposition approach, originally developed for periodic flows, is well suited to the complicated flows in an inundated urban environment.

Relevance:

10.00%

Publisher:

Abstract:

Five basalt samples from the Point Sal ophiolite, California, were examined using HRTEM and AEM in order to compare observations with interpretations of XRD patterns and microprobe analyses. XRD data from ethylene-glycol-saturated samples indicate the following percentages of chlorite in the mixed-layer chlorite-smectite identified in each specimen: (i) L2036 ≈ 50%, (ii) L2035 ≈ 70 and 20%, (iii) 1A-13 ≈ 70%, (iv) 1B-42 ≈ 70%, and (v) 1B-55 = 100%. Detailed electron microprobe analyses show that 'chlorite' analyses with high Si, K, Na and Ca contents are the result of interlayering with smectite-like layers. The Fe/(Fe + Mg) ratios of mixed-layer phyllosilicates from the Point Sal samples are influenced by the bulk rock composition, not by the percentage of chlorite nor the structure of the phyllosilicate. Measurements of lattice-fringe images indicate that both smectite and chlorite layers are present in the Point Sal samples in abundances similar to those predicted with XRD techniques, and that regular alternation of chlorite and smectite occurs at the unit-cell scale. Both 10 and 14 Å layers were recorded with HRTEM and interpreted to be smectite and chlorite, respectively. Regular alternation of chlorite and smectite (24 Å periodicity) occurs in upper lava samples L2036 and 1A-13, and lower lava sample 1B-42, for as many as seven alternations per crystallite with local layer mistakes. Sample L2035 shows disordered alternation of chlorite and smectite, with juxtaposition of smectite-like layers, suggesting that randomly interlayered chlorite(<0.5)-smectite exists. Images of lower lava sample 1B-55 show predominantly 14 Å layers. Units of 24 Å tend to cluster in what may otherwise appear to be disordered mixtures, suggesting the existence of a corrensite end-member having thermodynamic significance.

Relevance:

10.00%

Publisher:

Abstract:

HRTEM has been used to examine illite/smectite from the Mancos shale; rectorite from Garland County, Arkansas; illite from Silver Hill, Montana; Na-smectite from Crook County, Wyoming; corrensite from Packwood, Washington; and diagenetic chlorite from the Tuscaloosa Formation. Thin specimens were prepared by ion milling, ultra-microtome sectioning and/or grain dispersal on a porous carbon substrate. Some smectite-bearing clays were also examined after intercalation with dodecylamine hydrochloride (DH). Intercalation of smectite with DH proved to be a reliable method for HRTEM imaging of expanded smectite, d(001) ≈ 16 Å, which could then be distinguished from unexpanded illite, d(001) ≈ 10 Å. Lattice fringes of basal spacings of DH-intercalated rectorite and illite/smectite showed 26 Å periodicity. These data support XRD studies which suggest that these samples are ordered, interstratified varieties of illite and smectite. The ion-thinned, unexpanded corrensite sample showed discrete crystallites containing 10 Å and 14 Å basal spacings, corresponding with collapsed smectite and chlorite, respectively. Regions containing disordered layers of chlorite and smectite were also noted. Crystallites containing regular alternations of smectite and chlorite were not common. These HRTEM observations of corrensite did not corroborate the XRD data. Particle sizes parallel to the c axis ranged widely for each sample studied, and many particles showed basal dimensions equivalent to more than five layers. -J.M.H.

Relevance:

10.00%

Publisher:

Abstract:

High-resolution transmission electron microscopy of the Mighei carbonaceous chondrite matrix has revealed the presence of a new mixed-layer structure material. This mixed-layer material consists of an ordered arrangement of serpentine-type (S) and brucite-type (B) layers in the sequence ...SBBSBB... Electron diffraction and imaging techniques show that the basal periodicity is ~17 Å. Discrete crystals of SBB-type material are typically curved, of small size (<1 μm), and show structural variations similar to the serpentine group minerals. Mixed-layer material also occurs in association with planar serpentine. Characteristics of the SBB-type material are not consistent with known terrestrial mixed-layer clay minerals. Evidence for formation by a condensation event or by subsequent alteration of preexisting material is not yet apparent. © 1982.
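The ~17 Å basal periodicity is consistent with one serpentine-type plus two brucite-type layers per ...SBB... repeat. A quick arithmetic check using nominal layer thicknesses (roughly 7.3 Å for a serpentine-type layer and 4.8 Å for a brucite-type layer are textbook values, not figures from this study):

```python
# nominal basal spacings in angstroms (assumed textbook values)
serpentine = 7.3   # serpentine-type (S) layer
brucite = 4.8      # brucite-type (B) layer

# one ...SBB... repeat = one S layer plus two B layers
sbb_period = serpentine + 2 * brucite   # ~16.9 A, close to the observed ~17 A
```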

Relevance:

10.00%

Publisher:

Abstract:

An energy storage system (ESS) can provide ancillary services such as frequency regulation and reserves, as well as smooth the fluctuations of wind power outputs, and hence improve the security and economics of the power system concerned. The combined operation of a wind farm and an ESS has become a widely accepted operating mode. Hence, it appears necessary to consider this operating mode in transmission system expansion planning, and this is the issue systematically addressed in this work. Firstly, the relationship between the cost of a NaS-based ESS and its discharging cycle life is analyzed. A strategy for the combined operation of a wind farm and an ESS is next presented, so as to achieve a good compromise between the operating cost of the ESS and the smoothing effect on the fluctuation of wind power outputs. Then, a transmission system expansion planning model is developed, with the sum of the transmission investment costs, the investment and operating costs of ESSs, and the penalty cost of lost wind energy as the objective function to be minimized. An improved particle swarm optimization algorithm is employed to solve the developed planning model. Finally, the essential features of the developed model and the adopted algorithm are demonstrated using 18-bus and 46-bus test systems.
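The planning model is solved with an improved particle swarm optimisation algorithm. The abstract does not describe the improvement, so the sketch below is plain global-best PSO, minimising a toy stand-in for the objective (transmission cost plus ESS cost plus a lost-wind penalty collapsed into a simple convex function of two decision variables):

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=30, iters=200, seed=0):
    """Plain particle swarm optimisation (a generic sketch, not the
    paper's improved variant) minimising f over box bounds."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social terms (standard coefficients)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# toy planning objective with a known minimum of 5 at (2, 1)
cost = lambda z: (z[0] - 2) ** 2 + 3 * (z[1] - 1) ** 2 + 5
best_x, best_cost = pso_minimise(cost, bounds=[(0, 5), (0, 5)])
```

In the actual planning model each particle would encode candidate line additions and ESS sizes, with the objective evaluated through the operation strategy.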

Relevance:

10.00%

Publisher:

Abstract:

While substantial research on intelligent transportation systems has focused on the development of novel wireless communication technologies and protocols, relatively little work has sought to fully exploit the proximity-based wireless technologies that passengers actually carry with them today. This paper presents the real-world deployment of a system that exploits public transit bus passengers' Bluetooth-capable devices to capture and reconstruct micro- and macro-passenger behavior. We present supporting evidence that approximately 12% of passengers already carry Bluetooth-enabled devices and that the data collected on these passengers captures, with almost 80% accuracy, the daily fluctuation of actual passenger flows. The paper makes three contributions in terms of understanding passenger behavior: we verify that the length of passenger trips is exponentially bounded, that the frequency of passenger trips follows a power-law distribution, and that the microstructure of the network of passenger movements is polycentric.
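A power-law claim of this kind can be checked with the standard continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(xᵢ/xmin) (a common choice; the abstract does not say which estimator the authors used). A sketch on synthetic data:

```python
import math
import random

def powerlaw_alpha(samples, xmin):
    """Continuous maximum-likelihood exponent for a power-law tail:
    alpha-hat = 1 + n / sum(ln(x / xmin)) over samples x >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# synthetic trip-frequency data drawn from a power law with alpha = 2.5,
# via inverse-CDF sampling: x = xmin * u^(-1/(alpha-1)), u in (0, 1]
random.seed(4)
alpha_true = 2.5
xmin = 1.0
data = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1))
        for _ in range(20000)]
alpha_hat = powerlaw_alpha(data, xmin)
```

The estimate should recover the generating exponent to within its standard error (α − 1)/√n.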

Relevance:

10.00%

Publisher:

Abstract:

Reliability analysis is crucial to reducing the unexpected downtime, severe failures and ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest, as the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution accurately represents the population concerned, and that the assumed form of the covariate effects on the hazard is correct. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations due to the two assumptions underlying statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis, and involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research thus investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on the reliability analysis results.
Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to include the handling of incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump, and the results demonstrate that the new approach outperforms the typical incomplete-covariates handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
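Structurally, an NNHM maps a covariate vector to a non-negative hazard through a small network, and incomplete covariates must be handled before or inside the model. A minimal forward-pass sketch, with mean imputation standing in for the baseline "typical approach" (the thesis's actual architecture and its integrated handling of missing data are not specified in the abstract; all names and weights below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def nn_hazard(x, W1, b1, W2, b2):
    """One-hidden-layer network mapping a covariate vector to a hazard.
    A softplus output keeps the hazard non-negative (a structural
    sketch of an NNHM, not the thesis's exact architecture)."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    return float(np.log1p(np.exp(z)))      # softplus, always > 0

def impute(x, means):
    """Fill missing covariates (NaN) with per-covariate means - the
    simplest of the 'typical approaches' compared in the research."""
    x = np.array(x, float)
    missing = np.isnan(x)
    x[missing] = means[missing]
    return x

# random weights just to exercise the forward pass (illustrative only)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal(8), float(rng.standard_normal())
means = np.array([1.0, 0.5, 2.0])          # historical covariate means

x_incomplete = [1.2, np.nan, 1.8]          # e.g. one sensor dropped out
hazard = nn_hazard(impute(x_incomplete, means), W1, b1, W2, b2)
```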

Relevance:

10.00%

Publisher:

Abstract:

Skill shortage issues have long existed in the construction industry in countries like Australia. Coupled with the lead and lag time between market demand and the resultant supply, this has traditionally produced cyclical fluctuations in skills demand within the construction industry. Skills demand and shortages are generally well documented and can even have a level of predictability in Australia, given the tendency to have a delayed reaction to global economic downturns. Sustainability issues in the construction industry have attracted growing public awareness. Traditionally driven by ever-increasing, if only gradual, mandated minimum requirements, the drive towards sustainable development is now increasingly being created by the client. As this demand increases, a corresponding demand for people with the skills to provide these services should be felt. This research examines green skill shortage issues within the context of the construction industry. Stakeholders from across the relevant sectors of the built environment were engaged to ascertain the industry's utilisation of, and demand for, 'green skilled' personnel. These findings will assist stakeholders within the construction industry in negating the effects of a skills shortage in the event of accelerated demand for sustainable construction.

Relevance:

10.00%

Publisher:

Abstract:

Computer vision is increasingly becoming interested in the rapid estimation of object detectors. The canonical strategy of using Hard Negative Mining to train a Support Vector Machine is slow, since the large negative set must be traversed at least once per detector. Recent work has demonstrated that, with an assumption of signal stationarity, Linear Discriminant Analysis is able to learn comparable detectors without ever revisiting the negative set. Even with this insight, the time to learn a detector can still be on the order of minutes. Correlation filters, on the other hand, can produce a detector in under a second. However, this involves the unnatural assumption that the statistics are periodic, and requires the negative set to be re-sampled per detector size. These two methods differ chiefly in the structure which they impose on the covariance matrix of all examples. This paper is a comparative study which develops techniques (i) to assume periodic statistics without needing to revisit the negative set and (ii) to accelerate the estimation of detectors with aperiodic statistics. It is experimentally verified that periodicity is detrimental.
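The LDA shortcut works because the negative statistics (mean and covariance) are assumed stationary: they are estimated once from background data and reused, so only the positive mean changes per detector. A small sketch on synthetic Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(7)

# shared "stationary" statistics, estimated once from background data
d = 20
neg = rng.standard_normal((5000, d))                  # negative set, visited once
mu_neg = neg.mean(axis=0)
Sigma = np.cov(neg, rowvar=False) + 1e-3 * np.eye(d)  # regularised covariance

def lda_detector(pos_examples):
    """LDA detector: w = Sigma^-1 (mu_pos - mu_neg).  Only the positive
    mean changes per detector; the negative statistics are reused, so
    the negative set is never traversed again."""
    mu_pos = pos_examples.mean(axis=0)
    return np.linalg.solve(Sigma, mu_pos - mu_neg)

# synthetic positives offset along a fixed direction
shift = np.zeros(d)
shift[0] = 2.0
pos = rng.standard_normal((200, d)) + shift
w = lda_detector(pos)

# the detector should score held-out positives above held-out negatives
pos_scores = (rng.standard_normal((500, d)) + shift) @ w
neg_scores = rng.standard_normal((500, d)) @ w
```

A correlation filter achieves a similar reuse by assuming the background statistics are not merely stationary but periodic, which is what lets it be solved in the Fourier domain.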