894 results for Averaging operators
Abstract:
Condition monitoring of diesel engines can prevent unpredicted engine failures and their associated consequences. This paper presents an experimental study of the signal characteristics of a 4-cylinder diesel engine under various loading conditions. Acoustic emission, vibration and in-cylinder pressure signals were employed to study the effectiveness of these techniques for condition monitoring and for identifying symptoms of incipient failures. An event-driven synchronous averaging technique was employed to average the quasi-periodic diesel engine signal in the time domain, eliminating or minimizing the effect of engine speed and amplitude variations on the analysis of condition monitoring signals. It was shown that acoustic emission (AE) is a better technique than vibration for condition monitoring of diesel engines owing to its ability to produce high-quality signals (i.e., an excellent signal-to-noise ratio) in a noisy diesel engine environment. It was found that the peak amplitude of AE RMS signals corresponding to impact-like combustion-related events generally decreases as loading increases, owing to a more stable mechanical process in the engine. A small shift in the exhaust valve closing time was observed as the engine load increases, indicating a prolonged combustion process in the cylinder (to produce more power). By contrast, peak amplitudes of the AE RMS attributable to fuel injection increase as the loading increases. This can be explained by the increased fuel friction caused by the higher volume flow rate during injection. Multiple AE pulses during the combustion process were identified in the study, generated by the piston rocking motion and the interaction between the piston and the cylinder wall. The piston rocking motion is caused by the non-uniform pressure distribution acting on the piston head as a result of the non-linear combustion process of the engine. The rocking motion ceased once the pressure in the cylinder chamber stabilized.
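The event-driven synchronous averaging idea can be illustrated in a few lines. Below is a minimal Python sketch, assuming a one-dimensional signal array and the sample indices of a recurring engine event (e.g., a crank-angle trigger pulse); the function name and signature are illustrative, not from the paper.

```python
import numpy as np

def synchronous_average(signal, event_indices, n_points=1024):
    """Average quasi-periodic cycles delimited by event triggers.

    Each cycle (the samples between consecutive events) is resampled to a
    fixed length, so speed variation between cycles does not smear the
    average; the resampled cycles are then averaged point by point.
    """
    cycles = []
    for start, stop in zip(event_indices[:-1], event_indices[1:]):
        cycle = signal[start:stop]
        # Map the cycle onto a common "crank angle" axis of n_points samples.
        x_old = np.linspace(0.0, 1.0, len(cycle))
        x_new = np.linspace(0.0, 1.0, n_points)
        cycles.append(np.interp(x_new, x_old, cycle))
    return np.mean(cycles, axis=0)
```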
Abstract:
Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation scheme, built on an affine motion model, to reduce speckle in optical coherence tomography imaging by image registration and the averaging of multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio proportional to the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
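The quoted square-root-of-N improvement is exactly what averaging registered frames with uncorrelated noise delivers. The sketch below demonstrates it on synthetic data, assuming the B-scans are already registered (the affine registration step itself is not reproduced); all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))   # stand-in for a clean scan line
frames = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(16)]

def snr_db(estimate, reference):
    noise = estimate - reference
    return 10.0 * np.log10(np.mean(reference**2) / np.mean(noise**2))

# Averaging N frames of uncorrelated noise improves SNR by 10*log10(N) dB,
# i.e., amplitude SNR grows as sqrt(N): about +12 dB for N = 16.
print(f"single frame:     {snr_db(frames[0], truth):.1f} dB")
print(f"16-frame average: {snr_db(np.mean(frames, axis=0), truth):.1f} dB")
```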
Abstract:
Velocity jump processes are discrete random walk models that have many applications including the study of biological and ecological collective motion. In particular, velocity jump models are often used to represent a type of persistent motion, known as a “run and tumble”, which is exhibited by some isolated bacteria cells. All previous velocity jump processes are non-interacting, which means that crowding effects and agent-to-agent interactions are neglected. By neglecting these agent-to-agent interactions, traditional velocity jump models are only applicable to very dilute systems. Our work is motivated by the fact that many applications in cell biology, such as wound healing, cancer invasion and development, often involve tissues that are densely packed with cells where cell-to-cell contact and crowding effects can be important. To describe these kinds of high cell density problems using a velocity jump process we introduce three different classes of crowding interactions into a one-dimensional model. Simulation data and averaging arguments lead to a suite of continuum descriptions of the interacting velocity jump processes. We show that the resulting systems of hyperbolic partial differential equations predict the mean behavior of the stochastic simulations very well.
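To make the class of models concrete, here is a minimal sketch of a one-dimensional velocity jump process with one simple crowding rule: a move is aborted if the target lattice site is occupied. The paper introduces three distinct interaction classes; this exclusion-type rule is an assumed illustration, not the authors' exact model, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, steps, turn_rate = 200, 60, 1000, 0.05

positions = rng.choice(L, size=N, replace=False)   # one agent per site
velocities = rng.choice([-1, 1], size=N)           # run direction of each agent
occupied = np.zeros(L, dtype=bool)
occupied[positions] = True

for _ in range(steps):
    for i in rng.permutation(N):        # random sequential update
        if rng.random() < turn_rate:    # tumble: reverse direction
            velocities[i] = -velocities[i]
        target = (positions[i] + velocities[i]) % L   # periodic domain
        if not occupied[target]:        # crowding: exclusion rule
            occupied[positions[i]] = False
            occupied[target] = True
            positions[i] = target
```

Histograms of agent positions and velocities from runs like this are the simulation data against which the continuum (hyperbolic PDE) descriptions can be compared.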
Abstract:
Failing injectors are one of the most common faults in diesel engines. These faults can seriously affect diesel engine operation, causing misfire, knocking, insufficient power output or even complete engine breakdown. It is thus essential to prevent such faults from occurring by monitoring the condition of these injectors. In this paper, the authors present the results of an experimental investigation on identifying the signal characteristics of a simulated incipient injector fault in a diesel engine using both in-cylinder pressure and acoustic emission (AE) techniques. A time-waveform, event-driven synchronous averaging technique was used to minimize or eliminate the effect of engine speed variation and amplitude fluctuation. It was found that AE is an effective method for detecting the simulated injector fault in both the time (crank angle) and frequency (order) domains. It was also shown that the time-domain in-cylinder pressure signal is a poor indicator for condition monitoring and diagnosis of the simulated injector fault because of the small effect of the simulated fault on the engine combustion process. Nevertheless, good correlations between the simulated injector fault and the lower-order components of the enveloped in-cylinder pressure spectrum were found at various engine loading conditions.
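For the order-domain analysis mentioned above, a common way to compute an enveloped spectrum is a Hilbert-transform envelope followed by an FFT. The sketch below assumes the pressure record has already been synchronously averaged and resampled to the crank-angle domain, so FFT bins map directly to engine orders; the function is an assumed illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_order_spectrum(pressure, samples_per_cycle):
    """Envelope spectrum with the frequency axis expressed in engine orders."""
    envelope = np.abs(hilbert(pressure))          # Hilbert-transform envelope
    envelope -= envelope.mean()                   # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    n_cycles = len(pressure) / samples_per_cycle  # cycles in the record
    orders = np.arange(len(spectrum)) / n_cycles  # FFT bin -> engine order
    return orders, spectrum
```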
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users utilize automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
Abstract:
Biomarker analysis has been implemented in sports research in an attempt to monitor the effects of exertion and fatigue in athletes. This study proposed that while such biomarkers may be useful for monitoring injury risk in workers, proteomic approaches might also be utilised to identify novel exertion or injury markers. We found that urinary urea and cortisol levels were significantly elevated in mining workers following a 12-hour overnight shift. These levels failed to return to baseline over 24 h in the more active maintenance crew compared to truck drivers (operators), suggesting a lack of recovery between shifts. Use of a SELDI-TOF MS approach to detect novel exertion or injury markers revealed a spectral feature associated with workers in both work categories who were engaged in higher levels of physical activity. This feature was identified as the LG3 peptide, a C-terminal fragment of the anti-angiogenic/anti-tumourigenic protein endorepellin. This finding suggests that the urinary LG3 peptide may be a biomarker of physical activity. It is also possible that the activity-mediated release of LG3/endorepellin into the circulation may represent a biological mechanism for the known inverse association between physical activity and cancer risk/survival.
Abstract:
Deterministic transit capacity analysis applies to the planning, design and operational management of urban transit systems. The Transit Capacity and Quality of Service Manual (1) and Vuchic (2, 3) enable transit performance to be quantified and assessed using transit capacity and productive capacity. This paper further defines important productive performance measures of an individual transit service and transit line. Transit work (p-km) captures the transit task performed over distance. Passenger transmission (p-km/h) captures the passenger task delivered by a service at speed. Transit productiveness (p-km/h) captures transit work performed over time. These measures are useful to operators in understanding their services' or systems' capabilities and passenger quality of service. This paper accounts for variability in utilized demand by passengers along a line and for high passenger load conditions where passenger pass-up delay occurs. A hypothetical case study of an individual bus service's operation demonstrates the usefulness of passenger transmission in comparing existing and growth scenarios. A hypothetical case study of a bus line's operation during a peak hour window demonstrates the theory's usefulness in examining the contribution of individual services to line productive performance. Scenarios may be assessed using this theory to benchmark or compare lines, segments and conditions, or to consider improvements.
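As an assumed illustration of how these measures combine (the segment data below are invented, not from the paper's case studies): transit work sums passenger load times distance over segments, productiveness divides that work by elapsed time, and transmission expresses the load delivered at segment speed.

```python
# (passengers on board, segment length in km, segment travel time in h)
segments = [
    (40, 2.0, 0.10),
    (55, 3.5, 0.15),
    (30, 1.5, 0.05),
]

transit_work = sum(p * d for p, d, _ in segments)      # p-km over the line
elapsed_time = sum(t for _, _, t in segments)          # h
productiveness = transit_work / elapsed_time           # p-km/h: work over time
# Passenger transmission on a segment: load delivered at speed (p-km/h).
transmission = [p * (d / t) for p, d, t in segments]

print(f"transit work:   {transit_work:.1f} p-km")
print(f"productiveness: {productiveness:.0f} p-km/h")
print("transmission per segment:", [round(x) for x in transmission])
```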
Abstract:
Road dust contains potentially toxic pollutants originating from a range of anthropogenic sources common to urban land uses, together with soil inputs from surrounding areas. The research study analysed the mineralogy and morphology of dust samples from road surfaces in different land uses, and of background soil samples, to characterise the relative source contributions to road dust. The road dust consists primarily of soil-derived minerals (60%), with quartz averaging 40-50% and the remainder being the clay-forming minerals albite, microcline, chlorite and muscovite originating from surrounding soils. About 2% was organic matter, primarily originating from plant matter. Potentially toxic pollutants represented about 30% of the build-up. These pollutants consist of brake and tire wear, combustion emissions and fly ash from asphalt. Heavy metals such as Zn, Cu, Pb, Ni, Cr and Cd primarily originate from vehicular traffic, while Fe, Al and Mn primarily originate from surrounding soils. The research study confirmed the significant contribution of vehicular traffic to dust deposited on urban road surfaces.
Abstract:
The current regulatory approach to coal seam gas projects in Queensland is based on the philosophy of adaptive environmental management. This method of “learning by doing” is implemented in Queensland primarily through the imposition of layered monitoring and reporting duties on the coal seam gas operator alongside obligations to compensate and “make good” harm caused. The purpose of this article is to provide a critical review of the Queensland regulatory approach to the approval and minimisation of adverse impacts from coal seam gas activities. Following an overview of the hallmarks of an effective adaptive management approach, this article begins by addressing the mosaic of approval processes and impact assessment regimes that may apply to coal seam gas projects. This includes recent Strategic Cropping Land reforms. This article then turns to consider the preconditions for land access in Queensland and the emerging issues for landholders relating to the negotiation of access and compensation agreements. This article then undertakes a critical review of the environmental duties imposed on coal seam gas operators relating to hydraulic fracturing, well head leaks, groundwater management and the disposal and beneficial use of produced water. Finally, conclusions are drawn regarding the overall effectiveness of the Queensland framework and the lessons that may be drawn from Queensland’s adaptive environmental management approach.
Abstract:
This article presents a critical analysis of the current and proposed CCS legal frameworks across a number of jurisdictions in Australia in order to examine the legal treatment of the risks of carbon leakage from CCS operations. It does so through an analysis of the statutory obligations and liability rules established under the offshore Commonwealth and Victorian regimes, and onshore Queensland and Victorian legislative frameworks. Exposure draft legislation for CCS laws in Western Australia is also examined. In considering where the losses will fall in the event of leakage, the potential tortious and statutory liabilities of private operators and the State are addressed alongside the operation of statutory protections from liability. The current legal treatment of CCS under the new Australian Carbon Pricing Mechanism is also critiqued.
Abstract:
Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to the review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria, such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.
Abstract:
A Multimodal Seaport Container Terminal (MSCT) is a complex system which requires careful planning and control in order to operate efficiently. It consists of a number of subsystems that require optimisation of the operations within them, as well as synchronisation of machines and containers between the various subsystems. Inefficiency in the terminal can delay ships from their scheduled timetables, as well as cause delays in delivering containers to their inland destinations, both of which can be very costly to their operators. The purpose of this PhD thesis is to use Operations Research methodologies to optimise and synchronise these subsystems as an integrated application. An initial model is developed for the overall MSCT; however, due to the large number of assumptions that had to be made, as well as other issues, it is found to be too inaccurate and infeasible for practical use. Instead, a method is proposed in which models are developed for each subsystem and then integrated with one another. Mathematical models are developed for the Storage Area System (SAS) and the Intra-terminal Transportation System (ITTS). The SAS deals with the movement and assignment of containers to stacks within the storage area, both when they arrive and when they are rehandled to retrieve containers below them. The ITTS deals with scheduling the movement of containers and machines between the storage areas and other sections of the terminal, such as the berth and road/rail terminals. Various constructive heuristics are explored and compared for these models to produce good initial solutions for large-sized problems, which are otherwise impractical to solve by exact methods. These initial solutions are further improved through the use of an innovative hyper-heuristic algorithm that integrates the SAS and ITTS solutions and optimises them through meta-heuristic techniques. The method by which the two models can interact with each other as an integrated system is discussed, as well as how this method can be extended to the other subsystems of the MSCT.
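The hyper-heuristic idea, adaptively choosing among low-level move operators and keeping improvements, can be sketched generically. The loop below is an assumed, simplified illustration of a selection hyper-heuristic, not the thesis's actual algorithm; the operators, objective and acceptance rule are placeholders to be supplied by the SAS/ITTS models.

```python
import random

def hyper_heuristic(initial, objective, operators, iterations=1000):
    """Selection hyper-heuristic: pick operators by adaptive score,
    keep a candidate solution only if it improves the objective."""
    best, best_cost = initial, objective(initial)
    scores = {op: 1.0 for op in operators}
    for _ in range(iterations):
        ops = list(scores)
        op = random.choices(ops, weights=[scores[o] for o in ops])[0]
        candidate = op(best)
        cost = objective(candidate)
        if cost < best_cost:                 # improvement-only acceptance
            best, best_cost = candidate, cost
            scores[op] += 1.0                # reward the successful operator
        else:
            scores[op] = max(0.1, scores[op] * 0.99)   # slow decay otherwise
    return best, best_cost
```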
Abstract:
Retrofit projects differ from newly-built projects in many respects. A retrofit project involves an existing building, which imposes constraints on the owners, designers, operators and constructors throughout the project process. Retrofit projects are risky, complex, less predictable and difficult to plan well, and they need greater coordination. For office building retrofit projects, further restrictions apply, as these buildings are often located in CBD areas and most have to remain operational during the progression of project work. Issues such as site space, material storage and handling, and noise and dust need to be considered and well addressed. In this context, waste management is even more challenging, with small spaces for waste handling, uncertainties in waste control, and the impact of waste management activities on project delivery and building occupants. The current literature on waste management in office building retrofit projects focuses on increasing the waste recovery rate through project planning, monitoring and stakeholder collaboration. However, previous research has not produced knowledge of the particular retrofit processes and their impact on waste generation and management. This paper discusses the interim results of continuing research on new strategies for waste management in office building retrofit projects. First, based on the literature review, it summarizes the unique characteristics of office building retrofit projects and their influence on waste management, and an assumption about waste management strategies is formed. Semi-structured interviews were conducted with industry practitioners, and the findings are presented in the paper. The research assumption was validated in the interviews through the opinions and experiences of the respondents. Finally, the research develops a process model for waste management in office building retrofit projects, introducing two different waste management strategies. In the dismantling phase, waste is generated rapidly as the work progresses, so integrated planning of project delivery and waste generation is needed in order to organize prompt handling and treatment. In the fit-out phase, the work is similar to new construction, and factors particularly linked to generating waste on site need to be controlled and monitored. Continuing research in this space will help improve the practice of waste management in office building retrofit projects. The new strategies will help promote the practicality of project waste planning and management and stakeholders' capability to coordinate waste management and project delivery.
Abstract:
Serving as a powerful tool for extracting localized variations in non-stationary signals, wavelet transforms (WTs) have been introduced into traffic engineering applications; however, some important theoretical fundamentals are lacking. In particular, there is little guidance on selecting an appropriate WT across potential transport applications. The research described in this paper contributes uniquely to the literature by first describing a numerical experiment that demonstrates the shortcomings of commonly-used data processing techniques in traffic engineering (i.e., averaging, moving averaging, second-order differencing, oblique cumulative curves, and the short-time Fourier transform). It then mathematically describes the WT's ability to detect singularities in traffic data. Next, selecting a suitable WT for a particular research topic in traffic engineering is discussed in detail by objectively and quantitatively comparing candidate wavelets' performances in a numerical experiment. Finally, based on several case studies using both loop detector data and vehicle trajectories, it is shown that selecting a suitable wavelet largely depends on the specific research topic, and that the Mexican hat wavelet generally gives a satisfactory performance in detecting singularities in traffic and vehicular data.
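As an assumed illustration of this kind of singularity detection (the synthetic speed series and the scale value are invented): convolving the data with a Mexican hat wavelet at a fixed scale produces large coefficients where the signal changes abruptly, e.g., at a traffic breakdown.

```python
import numpy as np

def mexican_hat(scale, length):
    # Mexican hat (Ricker) wavelet sampled on `length` points.
    t = np.arange(length) - (length - 1) / 2.0
    x = t / scale
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

# Synthetic loop-detector speed series with a sharp breakdown at index 300.
rng = np.random.default_rng(2)
speed = np.full(600, 100.0)
speed[300:] = 40.0
speed += rng.normal(0.0, 2.0, speed.size)

scale = 16
kernel = mexican_hat(scale, 10 * scale)
coeffs = np.convolve(speed - speed.mean(), kernel, mode="same")
core = np.abs(coeffs[len(kernel) // 2 : -len(kernel) // 2])  # drop edge effects
print("singularity detected near index", len(kernel) // 2 + np.argmax(core))
```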
Abstract:
Fractional differential equations are becoming more widely accepted as a powerful tool for modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant-order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (tsfrde) using a finite difference scheme. We adopt the Coimbra variable-order time-fractional operator and a variable-order fractional Laplacian operator in space, where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to deal efficiently with its long-range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order tsfrde is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, which is a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore, an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
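A minimal sketch of the Lanczos step described above, approximating f(A)b for a symmetric matrix A by projecting onto a small Krylov subspace, is given below. It omits the variable-order discretisation and the adaptive preconditioner, and the example matrix and function are illustrative only.

```python
import numpy as np

def lanczos_fAb(A, b, f, m=30):
    """Approximate f(A) @ b via an m-step Lanczos projection.

    No reorthogonalisation or breakdown handling, for brevity.
    """
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # T = V^T A V is tridiagonal; evaluate f(T) e_1 by eigendecomposition.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    eigvals, Q = np.linalg.eigh(T)
    fT_e1 = Q @ (f(eigvals) * Q[0, :])
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: apply a fractional power of a 1-D Laplacian matrix to a vector.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = lanczos_fAb(A, b, lambda lam: lam ** 0.8)   # approximates A^0.8 @ b
```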