112 results for Block signals.
Abstract:
This paper discusses commonly encountered diesel engine problems and the underlying combustion-related faults. Also discussed are the methods used in previous studies to simulate diesel engine faults and the initial results of an experimental simulation of a common combustion-related diesel engine fault, namely diesel engine misfire. This experimental fault simulation represents the first step towards a comprehensive investigation and analysis of the characteristics of acoustic emission signals arising from combustion-related diesel engine faults. Data corresponding to different engine running conditions were captured using in-cylinder pressure, vibration and acoustic emission transducers along with both crank-angle encoder and top-dead centre signals. Using these signals, it was possible to characterise the diesel engine in-cylinder pressure profiles and the effect of different combustion conditions on both vibration and acoustic emission signals.
Abstract:
Acoustic emission (AE) is the phenomenon whereby stress waves are generated by the rapid release of energy within a material caused by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors and subsequently analyzing the recorded signals to gather information about the nature of the source. Although the AE technique is one of the popular non-destructive evaluation (NDE) techniques for structural health monitoring of mechanical, aerospace and civil structures, several challenges still exist in its successful application. The presence of spurious noise signals can mask genuine damage-related AE signals; hence a major challenge is finding ways to discriminate signals from different sources. Analysis of the parameters of recorded AE signals, comparison of the amplitudes of AE wave modes and investigation of the uniqueness of recorded AE signals have been mentioned as possible criteria for source differentiation. This paper reviews common approaches currently in use for source discrimination, particularly focusing on structural health monitoring of civil engineering structural components such as beams, and further investigates the applications of some of these methods by analyzing AE data from laboratory tests.
Abstract:
Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes the PU signal is ‘stationary’ and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as ‘non-stationary’, undergoing state changes. Existing studies have focused on non-stationary PU behaviour during the transmission period; however, very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU.

This research consists of two major investigations. The first stage investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second stage develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals.

The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the performance actually achieved. Through analysing the statistical properties of duty cycle, performance degradation is proved to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary.

The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle; this detector is most suitable for applications that require long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU, hence the proposed detector can calculate average detection performance that reflects realistic results.

A detection framework for the application of spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models. Following this framework ensures that the signal model accurately reflects practical behaviour while the detection model implemented is also suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise the sensing efficiency for non-stationary PU. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
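As an illustrative aside, the degradation described in the first investigation can be reproduced with a conventional energy detector whose threshold is set under the stationary-PU assumption: when the PU is active for only part of the sensing window, the achieved detection probability falls below what the conventional analysis predicts. The sketch below is a minimal Python/NumPy illustration under assumed parameters (real-valued Gaussian signal and noise, Gaussian threshold approximation); it is not the duty cycle based detector formulated in the thesis.

import numpy as np
from scipy.stats import norm

def energy_statistic(y):
    """Energy detector test statistic: sum of squared sample magnitudes."""
    return np.sum(np.abs(y) ** 2)

def threshold(noise_var, n, pfa):
    """Decision threshold for a target false-alarm probability, using the
    usual large-n Gaussian approximation of the noise-only statistic."""
    return noise_var * (n + np.sqrt(2.0 * n) * norm.ppf(1.0 - pfa))

def empirical_pd(n=1000, snr_db=-10.0, duty_cycle=1.0, trials=2000, pfa=0.1):
    """Empirical detection probability when the PU transmits for only a
    fraction (duty_cycle) of the sensing window; duty_cycle=1.0 recovers
    the conventional stationary-PU assumption."""
    noise_var = 1.0
    sig_var = noise_var * 10.0 ** (snr_db / 10.0)
    thr = threshold(noise_var, n, pfa)
    occupied = int(round(duty_cycle * n))
    hits = 0
    for _ in range(trials):
        noise = np.sqrt(noise_var) * np.random.randn(n)
        signal = np.zeros(n)
        signal[:occupied] = np.sqrt(sig_var) * np.random.randn(occupied)
        hits += energy_statistic(noise + signal) > thr
    return hits / trials

# Detection probability falls as the PU occupies a smaller fraction of the
# sensing window, even though the threshold (set assuming full occupancy)
# is unchanged.
for d in (1.0, 0.75, 0.5, 0.25):
    print(d, empirical_pd(duty_cycle=d))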
Abstract:
Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
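As a rough illustration of the fusion step described above, the sketch below shows one simple way interim block-level decisions over overlapping blocks can be integrated into a pixel-level foreground mask, assuming each block already carries a foreground probability. The texture descriptor and classifier cascade of the paper are not reproduced, and the averaging and 0.5 threshold used here are placeholder choices.

import numpy as np

def pixel_mask_from_block_probs(block_probs, img_shape, block_size):
    """Illustrative fusion of overlapping block-level foreground
    probabilities into a pixel-level mask: every pixel accumulates the
    probabilities of all blocks that cover it, the sum is normalised by
    the number of covering blocks, and the result is thresholded.

    block_probs maps a block's top-left (row, col) corner to a foreground
    probability in [0, 1] produced by some per-block classifier (not
    shown here)."""
    acc = np.zeros(img_shape, dtype=float)
    cover = np.zeros(img_shape, dtype=float)
    for (r, c), p in block_probs.items():
        acc[r:r + block_size, c:c + block_size] += p
        cover[r:r + block_size, c:c + block_size] += 1.0
    prob = np.divide(acc, cover, out=np.zeros_like(acc), where=cover > 0)
    return prob > 0.5

# Toy example: an 8x8 frame analysed with 4x4 blocks advanced in steps of
# 2 pixels, so interior pixels are covered by several overlapping blocks.
blocks = {(r, c): float((r + c) % 4 == 0)
          for r in range(0, 5, 2) for c in range(0, 5, 2)}
mask = pixel_mask_from_block_probs(blocks, (8, 8), block_size=4)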
Abstract:
This paper presents a study in which a series of tests was undertaken using a naturally aspirated 4-cylinder, 2.216 litre Perkins Diesel engine fitted with a piston having an undersized skirt. This experimental simulation resulted in engine running conditions that included abnormally high levels of piston slap occurring in one of the cylinders. The detectability of the resultant Diesel engine piston slap was investigated using acoustic emission signals. Data corresponding to both normal and piston slap engine running conditions were captured using acoustic emission transducers along with both in-cylinder pressure and top-dead centre reference signals. Using these signals it was possible to demonstrate that the increased piston slap running conditions were distinguishable by monitoring the piston slap events occurring near the piston mid-stroke positions. However, when monitoring the piston slap events occurring near the TDC/BDC piston stroke positions, the normal and excessive piston slap engine running conditions were not clearly distinguishable.
Abstract:
This paper presents a methodology for determining the vertical hydraulic conductivity (Kv) of an aquitard in a multilayered leaky system, based on harmonic analysis of arbitrary water-level fluctuations in the aquifers. As a result, the Kv of the aquitard is expressed as a function of the phase shift between the water-level signals measured in the two adjacent aquifers. Based on this expression, we propose a robust method to calculate Kv by linear regression analysis of logarithm-transformed frequencies and phases. The frequencies at which Kv is calculated are identified by coherence analysis. The proposed methods are validated by a synthetic case study and are then applied to the Westbourne and Birkhead aquitards, which form part of a five-layered leaky system in the Eromanga Basin, Australia.
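As an illustration of the signal-processing scaffolding named in the abstract (cross-spectral phase shift, coherence screening, and regression on log-transformed quantities), a minimal NumPy/SciPy sketch is given below. The analytical expression that converts the fitted phase-frequency relationship into Kv comes from the paper's leaky-aquifer solution and is not reproduced here; the function names and the coherence cut-off are illustrative assumptions.

import numpy as np
from scipy.signal import csd, coherence

def cross_spectral_phase(h_upper, h_lower, fs, nperseg=1024):
    """Phase difference (radians) between the water-level records of the
    two aquifers bounding the aquitard, taken from the cross-spectrum."""
    f, pxy = csd(h_upper, h_lower, fs=fs, nperseg=nperseg)
    return f, np.angle(pxy)

def coherent_band(h_upper, h_lower, fs, min_coh=0.8, nperseg=1024):
    """Frequencies at which the two records are strongly coherent; only
    these would be retained for the Kv estimation."""
    f, coh = coherence(h_upper, h_lower, fs=fs, nperseg=nperseg)
    return f[coh >= min_coh]

def log_log_fit(freqs, phases):
    """Linear regression of log(phase) against log(frequency); the fitted
    slope and intercept would then enter the paper's analytical expression
    relating phase shift to Kv (not reproduced here)."""
    keep = (freqs > 0) & (phases > 0)
    slope, intercept = np.polyfit(np.log(freqs[keep]), np.log(phases[keep]), 1)
    return slope, intercept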
Abstract:
In this paper, we present three counterfeiting attacks on block-wise dependent fragile watermarking schemes. We consider vulnerabilities such as the exploitation of a weak correlation among block-wise dependent watermarks to modify valid watermarked (medical or other digital) images so that they can still be verified as authentic, even though they are not. Experimental results successfully demonstrate the practicability and consequences of the proposed attacks on some relevant schemes. The proposed attack models can be used as a means to systematically examine the security levels of similar watermarking schemes.
Abstract:
Focal segmental glomerulosclerosis (FSGS) is the consequence of a disease process that attacks the kidney's filtering system, causing serious scarring. More than half of FSGS patients develop chronic kidney failure within 10 years, ultimately requiring dialysis or renal transplantation. Several genes are currently known to cause the hereditary forms of FSGS (ACTN4, TRPC6, CD2AP, INF2, MYO1E and NPHS2). This study involves a large, unique, multigenerational Australian pedigree in which FSGS co-segregates with progressive heart block with apparent X-linked recessive inheritance. Through a classical combined approach of linkage and haplotype analysis, we identified a 21.19 cM interval implicated on the X chromosome. We then used a whole-exome sequencing approach to identify two mutated genes, NXF5 and ALG13, located within this linkage interval. The two mutations, NXF5-R113W and ALG13-T141L, segregated perfectly with the disease phenotype in the pedigree and were not found in a large healthy control cohort. Analysis using bioinformatics tools predicted the R113W mutation in the NXF5 gene to be deleterious, and cellular studies support a role for the mutation in the stability and localization of the protein, suggesting a causative role in these co-morbid disorders. Further studies are now required to determine the functional consequences of these novel mutations for the development of FSGS and heart block in this pedigree, and to determine whether these mutations have implications for more common forms of these diseases in the general population.
Abstract:
Myopia (short-sightedness) is a common ocular disorder of children and young adults. Studies primarily using animal models have shown that the retina controls eye growth and the outer retina is likely to have a key role. One theory is that the proportion of L (long-wavelength-sensitive) and M (medium-wavelength-sensitive) cones is related to myopia development, with a high L/M cone ratio predisposing individuals to myopia. However, not all dichromats (persons with red-green colour vision deficiency) with extreme L/M cone ratios have high refractive errors. We predict that the L/M cone ratio will vary in individuals with normal trichromatic colour vision but not show a systematic difference simply due to refractive error. The aim of this study was to determine whether L/M cone ratios in the central 30° differ between myopic and emmetropic young, colour-normal adults. Information about L/M cone ratios was determined using the multifocal visual evoked potential (mfVEP). The mfVEP can be used to measure the response of visual cortex to different visual stimuli. The visual stimuli were generated and measurements performed using the Visual Evoked Response Imaging System (VERIS 5.1). The mfVEP was measured while the L and M cone systems were separately stimulated using the method of silent substitution. The method of silent substitution alters the output of three primary lights, each with physically different spectral distributions, to control the excitation of one or more photoreceptor classes without changing the excitation of the unmodulated photoreceptor classes. The stimulus was a dartboard array subtending 30° horizontally and 30° vertically on a calibrated LCD screen, driven by an m-sequence of length 2^15 − 1. The N1-P1 amplitude ratio of the mfVEP was used to estimate the L/M cone ratio. Data were collected for 30 young adults (22 to 33 years of age), consisting of 10 emmetropes (+0.3±0.4 D) and 20 myopes (–3.4±1.7 D). The stimulus and analysis techniques were confirmed using the responses of two dichromats. For the entire participant group, the estimated central L/M cone ratios ranged from 0.56 to 1.80 in the central 3°-13° diameter ring and from 0.94 to 1.91 in the more peripheral 13°-30° diameter ring. Within 3°-13°, the mean L/M cone ratio of the emmetropic group was 1.20±0.33 and the mean was similar, 1.20±0.26, for the myopic group. For the 13°-30° ring, the mean L/M cone ratio of the emmetropic group was 1.48±0.27 and it was slightly lower in the myopic group, 1.30±0.27. Independent-samples t-tests indicated no significant difference between the L/M cone ratios of the emmetropic and myopic groups for either the central 3°-13° ring (p=0.986) or the more peripheral 13°-30° ring (p=0.108). The similar distributions of estimated L/M cone ratios in the samples of emmetropes and myopes indicate that there is likely to be no association between the L/M cone ratio and refractive error in humans.
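For reference, the reported group comparisons can be approximately reproduced from the summary statistics quoted above, assuming the ± values are standard deviations and that a pooled-variance independent-samples t-test was used; the published p-values were computed from the raw data, so the values below will only be close to them.

from scipy.stats import ttest_ind_from_stats

# Approximate re-computation of the reported comparisons from the summary
# statistics in the abstract (10 emmetropes vs 20 myopes), assuming the
# +/- values are standard deviations and equal variances.
central = ttest_ind_from_stats(mean1=1.20, std1=0.33, nobs1=10,
                               mean2=1.20, std2=0.26, nobs2=20)
peripheral = ttest_ind_from_stats(mean1=1.48, std1=0.27, nobs1=10,
                                  mean2=1.30, std2=0.27, nobs2=20)
print(central.pvalue)     # near the reported p = 0.986 for the 3°-13° ring
print(peripheral.pvalue)  # near the reported p = 0.108 for the 13°-30° ring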
Abstract:
Wide-Area Measurement Systems (WAMS) provide the opportunity to utilize remote signals from different locations for the enhancement of power system stability. This paper focuses on the implementation of remote measurements as supplementary signals for off-center Static Var Compensators (SVCs) to damp inter-area oscillations. A combination of the participation factor and residue methods is used to select the most effective stabilizing signal. The speed difference of two generators from separate areas is identified as the best stabilizing signal and is used as a supplementary signal for the lead-lag controller of the SVCs. Time delays of the remote measurements and control signals are considered. The Wide-Area Damping Controller (WADC) is implemented in the Matlab Simulink framework and is tested under different operating conditions. Simulation results reveal that the proposed WADC improves the dynamic characteristics of the system significantly.
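As a sketch of the residue-based signal selection mentioned above, the snippet below ranks candidate remote feedback signals by the residue magnitude of the least-damped oscillatory mode of a linearized system model. The matrices involved are assumed inputs from a small-signal power system model; the participation-factor screening, the time-delay handling and the lead-lag design are not shown.

import numpy as np

def rank_signals_by_residue(A, B, C_candidates):
    """Illustrative residue-method ranking of candidate wide-area feedback
    signals. For the linearized system dx/dt = A x + B u, y_k = C_k x, the
    residue of mode lambda_i in the transfer function from the SVC input to
    candidate output k is R_ik = (C_k v_i)(w_i B), where v_i / w_i are the
    right / left eigenvectors of A. A larger |R_ik| at the poorly damped
    inter-area mode marks a more effective stabilizing signal."""
    eigvals, V = np.linalg.eig(A)
    W = np.linalg.inv(V)                      # rows are left eigenvectors
    osc = np.where(np.abs(eigvals.imag) > 1e-9)[0]
    zeta = -eigvals.real[osc] / np.abs(eigvals[osc])
    i = osc[np.argmin(zeta)]                  # least-damped oscillatory mode
    residues = [abs((C @ V[:, i]) * (W[i, :] @ B)) for C in C_candidates]
    ranking = np.argsort(residues)[::-1]      # best candidate first
    return eigvals[i], ranking, residues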
Abstract:
A fundamental part of many authentication protocols that authenticate a party to a human involves the human recognizing or otherwise processing a message received from that party. Examples include typical implementations of Verified by Visa, in which a message previously stored by the human at a bank is sent by the bank to the human to authenticate the bank to the human, or the expectation that humans will recognize or verify an extended validation certificate in an HTTPS context. This paper presents general definitions and building blocks for the modelling and analysis of human recognition in authentication protocols, allowing the creation of proofs for protocols that include humans. We cover both generalized trawling and human-specific targeted attacks. As examples of the range of uses of our construction, we use the model presented in this paper to prove the security of a mutual authentication login protocol and a human-assisted device pairing protocol.
Abstract:
Earthwork planning has been considered in this article, and a generic block partitioning and modelling approach has been devised to provide strategic plans at various levels of detail. Conceptually this approach is more accurate and comprehensive than others, for instance those that are section based. In response to environmental concerns, the metrics for decision making were fuel consumption and emissions; haulage distance and gradient are also included as they are important components of these metrics. Advantageously, the fuel consumption metric is generic and captures the physical difficulty of travelling over inclines of different gradients in a way that is consistent across all hauling vehicles. For validation, the proposed models and techniques have been applied to a real-world road project. The numerical investigations have demonstrated that the models can be solved with relatively little CPU time. The proposed block models also result in solutions of superior quality, i.e. they have reduced fuel consumption and cost. Furthermore, the plans differ considerably from those based solely upon a distance-based metric, thus demonstrating a need for industry to reflect upon current practices.
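As a purely illustrative aside, a block-to-block haul can be costed in fuel terms from distance, net rise and payload using elementary work-against-resistance physics, which is the kind of gradient-aware metric the abstract refers to. The coefficients below are hypothetical placeholders, not values from the paper's fuel model.

def haul_fuel_per_trip(distance_m, rise_m, payload_kg,
                       rolling_coeff=0.03, fuel_per_joule=7e-8):
    """Illustrative fuel estimate (litres) for hauling one load between two
    blocks: work done against rolling resistance over the haul distance plus
    work done against gravity on the net rise, converted to fuel with a
    nominal engine efficiency. All coefficients are hypothetical."""
    g = 9.81
    rolling_work = rolling_coeff * payload_kg * g * distance_m
    grade_work = max(rise_m, 0.0) * payload_kg * g
    return (rolling_work + grade_work) * fuel_per_joule

# Example: a cut block and a fill block whose centroids are 400 m apart
# with a 12 m rise, moving a 30 t load per trip.
print(haul_fuel_per_trip(400.0, 12.0, 30000.0))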