140 results for High-frequency induction


Relevance: 80.00%

Abstract:

Variable Speed Limits (VSL) is an Intelligent Transportation Systems (ITS) control tool that can enhance traffic safety and has the potential to contribute to traffic efficiency. Queensland's motorways experience a large volume of commuter traffic in peak periods, leading to heavy recurrent congestion and a high frequency of incidents. Consequently, Queensland's Department of Transport and Main Roads has considered deploying VSL to improve safety and efficiency. This paper identifies three types of VSL and three applicable conditions for activating VSL on Queensland motorways: high flow, queuing and adverse weather. The design objectives and methodology for each condition are analysed, and micro-simulation results are presented to demonstrate the effectiveness of VSL.

Relevance: 80.00%

Abstract:

Loss of the short arm of chromosome 1 is frequently observed in many tumor types, including melanoma. We recently localized a third melanoma susceptibility locus to chromosome band 1p22. Critical recombinants in linked families localized the gene to a 15-Mb region between D1S430 and D1S2664. To map the locus more finely we have performed studies to assess allelic loss across the region in a panel of melanomas from 1p22-linked families, sporadic melanomas, and melanoma cell lines. Eighty percent of familial melanomas exhibited loss of heterozygosity (LOH) within the region, with a smallest region of overlapping deletions (SRO) of 9 Mb between D1S207 and D1S435. This high frequency of LOH makes it very likely that the susceptibility locus is a tumor suppressor. In sporadic tumors, four SROs were defined. SRO1 and SRO2 map within the critical recombinant and familial tumor region, indicating that one or the other is likely to harbor the susceptibility gene. However, SRO3 may also be significant because it overlaps with the marker with the highest 2-point LOD score (D1S2776), part of the linkage recombinant region, and the critical region defined in mesothelioma. The candidate genes PRKCL2 and GTF2B, within SRO2, and TGFBR3, CDC7, and EVI5, in a broad region encompassing SRO3, were screened in 1p22-linked melanoma kindreds, but no coding mutations were detected. Allelic loss in melanoma cell lines was significantly less frequent than in fresh tumors, indicating that this gene may not be involved late in progression, such as in overriding cellular senescence, which is necessary for the propagation of melanoma cells in culture.

Relevance: 80.00%

Abstract:

Road traffic injuries are a major global public health problem but continue to receive inadequate attention. Alcohol influences both risk and consequence of road traffic injury but the scale of the problem is not well understood in many countries. In Vietnam, economic development has brought a substantial increase in the number of registered motorcycles as well as alcohol consumption. Traffic injury is among the leading causes of death in Vietnam but there is little local information regarding alcohol related traffic injuries. The primary goal of this study is to explore the drinking and driving patterns of males and their perceptions towards drink-driving and to determine the relationship between alcohol consumption and road traffic injuries. Furthermore, this thesis aims to present the situation analysis for choosing priority actions to reduce drinking and driving in Vietnam. The study is a combination of two cross-sectional surveys and a pilot study. The pilot study, involving 224 traffic injured patients, was conducted to test the tools and the feasibility of approach methods. In the first survey, male patrons (n=464) were randomly selected at seven restaurants. Face-to-face interviews were conducted when patrons just arrived and breath tests were collected when they were about to leave the restaurant. In the second survey, male patients admitted to hospital following a traffic injury (n=480, of which 414 were motorcycle or bicycle riders) were interviewed and their blood alcohol concentration (BAC) measured by breathalyzer. The results show broadly similar patterns of drinking and driving among male patrons and male traffic injured patients with a high frequency of drinking and drink-driving reported among the majority of the two groups. A high proportion of male patrons were leaving restaurants with a BAC over the legal limit. 
Factors significantly associated with the number of drinks and BAC were age, hazardous drinking, frequency of drink-driving in the past year, self-estimated number of drinks consumed to drive legally, perceived family disapproval of drink-driving, and perceived legal and physical risk. The proportions of patrons and patients with a BAC above the legal limit of 0.05 were 86.7% and 60.4% respectively, much higher than found in previous studies. In addition, both groups had a high prevalence of BAC over 0.15 g/100 ml (39.7% of patrons and 45.6% of patients), a level that can seriously affect driving capacity. Results from the case-crossover analysis for patients indicate a dose-response relationship between alcohol consumption and the risk of traffic injury. The risk of traffic injury increased when alcohol was consumed before driving, with a more than 13-fold increase when six or more drinks were consumed. Regarding perceptions towards drinking and driving, the findings corroborate the low awareness among males in Vietnam, with a majority of respondents having low knowledge of safe and legally permissible alcohol use and a low perceived risk of drinking and driving. The results also indicate a huge gap in prevention skills in terms of planning ahead or using alternative transport to avoid drink-driving, and a perception by patrons and patients of a low rate of disapproval of drink-driving from peers and family. The findings of this study have considerable implications for national policy, injury prevention, clinical practice, reporting systems, and further research. The low rate of compliance with existing laws and the generally low perceived legal risk of drink-driving found in this study call for the strengthening of enforcement, along with mass media campaigns and news coverage, in order to decrease the widespread perception of impunity and thereby reduce the level of drink-driving.
In addition, no significant difference was found in this study in the risk of traffic injuries between car drivers and motorcycle drivers. The current inconsistency between the legal BAC for drivers of motorcycles and that for cars thus needs addressing. Furthermore, as drinking was found to be very common, rather than solely targeting drink-driving it is important to call for a more strategic and comprehensive approach to alcohol policy in Vietnam. This study also has considerable implications for clinical practice in terms of screening and brief interventions. Our study suggests that the short form of the AUDIT (AUDIT-C) screening tool is appropriate for use in busy emergency departments. The high proportion of traffic injured patients with evidence of alcohol abuse or hazardous drinking suggests that brief interventions by alcohol and drug counselors in emergency departments are a sensible option for addressing this important problem. The significance of this study lies in the combination of the systematic collection of breath tests and the use of a case-crossover design to estimate the risk of traffic injuries after alcohol consumption. The results provide convincing evidence to policy makers, health authorities and the media to help raise community awareness and policy advocacy toward the drink-driving problem in Vietnam. The findings suggest an urgent need for a multi-sectoral approach to curtail drink-driving in Vietnam, especially programs to raise community awareness and effective legal enforcement. Furthermore, serving as a situation analysis, the thesis should inform the formulation of interventions designed to curtail drinking and driving in Vietnam and other developing countries.

Relevance: 80.00%

Abstract:

In natural estuaries, scalar diffusion and dispersion are driven by turbulence. In the present study, detailed turbulence measurements were conducted in a small subtropical estuary with semi-diurnal tides under neap tide conditions. Three acoustic Doppler velocimeters were installed mid-estuary at fixed locations close together. The units were sampled simultaneously and continuously at relatively high frequency for 50 h. The results illustrated the influence of tidal forcing in the small estuary, although low-frequency longitudinal velocity oscillations were also observed and believed to be induced by external resonance. The boundary shear stress data implied that the turbulent shear in the lower flow region was one order of magnitude larger than the boundary shear itself. The observations differed from turbulence data in a laboratory channel; a key feature of the natural estuary flow was the significant three-dimensional effects associated with strong secondary currents, including transverse shear events. The velocity covariances and triple correlations, as well as the backscatter intensity and its covariances, were calculated for the entire field study. The covariances of the longitudinal velocity component showed some tidal trend, while the covariances of the transverse horizontal velocity component exhibited trends that reflected changes in secondary current patterns between ebb and flood tides. The triple correlation data tended to show some differences between ebb and flood tides. The acoustic backscatter intensity data were characterised by large fluctuations during the entire study, with a dimensionless fluctuation intensity I′b/Īb (standard deviation over mean) between 0.46 and 0.54. An unusual feature of the field study was some moderate rainfall prior to and during the first part of the sampling period. Visual observations showed some surface scars and marked channels, while some mini transient fronts were observed.
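As an illustration of the statistics named above (velocity covariances, triple correlations and the dimensionless backscatter fluctuation intensity), here is a minimal sketch on synthetic stand-in samples; the velocity and backscatter values below are assumed placeholders, not the field data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ADV velocity samples (m/s) and backscatter counts;
# real data would come from the 50 h field record.
u = 0.30 + 0.05 * rng.standard_normal(10_000)   # longitudinal velocity
v = 0.02 + 0.03 * rng.standard_normal(10_000)   # transverse velocity
Ib = 90.0 + 40.0 * rng.standard_normal(10_000)  # backscatter intensity

# Turbulent fluctuations about the local mean
u_f = u - u.mean()
v_f = v - v.mean()

# Velocity covariance <u'v'> (a Reynolds-stress component) and a triple correlation
uv_cov = np.mean(u_f * v_f)
uuu = np.mean(u_f ** 3)

# Dimensionless backscatter fluctuation intensity I'_b / I_b (std over mean),
# the quantity reported as 0.46-0.54 in the study
I_ratio = Ib.std() / Ib.mean()
print(uv_cov, uuu, round(I_ratio, 2))
```

In field practice the means would be computed over a sliding window so the fluctuations track the tidal trend rather than the whole-record average.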

Relevance: 80.00%

Abstract:

Since scalar dispersion in small estuaries can rarely be predicted accurately, new field measurements were conducted continuously at relatively high frequency for up to 50 h (per investigation) in a small subtropical estuary with semidiurnal tides. The bulk flow parameters varied in time with periods comparable to the tidal cycles and other large-scale processes. The turbulence properties depended upon the instantaneous local flow properties. They were little affected by the flow history, but their structure and temporal variability were influenced by a variety of parameters including the tidal conditions and bathymetry. A striking feature of the data sets was the large fluctuations in all turbulence characteristics during the tidal cycle, and the basic differences between neap and spring tide turbulence.

Relevance: 80.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronics switching devices, which inject high-frequency components alongside the desired current.
Also, noise and harmonic distortions can impair the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise distortion, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents using the estimates provided. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The frequency estimate is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and it interacts with the Kalman filter. The frequency estimate is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical derivation with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are improved owing to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
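As a toy illustration of the kind of digital parameter estimation this thesis builds on, the sketch below fits the amplitude and phase of a noisy 50 Hz signal by least squares, in the spirit of the LES technique; the sampling setup and all numbers are assumptions, and this is not the thesis's actual algorithm:

```python
import numpy as np

# Assumed setup: the frequency is already known (e.g. from a separate
# frequency estimator), so only amplitude and phase remain to be found.
f = 50.0           # known system frequency, Hz
fs = 2000.0        # sampling rate, Hz
t = np.arange(200) / fs

rng = np.random.default_rng(1)
true_amp, true_phase = 1.5, 0.7
x = true_amp * np.cos(2 * np.pi * f * t + true_phase) \
    + 0.05 * rng.standard_normal(t.size)

# Model x[k] ~ a*cos(w t) + b*sin(w t); solve for (a, b) by least squares
H = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
a, b = np.linalg.lstsq(H, x, rcond=None)[0]

amp = np.hypot(a, b)        # estimated amplitude
phase = np.arctan2(-b, a)   # estimated phase angle (cos convention)
print(round(amp, 3), round(phase, 3))
```

The thesis's modified Kalman filters perform this estimation recursively, sample by sample, rather than on a stored window as here.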

Relevance: 80.00%

Abstract:

This paper presents an ultrasonic velocity measurement method to investigate the possible effects of high-voltage, high-frequency pulsed power on the elasticity of cortical bone material. Before applying a pulsed power signal to live bone, it is essential to determine, non-destructively, the safe parameters of the pulsed power applied. Therefore, the possible changes in cortical bone material elasticity due to a specified pulsed power excitation have been investigated. A controllable positive buck-boost converter with adjustable output voltage and frequency was used to generate high-voltage pulses (500 V magnitude at 10 kHz). To determine bone elasticity, ultrasonic velocity measurements were conducted on two groups of cortical bone samples: a control group (unexposed to pulsed power but kept in the same environmental conditions) and a group exposed to pulsed power. The Young's modulus of the cortical bone samples was determined and compared before and after applying the pulsed power signal. After applying the high-voltage pulses, no significant variation in the elastic properties of the cortical bone specimens was found compared to the control. The result shows that pulsed power with the nominated parameters can be applied to cortical bone tissue without any considerable negative effect on the elasticity of the bone material.
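The elasticity determination rests on the standard relation between ultrasonic velocity and Young's modulus. A sketch with assumed, typical-order values (the paper's actual sample data are not reproduced here):

```python
# Illustrative only: the relation behind ultrasonic elasticity measurement.
# In the thin-rod approximation, Young's modulus follows from the measured
# longitudinal wave velocity v and the density rho as E = rho * v**2.
# All numbers below are assumed typical-order values, not the paper's data.
rho = 1900.0            # cortical bone density, kg/m^3 (assumed)
thickness = 5e-3        # sample thickness, m (assumed)
transit_time = 1.43e-6  # measured pulse transit time, s (assumed)

v = thickness / transit_time  # ultrasonic velocity, m/s
E = rho * v ** 2              # Young's modulus, Pa
print(round(v), round(E / 1e9, 1))  # velocity (m/s) and modulus (GPa)
```

Comparing E before and after the pulsed-power exposure, as the paper does, then reduces to comparing the measured transit times through the same sample.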

Relevance: 80.00%

Abstract:

In natural waterways and estuaries, the understanding of turbulent mixing is critical to the knowledge of sediment transport, stormwater runoff during flood events, and the release of nutrient-rich wastewater into ecosystems. In the present study, field measurements were conducted in a small subtropical estuary with a micro-tidal range and semi-diurnal tides during king tide conditions, i.e., the largest tidal range for both 2009 and 2010. The turbulent velocity measurements were performed continuously at high frequency (50 Hz) for 60 h. Two acoustic Doppler velocimeters (ADVs) were sampled simultaneously in the middle estuarine zone, and a third ADV was deployed in the upper estuary for 12 h only. The results provided a unique characterisation of the turbulence in both the middle and upper estuarine zones under king tide conditions. The present observations showed some marked differences between king tide and neap tide conditions. During the king tide conditions, tidal forcing was the dominant water exchange and circulation mechanism in the estuary. In contrast, the long-term oscillations linked with internal and external resonance played a major role in the turbulent mixing during neap tides. The data set showed further that the upper estuarine zone was drastically less affected by the spring tide range: the flow motion remained slow, but the turbulent velocity data were affected by the propagation of a transient front during the very early flood tide motion at the sampling site.

Relevance: 80.00%

Abstract:

Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to taking the volatility risk premium into account when using implied volatility for univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
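The realised (high-frequency) measures referred to above are built from sums of squared intraday returns. A minimal sketch with simulated returns; the return process and all parameters are assumptions for illustration:

```python
import numpy as np

# Sketch: realised variance from intraday log-returns, the kind of
# high-frequency measure compared against option-implied forecasts.
rng = np.random.default_rng(2)
daily_vol = 0.01   # assumed "true" daily volatility of the simulated asset
m = 78             # e.g. 5-minute returns over a 6.5 h trading session
returns = (daily_vol / np.sqrt(m)) * rng.standard_normal(m)

realised_var = np.sum(returns ** 2)    # realised variance for the day
realised_vol = np.sqrt(realised_var)   # realised volatility
ann_vol = realised_vol * np.sqrt(252)  # annualised, assuming 252 trading days
print(round(realised_vol, 4), round(ann_vol, 3))
```

In practice the sampling interval trades off efficiency against microstructure noise, which is why 5-minute returns are a common choice in this literature.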

Relevance: 80.00%

Abstract:

A breaker restrike is an abnormal arcing phenomenon leading to possible breaker failure. Eventually, this failure leads to interruption of the transmission and distribution of the electricity supply system until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008 a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the limitations of the radiometric measurement method are a band-limited frequency response as well as limitations in amplitude determination. Current restrike detection methods and algorithms require the use of wide-bandwidth current transformers and high voltage dividers. A restrike switch model using the Alternative Transient Program (ATP) and Wavelet Transforms to support diagnostics is proposed. Restrike phenomena thus become a new diagnostic process using measurements, ATP and Wavelet Transforms for online interrupter monitoring. This research project investigates the restrike switch model parameter 'A' (dielectric voltage gradient), related to normal and slowed cases of the contact opening velocity, and the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load at current quenching or chopping, a transient voltage is developed across the contact gap. The dielectric strength of the gap should rise to a point where it can withstand this transient voltage. If it does not, the gap will flash over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap. This is the point at which the gap voltage has reached a value that exceeds the dielectric strength of the gap.
This research shows that a change in the opening contact velocity of the vacuum CB produces a corresponding change in the slope of the gap escalation voltage envelope. To investigate the diagnostic process, an ATP restrike switch model was modified with contact opening velocity computation for restrike waveform signature analyses, along with experimental investigations. This also enhanced a mathematical CB model with the empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV, and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if there are similar restrike waveform signatures for measured and simulated waveforms. The restrike switch model applications are used for: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and developing a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation derived from the literature that can calculate the life extension of the interrupter of an SF6 high voltage CB. The restrike waveform signatures for a medium and a high voltage CB identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection. An experimental investigation of a 12 kV vacuum CB diagnostic was carried out for the parameter determination, and a passive antenna calibration was also successfully developed with applications for field implementation.
The degradation features were also evaluated with a predictive interpretation technique from the experiments, and the subsequent simulation indicates that the drop in voltage is related to the slow opening velocity of the mechanism, giving a measure of contact degradation. A predictive interpretation technique is a computer modelling approach for assessing switching device performance, which allows one to vary a single parameter at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis outcome is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. The measurements on high voltage circuit-breakers can identify degradation that can interrupt the distribution and transmission of an electricity supply system. It is hoped that the techniques for the monitoring of restrike phenomena developed by this research will form part of a diagnostic process that will be valuable for detecting breaker stresses relating to the interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
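One way to picture the wavelet-based restrike detection idea: a high-pass wavelet detail sequence spikes at an abrupt arcing transient riding on the power-frequency waveform. The sketch below uses only level-1 Haar coefficients on a synthetic waveform; the thesis's actual algorithm uses full Wavelet Transforms in Matlab, so this illustrates the principle only, with all signal parameters assumed:

```python
import numpy as np

fs = 100_000                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)    # one 50 Hz cycle
signal = np.sin(2 * np.pi * 50 * t)

# Inject a short high-frequency burst (the "restrike") at t = 10 ms
burst = slice(1000, 1050)
signal[burst] += 0.3 * np.sin(2 * np.pi * 20_000 * t[burst])

# Level-1 Haar detail coefficients: d[k] = (x[2k] - x[2k+1]) / sqrt(2)
# act as a crude high-pass filter, so the burst dominates them.
x = signal[: len(signal) // 2 * 2]
detail = (x[0::2] - x[1::2]) / np.sqrt(2)

# Threshold at a few times the robust noise scale to flag the transient
thresh = 5 * np.median(np.abs(detail)) / 0.6745
hits = np.nonzero(np.abs(detail) > thresh)[0]
print(hits.min() * 2 / fs)        # onset time of the detected transient, s
```

A multi-level decomposition with a smoother wavelet would separate the transient's frequency content more cleanly, which is the role the full Wavelet Transform plays in the thesis.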

Relevance: 80.00%

Abstract:

Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based ones are available for SHM of structures such as bridges, the use of the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of the AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the arrival times and velocities of the AE signals recorded by a number of sensors.
But complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localisation. A major problem in the practical use of the AE technique is the presence of sources of AE other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal acceptance or proven suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel.
This was found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of its use to real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
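The arrival-time localisation principle described above can be shown in one dimension: once the wave mode (and hence its velocity) has been identified, the source position follows from the difference in arrival times at two sensors. A sketch with assumed sensor positions and mode velocity:

```python
# 1-D AE source localisation from arrival times at two sensors, assuming
# the propagating mode and its velocity v have already been identified
# (the study uses short-time Fourier analysis for that mode identification).
v = 5100.0         # assumed plate-mode velocity in steel, m/s
x1, x2 = 0.0, 1.0  # sensor positions along the member, m

# Simulate a source at x = 0.3 m between the sensors
x_src = 0.30
t1 = abs(x_src - x1) / v   # arrival time at sensor 1
t2 = abs(x_src - x2) / v   # arrival time at sensor 2

# For a source between the sensors: t2 - t1 = (x1 + x2 - 2x) / v,
# so the source position can be recovered from the time difference alone.
x_est = (x1 + x2) / 2 - v * (t2 - t1) / 2
print(round(x_est, 3))
```

Misidentifying the mode (and so using the wrong velocity) scales the recovered time-difference term directly, which is why the mode identification step matters so much for localisation accuracy.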

Relevância:

80.00%

Publicador:

Resumo:

The study presented in this paper reviewed 9,358 accidents that occurred in the U.S. construction industry between 2002 and 2011, in order to understand the relationships between risk factors and injury severity (fatalities, hospitalized injuries, or non-hospitalized injuries) and to develop a strategic prevention plan that reduces the likelihood of fatalities where an accident is unavoidable. The study specifically aims to: (1) verify the relationships among risk factors, accident types, and injury severity; (2) determine the significant risk factors associated with each accident type that are highly correlated with injury severity; and (3) analyze the impact of the identified key factors on accident and fatality occurrence. The analysis shows that safety managers' roles are critical to reducing human-related risks, particularly misjudgement of hazardous situations, through safety training and education, appropriate use of safety devices, and proper safety inspection. For environment-related factors, however, the dominant risk factors differed across accident types. The outcomes of this study will assist safety managers in understanding the nature of construction accidents and in planning strategic risk mitigation, prioritizing high-frequency risk factors to control accident occurrence and manage the likelihood of fatal injuries on construction sites.
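Verifying an association between accident type and injury severity, as in aim (1) above, is commonly done with a contingency-table test. A minimal sketch follows; the Pearson chi-square statistic here stands in for whatever method the authors actually used, and the categories and counts are purely illustrative, not taken from the reviewed accident records.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table
    (rows: accident types, columns: injury severity levels)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total   # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: accident type x (fatal, hospitalized, non-hospitalized)
table = [
    [120,  80, 300],   # falls
    [ 40, 160, 500],   # struck-by
    [ 90,  30,  60],   # electrocution
]
print(round(chi_square(table), 1))
```

A large statistic relative to the chi-square distribution with (rows-1)(cols-1) degrees of freedom indicates that severity is not independent of accident type, which is the premise behind tailoring prevention plans to each type.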

Relevância:

80.00%

Publicador:

Resumo:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency Condition Monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four data sets. One was extracted from the condition monitoring signal of a practical industrial application. Another was acquired from a low-speed machine test rig in the laboratory. The remaining two were computer-simulated bearing defect signals containing either a single defect or multiple defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even at a large down-sample ratio (e.g., 500-fold). In contrast, down-sampling with a conventional signal processing technique at the same ratio eliminates useful and critical information, such as the bearing defect frequencies. The conventional technique also introduces noise and artificial frequency components, limiting its usefulness for machine condition monitoring applications.
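The abstract does not spell out the algorithm's details, but a peak-hold down-sampler can plausibly be sketched as keeping the largest-magnitude sample from each block of the signal. The function name, block handling, and numbers below are illustrative assumptions, not the authors' implementation.

```python
def peak_hold_downsample(signal, ratio):
    """Peak-hold down-sampling sketch: split the signal into
    non-overlapping blocks of `ratio` samples and keep the sample
    with the largest absolute value from each block, so that short
    transient peaks (e.g. bearing-defect impulses) survive the
    size reduction, unlike plain decimation."""
    out = []
    for start in range(0, len(signal), ratio):
        block = signal[start:start + ratio]
        out.append(max(block, key=abs))   # keep sign, compare by magnitude
    return out

# A toy record with one sharp impulse buried in a low-level signal
x = [0.01] * 5000
x[1234] = 0.9                      # transient defect impulse
y = peak_hold_downsample(x, 500)   # 500-fold reduction, as in the paper
print(len(y), max(y))              # the impulse survives in the short record
```

Plain decimation (keeping every 500th sample) would almost certainly skip the impulse at index 1234; the peak-hold rule guarantees it appears in the reduced record, at the cost of discarding exact timing within each block.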

Relevância:

80.00%

Publicador:

Resumo:

We used in vivo (biological), in silico (computational structure prediction), and in vitro (model sequence folding) analyses of single-stranded DNA sequences to show that nucleic acid folding conservation is the selective principle behind a high-frequency single-nucleotide reversion observed in a three-nucleotide mutated motif of the Maize streak virus replication-associated protein (Rep) gene. In silico and in vitro studies showed that the three-nucleotide mutation adversely affected Rep nucleic acid folding, and that the single-nucleotide reversion [C(601)A] restored wild-type-like folding. In vivo support came from infecting maize with mutant viruses: those with Rep genes containing nucleotide changes predicted to restore a wild-type-like fold [A(601)/G(601)] preferentially accumulated over those predicted to fold differently [C(601)/T(601)], which frequently reverted to A(601) and displaced the original population. We propose that the selection of native nucleic acid folding is an epigenetic effect, which might have broad implications in the evolution of plants and their viruses.

Relevância:

80.00%

Publicador:

Resumo:

One aim of experimental economics is to better understand human economic decision making. Early research on the ultimatum bargaining game (Gueth et al., 1982) revealed that motives other than pure monetary reward play a role. Neuroeconomic research has introduced the recording of physiological observations as signals of emotional responses. In this study, we apply heart rate variability (HRV) measurement technology to explore the behaviour and physiological reactions of proposers and responders in the ultimatum bargaining game. Since this technology is small and non-intrusive, we were able to run the experiment in a standard experimental economics setup. We show that low offers by a proposer cause signs of mental stress in both the proposer and the responder, as both exhibit high ratios of low-frequency to high-frequency activity in the HRV spectrum.
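The LF/HF ratio reported above can be illustrated with a toy computation. This sketch assumes the standard HRV frequency bands (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) and a synthetic RR-interval series resampled at 4 Hz; it is not the authors' processing pipeline, and a direct DFT stands in for whatever spectral estimator they used.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of `x` in the band [f_lo, f_hi] Hz via a direct DFT
    periodogram (fine for the short resampled RR series of HRV work)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]          # remove DC before the DFT
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def lf_hf_ratio(rr_resampled, fs=4.0):
    """Standard HRV bands: LF 0.04-0.15 Hz, HF 0.15-0.40 Hz."""
    lf = band_power(rr_resampled, fs, 0.04, 0.15)
    hf = band_power(rr_resampled, fs, 0.15, 0.40)
    return lf / hf

# Synthetic RR series (4 Hz resample, 100 s) dominated by a 0.1 Hz
# LF rhythm with a weaker 0.25 Hz respiratory (HF) component.
fs, n = 4.0, 400
rr = [0.8 + 0.05 * math.sin(2 * math.pi * 0.1 * t / fs)
          + 0.01 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(n)]
print(round(lf_hf_ratio(rr, fs), 1))
```

With the LF component five times the HF amplitude, the power ratio comes out well above 1, which is the kind of LF-dominant spectrum the study reads as a stress marker.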