Abstract:
The lymphedema diagnostic method used in descriptive or intervention studies may influence the results found. The purposes of this work were to compare baseline lymphedema prevalence in the Physical Activity and Lymphedema (PAL) trial cohort across four standard diagnostic methods, and subsequently to compare the effect of the weight-lifting intervention on lymphedema according to each method. The PAL trial was a randomized controlled intervention study, involving 295 women who had previously been treated for breast cancer, that evaluated the effect of 12 months of weight lifting on lymphedema status. Four diagnostic methods were used to evaluate lymphedema outcomes: (i) interlimb volume difference through water displacement, (ii) interlimb size difference through the sum of arm circumferences, (iii) interlimb impedance ratio using bioimpedance spectroscopy, and (iv) a validated self-report survey. Of the 295 women who participated in the PAL trial, between 22% and 52% were considered to have lymphedema at baseline according to the four diagnostic criteria used. No between-group differences were noted in the proportion of women who had a change in interlimb volume, interlimb size, interlimb ratio, or survey score of ≥5%, ≥5%, ≥10%, and 1 unit, respectively (the cumulative incidence ratio at study end for each measure ranged between 0.6 and 0.8, with confidence intervals spanning 1.0). The variation in the proportion of women within the PAL trial considered to have lymphedema at baseline highlights the potential impact of the diagnostic criteria on population surveillance of the prevalence of this common treatment-related morbidity. Importantly, though, progressive weight lifting was shown to be safe for women following breast cancer treatment, even for those at risk of or with lymphedema, irrespective of the diagnostic criteria used.
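As a minimal illustration of how the four change thresholds quoted above might be applied in analysis code (a hypothetical sketch: the function and variable names are invented, and only the thresholds themselves are taken from the abstract):

```python
# Hypothetical sketch: flag a clinically meaningful change under each of the
# four diagnostic criteria described in the abstract. Names are invented;
# thresholds (>=5%, >=5%, >=10%, 1 unit) are those quoted above.
def lymphedema_change_flags(volume_change_pct: float,
                            size_change_pct: float,
                            impedance_ratio_change_pct: float,
                            survey_score_change: float) -> dict[str, bool]:
    return {
        "water displacement (volume)": volume_change_pct >= 5.0,
        "sum of circumferences (size)": size_change_pct >= 5.0,
        "bioimpedance (interlimb ratio)": impedance_ratio_change_pct >= 10.0,
        "self-report survey": survey_score_change >= 1.0,
    }

# A participant whose affected-arm volume rose 6% but whose other measures
# stayed flat would be flagged by one criterion and not the others:
print(lymphedema_change_flags(6.0, 1.0, 2.0, 0.0))
```

The divergence between such flags for the same participant is exactly the diagnostic-method sensitivity the study set out to quantify.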
Abstract:
In this thesis, the author proposed and developed gas sensors based on nanostructured WO3 thin films deposited by a thermal evaporation technique, which gives control over film thickness, grain size and purity. Device fabrication, nanostructured material synthesis, characterization and gas-sensing performance evaluation were all undertaken. Three different types of nanostructured thin films were synthesized: pure WO3 thin films, iron-doped WO3 thin films prepared by co-evaporation, and Fe-implanted WO3 thin films. All films had a thickness of 300 nm. The physical, chemical and electronic properties of these films were optimized by annealing at 300 °C and 400 °C for 2 hours in air, and various analytical techniques were employed to characterize them. Atomic force microscopy and transmission electron microscopy (TEM) revealed a very small grain size, of the order of 5-10 nm, in as-deposited WO3 films, and annealing at 300 °C or 400 °C did not result in any significant change in grain size. X-ray diffraction (XRD) analysis revealed a predominantly amorphous structure in as-deposited films. Annealing at 300 °C for 2 hours in air did not improve the crystallinity of these films. However, annealing at 400 °C for 2 hours in air significantly improved the crystallinity of the pure and iron-doped WO3 thin films, whereas it only slightly improved the crystallinity of the iron-implanted WO3 thin film, owing to implantation damage. Rutherford backscattering spectrometry (RBS) revealed an iron content of 0.5 at.% and 5.5 at.% in the iron-doped and iron-implanted WO3 thin films, respectively; these results were confirmed by energy-dispersive X-ray spectroscopy (EDX) during TEM analysis of the films. X-ray photoelectron spectroscopy (XPS) revealed a significant lowering of the W 4f7/2 binding energy in all films annealed at 400 °C compared with the as-deposited and 300 °C-annealed films. This lowering is attributed to an increase in the number of oxygen vacancies in the films and is considered highly beneficial for gas sensing. Raman analysis revealed that the 400 °C-annealed films, except the iron-implanted film, are highly crystalline with a significant number of O-W-O bonds, consistent with the XRD results. Additionally, XRD, XPS and Raman analyses showed no evidence of secondary peaks corresponding to iron compounds arising from the doping or implantation, indicating that iron was incorporated into the host WO3 matrix rather than existing as a separate dispersed compound or as a catalyst on the surface. WO3 thin-film gas sensors are known to operate efficiently in the temperature range 200-500 °C. In the present study, by optimizing the physical, chemical and electronic properties through heat treatment and doping, an optimum response to H2, ethanol and CO was achieved at a low operating temperature of 150 °C. The pure WO3 thin film annealed at 400 °C showed the highest sensitivity towards H2 at 150 °C, owing to its very small grain size and porosity coupled with a high number of oxygen vacancies, whereas the Fe-doped WO3 film annealed at 400 °C showed the highest sensitivity to ethanol at 150 °C, owing to its crystallinity, increased number of oxygen vacancies and higher degree of crystal distortion attributed to the Fe addition. Pure WO3 films are known to be insensitive to CO, but the iron-doped WO3 thin films annealed at 300 °C and 400 °C showed an optimum response to CO at an operating temperature of 150 °C.
This result is attributed to lattice distortions produced in the WO3 host matrix by the incorporation of iron as a substitutional impurity. However, the iron-implanted WO3 thin films did not show any promising response to the tested gases, as the film structure was damaged by implantation and annealing at 300 °C or 400 °C was not sufficient to restore crystallinity in these films. This study has demonstrated enhanced sensing properties of WO3 thin-film sensors towards CO at a lower operating temperature, achieved by optimizing the physical, chemical and electronic properties of the WO3 film through Fe doping and annealing. The study could be further extended to systematically investigate the effects of different Fe concentrations (0.5 at.% to 10 at.%) on the sensing performance of WO3 thin-film gas sensors towards CO.
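The abstract does not define how "response" is quantified; the following is a minimal sketch assuming the convention commonly used for n-type metal-oxide sensors exposed to reducing gases (film resistance in air divided by resistance in gas), which is not necessarily the thesis's exact definition:

```python
# Illustration: the response of an n-type metal-oxide sensor such as WO3 to a
# reducing gas (H2, ethanol, CO) is commonly quantified as the ratio of the
# film's electrical resistance in air to its resistance in the target gas.
# This is a standard convention, assumed here rather than taken from the thesis.
def sensor_response(r_air_ohm: float, r_gas_ohm: float) -> float:
    """Return S = R_air / R_gas; S > 1 indicates a response to the gas."""
    return r_air_ohm / r_gas_ohm

# Example with made-up resistance values at a 150 C operating temperature:
print(sensor_response(r_air_ohm=2.0e6, r_gas_ohm=4.0e5))  # S = 5.0
```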
Abstract:
A breaker restrike is an abnormal arcing phenomenon that can lead to breaker failure. Such a failure ultimately interrupts the transmission and distribution of the electricity supply until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on measuring and interpreting the restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008, a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the radiometric measurement method has a band-limited frequency response and limited accuracy in amplitude determination, while current restrike detection methods and algorithms require wide-bandwidth current transformers and high-voltage dividers. This thesis proposes a restrike switch model using the Alternative Transients Program (ATP) and wavelet transforms to support diagnostics, making restrike phenomena the basis of a new diagnostic process for online interrupter monitoring that combines measurements, ATP simulation and wavelet transforms. This research project investigates the restrike switch model parameter 'A' (the dielectric voltage gradient) for normal and slowed contact-opening velocities, together with the escalation voltages, which can be used as a diagnostic tool for vacuum circuit breakers (CBs) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load, a transient voltage is developed across the contact gap at current quenching or chopping. The dielectric strength of the gap must rise fast enough to withstand this transient voltage; if it does not, the gap flashes over, resulting in a restrike. A straight line is fitted through the gap-voltage points at flashover, i.e. the points at which the gap voltage exceeds the dielectric strength of the gap. This research shows that a change in the contact-opening velocity of a vacuum CB produces a corresponding change in the slope of the escalation-voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was extended with a contact-opening velocity computation for restrike waveform signature analyses, alongside experimental investigations. A mathematical CB model was also enhanced with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if the measured and simulated restrike waveform signatures are of a similar type. The restrike switch model is applied to: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and developing a restrike detection algorithm using wavelet transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) that calculates the life extension of the interrupter of an SF6 high-voltage CB. The restrike waveform signatures for medium- and high-voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using MATLAB software for automatic detection.
An experimental investigation of 12 kV vacuum CB diagnostics was carried out for parameter determination, and a passive antenna calibration suitable for field implementation was also successfully developed. Degradation features were evaluated from the experiments using a predictive interpretation technique, and the subsequent simulation indicates that the voltage drop associated with a slow opening-velocity mechanism gives a measure of the degree of contact degradation. A predictive interpretation technique is a computer-modelling approach to assessing switching-device performance that allows one parameter to be varied at a time; this is often difficult to do experimentally because the contact-opening velocity varies. The significance of this thesis is a non-intrusive method, developed using measurements, ATP and wavelet transforms, for predicting and interpreting breaker restrike risk. Measurements on high-voltage circuit breakers can identify degradation that could otherwise interrupt the distribution and transmission of the electricity supply. It is hoped that the techniques for monitoring restrike phenomena developed in this research will form part of a diagnostic process valuable for detecting breaker stresses related to interrupter lifetime. Suggestions for future research are given in Appendix A, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs.
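The detection algorithm itself is not given in the abstract; the following is a minimal sketch of how a discrete wavelet transform can flag the high-frequency burst that a restrike superimposes on a sampled voltage waveform. It is written in Python with the PyWavelets library (rather than the MATLAB used in the thesis), and the wavelet, decomposition level and threshold are illustrative choices, not those of the thesis:

```python
# Minimal sketch: flag high-frequency transient activity (e.g. a restrike)
# in a sampled voltage waveform via the finest-scale wavelet detail band.
# Assumes PyWavelets; wavelet, level and threshold are illustrative.
import numpy as np
import pywt

def detect_transients(v: np.ndarray, wavelet: str = "db4",
                      level: int = 4, k: float = 5.0) -> np.ndarray:
    """Return approximate sample indices with anomalous detail energy."""
    coeffs = pywt.wavedec(v, wavelet, level=level)
    d1 = coeffs[-1]                               # finest-scale detail band
    thresh = k * np.median(np.abs(d1)) / 0.6745   # robust noise estimate
    hits = np.nonzero(np.abs(d1) > thresh)[0]
    return hits * 2                               # each d1 coeff spans ~2 samples

# Synthetic example: 50 Hz waveform with an injected high-frequency burst.
fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[5000:5050] += 0.5 * np.sin(2 * np.pi * 30_000 * t[5000:5050])
print(detect_transients(v)[:5])  # indices near sample 5000
```

In a real diagnostic, the flagged segments would then be compared against simulated ATP restrike signatures, as the abstract describes.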
Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage, followed by appropriate retrofitting, helps prevent failure of the structure, saves money on maintenance or replacement, and ensures that the structure operates safely and efficiently throughout its intended life. Though visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option whose use is increasing. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves with sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (no external energy needs to be supplied; the energy from the damage source itself is used) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated from the arrival times and velocities of the AE signals recorded by a number of sensors. Complications arise, however, because AE waves can travel through a structure in a number of different modes with different velocities and frequencies; to locate a source accurately, it is therefore necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than those related to cracks, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity, so discriminating signals to identify their sources is very important. This work developed a model that uses signal processing tools such as cross-correlation, magnitude-squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing the amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed for the AE technique, no single method has achieved universal acceptance or been shown suitable for all situations. B-value analysis, which studies the distribution of the amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel. They were found to give encouraging results for the analysis of laboratory data, extending the possibility of their use on real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
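For readers unfamiliar with b-value analysis, the following is a minimal sketch of the computation in the Gutenberg-Richter form commonly used in AE studies, log10 N(>=A) = a - b(A/20); the thesis's exact formulation (and the improved b-value variant) may differ:

```python
# Minimal sketch of AE b-value estimation: fit the cumulative amplitude
# distribution log10 N(>=A) = a - b * (A_dB / 20) by least squares.
# This is the Gutenberg-Richter form commonly used in AE b-value studies,
# assumed here; it is not taken from the thesis itself.
import numpy as np

def b_value(amplitudes_db: np.ndarray) -> float:
    a = np.sort(amplitudes_db)
    n_cum = np.arange(len(a), 0, -1)               # N(>=A) at each amplitude
    slope, _ = np.polyfit(a / 20.0, np.log10(n_cum), 1)
    return -slope                                   # b is the negated slope

rng = np.random.default_rng(0)
amps = 40 + rng.exponential(scale=10.0, size=500)   # synthetic AE hits (dB)
print(round(b_value(amps), 2))                      # ~0.87 for this scale
```

A falling b-value over time indicates a growing share of high-amplitude hits, which is why the metric is attractive for severity assessment.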
Abstract:
Digital information that is place- and time-specific is increasingly becoming available on all aspects of the urban landscape. People (cf. the Social Web), places (cf. the Geo Web) and physical objects (cf. ubiquitous computing, the Internet of Things) are increasingly infused with sensors and actuators and tagged with a wealth of digital information. Urban informatics research explores these emerging digital layers of the city at the intersection of people, place and technology. However, little is known about the challenges and new opportunities that these digital layers may offer to road users driving through today's megacities. We argue that this aspect is worth exploring, particularly with regard to Auto-UI's overarching goal of making cars both safer and more enjoyable. This paper presents the findings of a pilot study in which 14 urban informatics research experts participated in a guided ideation (idea creation) workshop within a simulated environment. They were immersed in different driving scenarios and asked to imagine novel urban-informatics-style applications specific to the driving context.
Abstract:
Many substation applications require accurate time-stamping, and the performance of systems such as the Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including the IEC 61850-9-2 process bus and phasor measurement, require an accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high-voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, the Precision Time Protocol (PTP), is the means of achieving this higher level of performance preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology), and it integrates well into Ethernet-based substation automation systems. Significant benefits of PTP include automatic path-length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of the established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared to the existing methods to determine whether the performance justifies the additional complexity. Experimental results show that a PTP timing system maintains the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre-optic cables, and further meets the needs of process buses in large substations.
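To put the cable-length concern in perspective, here is a back-of-the-envelope sketch of the one-way propagation delay in optical fibre; the refractive index and run lengths are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope: one-way propagation delay of a timing signal in fibre.
# Light travels at c / n in glass (n ~ 1.48), i.e. roughly 5 us per km, so an
# uncompensated switchyard cable run eats into a 1-us error budget quickly.
C = 299_792_458.0          # speed of light in vacuum, m/s
N_FIBRE = 1.48             # typical refractive index of silica fibre (assumed)

def fibre_delay_us(length_m: float) -> float:
    return length_m * N_FIBRE / C * 1e6

for length in (100, 500, 1000):  # metres
    print(f"{length:5d} m -> {fibre_delay_us(length):.2f} us one-way")
# A 500 m run already contributes ~2.5 us unless compensated, which is why
# PTP's automatic path-length compensation is highlighted as a benefit.
```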
Abstract:
This article sets out the results of an empirical study of the uses to which the Australian patent system is being put in the early 21st century. The focus of the study is business method patents, which are of interest because they are a controversial class of patent, thought to differ significantly from the mechanical, chemical and industrial inventions that have traditionally been the mainstay of the patent system. The purpose of the study is to understand what sorts of business method patent applications have been lodged in Australia in the first decade of this century and how the patent office is responding to those applications.
Abstract:
The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential for BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high-quality international forum in which to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.
Abstract:
Background: Outside the mass spectrometer, proteomics research does not take place in a vacuum. It is affected by policies on funding and research infrastructure. Proteomics research both impacts and is impacted by potential clinical applications: it provides new techniques and clinically relevant findings, but the possibilities for such innovations (and thus funders' perception of the field's potential) are also shaped by regulatory practices and the readiness of the health sector to incorporate proteomics-related tools and findings. Key to this process is how knowledge is translated. Methods: We present preliminary results from a multi-year social science project, funded by the Canadian Institutes of Health Research, on the processes and motivations for knowledge translation in the health sciences. The proteomics case within this wider study uses qualitative methods to examine the interplay between proteomics science and regulatory and policy makers regarding clinical applications of proteomics. Results: Adopting an interactive format to encourage conference attendees' feedback, our poster focuses on deficits in effective knowledge translation strategies from the laboratory to the policy, clinical and regulatory arenas. An analysis of the interviews conducted to date suggests five significant choke points: the changing priorities of funding agencies; the complexity of proteomics research; the organisation of proteomics research; the relationship of proteomics to genomics and other omics sciences; and conflict over the appropriate role of standardisation. Conclusion: We suggest that engagement with aspects of knowledge translation, such as those mentioned above, is crucially important for the eventual clinical application of proteomics science on any meaningful scale.
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet-based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The "error budget" is set by the performance requirements of each application. The "expenditure" of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a "bottom up" component testing approach combined with "top down" system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance was verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
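As a minimal sketch of the "error budget" bookkeeping the paper describes, where each timing component spends part of the application's allowable error (the component names and numbers below are illustrative assumptions, not measured values from the paper):

```python
# Hypothetical error-budget bookkeeping for a PTP timing chain. The 1-us
# budget reflects the sampled-value requirement quoted above; the component
# contributions are invented for illustration.
BUDGET_US = 1.0  # total error budget for the application, microseconds

components = {
    "grandmaster clock": 0.20,     # us, worst-case offset from reference
    "transparent clocks": 0.10,    # us, residence-time correction error
    "slave clock": 0.30,           # us, time-recovery error
    "cabling asymmetry": 0.15,     # us, uncompensated path asymmetry
}

spent = sum(components.values())
print(f"spent {spent:.2f} us of {BUDGET_US:.2f} us budget")
assert spent <= BUDGET_US, "design does not meet the application requirement"
```

This is the sense in which per-component "expenditure" figures let a system designer verify the overall requirement before any top-down system test.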
Abstract:
Smart antenna receiver and transmitter systems consist of multi-port arrays with an individual receiver channel (including ADC) and an individual transmitter channel (including DAC) at each of the M antenna ports. By means of digital beamforming, an unlimited number of simultaneous complex-valued vector radiation patterns with M-1 degrees of freedom can be formed. Applications of smart antennas in communication systems include space-division multiple access. If both stations of a communication link are equipped with smart antennas (multiple-input multiple-output, MIMO), multiple independent channels can be formed in a "multi-path-rich" environment. In this article, it will be shown that under certain circumstances the correlation between signals from adjacent ports of a dense array (M + ΔM elements) can be kept as low as the correlation between signals from adjacent ports of a conventional array (M elements and half-wavelength spacing). This attractive feature is attained by means of a novel approach that employs an RF decoupling network at the array ports to form new ports which are decoupled and associated with mutually orthogonal (de-correlated) radiation patterns.
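For context on why half-wavelength spacing is the benchmark, the following sketch evaluates the textbook spatial-correlation result for a uniform 2-D scattering ("multi-path-rich") environment, rho(d) = J0(2*pi*d/lambda); this is the standard Clarke/Jakes model, not the article's own decoupling-network analysis:

```python
# Illustration: spatial correlation of signals at two antenna ports versus
# element spacing, using the classic Clarke/Jakes uniform-scattering result
#   rho(d) = J0(2 * pi * d / lambda).
# A textbook model assumed for context, not the article's derivation.
import numpy as np
from scipy.special import j0

def power_correlation(spacing_wavelengths: float) -> float:
    return j0(2 * np.pi * spacing_wavelengths) ** 2  # |rho|^2

for d in (0.1, 0.25, 0.5):   # spacing in wavelengths
    print(f"d = {d:.2f} lambda -> |rho|^2 = {power_correlation(d):.3f}")
# At half-wavelength spacing |rho|^2 is already low (~0.09); a decoupled
# dense array aims to match this despite its closer element spacing.
```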
Abstract:
Most social network users hold more than one social network account and use each differently depending on the digital context: for example, friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Many web users therefore need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming and inefficient, and leads to lost opportunities. In this paper we propose a framework for managing multiple profiles across online social networks and showcase a demonstrator built on an open-source platform. The result of the research enables a user to create and manage an integrated profile and to share and synchronise it with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics: we use the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive overview of a patient's personal health.
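As a hypothetical sketch of the integrated-profile idea (the class and field names below are invented for illustration; the paper's framework and its open-source platform are not specified at this level of detail):

```python
# Hypothetical sketch: one canonical profile assembled from per-network
# profiles and pushed back out on demand. All names are invented.
from dataclasses import dataclass, field

@dataclass
class NetworkProfile:
    network: str                 # e.g. "Facebook", "LinkedIn", "PatientsLikeMe"
    attributes: dict[str, str]   # network-specific profile fields

@dataclass
class IntegratedProfile:
    attributes: dict[str, str] = field(default_factory=dict)

    def merge(self, profile: NetworkProfile) -> None:
        # Last-write-wins merge; a real framework would need conflict rules.
        self.attributes.update(profile.attributes)

    def sync_to(self, profile: NetworkProfile) -> None:
        # Push back only the fields the target network already tracks.
        for key in profile.attributes:
            if key in self.attributes:
                profile.attributes[key] = self.attributes[key]

me = IntegratedProfile()
me.merge(NetworkProfile("LinkedIn", {"name": "A. User", "job": "Engineer"}))
me.merge(NetworkProfile("PatientsLikeMe", {"name": "A. User", "condition": "asthma"}))
print(me.attributes)
```

The health-informatics benefit described above follows from exactly this aggregation: a physician-facing view can draw on fields collected across all of a patient's networks rather than any single one.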