1000 results for ISOTOPE APPLICATIONS
Abstract:
Many substation applications require accurate time-stamping. The performance of systems such as Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including IEC 61850-9-2 process bus and phasor measurement, require accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, Precision Time Protocol (PTP), is the method preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology) for achieving this higher level of performance, and it integrates well into Ethernet based substation automation systems. Significant benefits of PTP include automatic path length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared to the existing methods to determine whether the performance justifies the additional complexity. Experimental results show that a PTP timing system maintains the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre optic cables, and further meets the needs of process buses in large substations.
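The automatic path-length compensation mentioned above comes from PTP's delay request-response exchange. A minimal sketch of the standard IEEE 1588 offset and mean-path-delay calculation follows; the timestamp values used here are illustrative, not measurements from the paper:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 delay request-response calculation.

    t1: master sends Sync        t2: slave receives Sync
    t3: slave sends Delay_Req    t4: master receives Delay_Req
    Assumes a symmetric path; any path asymmetry appears directly
    as an uncorrectable offset error.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    return offset, mean_path_delay

# Illustrative timestamps in microseconds: the slave runs 5 us ahead of
# the master and the one-way cable delay is 2 us.
offset, delay = ptp_offset_and_delay(100.0, 107.0, 200.0, 197.0)
```

Because the cable delay cancels out of the offset term, a longer fibre run into the switchyard does not degrade the correction, which is the advantage PTP holds over raw 1-PPS distribution.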
Abstract:
This article sets out the results of an empirical research study into the uses to which the Australian patent system is being put in the early 21st century. The focus of the study is business method patents, which are of interest because they are a controversial class of patent that is thought to differ significantly from the mechanical, chemical and industrial inventions that have traditionally been the mainstay of the patent system. The purpose of the study is to understand what sort of business method patent applications have been lodged in Australia in the first decade of this century and how the patent office is responding to those applications.
Abstract:
The representation of business process models has been a continuing research topic for many years. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in computer-human interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high quality international forum to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.
Abstract:
Background: Outside the mass-spectrometer, proteomics research does not take place in a vacuum. It is affected by policies on funding and research infrastructure. Proteomics research both impacts and is impacted by potential clinical applications. It provides new techniques & clinically relevant findings, but the possibilities for such innovations (and thus the perception of the potential for the field by funders) are also impacted by regulatory practices and the readiness of the health sector to incorporate proteomics-related tools & findings. Key to this process is how knowledge is translated. Methods: We present preliminary results from a multi-year social science project, funded by the Canadian Institutes of Health Research, on the processes and motivations for knowledge translation in the health sciences. The proteomics case within this wider study uses qualitative methods to examine the interplay between proteomics science and regulatory and policy makers regarding clinical applications of proteomics. Results: Adopting an interactive format to encourage conference attendees’ feedback, our poster focuses on deficits in effective knowledge translation strategies from the laboratory to policy, clinical, & regulatory arenas. An analysis of the interviews conducted to date suggests five significant choke points: the changing priorities of funding agencies; the complexity of proteomics research; the organisation of proteomics research; the relationship of proteomics to genomics and other omics sciences; and conflict over the appropriate role of standardisation. Conclusion: We suggest that engagement with aspects of knowledge translation, such as those mentioned above, is crucially important for the eventual clinical application of proteomics science on any meaningful scale.
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The ``error budget'' is set by the performance requirements of each application. The ``expenditure'' of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a ``bottom up'' component testing approach combined with ``top down'' system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
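The "error budget" bookkeeping described above can be made concrete with a small sketch: each clock in the chain (grandmaster, transparent clocks, slave servo) spends part of the application's allowance, and the designer checks the worst-case total against the requirement. The component figures below are purely illustrative assumptions, not measurements from the paper:

```python
# Hypothetical per-component time error contributions, in nanoseconds.
# The values are illustrative only; real figures come from component tests.
budget_ns = 1000  # 1 us requirement, e.g. a sampled value process bus

components = {
    "grandmaster (GPS-locked)": 100,
    "transparent clock 1": 50,
    "transparent clock 2": 50,
    "slave clock servo": 200,
}

# Worst-case "expenditure": assume all component errors add in the
# same direction, the conservative assumption for acceptance testing.
worst_case = sum(components.values())
headroom = budget_ns - worst_case
print(f"spent {worst_case} ns of {budget_ns} ns budget; {headroom} ns spare")
```

A worst-case sum is pessimistic (independent errors partially cancel in practice), but it gives the system integrator a guaranteed bound when selecting components.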
Abstract:
Smart antenna receiver and transmitter systems consist of multi-port arrays with an individual receiver channel (including ADC) and an individual transmitter channel (including DAC) at each of the M antenna ports. By means of digital beamforming, an unlimited number of simultaneous complex-valued vector radiation patterns with M-1 degrees of freedom can be formed. Applications of smart antennas in communication systems include space-division multiple access. If both stations of a communication link are equipped with smart antennas (multiple-input multiple-output, MIMO), multiple independent channels can be formed in a "multi-path-rich" environment. In this article, it will be shown that under certain circumstances, the correlation between signals from adjacent ports of a dense array (M + ΔM elements) can be kept as low as the correlation between signals from adjacent ports of a conventional array (M elements and half-wavelength spacing). This attractive feature is attained by means of a novel approach which employs an RF decoupling network at the array ports in order to form new ports which are decoupled and associated with mutually orthogonal (de-correlated) radiation patterns.
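Why half-wavelength spacing is the conventional benchmark can be seen from the classical spatial-correlation model for isotropic elements in 3-D isotropic ("multi-path-rich") scattering, where the envelope correlation is sinc(2d/λ). This is a textbook illustration, not the article's derivation, which concerns coupled dense arrays:

```python
import math

def spatial_correlation(spacing_wavelengths):
    """Signal correlation between two isotropic antenna ports in a 3-D
    isotropic scattering field: rho = sinc(2 d / lambda).
    A classical idealised model, used here only for illustration."""
    x = 2.0 * spacing_wavelengths
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Half-wavelength spacing fully decorrelates the ports in this model...
rho_half = spatial_correlation(0.5)
# ...while the closer element pitch of a dense array leaves substantial
# residual correlation, which the RF decoupling network is meant to remove.
rho_quarter = spatial_correlation(0.25)
```

In this idealised model rho is exactly zero at d = λ/2, while quarter-wavelength spacing leaves a correlation of about 0.64, illustrating the problem a dense (M + ΔM element) array must overcome.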
Abstract:
Most social network users hold more than one social network account and utilise them in different ways depending on the digital context: for example, friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Thus, many web users need to manage many disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time consuming and inefficient, and leads to lost opportunities. In this paper we propose a framework for multiple profile management of online social networks and showcase a demonstrator utilising an open source platform. The result of the research enables a user to create and manage an integrated profile and share/synchronise their profiles with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics. We utilise the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive personal health overview of patients.
Abstract:
Carbon nanotubes (CNTs) have excellent electrical, mechanical and electromechanical properties. When CNTs are incorporated into polymers, electrically conductive composites with high electrical conductivity at very low CNT content (often below 1 wt% CNT) result. Due to the change in electrical properties under mechanical load, carbon nanotube/polymer composites have attracted significant research interest, especially due to their potential for application in in-situ monitoring of stress distribution and active control of strain sensing in composite structures, or as strain sensors. To successfully develop novel devices for such applications, some of the major challenges that need to be overcome include: in-depth understanding of structure-electrical conductivity relationships, the response of the composites under changing environmental conditions, and the piezoresistivity of different types of carbon nanotube/polymer sensing devices. In this thesis, the direct current (DC) and alternating current (AC) conductivity of CNT-epoxy composites was investigated. Details of the microstructure obtained by scanning electron microscopy were used to link observed electrical properties with structure using equivalent circuit modelling. The role of polymer coatings on macro- and micro-level electrical conductivity was investigated using atomic force microscopy. Thermal analysis and Raman spectroscopy were used to evaluate the heat flow and the deformation of carbon nanotubes embedded in the epoxy, respectively, and related to temperature-induced resistivity changes. A comparative assessment of piezoresistivity was conducted using randomly mixed carbon nanotube/epoxy composites, and new-concept epoxy- and polyurethane-coated carbon nanotube films. The results indicate that equivalent circuit modelling is a reliable technique for estimating values of the resistive and capacitive components in linear, low-aspect-ratio CNT-epoxy composites.
Using this approach, the dominant role of tunnelling resistance in determining the electrical conductivity was confirmed, a result further verified using conductive atomic force microscopy analysis. Randomly mixed CNT-epoxy composites were found to be highly sensitive to mechanical strain and temperature variation compared to polymer-coated CNT films. In the vicinity of the glass transition temperature, the CNT-epoxy composites exhibited pronounced resistivity peaks. Thermal and Raman spectroscopy analyses indicated that this phenomenon can be attributed to physical aging of the epoxy matrix phase and structural rearrangement of the conductive network induced by matrix expansion. The resistivity of polymer-coated CNT composites was mainly dominated by the intrinsic resistivity of the CNTs and the CNT junctions, and their linear, weakly temperature-sensitive response can be described by a modified Luttinger liquid model. Piezoresistivity of the polymer-coated sensors was dominated by break-up of the conducting carbon nanotube network and the consequent degradation of nanotube-nanotube contacts, while that of the randomly mixed CNT-epoxy composites was determined by tunnelling resistance between neighbouring CNTs. This thesis has demonstrated that it is possible to use microstructure information to develop equivalent circuit models that are capable of accurately representing the electrical conductivity of CNT/epoxy composites. New designs of carbon nanotube based sensing devices, utilising carbon nanotube films as the key functional element, can be used to overcome the high temperature sensitivity of randomly mixed CNT/polymer composites without compromising the desired high strain sensitivity. This concept can be extended to develop large-area intelligent CNT-based coatings and targeted, weak-point-specific strain sensors for use in structural health monitoring.
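The equivalent-circuit idea above can be illustrated with the simplest possible element: a tunnelling resistance in parallel with an inter-tube junction capacitance. This toy R || C model is an assumption for illustration only (the thesis fits more elaborate networks to measured microstructure), but it reproduces the qualitative AC behaviour:

```python
import cmath
import math

def rc_parallel_impedance(r_ohm, c_farad, freq_hz):
    """Complex impedance of a tunnelling resistance R in parallel with an
    inter-tube capacitance C: Z = R / (1 + j*omega*R*C).
    A deliberately simplified single-junction model for illustration."""
    omega = 2.0 * math.pi * freq_hz
    return r_ohm / (1.0 + 1j * omega * r_ohm * c_farad)

# Illustrative (assumed) values: 1 Mohm tunnelling resistance,
# 1 pF junction capacitance.
z_dc = rc_parallel_impedance(1e6, 1e-12, 1.0)    # near-DC: resistive
z_hf = rc_parallel_impedance(1e6, 1e-12, 1e6)    # 1 MHz: capacitive path opens
```

At low frequency the junction looks purely resistive (|Z| ≈ R), while at high frequency the capacitive path shorts the tunnelling barrier and |Z| falls. Fitting this frequency dependence to measured AC conductivity is what lets the resistive and capacitive components of the network be estimated.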
Abstract:
A contentious issue in the field of destination marketing has been the recent tendency by some authors to refer to destination marketing organisations (DMOs) as destination management organisations. This nomenclature implies control over destination resources, a level of influence that is in reality held by few DMOs. This issue of a lack of control over the destination ‘amalgam’ is acknowledged by a number of the contributors, including the editors and the discussion on destination competitiveness by J.R. Brent Ritchie and Geoffrey Crouch, and is perhaps best summed up by Alan Fyall in the concluding chapter: “...unless all elements are owned by the same body, then the ability to control and influence the direction, quality and development of the destination pose very real challenges” (p. 343). The title of the text acknowledges both marketing and management, in relation to theories and applications. While there are insightful propositions about ideals of destination management, readers will find there is a lack of coverage of destination management in practice by DMOs. This represents fertile ground for future research.
Abstract:
Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used for both early detection and prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, with age and even with body mass index. Human body fluids provide an excellent resource for the discovery of biomarkers, with the advantage over tissue/biopsy samples of their ease of access, due to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are two human body fluids commonly used for CaP research, but their proteomic analyses are limited both by the large dynamic range of protein abundance, making detection of low abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for removal of high abundance proteins and enrichment of low abundance proteins are used. Their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved detection of biomarkers. They include two dimensional differential gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples.
2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers as well as to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine), and emerging proteomic approaches for biomarker research.
Abstract:
The use of electromagnetic computer-aided engineering (EM-CAE) software, such as the Finite Element Method (FEM) and the Boundary Element Method (BEM), in the development and design of electric high-power devices has been widely adopted. This paper presents the analysis of a Fault Current Limiter (FCL), which acts as a high-voltage surge protector for power grids. A prototype FCL was built. The magnetic flux in the core and the resulting electromagnetic forces in the winding of the FCL were analyzed using both FEM and BEM. An experiment on the prototype was conducted in a laboratory. The data obtained from the experiment are compared with the numerical solutions to determine the suitability and accuracy of the two methods.
Abstract:
Despite the rapidly urbanising population, public transport usage in metropolitan areas is not growing at a level that corresponds to the trend. Many people are reluctant to travel using public transport, as it is commonly associated with unpleasant experiences such as limited services, long wait times, and crowded spaces. This study aims to explore the use of mobile spatial interactions and services, and investigate their potential to increase the enjoyment of our everyday commuting experience. The main goal is to develop and evaluate mobile-mediated design interventions to foster interactions for and among passengers, as well as between passengers and public transport infrastructures, with the aim of positively influencing the experience of commuting. Ultimately, this study hopes to generate findings and knowledge towards creating a more enjoyable public transport experience, as well as to explore innovative uses of mobile technologies and context-aware services for the urban lifestyle.
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
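The two cheap pre-filters described above, sparse grid sampling and skin-chrominance thresholding, can be sketched as follows. The grid step and the YCbCr thresholds (the widely used Chai & Ngan chrominance box) are assumptions for illustration; the paper's actual parameters and fusion rule may differ:

```python
def is_skin_ycbcr(cb, cr):
    """Skin-chrominance test in YCbCr space using the common
    Chai & Ngan box; the paper's exact thresholds may differ."""
    return 77 <= cb <= 127 and 133 <= cr <= 173

def candidate_points(width, height, step=8):
    """Sparse sampling grid: test every `step`-th pixel instead of all
    width*height pixels, which is what cuts the whole-image scan time."""
    return [(x, y)
            for y in range(0, height, step)
            for x in range(0, width, step)]

# For a 640x480 frame and an assumed step of 8, only 4800 of the
# 307200 pixels are sampled before the expensive classifier runs.
grid = candidate_points(640, 480, step=8)
```

Only grid points that pass both the foreground mask and the skin test would seed candidate regions for the Viola-Jones classifier, which is why the overall pipeline is so much cheaper than exhaustive scanning.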
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation.
Part III of this thesis builds upon a previous Bayesian technique based on a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
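The likelihood-free idea underpinning Part I can be illustrated with the simplest baseline the SMC algorithms improve upon: ABC rejection sampling, which keeps a parameter draw whenever its simulated summary statistic lands close to the observed one. The toy model, prior, and tolerance below are assumptions for illustration only:

```python
import random
import statistics

def abc_rejection(observed_stat, prior_draw, simulate, n_samples, tol):
    """Baseline ABC rejection sampler: draw theta from the prior, simulate
    a summary statistic from the model, and accept theta if the simulated
    statistic is within `tol` of the observed one.  The thesis develops
    far more simulation-efficient SMC variants of this idea."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw()
        if abs(simulate(theta) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy model (assumed): the summary statistic is the mean of 50 draws from
# Normal(theta, 1); prior is Uniform(-5, 5); observed statistic is 1.0.
simulate = lambda th: statistics.fmean(random.gauss(th, 1) for _ in range(50))
posterior = abc_rejection(observed_stat=1.0,
                          prior_draw=lambda: random.uniform(-5, 5),
                          simulate=simulate, n_samples=200, tol=0.1)
```

The accepted draws concentrate around the true parameter without the likelihood ever being evaluated; the cost is the large number of wasted simulations, which is precisely the inefficiency the SMC-based ABC algorithms of Part I target.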