718 results for Semantic file systems
Abstract:
The ability of piezoelectric transducers to convert energy is finding rapidly expanding use in several applications. Industrial applications in which a high power ultrasound transducer can be used include surface cleaning, water treatment, plastic welding and food sterilization; high power ultrasound transducers also play an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are applied as a transmitter, as a receiver, or as both: as a transmitter, the device converts electrical energy to mechanical energy, while as a receiver it converts mechanical energy to electrical energy and acts as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. The portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted to electrical energy. To drive an ultrasound transducer, the excitation signal should be properly designed; otherwise an undesired (low quality) signal can deteriorate the energy conversion performance of the transducer and increase the power consumption of the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical ones. To achieve better transducer performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulation and experimental tests are carried out in this research to model high power ultrasound transducers and systems. During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. Also, to analyse the behaviour of the ultrasound system, the voltage ratio of the system is measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage measured across the receiver. The results on the transducer characteristics and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis. Each piezoelectric transducer has several resonance frequencies at which its impedance magnitude is lower than at non-resonance frequencies. Among these resonance frequencies, there is one at which the impedance magnitude is minimum; this is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency, so it is important to identify this frequency and the other resonance frequencies. To this end, a frequency detection method is proposed in this research and discussed in chapter 2. An extended electrical model of the ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor, where each RLC leg represents one of the resonance frequencies of the ultrasound transducer.
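For illustration (the notation here is generic and not necessarily that used in the thesis), the input impedance of such an extended model, with a parallel capacitance $C_0$ and $N$ RLC legs, can be written as

\[
Z(\omega) = \left( j\omega C_0 + \sum_{k=1}^{N} \frac{1}{R_k + j\omega L_k + \frac{1}{j\omega C_k}} \right)^{-1},
\qquad \omega_k = \frac{1}{\sqrt{L_k C_k}},
\]

where $\omega_k$ is the series resonance frequency of the $k$-th leg; at $\omega_k$ the reactance of that leg vanishes and its impedance reduces to $R_k$.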
At a resonance frequency, the inductive reactance and capacitive reactance of the corresponding leg cancel each other out, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is demonstrated in the simulation and test results presented in chapter 4. To excite a high power ultrasound transducer, a high power signal is required. Multilevel converters are usually applied to generate such a signal, but its drawback is its lower quality compared with a sinusoidal signal. In some applications, such as ultrasound, it is extremely important to generate a high quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters; one of them is the harmonic elimination technique, in which the switching angles are chosen so as to reduce the harmonic content of the output. Increasing the number of switching angles clearly results in greater harmonic reduction, but more switching angles require more output voltage levels, which increases the component count and cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research. In this technique, more variables (DC voltage levels as well as switching angles) are chosen in order to eliminate more low order harmonics than conventional harmonic elimination techniques. In the conventional harmonic elimination method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics, so the number of eliminated harmonics is limited by the number of switching angles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics. The DC voltage levels are therefore not equal and must be regulated; to achieve this, a DC/DC converter is applied to adjust the DC link voltages across several capacitors. The effect of the new harmonic elimination technique on the output quality of several single phase multilevel converters is explained in chapters 3 and 6 of this thesis. According to the electrical model of the high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create significant leakage current through the transducer, caused by the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications; however, in applications such as ultrasound an LC filter can deteriorate the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such cases a current source converter is a suitable choice to overcome this problem. Accordingly, a current source converter is implemented and applied to excite the high power ultrasound transducer. To control the output current and voltage, hysteresis control and unipolar modulation are used, respectively. The results of this test are explained in chapter 7.
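As a hedged illustration of the harmonic elimination idea (generic staircase-waveform notation, not necessarily the thesis's formulation), for a quarter-wave symmetric multilevel output with switching angles $0 < \theta_1 < \dots < \theta_M < \pi/2$ and step voltages $V_1, \dots, V_M$, the amplitude of the $n$-th odd harmonic is

\[
b_n = \frac{4}{n\pi} \sum_{k=1}^{M} V_k \cos(n\theta_k), \qquad n = 1, 3, 5, \dots
\]

The conventional method fixes all $V_k$ equal and solves only for the $\theta_k$, so at most $M-1$ harmonics can be nulled while controlling the fundamental; treating the $V_k$ as additional unknowns roughly doubles the number of equations that can be satisfied with the same number of voltage levels.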
Abstract:
This thesis was a step forward in improving the stability of power systems by applying new control and modelling techniques. The developed methods use data obtained from voltage angle measurement devices, synchronized with GPS signals, to stabilize the system and avoid system-wide blackouts in the event of severe faults. New approaches were developed in this research for identifying and estimating reduced dynamic system models using phasor measurement units. The main goal of the research was achieved by integrating the developed methods to obtain a feasible wide-area control system for stabilizing power systems.
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems: ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
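The gamma evaluation mentioned above can be sketched with a short, generic implementation (not the authors' code; the 3%/3 mm criteria, the NumPy helper and the assumption that both dose grids are sampled on the same points are choices made here for the example only):

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, coords, dose_tol=0.03, dist_tol=3.0):
    """Brute-force global gamma evaluation (illustrative sketch only).

    dose_ref, dose_eval : 1-D arrays of dose values sampled on the same points
    coords              : (N, 3) array of point coordinates in mm
    dose_tol            : dose-difference criterion as a fraction of max reference dose
    dist_tol            : distance-to-agreement criterion in mm
    """
    dose_norm = dose_tol * dose_ref.max()            # global dose criterion
    gamma = np.empty_like(dose_ref, dtype=float)
    for i, (d_ref, r_ref) in enumerate(zip(dose_ref, coords)):
        dist2 = np.sum((coords - r_ref) ** 2, axis=1) / dist_tol ** 2
        dd2 = (dose_eval - d_ref) ** 2 / dose_norm ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dd2))      # minimise over evaluated points
    return gamma  # gamma <= 1 indicates agreement within the chosen criteria
```

A pass rate is then simply the fraction of reference points with gamma not exceeding one.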
Abstract:
As the importance of information literacy has gained increased recognition, so too have academic library professionals intensified their efforts to champion, activate, and advance these capabilities in others. To date, however, little attention has focused on advancing these essential competencies amongst practitioner advocates. This paper helps redress the paucity of professional literature on the topic of workplace information literacy among library professionals.
Abstract:
This paper presents an approach to assessing the resilience of a water supply system under the impacts of climate change. Changes to climate characteristics such as rainfall, evapotranspiration and temperature can alter the global hydrological cycle and thereby adversely impact the ability of water supply systems to meet service standards in the future. Changes to the frequency and characteristics of floods and droughts, as well as to the quality of water provided by groundwater and surface water resources, are other consequences of climate change that will affect water supply system functionality. The extent and significance of these changes underline the necessity of assessing the future functionality of water supply systems under the impacts of climate change. Resilience can be a tool for assessing the ability of a water supply system to meet service standards under future climate conditions. The study approach is based on defining resilience as the ability of a system to absorb pressure without going into a failure state, together with its ability to regain an acceptable level of function quickly after failure. In order to present this definition in the form of a mathematical function, a surrogate measure of resilience is proposed in this paper. In addition, a step-by-step approach to estimating the resilience of water storage reservoirs is presented. This approach enables a comprehensive understanding of the functioning of a water storage reservoir under future climate scenarios and can also be a robust tool for predicting future challenges faced by water supply systems as a consequence of climate change.
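For illustration only (the surrogate measure proposed in the paper may differ), a classical resilience measure from the water resources literature, due to Hashimoto et al. (1982), captures the "quick recovery" half of the definition as the probability that a failure step is immediately followed by a return to a satisfactory state:

\[
\gamma = \Pr\{X_{t+1} \in S \mid X_t \in F\}
       = \frac{\text{number of transitions from the failure set } F \text{ to the satisfactory set } S}
              {\text{total number of time steps spent in } F},
\]

which equals the reciprocal of the expected duration of a failure event; the "absorb pressure without failing" half corresponds to the companion reliability measure $\Pr\{X_t \in S\}$.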
Abstract:
The methodology undertaken, the channel model and the system model created for developing a novel adaptive equalization method and a novel channel tracking method for the uplink of MU-MIMO-OFDM systems are presented in this paper. The results show that the channel tracking method works with 97% accuracy, while the training-based initial channel estimation method performs comparatively poorly in estimating the actual channel.
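For context on what a training-based initial estimate typically involves, here is a generic per-subcarrier least-squares estimate from known pilot symbols (the function, pilot layout and noise level below are assumptions made for the example, not the paper's specific method):

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots):
    """Per-subcarrier least-squares channel estimate for one OFDM symbol.

    rx_pilots : complex array of received pilot subcarriers
    tx_pilots : complex array of known transmitted pilot symbols
    Returns the estimated channel frequency response H_hat = Y / X.
    """
    return rx_pilots / tx_pilots

# Hypothetical usage: 64 pilot subcarriers, constant QPSK pilots, random channel plus noise
rng = np.random.default_rng(0)
tx = (1 + 1j) / np.sqrt(2) * np.ones(64)
h_true = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
rx = h_true * tx + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
h_hat = ls_channel_estimate(rx, tx)
```

A decision-directed tracker would then refine such an initial estimate as data symbols are detected, which is the role the channel tracking method plays here.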
Abstract:
I believe that studies of men's gendered experiences of information systems are needed. In order to support this claim, I introduce the area of Masculinity Studies to Information Systems research and, using this, present an exploratory analysis of an internet dating website for gay men – Gaydar. The information system, which forms part of the Gaydar community, is shown to shape, and be shaped by, its members as they accept and challenge aspects of it related to their identities. In doing this, I show how the intertwined processes of information systems development and use contribute to the creation of diverse interpretations of masculinity within a group of men. In sum, my analysis highlights different kinds of men and different versions of masculinity that can sometimes be associated with different experiences of information systems. The implications of this work centre on the need to expand our knowledge of men's gendered experiences with information systems, to reflect upon processes of technology-facilitated categorisation, and to consider the influences that contribute to the roll-out of particular software features, along with the underlying rationales for market segmentation in the software and software-based services industries.
Abstract:
The papers in this issue focus our attention on packaged software as an increasingly important, but still relatively poorly understood, phenomenon in the information systems research community. The topic is not new: Lucas et al. (1988) wrote a provocative piece focused on the issues of implementing packaged software. A decade later, Carmel (1997) argued that packaged software was both ideally suited to American entrepreneurial activity and rapidly growing. The information systems research community, however, has moved more slowly to engage with this change (e.g., Sawyer, 2001). The papers in this special issue represent a significant step towards better engaging the issues of packaged software relative to information systems research and highlighting opportunities for additional relevant research.
Abstract:
This thesis highlights the limitations of existing car following models in emulating driver behaviour for safety study purposes. It also compares the capabilities of the mainstream car following models in emulating precise driver behaviour parameters such as headways and Times to Collision. The comparison evaluates the robustness of each car following model for reproducing safety metrics. A new car following model, based on the personal space concept and the fish school model, is proposed to simulate traffic metrics more precisely. This new model is capable of reflecting changes in the headway distribution after a speed limit is imposed by VSL systems. This research facilitates the assessment of Intelligent Transportation Systems on motorways using microscopic simulation.
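The Time to Collision metric mentioned above can be illustrated with a short generic sketch (this is the standard textbook definition for a car-following pair, not the thesis's proposed model; the function name and example values are illustrative):

```python
def time_to_collision(gap_m, follower_speed_mps, leader_speed_mps):
    """Time to Collision (TTC) in seconds for a car-following pair.

    gap_m              : bumper-to-bumper gap in metres
    follower_speed_mps : speed of the following vehicle (m/s)
    leader_speed_mps   : speed of the lead vehicle (m/s)
    Returns infinity when the follower is not closing the gap.
    """
    closing_speed = follower_speed_mps - leader_speed_mps
    if closing_speed <= 0:
        return float('inf')  # no collision course
    return gap_m / closing_speed

# Example: 20 m gap, follower at 25 m/s, leader at 20 m/s -> TTC = 4 s
print(time_to_collision(20.0, 25.0, 20.0))
```

Low TTC values flag safety-critical interactions, which is why the faithfulness of a car following model's headway and speed-difference distributions matters for safety studies.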
Abstract:
Business process management systems (BPMS) belong to a class of enterprise information systems that are characterized by their dependence on explicitly modeled process logic. Through the process logic, it is relatively easy to explicitly manage the routing and allocation of work items along a business process through the system. Inspired by the DeLone and McLean framework, we theorize that these process-aware system features are important attributes of system quality, which in turn will elevate key user evaluations such as perceived usefulness and usage satisfaction. We examine this theoretical model using data collected from four different, mostly mature, BPM system projects. Our findings validate the importance of input quality as well as allocation and routing attributes as antecedents of system quality, which, in turn, determines both usefulness and satisfaction with the system. We further demonstrate how service quality and workflow dependency are significant precursors to perceived usefulness. Our results suggest the appropriateness of a multi-dimensional conception of system quality for future research and provide important design-oriented advice for the design and configuration of BPMSs.
Abstract:
The development of user expertise is a strategic imperative for organizations in hyper-competitive markets. This paper conceptualizes, operationalizes and validates user expertise in contemporary Information Systems (IS) as a formative, multidimensional index. Such a validated and widely accepted index would facilitate the extension of past research on user competence and the efficacy of IS to complex contemporary IS, while at the same time providing a benchmark for organizations to track their user expertise. The validation involved three separate studies, including exploratory and confirmatory phases, using data from 244 respondents.
Abstract:
Although Design Science Research (DSR) is now an accepted approach to research in the Information Systems (IS) discipline, consensus on the methodology of DSR has yet to be achieved, and the lack of a comprehensive and detailed DSR methodology for the IS discipline remains a key issue. Prior research (the parent-study) aimed to remedy this situation and resulted in the DSR-Roadmap (Alturki et al., 2011a). Continuing empirical validation and revision of the DSR-Roadmap strives towards a methodology with appropriate levels of detail, integration and completeness for novice researchers to efficiently and effectively conduct and report DSR in IS. The sub-study reported herein contributes to this larger, ongoing effort. This paper reports results from a formative evaluation of the DSR-Roadmap conducted using focus group analysis. Generally, participants endorsed the utility and intuitiveness of the DSR-Roadmap while also suggesting valuable refinements. Both the parent-study and the sub-study make methodological contributions: the parent-study is the first attempt to utilize DSR to develop a research methodology, providing an example of how to use DSR in research methodology construction, while the sub-study demonstrates the value of the focus group method for formative product evaluation in DSR.
Abstract:
Recent literature has emphasized the pivotal role of knowledge integration in Enterprise Systems (ES) success. This research-in-progress paper, building upon the Knowledge Based Theory of the firm (KBT), examines the efficiency of knowledge integration in the context of ES implementation and identifies the factors contributing to its enhancement. The model proposed in this paper suggests that the efficiency of knowledge integration in an ES implementation process depends upon the level of common knowledge and the level of coordination in the ES adopting organization. It further suggests that the level of common knowledge can be enhanced through proper training, improving ES users' intrinsic and extrinsic motivations, and business process modeling, while the level of coordination can be improved by articulating a clear, unified organizational goal for the ES adoption, forming a competent ES team, enhancing interdepartmental communication, and strengthening cross-functionality in the organization structure.
Abstract:
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Abstract:
Text categorisation is challenging due to the complex structure and the heterogeneous, changing topics of documents. The performance of text categorisation relies on the quality of the samples, the effectiveness of the document features, and the topic coverage of the categories, depending on the strategies employed: supervised or unsupervised, single-labelled or multi-labelled. To address these reliability issues in text categorisation, we propose an unsupervised multi-labelled text categorisation approach that maps the local knowledge in documents to global knowledge in a world ontology to optimise the categorisation result. The conceptual framework of the approach consists of three modules: pattern mining for feature extraction, feature-subject mapping for categorisation, and concept generalisation for optimised categorisation. The approach has been evaluated with promising results by comparison with typical text categorisation methods, based on ground truth encoded by human experts.