289 results for Computer Hardware.
at Queensland University of Technology - ePrints Archive
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to carry out various kinds of system studies. Simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they take different forms in different railway systems. To complicate the simulation requirements further, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Advanced software design not only greatly enhances the applicability of the simulators, but also encourages maintainability and modularity for easy understanding and further development, and portability across hardware platforms. The objective of this paper is to review the development of a number of approaches to simulation models, with particular attention given to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various 'what-if' issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
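For readers new to such simulators, the sketch below illustrates in miniature the kind of single-train movement model the paper reviews: a point-mass train integrated through time against Davis-equation running resistance, yielding a run time and traction energy for a section of track. All coefficients (mass, tractive effort, resistance terms, speed limit) are illustrative assumptions, not values from the paper.

```python
# Minimal single-train movement simulation: a point-mass model integrated
# with forward Euler. All constants are illustrative placeholders only.

MASS = 400e3          # train mass, kg
MAX_TRACTIVE = 300e3  # maximum tractive effort, N
A, B, C = 3.0e3, 60.0, 12.0  # Davis coefficients: N, N/(m/s), N/(m/s)^2
V_LIMIT = 25.0        # line speed restriction, m/s
DT = 0.5              # time step, s

def resistance(v):
    """Davis-equation running resistance, in newtons."""
    return A + B * v + C * v * v

def simulate(distance):
    """Accelerate to the speed limit, hold it; return run time and energy."""
    t, x, v, energy = 0.0, 0.0, 0.0, 0.0
    while x < distance:
        effort = MAX_TRACTIVE if v < V_LIMIT else resistance(v)  # hold speed
        accel = (effort - resistance(v)) / MASS
        energy += effort * v * DT       # traction energy, joules
        v = max(0.0, v + accel * DT)
        x += v * DT
        t += DT
    return t, energy

run_time, energy_j = simulate(10e3)     # a 10 km section
print(f"run time {run_time:.0f} s, traction energy {energy_j / 3.6e6:.1f} kWh")
```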
Abstract:
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
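The cost observation follows directly from whole-hour billing, as the hedged sketch below illustrates: when n divides the total simulation time exactly, no machine is billed for a partial hour. The value T = 24 h and the per-started-hour billing model are illustrative assumptions, not figures from the paper.

```python
# Relative cost and completion time for splitting a T-hour Monte Carlo job
# across n machines, assuming classic per-started-hour instance billing.

import math

T = 24  # total serial simulation time in hours (illustrative)

for n in (1, 2, 3, 5, 6, 8, 24):
    wall_time = T / n                    # completion time scales as 1/n
    billed = n * math.ceil(wall_time)    # each machine billed per started hour
    print(f"n={n:3d}  wall time={wall_time:5.2f} h  billed machine-hours={billed}")

# When n is a factor of T, no partial hours are billed, so the relative
# cost is optimal, matching the paper's observation; n=5 here wastes hours.
```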
Abstract:
An optical system which performs the multiplication of binary numbers is described and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme which takes advantage of the parallel operations capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
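To make the arithmetic concrete, here is a minimal electronic-analogue sketch (not the optical implementation itself) of the scheme's three stages: simultaneous generation of all partial products, regrouping of bit products by weight, and a carry-resolving summation standing in for the optical carry look-ahead adders.

```python
# Electronic analogue of the optical multiplication scheme, illustrative only.

def multiply(a_bits, b_bits):
    """Multiply two little-endian bit lists via partial products."""
    n, m = len(a_bits), len(b_bits)
    # 1. Simultaneous generation of all partial products (AND of bit pairs).
    # 2. Regroup bit products by weight: column index i + j.
    columns = [0] * (n + m)
    for i, a in enumerate(a_bits):
        for j, b in enumerate(b_bits):
            columns[i + j] += a & b
    # 3. Resolve the column sums with a carry-propagating pass (the optical
    #    system performs this step with carry look-ahead adders).
    result, carry = [], 0
    for col in columns:
        total = col + carry
        result.append(total & 1)
        carry = total >> 1
    return result

bits = multiply([1, 0, 1, 1], [1, 1, 0, 1])  # 13 * 11
print(int("".join(map(str, reversed(bits))), 2))  # prints 143
```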
Abstract:
The research seeks to understand the nature of law and justice students' use of technology for their learning purposes. It is often assumed that all students have, and engage with, technology to the same degree. The research tests these assumptions by means of a survey of first-year law and justice students to determine their actual use of smart devices inside and outside classes. The analysis of results reveals that while the majority of respondents own at least one smart device, most rarely use their devices for learning purposes.
Abstract:
The topic of “the cloud” has attracted significant attention throughout the past few years (Cherry 2009; Sterling and Stark 2009) and, as a result, academics and trade journals have created several competing definitions of “cloud computing” (e.g., Motahari-Nezhad et al. 2009). Underpinning this article is the definition put forward by the US National Institute of Standards and Technology, which describes cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Garfinkel 2011, p. 3). Despite the lack of consensus about definitions, however, there is broad agreement on the growing demand for cloud computing. Some estimates suggest that spending on cloud-related technologies and services in the next few years may climb as high as USD 42 billion/year (Buyya et al. 2009).
Abstract:
This paper explores inquiry skills in the Australian Curriculum in relation to inquiry learning pedagogy. Inquiry skills in the Australian Curriculum are represented as questioning skills (i.e. posing and evaluating questions and hypotheses), information literacy (i.e. seeking, evaluating, selecting and using information), ICT literacy (i.e. fluency with computer hardware and software) and discipline specific skills (i.e. data gathering, mathematical measurement, data analysis and presentation of data). This paper provides an explanation of inquiry learning pedagogy that complements the Australian Curriculum inquiry skills.
Abstract:
A Remote Sensing Core Curriculum (RSCC) development project is currently underway. This project is being conducted under the auspices of the National Center for Geographic Information and Analysis (NCGIA). RSCC is an outgrowth of the NCGIA GIS Core Curriculum project and grew out of discussions begun under NCGIA Initiative 12 (I-12): 'Integration of Remote Sensing and Geographic Information Systems'. This curriculum development project focuses on providing professors, teachers and instructors in undergraduate and graduate institutions with course materials from experts in specific subject-matter areas for use in the classroom.
Abstract:
In the developing digital economy, the notion of traditional attack on enterprises of national significance or interest has transcended into different modes of electronic attack, surpassing accepted traditional forms of physical attack upon a target. The terrorist attacks that took place in the United States on September 11, 2001 demonstrated the physical devastation that could occur if any nation were the target of a large-scale terrorist attack. There is therefore a need to protect critical national infrastructure and critical information infrastructure. In particular, this protection is crucial for the proper functioning of a modern society and for a government to fulfill one of its most important prerogatives, namely the protection of its people. Computer networks have many benefits that governments, corporations, and individuals alike take advantage of in order to promote and perform their duties and roles. Today, there is almost complete dependence on private sector telecommunication infrastructures and the associated computer hardware and software systems. These infrastructures and systems even support government and defense activity. This Article discusses possible attacks on critical information infrastructures and the government reactions to these attacks.
Abstract:
Information and Communication Technology (ICT) has become an integral part of societies across the globe. This study demonstrates how successful technology integration by 10 experienced teachers in an Australian high school was dependent on teacher-driven change and innovation that influenced the core business of teaching and learning. The teachers were subject specialists across a range of disciplines, engaging their Year Eight students (aged 12–14 years) in the Technology Rich Classrooms programme. Two classrooms were renovated to accommodate the newly acquired computer hardware. The first classroom adopted a one-to-one desktop model, with all the Internet-connected computers arranged in a front-facing pattern; the second had computers arranged in small groups. The students also used Blackboard to access learning materials after school hours. Qualitative data were gathered from teachers, mainly through structured and unstructured interviews and a range of other approaches, to ascertain their perceptions of the new initiative. This investigation showed that ICT was impacting positively on the core business of teaching and learning. Through the support of the school leadership team, the built environment was enabling teachers to use ICT, which influenced their pedagogical approaches and the types of learning activities they designed and implemented. As a consequence, teachers felt that students were motivated and benefited through this experience.
Abstract:
BIM as a suite of technologies has been enabled by the significant improvements in IT infrastructure, the capabilities of computer hardware and software, the increasing adoption of BIM, and the development of Industry Foundation Classes (IFC), which facilitate the sharing of information between firms. The report highlights the advantages of BIM, particularly the increased utility and speed, better data quality and enhanced fault finding in all construction phases. Additionally, BIM promotes enhanced collaboration and visualisation of data, mainly in the design and construction phases. There are a number of barriers to the effective implementation of BIM. These include, somewhat paradoxically, a single detailed model (which precludes scenarios and the development of detailed alternative designs); the need for three different interoperability standards for effective implementation; added work for the designer, which needs to be recognised and remunerated; and the size and complexity of BIM, which requires significant investment in human capital to enable the realisation of its full potential. There are also a number of challenges to implementing BIM. The report has identified these as a range of issues concerning IP, liability, risks and contracts, and the authenticity of users. Additionally, implementing BIM requires investment in new technology, skills training and the development of new ways of collaborating. Finally, there are likely to be Trade Practices concerns, as requiring certain technology owned by relatively few firms may limit competition.
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
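The unification step described here can be pictured with a toy example. The sketch below maps hypothetical memory-mapped register addresses to hardware pins and propagates a taint set along one execution path, reporting which input pins can influence which output pins. The register layout, pin names and path are invented for illustration and do not come from the paper.

```python
# Toy information-flow tracing: unify I/O statements with the hardware pins
# behind their memory-mapped registers, then propagate taint along a path.

PIN_MAP = {0x4000_0000: "PIN_RX", 0x4000_0004: "PIN_TX"}  # assumed layout

# One execution path: (operation, destination, sources). Integer sources
# and destinations are register addresses; strings are program variables.
path = [
    ("read",   "buf",        [0x4000_0000]),  # buf <- *RX_REG
    ("assign", "key",        ["buf"]),        # key <- f(buf)
    ("write",  0x4000_0004,  ["key"]),        # *TX_REG <- key
]

taint = {}   # variable -> set of input pins that can influence it
flows = []
for op, dst, srcs in path:
    incoming = set()
    for s in srcs:
        if isinstance(s, int):        # an I/O read: a taint source pin
            incoming.add(PIN_MAP[s])
        else:
            incoming |= taint.get(s, set())
    if op == "write":                 # an I/O write: a taint sink pin
        flows.append((sorted(incoming), PIN_MAP[dst]))
    else:
        taint[dst] = incoming

for sources, sink in flows:
    print(f"information flows from {sources} to {sink}")
```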
Abstract:
This paper discusses a new paradigm of real-time simulation of power systems in which equipment can be interfaced with a real-time digital simulator. In this scheme, one part of a power system is simulated using a real-time simulator, while the other part is implemented as a physical system. The only interface between the physical system and the computer-based simulator is through a data-acquisition system. The physical system is driven by a voltage-source converter (VSC) that mimics the power system simulated in the real-time simulator. In this paper, the VSC operates in a voltage-control mode to track the point-of-common-coupling voltage signal supplied by the digital simulator. Splitting a network into two parts in this way and running a real-time simulation in parallel with a physical system is here called 'power network in loop'. This opens up the possibility of studying the interconnection of one or several distributed generators to a complex power network. The proposed implementation is verified through simulation studies using PSCAD/EMTDC and through hardware implementation on a TMS320G2812 DSP.
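As a loose illustration of the voltage-control mode described above, the sketch below runs a discrete-time PI controller against a first-order stand-in for the VSC output stage so that it tracks a 50 Hz reference of the kind the simulator would stream to the physical side. The gains, time constants and reference amplitude are assumed values for illustration only, not the paper's controller design.

```python
# Minimal discrete-time sketch of voltage-control-mode reference tracking.
# Plant and gains are illustrative, not from the paper.

import math

KP, KI = 8.0, 400.0   # PI gains (assumed)
TAU = 2e-3            # first-order VSC output filter time constant, s
DT = 1e-4             # controller step, s (10 kHz)

v_out, integral = 0.0, 0.0
for k in range(2000):                                # 0.2 s = 10 cycles
    t = k * DT
    v_ref = 230.0 * math.sin(2 * math.pi * 50 * t)   # PCC reference, 50 Hz
    error = v_ref - v_out
    integral += error * DT
    u = KP * error + KI * integral                   # PI control action
    v_out += DT * (u - v_out) / TAU                  # first-order VSC model

print(f"tracking error at t={t:.4f} s: {v_ref - v_out:.2f} V")
```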
Abstract:
The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of a human pilot on board points to an obvious deficiency: the lack of an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter proposes techniques, methods, and hardware integrations for a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.
Abstract:
A breaker restrike is an abnormal arcing phenomenon leading to possible breaker failure. Such a failure interrupts the transmission and distribution of the electricity supply system until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008, a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the radiometric measurement method has a band-limited frequency response as well as limitations in amplitude determination, and current restrike detection methods and algorithms require the use of wide-bandwidth current transformers and high-voltage dividers. A restrike switch model using the Alternative Transient Program (ATP) and Wavelet Transforms which supports diagnostics is proposed, making restrike phenomena the basis of a new diagnostic process, using measurements, ATP and Wavelet Transforms, for online interrupter monitoring. This research project investigates the restrike switch model parameter 'A', the dielectric voltage gradient, in relation to normal and slowed contact opening velocities and the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load, at current quenching or chopping, a transient voltage is developed across the contact gap. The dielectric strength of the gap should rise to a point where it can withstand this transient voltage; if it does not, the gap will flash over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap: the point at which the gap voltage has reached a value that exceeds the dielectric strength of the gap. This research shows that a change in the opening contact velocity of the vacuum CB produces a corresponding change in the slope of the gap escalation-voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was modified with contact opening velocity computation for restrike waveform signature analyses, along with experimental investigations. This also enhanced a mathematical CB model with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if the measured and simulated waveforms show similar restrike waveform signatures. The restrike switch model is applied to: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and developing a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) which can calculate the life extension of the interrupter of an SF6 high-voltage CB. The restrike waveform signatures for medium- and high-voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection.
Experimental investigation of a 12 kV vacuum CB diagnostic was carried out for parameter determination, and a passive antenna calibration with applications for field implementation was also successfully developed. The degradation features were also evaluated from the experiments with a predictive interpretation technique, and the subsequent simulation indicates that the drop in voltage associated with the slow opening-velocity measurement gives a measure of the degree of contact degradation. A predictive interpretation technique is a computer-modelling approach for assessing switching-device performance that allows one to vary a single parameter at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis outcome is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. Measurements on high-voltage circuit-breakers can identify degradation that can interrupt the distribution and transmission of an electricity supply system. It is hoped that the techniques for monitoring restrike phenomena developed by this research will form part of a diagnostic process valuable for detecting breaker stresses related to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
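As a loose illustration of the role Wavelet Transforms play in such a detection algorithm, the hedged sketch below decomposes a synthetic voltage waveform with a discrete wavelet transform and flags the high-frequency burst a restrike would inject. It assumes the NumPy and PyWavelets packages; the sampling rate, threshold rule and signal are invented and do not reproduce the thesis's algorithm.

```python
# Wavelet-based transient flagging on a synthetic waveform, illustrative only.

import numpy as np
import pywt

fs = 100_000                        # sampling rate, Hz (assumed)
t = np.arange(0, 0.04, 1 / fs)      # 40 ms window
rng = np.random.default_rng(0)
v = np.sin(2 * np.pi * 50 * t) + 0.005 * rng.standard_normal(t.size)
v[2000:2040] += 0.4 * np.sin(2 * np.pi * 20_000 * t[:40])  # burst at 20 ms

# Level-1 detail coefficients respond to the high-frequency burst.
approx, detail = pywt.dwt(v, "db4")
threshold = 5 * np.median(np.abs(detail)) / 0.6745  # robust noise estimate
hits = np.where(np.abs(detail) > threshold)[0] * 2  # map to sample indices

if hits.size:
    print(f"transient flagged near t = {hits[0] / fs * 1e3:.2f} ms")
```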
Abstract:
Fast calculation of quantities such as in-cylinder volume and indicated power is important in internal combustion engine research. Multiple channels of data including crank angle and pressure were collected for this purpose using a fully instrumented diesel engine research facility. Currently, existing methods use software to post-process the data, first calculating volume from crank angle, then calculating the indicated work and indicated power from the area enclosed by the pressure-volume indicator diagram. Instead, this work investigates the feasibility of achieving real-time calculation of volume and power via hardware implementation on Field Programmable Gate Arrays (FPGAs). Alternative hardware implementations were investigated using lookup tables, Taylor series methods or the CORDIC (CoOrdinate Rotation DIgital Computer) algorithm to compute the trigonometric operations in the crank angle to volume calculation, and the CORDIC algorithm was found to use the least amount of resources. Simulation of the hardware based implementation showed that the error in the volume and indicated power is less than 0.1%.
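Of the three candidate implementations, CORDIC computes trigonometric functions with only shifts and adds, which is why it maps well to FPGA fabric. The sketch below shows a minimal rotation-mode CORDIC for the sine and cosine of the crank angle, followed by the standard slider-crank cylinder-volume formula; the engine geometry values are illustrative assumptions, not the instrumented engine's parameters.

```python
# Rotation-mode CORDIC for sin/cos, then slider-crank volume. Illustrative:
# a software model of the shift-and-add scheme suited to FPGA hardware.

import math

ANGLES = [math.atan(2.0 ** -i) for i in range(16)]
GAIN = math.prod(math.cos(a) for a in ANGLES)  # CORDIC scale factor K

def cordic_sincos(theta):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2]."""
    x, y, z = GAIN, 0.0, theta        # pre-scale so the result has unit length
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0 else -1.0   # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x                        # (sin, cos)

# Slider-crank geometry (assumed): bore B, crank radius a, rod length l,
# clearance volume V_c.
B, a, l, V_c = 0.10, 0.06, 0.18, 5.0e-5   # m, m, m, m^3

def cylinder_volume(theta):
    """In-cylinder volume from crank angle via the slider-crank relation."""
    s, c = cordic_sincos(theta)
    piston_pos = a * c + math.sqrt(l * l - (a * s) ** 2)  # pin-to-crank axis
    return V_c + (math.pi * B * B / 4) * (l + a - piston_pos)

print(f"V(60 deg) = {cylinder_volume(math.radians(60)) * 1e6:.1f} cm^3")
```

Indicated work then follows by accumulating p dV over a cycle from the measured pressure channel, which is the area-integration step the paper moves into hardware.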