926 results for Real power
Abstract:
The use of technology for purposes such as communication and document management has become essential to legal practice, with practitioners and courts increasingly relying on various forms of technology. Accordingly, legal practitioners need to be able to understand, communicate with, and persuade their audience using this technology. Technology skills are therefore an essential and integral part of undergraduate legal education, and given the widening participation agenda in Australia and the consequent increasing diversity of law students, such training must also be available to all students. To neglect this crucial part of modern legal education is to fail in a fundamental aspect of a University’s obligation not just to its students, but ultimately to our students’ potential employers and their future clients. This paper considers how law schools can develop students’ technology skills by using technology to support mooting in settings that replicate legal practice. In order to assess the facilities at the disposal of universities, the authors surveyed the law schools in Australia about their equipment in, and use of, electronic moot court rooms. The authors also conducted and evaluated an internal mooting competition using Elluminate, an online communication platform available to students through Blackboard. Students were able to participate wherever they were located, without the need to attend a moot court room. The results of the survey and the evaluation of the Elluminate competition are discussed. The paper concludes that while it is essential to teach technology skills as part of legal education, the benefits and importance of using technology must be made clear in order for it to be accepted and embraced by students, and the technology must be available to all students, given the widening participation in higher education and the consequent increasing diversity of law students.
Abstract:
Video games have shown great potential as tools that both engage and motivate players to achieve tasks and build communities in fantasy worlds. We propose that the application of game elements to real world activities can aid in delivering contextual information in interesting ways and help young people to engage in everyday events. Our research will explore how we can unite utility and fun to enhance information delivery, encourage participation, build communities and engage users with utilitarian events situated in the real world. This research aims to identify key game elements that work effectively to engage young digital natives, and provide guidelines to influence the design of interactions and interfaces for event applications in the future. This research will primarily contribute to areas of user experience and pervasive gaming.
Abstract:
The customary approach to the study of meal size suggests that ‘events’ occurring during a meal lead to its termination. Recent research, however, suggests that a number of decisions made before eating commences may affect meal size. The present study sought to address three key research questions around meal size: the extent to which plate cleaning occurs; the prevalence of pre-meal planning and its influence on meal size; and the effect of within-meal experiences, notably the development of satiation. To address these, a large-cohort internet-based questionnaire was developed. Results showed that plate cleaning occurred at 91% of meals, and was planned from the outset in 92% of these cases. A significant relationship between plate cleaning and meal planning was observed. Pre-meal plans were resistant to modification over the course of the meal: only 18% of participants reported consumption that deviated from what they expected. By contrast, 28% reported continuing to eat beyond satiation, and 57% stated that they could have eaten more at the end of the meal. Logistic regression confirmed pre-meal planning as the most important predictor of consumption. Together, our findings demonstrate the importance of meal planning as a key determinant of meal size and energy intake.
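The final analysis step can be sketched in code. The data below are synthetic stand-ins with invented proportions (the names `plan` and `cleaned` are illustrative, not the questionnaire's variables), fitted with a hand-rolled logistic regression:

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: whether a diner planned to clean the plate (1/0)
# and whether the plate was actually cleaned (1/0). Proportions are invented.
n = 400
plan = [1 if random.random() < 0.8 else 0 for _ in range(n)]
cleaned = [1 if random.random() < (0.95 if p else 0.4) else 0 for p in plan]

# Fit P(cleaned = 1) = sigmoid(b0 + b1 * plan) by gradient ascent
# on the average log-likelihood.
b0 = b1 = 0.0
lr = 0.5
for _ in range(3000):
    g0 = g1 = 0.0
    for x, y in zip(plan, cleaned):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p          # gradient w.r.t. the intercept
        g1 += (y - p) * x    # gradient w.r.t. the planning coefficient
    b0 += lr * g0 / n
    b1 += lr * g1 / n

odds_ratio = math.exp(b1)    # > 1: planning raises the odds of plate cleaning
```

With these invented proportions the planning coefficient comes out strongly positive, mirroring the direction (though not the magnitude) of the reported result.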
Abstract:
This paper focuses on the ‘real world’ approach to the degree achieved through the first year program, embedding and scaffolding law graduate capabilities through authentic and valid assessment and work-integrated learning to assist graduates with the transition into the workplace.
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for the detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and on real data (video of a collision-course aircraft). Our test results demonstrate that the adaptive filtering approach offers improved tracking performance, and provides an estimate of target heading not present in previous HMM filtering approaches.
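The fat/thin idea can be illustrated with a toy one-dimensional grid filter. This is a sketch only: the paper's image-plane model, parameter values and adaptation rule are not given here, so the kernel widths and the spread-based switching criterion below are assumptions:

```python
import math
import random

random.seed(1)

GRID = list(range(100))              # discretised target positions

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def predict(belief, sigma):
    """HMM prediction step: blur the belief with a Gaussian transition
    kernel; sigma is the 'fat' (acquisition) or 'thin' (tracking) width."""
    out = [sum(belief[j] * gauss(i, j, sigma) for j in GRID) for i in GRID]
    z = sum(out)
    return [p / z for p in out]

def update(belief, meas, noise=3.0):
    """HMM measurement step with a Gaussian pixel likelihood."""
    out = [p * (gauss(i, meas, noise) + 1e-12) for i, p in zip(GRID, belief)]
    z = sum(out)
    return [p / z for p in out]

belief = [1.0 / len(GRID)] * len(GRID)   # uniform prior: target unknown
sigma = 10.0                             # start with a fat filter
true_pos = 20.0
for t in range(30):
    true_pos += 1.0                      # target drifts across the frame
    meas = true_pos + random.gauss(0.0, 2.0)
    belief = update(predict(belief, sigma), meas)
    # Adapt: as the posterior concentrates, shrink toward a thin filter.
    mean = sum(i * b for i, b in zip(GRID, belief))
    spread = sum(b * (i - mean) ** 2 for i, b in zip(GRID, belief)) ** 0.5
    if spread < 5.0:
        sigma = max(2.0, sigma * 0.8)

estimate = max(GRID, key=lambda i: belief[i])   # MAP position
```

The wide kernel lets the filter lock on from a vague prior; once the posterior is tight, the narrow kernel rejects spurious detections far from the track.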
Abstract:
In recent years, the effects of ions and ultrafine particles on ambient air quality and human health have been well documented; however, knowledge about their sources, concentrations and interactions within different types of urban environments remains limited. This thesis presents the results of numerous field studies aimed at quantifying variations in ion concentration with distance from the source, as well as identifying the dynamics of the particle ionisation processes which lead to the formation of charged particles in the air. In order to select the most appropriate measurement instruments and locations for the studies, a literature review was also conducted on studies that reported ion and ultrafine particle emissions from different sources in a typical urban environment. The initial study involved laboratory experiments on the attachment of ions to aerosols, so as to gain a better understanding of the interaction between ions and particles. This study determined the efficiency of corona ions at charging and removing particles from the air, as a function of different particle number and ion concentrations. The results showed that particle number loss was directly proportional to particle charge concentration, and that higher small ion concentrations led to higher particle deposition rates in all size ranges investigated. Nanoparticle concentrations were also observed to decrease with increasing particle charge concentration, due to their higher Brownian mobility and subsequent attachment to charged particles. Given that corona discharge from high-voltage powerlines is considered one of the major ion sources in urban areas, a detailed study was then conducted under three parallel overhead powerlines, with a steady wind blowing in a direction perpendicular to the lines.
The results showed that large sections of the lines did not produce any corona at all, while strong positive emissions were observed from discrete components such as a particular set of spacers on one of the lines. Measurements were also conducted at eight upwind and downwind points perpendicular to the powerlines, spanning a total distance of about 160 m. The maximum positive small and large ion concentrations, and the maximum DC electric field, were observed at a point 20 m downwind from the lines, with median values of 4.4×10³ cm⁻³, 1.3×10³ cm⁻³ and 530 V m⁻¹, respectively. It was estimated that, at this point, less than 7% of the total number of particles was charged. The electrical parameters decreased steadily with increasing downwind distance from the lines, but remained significantly higher than background levels at the limit of the measurements. Moreover, vehicles are one of the most prevalent ion- and particle-emitting sources in urban environments, and therefore experiments were also conducted behind a motor vehicle exhaust pipe and near busy motorways, with the aim of quantifying small ion and particle charge concentrations, as well as their distribution as a function of distance from the source. The study found approximately equal numbers of positive and negative ions in the vehicle exhaust plume, as well as near motorways, for which heavy-duty vehicles were believed to be the main contributor. In addition, cluster ion concentration was observed to decrease rapidly within the first 10–15 m from the road, and ion–ion recombination and ion–aerosol attachment were the most likely causes of ion depletion, rather than dilution and turbulence-related processes. In addition to these dominant ion sources, other sources also exist within urban environments where intensive human activities take place.
In this part of the study, airborne concentrations of small ions, particles and net particle charge were measured at 32 different outdoor sites in and around Brisbane, Australia, which were classified into seven groups: park, woodland, city centre, residential, freeway, powerlines and power substation. While the study confirmed that powerlines, power substations and freeways were the main ion sources in an urban environment, it also suggested that not all powerlines emitted ions, only those with discrete corona discharge points. In addition to the main ion sources, higher ion concentrations were also observed in environments affected by vehicle traffic and human activities, such as the city centre and residential areas. A considerable number of ions were also observed in a woodland area, and it is still unclear whether they were emitted directly from the trees or originated from some other local source. Overall, it was found that different types of environments had different types of ion sources, which could be classified as unipolar or bipolar particle sources, as well as ion sources that co-exist with particle sources. In general, fewer small ions were observed at sites with co-existing sources; however, particle charge was often higher due to the effect of ion–particle attachment. In summary, this study quantified ion concentrations in typical urban environments, identified major charge sources in urban areas, and determined the spatial dispersion of ions as a function of distance from the source, as well as their controlling factors. The study also presented ion–aerosol attachment efficiencies under high ion concentration conditions, both in the laboratory and in real outdoor environments. The outcomes of these studies addressed the aims of this work and advanced understanding of the charge status of aerosols in the urban environment.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current.
Noise and harmonic distortion can also impair the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates provided. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the renowned and advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other techniques proposed in this thesis are compared against the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is passed to the Kalman filter and used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output signal of the same FIR filter described above. In this case, however, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter: the estimated frequency is passed to the Kalman filter, while parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level in the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has the structure of a basic Kalman filter, except that the initial settings are computed through extensive mathematical work with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. The ability of this algorithm, too, is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
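As a flavour of the Kalman-filter estimators discussed above, here is a minimal sketch (not any of the thesis's modified filters) that estimates the amplitude and phase angle of a noisy power-frequency signal, assuming the frequency is already known; all numerical settings are invented:

```python
import math
import random

random.seed(2)

f, fs = 50.0, 1600.0              # power frequency and sampling rate (Hz)
A_true, phi_true = 1.2, 0.6       # amplitude and phase angle to recover
w = 2.0 * math.pi * f / fs        # digital angular frequency (rad/sample)

# State x = [A*cos(phi), A*sin(phi)] is constant, so the prediction step is
# the identity; the measurement model is y_k = x0*cos(w*k) - x1*sin(w*k).
x = [0.0, 0.0]
P = [[10.0, 0.0], [0.0, 10.0]]    # guessed initial covariance
R = 0.01 ** 2                     # measurement-noise variance

for k in range(400):
    y = A_true * math.cos(w * k + phi_true) + random.gauss(0.0, 0.01)
    h = [math.cos(w * k), -math.sin(w * k)]        # measurement row vector
    Ph = [P[0][0] * h[0] + P[0][1] * h[1],         # P * h^T
          P[1][0] * h[0] + P[1][1] * h[1]]
    S = h[0] * Ph[0] + h[1] * Ph[1] + R            # innovation variance
    K = [Ph[0] / S, Ph[1] / S]                     # Kalman gain
    e = y - (h[0] * x[0] + h[1] * x[1])            # innovation
    x = [x[0] + K[0] * e, x[1] + K[1] * e]         # state update
    hP = [h[0] * P[0][0] + h[1] * P[1][0],         # h * P
          h[0] * P[0][1] + h[1] * P[1][1]]
    P = [[P[0][0] - K[0] * hP[0], P[0][1] - K[0] * hP[1]],
         [P[1][0] - K[1] * hP[0], P[1][1] - K[1] * hP[1]]]

A_est = math.hypot(x[0], x[1])    # recovered amplitude
phi_est = math.atan2(x[1], x[0])  # recovered phase angle
```

The thesis's algorithms go further: they pre-filter the samples with an FIR stage, estimate the frequency itself, and compute the initial settings rather than guessing them.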
Abstract:
The potential of distributed reactive power control to improve the voltage profile of radial distribution feeders has been reported in the literature by a few authors. However, multiple inverters injecting or absorbing reactive power across a distribution feeder may introduce control interactions and/or voltage instability. Such controller interactions can be alleviated if the inverters are allowed to operate on a voltage droop. First, the paper demonstrates that a linear shallow droop line can maintain the steady-state voltage profile close to the reference, up to a certain level of loading. Then, the impacts of the shallow droop line control on line losses and line power factors are examined. Finally, a piecewise linear droop line is proposed which achieves reduced line losses and a close-to-unity power factor at the feeder source.
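The shape of such a controller can be sketched as a volt-var curve; the breakpoints, slopes and limits below are illustrative assumptions, not the paper's tuned values:

```python
import math

def droop_q(v_pu, q_max=1.0):
    """Piecewise linear volt-var droop: a shallow slope inside an inner
    voltage band and a steeper slope outside it, saturating at +/- q_max.
    Positive Q means the inverter injects reactive power (supports voltage).
    """
    shallow, steep, knee = 2.0, 10.0, 0.02   # slopes (p.u. Q per p.u. V), band
    dv = 1.0 - v_pu                          # deviation below nominal voltage
    if abs(dv) <= knee:
        q = shallow * dv
    else:
        q = math.copysign(shallow * knee + steep * (abs(dv) - knee), dv)
    return max(-q_max, min(q_max, q))

# At nominal voltage no reactive power is exchanged; a sagging feeder draws
# increasing injection until the inverter limit is reached.
```

The shallow inner segment keeps reactive exchange (and hence line losses) small in normal operation, while the steep outer segments provide firm voltage support under heavy loading.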
Abstract:
Electricity has been the major source of power in most railway systems. Reliable, efficient and safe power distribution to the trains is vitally important to the overall quality of railway service. Like any large-scale engineering system, the design, operation and planning of traction power systems rely heavily on computer simulation. This paper reviews the major features of traction power system modelling and the general practices for its simulation, and introduces the development of the latest simulation approach, with discussions of simulation results and practical applications. Remarks are also given on the future challenges in traction power system simulation.
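At its simplest, the kind of electrical calculation such simulators repeat many times can be sketched as a single-train DC feed model; the voltage level and resistance figure below are hypothetical, not drawn from the paper:

```python
# A minimal, illustrative DC traction feed calculation (not the paper's
# simulator): a single train drawing current from one substation over a
# resistive feeder. All figures are hypothetical.

V_SUB = 1500.0        # substation no-load voltage (V), a common DC metro level
R_PER_KM = 0.03       # combined catenary + rail resistance (ohm/km), assumed

def pantograph_voltage(distance_km, current_a):
    """Voltage at the train's pantograph after the line drop."""
    return V_SUB - current_a * R_PER_KM * distance_km

# A 2000 A train 5 km from the substation sees a 300 V drop:
v = pantograph_voltage(5.0, 2000.0)   # 1500 - 2000*0.03*5 = 1200.0
```

A real simulator couples many such trains, substations and feeder sections, and iterates the resulting network equations over each timetable step.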
Abstract:
The use of grant contracts to deliver community services is now a significant feature of all Australian government administrations. These contracts are the primary instruments governing the provision of such services to citizens, and are largely outside the usual parliamentary review mechanisms and constraints. This article examines the extent of the erosion of fundamental constitutional principles facilitated by the use of private contracts, by applying the principles used in the scrutiny of delegated legislation to standard-form federal and State community service contracts. It reveals extensive executive power which, if the relationship were founded in legislative instruments rather than in private contract, would at least have to be justified to Parliament, and might well not be tolerated.
Abstract:
Embedded real-time programs rely on external interrupts to respond to events in their physical environment in a timely fashion. Formal program verification theories, such as the refinement calculus, are intended for development of sequential, block-structured code and do not allow for asynchronous control constructs such as interrupt service routines. In this article we extend the refinement calculus to support formal development of interrupt-dependent programs. To do this we: use a timed semantics, to support reasoning about the occurrence of interrupts within bounded time intervals; introduce a restricted form of concurrency, to model composition of interrupt service routines with the main program they may preempt; introduce a semantics for shared variables, to model contention for variables accessed by both interrupt service routines and the main program; and use real-time scheduling theory to discharge timing requirements on interruptible program code.
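The shared-variable hazard the article's semantics must capture can be shown with a small deterministic sketch. Python is used here purely as executable pseudocode; the counter and the two-step interleaving are hypothetical, not drawn from the article:

```python
# A main program performs a non-atomic read-modify-write on a shared counter,
# and an interrupt service routine (ISR) that also increments the counter may
# fire between the read and the write.

def run(isr_fires_after_read):
    shared = 0

    def isr():
        nonlocal shared
        shared += 1          # the ISR's increment of the shared counter

    # Main program: read, (possible preemption), write back read + 1.
    tmp = shared             # read
    if isr_fires_after_read:
        isr()                # interrupt preempts between read and write
    shared = tmp + 1         # write: overwrites the ISR's update

    if not isr_fires_after_read:
        isr()                # interrupt arrives outside the critical section
    return shared

# Both executions perform one main increment and one ISR increment, yet
# preemption inside the read-modify-write loses an update:
# run(False) == 2, run(True) == 1
```

Reasoning about exactly this kind of interleaving, together with the bounded-time occurrence of interrupts, is what the extended refinement calculus formalises.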
Abstract:
How bloggers and other independent online commentators criticise, correct, and otherwise challenge conventional journalism has been known for years, but has yet to be fully accepted by journalists; hostilities between the media establishment and the new generation of citizen journalists continue to flare up from time to time. The old gatekeeping monopoly of the mass media has been challenged by the new practice of gatewatching: by individual bloggers and by communities of commentators which may not report the news first-hand, but curate and evaluate the news and other information provided by official sources, and thus provide an important service. And this now takes place ever more rapidly, almost in real time: using the latest social networks, which disseminate, share, comment, question, and debunk news reports within minutes, and using additional platforms that enable fast and effective ad hoc collaboration between users. When hundreds of volunteers can prove within a few days that a German minister has been guilty of serious plagiarism, when the world first learns of earthquakes and tsunamis via Twitter – how does journalism manage to keep up?
Abstract:
With examples drawn from media coverage of the War on Terror, the 2003 invasion of Iraq, Hurricane Katrina and the London underground bombings, Cultural Chaos explores the changing relationship between journalism and power in an increasingly globalised news culture. In this new text, Brian McNair examines the processes of cultural, geographic and political dissolution in the post-Cold War era and the rapid evolution of information and communication technologies. He investigates the impact of these trends on domestic and international journalism and on political processes in democratic and authoritarian societies across the world. Written in a lively and accessible style, Cultural Chaos provides students with an overview of the evolution of the sociology of journalism, a critical review of current thinking within media studies and an argument for a revision and renewal of the paradigms that have dominated the field since the early twentieth century. Separate chapters are devoted to new developments such as the rise of the blogosphere and satellite television news and their impact on journalism more generally. Cultural Chaos will be essential reading for all those interested in the emerging globalised news culture of the twenty-first century.
Abstract:
Several authors stress data's crucial role as the foundation for operational, tactical and strategic decisions (e.g., Redman 1998, Tee et al. 2007). Data provides the basis for decision making, as data collection and processing are typically associated with reducing uncertainty in order to make more effective decisions (Daft and Lengel 1986). While the first series of investments in Information Systems/Information Technology (IS/IT) in organizations improved data collection, restricted computational capacity and limited processing power created challenges (Simon 1960). Fifty years on, capacity and processing problems are increasingly less relevant; in fact, the opposite problem exists. Determining data relevance and usefulness is complicated by increased data capture and storage capacity, as well as continual improvements in information processing capability. As the IT landscape changes, businesses are inundated with ever-increasing volumes of data from both internal and external sources, available on both an ad hoc and a real-time basis. More data, however, does not necessarily translate into more effective and efficient organizations, nor does it increase the likelihood of better or timelier decisions. This raises questions about what data managers require to assist their decision-making processes.