885 results for Repeated Averages of Real-Valued Functions
Abstract:
Police work tasks are diverse and require the ability to take command, demonstrate leadership, make serious decisions and be self-directed (Beck, 1999; Brunetto & Farr-Wharton, 2002; Howard, Donofrio & Boles, 2002). This work is usually performed in pairs or sometimes by an officer working alone. Operational police work is seldom performed under the watchful eyes of a supervisor, and a great amount of reliance is placed on the high levels of motivation and professionalism of individual officers. Research has shown that highly motivated workers produce better outcomes (Whisenand & Rush, 1998; Herzberg, 2003). It is therefore important that Queensland police officers are highly motivated to provide a quality service to the Queensland community. This research aims to identify factors which motivate Queensland police to perform quality work. Researchers acknowledge that there is a lack of research and knowledge in regard to the factors which motivate police (Beck, 1999; Bragg, 1998; Howard, Donofrio & Boles, 2002; McHugh & Verner, 1998). The motivational factors were identified in regard to the demographic variables of age, sex, rank, tenure and education. The model for this research is Herzberg’s (1959) two-factor theory of workplace motivation. Herzberg found that there are two broad types of workplace motivational factors: those driven by a need to prevent loss or harm and those driven by a need to gain personal satisfaction or achievement. His study identified 16 basic sub-factors that operate in the workplace. The research utilised a questionnaire instrument based on the sub-factors identified by Herzberg (1959). The questionnaire consisted of an initial section which sought demographic information about the participant, followed by 51 Likert-scale questions. The instrument is an expanded version of an instrument previously used in doctoral studies to identify sources of police motivation (Holden, 1980; Chiou, 2004). The questionnaire was forwarded to approximately 960 police in the Brisbane Metropolitan North Region. The data were analysed using factor analysis, MANOVAs, ANOVAs and multiple regression analysis to identify the key sources of police motivation and to determine the relationships between motivational factors and demographic variables such as age, rank, educational level, tenure and generational cohort. A total of 484 officers responded to the questionnaire from the sample population of 960. Factor analysis revealed five broad Prime Motivational Factors that motivate police in their work: Feeling Valued, Achievement, Workplace Relationships, the Work Itself, and Pay and Conditions. The factor Feeling Valued highlighted the importance of positive, supportive leaders in motivating officers. Many officers commented that supervisors who only provided negative feedback diminished their sense of feeling valued and were a key source of de-motivation. Officers also frequently commented that they were motivated by operational police work itself, whilst demonstrating a strong sense of identity with their team and colleagues. The study showed a general need for acceptance by peers and an idealistic motivation to assist members of the community in need and protect victims of crime. Generational cohorts were not found to exert a significant influence on police motivation. The demographic variable with the single greatest influence on police motivation was tenure.
Motivation levels were found to drop dramatically during the first two years of an officer’s service and generally not to improve significantly until near retirement age. The findings of this research provide the foundation for a number of recommendations regarding police retirement, training and work allocation that are aimed at improving police motivation levels. The five-factor Prime Motivational Factor model developed in this study is recommended for use as a planning tool by police leaders to improve the motivational and job-satisfaction components of Police Service policies. The findings of this study also provide a better understanding of the current sources of police motivation. They are expected to have valuable application for Queensland police human resource management when considering policies and procedures in the areas of motivation, stress reduction and attracting suitable staff to specific areas of responsibility.
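To illustrate the style of analysis described above, here is a minimal, hypothetical sketch of extracting five factors from Likert-scale survey responses. The data are synthetic stand-ins (the study's actual 51 items and 484 responses are not reproduced here), and the library choice (scikit-learn) is an assumption, not the study's software.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for the survey: 484 respondents x 51 Likert items,
# generated from five latent factors (the study's real data are not public).
rng = np.random.default_rng(0)
n_resp, n_items, n_factors = 484, 51, 5
loadings = rng.normal(0.0, 1.0, (n_factors, n_items))
latent = rng.normal(0.0, 1.0, (n_resp, n_factors))
noise = rng.normal(0.0, 1.0, (n_resp, n_items))
responses = np.clip(np.round(3 + 0.5 * latent @ loadings + noise), 1, 5)

# Exploratory factor analysis with varimax rotation, then inspect which
# items load most heavily on each extracted factor.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)
top_items = np.argsort(-np.abs(fa.components_), axis=1)[:, :5]
for k, items in enumerate(top_items):
    print(f"factor {k}: highest-loading items {items}")
```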
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified; the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
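As an illustration of the whiteness-monitoring idea behind the KID, the following is a minimal sketch, not the thesis's algorithm: a single mode is simulated as a noise-driven AR(2) process, one-step prediction errors (the innovations of the equivalent Kalman filter) are computed from a model fitted to healthy data, and the innovation periodogram is tested against a chi-squared threshold. All parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_mode(f_hz, damping, n, fs=10.0):
    """One oscillatory mode as a noise-driven AR(2) process (illustrative)."""
    dt = 1.0 / fs
    r = np.exp(-damping * dt)                   # pole radius sets the damping
    a1 = 2.0 * r * np.cos(2.0 * np.pi * f_hz * dt)
    a2 = -r * r
    y, e = np.zeros(n), rng.standard_normal(n)
    for t in range(2, n):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]
    return y

def ar2_fit(y):
    """Least-squares AR(2) fit standing in for Kalman model identification."""
    Y = np.column_stack([y[1:-1], y[:-2]])
    return np.linalg.lstsq(Y, y[2:], rcond=None)[0]

def innovations(y, a):
    """One-step prediction errors = Kalman innovations for an AR model."""
    return y[2:] - (a[0] * y[1:-1] + a[1] * y[:-2])

# Identify the model on a healthy, well-damped system in normal operation.
train = simulate_mode(0.5, damping=0.4, n=2000)
a_hat = ar2_fit(train)
sigma2 = np.var(innovations(train, a_hat))

# Monitor a window in which the damping has suddenly collapsed.
test = simulate_mode(0.5, damping=0.02, n=600)
nu = innovations(test, a_hat)

# If the model were still valid, nu would be white and its periodogram
# ordinates ~ (sigma2 / 2) * chi-squared(2); the peak location flags the mode.
P = np.abs(np.fft.rfft(nu)) ** 2 / len(nu)
thresh = 0.5 * sigma2 * stats.chi2.ppf(1.0 - 0.01 / len(P), df=2)
freqs = np.fft.rfftfreq(len(nu), d=0.1)
print("alarm:", P.max() > thresh, "| peak near", freqs[P.argmax()], "Hz")
```

Under the healthy model the scaled periodogram ordinates of the innovations are approximately chi-squared with 2 degrees of freedom, so a Bonferroni-corrected quantile gives a simple per-window alarm threshold, and the offending peak indicates the modal frequency involved.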
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation—or dispersion—is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with the exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
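The flavour of such a specification can be sketched as follows; this is a hypothetical stand-in using PyMC (with its default NUTS sampler rather than the Gibbs sampler used in the study), synthetic data in place of the Georgia data, and an assumed log-linear form for both the mean and the dispersion.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 200

# Hypothetical covariates standing in for major/minor road traffic flows.
log_f_major = rng.normal(9.0, 0.5, n)
log_f_minor = rng.normal(7.5, 0.6, n)
X = np.column_stack([np.ones(n), log_f_major, log_f_minor])
y = rng.poisson(np.exp(-6.0 + 0.6 * log_f_major + 0.4 * log_f_minor))

with pm.Model():
    beta = pm.Normal("beta", 0.0, 10.0, shape=3)    # mean-structure coefficients
    gamma = pm.Normal("gamma", 0.0, 10.0, shape=3)  # dispersion-structure coefficients
    mu = pm.math.exp(pm.math.dot(X, beta))          # expected crash count
    alpha = pm.math.exp(pm.math.dot(X, gamma))      # covariate-dependent dispersion
    pm.NegativeBinomial("crashes", mu=mu, alpha=alpha, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# Posterior mass concentrated near zero for the covariate entries of gamma
# would indicate that, given this mean structure, the extra-variation does
# not depend on the covariates.
print(idata.posterior["gamma"].mean(dim=("chain", "draw")).values)
```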
Abstract:
Identifying crash “hotspots”, “blackspots”, “sites with promise”, or “high risk” locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, only a small number of studies have used controlled experiments to systematically assess them. Using experimentally derived simulated data (which are argued to be superior to empirical data for this purpose), three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data where high risk sites are not known for certain. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of ‘real world’ conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and false negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
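A minimal simulation of this kind of experiment might look like the following sketch; the safety performance function, dispersion value and site counts are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, years, top_k = 1000, 3, 50

# Hypothetical safety performance function: expected crashes rise with AADT.
aadt = rng.lognormal(9.0, 0.5, n_sites)
mu_spf = 3e-3 * aadt ** 0.8            # predicted crashes/site/year
phi = 2.0                              # NB inverse-dispersion (gamma shape)

# True site risk varies (gamma) around the SPF; observed counts are Poisson.
true_mean = rng.gamma(phi, mu_spf / phi)
obs = rng.poisson(true_mean * years)

hot_true = set(np.argsort(true_mean)[-top_k:])   # known a priori in simulation

hot_rank = set(np.argsort(obs)[-top_k:])         # simple ranking on raw counts

# Empirical Bayes: shrink each site's observed rate toward its SPF prediction.
w = (phi / mu_spf) / (phi / mu_spf + years)      # site-specific credibility weight
eb = w * mu_spf + (1.0 - w) * obs / years
hot_eb = set(np.argsort(eb)[-top_k:])

def errors(flagged):
    fp = len(flagged - hot_true)   # false positives: safe sites flagged
    fn = len(hot_true - flagged)   # false negatives: high-risk sites missed
    return fp, fn

print("ranking FP/FN:", errors(hot_rank))
print("EB      FP/FN:", errors(hot_eb))
```

Because the Empirical Bayes estimate shrinks each site's observed rate toward its model-predicted mean, it discounts sites whose high counts are plausibly random noise, which is the mechanism behind its lower error rates.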
Abstract:
This paper discusses a new paradigm of real-time simulation of power systems in which equipment can be interfaced with a real-time digital simulator. In this scheme, one part of a power system is simulated using a real-time simulator, while the other part is implemented as a physical system. The only interface between the physical system and the computer-based simulator is a data-acquisition system. The physical system is driven by a voltage-source converter (VSC) that mimics the power system simulated in the real-time simulator. In this paper, the VSC operates in a voltage-control mode to track the point of common coupling (PCC) voltage signal supplied by the digital simulator. This type of splitting of a network into two parts, running a real-time simulation with a physical system in parallel, is called a "power network in loop" here. It opens up the possibility of studying the interconnection of one or several distributed generators to a complex power network. The proposed implementation is verified through simulation studies using PSCAD/EMTDC and through hardware implementation on a TMS320F2812 DSP.
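A toy sketch of the exchange at the heart of such a scheme is given below, assuming a sinusoidal PCC reference on the simulator side and a first-order lag plus PI loop standing in for the VSC; all gains and time constants are illustrative, not the paper's.

```python
import numpy as np

dt = 50e-6                     # shared simulator/controller step (s), illustrative
t = np.arange(0.0, 0.1, dt)
kp, ki = 5.0, 1000.0           # illustrative PI gains of the VSC voltage loop
tau = 1e-3                     # illustrative first-order converter lag (s)
v_vsc, integ = 0.0, 0.0
err_log = []

for tk in t:
    # 1. Simulator side: the real-time simulator publishes the PCC voltage.
    v_pcc_ref = np.sin(2 * np.pi * 50 * tk)        # 50 Hz PCC waveform (p.u.)

    # 2. Data-acquisition interface: reference handed to the VSC controller.
    err = v_pcc_ref - v_vsc

    # 3. Physical side: the VSC in voltage-control mode tracks the reference.
    integ += ki * err * dt
    u = kp * err + integ
    v_vsc += dt / tau * (u - v_vsc)                # converter modelled as a lag
    err_log.append(abs(err))

print(f"mean tracking error over the run: {np.mean(err_log):.4f} p.u.")
```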
Abstract:
Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCS) for real-time applications. Distributed real-time control requires satisfaction of (soft) real-time performance from the underlying networks for the delivery of real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications: they neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCS), i.e., periodic round-trip traffic. Therefore, the adoption of 802.11 networks in real-time WNCSs causes challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round-trip time and packet loss ratio, under the periodic round-trip traffic pattern, a unique feature of typical NCSs. Considering periodic round-trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
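For flavour, the following sketch solves the classic Bianchi-style DCF Markov-chain fixed point for saturated stations; the paper's model additionally captures periodic round-trip NCS traffic, which this simplified sketch does not, and the window/stage/station parameters are illustrative.

```python
from scipy.optimize import brentq

W, m, n = 32, 5, 10   # min contention window, max backoff stages, stations

def tau_of_p(p):
    """Per-station transmission probability given conditional collision prob p."""
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

# Fixed point: a transmission collides iff any of the other n-1 stations sends.
p = brentq(lambda p: p - (1 - (1 - tau_of_p(p)) ** (n - 1)), 1e-9, 0.4999)
tau = tau_of_p(p)

p_tr = 1 - (1 - tau) ** n                      # slot contains >= 1 transmission
p_s = n * tau * (1 - tau) ** (n - 1) / p_tr    # a transmission is a success
print(f"p = {p:.3f}, tau = {tau:.4f}, success ratio = {p_s:.3f}")
```

From these slot-level probabilities, indices such as saturation throughput and expected round-trip delay follow by weighting slot durations, which is the general route a Markov-chain DCF analysis takes.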
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account cases where a term is explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
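A much-simplified sketch of how an indexer might account for negation is shown below: NegEx-style trigger words with a fixed scope window, a hypothetical scheme for illustration rather than the paper's language models.

```python
# Simplified NegEx-style marking (hypothetical illustration): terms inside a
# short window after a negation trigger are stored as negated variants, so
# "no chest pain" indexes NEG_chest, NEG_pain rather than the plain terms.
NEG_TRIGGERS = {"no", "denies", "without", "not"}
SCOPE_BREAKERS = {".", ";", "but"}

def mark_negated(text, window=6):
    out, scope = [], 0
    for tok in text.lower().replace(".", " .").split():
        if tok in NEG_TRIGGERS:
            scope = window              # open a negation window
            out.append(tok)
        elif tok in SCOPE_BREAKERS or scope == 0:
            scope = 0                   # close any open window
            out.append(tok)
        else:
            out.append("NEG_" + tok)    # mark the token as negated
            scope -= 1
    return " ".join(out)

print(mark_negated("review of systems showed no chest pain or shortness of breath"))
```

A real system would use phrase-level triggers and scope terminators; the point of the sketch is only that negated occurrences can be indexed as distinct terms so they no longer count as evidence of "aboutness".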
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems with different degrees of complexity. Power distribution systems and electric railway networks are areas to which computer simulations are being heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means to reduce the computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing based on a multiple-CPU server, and we concentrate, in particular, on the speedup performance of such an approach.
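The kind of speedup measurement discussed can be sketched as follows, with a dense linear solve standing in for each feeder section's computation; the section count, matrix size, worker count and use of Python's multiprocessing are all assumptions for illustration (the measured speedup will also depend on BLAS threading on the host).

```python
import time
from multiprocessing import Pool

import numpy as np

def solve_section(seed):
    """Stand-in for one feeder section's solve: a dense linear system."""
    rng = np.random.default_rng(seed)
    Y = rng.random((600, 600)) + 600.0 * np.eye(600)   # diagonally dominant
    i = rng.random(600)
    return float(np.linalg.solve(Y, i).sum())

if __name__ == "__main__":
    sections = list(range(16))

    t0 = time.perf_counter()
    serial = [solve_section(s) for s in sections]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=4) as pool:        # 4 worker processes
        parallel = pool.map(solve_section, sections)
    t_parallel = time.perf_counter() - t0

    assert np.allclose(serial, parallel)   # same answers, different schedule
    print(f"speedup with 4 workers: {t_serial / t_parallel:.2f}x")
```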
Abstract:
This paper anatomises emerging developments in online community engagement in a major global industry: real estate. Economists argue that we are entering a ‘social network economy’ in which ‘complex social networks’ govern consumer choice and product value. In the light of this, organisations are shifting from thinking and behaving in the conventional ‘value chain’ model, in which exchanges between firms and customers are one-way only (from the firm to the consumer), to the ‘value ecology’ model, in which consumers and their networks become co-creators of the value of the product. This paper studies the way in which the global real estate industry is responding to this environment. It identifies three key areas in which online real estate ‘value ecology’ work is occurring: real estate social networks, games, and locative media / augmented reality applications. Uptake of real estate applications is, of course, user-driven: the paper not only highlights emerging innovations, it also identifies which of these innovations are actually being taken up by users, and the content contributed as a result. The paper thus provides a case study of one major industry’s shift to a web 2.0 communication model, focusing on emerging trends and issues.
Abstract:
“You need to be able to tell stories. Illustration is a literature, not a pure fine art. It’s the fine art of writing with pictures.” – Gregory Rogers. This paper reads two recent wordless picture books by Australian illustrator Gregory Rogers, The Boy, The Bear, The Baron, The Bard (2004) and Midsummer Knight (2006), in order to consider how “Shakespeare” is produced as a complex object of consumption for the implied child reader. In these books, other worlds are constructed via time-travel and travel to a fantasy world, and the narratives clearly presume reader competence in narrative temporality and structure, and cultural literacy (particularly in reference to Elizabethan London and William Shakespeare), even as they challenge normative concepts via use of the fantastic. Exploring both narrative sequences and individual images reveals a tension in the books between past and present, and real and imagined. Where children’s texts tend to privilege Shakespeare, the man and his works, as inherently valuable, Rogers’s work complicates any sense of cultural value. Even as these picture books depend on a lexicon of Shakespearean images for meaning and coherence, they represent William Shakespeare as both an enemy to children (The Boy) and a national traitor (Midsummer). The protagonists, a boy in the first book and the bear he rescues in the second, effect political change by defeating Shakespeare. However, where these texts might seem to be activating a postcolonial cultural critique, this is complicated both by presumed readerly competence in authorized cultural discourses and by repeated affirmation of monarchies as ideal political systems. Power in these picture books, then, is at once rewarded and withheld, in a dialectic of (possibly postcolonial) agency and (arguably colonial) subjection; even as they challenge dominant valuations of “Shakespeare”, these books do not challenge understandings of the “Child”.
Abstract:
The growth of powerful entertainment functions on mobile devices, in particular mobile video, has recently attracted much attention. Studies on mobile TV, one form of mobile video, have been conducted in many countries. However, little research focuses on the holistic usage of mobile video. To understand the features of such usage, we conducted an online survey in Brisbane, Australia, during the first half of 2010. Our findings reveal similarities and diversities between the usage of mobile TV in particular and mobile video as a whole. The results could aid in improving the design of future studies, with a view to ultimately increasing user satisfaction.
Abstract:
In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high-dimensional spaces computed from a corpus of text. These high-dimensional spaces are often referred to as Semantic Spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex-valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in Information Retrieval and Quantum Mechanics, and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
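One plausible reading of such a representation is sketched below, a hypothetical illustration rather than the paper's method: each term receives a random unit-phase (complex) elemental vector, semantic vectors accumulate the phases of co-occurring terms, and similarity is the magnitude of the normalised Hermitian inner product.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 256

# Toy corpus; a real semantic space would be built from a large text corpus.
docs = [
    "quantum mechanics and hilbert space geometry",
    "information retrieval with vector space models",
    "probability logic and geometry in hilbert space",
]
vocab = sorted({w for d in docs for w in d.split()})

# Elemental vectors: random unit-magnitude phases (points on the unit circle).
index = {w: np.exp(2j * np.pi * rng.random(dim)) for w in vocab}
sem = {w: np.zeros(dim, dtype=complex) for w in vocab}

# Each term's semantic vector accumulates the phases of its co-occurring terms.
for d in docs:
    words = d.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                sem[w] += index[c]

def sim(a, b):
    """Similarity via the normalised Hermitian inner product."""
    va, vb = sem[a], sem[b]
    return abs(np.vdot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12)

print(sim("hilbert", "geometry"), sim("hilbert", "retrieval"))
```

The Hermitian inner product here is what ties the probability/geometry reading together: it is the same operation that defines amplitudes in a Hilbert space formulation.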
Abstract:
Efficient state asset management is crucial for governments as it facilitates the fulfillment of their public functions, which include the provision of essential services and other public administration support. In recent times, economies internationally, and particularly in Southeast Asia, have displayed increased recognition of the importance of efficiencies across state asset management law, policies and practice. This has been exemplified by a surge in notable instances of reform in state asset management. A prominent theme in this phenomenon is the consideration of governance principles within the re-conceptualization of state asset management law and related policy, with many countries recognizing variability in the quality of asset governance and opportunities for profit as critical factors. This issue is very current in Indonesia, where a major reform process in this area has been confirmed by the establishment of a new Directorate of State Asset Management. The incumbent Director-General of State Asset Management has confirmed a re-emphasis on adherence to governance principles within applicable state asset management law and policy reform. This paper reviews aspects of the challenge of reviewing and reforming Indonesian practice within state asset management law and policy specifically related to public housing, public buildings, parklands, and vacant land. A critical issue in beginning this review is how Indonesia currently conceptualizes the notion of asset governance, how this meaning is embodied in recent changes in law and policy, and, importantly, how it shapes options for future change. This paper also discusses the potential complexities of uniquely Indonesian characteristics such as the decentralisation and regional autonomy regime, political history, and bureaucratic culture.
Abstract:
The magneto-rheological (MR) fluid damper is a semi-active control device that has recently received increased attention from the vibration control community. However, the inherent nonlinear hysteresis of magneto-rheological fluid dampers is one of the challenging aspects of utilizing this device to achieve high system performance. The development of an accurate model is therefore necessary to take advantage of its unique characteristics. Research by others [3] has shown that a system of nonlinear differential equations can successfully be used to describe the hysteresis behavior of the MR damper. The focus of this paper is to develop an alternative method for modeling the damper in the form of a centre-average fuzzy inference system, where back-propagation learning rules are used to adjust the weights of the network. The inputs to the model are taken from experimental data. The resulting fuzzy inference system satisfactorily represents the behavior of the MR fluid damper with reduced computational requirements. Use of the neuro-fuzzy model increases the feasibility of real-time simulation.
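The modelling idea can be sketched as a zero-order centre-average fuzzy system trained by gradient descent; the sketch below uses synthetic force-velocity data and illustrative rule counts and learning rates, not the paper's experimental data or exact learning rules.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for measured damper data: force vs. velocity with noise.
v = rng.uniform(-1.0, 1.0, 400)
f = np.tanh(4.0 * v) + 0.1 * rng.standard_normal(400)

n_rules = 7
centres = np.linspace(-1.0, 1.0, n_rules)   # Gaussian membership centres
sigma = 0.3                                 # shared membership width
w = np.zeros(n_rules)                       # rule consequents (trained weights)

def fire(x):
    """Gaussian membership degree of each rule for inputs x."""
    return np.exp(-((x[:, None] - centres) ** 2) / (2.0 * sigma ** 2))

# Centre-average defuzzification is linear in w, so backprop reduces to a
# simple gradient step on the normalised firing strengths.
mu = fire(v)
phi = mu / mu.sum(axis=1, keepdims=True)    # normalised firing strengths
lr = 0.5
for epoch in range(200):
    err = phi @ w - f                       # model output minus measured force
    w -= lr * phi.T @ err / len(v)          # backprop-style weight update

print("training RMSE:", np.sqrt(np.mean((phi @ w - f) ** 2)))
```

A full neuro-fuzzy damper model would also take the control current as an input and tune the membership parameters, but the centre-average structure and weight update are the same in spirit.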
Abstract:
In partnering with students and industry, it is important for universities to recognize and value that the nature of knowledge and learning that emanates from work integrated learning experiences is different from formal university-based learning. Learning is not a by-product of work; rather, learning is fundamental to engaging in work practice. Work integrated learning experiences provide unique opportunities for students to integrate theory and practice through the solving of real world problems. This paper reports findings to date of a project that sought to identify key issues and practices faced by academics, industry partners and students engaged in the provision and experience of work integrated learning within an undergraduate creative industries program at a major metropolitan university. In this paper, those findings are focused on some of the particular qualities and issues related to the assessment of learning at and through the work integrated experience. The findings suggest that assessment strategies need to better value the knowledges and practices of the Creative Industries. The paper also makes recommendations about how industry partners might best contribute to the assessment of students’ developing capabilities and to continuous reflection on courses and the assurance of learning agenda.