877 results for constrained controller
Abstract:
In this paper, two different high-bandwidth converter control strategies are discussed: one for voltage control and the other for current control. In each case, the converter is equipped with an output passive filter: an LC filter for the voltage controller and an LCL filter for the current controller. An important aspect discussed in the paper is how to avoid computation of unnecessary references by using high-pass filters in the feedback loop. The stability of the overall system, including the high-pass filters, has been analyzed. The choice of filter parameters is crucial for achieving desirable system performance. The achievable bandwidth is presented through frequency (Bode) plots of the system gains. It is illustrated that the proposed controllers are capable of tracking fundamental-frequency components along with low-order harmonic components. Extensive simulation results are presented to validate the control concepts presented in the paper.
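The feedback-path high-pass filtering described above can be illustrated with a minimal discrete-time sketch. The first-order filter form, the cutoff frequency and the sampling rate below are illustrative assumptions, not the paper's actual design.

```python
import math

def make_highpass(fc_hz, fs_hz):
    """First-order discrete high-pass filter via the bilinear transform
    of H(s) = s / (s + wc). fc_hz (cutoff) and fs_hz (sampling rate)
    are assumed values for illustration."""
    wc = 2 * math.pi * fc_hz
    T = 1.0 / fs_hz
    a = (2 - wc * T) / (2 + wc * T)   # feedback coefficient
    b = 2 / (2 + wc * T)              # feedforward coefficient
    state = {"x": 0.0, "y": 0.0}

    def step(x):
        # y[n] = b*(x[n] - x[n-1]) + a*y[n-1]: DC is blocked,
        # so slowly varying (unnecessary) reference content is removed.
        y = b * (x - state["x"]) + a * state["y"]
        state["x"], state["y"] = x, y
        return y

    return step
```

Fed a constant (DC) signal, the output decays toward zero, which is the property that lets the feedback loop ignore components it need not track.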
Abstract:
This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. Practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. 
Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceived themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Norreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180). This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). 
The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews of 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multi-paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. 
In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems are explained both by technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008). This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through the micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC in transforming collective and individual identities. 
Therefore, communication through the use of the BSC continued the self-producing of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap among corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. Parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. 
These case studies provide practical insights about the public sector where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.
Abstract:
Resource-based theory posits that firms achieve high performance by controlling resources that are rare, valuable and costly for others to duplicate or work around. Yet scholars have been less successful in understanding the processes and behaviours by which firms develop such resources. We draw on the behavioral theory of bricolage from the entrepreneurship literature to suggest one such mechanism by which firms may develop resource-based advantages. The core of our argument is that the idiosyncratic bundling processes synonymous with bricolage behavior may create advantageous resource positions by (i) allowing resource-constrained firms to allocate more of their limited resources to activities that they view as more strategically important, and (ii) increasing the difficulties other firms face in trying to imitate these advantages. Based on this reasoning we develop several hypotheses, which we test in the context of several samples from a large, longitudinal Australian study of new firm development. The results support our arguments that bricolage improves a firm's overall resource position while generating more areas of strong resource advantage and fewer areas of strong resource disadvantage. We find little support, however, for our argument that bricolage makes a firm's key resource advantages more difficult for other firms to imitate. We find some support for our argument that the role of bricolage in creating resource advantages is enhanced by the quality of the opportunity with which a firm is engaged.
Abstract:
Quality and bitrate modeling is essential to effectively adapt the bitrate and quality of videos delivered to multi-platform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to incorporate the spatial resolution in the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model to estimate the bitrate of videos in terms of quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, as is evident from the high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. This model can be used in a rate-constrained bit-stream adaptation scheme that selects the scalability parameters to optimize perceptual quality for a given bandwidth constraint.
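The observation that resolution-induced bitrate reduction is independent of the quantization parameter and frame rate implies a separable model. The sketch below illustrates that structure; the power-law factor forms and the exponents a, b, c are illustrative assumptions, not the paper's fitted model.

```python
def bitrate(q, t, s, r_max, q_min, t_max, s_max, a=1.0, b=0.5, c=0.75):
    """Separable bitrate model sketch: bitrate factors into independent
    quantization, frame-rate and spatial-resolution terms, so the relative
    reduction from lowering resolution s does not depend on q or t.
    Power-law forms and exponents a, b, c are illustrative assumptions."""
    return (r_max
            * (q / q_min) ** (-a)   # quantization-parameter term
            * (t / t_max) ** b      # frame-rate term
            * (s / s_max) ** c)     # spatial-resolution term
```

Because the model is separable, the ratio of bitrates at two resolutions is the same at every (q, t) operating point, which is exactly the independence property the paper reports.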
Abstract:
Ocean processes are dynamic and complex events that occur on multiple different spatial and temporal scales. To obtain a synoptic view of such events, ocean scientists focus on the collection of long-term time series data sets. Generally, these time series measurements are continually provided in real or near-real time by fixed sensors, e.g., buoys and moorings. In recent years, an increase in the utilization of mobile sensor platforms, e.g., Autonomous Underwater Vehicles, has been seen to enable dynamic acquisition of time series data sets. However, these mobile assets are not utilized to their full capabilities, generally only performing repeated transects or user-defined patrolling loops. Here, we provide an extension to repeated patrolling of a designated area. Our algorithms provide the ability to adapt a standard mission to increase information gain in areas of greater scientific interest. By implementing a velocity control optimization along the predefined path, we are able to increase or decrease spatiotemporal sampling resolution to satisfy the sampling requirements necessary to properly resolve an oceanic phenomenon. We present a path planning algorithm that defines a sampling path, which is optimized for repeatability. This is followed by the derivation of a velocity controller that defines how the vehicle traverses the given path. The application of these tools is motivated by an ongoing research effort to understand the oceanic region off the coast of Los Angeles, California. The computed paths are implemented with the computed velocities onto autonomous vehicles for data collection during sea trials. Results from this data collection are presented and compared for analysis of the proposed technique.
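The velocity control idea, slowing the vehicle where scientific interest is high to densify sampling, can be sketched as a simple time-allocation heuristic over path segments. This is an illustrative heuristic under assumed inputs, not the paper's optimizer.

```python
def plan_velocities(seg_lengths, interest, total_time, v_min, v_max):
    """Assign a speed to each segment of a predefined path so that
    high-interest segments are traversed slowly (denser spatiotemporal
    sampling), within the vehicle's speed envelope and a fixed mission
    duration. Illustrative sketch, not the authors' method."""
    # Allot dwell time proportional to segment length times interest weight.
    weights = [L * w for L, w in zip(seg_lengths, interest)]
    scale = total_time / sum(weights)
    speeds = []
    for L, wt in zip(seg_lengths, weights):
        v = L / (wt * scale)                 # speed = length / allotted time
        speeds.append(min(max(v, v_min), v_max))  # clip to speed envelope
    return speeds
```

Note that clipping to the speed envelope can perturb the total mission duration; a full planner would redistribute the surplus time across unclipped segments.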
Abstract:
Voluminous (≥3·9 × 105 km3), prolonged (∼18 Myr) explosive silicic volcanism makes the mid-Tertiary Sierra Madre Occidental province of Mexico one of the largest intact silicic volcanic provinces known. Previous models have proposed an assimilation–fractional crystallization origin for the rhyolites involving closed-system fractional crystallization from crustally contaminated andesitic parental magmas, with <20% crustal contributions. The lack of isotopic variation among the lower crustal xenoliths inferred to represent the crustal contaminants and coeval Sierra Madre Occidental rhyolite and basaltic andesite to andesite volcanic rocks has constrained interpretations for larger crustal contributions. Here, we use zircon age populations as probes to assess crustal involvement in Sierra Madre Occidental silicic magmatism. Laser ablation-inductively coupled plasma-mass spectrometry analyses of zircons from rhyolitic ignimbrites from the northeastern and southwestern sectors of the province yield U–Pb ages that show significant age discrepancies of 1–4 Myr compared with previously determined K/Ar and 40Ar/39Ar ages from the same ignimbrites; the age differences are greater than the errors attributable to analytical uncertainty. Zircon xenocrysts with new overgrowths in the Late Eocene to earliest Oligocene rhyolite ignimbrites from the northeastern sector provide direct evidence for some involvement of Proterozoic crustal materials, and, potentially more importantly, the derivation of zircon from Mesozoic and Eocene age, isotopically primitive, subduction-related igneous basement. The youngest rhyolitic ignimbrites from the southwestern sector show even stronger evidence for inheritance in the age spectra, but lack old inherited zircon (i.e. Eocene or older). Instead, these Early Miocene ignimbrites are dominated by antecrystic zircons, representing >33 to ∼100% of the dated population; most antecrysts range in age between ∼20 and 32 Ma. 
A sub-population of the antecrystic zircons is chemically distinct in terms of their high U (>1000 ppm to 1·3 wt %) and heavy REE contents; these are not present in the Oligocene ignimbrites in the northeastern sector of the Sierra Madre Occidental. The combination of antecryst zircon U–Pb ages and chemistry suggests that much of the zircon in the youngest rhyolites was derived by remelting of partially molten to solidified igneous rocks formed during preceding phases of Sierra Madre Occidental volcanism. Strong Zr undersaturation, and estimations for very rapid dissolution rates of entrained zircons, preclude coeval mafic magmas being parental to the rhyolite magmas by a process of lower crustal assimilation followed by closed-system crystal fractionation as interpreted in previous studies of the Sierra Madre Occidental rhyolites. Mafic magmas were more probably important in providing a long-lived heat and material flux into the crust, resulting in the remelting and recycling of older crust and newly formed igneous materials related to Sierra Madre Occidental magmatism.
Abstract:
The credit crisis of the past few years has affected the development industry like no other. Whilst early signs of loosening in bank credit policy are emerging, the ability of developers to proceed with new projects is still constrained by their inability to obtain project finance.
Abstract:
In a resource constrained business world, strategic choices must be made on process improvement and service delivery. There are calls for more agile forms of enterprises and much effort is being directed at moving organizations from a complex landscape of disparate application systems to that of an integrated and flexible enterprise accessing complex systems landscapes through service oriented architecture (SOA). This paper describes the analysis of strategies to detect supporting business services. These services can then be delivered in a variety of ways: web-services, new application services or outsourced services. The focus of this paper is on strategy analysis to identify those strategies that are common to lines of business and thus can be supported through shared services. A case study of a state government is used to show the analytical method and the detection of shared strategies.
Abstract:
Becoming a teacher in technology-rich classrooms is a complex and challenging transition for career-change entrants. Those with generic or specialist Information and Communication Technology (ICT) expertise bring a mindset about purposeful uses of ICT that enrich student learning and school communities. The transition process from a non-education environment is both enhanced and constrained by shifting the technology context of generic or specialist ICT expertise, developed through a former career as well as general life experience. In developing an understanding of the complexity of classrooms and creating a learner-centred way of working, perceptions about learners and learning evolve and shift. Shifts in thinking about how ICT expertise supports learners and enhances learning precede shifts in perceptions about being a teacher, working with colleagues, and functioning in schools, with varying degrees of intensity and impact on evolving professional identities. Current teacher education and school induction programs are seen to be falling short of meeting the needs of career-change entrants and, as a flow-on, the students they nurture. Research (see, for example, Tigchelaar, Brouwer, & Korthagen, 2008; Williams & Forgasz, 2009) highlights the value of the generic and specialist expertise career-change teachers bring to the profession and draws attention to the challenges such expertise begets (Anthony & Ord, 2008; Priyadharshini & Robinson-Pant, 2003). As such, the study described in this thesis investigated the perceptions of career-change entrants who have generic (Mishra & Koehler, 2006) or specialist expertise, that is, ICT qualifications and work experience in the use of ICT. The career-change entrants' perceptions were sought as they shifted the technology context and transitioned into teaching in technology-rich classrooms. The research involved an interpretive analysis of qualitative and quantitative data.
The study used the explanatory case study methodology (Yin, 1994), enriched through grounded theory processes (Strauss & Corbin, 1998), to develop a theory about professional identity transition from the perceptions of the participants in the study. The study provided insights into the expertise and experiences of career-change entrants, particularly in relation to how professional identities that include generic and specialist ICT knowledge and expertise were reconfigured while transitioning into the teaching profession. This thesis presents the Professional Identity Transition Theory, which encapsulates perceptions about teaching in technology-rich classrooms amongst a selection of the increasing number of career-change entrants. The theory, grounded in the data (Strauss & Corbin, 1998), proposes that career-change entrants experience transition phases of varying intensity that impact on professional identity, retention and development as a teacher. These phases are linked to a shift in perceptions rather than time as a teacher. Generic and specialist expertise in the use of ICT is both a weight from the past and an asset, which makes the transition process more challenging for career-change entrants. The study showed that career-change entrants used their experiences and perceptions to develop a way of working in a school community. Their way of working initially had an adaptive orientation focussed on immediate needs as their teaching practice developed. Following a shift in thinking, more generative ways of working focussed on the future emerged, enabling continual enhancement and development of practice. Sustaining such learning is a personal, school and systemic challenge for the teaching profession.
Abstract:
The present paper proposes a technical analysis method for extracting information about movement patterning in studies of motor control, based on a cluster analysis of movement kinematics. In a tutorial fashion, data from three different experiments are presented to exemplify and validate the technical method. When applied to three different basketball-shooting techniques, the method clearly distinguished between the different patterns. When applied to a cyclical wrist supination-pronation task, the cluster analysis provided the same results as an analysis using the conventional discrete relative phase measure. Finally, when analyzing throwing performance constrained by distance to target, the method grouped movement patterns together according to throwing distance. In conclusion, the proposed technical method provides a valuable tool to improve understanding of coordination and control in different movement models, including multiarticular actions.
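A minimal version of the clustering step can be sketched as k-means over fixed-length kinematic feature vectors (e.g. resampled joint-angle profiles). This is an illustrative stand-in; the paper's exact cluster-analysis procedure may differ.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over kinematic feature vectors, grouping trials
    with similar movement patterning. Illustrative sketch only."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k distinct trials as seeds
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each trial to the nearest center (squared Euclidean).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:                            # recompute center as cluster mean
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```

Applied to trials from distinct techniques (e.g. different shooting styles), well-separated movement patterns fall into separate clusters, mirroring the discrimination reported in the paper.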
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may deteriorate badly, and the control system may be rendered unstable. The main challenge of NCS design is to maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. Using this Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve a tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
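One common shape for predictive delay compensation is to send the actuator a horizon of predicted control values, so that when a packet arrives d steps late the actuator can apply the d-th prediction instead of a stale value. The scalar plant, feedback law and gains below are assumptions for illustration, not the thesis's compensator.

```python
def predictive_control_sequence(x, horizon, a, b, k):
    """Generate a horizon of predicted controls for a scalar plant
    x[n+1] = a*x[n] + b*u[n] under state feedback u = -k*x.
    An actuator receiving this packet d steps late applies u_seq[d],
    compensating the network delay. Plant and gain values are assumed."""
    u_seq = []
    for _ in range(horizon):
        u = -k * x
        u_seq.append(u)
        x = a * x + b * u   # roll the plant model one step ahead
    return u_seq
```

Packet loss is handled the same way: the actuator keeps consuming the most recent sequence until a fresh packet arrives.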
Abstract:
Reforming schooling to enable engagement and success for those typically marginalised and failed by schools is a necessary task for educational researchers and activists concerned with injustice. However, it is a difficult pursuit, with a long history of failed attempts. This paper outlines the rationale of an Australian partnership research project, Redesigning Pedagogies in the North (RPiN), which took on such an effort in public secondary schooling contexts that, in current times, are beset with 'crisis' conditions and constrained by policy rationales that make it difficult to pursue issues of justice. Within the project, university investigators and teachers collaborated in action research that drew on a range of conceptual resources for redesigning curriculum and pedagogies, including: funds of knowledge, vernacular or local literacies; place-based education; the 'productive pedagogies' and the 'unofficial curriculum' of popular culture and out-of-school learning settings. In bringing these resources together with the aim of interrupting the reproduction of inequality, the project developed a methodo-logic which builds on Bourdieuian insights.
Abstract:
A Wireless Sensor Network (WSN) is a set of sensors that are integrated with a physical environment. These sensors are small in size, and capable of sensing physical phenomena and processing them. They communicate in a multihop manner, due to a short radio range, to form an ad hoc network capable of reporting network activities to a data collection sink. Recent advances in WSNs have led to several new promising applications, including habitat monitoring, military target tracking, natural disaster relief, and health monitoring. A current sensor node, such as the MICA2, uses a 16-bit, 8 MHz Texas Instruments MSP430 micro-controller with only 10 KB RAM, 128 KB program space and 512 KB external flash memory to store measurement data, and is powered by two AA batteries. Due to these unique specifications and a lack of tamper-resistant hardware, devising security protocols for WSNs is complex. Previous studies show that data transmission consumes much more energy than computation. Data aggregation can greatly help to reduce this consumption by eliminating redundant data. However, aggregators are under the threat of various types of attacks. Among them, node compromise is usually considered one of the most challenging threats to the security of WSNs. In a node compromise attack, an adversary physically tampers with a node in order to extract the cryptographic secrets. This attack can be very harmful, depending on the security architecture of the network. For example, when an aggregator node is compromised, it is easy for the adversary to change the aggregation result and inject false data into the WSN. The contributions of this thesis to the area of secure data aggregation are manifold. We firstly define security for data aggregation in WSNs. In contrast with existing secure data aggregation definitions, the proposed definition covers the unique characteristics that WSNs have.
Secondly, we analyze the relationship between security services and adversarial models considered in existing secure data aggregation schemes in order to provide a general framework of required security services. Thirdly, we analyze existing cryptographic-based and reputation-based secure data aggregation schemes. This analysis covers the security services provided by these schemes and their robustness against attacks. Fourthly, we propose a robust reputation-based secure data aggregation scheme for WSNs. This scheme minimizes the use of heavy cryptographic mechanisms. The security advantages provided by this scheme are realized by integrating aggregation functionalities with: (i) a reputation system, (ii) estimation theory, and (iii) a change detection mechanism. We have shown that this addition helps defend against most of the security attacks discussed in this thesis, including the On-Off attack. Finally, we propose a secure key management scheme in order to distribute essential pairwise and group keys among the sensor nodes. The design idea of the proposed scheme is to combine Lamport's reverse hash chain with a conventional forward hash chain to provide both past and future key secrecy. The proposal avoids delivering the whole value of a new group key during a group key update; instead, only half of the value is transmitted from the network manager to the sensor nodes. This way, the compromise of a pairwise key alone does not lead to the compromise of the group key. The new pairwise key in our scheme is determined by Diffie-Hellman based key agreement.
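The dual-chain idea, a forward hash chain for future key secrecy combined with a Lamport-style reverse chain consumed backwards for past key secrecy, can be sketched as follows. The key-derivation details (SHA-256, concatenation order) are an illustration of the idea, not the thesis's exact protocol.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 as the one-way function."""
    return hashlib.sha256(data).digest()

def build_chains(seed_fwd: bytes, seed_bwd: bytes, n: int):
    """Sketch of a dual-hash-chain group key schedule.
    Forward chain F[i] = h(F[i-1]) is revealed in order, so past
    holders cannot derive future F values without the seed progression;
    reverse chain B is precomputed to length n and consumed backwards
    (Lamport-style), so revealing B[j] does not expose earlier keys.
    Group key K[i] = h(F[i] || B[n-1-i]): an illustrative assumption."""
    F = [seed_fwd]
    for _ in range(n - 1):
        F.append(h(F[-1]))
    B = [seed_bwd]
    for _ in range(n - 1):
        B.append(h(B[-1]))
    keys = [h(F[i] + B[n - 1 - i]) for i in range(n)]
    return keys
```

Because each group key mixes material from both chains, knowledge of one epoch's transmitted half does not by itself reveal previous or subsequent group keys.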
Abstract:
Focusing on the conditions that an optimization problem may comply with, the so-called convergence conditions are proposed, and subsequently a stochastic optimization algorithm named the DSZ algorithm is presented to deal with both unconstrained and constrained optimization. The principle is discussed in the theoretical model of the DSZ algorithm, from which the practical model of the DSZ algorithm is derived. The efficiency of the practical model is demonstrated by comparison with similar algorithms such as Enhanced Simulated Annealing (ESA), Monte Carlo Simulated Annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS), and the Genetic Algorithm (GA), using a set of well-known unconstrained and constrained optimization test cases. Further attention is given to strategies for optimizing high-dimensional unconstrained problems using the DSZ algorithm.
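The DSZ algorithm itself is not specified in the abstract, but the family it is benchmarked against can be illustrated with a minimal simulated annealing loop: Metropolis acceptance with geometric cooling over a real-valued test function. The step size, cooling rate and temperature are assumed values, and this is a baseline sketch, not DSZ.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    """Minimal simulated annealing (the class of baseline ESA/MCS is
    compared against): Gaussian proposals, Metropolis acceptance,
    geometric cooling, minimizing f over a real vector. Illustrative only."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]   # local perturbation
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-(fc - fx) / t), which shrinks as t cools.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= 0.9995                                      # geometric cooling
    return best, fbest
```

On a smooth unconstrained test case such as the sphere function, the loop reliably approaches the global minimum, which is the kind of benchmark comparison the paper reports.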
Abstract:
The synthesizer has come a long way since Wendy Carlos' 'Switched-On Bach'. Unfortunately many would not realise it. Synthesizers are in most of the popular and commercial music we hear, and their development has followed the rapid development of computing technology, allowing significant performance leaps every five years. In the last 10 years or so, the physical interface of synthesizers has changed little even while the sound-generating hardware has raced ahead. The stabilisation of gestural controllers, particularly keyboard-based controllers, has enabled the synthesizer to establish itself as an expressive instrument, and one worthy of the hours of practice required on any instrument to reach a high level of proficiency. It is now time for the instrumental study of the synthesizer to be taken seriously by music educators across Australia, and I hope, through this paper, to shed some light on the path forward.