514 results for uses


Relevance: 10.00%

Abstract:

Some of my most powerful spiritual experiences have come from the splendorous and sublime hymns performed by choir and church organ at the traditional Anglican church I have attended since I was very young. Later in life, my pursuit of an education in engineering led me to move to Australia, where I regularly attended a contemporary evangelical church and subsequently became a music director in that faith community. This environmental and cultural shift altered my perception and musical experience of Christian music and led me to enquire into the relationship between Christian liturgy and church music. Throughout history, church musicians and composers have synthesised the theological, congregational, cultural and musical aspects of church liturgy. Many great composers took into account the conditions surrounding sacred composition and arrangement in order to enhance the experience of religious ecstasy; they sought resonances with Christian values and beliefs that would draw congregational participation into the praising and glorifying of God. As a music director in an evangelical church, I have come to share this aspiration. I hope to identify and define the qualities of these successful resonances and apply them to my own practice.

Introduction and Structure of the Thesis

In this study I examine four purposively selected excerpts of Christian church vocal music, combining theomusicological and semiotic analysis to help identify guidelines that might be useful in my practice as a church music director. The four excerpts were selected for their sustained musical and theological impact over time and their ability to evoke ecstatic responses from congregations. The thesis documents a personal journey through musical analysis, drawing on ethnomusicological, theological and semiotic tools that lead to a preliminary framework and principles which can then be applied to the identified qualities of resonance in church music today. The thesis comprises four parts. Part 1 presents a literature study on the relationship between sacred music, the effects of religious ecstasy and the Christian church; multiple lenses on this phenomenon are drawn from the viewpoints of prominent Western church historians, Biblical theologians and philosophers. The literature study continues in Part 2, where the role of embodiment is examined from the current perspective of cognitive learning environments. This offers a platform for critical reflection on two distinctive musical liturgical systems that have treated the notion of embodied understanding differently amidst a shifting church paradigm, allowing an in-depth theological and philosophical understanding of the liturgical conditions around sacred music-making as they relate to monistic and dualistic views of body and mind. Part 3 undertakes a theomusicological methodology using creative case studies of the four purposively selected spiritual pieces. A semiotic study focuses on sections of the sacred vocal works that express the notions of praise and glorification, combined with an analysis of theological perspectives on religious ecstasy and particular spiritual themes.
Part 4 presents the critiques and findings gathered from the study, which incorporate theoretical and technological means to analyse the purposively selected musical artefacts, particularly their sonic narratives expressing the notions of 'Praise' and 'Glory'. The musical findings are further discussed in relation to the notion of resonance, and a conceptual framework for the role of the contemporary music director is then proposed. The musical and Christian terminology used in the thesis is explained in the glossary, and the appendices include tables illustrating the musical findings, the conducted surveys, written musical analyses and audio examples of selected sacred pieces available on the enclosed compact disc.

Relevance: 10.00%

Abstract:

The use of the stable isotope ratios ¹⁸O and ²H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and their relation to various meteoric water lines (MWLs), and on plots of either ratio against parameters such as Cl or EC. An extension of interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of isoscapes, usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying spatial relationships and allowing interpolation between data points, i.e. borehole screened zones where groundwater enters. The relative variations in the ¹⁸O and ²H values are similar in these ambient-temperature systems; however, ²H better reflects hydrological processes, whereas ¹⁸O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, stream recharge, and evaporation of near-surface and returned irrigation water. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a Great Artesian Basin (GAB) signature). Variations in the source of recharging water at a catchment scale can be displayed. Interpolation between bores is not always possible, depending on bore numbers and spacing and on the elongate configuration of the alluvium; in these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for ¹⁸O and ²H, with colour coding for isotope values.
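The interpolation step between screened intervals can be approximated outside the package. The sketch below is a minimal stand-in, not the GVS implementation: the borehole coordinates, screen elevations and ²H values are synthetic, and scipy's linear gridding is assumed in place of whatever interpolator GVS uses.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical borehole-screen samples: easting, northing (m), screen
# elevation z (m), and a delta-2H value (permil) measured at each screen.
rng = np.random.default_rng(0)
xyz = rng.uniform([0, 0, -60], [5000, 2000, -5], size=(40, 3))
d2h = -30 + 0.002 * xyz[:, 0] + 0.1 * xyz[:, 2] + rng.normal(0, 0.5, 40)

# Regular 3-D grid spanning the aquifer volume
gx, gy, gz = np.mgrid[0:5000:50j, 0:2000:20j, -60:-5:12j]

# Linear interpolation inside the convex hull of the data points; cells
# beyond the data (cf. the disc display used for sparse bores) stay NaN.
vol = griddata(xyz, d2h, (gx, gy, gz), method="linear")
print(np.nanmin(vol), np.nanmax(vol))
```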

Relevance: 10.00%

Abstract:

Aim. Our aim in this paper is to explain a methodological/methods package devised to incorporate situational and social world mapping with frame analysis, based on a grounded theory study of Australian rural nurses' experiences of mentoring. Background. Situational analysis, as conceived by Adele Clarke, shifts the research methodology of grounded theory from a postpositivist paradigm to a postmodern one. Clarke uses three types of maps during this process: situational, social world and positional, in combination with discourse analysis. Method. During our grounded theory study, the process of concurrent interview data generation and analysis incorporated situational and social world mapping techniques. One outcome was our increased awareness of how outside actors influenced participants in their constructions of mentoring. In our attempts to use Clarke's methodological package, however, it became apparent that our constructivist beliefs about human agency could not be reconciled with the postmodern project of discourse analysis. We then turned to the literature on symbolic interactionism and adopted frame analysis as a method to examine the literature on rural nursing and mentoring as a secondary form of data. Findings. While we found situational and social world mapping very useful, we were less successful in using positional maps. In retrospect, we would argue that collective action framing provides an alternative for analysing such positions in the literature. This is particularly so for researchers who locate themselves within a constructivist paradigm and who are therefore unwilling to reject the notion of human agency and the ability of individuals to shape their world in some way. Conclusion. Our example of using this package of situational and social world mapping with frame analysis is intended to assist other researchers to locate participants more transparently in the social worlds they negotiate in their everyday practice. © 2007 Blackwell Publishing Ltd.

Relevance: 10.00%

Abstract:

More recently, lifespan developmental psychology models of adaptive development have been applied to the workforce to investigate ageing-worker and lifespan issues. The current study uses the Learning and Development Survey (LDS) to investigate employee selection of and engagement with learning and development goals, and the opportunities and constraints for learning at work, in relation to demographics and career goals. It was found that mature age was associated with perceptions of preferential treatment of younger workers with respect to learning and development. Age was also correlated with several career goals. The findings suggest that younger workers' learning and development options are better catered for in the workplace. Mature-aged workers may compensate for unequal learning opportunities at work by studying for an educational qualification or seeking alternative job opportunities. The desire for a higher-level job within the organization, or for an educational qualification, was linked to engagement in learning and development goals at work. It is suggested that an understanding of employee perceptions of workplace goals and activities may be important in designing strategies to retain workers.

Relevance: 10.00%

Abstract:

For almost a decade before Hollywood existed, the French firm Pathé towered over the early film industry, with estimates of its share of all films sold around the world varying between 50% and 70%. Pathé was the first global entertainment company. This paper analyses its rise to market leadership by applying a theoretical framework drawn from the business literature on the causes of industry dominance, which provides insights into how firms acquire and maintain market dominance, here applied to the film industry. Using evidence presented by film historians, the paper argues that Pathé fits the expected theoretical model of a dominant firm: it had a marketing orientation, used an effective quality-based competitive strategy, and possessed the six critical strategic marketing capabilities that business research shows enable the best-performing firms to consistently outperform rivals.

Relevance: 10.00%

Abstract:

With the rapid increase in electrical energy demand, power generation in the form of distributed generation is becoming more important. However, the connection of distributed generators (DGs) to a distribution network or a microgrid can create several protection issues. Protecting these networks with devices based only on current is a challenging task, because both fault current levels and fault current direction change. Isolating a faulted segment from such networks is difficult when converter-interfaced DGs are connected, as these DGs limit their output currents during a fault. Furthermore, if DG sources are intermittent, current-sensing protective relays are difficult to set, since the fault current changes with time depending on the availability of DG sources. System restoration after a fault is another challenging protection issue in a converter-interfaced, DG-connected distribution network or microgrid: usually all DGs are disconnected immediately after a fault, the major reasons being the safety of personnel and equipment, reclosing with DGs connected, and arc extinction. In this thesis, an inverse-time admittance (ITA) relay is proposed to protect a distribution network or microgrid with several converter-interfaced DG connections. The ITA relay is capable of detecting faults and isolating a faulted segment from the network, allowing unfaulted segments to operate in either grid-connected or islanded mode. The relay does not base its tripping decision on the fault current alone; it also uses the voltage at the relay location. The ITA relay can therefore be used effectively in a DG-connected network in which the fault current level is low or changes with time. Different case studies are considered to evaluate the performance of ITA relays in comparison to some existing protection schemes. Relay performance is evaluated in different types of distribution network: radial, the IEEE 34-node test feeder and a mesh network. The results are validated through PSCAD simulations and MATLAB calculations, and several experimental tests carried out in a laboratory test feeder, with the ITA relay implemented in LabVIEW, confirm the numerical results. Furthermore, a novel control strategy based on fold-back current control is proposed for converter-interfaced DGs to overcome the problems associated with system restoration. The control strategy enables self-extinction of the arc if the fault is a temporary arc fault, and also supports self-restoration of the system if DG capacity is sufficient to supply the load. Coordination with reclosers without disconnecting the DGs from the network is discussed; this increases network reliability by reducing customer outages.
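The abstract does not give the ITA tripping characteristic itself, so the following is only a schematic sketch of the idea under stated assumptions: trip time falls off inversely as the measured admittance |I|/|V| exceeds a pickup setting, so a converter-limited fault current can still trip the relay provided the local voltage sags. `y_pickup`, `tms` and `alpha` are hypothetical settings, not values from the thesis.

```python
def ita_trip_time(v_phasor: complex, i_phasor: complex,
                  y_pickup: float, tms: float = 0.1,
                  alpha: float = 2.0) -> float:
    """Illustrative inverse-time admittance curve (not the thesis's exact one).

    Uses both voltage and current: the relay measures admittance |I|/|V|,
    so a fault is still seen when converter-interfaced DGs limit current
    but the voltage at the relay location collapses.
    """
    y_measured = abs(i_phasor) / max(abs(v_phasor), 1e-9)
    m = y_measured / y_pickup
    if m <= 1.0:
        return float("inf")          # below pickup: no trip
    return tms / (m ** alpha - 1.0)  # closer/heavier faults trip faster

# Nominal load vs. a voltage-sagged, current-limited fault (volts, amps)
print(ita_trip_time(v_phasor=230 + 0j, i_phasor=10 + 0j, y_pickup=0.5))  # inf
print(ita_trip_time(v_phasor=40 + 0j, i_phasor=40 + 0j, y_pickup=0.5))   # ~0.03 s
```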

Relevance: 10.00%

Abstract:

This article analyses the legality of Israel's 2007 airstrike on an alleged Syrian nuclear facility at Al-Kibar, an incident that has been largely overlooked by international lawyers to date. The absence of a threat of imminent attack from Syria means Israel's military action was not a lawful exercise of anticipatory self-defence. Yet, despite Israel's clear violation of the prohibition on the use of force, there was remarkably little condemnation from other states, suggesting the possibility of growing international support for the doctrine of pre-emptive self-defence. This article argues that the muted international reaction to Israel's pre-emptive action was the result of political factors and should not be seen as endorsement of the legality of the airstrike. As such, a lack of opinio juris means the Al-Kibar episode cannot be viewed as extending the scope of the customary international law right of self-defence so as to permit the use of force against non-imminent threats. However, two features of this incident, namely Israel's failure to offer any legal justification for its airstrike and the international community's apparent lack of concern over legality, are also evident in other recent uses of force in the 'war on terror' context. These developments may indicate a shift in state practice involving a downgrading of the role of international law in discussions of the use of force, and may signal a declining perception of the legitimacy of the jus ad bellum, at least in cases involving minor uses of force.

Relevance: 10.00%

Abstract:

In this thesis, I advance the understanding of information technology (IT) governance research and corporate governance research by considering the question: how do boards govern IT? The importance of IT to business has increased over the last decade, but there has been little academic research focused on boards and their role in the governance of IT (Van Grembergen, De Haes and Guldentops, 2004). Most research on information technology governance (ITG) has focused on advancing the understanding and measurement of the components of the ITG model (Buckby, Best & Stewart, 2008; Wilkin & Chenhall, 2010), a model recommended by the IT Governance Institute (2003) as best practice for boards to use in governing IT. IT governance is considered the responsibility of the board and is said to form an important subset of an organisation's corporate governance processes (Borth & Bradley, 2008). Boards need to govern IT as a result of organisations' large capital investment in, and high dependency on, IT resources. Van Grembergen, De Haes and Guldentops (2004) and De Haes & Van Grembergen (2009) indicate that corporate governance matters cannot be effectively discharged unless IT is governed properly, and call for further specific research on the role of the board in ITG. Researchers also indicate that the link between corporate governance and IT governance has been neglected (Borth & Bradley, 2008; Musson & Jordan, 2005; Bhattacharjya & Chang, 2008). This thesis addresses this gap in the ITG literature by providing the bridge between the ITG and corporate governance literatures. My thesis uses a critical realist epistemology and a mixed-method approach to gather insights into my research question. In the first phase of my research I developed a survey instrument to assess whether boards consider the components of the ITG model in governing IT. The results of this first study indicated that directors do not conceptualise their role in governing IT using the elements of the ITG model. I therefore moved to focus on whether prominent corporate governance theories might elucidate how boards govern IT. In the second phase of the research, I used a qualitative, inductive, case-based study to assess whether agency, stewardship and resource dependence theories explain how boards govern IT in Australian universities. As the first in-depth study of university IT governance processes, my research contributes to the ITG research field by revealing that Australian university board governance of IT is characterized by a combination of agency theory and stewardship theory behaviours and processes. The study also identified strong links between a university's IT structure and evidence of agency and stewardship theories. This link provides insight into the structures element of the emerging enterprise governance of IT framework (Van Grembergen, De Haes & Guldentops, 2004; De Haes & Van Grembergen, 2009; Van Grembergen & De Haes, 2009b; Ko & Fink, 2010). My research makes an important contribution to governance research by identifying a key link between the corporate and ITG literatures and providing insight into board IT governance processes. The research conducted in my thesis should encourage future researchers to continue to explore the links between corporate and IT governance research.

Relevance: 10.00%

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator of safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, no commercial system is available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification; artificial intelligence for data assessment and evaluation; and path planning, guidance and control techniques to actualise the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned-aircraft forced landing techniques to the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms over 200 simulated forced landings found that Algorithm 2 placed twice as many landings within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results provide a baseline for further refinements to the planning algorithms. A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also led to new methods for testing path traversability, for losing excess altitude, and for forming the actual path so as to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is more than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, without sacrificing performance, and a specific method of supplying the correct turning direction is used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm (a simplified form is sketched after this abstract), performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracy, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes and is easily adaptable to any aircraft type; tracking accuracies obtained with it are comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft can still track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the onboard autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and development stages through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
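The ENG algorithm itself is not reproduced in the abstract. The sketch below shows only the published Park, Deyst and How (2007) law it builds on, with one plausible wind accommodation: commanding lateral acceleration from the ground-velocity vector rather than airspeed. The function name and example numbers are illustrative, not from the thesis.

```python
import math

def lateral_accel_cmd(pos, vel_ground, ref_point):
    """Park-style nonlinear lateral guidance toward a point on the path.

    pos, ref_point: (x, y) positions; vel_ground: (vx, vy) over the ground,
    which is one simple way of folding wind into the logic (an assumption;
    the ENG algorithm's actual wind handling is richer than this).
    """
    lx, ly = ref_point[0] - pos[0], ref_point[1] - pos[1]
    l1 = max(math.hypot(lx, ly), 1e-9)   # distance to reference point
    v = math.hypot(*vel_ground)
    # signed angle eta between the velocity vector and the line of sight
    eta = math.atan2(vel_ground[0] * ly - vel_ground[1] * lx,
                     vel_ground[0] * lx + vel_ground[1] * ly)
    return 2.0 * v * v / l1 * math.sin(eta)   # commanded lateral acceleration

# Aircraft at the origin gliding east at 20 m/s; reference point ahead-left
print(lateral_accel_cmd((0.0, 0.0), (20.0, 0.0), (90.0, 30.0)))  # ~2.7 m/s^2
```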

Relevance: 10.00%

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0, 1) (which has a natural interpretation in terms of a bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains; continuous state, observation and control spaces; multiple agents; higher-order derivatives; and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. © 2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
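The update rule behind GPOMDP is compact enough to sketch. Below is a minimal rendering of the estimator as described in the abstract: an eligibility trace discounted by β ∈ [0, 1) accumulates the policy score, and the gradient estimate is a running average of reward times trace, using only two parameter-sized vectors of storage. The policy and environment callables are placeholders, not part of the paper.

```python
import numpy as np

def gpomdp(policy_sample, grad_log_policy, env_step, theta, obs, beta, T):
    """Biased estimate of the average-reward gradient (GPOMDP).

    policy_sample(theta, obs) -> action       # a ~ mu(.|theta, obs)
    grad_log_policy(theta, obs, a) -> array   # score function
    env_step(a) -> (next_obs, reward)         # hidden-state POMDP step
    beta near 1 lowers the bias of the estimate but raises its variance.
    """
    z = np.zeros_like(theta)        # eligibility trace
    delta = np.zeros_like(theta)    # running gradient estimate
    for t in range(T):
        a = policy_sample(theta, obs)
        score = grad_log_policy(theta, obs, a)
        obs, r = env_step(a)        # reward received after taking a
        z = beta * z + score
        delta += (r * z - delta) / (t + 1)
    return delta
```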

Relevance: 10.00%

Abstract:

A number of Game Strategies (GS) have been developed over past decades. They have been used in economics, engineering, computer science and biology owing to their efficiency in solving design optimization problems. In addition, research in multi-objective (MO) and multidisciplinary design optimization (MDO) has focused on developing robust and efficient optimization methods that produce a set of high-quality solutions at low computational cost. In this paper, two optimization techniques are considered: the first uses multi-fidelity hierarchical Pareto optimality; the second combines two Game Strategies, Nash equilibrium and Pareto optimality. The paper shows how Game Strategies can be hybridised and coupled to Multi-Objective Evolutionary Algorithms (MOEAs) to accelerate convergence and to produce a set of high-quality solutions. Numerical results from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid-Game Strategies are clearly demonstrated.
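Of the two ingredients, Pareto optimality is the easier to make concrete: in any MOEA coupling, candidate designs are filtered by dominance. A minimal sketch of that filter follows (minimisation assumed in every objective; the example objective pairs are illustrative, not from the paper).

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives to minimise, e.g. hypothetical (drag, weight) pairs
print(pareto_front([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]))
# -> [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]; (3.0, 4.0) is dominated by (2.0, 3.0)
```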

Relevance: 10.00%

Abstract:

This study investigates the application of two advanced optimization methods to the shape design of an active flow control (AFC) device, and compares their optimization efficiency in terms of computational cost and design quality. The first method uses a hierarchical asynchronous parallel multi-objective evolutionary algorithm; the second uses an evolutionary algorithm hybridised with Nash-game strategies (Hybrid-Game). Both optimization methods are based on a canonical evolution strategy and incorporate the concepts of parallel computing and asynchronous evaluation. One type of AFC device, the shock control bump (SCB), is considered and applied to a natural laminar flow (NLF) aerofoil. The SCB is used to decelerate the supersonic flow on the suction/pressure side of a transonic aerofoil, delaying shock occurrence. This active flow technique reduces total drag at transonic speeds, which is of special interest to commercial aircraft. Numerical results show that the Hybrid-Game approach helps an EA accelerate the optimization process. From a practical point of view, applying an SCB on both the suction and pressure sides significantly reduces transonic total drag and improves the lift-to-drag ratio (L/D) compared to the baseline design.

Relevance: 10.00%

Abstract:

The aim of this work is to develop a Demand-Side Response (DSR) model that helps electricity end-users engage in mitigating peak demands on the electricity network in eastern and southern Australia. The proposed model comprises a programmable internet relay, a router and solid-state switches, together with suitable software to control electricity demand at the user's premises. The software, delivered on an appropriate multimedia tool (CD-ROM), curtails or shifts electric loads to the most appropriate time of day according to the implemented economic model, which is designed to maximise financial benefit to electricity consumers. The model also aims to spread the national electrical load evenly throughout the year, giving the best economic performance for electricity generation, transmission and distribution. It is applicable in the region managed by the Australian Energy Market Operator (AEMO), covering the eastern and southern Australian states and Tasmania.
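The economic core of such a model, deciding when a deferrable load should run, reduces to minimising cost over a daily price curve. The sketch below is an illustrative stand-in for the thesis's economic model, assuming half-hourly prices are available and the relay can start the load at any slot; the prices and load are hypothetical.

```python
def cheapest_start(prices, duration_slots):
    """Start slot minimising energy cost for a load that runs a fixed
    number of consecutive slots (prices in $/kWh per half-hour slot)."""
    n = len(prices) - duration_slots + 1
    costs = [sum(prices[s:s + duration_slots]) for s in range(n)]
    return min(range(n), key=costs.__getitem__)

# 48 half-hour slots: cheap overnight, evening peak priced highest
prices = [0.10] * 14 + [0.25] * 22 + [0.45] * 8 + [0.15] * 4
slot = cheapest_start(prices, duration_slots=4)  # 2-hour load, e.g. pool pump
print(f"Switch on at slot {slot} ({slot / 2:.1f} h after midnight)")
```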

Relevance: 10.00%

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n-1, ≤d-1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
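The graph density being bounded here is that of the one-inclusion graph: vertices are the concepts restricted to an n-point sample, and edges join concepts that disagree on exactly one point. For a finite class the quantity is directly computable; the sketch below (illustrative only, not the paper's shifting machinery) makes it concrete.

```python
from itertools import combinations

def one_inclusion_density(concepts):
    """Edge density |E|/|V| of the one-inclusion graph.

    concepts: distinct equal-length 0/1 tuples (a class restricted to an
    n-point sample); an edge joins concepts differing on exactly one point.
    """
    cs = list(set(concepts))
    edges = sum(1 for a, b in combinations(cs, 2)
                if sum(x != y for x, y in zip(a, b)) == 1)
    return edges / len(cs)

# A maximum class of VC-dimension 1 on 3 points (a tree, cf. the d = 1
# characterization above): density 3/4 < d = 1.
print(one_inclusion_density([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]))
```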

Relevance: 10.00%

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n·choose(n-1, ≤d-1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.