151 results for Multiple input and multiple output autonomous flight systems


Relevance: 100.00%

Abstract:

This paper presents a nonlinear gust-attenuation controller based on constrained neural-network (NN) theory. The controller aims to achieve sufficient stability and handling quality for a fixed-wing unmanned aerial system (UAS) in a gusty environment when the control inputs are subject to constraints. Input constraints emulate situations where aircraft actuators fail, requiring the aircraft to be operated with fail-safe capability. The proposed controller provides gust attenuation and stabilizes the aircraft dynamics in a gusty environment. The flight controller is obtained by solving the Hamilton-Jacobi-Isaacs (HJI) equations with a policy iteration (PI) approach. Performance of the controller is evaluated using a high-fidelity six degree-of-freedom Shadow UAS model. Simulations show that the controller delivers substantial performance improvements in a gusty environment, especially in angle-of-attack (AOA), pitch and pitch rate. Comparative studies against proportional-integral-derivative (PID) controllers confirm the efficiency of the controller and verify its suitability for integration into the design of flight control systems for the forced landing of UASs.
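The paper's NN-based HJI solver is not reproduced here, but the policy-iteration idea can be sketched on its linear-quadratic special case, where each iteration reduces to one Lyapunov solve followed by gain updates for the control player and the (gust) disturbance player. A minimal sketch, with placeholder matrices rather than the Shadow UAS model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Policy iteration for the linear-quadratic zero-sum game (the LQ special
# case of the HJI equation). A, B, D, Q, R and gamma are illustrative
# placeholders, not the Shadow UAS model; gamma must exceed the system's
# attenuation bound for the iteration to converge.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # nominal dynamics
B = np.array([[0.0], [1.0]])               # control input (actuator)
D = np.array([[0.0], [0.3]])               # disturbance input (gust)
Q = np.eye(2); R = np.eye(1); gamma = 5.0  # cost weights, attenuation level

K = np.array([[1.0, 1.0]])                 # initial stabilising control gain
L = np.zeros((1, 2))                       # initial disturbance policy

for _ in range(50):
    Acl = A - B @ K + D @ L
    # Policy evaluation: solve the Lyapunov equation for the value matrix P.
    rhs = -(Q + K.T @ R @ K - gamma**2 * L.T @ L)
    P = solve_continuous_lyapunov(Acl.T, rhs)
    # Policy improvement for both players.
    K_new = np.linalg.solve(R, B.T @ P)
    L_new = (1.0 / gamma**2) * D.T @ P
    if np.allclose(K, K_new) and np.allclose(L, L_new):
        break
    K, L = K_new, L_new

print("control gain K =", K)
```

For large gamma the disturbance player vanishes and the loop reduces to Kleinman's classical policy iteration for the LQR problem.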

Relevance: 100.00%

Abstract:

Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. They are well suited to encrypting these types of traffic, as they can do so quickly and securely and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, its output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected for the final portfolio of eSTREAM, a multi-year European project run by the ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update function and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations; this has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analyses with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function, and the size of the registers used in the construction of the system of equations, affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence because the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined.
We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences these generators produce, and that the effect differs for the two constructions. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
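For reference, Trivium's keystream generator is compact enough to sketch in full. The code below follows the published eSTREAM specification (indices shifted to 0-based): a 288-bit state split across three shift registers, a linear output function, and AND gates supplying the nonlinear state update.

```python
def trivium_keystream(key_bits, iv_bits, n_bits):
    """Bit-level sketch of Trivium per the eSTREAM specification:
    nonlinear state-update function, linear output function."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    s = [0] * 288
    s[0:80] = key_bits                  # register A: key
    s[93:173] = iv_bits                 # register B: IV
    s[285] = s[286] = s[287] = 1        # register C: fixed ones

    def step():
        t1 = s[65] ^ s[92]              # linear taps feeding the output
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        z = t1 ^ t2 ^ t3                # linear output function
        t1 ^= (s[90] & s[91]) ^ s[170]  # nonlinear (AND) feedback terms
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        s[93:177] = [t1] + s[93:176]    # shift register B
        s[177:288] = [t2] + s[177:287]  # shift register C
        s[0:93] = [t3] + s[0:92]        # shift register A
        return z

    for _ in range(4 * 288):            # initialisation rounds, output discarded
        step()
    return [step() for _ in range(n_bits)]

ks = trivium_keystream([0] * 80, [0] * 80, 64)
print("".join(map(str, ks)))
```

The sliding property examined in the thesis concerns this initialisation phase: related key/IV pairs can produce internal states that are shifts of one another, yielding slid pairs.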

Relevance: 100.00%

Abstract:

Police in-vehicle systems include a visual-output mobile data terminal (MDT) with manual input via touch screen and keyboard. This study investigated the potential for voice-based input and output modalities to reduce the subjective workload of police officers while driving. Nineteen experienced drivers of police vehicles (one female) from New South Wales (NSW) Police completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence-number search using an emulated MDT. Three different interface output-input modalities were examined: Visual-Manual, Visual-Voice, and Audio-Voice. Following each drive, participants rated their subjective workload using the NASA Raw Task Load Index and answered questions on acceptability. A questionnaire on interface preferences was completed by participants at the end of their session. Engaging in secondary tasks while driving significantly increased subjective workload. The Visual-Manual interface resulted in higher time demand than either of the voice-based interfaces and greater physical demand than the Audio-Voice interface. The Visual-Voice and Audio-Voice interfaces were rated easier to use and more useful than the Visual-Manual interface, although they did not differ significantly from each other. These findings largely echoed those derived from the analysis of the objective driving performance data. It is acknowledged that, under standard procedures, officers should not drive while performing tasks concurrently with certain in-vehicle policing systems; in practice, however, this sometimes occurs. Taking action now to develop voice-based technology for police in-vehicle systems could help realise safer and more efficient vehicle-based police work.

Relevance: 100.00%

Abstract:

Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random field (CRF) classifiers informed by linguistic and lexical features, as well as features extracted by pattern-matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive, as they require minimal intervention to guarantee high effectiveness.

Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets vary in the type of health records, the source of the data, and their quality, with one of the datasets containing optical character recognition errors.

Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training.

Conclusion: The findings show that Anonym is comparable to the best approach from the 2006 i2b2 shared task. Anonym is easy to retrain with new datasets; once retrained, the system is robust to variations in training size, data type and quality, given sufficient training data.
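Anonym's implementation is not shown here, but the general approach the paper describes (a token-level CRF over linguistic, lexical and pattern-matching features) can be sketched with the open-source sklearn-crfsuite package. The features, tags and toy training sentence below are illustrative assumptions, not Anonym's actual configuration:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    """Lexical and pattern-matching features of the kind the paper describes;
    the exact feature set of Anonym is an assumption here."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_digit": tok.isdigit(),
        "looks_like_date": any(c.isdigit() for c in tok) and "/" in tok,
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Toy training data: one sentence with personal health identifier labels.
sent = ["Patient", "John", "Smith", "seen", "on", "12/03/2006", "."]
labels = ["O", "B-NAME", "I-NAME", "O", "O", "B-DATE", "O"]

X = [[token_features(sent, i) for i in range(len(sent))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, y)
print(crf.predict(X)[0])  # tags to be removed or replaced during de-identification
```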

Relevance: 100.00%

Abstract:

Motion control systems have a significant impact on the performance of ships and marine structures, allowing them to perform tasks in severe sea states and over long periods of time. Ships are designed to operate with adequate reliability and economy, and to achieve this it is essential to control their motion. For each type of ship and operation performed (transit, landing a helicopter, fishing, deploying and recovering loads, etc.), there are not only desired motion settings but also limits on the acceptable (undesired) motion induced by the environment. The task of a ship motion control system is therefore to act on the ship so that it follows the desired motion as closely as possible. This book provides an introduction to the field of ship motion control by studying the control system designs for course-keeping autopilots with rudder roll stabilisation and for integrated rudder-fin roll stabilisation. These particular designs provide a good overview of the difficulties encountered by designers of ship motion control systems and therefore serve well as an example-driven introduction to the field. The idea of combining the control design of autopilots with that of fin roll stabilisers, and the idea of using rudder-induced roll motion as the sole source of roll stabilisation, seem to have emerged in the late 1960s. Since then, these control designs have been the subject of continuous and ongoing research. This ongoing interest is a consequence of the significant bearing the control strategy has on performance, and of the issues associated with control system design. The challenge of these designs lies in devising a control strategy that addresses the following issues: underactuation, disturbance rejection with a non-minimum-phase system, input and output constraints, model uncertainty, and large unmeasured stochastic disturbances. To date, the majority of the work reported in the literature has focused strongly on some of these design issues, while the remaining issues have been addressed using ad hoc approaches. This provides an additional motivation for revisiting these control designs and looking at the benefits of applying a contemporary design framework that can potentially address the majority of the design issues.

Relevance: 100.00%

Abstract:

In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle, and show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty arising from output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
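As a concrete illustration of CE under constraints, the following sketch pairs a steady-state observer with the deterministic saturated LQ law applied to the state estimate. For a scalar input and a quadratic one-step cost, clipping the unconstrained law is exactly the constrained deterministic optimum, which is what CE prescribes for N = 1. Gains and noise levels are illustrative placeholders, not an example from the book:

```python
import numpy as np

# Certainty equivalence for a constrained single-input system: estimate the
# state with a steady-state Kalman-style filter, then apply the deterministic
# saturated LQ law to the estimate as if it were the true state.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.1, 2.5]])       # deterministic LQ gain (assumed precomputed)
Lk = np.array([[0.4], [0.8]])    # steady-state observer gain (assumed precomputed)
u_max = 1.0                      # input constraint

rng = np.random.default_rng(0)
x = np.array([[2.0], [0.0]])     # true state (unknown to the controller)
x_hat = np.zeros((2, 1))         # estimator state

for k in range(50):
    y = C @ x + 0.05 * rng.standard_normal((1, 1))   # noisy measurement
    x_hat = x_hat + Lk @ (y - C @ x_hat)             # measurement update
    # CE: use x_hat as if it were the true state; for a scalar input and a
    # quadratic cost, clipping the LQ law is the constrained one-step optimum.
    u = np.clip(-K @ x_hat, -u_max, u_max)
    w = 0.01 * rng.standard_normal((2, 1))           # process disturbance
    x = A @ x + B @ u + w                            # true plant
    x_hat = A @ x_hat + B @ u                        # time update

print("final state estimate:", x_hat.ravel())
```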

Relevance: 100.00%

Abstract:

Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc...
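CTCombine's source is not reproduced here, but its two core operations can be sketched with standard tools: rotating the CT volume to match a non-zero beam angle, then mapping CT numbers (Hounsfield units) to mass densities with a piecewise-linear calibration ramp. The calibration points below are typical textbook values, not CTCombine's:

```python
import numpy as np
from scipy import ndimage

# Sketch of the two operations described above: rotating a CT volume for a
# non-zero beam angle and converting CT numbers to mass densities. The
# HU/density calibration points are generic illustrative values.
hu_points      = [-1000.0, 0.0, 1000.0, 3000.0]    # air, water, bone-ish, metal
density_points = [0.001, 1.0, 1.6, 2.8]            # g/cm^3

def rotate_and_convert(ct_volume_hu, gantry_angle_deg):
    # Rotate within each axial slice; the [slice, row, column] storage order
    # is an assumption about the volume layout.
    rotated = ndimage.rotate(ct_volume_hu, gantry_angle_deg,
                             axes=(1, 2), reshape=False, order=1,
                             mode="constant", cval=-1000.0)  # pad with air
    # Piecewise-linear CT-number-to-density conversion.
    return np.interp(rotated, hu_points, density_points)

ct = -1000.0 * np.ones((16, 64, 64))      # toy volume: air...
ct[:, 24:40, 24:40] = 0.0                 # ...with a water block
density = rotate_and_convert(ct, 30.0)
print(density.shape, density.max())
```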

Relevance: 100.00%

Abstract:

Design Science is the process of solving ‘wicked problems’ through designing, developing, instantiating, and evaluating novel solutions (Hevner, March, Park and Ram, 2004). Wicked problems are characterised by agent finitude in combination with problem complexity and normative constraint (Farrell and Hooker, 2013). In Information Systems Design Science, determining that problems are ‘wicked’ differentiates Design Science research from Solutions Engineering (Winter, 2008) and is a necessary part of establishing the relevance of Information Systems Design Science research (Hevner, 2007; Iivari, 2007). Problem complexity arises when many problem components with nested, dependent and co-dependent relationships interact through multiple feedback and feed-forward loops. Farrell and Hooker (2013) state that for wicked problems “it will often be impossible to disentangle the consequences of specific actions from those of other co-occurring interactions”. This paper discusses the application of an Enterprise Information Architecture modelling technique to disentangle this wicked-problem complexity for one case. It proposes that such a modelling technique can be applied to other wicked problems, can lay the foundations for establishing relevance to DSR, can provide solution pathways for artefact development, and can help substantiate the elements required to produce Design Theory.

Relevance: 100.00%

Abstract:

A novel gray-box neural network model (GBNNM), comprising a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and the dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits the system dynamics and separately models the system nonlinearities. The model corresponds well with the object system and is easy to build. The GBNNM is embedded online as a reference model to obtain the quantitative residual between the object system output and the GBNNM output. This residual accurately indicates the fault offset value, making the scheme suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer (ESO) using the same NNs from the GBNNM (IESONN) is proposed, avoiding the need for knowledge of the ESO nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RW) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss-of-effect (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority.
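The gray-box idea, keeping the known linear dynamics explicit while a small network models only the unknown nonlinearity, can be sketched as follows. This is a minimal PyTorch illustration under assumed dynamics and sizes, not the paper's reaction-wheel model:

```python
import torch
import torch.nn as nn

# Gray-box sketch: the linear state-space part is kept explicitly (the
# inherited dynamics), while an MLP approximates only the nonlinearity.
class GrayBoxModel(nn.Module):
    def __init__(self, n_state=2, n_input=1, dt=0.01):
        super().__init__()
        self.dt = dt
        # Known part: a linear approximation of the plant (matches n_state=2).
        self.A = torch.tensor([[0.0, 1.0], [0.0, -0.5]])
        self.B = torch.tensor([[0.0], [1.0]])
        # Unknown part: the nonlinearity, modelled by a small MLP.
        self.f_nn = nn.Sequential(
            nn.Linear(n_state + n_input, 16), nn.Tanh(), nn.Linear(16, n_state)
        )

    def forward(self, x, u):
        # Forward-Euler integration: inherited dynamics + learned nonlinearity.
        dx = x @ self.A.T + u @ self.B.T + self.f_nn(torch.cat([x, u], dim=-1))
        return x + self.dt * dx

model = GrayBoxModel()
x = torch.zeros(1, 2)
u = torch.ones(1, 1)
x_next = model(x, u)
# The residual between the plant output and the model output would then
# indicate the fault offset, as in the MIFE scheme described above.
print(x_next)
```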

Relevance: 100.00%

Abstract:

This paper provides a three-layered framework to monitor the positioning performance requirements of Real-time Relative Positioning (RRP) systems in Cooperative Intelligent Transport Systems (C-ITS) that support Cooperative Collision Warning (CCW) applications. These applications exploit state data of surrounding vehicles obtained solely from Global Positioning System (GPS) and Dedicated Short-Range Communications (DSRC) units, without using other sensors. The paper argues that GPS/DSRC-based RRP systems need an autonomous monitoring mechanism, since the operation of CCW applications is meant to augment safety on roads, and the advantages of autonomous integrity monitoring are essential and integral to any safety-of-life system. The proposed framework requires the RRP systems to detect or predict the unavailability of their sub-systems and of the integrity monitoring module itself. Where monitoring is available, the systems must account for the effects of data-link delays and breakages of DSRC links, as well as faulty measurement sources in GPS and/or integrated augmentation positioning systems, before the information used for safety warnings/alarms becomes unavailable, unreliable, inaccurate or misleading. Hence, a monitoring framework using a tight integration and correlation approach is proposed for instantaneous reliability assessment of the RRP systems. Ultimately, using the proposed framework, the RRP systems will provide timely alerts to users when the RRP solutions cannot be trusted or used for the intended operation.
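As a toy illustration of the kind of autonomous check the framework calls for, the sketch below flags the relative-positioning solution as unusable when DSRC messages are stale or the reported position uncertainty exceeds what a collision warning can tolerate. Thresholds, field names and message format are illustrative assumptions, not part of the paper's framework:

```python
import time
from dataclasses import dataclass

# Toy autonomous monitor: declare the RRP solution unusable when the DSRC
# message is stale or the reported GPS uncertainty is too large for CCW.
@dataclass
class RemoteVehicleState:
    position_std_m: float    # reported 1-sigma horizontal position uncertainty
    timestamp: float         # DSRC message transmit time (s, shared timebase)

MAX_LINK_AGE_S = 0.3         # tolerated DSRC delay before declaring a breakage
MAX_POSITION_STD_M = 1.5     # accuracy a collision warning can tolerate

def rrp_solution_usable(msg: RemoteVehicleState, now: float) -> bool:
    if now - msg.timestamp > MAX_LINK_AGE_S:
        return False         # data link delayed or broken
    if msg.position_std_m > MAX_POSITION_STD_M:
        return False         # positioning source degraded or faulty
    return True

msg = RemoteVehicleState(position_std_m=0.8, timestamp=time.time() - 0.1)
if not rrp_solution_usable(msg, time.time()):
    print("alert: relative positioning cannot be trusted for the CCW function")
```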

Relevance: 100.00%

Abstract:

Lave and Wenger’s legitimate peripheral participation is an important aspect of online learning environments. It is common for teachers to scaffold varying levels of online participation in Web 2.0 contexts such as online discussion forums and blogs. This study argues that legitimate peripheral participation needs to be redefined in response to students’ decentralised multiple interactions and non-linear engagement in hyperlinked learning environments. The study examines students’ levels of participation in online learning through theories of interactivity, distinguishing between five levels of student participation in the context of a first-year university course delivered via a learning management system. Data were collected through two instruments: i) a questionnaire about students’ perceptions of interactivity in online reflective learning (n = 238), followed by ii) an open discussion on the reasons for the diverse perceptions of interactivity (n = 34). The findings indicate that student participants, other than those who were already active, need high levels of teacher or moderator intervention to enable legitimate peripheral participation to occur in online learning contexts.

Relevance: 100.00%

Abstract:

Background & Research Focus: Managing knowledge for innovation and organisational benefit has been extensively investigated in studies of large firms (Smith, Collins & Clark, 2005; Zucker, et al., 2007), but there is comparatively little research on small- and medium-sized enterprises (SMEs). Some knowledge management research addresses SMEs, but a question remains: where do the potential challenges lie for managing knowledge more effectively within these firms? Effective knowledge management (KM) processes and systems lead to improved performance in pursuing the distinct capabilities that contribute to firm-level innovation (Nassim 2009; Zucker et al. 2007; Verona and Ravasi 2003). Managing internal and external knowledge in a way that links it closely to the innovation process can assist the creation and implementation of new products and services. KM is particularly important in knowledge-intensive firms, where the knowledge requirements are highly specialised, diverse and often emergent. However, the KM processes of small firms, which are often the source of new knowledge and an important element of the value networks of larger companies, have not been closely studied. To address this gap, which is of increasing importance given the growing number of small firms, we need to investigate further the knowledge management processes and the ways in which firms find, capture, apply and integrate knowledge from multiple sources for their innovation process. This study builds on the previous literature, applies existing frameworks, and takes the process and activity view of knowledge management as its point of departure (see, among others, Kraaijenbrink, Wijnhoven & Groen, 2007; Enberg, Lindkvist, & Tell, 2006; Lu, Wang & Mao, 2007). This paper attempts to develop a better understanding of the challenges of knowledge management within the innovation process in small knowledge-oriented firms, and aims to explore knowledge management processes and practices in firms engaged in new product/service development programs. Consistent with the exploratory character of the study, the research question is: how is knowledge integrated, sourced and recombined from internal and external sources for innovation and new product development?

Research Method: The research took an exploratory case study approach and developed a theoretical framework to investigate the knowledge situation of knowledge-intensive firms. Equipped with this conceptual foundation, the research adopted a multiple case study method investigating four diverse Australian knowledge-intensive firms from the IT, biotechnology, nanotechnology and biochemistry industries. The multiple case study method allowed us to document in some depth the knowledge management experience of these firms. Case study data were collected through a review of company published data and semi-structured interviews with managers, using an interview guide to ensure uniform coverage of the research themes. The guide was developed after the framework, following a review of the methodologies and issues covered by similar studies in other countries, and used some questions common to those studies. It was framed to gather data on knowledge management activity within the business, focusing on the identification, acquisition and utilisation of knowledge, while also collecting a range of contextual information. The focus of the case studies was on the use of external and internal knowledge to support knowledge-intensive products and services.

Key Findings: First, a conceptual and strategic knowledge management framework was developed. The knowledge determinants relate to the nature of knowledge, the organisational context, and the mechanisms linking internal and external knowledge. A number of key observations derived from this study demonstrate the challenges of managing knowledge and the importance of KM as a management tool for the innovation process in knowledge-oriented firms. In summary, the findings suggest that the knowledge management process in these firms is largely project-focused, is not embedded within the overall organisational routines, and relies mainly on ad hoc and informal processes. The findings highlight the lack of formal knowledge management processes within the sampled firms, pointing to the need for more specialised KM capabilities. We observed the need for an effective knowledge transfer support system to facilitate knowledge sharing, and particularly to capture and transfer tacit knowledge from one team member to another. Our findings indicate that building effective and adaptive IT systems to manage and share knowledge is one of the biggest challenges for these small firms. There is also little explicit strategy in small knowledge-intensive firms targeted at systematic KM at either the strategic or the operational level. A strategic approach to managing knowledge for innovation, together with leadership and management, is therefore essential to achieving effective KM. In particular, the findings demonstrate that gathering tacit knowledge, internal and external to the organisation, and applying processes to ensure the availability of knowledge to innovation teams, drives down the risks and cost of innovation. KM activities and tools, such as KM systems, environmental scanning, benchmarking, intranets, firm-wide databases and communities of practice, were used to acquire knowledge and make it accessible.

Practical Implications: The case study method used in this study provides practical insight into the knowledge management process within Australian knowledge-intensive firms, and offers useful lessons for other firms seeking to manage knowledge more effectively in the innovation process. The findings should be helpful for small firms searching for a practical method for managing and integrating their specialised knowledge. To address the challenges of knowledge management, this study proposes five practices for managing knowledge more efficiently to improve innovation: (1) knowledge-based firms must be strategic in their knowledge management processes for innovation; (2) leadership and management should encourage varied knowledge management practices; (3) capturing and sharing tacit knowledge is critical and should be actively managed; (4) team knowledge integration practices should be developed; and (5) knowledge management and integration through communication networks and technology systems should be encouraged and strengthened. In sum, the main managerial contribution of the paper is the recognition of knowledge determinants and processes, and their effects on effective knowledge management within the firm.
This may serve as a useful benchmark in the strategic planning of the firm as it utilises new and specialised knowledge.

Relevance: 100.00%

Abstract:

Hybrid powerplants combining internal combustion engines and electric motor prime movers have been extensively developed for land- and marine-based transport systems. The use of such powerplants in airborne applications has historically been impractical due to energy and power density constraints, but improvements in battery and electric motor technology now make aircraft hybrid powerplants feasible. This paper presents a technique for determining the feasibility and mechanical effectiveness of powerplant hybridisation. In this work, a prototype aircraft hybrid powerplant was designed, constructed and tested. It is shown that an additional 35% power can be supplied by the hybrid system with an overall weight penalty of 5%, for a given unmanned aerial system. A flight dynamics model was developed using the AeroSim Blockset in MATLAB Simulink. The results show that climb rates can be improved by 56% and endurance increased by 13% when using the hybrid powerplant concept.
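A back-of-the-envelope check shows why a 35% power gain for a 5% weight penalty pays off. Only those two ratios come from the abstract; the baseline aircraft numbers below are invented for illustration:

```python
# Sanity check of the hybridisation figures quoted above: +35% peak power for
# a +5% take-off weight penalty. Baseline numbers are illustrative assumptions.
base_power_kW = 2.0      # ICE-only peak shaft power (assumed)
base_mass_kg = 14.0      # ICE-only take-off mass (assumed)

hybrid_power_kW = base_power_kW * 1.35   # +35% from the electric assist
hybrid_mass_kg = base_mass_kg * 1.05     # +5% for motor, battery, electronics

p2w_base = base_power_kW / base_mass_kg
p2w_hybrid = hybrid_power_kW / hybrid_mass_kg
print(f"power-to-weight: {p2w_base:.3f} -> {p2w_hybrid:.3f} kW/kg "
      f"({100 * (p2w_hybrid / p2w_base - 1):.0f}% improvement)")
# Roughly 29% more power per kilogram, which is what drives the improved climb.
```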

Relevance: 100.00%

Abstract:

In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular, as it can keep track of the health status of structures throughout their service lives. However, implementing such a system is not always feasible due to the ongoing conflict between budget constraints and the need for sophisticated systems to monitor real-world structures under demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex, with a special focus on the main building. First, the selection of sensor type and sensor positions is scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. To economically tackle the sparse measurement problem, a cost-optimized Ethernet-based peripheral DAQ model is adopted to form the system skeleton. A combination of a high-resolution timing coordination method based on the TCP/IP command communication medium and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental-numerical verifications show that the proposed DAQ system in general, and the data synchronization solution in particular, work well and can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to use the developed monitoring system for long-term continuous structural health evaluation, and to use the instrumented building as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization-related issues.
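The timing coordination can be illustrated with a simple request/response offset estimate over TCP, in the spirit of Cristian's algorithm. The actual protocol spoken by the DAQ units is not described in the abstract, so the message format and server below are assumptions:

```python
import socket
import struct
import time

# Request/response clock-offset estimation over TCP. The DAQ units' real
# protocol is not described in the abstract; the "TIME" request and the
# 8-byte double reply used here are assumptions for illustration.
def estimate_offset(host: str, port: int) -> float:
    """Estimate how far the remote unit's clock is ahead of ours (seconds)."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        t0 = time.time()                # local time when the request is sent
        sock.sendall(b"TIME")
        data = sock.recv(8)             # remote clock reading as a double
        t2 = time.time()                # local time when the reply arrives
    (t1,) = struct.unpack("!d", data)
    # Assuming a symmetric network delay, the remote reading corresponds to
    # the midpoint of the round trip.
    return t1 - (t0 + t2) / 2.0

# A periodic resynchronization loop, as in the paper, would re-estimate the
# offset and realign any unit whose drift exceeds a tolerance, e.g.:
#   if abs(estimate_offset("daq-unit-1.local", 5025)) > 1e-3:
#       resynchronize_unit()
```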

Relevance: 100.00%

Abstract:

The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction-diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method of [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
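The contour-integration idea can be sketched in a few lines: f(A)b is written as a Cauchy integral and discretised by quadrature, so each quadrature point costs one shifted linear solve (the solves the paper accelerates on the GPU with preconditioning). For clarity this sketch uses a plain circular contour and the trapezoidal rule, not the conformally mapped contours of Hale, Higham and Trefethen:

```python
import numpy as np

# Approximate f(A) @ b via (1/(2*pi*i)) * contour integral of f(z)(zI-A)^-1 b,
# using a circular contour and trapezoidal quadrature. The circle must
# enclose the spectrum of A, and f must be analytic inside the contour.
def matfun_times_vector(f, A, b, n_quad=32, center=0.0, radius=1.0):
    n = A.shape[0]
    result = np.zeros(n, dtype=complex)
    for k in range(n_quad):
        theta = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * theta)
        # One shifted linear solve per quadrature point.
        result += f(z) * (radius * np.exp(1j * theta)) * np.linalg.solve(
            z * np.eye(n) - A, b.astype(complex))
    # Real part is exact for real A, b and a contour symmetric about the axis.
    return (result / n_quad).real

A = np.array([[-0.5, 0.2], [0.1, -0.3]])
b = np.array([1.0, 2.0])
approx = matfun_times_vector(np.exp, A, b, center=-0.4, radius=0.5)
print(approx)   # compare with scipy.linalg.expm(A) @ b
```

Because the integrand is analytic on the contour, the trapezoidal rule converges exponentially in the number of quadrature points, which is why a pre-determined, modest number of points suffices.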