978 results for Dynamic Capability
Abstract:
We propose a multi-layer spectrum sensing optimisation algorithm that maximises sensing efficiency by computing the optimal sensing and transmission durations for a fast-changing, dynamic primary user. Dynamic primary user traffic is modelled as a random process in which the primary user changes state during both the sensing period and the transmission period, reflecting a more realistic scenario. Furthermore, we formulate joint constraints that correctly capture the interference to the primary user and the lost opportunity of the secondary user during the transmission period. Finally, we implement a novel duty-cycle-based detector, optimised with respect to primary user traffic, to accurately detect primary user activity during the sensing period. Simulation results show that, unlike currently used detection models, the proposed algorithm can jointly optimise the sensing and transmission durations to simultaneously satisfy the optimisation constraints for the considered primary user traffic.
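As a rough illustration of the joint duration trade-off described above, the following sketch grid-searches candidate sensing and transmission durations under toy interference and detection constraints. The two-state traffic model, the detector curve and all numerical values are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Hypothetical two-state PU traffic model: exponential busy/idle holding times.
MEAN_IDLE, MEAN_BUSY = 0.05, 0.02   # seconds (illustrative values)

def detection_prob(t_s):
    """Toy stand-in for the duty-cycle detector's detection probability,
    which improves with a longer sensing duration t_s."""
    return 1.0 - np.exp(-200.0 * t_s)

def sensing_efficiency(t_s, t_t):
    """Fraction of the frame usable for secondary transmission."""
    return t_t / (t_s + t_t)

def pu_interference(t_t):
    """Probability the PU returns during the transmission period t_t
    (the PU can change state mid-frame, as in the paper's traffic model)."""
    return 1.0 - np.exp(-t_t / MEAN_IDLE)

best = None
for t_s in np.linspace(0.001, 0.02, 40):       # candidate sensing durations
    for t_t in np.linspace(0.005, 0.1, 40):    # candidate transmission durations
        # Joint constraints: cap PU interference and require reliable detection.
        if pu_interference(t_t) > 0.1 or detection_prob(t_s) < 0.9:
            continue
        eff = sensing_efficiency(t_s, t_t)
        if best is None or eff > best[0]:
            best = (eff, t_s, t_t)

print(f"efficiency={best[0]:.3f}, t_s={best[1]*1e3:.2f} ms, t_t={best[2]*1e3:.2f} ms")
```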
Abstract:
The overarching aim of this thesis was to investigate how processes of perception and action emerge under changing informational constraints during performance of multi-articular interceptive actions. Interceptive actions provide unique opportunities to study processes of perception and action in dynamic performance environments. The movement model used to exemplify the functionally coupled relationship between perception and action, from an ecological dynamics perspective, was cricket batting. Ecological dynamics conceptualises the human body as a complex system composed of many interacting sub-systems, and of perceptual and motor system degrees of freedom, from which patterns of behaviour emerge under changing task constraints during performance. The series of studies reported in the chapters of this doctoral thesis contributed to the understanding of human behaviour by providing evidence of key properties of complex systems in human movement systems, including self-organisation under constraints and meta-stability. Specifically, the studies: (i) demonstrated how movement organisation (action) and visual strategies (perception) of dynamic human behaviour are constrained by changing ecological (especially informational) task constraints; (ii) provided evidence for the importance of representative design in experiments on perception and action; and (iii) provided a principled theoretical framework to guide learning design in the acquisition of skill in interceptive actions such as cricket batting.
Abstract:
In the modern built environment, building construction and demolition consume a large amount of energy and emit greenhouse gases, owing to widely used conventional construction materials such as reinforced and composite concrete. These materials consume large quantities of natural resources and possess high embodied energy, and further energy is required to recycle or reuse them at the cessation of use. It is therefore very important to use recyclable or reusable new materials in building construction, to conserve natural resources and reduce the energy and emissions associated with conventional materials. Advancements in materials technology have resulted in the introduction of new composite and hybrid materials in infrastructure construction as alternatives to conventional materials. This research project developed a lightweight, prefabricable Hybrid Composite Floor Plate System (HCFPS) as an alternative to conventional floor systems, with desirable properties: it is easy to construct, economical, demountable, recyclable and reusable. The component materials of HCFPS comprise a central Polyurethane (PU) core, outer layers of Glass-fiber Reinforced Cement (GRC) and steel laminates at the tensile regions. This research explored the structural adequacy and performance characteristics of hybridised GRC, PU and steel laminate for the development of HCFPS. Performance characteristics of HCFPS were investigated using Finite Element (FE) simulations supported by experimental testing. Parametric studies were conducted to develop the HCFPS to satisfy static performance requirements, with sectional configuration, span, loading and material properties as the parameters. The dynamic response of HCFPS floors was investigated through parametric studies with material properties, walking frequency and damping as the parameters. The findings show that HCFPS can be used in office and residential buildings and provides acceptable static and dynamic performance. Design guidelines were developed for this new floor system. HCFPS is easy to construct and economical compared with conventional floor systems because it is lightweight and prefabricable, and it can be demounted and reused or recycled at the cessation of use owing to its component materials.
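As a back-of-envelope companion to the dynamic-performance studies mentioned above, the sketch below evaluates the classical first natural frequency of a simply supported floor strip, f1 = (π/2L²)√(EI/m). The rigidity, mass and spans are assumed placeholder values, not HCFPS data from the thesis.

```python
import math

def fundamental_frequency(EI, m, L):
    """First natural frequency (Hz) of a simply supported beam strip:
    f1 = (pi / (2 L^2)) * sqrt(EI / m), with EI in N*m^2, m in kg/m, L in m."""
    return (math.pi / (2.0 * L**2)) * math.sqrt(EI / m)

# Illustrative HCFPS-like parameters (assumed, not from the thesis):
EI = 2.0e6    # flexural rigidity of a 1 m wide strip, N*m^2
m = 120.0     # mass per metre of the lightweight section, kg/m
for span in (4.0, 5.0, 6.0):
    f1 = fundamental_frequency(EI, m, span)
    # Walking excitation sits near 2 Hz, so designers typically want f1
    # comfortably above the first few walking harmonics.
    print(f"span {span} m -> f1 = {f1:.2f} Hz")
```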
Abstract:
As universities worldwide begin to appreciate the value of authentic learning experiences, they also struggle with methods of assessing the outcomes of such experiences. This chapter describes the application of an assessment matrix developed by Queensland University of Technology (QUT) in Australia to the assessment requirements and practices relating to work integrated learning (WIL) at the University of Surrey in the UK. Despite the very different institutional contexts and the independent ways in which the assessment regimes have developed, it was found that the values and outcomes being assessed, and the methods used to assess them, were similar. The most important feature of assessing work integrated learning experiences is fitness for purpose; hence the learning objectives and the assessment of outcomes for a WIL experience must be explicitly aligned with that purpose.
Abstract:
Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method, known as the quadratic calibration method, required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. This was shown for both posteriorly and laterally delivered fields. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only the single extra calibration measurement and much simpler computation, was the favored method. This new method could provide a complementary tool to existing position verification methods, and it has the advantage that it is completely passive, requiring no further dose to the patient and using only the treatment fields.
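As a minimal sketch of the simpler point-by-point division calibration favored above, the toy example below divides a synthetic integrated image by a calibration image of the same modulated beam; the arrays and values are fabricated for illustration only and stand in for real EPID data.

```python
import numpy as np

def calibrate_by_division(integrated, calibration, eps=1e-6):
    """Point-by-point division of the integrated portal image by a single
    calibration image of the same modulated beam delivered to a homogeneous
    phantom; the modulation cancels, leaving the patient attenuation."""
    return integrated / np.clip(calibration, eps, None)

# Illustrative usage with synthetic data (shapes/values are assumptions):
modulation = np.random.uniform(0.2, 1.0, (256, 256))           # beam intensity map
anatomy = np.ones((256, 256)); anatomy[100:140, 80:180] = 0.7  # "bone" region
integrated = modulation * anatomy     # image of patient in the modulated beam
calibration = modulation * 1.0        # same beam through a uniform phantom
recovered = calibrate_by_division(integrated, calibration)
print(np.allclose(recovered, anatomy))   # True: modulation removed
```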
Abstract:
Corporate business and management are embracing design thinking for its potential to deliver competitive advantage by helping organisations be more innovative, differentiate their brands, and bring more customer-centric products and services to market (Brown, 2008). As consumers continue to expect more personalisation and customisation from their service providers, the use of design thinking for innovation within organisations is a logical progression. To date, however, there is little empirical literature discussing how organisations are setting about integrating design thinking into their culture and innovation practices. This paper is a first step in initiating a scholarly discussion on the integration of design thinking within organisational culture. Deloitte Australia is a large professional services firm employing over 5700 staff in 12 offices across Australia. The company provides a range of services to clients in the areas of audit, tax, financial advisory and consulting. In early 2011 the company made a strategic commitment to introducing design thinking into the organisation's practices. While it already maintained a strong innovation culture, it had to date largely been operating within an analytical business environment. For Deloitte, design thinking is an opportunity to create better outcomes for the people it serves, both internal and external stakeholders (Brown and Wyatt, 2010). Research was conducted using case study methodology and ethnographic methods from June to September 2011 at the Melbourne Deloitte office. It involved three methods of data collection: semi-structured interviews, participant observation and artifact analysis. This paper presents preliminary case study findings on Deloitte's approach to building awareness and a consistent understanding of design thinking, as well as large-scale capability, across the firm. Deloitte's commitment to transforming its culture to one of design thinking offers significant potential for understanding how design thinking is comprehended, enabled and integrated within a complex organisational environment.
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of spatial resolution and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
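As a sketch of one of the comparison metrics named above, the following brute-force global gamma evaluation (in the spirit of Low et al., 1998) compares two 2D dose grids; the distributions, tolerances and grid spacing are synthetic assumptions, not the project's implementation.

```python
import numpy as np

def gamma_index(ref, eval_, spacing, dose_tol=0.03, dist_tol=3.0):
    """Brute-force global gamma evaluation on a 2D grid: for each reference
    point, gamma = min over evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / dist_tol)^2).
    ref/eval_ are 2D dose arrays; spacing is the grid spacing in mm."""
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float) * spacing
    gamma = np.empty_like(ref)
    norm = dose_tol * ref.max()                 # global dose criterion
    for i in range(ny):
        for j in range(nx):
            dd = (eval_ - ref[i, j]) / norm
            dr = np.hypot(ys - i * spacing, xs - j * spacing) / dist_tol
            gamma[i, j] = np.sqrt(dd**2 + dr**2).min()
    return gamma

# Synthetic example (values are assumptions, not commissioning data):
ref = np.exp(-((np.arange(64) - 32) ** 2) / 200.0)[None, :].repeat(64, 0)
shifted = np.roll(ref, 1, axis=1)              # ~1 pixel setup error
g = gamma_index(ref, shifted, spacing=1.0)
print(f"gamma pass rate (gamma <= 1): {(g <= 1).mean():.1%}")
```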
Abstract:
KEEP CLEAR pavement markings are widely used at urban signalised intersections to tell drivers not to enter a blocked intersection. ‘Box junction’ markings, for example, are widely used in the United Kingdom and other European countries. In Australia, however, KEEP CLEAR markings are mostly used to improve access from side roads onto a main road, especially when the side road is very close to a signalised intersection. This paper aims to reveal how KEEP CLEAR markings affect the dynamic performance of queuing vehicles on the main road where a side road access is near a signalised intersection. Raw traffic field data were collected from an intersection on the Gold Coast, Australia, and the Kanade–Lucas–Tomasi (KLT) feature tracker was used to extract dynamic vehicle data from the raw video footage. The data analysis reveals that KEEP CLEAR markings have positive effects on the discharge of queuing vehicles on the main road. This finding refutes the traditional viewpoint that KEEP CLEAR pavement markings delay the departure of queuing vehicles because of the enlarged queue spacing. Further studies are also suggested in this paper.
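As a minimal sketch of KLT-style feature tracking for extracting vehicle motion from traffic video, the snippet below uses OpenCV's pyramidal Lucas-Kanade optical flow. The video file name, feature parameters and pixel-speed proxy are illustrative assumptions; the paper's actual extraction pipeline is not reproduced here.

```python
import cv2
import numpy as np

# Minimal KLT (Kanade-Lucas-Tomasi) tracking sketch; "traffic.mp4" and all
# parameters below are assumptions for illustration.
cap = cv2.VideoCapture("traffic.mp4")
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect corner features (e.g., on queuing vehicles) to track across frames.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0

while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade flow relocates each feature in the new frame.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    # Per-feature displacement (pixels/frame) scaled by fps is a speed proxy;
    # a homography to road coordinates would be needed for physical units.
    speeds = np.linalg.norm(good_new - good_old, axis=2).flatten() * fps
    if speeds.size:
        print(f"median feature speed: {np.median(speeds):.1f} px/s")
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
cap.release()
```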
Abstract:
In the electricity market environment, the coordination of the reliability and economics of a power system is of great significance in determining the available transfer capability (ATC). In addition, the risks associated with uncertainties should be properly addressed in the ATC determination process for risk-benefit maximization. Against this background, it is necessary that the ATC be optimally allocated and utilized within the relevant security constraints. First, non-sequential Monte Carlo simulation is employed to derive the probability density distribution of the ATC of designated areas, incorporating uncertainty factors. Second, on this basis, a multi-objective optimization model is formulated to determine the multi-area ATC so as to maximize the risk-benefits. Then, the developed model is solved by the fast non-dominated sorting genetic algorithm (NSGA-II), which decreases the risk caused by uncertainties while coordinating the ATCs of different areas. Finally, the IEEE 118-bus test system is used to demonstrate the essential features of the developed model and the employed algorithm.
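As a toy illustration of the non-sequential Monte Carlo step described above, the sketch below samples independent component outage states and builds an empirical ATC distribution. The line capacities, outage rates and ATC proxy are invented placeholders, not IEEE 118-bus data, and the NSGA-II stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-sequential Monte Carlo sketch: sample independent component states
# (up/down) per trial, then evaluate a transfer-capability proxy.
line_capacity = np.array([300.0, 250.0, 400.0, 150.0])   # MW, tie-lines (assumed)
outage_rate = np.array([0.02, 0.03, 0.01, 0.05])          # forced outage prob. (assumed)

def atc_given_states(up):
    """Toy ATC proxy: available interface capacity minus existing transfers."""
    existing_transfer = 350.0  # MW, assumed base-case flow
    return max((line_capacity * up).sum() - existing_transfer, 0.0)

samples = np.array([
    atc_given_states(rng.random(line_capacity.size) > outage_rate)
    for _ in range(20000)
])

# Empirical distribution of ATC, from which risk metrics can be derived.
print(f"mean ATC = {samples.mean():.1f} MW")
print(f"5th percentile (risk view) = {np.percentile(samples, 5):.1f} MW")
```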
Abstract:
Increases in the functionality, power and intelligence of modern engineered systems have led to complex systems with a large number of interconnected dynamic subsystems. In such machines, faults in one subsystem can cascade and affect the behavior of numerous other subsystems. This complicates traditional fault monitoring procedures because of the need to train models of every fault that the monitoring system must detect and recognize. Unavoidable design defects, quality variations and differing usage patterns make it infeasible to foresee all possible faults, resulting in limited diagnostic coverage that can only deal with previously anticipated and modeled failures. This leads to missed detections and costly blind swapping of acceptable components because of one's inability to accurately isolate the source of previously unseen anomalies. To circumvent these difficulties, a new paradigm for diagnostic systems is proposed and discussed in this paper. Its feasibility is demonstrated through application examples in automotive engine diagnostics.
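One common way to build diagnostics that are not limited to pre-modeled failures, sketched below, is to learn the envelope of nominal behavior and flag deviations from it. This is a generic anomaly-detection illustration with invented signals and thresholds, not the paper's specific paradigm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Learn nominal sensor behavior from healthy operation, then score new
# samples by their deviation from it; no fault-specific models are trained.
nominal = rng.normal(loc=80.0, scale=2.0, size=(1000, 3))  # e.g. temp, rpm, pressure
mu, sigma = nominal.mean(axis=0), nominal.std(axis=0)

def anomaly_score(sample):
    """Largest per-channel deviation from nominal, in standard deviations."""
    return np.abs((sample - mu) / sigma).max()

# A previously unseen fault shows up as a deviation even though no model
# of that specific failure was ever trained.
healthy = rng.normal(80.0, 2.0, size=3)
faulty = healthy + np.array([0.0, 12.0, 0.0])   # unexpected shift in channel 2
for name, s in [("healthy", healthy), ("faulty", faulty)]:
    score = anomaly_score(s)
    print(f"{name}: score={score:.1f} -> {'ANOMALY' if score > 4 else 'ok'}")
```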
Abstract:
Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited, mainly because of its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It significantly improves the computing performance of the standard SA while giving an optimal or near-optimal robot path solution, making real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
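For readers unfamiliar with SA-based path planning, the following minimal sketch anneals waypoint positions on a small grid with obstacles. The map, the single perturbation operator and the geometric cooling schedule are deliberately simple assumptions; the paper's enhanced operators and initial-path heuristics are not reproduced.

```python
import math, random

random.seed(0)

# Toy grid world: obstacle cells, fixed start and goal.
GRID, OBSTACLES = 10, {(3, 3), (3, 4), (3, 5), (6, 2), (6, 3)}
START, GOAL = (0, 0), (9, 9)

def cost(path):
    """Path length plus a heavy penalty for waypoints on obstacle cells."""
    length = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    return length + 50.0 * sum(p in OBSTACLES for p in path)

def neighbour(path):
    """Perturb one intermediate waypoint by one cell (endpoints fixed)."""
    new = list(path)
    i = random.randrange(1, len(path) - 1)
    x, y = new[i]
    new[i] = (min(max(x + random.choice((-1, 0, 1)), 0), GRID - 1),
              min(max(y + random.choice((-1, 0, 1)), 0), GRID - 1))
    return new

# Straight-line initial guess; the paper instead seeds SA with heuristics.
path = [(round(9 * t / 10), round(9 * t / 10)) for t in range(11)]
temperature = 10.0
while temperature > 0.01:
    candidate = neighbour(path)
    delta = cost(candidate) - cost(path)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        path = candidate          # accept better or occasionally worse moves
    temperature *= 0.995          # geometric cooling
print(f"final cost: {cost(path):.2f}")
```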
Abstract:
The generational approach to conceptualising first year student learning behaviour has made a useful contribution to understanding student engagement, with its explicit focus on student behaviour. We suggest that a capability maturity model interpretation may provide a complementary extension of that understanding, as it builds on the generational approach by allowing an assessment of institutional capability to initiate, plan, manage, evaluate and review institutional student engagement practices. The development of a Student Engagement, Success and Retention Maturity Model (SESR-MM) is discussed, along with its application in an Australian higher education institution. In this case study, the model identified first, second and third generation approaches and achieved a ‘complementary extension’ of the generational approach, building on it by identifying additional practices not normally considered within the generational concept and by indicating the capability of the institution to provide and implement those practices.
Abstract:
This paper presents a practical scheme to control heave motion during hover and automatic landing of a Rotary-wing Unmanned Aerial Vehicle (RUAV) in the presence of strong horizontal gusts. A heave motion model is constructed to capture the dynamic variations of thrust due to horizontal gusts. Through the construction of an effective gust estimator, a feedback-feedforward controller is developed that uses available measurements from onboard sensors. The proposed controller dynamically and synchronously compensates for aerodynamic variations of heave motion, enhancing the disturbance-attenuation capability of the RUAV. Simulation results confirm the reliability and efficiency of the suggested gust estimator. Moreover, flight tests conducted on our Eagle helicopter verify the suitability of the proposed control strategy for small RUAVs operating in gusty environments.
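As a minimal sketch of the feedback-feedforward structure described above, the toy loop below adds a PD feedback term on altitude to a feedforward term that cancels an estimated gust disturbance. The gains, the idealised gust estimate and the one-dimensional heave dynamics are all illustrative assumptions, not the paper's model.

```python
# Feedback-feedforward heave control sketch: PD feedback regulates altitude
# error while the feedforward term cancels the estimated gust effect.
class HeaveController:
    def __init__(self, kp=2.0, kd=1.2, k_ff=1.0):
        self.kp, self.kd, self.k_ff = kp, kd, k_ff

    def command(self, alt_ref, alt, climb_rate, gust_estimate):
        feedback = self.kp * (alt_ref - alt) - self.kd * climb_rate
        feedforward = -self.k_ff * gust_estimate   # cancel gust-induced thrust change
        return feedback + feedforward

ctrl = HeaveController()
alt, rate, dt = 10.0, 0.0, 0.02
for step in range(500):
    gust = 1.5 if 100 <= step < 200 else 0.0      # horizontal gust episode
    u = ctrl.command(alt_ref=10.0, alt=alt, climb_rate=rate,
                     gust_estimate=gust)           # estimator assumed ideal here
    accel = u + gust - 0.5 * rate                  # toy heave dynamics plus drag
    rate += accel * dt
    alt += rate * dt
print(f"altitude after gust episode: {alt:.2f} m")
```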
Abstract:
A graph theoretic approach is developed for accurately computing haulage costs in earthwork projects. This is vital, as haulage is a predominant factor in the real cost of earthworks. A variety of metrics can be used in our approach, but a fuel consumption proxy is recommended. The approach is novel in that it considers the constantly changing terrain that results from cutting and filling activities, replacing the inaccurate “static” calculations that have been used previously. It is also capable of efficiently correcting violations of the top-down cutting and bottom-up filling conditions that can be found in existing earthwork assignments and sequences. The approach assumes that the project site is partitioned into uniform blocks. A directed graph is then utilised to describe the terrain surface. This digraph is altered after each cut and fill in order to reflect the true state of the terrain. A shortest path algorithm is successively applied to calculate the cost of each haul, and these costs are summed to give the total cost of haulage.
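The sketch below illustrates this scheme on a toy block grid: each haul's cost is a Dijkstra shortest path whose uphill-penalised edge weights stand in for a fuel consumption proxy, and the height map is updated after every cut and fill so later hauls see the changed terrain. The grid, weights and haul list are invented for illustration.

```python
import heapq

def shortest_path_cost(heights, src, dst):
    """Dijkstra over grid blocks; moving uphill costs more (fuel proxy)."""
    rows, cols = len(heights), len(heights[0])
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                climb = max(heights[nr][nc] - heights[r][c], 0.0)
                w = 1.0 + 2.0 * climb          # base move cost + uphill penalty
                if d + w < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + w
                    heapq.heappush(heap, (d + w, (nr, nc)))
    return float("inf")

heights = [[2.0, 2.0, 1.0], [2.0, 1.0, 0.0], [1.0, 0.0, 0.0]]
hauls = [((0, 0), (2, 2)), ((0, 1), (2, 1))]   # (cut block, fill block) pairs
total = 0.0
for cut, fill in hauls:
    total += shortest_path_cost(heights, cut, fill)
    heights[cut[0]][cut[1]] -= 1.0   # terrain changes after each cut...
    heights[fill[0]][fill[1]] += 1.0 # ...and fill, altering later haul costs
print(f"total haulage cost (proxy units): {total:.2f}")
```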