118 results for Input-output model
Abstract:
Hydraulic excavators are widely used in the mining industry owing to the large payloads these machines can achieve. However, there are very few optimisation studies aimed at producing efficient hydraulic excavator buckets. An efficient bucket avoids unnecessary weight, greatly influences the payload and improves the overall efficiency of hydraulic mining excavators. This paper presents a framework for the development of a scaled hydraulic excavator by examining its geometry and force relationships. A small hydraulic excavator was purchased and fitted with a boom scaled to a set factor. Geometric and force relationships of the model were derived to assist computer instrumentation in retrieving the variable inputs needed for bucket design.
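As a rough illustration of the kind of geometry and force relationships involved, the sketch below applies simple dimensional scaling (lengths ∝ λ, areas and hence hydraulic forces at equal pressure ∝ λ², volumes and hence payload ∝ λ³) to a hypothetical scale factor and reference machine; none of the numbers come from the paper.

```python
# Illustrative dimensional scaling for a scale-model hydraulic excavator.
# The scale factor and full-size reference values are hypothetical examples,
# not figures from the paper.

LAMBDA = 1 / 5  # hypothetical geometric scale factor (model : full size)

full_size = {
    "boom_length_m": 7.0,            # length scales with lambda
    "bucket_capacity_m3": 4.0,       # volume (payload) scales with lambda**3
    "cylinder_bore_area_m2": 0.05,   # area (force at equal pressure) scales with lambda**2
}

model = {
    "boom_length_m": full_size["boom_length_m"] * LAMBDA,
    "bucket_capacity_m3": full_size["bucket_capacity_m3"] * LAMBDA**3,
    "cylinder_bore_area_m2": full_size["cylinder_bore_area_m2"] * LAMBDA**2,
}

for key, value in model.items():
    print(f"scaled {key}: {value:.4g}")
```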
Abstract:
The variability of input parameters is the most important source of overall model uncertainty. Therefore, an in-depth understanding of this variability is essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study which investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary widely even within the same land use. Additionally, industrial land use showed relatively higher variability in maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability in the pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume median diameter) displayed the highest relative variability for all three land uses.
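To make the role of these build-up parameters concrete, the sketch below assumes a common saturating build-up form, B(t) = B_max(1 − e^(−kt)), and propagates assumed lognormal variability in B_max and the build-up rate k to the predicted build-up after a dry period; the functional form and all distribution parameters are illustrative assumptions, not values from this study.

```python
# Illustrative propagation of build-up parameter variability through a
# saturating build-up function B(t) = B_max * (1 - exp(-k * t)).
# The functional form and the lognormal parameters below are assumptions
# for illustration only, not values reported in the study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
antecedent_dry_days = 7.0

# Hypothetical lognormal variability in maximum build-up (kg/ha) and rate (1/day)
b_max = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)
k = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=n)

build_up = b_max * (1.0 - np.exp(-k * antecedent_dry_days))

print(f"median build-up after {antecedent_dry_days:.0f} dry days: "
      f"{np.median(build_up):.1f} kg/ha")
print(f"coefficient of variation: {build_up.std() / build_up.mean():.2f}")
```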
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models offer little insight into the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances.

The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their outputs validated against data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis, rather than the empiricism of the macroscopic models currently used, needed to underlie the form of these models. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for their calibration and validation over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations owing to variability in road rules and driving cultures. Not all observed manoeuvres were modelled, as some unusual manoeuvres were not considered worth modelling. Nevertheless, the models developed contain the principal processes of freeway operations: merging and lane changing.

Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream (the kerb-lane traffic) exercises only limited priority over the minor stream (the on-ramp traffic). Theory was established to account for this behaviour. Kerb-lane drivers were also found to change to the median lane, where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major-stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
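For readers unfamiliar with the headway model referred to above, the sketch below implements Cowan's M3 distribution and a conventional absolute-priority gap-acceptance capacity estimate; the thesis's limited-priority treatment (after Troutbeck, 1995) modifies this standard result, and all numeric inputs are illustrative rather than calibrated values.

```python
# Sketch: Cowan's M3 headway model and a standard gap-acceptance capacity
# estimate for a minor (on-ramp) stream merging into a major (kerb-lane) stream.
# This uses the conventional absolute-priority capacity formula; the thesis's
# limited-priority treatment modifies this result. All numbers are illustrative.
import math

def m3_survival(t, q, alpha, delta=1.0):
    """P(headway > t) under Cowan's M3: a proportion alpha of free vehicles
    with shifted-exponential headways, the rest bunched at spacing delta."""
    lam = alpha * q / (1.0 - delta * q)   # decay rate of free headways
    return 1.0 if t < delta else alpha * math.exp(-lam * (t - delta))

def merge_capacity(q, alpha, t_c, t_f, delta=1.0):
    """Minor-stream capacity (veh/s) with M3 major-stream headways."""
    lam = alpha * q / (1.0 - delta * q)
    return q * alpha * math.exp(-lam * (t_c - delta)) / (1.0 - math.exp(-lam * t_f))

q = 1500 / 3600        # major-stream flow, veh/s (illustrative)
alpha = 0.7            # proportion of free (non-bunched) vehicles (illustrative)
t_c, t_f = 2.0, 1.1    # critical gap and follow-on time, s (illustrative)

print(f"P(headway > {t_c} s) = {m3_survival(t_c, q, alpha):.3f}")
print(f"merge capacity ~ {merge_capacity(q, alpha, t_c, t_f) * 3600:.0f} veh/h")
```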
Abstract:
In asset-intensive industries such as mining, oil & gas and utilities, most capital expenditure goes towards acquiring engineering assets. The process of acquiring assets is called "procurement" or "acquisition". An asset procurement decision should take into consideration the installation, commissioning, operational, maintenance and disposal needs of an asset or spare. However, such cross-functional collaboration and communication does not appear to happen between the engineering, maintenance, warehousing and procurement functions in many asset-intensive industries. Acquisition planning and execution are two distinct parts of the asset acquisition process. Acquisition planning, or procurement planning, is responsible for determining exactly what is required to be purchased. It is important that an asset acquisition decision is the result of a cross-functional decision-making process. An acquisition decision leads to a formal purchase order. The most costly asset decisions occur even before assets are acquired. Therefore, an acquisition decision should be the outcome of an integrated planning and decision-making process. Asset-intensive organisations in Australia, both government and non-government, spent AUD 102.5 billion on asset acquisition in 2008-09. There is widespread evidence of many assets and spares not being used or utilised and ultimately being written off. This clearly shows that many organisations end up buying assets or spares that were not required or did not conform to the needs of user functions. This is because strategic, software-driven procurement processes do not consider all the requirements from the various functions within the organisation that contribute to the operation and maintenance of the asset over its life cycle. Much research has been done on how to implement an effective procurement process, and numerous software solutions are available for executing one. However, little research has been done on how to arrive at a cross-functional procurement planning process. It is also important to link the procurement planning process to the procurement execution process. This research discusses the "Acquisition Engineering Model" (AEM) framework, which aims at assisting acquisition decision-making based on various criteria to satisfy cross-functional organisational requirements. The Acquisition Engineering Model (AEM) considers inputs from corporate asset management strategy, production management, maintenance management, warehousing, finance and HSE. It is therefore essential that the multi-criteria-driven acquisition planning process is carried out and its output fed to the asset acquisition (procurement execution) process. An effective procurement decision-making framework for performing acquisition planning which considers these various functional criteria is discussed in this paper.
Abstract:
Computational models of cardiomyocyte action potentials (APs) often make use of a large parameter set. This set can contain elements that are fitted to experimental data independently of any other element, elements that are derived concurrently with other elements to match experimental data, and elements that are derived purely from phenomenological fitting to produce the desired AP output. Furthermore, models can make use of several different data sets, not always derived under the same conditions or even for the same species. It is consequently uncertain whether the parameter set for a given model is physiologically accurate. Moreover, it is only recently that the possibility of degeneracy in parameter values producing a given simulation output has started to be addressed. In this study, we examine the effects of varying two parameters (the L-type calcium current, I(CaL), and the delayed rectifier potassium current, I(Ks)) in a computational model of a rabbit ventricular cardiomyocyte AP on both the membrane potential (V(m)) and the calcium (Ca(2+)) transient. It will subsequently be determined whether there is degeneracy in this model with respect to these parameter values, which has important implications for the stability of these models to cell-to-cell parameter variation, and also for whether the current methodology for generating parameter values is flawed. The accuracy of AP duration (APD) as an indicator of AP shape will also be assessed.
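A minimal sketch of the kind of two-parameter scan and degeneracy check described is given below; the action-potential generator is a crude phenomenological stand-in (not the rabbit ventricular model used in the study), and only the scanning and APD90 scaffolding is intended to be informative.

```python
# Sketch of a two-parameter scan over scaling factors for I_CaL and I_Ks,
# measuring APD90 and looking for degenerate parameter pairs that give a
# near-identical APD. The action-potential generator below is a crude
# phenomenological stand-in, NOT the rabbit ventricular model used in the study.
import numpy as np

def toy_ap(g_cal_scale, g_ks_scale, dt=1e-3, t_end=0.6):
    """Return (t, Vm) for a toy AP whose plateau is lengthened by I_CaL
    and shortened by I_Ks (illustrative placeholder dynamics)."""
    t = np.arange(0.0, t_end, dt)
    apd = 0.25 * g_cal_scale / g_ks_scale          # crude plateau duration (s)
    vm = np.where(t < apd, 20.0 - 100.0 * (t / apd) ** 4, -85.0)
    return t, vm

def apd90(t, vm, v_rest=-85.0):
    """Duration above 90 % repolarisation."""
    v_peak = vm.max()
    threshold = v_peak - 0.9 * (v_peak - v_rest)
    above = t[vm > threshold]
    return above[-1] - above[0] if above.size else 0.0

scales = np.linspace(0.5, 1.5, 21)
results = {(a, b): apd90(*toy_ap(a, b)) for a in scales for b in scales}

# Degeneracy check: distinct (I_CaL, I_Ks) pairs giving APD90 within 1 ms
target = results[(1.0, 1.0)]
degenerate = [k for k, v in results.items()
              if abs(v - target) < 1e-3 and k != (1.0, 1.0)]
print(f"baseline APD90 = {target * 1e3:.1f} ms; "
      f"{len(degenerate)} other parameter pairs match within 1 ms")
```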
Abstract:
In this paper we explore the ability of a recent model-based learning technique, Receding Horizon Locally Weighted Regression (RH-LWR), to learn temporally dependent systems. In particular, this paper investigates the application of RH-LWR to learning control of Multiple-Input Multiple-Output robot systems. RH-LWR is demonstrated by learning joint velocity and position control of a three Degree of Freedom (DoF) rigid-body robot.
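The core of locally weighted regression, on which RH-LWR builds, can be sketched as follows; the receding-horizon machinery and the robot dynamics are not reproduced, and the training data are synthetic.

```python
# Minimal locally weighted regression (LWR) core: for each query point, fit a
# linear model weighted by a Gaussian kernel centred on the query. This is a
# generic sketch; the receding-horizon part of RH-LWR is not implemented here.
import numpy as np

def lwr_predict(X, Y, x_query, bandwidth=0.5):
    """Predict Y at x_query from training pairs (X, Y) via weighted least squares."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # augment inputs with a bias term
    xq = np.append(x_query, 1.0)
    # Gaussian kernel weights centred on the query
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sw * Xa, sw * Y, rcond=None)
    return xq @ beta

# Synthetic example: learn a nonlinear map from a 2-D state to a 1-D command
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.sin(3 * X[:, 0:1]) + 0.5 * X[:, 1:2] ** 2 + 0.01 * rng.standard_normal((200, 1))

x_q = np.array([0.2, -0.4])
print("LWR prediction:", lwr_predict(X, Y, x_q).ravel())
print("true value:    ", np.sin(3 * 0.2) + 0.5 * (-0.4) ** 2)
```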
Abstract:
In John Frazer's seminal book An Evolutionary Architecture (1995), from which this essay is extracted, a fundamental approach is established for how natural systems can unfold mechanisms for negotiating the complex design space inherent in architectural systems. In this essay, which forms a critical part of the book, Frazer draws both correlations and distinctions between natural processes as emulated in design processes and form as an active manifestation within natural systems. Form is seen as an evolving agent generated via the rules of descriptive genetic coding, functioning as part of a metabolic environment. Frazer's process model establishes the realm in which computation must manoeuvre to produce a valid solution space, including the operations of self-organisation, complexity and emergent behaviour. Addressing design as an authored practice, he extends the transference of 'creativity' from the explicit impression into form, to the investment of thought, organisation and strategy in the computational processes which produce form. Frazer's text concentrates astutely on the practising of the evolutionary paradigm, the output of which postulates an architecture born of its relationships to dynamic environmental and socio-economic contexts, and realised through morphogenetic materialisation.
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently, and many still operate stand-alone systems that are not integrated for information management and decision-making. This shows there is a need for an effective system to capture, collate and distribute health data. Implementing the data warehouse concept in healthcare is therefore potentially one solution for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as engineering, defence and retail. The research problem addressed here is: "how can data warehousing assist the decision-making process in healthcare?" To address this problem, the investigation was narrowed to focus on a cardiac surgery unit, using the cardiac surgery unit at The Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time, interaction between the cardiac surgery unit information system and other units is minimal, with only limited and basic two-way interaction with other clinical and administrative databases at TPCH that support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, a suitable data warehouse prototype was proposed and developed based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)], with the goal of improving current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, providing potentially better information for decision-making. From the questionnaire responses and by referring to the literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or a federated data warehouse model, as discussed in many of the consulted publications. The data warehouse prototype was developed using SAS Enterprise Data Integration Studio 4.2, and the data were analysed using SAS Enterprise Edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users, using output created from the prototype as examples of the data desired and possible in a data warehouse environment. According to this feedback, implementation of a data warehouse was seen as a useful tool to inform management options, provide a more complete representation of the factors related to a decision scenario and potentially reduce information product development time. However, many constraints exist in this research.
For example, there were technical issues such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers, as well as Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the warehouse model development process, necessitating an incremental approach, and highlight the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite these constraints, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, the centralised model selected can be upgraded to an enterprise-wide architecture by integrating additional hospital units' databases.
Abstract:
This project investigates machine listening and improvisation in interactive music systems with the goal of improvising musically appropriate accompaniment to an audio stream in real-time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation to allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low-latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio. In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real-time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting. All of these innovations have been combined into a computational agent – the Jambot, which is capable of producing improvised percussive musical accompaniment to an audio stream in real-time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with some comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
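As an illustration of one common approach to the reception-domain problem, the sketch below implements generic spectral-flux onset detection; it is not the Jambot's actual low-latency detection and classification suite, and the test signal is synthetic.

```python
# Sketch of spectral-flux onset detection, one common approach to percussive
# onset detection in polyphonic audio. This is a generic illustration, not the
# Jambot's actual low-latency detection/classification algorithms.
import numpy as np
from scipy.signal import stft, find_peaks

def onset_times(x, fs, frame=1024, hop=256):
    """Return onset times (s) from positive spectral-flux peaks."""
    _, t, Z = stft(x, fs=fs, nperseg=frame, noverlap=frame - hop)
    mag = np.abs(Z)
    # Positive spectral difference between consecutive frames, summed over bins
    flux = np.sum(np.maximum(mag[:, 1:] - mag[:, :-1], 0.0), axis=0)
    flux = flux / (flux.max() + 1e-12)
    peaks, _ = find_peaks(flux, height=0.3, distance=int(0.05 * fs / hop))
    return t[1:][peaks]

# Synthetic test: noise bursts every 0.5 s on a 44.1 kHz signal
fs = 44100
x = np.zeros(fs * 2)
rng = np.random.default_rng(2)
for k in range(4):
    start = int(k * 0.5 * fs)
    x[start:start + 2000] += rng.standard_normal(2000) * np.hanning(2000)

print("detected onsets (s):", np.round(onset_times(x, fs), 3))
```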
Abstract:
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has recently conducted a technology demonstration of a novel fixed wireless broadband access system in rural Australia. The system is based on multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM). It demonstrated an uplink of six simultaneous users at distances ranging from 10 m to 8.5 km from a central tower, achieving a spectral efficiency of 20 bit/s/Hz. This paper reports on the analysis of channel capacity and bit error probability simulations based on the measured MU-MIMO-OFDM channels obtained during the demonstration, and their comparison with results based on channels simulated by a novel geometric-optics-based channel model suitable for MU-MIMO-OFDM in rural areas. Despite its simplicity, the model was found to predict channel capacity and bit error probability accurately for a typical MU-MIMO-OFDM deployment scenario.
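The channel-capacity figure of merit used in such analyses is typically the standard MIMO log-det capacity evaluated per subcarrier; the sketch below computes it for random i.i.d. placeholder channels rather than the measured or geometric-optics-modelled channels.

```python
# Sketch of the standard MIMO channel capacity calculation, evaluated per
# subcarrier and averaged: C = log2 det(I + (SNR/Nt) H H^H). The channel
# realisations below are random i.i.d. placeholders, not the measured or
# geometric-optics channels analysed in the paper.
import numpy as np

def mimo_capacity(H, snr_linear):
    """Capacity (bit/s/Hz) of one narrowband MIMO channel H (Nr x Nt)."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    _, logdet = np.linalg.slogdet(np.eye(nr) + (snr_linear / nt) * gram)
    return logdet / np.log(2.0)

rng = np.random.default_rng(3)
n_subcarriers, nr, nt = 64, 6, 6   # e.g. 6 single-antenna users to a 6-antenna tower
snr_db = 20.0
snr = 10 ** (snr_db / 10)

H = (rng.standard_normal((n_subcarriers, nr, nt))
     + 1j * rng.standard_normal((n_subcarriers, nr, nt))) / np.sqrt(2)

capacities = [mimo_capacity(H[k], snr) for k in range(n_subcarriers)]
print(f"mean capacity: {np.mean(capacities):.1f} bit/s/Hz over {n_subcarriers} subcarriers")
```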
Abstract:
This paper presents a behavioral car-following model based on empirical trajectory data that is able to reproduce the spontaneous formation and ensuing propagation of stop-and-go waves in congested traffic. By analyzing individual drivers’ car-following behavior throughout oscillation cycles it is found that this behavior is consistent across drivers and can be captured by a simple model. The statistical analysis of the model’s parameters reveals that there is a strong correlation between driver behavior before and during the oscillation, and that this correlation should not be ignored if one is interested in microscopic output. If macroscopic outputs are of interest, simulation results indicate that an existing model with fewer parameters can be used instead. This is shown for traffic oscillations caused by rubbernecking as observed in the US 101 NGSIM dataset. The same experiment is used to establish the relationship between rubbernecking behavior and the period of oscillations.
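A low-parameter car-following rule of the kind alluded to (a Newell-type trajectory translation) can be sketched as follows; the parameter values and the leader's slowdown are illustrative and are not taken from the NGSIM calibration.

```python
# Sketch of a very simple trajectory-translation car-following rule of the
# Newell type (a low-parameter model of the kind the paper suggests suffices
# for macroscopic outputs): each follower repeats the leader's trajectory
# shifted by a time lag tau and a space gap d. All values are illustrative.
import numpy as np

dt, t_end = 0.1, 60.0
tau, d = 1.2, 7.0                      # time shift (s) and jam spacing (m), illustrative
n_vehicles = 10
t = np.arange(0.0, t_end, dt)

# Leader: cruises at 15 m/s, briefly slows to 3 m/s (e.g. rubbernecking)
v_lead = np.where((t > 20) & (t < 30), 3.0, 15.0)
x = np.zeros((n_vehicles, t.size))
x[0] = np.cumsum(v_lead) * dt

shift = int(round(tau / dt))
for i in range(1, n_vehicles):
    # Newell rule: x_i(t) = x_{i-1}(t - tau) - d, capped so vehicles never overlap
    delayed = np.concatenate([np.full(shift, x[i - 1, 0]), x[i - 1, :-shift]])
    x[i] = np.minimum(delayed - d, x[i - 1] - d)

speeds = np.diff(x, axis=1) / dt
wave = speeds[-1, int(20 / dt):]       # last follower, after the disturbance starts
print(f"last follower's minimum speed: {wave.min():.1f} m/s "
      f"at t = {20 + wave.argmin() * dt:.1f} s")
```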
Abstract:
Background/objectives: This study estimates the economic outcomes of a nutrition intervention to at-risk patients compared with standard care in the prevention of pressure ulcer. Subjects/methods: Statistical models were developed to predict ‘cases of pressure ulcer avoided’, ‘number of bed days gained’ and ‘change to economic costs’ in public hospitals in 2002–2003 in Queensland, Australia. Input parameters were specified and appropriate probability distributions fitted for: number of discharges per annum; incidence rate for pressure ulcer; independent effect of pressure ulcer on length of stay; cost of a bed day; change in risk in developing a pressure ulcer associated with nutrition support; annual cost of the provision of a nutrition support intervention for at-risk patients. A total of 1000 random re-samples were made and the results expressed as output probability distributions. Results: The model predicts a mean 2896 (s.d. 632) cases of pressure ulcer avoided; 12 397 (s.d. 4491) bed days released and a corresponding mean economic cost saving of EUR 2 869 526 (s.d. 2 078 715) with a nutrition support intervention, compared with standard care. Conclusion: Nutrition intervention is predicted to be a cost-effective approach in the prevention of pressure ulcer in at-risk patients.
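The Monte Carlo structure described can be sketched as follows, with all distribution choices and numbers as hypothetical placeholders rather than the distributions fitted in the study.

```python
# Sketch of the Monte Carlo structure described: sample the input parameters
# from assumed probability distributions, combine them into the three outputs,
# and repeat 1000 times to obtain output distributions. All distributions and
# numbers below are hypothetical placeholders, not the fitted values reported.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 1000

discharges = rng.normal(340_000, 20_000, n_samples)            # at-risk discharges per annum
incidence = rng.beta(3, 97, n_samples)                          # pressure ulcer incidence rate
extra_los = rng.gamma(2.0, 2.0, n_samples)                      # extra bed days per case
bed_day_cost = rng.normal(600, 50, n_samples)                   # cost of a bed day (EUR)
risk_reduction = rng.beta(5, 15, n_samples)                     # relative risk reduction from nutrition support
intervention_cost = rng.normal(3_000_000, 300_000, n_samples)   # annual programme cost (EUR)

cases_avoided = discharges * incidence * risk_reduction
bed_days_gained = cases_avoided * extra_los
cost_change = bed_days_gained * bed_day_cost - intervention_cost

print(f"cases avoided: mean {cases_avoided.mean():.0f} (s.d. {cases_avoided.std():.0f})")
print(f"bed days gained: mean {bed_days_gained.mean():.0f}")
print(f"net saving: mean EUR {cost_change.mean():,.0f}")
```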
Abstract:
This paper presents a model for generating a MAC tag by injecting the input message directly into the internal state of a nonlinear filter generator. This model generalises a similar model for unkeyed hash functions proposed by Nakano et al. We develop a matrix representation for the accumulation phase of our model and use it to analyse the security of the model against man-in-the-middle forgery attacks based on collisions in the final register contents. The results of this analysis show that some conclusions of Nakano et al. regarding the security of their model are incorrect. We also use our results to comment on several recent MAC proposals which can be considered as instances of our model and specify choices of options within the model which should prevent the type of forgery discussed here. In particular, suitable initialisation of the register and active use of a secure nonlinear filter will prevent an attacker from finding a collision in the final register contents which could result in a forged MAC.
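A toy instance of the general model (a keyed register perturbed by message injection during accumulation, then read out through a nonlinear filter) is sketched below; it is deliberately simplified, is not any of the analysed MAC proposals, and is not cryptographically secure.

```python
# Toy instance of the general model: a binary LFSR whose state is perturbed by
# injecting message bits during an accumulation phase, with a nonlinear filter
# applied to the register to produce the tag. A simplified illustration of the
# structure only; it is NOT secure and NOT one of the analysed proposals.
N = 32                                   # register length (bits)
TAPS = (0, 1, 21, 31)                    # illustrative feedback taps

def step(state, injected_bit=0):
    """One register step; the injected message bit is XORed into the feedback."""
    fb = injected_bit
    for t in TAPS:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << N) - 1)

def nonlinear_filter(state):
    """Simple nonlinear output function of a few register stages (illustrative)."""
    b = [(state >> i) & 1 for i in (2, 7, 13, 19, 28)]
    return b[0] ^ (b[1] & b[2]) ^ (b[3] & b[4])

def toy_mac(key, message_bits, tag_len=16):
    state = key & ((1 << N) - 1) or 1    # keyed initialisation of the register
    for bit in message_bits:             # accumulation: inject message into the state
        state = step(state, bit)
    tag = 0
    for _ in range(tag_len):             # finalisation: filter the register to emit the tag
        tag = (tag << 1) | nonlinear_filter(state)
        state = step(state)
    return tag

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
print(f"tag: {toy_mac(key=0x5EED1234, message_bits=msg):04x}")
```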
Abstract:
Flood-related scientific and community-based data are rarely systematically collected and analysed in the Philippines. Over recent decades the Pagsangaan River Basin, Leyte, has experienced several flood events. However, documentation describing flood characteristics such as the extent, duration or height of these floods is close to non-existent. To address this issue, computerised flood modelling was used to reproduce past events for which data were available for at least partial calibration and validation. The model was also used to provide scenario-based predictions based on A1B climate change assumptions for the area. The most important input for flood modelling is a Digital Elevation Model (DEM) of the river basin. No accurate topographic maps or Light Detection And Ranging (LIDAR)-generated data are available for the Pagsangaan River. Therefore, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM), Version 1, was chosen as the DEM. Although its horizontal spatial resolution of 30 m is desirable, the dataset contains substantial vertical errors. These were identified, different correction methods were tested, and the resulting DEM was used for flood modelling. The above-mentioned data were combined with cross-sections at various strategic locations of the river network, meteorological records, river water levels, and current velocity to develop the 1D-2D flood model. SOBEK was used as the modelling software to create different rainfall scenarios, including historic flooding events. Because of the lack of scientific data for verifying model quality, interviews with local stakeholders served as the gauge to judge the quality of the generated flood maps. According to interviewees, the model reflects reality more accurately than previously available flood maps. The resulting flood maps are now used by the operations centre of a local flood early warning system for warnings and evacuation alerts. Furthermore, these maps can serve as a basis to identify flood hazard areas for spatial land use planning purposes.
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronisation error and data loss have prevented these distinct systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. This paper first presents a brief review of the most significant uncertainties inherent in SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied over the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and a preferred combination of OMA techniques to cope with the SHM-WSN uncertainties are recommended.
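The core of the FDD technique referred to above can be sketched as follows: form the cross-power spectral density matrix of the output channels and take an SVD at each frequency line, reading natural frequencies from peaks of the first singular value; the two-mode synthetic data below stand in for measured accelerations.

```python
# Sketch of the core of Frequency Domain Decomposition (FDD): build the
# cross-power spectral density (CPSD) matrix of the output channels and take
# an SVD at each frequency line; peaks of the first singular value indicate
# natural frequencies, and the corresponding singular vectors approximate mode
# shapes. Synthetic two-mode data stand in for measured accelerations.
import numpy as np
from scipy.signal import csd, find_peaks

fs, duration, n_ch = 256, 120, 4
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(5)

# Synthetic responses: two sinusoidal modes plus noise on 4 channels
shapes = np.array([[1.0, 0.8, 0.4, 0.1], [1.0, -0.2, -0.9, -0.5]])
modal = np.vstack([np.sin(2 * np.pi * 3.1 * t), np.sin(2 * np.pi * 7.4 * t)])
acc = shapes.T @ modal + 0.2 * rng.standard_normal((n_ch, t.size))

# CPSD matrix G(f): shape (n_freq, n_ch, n_ch)
f, _ = csd(acc[0], acc[0], fs=fs, nperseg=1024)
G = np.empty((f.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=1024)

# SVD at every frequency line; peaks of the first singular value spectrum
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
peaks, _ = find_peaks(s1, height=0.1 * s1.max())
print("identified frequencies (Hz):", np.round(f[peaks], 2))
```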