696 results for correlation modelling
Abstract:
Shaft-mounted gearboxes are widely used in industry. The torque arm that holds the reactive torque on the housing of the gearbox, if properly positioned, creates a reactive force that lifts the gearbox and unloads the bearings of the output shaft. The shortcoming of these torque arms is that if the gearbox is reversed, the direction of the reactive force on the torque arm also reverses and, added to the weight of the gearbox, overloads the bearings, shortening their operating life. This paper describes a new patented design of torque arm that develops a controlled lifting force and counteracts the weight of the gearbox regardless of the direction of output shaft rotation. Several mathematical models of the conventional and new torque arms were developed and verified experimentally on a specially built test rig that enables modelling of the radial compliance of the gearbox bearings and the elastic elements of the torque arms. Comparison showed good agreement between theoretical and experimental results.
Abstract:
Many researchers have investigated and modelled aspects of Web searching, and a number of studies have explored the relationships between individual differences and Web searching. However, few studies have explored the role of users' cognitive styles in determining Web searching behaviour, and current models of Web searching give limited consideration to users' cognitive styles. The impact of users' cognitive styles on Web searching, and the relationships between them, are little understood or represented. Individuals differ in their information processing approaches and in the way they represent information, and this affects their performance. To create better models of Web searching we need to understand more about users' cognitive styles, their Web search behaviour, and the relationship between the two. More rigorous research is also needed that uses more complex and meaningful measures of relevance, across a range of search task types and populations of Internet users. This project further explores the relationships between users' cognitive styles and their Web searching, and will develop a model depicting those relationships. The related literature, aims and objectives, and research design are discussed.
Abstract:
Objective: Theoretical models of post-traumatic growth (PTG) have been derived in the general trauma literature to describe the post-trauma experience that facilitates the perception of positive life changes. To develop a statistical model identifying factors associated with PTG, structural equation modelling (SEM) was used in the current study to assess the relationships between perception of diagnosis severity, rumination, social support, distress, and PTG. Method: A statistical model of PTG was tested in a sample of participants diagnosed with a variety of cancers (N = 313). Results: An initial principal components analysis of the measure used to assess rumination revealed three components: intrusive rumination, deliberate rumination on benefits, and life purpose rumination. SEM results indicated that the model fit the data well and that 30% of the variance in PTG was explained by the variables. Trauma severity was directly related to distress, but not to PTG. Deliberate rumination on benefits and social support were directly related to PTG. Life purpose rumination and intrusive rumination were associated with distress. Conclusions: The model showed that, in addition to having unique correlating factors, distress was not related to PTG, supporting the notion that these are discrete constructs in the post-diagnosis experience. The statistical model supports the view that the post-diagnosis experience is simultaneously shaped by positive and negative life changes, and that either outcome may be prevalent or both may occur concurrently. An implication for practice is therefore the need for supportive care that is holistic in nature.
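The principal components step described above can be sketched numerically. This is an illustrative example only: the simulated responses and the nine-item structure are assumptions for the sketch, not the study's actual rumination measure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses of 313 participants to a hypothetical 9-item rumination scale.
items = rng.normal(size=(313, 9))

# Principal components via SVD of the mean-centred data matrix.
centred = items - items.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)  # proportion of variance captured by each component
print(explained[:3])             # variance shares of the first three components
```

In practice the number of components retained (three, in the study) is chosen from these variance shares together with interpretability of the loadings.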
Abstract:
This paper describes the development of a simulation model for operating theatres. Elective patient scheduling is complicated by several factors: stochastic demand for resources due to variation in the nature and severity of patients' illnesses, unexpected complications in a patient's course of treatment, and the arrival of non-scheduled emergency patients who compete for resources. Extend simulation software was used for its ability to represent highly complex systems and analyse model outputs. Patient arrivals and lengths of surgery were determined by analysis of historical data. The model was used to explore the effects that increasing patient arrivals and alternative elective patient admission disciplines would have on the performance measures. The model can be used as a decision support system for hospital planners.
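A toy version of the kind of stochastic model described above can be sketched as follows. This is not the authors' Extend model: the arrival rate, the lognormal duration distribution, and the theatre count are invented for illustration.

```python
import random

random.seed(1)

def simulate_day(arrival_rate=0.5, theatres=2, minutes=480):
    """Toy model: Poisson elective arrivals, lognormal surgery durations,
    first-come-first-served allocation to the first free theatre."""
    free_at = [0.0] * theatres          # time each theatre next becomes free
    t, waits = 0.0, []
    while True:
        t += random.expovariate(arrival_rate)      # next patient arrival time
        if t > minutes:
            break
        duration = random.lognormvariate(4.0, 0.5)  # ~55 min median surgery
        i = min(range(theatres), key=lambda k: free_at[k])
        start = max(t, free_at[i])
        waits.append(start - t)                     # waiting time before theatre
        free_at[i] = start + duration
    return waits

waits = simulate_day()
print(len(waits), sum(waits) / max(len(waits), 1))
```

Repeating such runs with higher arrival rates or different admission rules is the basic experiment pattern for exploring performance measures like average wait.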
Abstract:
Populations of the Queensland fruit fly, Bactrocera tryoni, are routinely monitored using cue-lure, a male-only attractant. Such monitoring provides no information about females, and there is little information available to show whether male and female B. tryoni numbers are correlated in the field. Using a data set of 1,148 weekly clearances of orange-ammonia baited traps, which catch both males and females, the correlation between male and female numbers was tested for 48 weeks of the year (four weeks each month) and for the combined data set. Weekly male and female trap catches were almost always highly correlated, regardless of mean population size or time of year. For the whole year, the correlation between male and female numbers was r = 0.722, significant at p < 0.001. The results suggest that changes in the number of male B. tryoni, as detected through cue-lure sampling, will reflect changes in the numbers of female B. tryoni.
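The statistic reported above is a standard Pearson product-moment correlation. A minimal sketch, using hypothetical weekly trap-catch counts rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

males   = [12, 30, 45, 8, 60, 22]   # hypothetical weekly male trap catches
females = [10, 28, 50, 9, 55, 25]   # hypothetical weekly female trap catches
print(round(pearson_r(males, females), 3))
```

Values near +1 indicate that male counts track female counts closely, which is the property the study needs for cue-lure monitoring to stand in for whole-population monitoring.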
Abstract:
Chronic wounds fail to proceed through an orderly process to produce anatomic and functional integrity and are a significant socioeconomic problem. There is much debate about the best way to treat these wounds. In this thesis we review earlier mathematical models of angiogenesis and wound healing. Many of these models assume a chemotactic response of endothelial cells, the primary cell type involved in angiogenesis. Modelling this chemotactic response leads to a system of advection-dominated partial differential equations; we review numerical methods to solve these equations and argue that the finite volume method with flux limiting is best suited to these problems. One treatment of chronic wounds that is shrouded in controversy is hyperbaric oxygen therapy (HBOT). There is currently no conclusive data showing that HBOT can assist chronic wound healing, but there has been some clinical success. In this thesis we use two mathematical models of wound healing to investigate the use of hyperbaric oxygen therapy to assist the healing process: a novel three-species model and a more complex six-species model. The second model accounts for more of the biological phenomena but does not lend itself to mathematical analysis. Both models are then used to make predictions about the efficacy of hyperbaric oxygen therapy and the optimal treatment protocol. Based on our modelling, we make several predictions: intermittent HBOT will assist chronic wound healing while normobaric oxygen is ineffective in treating such wounds; treatment should continue until healing is complete; and finding the right protocol for an individual patient is crucial if HBOT is to be effective.
Analysis of the models allows us to derive constraints on the range of HBOT protocols that will stimulate healing. This enables us to predict which patients are more likely to have a positive response to HBOT, and thus has the potential to improve both the success rate and the cost-effectiveness of this therapy.
Abstract:
“What did you think you were doing?” was the question posed by the conference organizers to me as the inventor and constructor of the first working Tangible Interfaces over 40 years ago. I think the question was intended to encourage me to talk about the underlying ideas and intentionality rather than describe an endless sequence of electronic bricks, and that is what I shall do in this presentation. In the sixties the prevalent idea for a graphics interface was an analogue of sketching, which was somehow to be understood by the computer as three-dimensional form. I rebelled against this notion for reasons I will explain in the presentation, and instead came up with tangible, physical, three-dimensional intelligent objects. I called these first prototypes “Intelligent Physical Modelling Systems”, which is a really dumb name for an obvious concept. I am eternally grateful to Hiroshi Ishii for coining the term “Tangible User Interfaces” - the same idea but with a much smarter name. Another motivator was user involvement in the design process, which led to the Generator project (1979) with Cedric Price for the world’s first intelligent building, capable of organizing itself in response to the appetites of its users. The working model of that project is in MoMA. The same motivation led to a self-builders’ design kit (1980) for Walter Segal, which enabled self-builders to design their own houses. And indeed, as the organizers’ question implied, the motivation and intentionality of these projects developed over the years in step with advancing technology. The speaker will attempt to articulate these changes with medical, psychological and educational examples, much of this later work stemming from the Media Lab where we are talking. Related topics such as “tangible thinking” and “intelligent teacups” will be introduced, and the presentation will end with some speculations about the future.
The presentation will be given against a background of images of early prototypes, many of which have never previously been published.
Abstract:
Digital modelling tools are the next generation of computer-aided design (CAD) tools for the construction industry. They allow a designer to build a virtual model of a building project before the building is constructed. This supports a whole range of analyses, and the identification and resolution of problems before they arise on site, in ways that were previously not feasible.
Abstract:
This paper argues for a model of adaptive design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources over the life-cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environment as the physical context. The interactions amongst all these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The latest interpretation of the Second Law of Thermodynamics states a microscopic formulation of the entropy evolution of complex open systems. It provides a design framework in which an adaptive system evolves towards the optimization of building environmental performance. The paper concludes that adaptive modelling within entropy evolution is a design alternative for sustainable architecture.
Abstract:
The variability of input parameters is the most important source of overall model uncertainty. An in-depth understanding of this variability is therefore essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study which investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary highly even within the same land use. Additionally, industrial land use showed relatively higher variability in maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability in pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume-median diameter) displayed the highest relative variability for all three land uses.
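Relative variability of the kind compared above is commonly expressed as a coefficient of variation. A minimal sketch, using hypothetical D50 samples rather than the study's measurements:

```python
import statistics

def coeff_of_variation(samples):
    """CV = standard deviation / mean: a unitless measure of relative
    variability, comparable across parameters with different units."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical D50 (volume-median diameter, micrometres) from repeated samples.
d50_industrial = [210, 480, 95, 330, 700]
d50_commercial = [250, 300, 270, 320, 280]
print(coeff_of_variation(d50_industrial) > coeff_of_variation(d50_commercial))
```

Because the CV is unitless, it allows the variability of, say, build-up rate (g/m²/day) and D50 (µm) to be ranked on one scale, which is how a "highest variability" statement can be made across parameters.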
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, which can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the detailed configuration and operation of substation and sub-transmission equipment using network flow evaluation, and to consider multiple levels of component failures. In this thesis a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the reliability of bulk supply loads and consumers in distribution networks over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
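The combination of random and aging failure modes described above can be illustrated with a common choice of distributions - an exponential mode for random failures and a Weibull mode with shape parameter greater than 1 for aging. The thesis's specific aging model may differ; the parameters below are purely illustrative.

```python
import math

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate of a Weibull distribution.
    beta > 1 gives the increasing hazard characteristic of aging equipment;
    eta is the characteristic life (same time units as t)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def combined_unreliability(t, lam, beta, eta):
    """Probability of failure by time t when an exponential (random) mode
    with rate lam and a Weibull (aging) mode act independently: the
    component survives only if both modes survive."""
    survival = math.exp(-lam * t) * math.exp(-((t / eta) ** beta))
    return 1.0 - survival

# With beta > 1, the aging hazard grows with equipment age (illustrative values).
print(weibull_hazard(10, 3.0, 40.0) < weibull_hazard(30, 3.0, 40.0))
```

Evaluating `combined_unreliability` across the planning years gives the age-dependent failure probabilities that a contingency enumeration can consume in place of constant failure rates.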
Abstract:
The common approach to estimating bus dwell time at a BRT station is to apply the traditional dwell time methodology derived for suburban bus stops. Although sensitive to boarding and alighting passenger numbers and, to some extent, to the fare collection media, these traditional dwell time models do not account for platform crowding. Moreover, they fall short in accounting for the effects of passengers walking along a relatively long BRT platform. Using experience from Brisbane busway (BRT) stations, a new variable, Bus Lost Time (LT), is introduced into the traditional dwell time model. The bus lost time variable captures the impact of passenger walking and platform crowding on bus dwell time - two characteristics which differentiate a BRT station from a bus stop. This paper reports the development of a methodology to estimate the bus lost time experienced by buses at a BRT platform. Results were compared with the Transit Capacity and Quality of Service Manual (TCQSM) approach to dwell time and station capacity estimation. When the bus lost time was used in dwell time calculations, the BRT station platform capacity was found to reduce by 10.1%.
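A linear dwell time model of the kind being extended above can be sketched as follows. The per-passenger service times, door time, and lost time value are illustrative assumptions, not the calibrated Brisbane or TCQSM values.

```python
def dwell_time(boarders, alighters, t_board=3.0, t_alight=2.0,
               t_doors=4.0, lost_time=0.0):
    """Sketch of a linear dwell time model (seconds): door open/close time
    plus passenger service through the busiest door channel, extended with
    a bus lost time term for platform walking and crowding effects."""
    service = max(boarders * t_board, alighters * t_alight)
    return t_doors + service + lost_time

conventional = dwell_time(10, 5)                    # traditional model
with_lost_time = dwell_time(10, 5, lost_time=6.0)   # BRT platform effects added
print(conventional, with_lost_time)                 # → 34.0 40.0
```

Because station capacity is roughly inversely proportional to dwell time plus clearance time, a lost time term of a few seconds per bus translates directly into the platform capacity reduction the paper reports.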
Abstract:
Reliable infrastructure assets impact significantly on quality of life and provide a stable foundation for economic growth and competitiveness. Decisions about the way assets are managed are of utmost importance in achieving this. Timely renewal of infrastructure assets supports reliability and maximum utilisation of infrastructure and enables business and community to grow and prosper. This research initially examined a framework for asset management decisions and then focused in depth on asset renewal optimisation and renewal engineering optimisation. This study had four primary objectives. The first was to develop a new Asset Management Decision Framework (AMDF) for identifying and classifying asset management decisions. The AMDF was developed by applying multi-criteria decision theory, classical management theory and life cycle management. The AMDF is an original and innovative contribution to asset management in that:
- it is the first framework to provide guidance for developing asset management decision criteria based on fundamental business objectives;
- it is the first framework to provide a decision context identification and analysis process for asset management decisions; and
- it is the only comprehensive listing of asset management decision types developed from first principles.
The second objective of this research was to develop a novel multi-attribute Asset Renewal Decision Model (ARDM) that takes account of financial, customer service, health and safety, environmental and socio-economic objectives. The unique feature of this ARDM is that it is the only model to optimise the timing of asset renewal with respect to fundamental business objectives. The third objective of this research was to develop a novel Renewal Engineering Decision Model (REDM) that uses multiple criteria to determine the optimal timing for renewal engineering. The unique features of this model are that:
- it is a novel extension of existing real options valuation models in that it uses overall utility, rather than the present value of cash flows, to model engineering value; and
- it is the only REDM that optimises the timing of renewal engineering with respect to fundamental business objectives.
The final objective was to develop and validate an Asset Renewal Engineering Philosophy (AREP) consisting of three principles of asset renewal engineering. The principles were validated using a novel application of real options theory. The AREP is the only renewal engineering philosophy in existence. The original contributions of this research are expected to enrich the body of knowledge in asset management by effectively addressing the need for an asset management decision framework, asset renewal and renewal engineering optimisation based on fundamental business objectives, and a novel renewal engineering philosophy.
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system that can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance: as long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for presbyopic patients with partial or complete loss of accommodation (presbyopia), since it helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system, through which the retinal image size can be kept constant, have been widely adopted. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, in which the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality calculated from the IQMs to a certain level of the possible maximum value.
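Since DOF is expressed as a range of defocus error in dioptres, a useful building block is the standard conversion between a Zernike defocus coefficient and its dioptric equivalent. A small sketch (magnitudes only; sign conventions differ between instruments, and the pupil size below is an arbitrary example):

```python
import math

def defocus_to_diopters(c20_microns, pupil_radius_mm):
    """Convert a Zernike defocus coefficient (Z_2^0, in micrometres) to an
    equivalent spherical defocus in dioptres: M = 4*sqrt(3)*c / r^2."""
    r_m = pupil_radius_mm * 1e-3   # pupil radius in metres
    c_m = c20_microns * 1e-6       # coefficient in metres
    return 4.0 * math.sqrt(3) * c_m / (r_m ** 2)

# A 0.5 micron defocus coefficient over a 3 mm pupil radius:
print(round(defocus_to_diopters(0.5, 3.0), 3))  # → 0.385
```

Stepping this defocus term through a range of values while recomputing an IQM is the essence of the through-focus procedure used to read off the DOF.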
In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQMs and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQMs on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, was used as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between the subject's estimated threshold level for the estimated DOF and HOA RMS (Pearson's r = 0.88, p < 0.001). This linear correlation can be used to estimate the threshold level for each individual subject, leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects.
Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes. This population of subjects provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared to those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs of the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of targeted amounts of both Z_4^0 and Z_6^0 was found to significantly improve the DOF of healthy subjects. The use of wavefront combinations of Z_4^0 and Z_6^0 with opposite signs can expand the DOF further than using Z_4^0 or Z_6^0 alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF.
In the experiment, the optimal combinations of Z_4^0 and Z_6^0 were found to provide a better balance of DOF expansion with relatively smaller decreases in visual acuity. The optimal combinations of Z_4^0 and Z_6^0 therefore provide a more efficient method of expanding the DOF than Z_4^0 or Z_6^0 alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations: more aberrated eyes generally have a larger DOF. The association between DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberration. Among the Zernike HOA terms, spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF. Certain combinations of Z_4^0 and Z_6^0 provide a more effective method of expanding the DOF than Z_4^0 or Z_6^0 alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance; capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration; to be calibrated using data acquired at these locations; and to have outputs that could be validated with data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis, rather than the empiricism of the macroscopic models currently used, needed to underlie the form of these models. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for their calibration and validation over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled: some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this behaviour. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate which excludes lane changers. Cowan's M3 headway model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between flow and the proportion of headways greater than 1 s differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed; smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
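The gap acceptance elements named above (Cowan's M3 headway model, critical gap, follow-on time) combine into Troutbeck-style capacity formulas. The sketch below uses the absolute-priority form for simplicity - the thesis's limited priority model modifies this - and the parameter values are illustrative, not the calibrated Brisbane values.

```python
import math

def m3_capacity(q_major, alpha, delta, t_c, t_f):
    """Merge (minor stream) capacity in veh/s under Cowan's M3 headway model,
    absolute-priority form.  q_major: major stream flow (veh/s); alpha:
    proportion of free headways; delta: minimum headway (s); t_c: critical
    gap (s); t_f: follow-on time (s)."""
    lam = alpha * q_major / (1.0 - delta * q_major)   # decay rate of free headways
    return (q_major * alpha * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# Kerb lane flow 1200 veh/h, 70% free headways, 1 s minimum headway,
# critical gap 3 s, follow-on time 1.2 s (all illustrative):
q = 1200 / 3600.0
cap_vph = 3600.0 * m3_capacity(q, 0.7, 1.0, 3.0, 1.2)
print(round(cap_vph))
```

Capacity falls as the critical gap or follow-on time grows, which is why the calibration of those two parameters dominates the practical capacity estimates discussed above.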