897 results for 2447: modelling and forecasting
Abstract:
The mechanical strength and failure behavior of conventional and microstructured silica optical fibers were investigated using tensile tests together with fracture mechanics and numerical analyses. The effect of polymer coating on failure behavior was also studied. The results indicate that all these fibers fail in a brittle manner and that failure normally initiates at the fiber surface. The failure loads observed in coated fibers are higher than those in bare fibers. The introduction of air holes reduces fiber strength, and their geometrical arrangement has a remarkable effect on the stress distribution in the longitudinal direction. These results are potentially useful for the design, fabrication and evaluation of optical fibers for a wide range of applications.
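As a rough illustration of the fracture mechanics reasoning above, the sketch below estimates brittle failure stress from an assumed surface flaw depth using the standard relation sigma_f = K_Ic / (Y * sqrt(pi * a)); the toughness and geometry values are typical literature figures, not values from the study.

```python
# Illustrative estimate of brittle fracture stress for a surface flaw on a
# silica fiber, via linear-elastic fracture mechanics:
#   sigma_f = K_Ic / (Y * sqrt(pi * a))
# Values below are assumed for illustration, not taken from the study.
import math

K_IC = 0.75e6      # fracture toughness of fused silica, Pa*sqrt(m) (typical literature value)
Y = 1.12           # geometry factor for a shallow surface crack (assumed)

def fracture_stress(flaw_depth_m: float) -> float:
    """Failure stress (Pa) for a surface flaw of the given depth."""
    return K_IC / (Y * math.sqrt(math.pi * flaw_depth_m))

for a_nm in (10, 50, 100, 500):
    sigma = fracture_stress(a_nm * 1e-9)
    print(f"flaw depth {a_nm:4d} nm -> failure stress {sigma / 1e9:5.2f} GPa")
```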
Abstract:
As part of ongoing research into the development of a longer-life insulated rail joint (IRJ), this paper reports a field experiment and simplified 2D numerical modelling aimed at investigating the behaviour of the rail web in the vicinity of the endpost of an IRJ due to wheel passages. A simplified 2D plane stress finite element model is used to simulate the wheel-rail rolling contact impact at the IRJ. This model is validated using data from a strain-gauged IRJ installed in a heavy haul network; the vertical and shear strains at specific positions of the IRJ during train passage were captured and compared with the results of the FE model. The comparison indicates a satisfactory agreement between the FE model and the field testing. Furthermore, it demonstrates that the experimental and numerical analyses reported in this paper provide a valuable basis for developing further insight into the behaviour of IRJs under wheel impacts.
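The kind of model-validation comparison described above can be summarised with simple agreement statistics between measured and predicted strains; a minimal sketch follows, with placeholder numbers rather than the paper's field data.

```python
# A minimal sketch of the model-validation check described above: comparing
# strains measured at gauge positions with the corresponding FE predictions.
# The arrays are hypothetical placeholder values, not the field data.
import numpy as np

measured_microstrain = np.array([412.0, 388.0, 455.0, 401.0, 370.0])   # field data (placeholder)
predicted_microstrain = np.array([425.0, 379.0, 448.0, 415.0, 361.0])  # FE output (placeholder)

residuals = predicted_microstrain - measured_microstrain
rmse = np.sqrt(np.mean(residuals**2))
corr = np.corrcoef(measured_microstrain, predicted_microstrain)[0, 1]

print(f"RMSE: {rmse:.1f} microstrain, correlation: {corr:.3f}")
```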
Abstract:
Home Automation (HA) has emerged as a prominent field for researchers and investors confronting the challenge of penetrating the average home user market with products and services emerging from a technology-based vision. In spite of many technology contributions, there is a latent demand for affordable and pragmatic assistive technologies for pro-active handling of complex lifestyle-related problems faced by home users. This study pioneered the development of an Initial Technology Roadmap for HA (ITRHA) that formulates a need-based vision of 10-15 years, identifying market, product and technology investment opportunities, focusing on those aspects of HA contributing to efficient management of home and personal life. The concept of the Family Life Cycle is developed to understand the temporal needs of the family. In order to formally describe a coherent set of family processes, their relationships, and their interaction with external elements, a reference model named the Family System is established that identifies External Entities, 7 major Family Processes, and 7 subsystems: Finance, Meals, Health, Education, Career, Housing, and Socialisation. Analysis of these subsystems reveals Soft, Hard and Hybrid processes. Rectifying the lack of formal methods for eliciting future user requirements and reassessing evolving market needs, this study developed a novel method called Requirement Elicitation of Future Users by Systems Scenario (REFUSS), integrating process modelling and scenario techniques within the framework of roadmapping. REFUSS is used to systematically derive process automation needs, relating the process knowledge to future user characteristics identified from scenarios created to visualise different futures with richly detailed information on lifestyle trends, thus enabling learning about future requirements. Revealing an addressable market size estimated at billions of dollars per annum, this research has developed innovative ideas on software-based products, including Document Management Systems facilitating automated collection and easy retrieval of all documents, Information Management Systems automating information services, and Ubiquitous Intelligent Systems empowering highly mobile home users with ambient intelligence. Other product ideas include robotic devices, a versatile Kitchen Hand and a Cleaner Arm, that can be time saving. Materialisation of these products requires technology investment, initiating further research in the areas of data extraction and information integration, as well as manipulation and perception, sensor-actuator systems, tactile sensing, odour detection, and robotic controllers. This study recommends new policies on electronic data delivery from service providers, as well as new standards on XML-based document structure and format.
Abstract:
Multi-level concrete buildings require substantial temporary formwork structures to support the slabs during construction. The primary function of this formwork is to safely disperse the applied loads so that the slab being constructed, or the portion of the permanent structure already constructed, is not overloaded. Multi-level formwork is a procedure in which a limited number of formwork and shoring sets are cycled up the building as construction progresses. In this process, each new slab is supported by a number of lower level slabs. The new slab load is, essentially, distributed to these supporting slabs in direct proportion to their relative stiffness. When a slab is post-tensioned using draped tendons, slab lift occurs as a portion of the slab self-weight is balanced. The formwork and shores supporting that slab are unloaded by an amount equivalent to the load balanced by the post-tensioning. This produces a load distribution inherently different from that of a conventionally reinforced slab. Through theoretical modelling and extensive on-site shore load measurement, this research examines the effects of post-tensioning on multi-level formwork load distribution. The research demonstrates that the load distribution process for post-tensioned slabs allows for improvements to current construction practice. These enhancements include a shortening of the construction period; an improvement in the safety of multi-level formwork operations; and a reduction in the quantity of formwork materials required for a project. These enhancements are achieved through the general improvement in safety offered by post-tensioning during the various formwork operations. The research demonstrates that there is generally a significant improvement in the factors of safety over those for conventionally reinforced slabs. This improvement in the factor of safety occurs at all stages of the multi-level formwork operation. The general improvement in the factors of safety with post-tensioned slabs allows for a shortening of the slab construction cycle time. Further, the low level of load redistribution that occurs during the stripping operations makes post-tensioned slabs ideally suited to reshoring procedures. Provided the overall number of interconnected levels remains unaltered, it is possible to increase the number of reshored levels while reducing the number of undisturbed shoring levels without altering the factors of safety, thereby reducing the overall quantity of formwork and shoring materials.
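A minimal sketch of the load-distribution principle described above: the new slab load, reduced by the fraction balanced by post-tensioning, is shared among the supporting slabs in proportion to their relative stiffness. All numbers are illustrative assumptions, not measurements from the research.

```python
# Simplified sketch of multi-level formwork load sharing: a new slab's load
# is distributed to the interconnected supporting slabs in proportion to
# their relative stiffness, and post-tensioning relieves the shores by the
# balanced fraction of self-weight. Illustrative values only.

def shore_loads(new_slab_load: float, slab_stiffnesses: list[float],
                balanced_fraction: float = 0.0) -> list[float]:
    """Load passed to each supporting slab (same units as new_slab_load)."""
    effective_load = new_slab_load * (1.0 - balanced_fraction)  # PT relief
    total_k = sum(slab_stiffnesses)
    return [effective_load * k / total_k for k in slab_stiffnesses]

# Three interconnected levels of equal stiffness, slab self-weight 7.2 kPa,
# post-tensioning balancing 60% of self-weight (all assumed).
loads = shore_loads(7.2, [1.0, 1.0, 1.0], balanced_fraction=0.6)
print([f"{q:.2f} kPa" for q in loads])
```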
Abstract:
The Inflatable Rescue Boat (IRB) is arguably the most effective rescue tool used by Australian surf lifesavers. Its exceptional features of high mobility and rapid response have enabled it to become an icon on Australia's popular beaches. However, the IRB's extensive use within an environment that is as rugged as it is spectacular has led it to become a danger to those who risk their lives to save others. Epidemiological research revealed lower limb injuries to be predominant, particularly to the right leg. The common types of injuries were fractures and dislocations, as well as muscle or ligament strains and tears. The concern expressed by Surf Life Saving Queensland (SLSQ) and Surf Life Saving Australia (SLSA) led to a biomechanical investigation into this unique and relatively unresearched field. The aim of the research was to identify the causes of injury and propose processes that may reduce the incidence and severity of injury to surf lifesavers during IRB operation. Following a review of related research, a design analysis of the craft was undertaken as an introduction to the craft, its design and uses. The mechanical characteristics of the vessel were then evaluated, and the accelerations applied to the crew in the IRB were established through field tests. The data were then combined and modelled in the 3-D mathematical modelling and simulation package MADYMO. A tool was created to compare various scenarios of boat design and methods of operation to determine possible mechanisms to reduce injuries. The results of this study showed that under simulated wave loading the boats flex around a pivot point determined by the position of the hinge in the floorboard. It was also found that the accelerations experienced by the crew exhibited similar characteristics to road vehicle accidents. Staged simulations indicated the attributes of an optimum foam in terms of thickness and density. Likewise, modelling of the boat and crew produced simulations that predicted realistic crew response to the tested variables. Unfortunately, the observed lack of adherence to the SLSA footstrap Standard has impeded successful epidemiological and modelling outcomes. If uniformity of boat setup can be assured, then epidemiological studies will be able to highlight the influence of implementing changes to the boat design. In conclusion, the research provided a tool to successfully link the epidemiology and injury diagnosis to the mechanical engineering design through the use of biomechanics. This was a novel application of the mathematical modelling software MADYMO. Other craft can also be investigated in this manner to provide solutions to the problems identified and therefore reduce the risk of injury for operators.
Abstract:
Expert knowledge is valuable in many modelling endeavours, particularly where data are not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as informative priors, to provide an honest reflection of the current state of knowledge, before updating this with new information. Technology is increasingly being exploited to help support the process of eliciting such information. This paper reviews the benefits that have been gained from utilizing technology in this way. These benefits can be structured within a six-step elicitation design framework proposed recently (Low Choy et al., 2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model, or for updating with new data within a Bayesian analysis. We also assume that the model has been pre-specified before selecting the software. In this case, technology has the most to offer in: targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). Benefits include:
- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains by educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback information, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
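As a concrete illustration of the encoding step (E4), the sketch below fits a Beta prior to two quantiles elicited from a hypothetical expert; the quantile values and the fitting approach are assumptions for illustration, not part of the reviewed software.

```python
# Minimal sketch of encoding an expert's opinion (step E4): fit a Beta prior
# to two elicited quantiles, e.g. "the proportion is probably between 0.2
# (5th percentile) and 0.6 (95th percentile)". Elicited values are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

elicited = {0.05: 0.2, 0.95: 0.6}  # {probability level: elicited quantile}

def quantile_mismatch(params):
    a, b = np.exp(params)  # log-parameterisation keeps a, b positive
    return sum((beta.ppf(p, a, b) - q) ** 2 for p, q in elicited.items())

res = minimize(quantile_mismatch, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"Encoded prior: Beta({a_hat:.2f}, {b_hat:.2f})")
```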
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
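For intuition about the generic pattern (a one-round GKE built from a KEM), the toy sketch below has one party wrap a group secret to each member's KEM public key while all parties contribute nonces in a single round. It uses a toy hashed-ElGamal KEM and is purely illustrative: it is not the thesis's construction and is not secure (tiny group, no authentication).

```python
# Toy one-round group key exchange from a KEM. Illustrative only: NOT the
# thesis's protocol and NOT secure (tiny group, no authentication).
import hashlib, secrets

P, G = 2027, 2  # toy Diffie-Hellman group parameters (insecure; illustration only)

def keygen():
    """Toy hashed-ElGamal KEM key pair."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encap(pk):
    """KEM encapsulation: returns (ciphertext, 32-byte shared secret)."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), hashlib.sha256(str(pow(pk, r, P)).encode()).digest()

def decap(sk, c):
    """KEM decapsulation of the 32-byte shared secret."""
    return hashlib.sha256(str(pow(c, sk, P)).encode()).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

n = 3
keys = [keygen() for _ in range(n)]                   # long-term KEM key pairs
nonces = [secrets.token_bytes(16) for _ in range(n)]

# Single round: party 0 wraps a fresh group secret for every peer using
# KEM + one-time-pad DEM; every party broadcasts its nonce at the same time.
group_secret = secrets.token_bytes(32)
wrapped = {}
for j in range(1, n):
    c, k = encap(keys[j][1])
    wrapped[j] = (c, xor(group_secret, k))

def session_key(gs):
    return hashlib.sha256(gs + b"".join(nonces)).hexdigest()

# Each receiver recovers the group secret and derives the same session key.
for j in range(1, n):
    c, masked = wrapped[j]
    recovered = xor(masked, decap(keys[j][0], c))
    assert session_key(recovered) == session_key(group_secret)
print("agreed session key:", session_key(group_secret)[:16], "...")
```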
Abstract:
Currently in Australia, there are no decision support tools for traffic and transport engineers to assess the crash risk potential of proposed road projects at the design level. A selection of equivalent tools already exists for traffic performance assessment, e.g. aaSIDRA or VISSIM. The Urban Crash Risk Assessment Tool (UCRAT) was developed for VicRoads by ARRB Group to promote methodical identification of future crash risks arising from proposed road infrastructure, where safety cannot be evaluated based on past crash history. The tool will assist practitioners with key design decisions to arrive at the safest and most cost-optimal design options. This paper details the development and application of the UCRAT software. This professional tool may be used to calculate an expected mean number of casualty crashes for an intersection, a road link or a defined road network consisting of a number of such elements. The mean number of crashes provides a measure of risk associated with the proposed functional design and allows evaluation of alternative options. The tool is based on historical data for existing road infrastructure in metropolitan Melbourne and takes into account the influence of key design features, traffic volumes, road function and the speed environment. Crash prediction modelling and risk assessment approaches were combined to develop its unique algorithms. The tool has application in such projects as road access proposals associated with land use developments, public transport integration projects and new road corridor upgrade proposals.
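For readers unfamiliar with crash prediction models, the sketch below shows the generic safety-performance-function form such tools are typically built around; the coefficients are invented for illustration and are not UCRAT's calibrated values.

```python
# Generic safety performance function: expected casualty crashes at an
# intersection as a function of entering traffic volumes. Coefficients are
# made up for illustration; they are not UCRAT's calibrated values.
import math

def expected_casualty_crashes(aadt_major: float, aadt_minor: float,
                              b0: float = -8.5, b1: float = 0.6,
                              b2: float = 0.4) -> float:
    """Expected casualty crashes per year: exp(b0) * Qmaj^b1 * Qmin^b2."""
    return math.exp(b0) * aadt_major**b1 * aadt_minor**b2

print(f"{expected_casualty_crashes(25_000, 6_000):.2f} crashes/year")
```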
Abstract:
Accurately and effectively simulating large deformation is one of the major challenges in the numerical modelling of metal forming. In this paper, an adaptive local meshless formulation based on meshless shape functions and the local weak form is developed for large deformation analysis. Total Lagrangian (TL) and Updated Lagrangian (UL) approaches are used and thoroughly compared with each other in terms of computational efficiency and accuracy. It has been found that the developed meshless technique provides superior performance to the conventional FEM in dealing with large deformation problems in metal forming. In addition, the TL approach has better computational efficiency than the UL approach. However, the adaptive analysis is much more efficient using the UL approach than the TL approach.
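A minimal 1-D illustration of the meshless shape functions such formulations are commonly built on (here moving least squares with a linear basis; the paper's exact shape functions and weak form are not reproduced):

```python
# 1-D moving-least-squares (MLS) shape functions with a linear basis and a
# truncated Gaussian weight. Node layout and support size are illustrative.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)     # scattered nodes (uniform here)
support = 0.3                          # support radius of the weight function

def weight(r):
    """Truncated Gaussian weight, r = |x - x_i| / support."""
    return np.where(r <= 1.0, np.exp(-(r / 0.4) ** 2), 0.0)

def mls_shape_functions(x):
    """MLS shape function values phi_i(x) for a linear basis p = [1, x]."""
    w = weight(np.abs(x - nodes) / support)
    P = np.column_stack([np.ones_like(nodes), nodes])   # rows are p(x_i)^T
    A = (P * w[:, None]).T @ P                          # 2 x 2 moment matrix
    B = (P * w[:, None]).T                              # 2 x n
    return np.array([1.0, x]) @ np.linalg.solve(A, B)

phi = mls_shape_functions(0.47)
print("partition of unity:", phi.sum())        # ~1.0
print("linear reproduction:", phi @ nodes)     # ~0.47
```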
Abstract:
Hydraulic excavators are widely used in the mining industry owing to the large payload capabilities these machines can achieve. However, there are very few optimisation studies for producing efficient hydraulic excavator buckets. An efficient bucket avoids unnecessary weight, greatly influences the payload, and improves the overall efficiency of hydraulic mining excavators. This paper presents a framework for the development of a scaled hydraulic excavator by examining the geometry and force relationships. A small hydraulic excavator was purchased and fitted with a boom scaled to a factor. Geometric and force relationships of the model were derived to assist computer instrumentation in retrieving the variable inputs necessary for bucket design.
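A minimal sketch of one such geometric force relationship, a moment balance about the bucket pivot giving the force available at the bucket tip; the cylinder force and dimensions are assumed values, not those of the purchased machine.

```python
# Bucket breakout force from a moment balance about the bucket pivot.
# Cylinder force and dimensions are illustrative assumptions.
def breakout_force(cylinder_force_kN: float,
                   cylinder_moment_arm_m: float,
                   tip_radius_m: float) -> float:
    """Force at the bucket tip (kN) from moments about the bucket pivot."""
    return cylinder_force_kN * cylinder_moment_arm_m / tip_radius_m

# Assumed values for a small scaled excavator.
print(f"{breakout_force(18.0, 0.12, 0.45):.1f} kN at the bucket tip")
```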
Abstract:
Auto rickshaws (3-wheelers) are the most sought-after transport among the urban and rural poor in India. The assembly of the vehicle involves assemblies of several major components. The L-angle is the component that connects the front panel with the vehicle floor. The current L-angle part has been observed to experience permanent deformation failure over a period of time. This paper studies the effect of the addition of stiffeners on the L-angle to increase the strength of the component. A physical model of the L-angle was reverse engineered and modelled in CAD before static loading analyses were carried out on the model using finite element analysis. The modified L-angle fitted with stiffeners was shown to be able to withstand more load compared to the previous design.
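As a back-of-envelope illustration of why stiffeners help, the sketch below compares the second moment of area of a plain plate section with a ribbed one; the dimensions are assumed, whereas the paper assessed the actual L-angle geometry with FEA.

```python
# Why a stiffener rib raises load capacity: it increases the section's
# second moment of area, lowering bending stress for the same moment.
# Dimensions below are assumed, not the actual L-angle geometry.
def composite_I(parts):
    """Second moment of area about the composite centroid.
    parts: list of (width_mm, height_mm, centroid_y_mm)."""
    A = [w * h for w, h, _ in parts]
    ybar = sum(a * y for a, (_, _, y) in zip(A, parts)) / sum(A)
    return sum(w * h**3 / 12 + a * (y - ybar) ** 2
               for a, (w, h, y) in zip(A, parts))

plate = [(60.0, 3.0, 1.5)]                       # 60 x 3 mm plate
stiffened = plate + [(3.0, 15.0, 3.0 + 7.5)]     # plus a 3 x 15 mm rib

I_plate, I_stiff = composite_I(plate), composite_I(stiffened)
print(f"I increases {I_stiff / I_plate:.1f}x with the rib")
```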
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases with aging equipment and can pose serious consequences for the continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time before equipment is received once it is ordered. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation and to consider multiple levels of component failures. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in a distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early decisions in planning replacement activities for non-repairable aging components, in order to maintain a system reliability performance that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of a non-repairable failure.
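A minimal sketch of the combined failure model idea: a constant hazard for random failures plus an increasing Weibull hazard for aging, giving the year-by-year conditional failure probability of a surviving component. Parameter values are illustrative assumptions.

```python
# Combining a constant random-failure hazard with an increasing Weibull
# aging hazard for a non-repairable component. Parameters are illustrative.
import math

LAMBDA_RANDOM = 0.01      # random failures per year (assumed)
BETA, ETA = 4.0, 60.0     # Weibull shape and scale for aging (assumed), years

def survival(t_years: float) -> float:
    """P(no random failure and no aging failure by time t)."""
    return math.exp(-LAMBDA_RANDOM * t_years - (t_years / ETA) ** BETA)

for age in (20, 40, 50, 60):
    # conditional probability of failing within the next year, given survival
    p_next = 1.0 - survival(age + 1) / survival(age)
    print(f"age {age:2d} y: P(fail next year) = {p_next:.3f}")
```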
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system which can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance. As long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for presbyopic patients with partial or complete loss of accommodation (presbyopia), since this helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations ($Z_4^0$ and $Z_6^0$). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system have been widely adopted, through which the retinal image size can be kept constant. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, where the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality calculated from the IQMs to a certain level of the possible maximum value. In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration ($Z_4^0$) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQM and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQM on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and we used the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between the subject's estimated threshold level for the estimated DOF and the HOA RMS (Pearson's r = 0.88, p < 0.001).
This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes. This population of subjects provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, which was compared to that of a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs in the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of $Z_4^0$ and $Z_6^0$. Systematic introduction of targeted amounts of both $Z_4^0$ and $Z_6^0$ was found to significantly improve the DOF of healthy subjects. The use of wavefront combinations of $Z_4^0$ and $Z_6^0$ with opposite signs can further expand the DOF, compared with using $Z_4^0$ or $Z_6^0$ alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of $Z_4^0$ and $Z_6^0$ were found to provide a better balance of DOF expansion and relatively smaller decreases in visual acuity. Therefore, optimal combinations of $Z_4^0$ and $Z_6^0$ provide a more efficient method to expand the DOF than $Z_4^0$ or $Z_6^0$ alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations. More aberrated eyes generally have a larger DOF. The association of DOF with the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberrations. Among the Zernike HOA terms, spherical aberrations ($Z_4^0$ and $Z_6^0$) were found to improve the DOF. Certain combinations of $Z_4^0$ and $Z_6^0$ provide a more effective method to expand the DOF than using $Z_4^0$ or $Z_6^0$ alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
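A simplified numerical sketch of the through-focus analysis described above: build a pupil function from Zernike defocus and spherical aberration terms, evaluate a PSF-peak (Strehl-like) image quality value at each defocus step, and read off the DOF as the defocus range above a chosen threshold. This proxy metric stands in for the thesis's VSOTF, and all coefficient values are illustrative.

```python
# Through-focus DOF estimation from Zernike defocus and spherical aberration,
# using a peak-PSF (Strehl-like) quality proxy rather than the thesis's VSOTF.
# Pupil size, threshold and coefficient values are illustrative assumptions.
import numpy as np

N, PUPIL_R = 256, 2.5e-3           # grid size; 5 mm pupil (radius in m)
WAVELEN = 555e-9                   # wavelength, m

y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho = np.hypot(x, y)
inside = rho <= 1.0

def z20(r): return np.sqrt(3) * (2*r**2 - 1)                       # defocus
def z40(r): return np.sqrt(5) * (6*r**4 - 6*r**2 + 1)              # primary SA
def z60(r): return np.sqrt(7) * (20*r**6 - 30*r**4 + 12*r**2 - 2)  # secondary SA

def peak_psf(c20, c40, c60):
    """Peak of the PSF for the given Zernike coefficients (in metres)."""
    W = c20*z20(rho) + c40*z40(rho) + c60*z60(rho)
    pupil = inside * np.exp(2j * np.pi * W / WAVELEN)
    return np.abs(np.fft.fft2(pupil)).max() ** 2

def dof(c40, c60, threshold=0.5):
    """Defocus range (D) where quality stays above threshold * its peak."""
    defocus_D = np.linspace(-3, 3, 121)
    c20 = defocus_D * PUPIL_R**2 / (4*np.sqrt(3))   # dioptres -> metres
    q = np.array([peak_psf(c, c40, c60) for c in c20])
    above = defocus_D[q >= threshold * q.max()]
    return above.max() - above.min()

print(f"no HOA:      DOF ~ {dof(0.0, 0.0):.2f} D")
print(f"with Z4^0:   DOF ~ {dof(0.15e-6, 0.0):.2f} D")
print(f"Z4^0 + Z6^0: DOF ~ {dof(0.15e-6, -0.08e-6):.2f} D")
```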
Abstract:
A computational fluid dynamics (CFD) analysis has been performed for a flat plate photocatalytic reactor using the CFD code FLUENT. Under the simulated conditions (Reynolds number, Re, around 2650), a detailed time-accurate computation shows the different stages of flow evolution and the effects of the finite length of the reactor in creating flow instability, which is important for improving the performance of the reactor for storm and wastewater reuse. The efficiency of a photocatalytic reactor for pollutant decontamination depends on the reactor hydrodynamics and configuration. This study aims to investigate the role of different parameters in the optimization of the reactor design for improved performance. In this regard, further modelling and experimental efforts are ongoing to better understand the interplay of the parameters that influence the performance of the flat plate photocatalytic reactor.
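For reference, the operating point quoted above can be reproduced from channel geometry via the hydraulic diameter; the sketch below uses assumed channel dimensions and velocity, not the study's configuration.

```python
# Channel Reynolds number from the hydraulic diameter of a wide, shallow
# flat-plate reactor channel. Geometry and velocity are assumed values.
RHO, MU = 998.2, 1.003e-3       # water at ~20 C: density (kg/m^3), viscosity (Pa s)

def hydraulic_diameter(width_m: float, depth_m: float) -> float:
    area = width_m * depth_m
    perimeter = 2.0 * (width_m + depth_m)
    return 4.0 * area / perimeter

def reynolds(velocity_m_s: float, width_m: float, depth_m: float) -> float:
    return RHO * velocity_m_s * hydraulic_diameter(width_m, depth_m) / MU

# e.g. a 0.2 m wide, 8 mm deep channel at 0.17 m/s (assumed values)
print(f"Re ~ {reynolds(0.17, 0.2, 0.008):.0f}")
```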
Abstract:
This article explores the use of probabilistic classification, namely finite mixture modelling, for the identification of complex disease phenotypes, given cross-sectional data. In particular, it focuses on posterior probabilities of subgroup membership, a standard output of finite mixture modelling, and how the quantification of uncertainty in these probabilities can lead to more detailed analyses. Using a Bayesian approach, we describe two practical uses of this uncertainty: (i) as a means of describing a person's membership to a single or multiple latent subgroups and (ii) as a means of describing identified subgroups by patient-centred covariates not included in model estimation. These proposed uses are demonstrated on a case study in Parkinson's disease (PD), where latent subgroups are identified using multiple symptoms from the Unified Parkinson's Disease Rating Scale (UPDRS).
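A minimal sketch of the core quantity discussed above, posterior subgroup-membership probabilities from a fitted finite mixture model, using synthetic data in place of the UPDRS scores:

```python
# Posterior probabilities of latent subgroup membership from a finite
# mixture model. Synthetic two-subgroup data stands in for UPDRS scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.vstack([rng.normal(1.0, 0.5, size=(100, 4)),    # milder subgroup
                    rng.normal(2.5, 0.7, size=(100, 4))])   # more severe

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
posterior = gmm.predict_proba(scores)    # one row per person, rows sum to 1

# Uncertainty in membership: people without a clear single subgroup.
ambiguous = (posterior.max(axis=1) < 0.8).sum()
print(f"{ambiguous} of {len(scores)} people below 0.8 posterior probability")
```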