Abstract:
In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, have been increasingly used, but without a full understanding of their fire performance. Traditionally, the fire resistance rating of load-bearing LSF wall systems is based on approximate prescriptive methods developed from limited fire tests. Very often these methods are limited to the standard wall configurations used by industry, and increased fire rating is provided simply by adding more plasterboards to the walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, in which the insulation was placed externally between the plasterboards on both sides of the steel wall frame instead of in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). The fire tests included measurements of the load-deformation characteristics of the LSF walls until failure as well as the associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load showed improved fire performance and fire resistance ratings when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using the finite element program ABAQUS. The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with the experimental results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems, in order to increase the understanding of their behaviour under non-uniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of the LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed based on AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in fire design was also investigated.
A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations, including insulation options. Idealised time-temperature profiles were developed based on the measured stud temperatures, and were used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures of different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated the significant improvements of the design method over the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load-bearing LSF wall systems under standard fire conditions. Finite element models were developed to accurately predict the failure times of LSF walls. Idealised hot flange temperature profiles were developed for non-insulated, cavity-insulated and externally insulated load-bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed load-bearing LSF walls under standard fire conditions.
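As a rough illustration of the kind of calculation such a spreadsheet-based tool automates, the sketch below maps a stud's hot-flange temperature to a yield-strength reduction factor and inverts that relationship to find a critical temperature for a given load ratio. The reduction-factor table and the assumption that the failure load ratio is proportional to the reduction factor are illustrative simplifications, not the study's equations (which follow Dolamune Kankanamge and Mahendran, 2011).

```python
# Minimal sketch of a spreadsheet-style critical-temperature calculation.
# The factor table is illustrative only, not the published equations.
from bisect import bisect_left

# (temperature degC, yield-strength reduction factor) -- hypothetical values
REDUCTION = [(20, 1.00), (200, 0.95), (400, 0.70), (500, 0.45),
             (600, 0.25), (700, 0.12)]

def k_y(temp_c: float) -> float:
    """Linearly interpolate the yield-strength reduction factor."""
    temps = [t for t, _ in REDUCTION]
    if temp_c <= temps[0]:
        return REDUCTION[0][1]
    if temp_c >= temps[-1]:
        return REDUCTION[-1][1]
    i = bisect_left(temps, temp_c)
    (t0, k0), (t1, k1) = REDUCTION[i - 1], REDUCTION[i]
    return k0 + (k1 - k0) * (temp_c - t0) / (t1 - t0)

def critical_temperature(load_ratio: float, step: float = 1.0) -> float:
    """Lowest hot-flange temperature at which capacity drops to load_ratio,
    treating the load ratio as proportional to k_y (a simplification)."""
    t = 20.0
    while k_y(t) > load_ratio and t < 700.0:
        t += step
    return t

print(critical_temperature(0.4))  # ~525 degC with this toy table
```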
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault-tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they can be distributed automatically across multiple computing nodes and are adaptable to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
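The paper does not publish the AMCF API, so the toy below only illustrates the dataflow stages (fuzzification, rule firing and defuzzification) that such a framework could relocate between networked nodes at runtime; all membership functions, rules and node assignments are hypothetical.

```python
# A self-contained toy fuzzy controller, not the AMCF itself, showing the
# dataflow stages a platform like AMCF could place on different nodes.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def obstacle_controller(distance_m: float) -> float:
    """Map obstacle distance to a robot speed command (m/s)."""
    # Stage 1: fuzzification (could run on the sensor node).
    near = tri(distance_m, -0.5, 0.0, 1.5)
    far = tri(distance_m, 0.5, 2.0, 3.5)
    # Stage 2: rule firing (could run on a shared compute node).
    # Rule 1: IF near THEN slow (0.1 m/s); Rule 2: IF far THEN fast (1.0 m/s).
    weights, outputs = [near, far], [0.1, 1.0]
    # Stage 3: weighted-average defuzzification (could run on the actuator node).
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(obstacle_controller(0.8))  # mostly "near" -> slow speed command
```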
Abstract:
Sixteen formalin-fixed foetal livers were scanned in vitro using a new system for estimating volume from a sequence of multiplanar 2D ultrasound images. Three different scan techniques (radial, parallel and slanted) and four volume estimation algorithms (ellipsoid, planimetry, tetrahedral and ray tracing) were used. Actual liver volumes were measured by water displacement. Twelve of the sixteen livers also received x-ray computed tomography (CT) and magnetic resonance (MR) scans, and their volumes were calculated using voxel counting and planimetry. The percentage accuracy (mean ± SD) was 5.3 ± 4.7%, −3.1 ± 9.6% and −0.03 ± 9.7% for ultrasound (radial scans, ray volumes), MR and CT (voxel counting), respectively. The new system may be useful for accurately estimating foetal liver volume in utero.
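As an indication of how the planimetry estimate works for the parallel scans, the sketch below sums traced slice areas and multiplies by the slice spacing; the areas and spacing are hypothetical values, not data from the study.

```python
# Minimal planimetry volume sketch: each parallel 2D slice contributes
# (traced cross-sectional area) x (slice spacing). Values are illustrative.

def planimetry_volume(slice_areas_cm2, spacing_cm):
    """Volume in cm^3 from parallel-slice areas and uniform slice spacing."""
    return sum(slice_areas_cm2) * spacing_cm

areas = [1.2, 3.8, 6.1, 7.0, 5.4, 2.9, 0.8]  # traced areas, cm^2 (hypothetical)
print(planimetry_volume(areas, spacing_cm=0.5))  # -> 13.6 cm^3
```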
Abstract:
Ultrasound is used extensively in the field of medical imaging. In this paper, the basic principles of ultrasound are explained using ‘everyday’ physics. Topics include the generation of ultrasound, basic interactions with material and the measurement of blood flow using the Doppler effect.
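The blood-flow measurement mentioned above rests on the standard Doppler-shift relation; the worked numbers below are illustrative assumptions (a 5 MHz probe, 0.5 m/s flow, a 60° beam-to-flow angle and 1540 m/s for the speed of sound in soft tissue), not values from the paper:

```latex
f_D = \frac{2 f_0 v \cos\theta}{c}
    \approx \frac{2 \times (5\,\mathrm{MHz}) \times (0.5\,\mathrm{m/s}) \times \cos 60^\circ}{1540\,\mathrm{m/s}}
    \approx 1.6\,\mathrm{kHz}
```

Here $f_0$ is the transmitted frequency, $v$ the blood speed, $\theta$ the beam-to-flow angle and $c$ the speed of sound in tissue. A shift of this size falls in the audible range, which is why clinical Doppler units can play the flow signal through a loudspeaker, in keeping with the paper's 'everyday' physics theme.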
Abstract:
Nineteen studies met the inclusion criteria. A skin temperature reduction of 5–15 °C, in accordance with the recent PRICE (Protection, Rest, Ice, Compression and Elevation) guidelines, was achieved using cold air, ice massage, crushed ice, cryotherapy cuffs, ice packs and cold water immersion. There is evidence supporting the use and effectiveness of thermal imaging to assess skin temperature following the application of cryotherapy. Thermal imaging is a safe and non-invasive method of measuring skin temperature. Although further research is required, in terms of structuring specific guidelines and protocols, thermal imaging appears to be an accurate and reliable method of collecting skin temperature data following cryotherapy. Currently there is ambiguity regarding the optimal skin temperature reductions in medical or sporting settings. However, this review highlights the ability of several different modalities of cryotherapy to reduce skin temperature.
Abstract:
Design Science Research (DSR) has emerged as an important approach in Information Systems (IS) research, as evidenced by the plethora of recent related articles in recognized IS outlets. Nonetheless, discussion continues on the value of DSR for IS and on how to conduct strong DSR, with further discussion necessary to better position DSR as a mature and stable research paradigm appropriate for IS. This paper contributes to addressing this need by providing a comprehensive conceptual and argumentative positioning of DSR relative to the core of IS. It argues the relevance of DSR as a paradigm that addresses the core of the IS discipline well, using the framework defined by Wand and Weber to position what the core of IS is.
Abstract:
It is accepted that the efficiency of sugar cane clarification is closely linked with sugar juice composition (including suspended or insoluble impurities), the inorganic phosphate content, the liming condition and type, and the interactions between the juice components. These interactions are not well understood, particularly those between calcium, phosphate and sucrose in sugar cane juice. Studies have been conducted on calcium oxide (CaO)/phosphate/sucrose systems in both synthetic and factory juices to provide further information on the defecation process (i.e., simple liming to effect impurity removal) and to identify an effective clarification process that would result in reduced scaling of sugar factory evaporators, pans and centrifugals. Results have shown that a two-stage process involving the addition of lime saccharate to a set juice pH followed by the addition of sodium hydroxide to a final juice pH, or a similar two-stage process in which the order of addition of the alkalis is reversed prior to clarification, reduces the impurity loading of the clarified juice compared to that obtained by the conventional defecation process. The treatment process achieved reductions in CaO (27% to 50%) and MgO (up to 20%) in clarified juices, with no apparent loss in juice clarity or increase in the residence time of the mud particles compared to the conventional process. There was also a reduction in the SiO2 content. However, the disadvantage of this process is a significant increase in the Na2O content.
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
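As a concrete reading of the distance notions described above, the sketch below ranks work items by a weighted combination of due-date urgency, map distance and user suitability. The components, weights and normalisations are illustrative assumptions; the paper's framework supports distances based on geography, process designs, organisational structures, social networks, due dates and calendars.

```python
# Minimal sketch of distance-based work-item ranking in the spirit of the
# map metaphor. All weights and normalisations are invented for illustration.
from dataclasses import dataclass
import math, time

@dataclass
class WorkItem:
    name: str
    due_epoch: float   # due date (seconds since epoch)
    location: tuple    # (x, y) position on the chosen "map"
    role_match: float  # 0..1, how well the item suits the user

def distance(item, user_loc, now, horizon_s=86_400, w=(0.5, 0.3, 0.2)):
    """Smaller = more strongly suggested. Combines urgency, map distance
    and suitability into a single ranking score."""
    urgency = min(max((item.due_epoch - now) / horizon_s, 0.0), 1.0)
    geo = math.dist(item.location, user_loc) / 100.0  # normalise to ~0..1
    return w[0] * urgency + w[1] * min(geo, 1.0) + w[2] * (1.0 - item.role_match)

now = time.time()
items = [WorkItem("approve invoice", now + 3_600, (10, 5), 0.9),
         WorkItem("update ledger", now + 72_000, (80, 40), 0.4)]
for it in sorted(items, key=lambda i: distance(i, (0, 0), now)):
    print(it.name)  # urgent, nearby, well-suited items come first
```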
Abstract:
Companies face the challenges of expanding their markets, improving their products, services and processes, and exploiting intellectual capital in a dynamic network. Therefore, more companies are turning to Enterprise Systems (ES). Knowledge management (KM) has also received considerable attention and is continuously gaining the interest of industry, enterprises and academia. For ES, KM can provide support across the entire lifecycle, from selection and implementation to use. In addition, it is recognised that an ontology is an appropriate means of achieving a common consensus for communication, as well as of supporting a diversity of KM activities, such as knowledge repositories, retrieval, sharing and dissemination. This paper examines the role of ontology-based KM for ES (OKES) and investigates the possible integration of ontology-based KM and ES. The authors develop a taxonomy as a framework for understanding OKES research. In order to achieve the objective of this study, a systematic review of existing research was conducted, and, based on a theoretical framework spanning the ES lifecycle, KM, KM for ES, ontology and ontology-based KM, a taxonomy for OKES is established.
Abstract:
With the continued development of renewable energy generation technologies and increasing pressure to combat the effects of global warming, plug-in hybrid electric vehicles (PHEVs) have received worldwide attention, finding applications in North America and Europe. When a large number of PHEVs are introduced into a power system, there will be extensive impacts on power system planning and operation, as well as on electricity market development. It is therefore necessary to properly control PHEV charging and discharging behaviors. Given this background, a new unit commitment model and its solution method that take into account optimal PHEV charging and discharging controls are presented in this paper. A 10-unit, 24-hour unit commitment (UC) problem is employed to demonstrate the feasibility and efficiency of the developed method, and the impacts of the wide application of PHEVs on the operating costs and emissions of the power system are studied. Case studies are also carried out to investigate the impacts of different PHEV penetration levels and different PHEV charging modes on the results of the UC problem. A 100-unit system is employed for further analysis of the impacts of PHEVs on the UC problem in a larger system application. Simulation results demonstrate that the employment of optimized PHEV charging and discharging modes is very helpful for smoothing the load curve and enhancing the ability of the power system to accommodate more PHEVs. Furthermore, optimal Vehicle-to-Grid (V2G) discharging control provides economic and efficient backup and spinning reserves for the secure and economic operation of the power system.
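The paper's UC formulation is not reproduced here; the brute-force toy below only illustrates how PHEV charging and V2G discharging enter the hourly demand balance of a unit commitment problem. All unit data, demand figures and PHEV parameters are invented for the example.

```python
# Toy 2-unit, 3-hour unit commitment with an aggregate PHEV fleet that can
# charge (raising load) or discharge via V2G (lowering load). Illustrative
# numbers only; real UC models use MILP solvers, ramp limits, reserves, etc.
from itertools import product

demand = [80.0, 120.0, 100.0]  # MW per hour (hypothetical)
units = {                      # name: (capacity MW, cost $/MWh)
    "coal": (100.0, 20.0),
    "gas": (60.0, 45.0),
}
PHEV_POWER = 20.0              # max aggregate charge/discharge, MW
PHEV_ENERGY = 20.0             # usable aggregate battery energy, MWh

best = None
# Enumerate on/off states per unit and a PHEV action per hour:
# -1 = discharge (V2G), 0 = idle, +1 = charge.
for plan in product(product([0, 1], [0, 1], [-1, 0, 1]), repeat=len(demand)):
    cost, soc, feasible = 0.0, 0.0, True
    for (coal_on, gas_on, phev), d in zip(plan, demand):
        soc += phev * PHEV_POWER        # battery state of charge, MWh
        if not (0.0 <= soc <= PHEV_ENERGY):
            feasible = False
            break
        net = d + phev * PHEV_POWER     # charging raises load, V2G lowers it
        cap = coal_on * units["coal"][0] + gas_on * units["gas"][0]
        if cap < net:
            feasible = False
            break
        # Dispatch the cheaper unit first (merit order) up to `net`.
        out = min(coal_on * units["coal"][0], net)
        cost += out * units["coal"][1] + (net - out) * units["gas"][1]
    if feasible and (best is None or cost < best[0]):
        best = (cost, plan)

print("minimum cost: $%.0f, schedule: %s" % best)
```

Even at this scale, the optimum charges the fleet in cheap, lightly loaded hours and discharges at the peak, which is the valley-filling and peak-shaving behaviour the paper reports.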
Abstract:
Unmanned Aircraft Systems (UAS) describe a diverse range of aircraft that are operated without a human pilot on board. Unmanned aircraft range from small rotorcraft, which can fit in the palm of your hand, through to fixed-wing aircraft comparable in size to a commercial passenger jet. The absence of a pilot on board allows these aircraft to be developed with unique performance capabilities, facilitating a wide range of applications in surveillance, environmental management, agriculture, defence, and search and rescue. However, regulations relating to the safe design and operation of UAS first need to be developed before the many potential benefits of these applications can be realised. According to the International Civil Aviation Organization (ICAO), a Risk Management Process (RMP) should support all civil aviation policy and rulemaking activities (ICAO, 2009). The RMP is described in the international standard ISO 31000:2009 (ISO, 2009a). This standard is intentionally generic and high-level, providing limited guidance on how it can be effectively applied to complex socio-technical decision problems such as the development of regulations for UAS. Through the application of principles and tools drawn from systems philosophy and systems engineering, this thesis explores how the RMP can be effectively applied to support the development of safety regulations for UAS. A sound systems-theoretic foundation for the RMP is presented in this thesis. Using the case-study scenario of a UAS operation over an inhabited area, and through the novel application of principles drawn from general systems modelling philosophy, a consolidated framework of the definitions of the concepts of safe, risk and hazard is made. The framework is novel in that it facilitates the representation of broader subjective factors in an assessment of the safety of a system; describes the issues associated with the specification of a system boundary; makes explicit the hierarchical nature of the relationship between the concepts and the subsequent constraints that exist between them; and can be evaluated using a range of analytic or deliberative modelling techniques. Following the general sequence of the RMP, the thesis explores the issues associated with the quantified specification of safety criteria for UAS. A novel risk analysis tool is presented. In contrast to existing risk tools, the analysis tool presented in this thesis quantifiably characterises both the societal and individual risk of UAS operations as a function of the flight path of the aircraft. A novel structuring of the risk evaluation and risk treatment decision processes is then proposed. This structuring is achieved through the application of the Decision Support Problem Technique, a modelling approach that has previously been used to effectively model complex engineering design processes and to support decision-making in relation to airspace design. The final contribution made by this thesis is the development of an airworthiness regulatory framework for civil UAS. A novel "airworthiness certification matrix" is proposed as a basis for the definition of UAS "Part 21" regulations. The resulting airworthiness certification matrix provides a flexible, systematic and justifiable method for promulgating airworthiness regulations for UAS. In addition, an approach for deriving "Part 1309" regulations for UAS is presented.
In contrast to existing approaches, the approach presented in this thesis facilitates a traceable and objective tailoring of system-level reliability requirements across the diverse range of UAS operations. The significance of the research contained in this thesis is clearly demonstrated by its practical real-world outcomes. Industry regulatory development groups and the Civil Aviation Safety Authority have endorsed the proposed airworthiness certification matrix. The risk models have also been used to support research undertaken by the Australian Department of Defence. Ultimately, it is hoped that the outcomes from this research will play a significant part in shaping the regulations for civil UAS, both in Australia and around the world.
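As a hedged sketch of the idea of characterising ground risk as a function of the flight path, the example below accumulates expected third-party fatalities segment by segment over areas of different population density. The model structure (failure rate times exposure time, population density, lethal area and sheltering) is a common high-level form; every parameter value is an assumption, not the thesis's data.

```python
# Minimal flight-path ground-risk sketch; all values are hypothetical.

FAILURE_RATE = 1e-4  # ground-impact failures per flight hour (assumed)
LETHAL_AREA = 20.0   # m^2 affected by an impact (assumed)
SHELTERING = 0.5     # fraction of people protected by structures (assumed)

# Flight path as (segment duration in hours, overflown population density
# in people per m^2), e.g. rural, suburban and urban segments.
path = [(0.5, 1e-5), (0.2, 1e-4), (0.1, 1e-3)]

expected_fatalities = sum(
    FAILURE_RATE * dt * density * LETHAL_AREA * (1.0 - SHELTERING)
    for dt, density in path
)
print(f"expected fatalities per flight: {expected_fatalities:.2e}")
```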
Abstract:
Food modelling systems such as the Core Foods and the Australian Guide to Healthy Eating are frequently used as nutritional assessment tools for menus in ‘well’ groups (such as boarding schools, prisons and mental health facilities), with the draft Foundation and Total Diets (FATD) being the latest revision. The aim of this paper is to apply the FATD to an assessment of food provision in a long-stay, ‘well’ group setting to determine its usefulness as a tool. A detailed menu review was conducted in a 1000-bed male prison, including verification of all recipes. Full diet histories were collected for 106 prisoners, covering foods consumed from the menu and self-funded snacks. Both the menu and the diet histories were analysed according to core foods, with recipes used to assist in the quantification of mixed dishes. The average core foods were compared with the Foundation Diet recommendations (FDR) for males. Results showed that the standard menu provided sufficient quantity for 8 of the 13 FDRs; however, it was low in nuts, legumes and refined cereals, and marginally low in fruits and orange vegetables. The average prisoner diet achieved 9 of the 13 FDRs, notably with margarines and oils at less than half, and legumes at one seventh, of the recommended amounts. Overall, although the menu and prisoner diets could easily be assessed using the FDRs, the provision was not consistent with the recommendations. In long-stay settings, other Nutrient Reference Values not modelled in the FATD, in particular the Suggested Dietary Targets, need consideration, and professional judgement is required in interpretation.
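The assessment step itself reduces to comparing average daily serves against the FDR targets, as in the sketch below; every number shown is a hypothetical placeholder rather than the study's data or the actual FDR values.

```python
# Minimal core-food assessment sketch: compare average daily serves with
# Foundation Diet recommendation targets. All numbers are placeholders.

fdr_serves = {"fruit": 2.0, "legumes": 1.0, "nuts": 1.0, "refined cereals": 2.0}
menu_serves = {"fruit": 1.8, "legumes": 0.15, "nuts": 0.2, "refined cereals": 1.0}

for food, target in fdr_serves.items():
    provided = menu_serves.get(food, 0.0)
    status = "meets" if provided >= target else f"short by {target - provided:.2f}"
    print(f"{food}: {provided}/{target} serves -> {status}")
```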
Abstract:
Human activity-induced vibrations in slender structural systems become apparent in many different excitation modes and consequent action effects that cause discomfort to occupants, crowd panic and damage to public infrastructure. The resulting loss of public confidence in the safety of structures, economic losses, and costs of retrofit and repair can be significant. Advanced computational and visualisation techniques enable engineers and architects to evolve bold and innovative structural forms, very often without precedence. New composite and hybrid materials entering structural systems lack historical evidence of satisfactory performance over the anticipated design life. These structural systems are susceptible to multi-modal and coupled excitation that is very complex and for which design guidance in the present codes and good practice guides is inadequate. Many incidents of amplified resonant response have been reported in buildings, footbridges, stadia and other crowded structures, with adverse consequences. As a result, attenuation of human-induced vibration of innovative and slender structural systems very often requires special studies during the design process. Dynamic activities possess variable characteristics and thereby induce complex responses in structures that are sensitive to parametric variations. Rigorous analytical techniques are available for the investigation of such complex actions and responses to produce acceptable performance in structural systems. This paper presents an overview and a critique of existing code provisions for human-induced vibration, followed by studies on the performance of three contrasting structural systems that exhibit complex vibration. The dynamic responses of these systems under human-induced vibrations have been investigated using experimentally validated computer simulation techniques. The outcomes of these studies will have engineering applications for safe and sustainable structures and provide a basis for developing design guidance.
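For a single mode under harmonic pacing, the amplified resonant response the paper refers to is captured by the classic dynamic amplification factor of a damped single-degree-of-freedom oscillator (a simplification that ignores the multi-modal and coupled effects the paper addresses):

```latex
\mathrm{DAF} = \frac{1}{\sqrt{\left(1-\beta^{2}\right)^{2} + \left(2\zeta\beta\right)^{2}}},
\qquad \beta = \frac{f_{\mathrm{pacing}}}{f_{n}}
```

At resonance ($\beta = 1$) this reduces to $1/(2\zeta)$, so a lightly damped footbridge with $\zeta = 0.01$ sees a fifty-fold amplification of the static response, which is why near-resonant human-induced loads dominate such designs.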
Abstract:
Unmanned Aircraft Systems (UAS) are one of a number of emerging aviation sectors. Such new aviation concepts present a significant challenge to National Aviation Authorities (NAAs) charged with ensuring the safety of their operation within the existing airspace system. There is significant heritage in the existing body of aviation safety regulations for Conventionally Piloted Aircraft (CPA). It can be argued that the promulgation of these regulations has delivered a level of safety tolerable to society, thus justifying the “default position” of applying the same standards, regulations and regulatory structures to emerging aviation concepts such as UAS. An example of this is the proposed “1309” regulation for UAS, which is based on the 1309 regulation for CPA. However, the absence of a pilot on board an unmanned aircraft creates a fundamentally different risk paradigm to that of CPA. An appreciation of these differences is essential to the justification of the “default position” and, in turn, to ensuring the development of effective safety standards and regulations for UAS. This paper explores the suitability of the proposed “1309” regulation for UAS. A detailed review of the proposed regulation is provided and a number of key assumptions are identified and discussed. A high-level model characterising the expected number of third-party fatalities on the ground is then used to determine the impact of these assumptions. The results clearly show that the “one size fits all” approach to the definition of 1309 regulations for UAS, which mandates equipment design and installation requirements independent of where the UAS is to be operated, will not lead to effective management of the risks.
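To make the paper's conclusion concrete, the sketch below evaluates a common high-level ground-risk form (failure rate times population density, lethal area and sheltering) for a single fixed failure-rate requirement over different operating areas. All parameter values are illustrative assumptions, not the paper's data.

```python
# Expected third-party ground fatalities per flight hour for one fixed
# ("one size fits all") failure-rate requirement. Illustrative values only.

FAILURE_RATE = 1e-6  # ground-impact failures per flight hour (assumed fixed)
LETHAL_AREA = 20.0   # m^2 (assumed)
SHELTERING = 0.5     # fraction of people protected (assumed)

areas = {"remote": 1e-7, "rural": 1e-5, "urban": 1e-3}  # people per m^2

for name, density in areas.items():
    e_fatal = FAILURE_RATE * density * LETHAL_AREA * (1.0 - SHELTERING)
    print(f"{name:>6}: {e_fatal:.1e} expected fatalities per flight hour")

# The four-orders-of-magnitude spread between remote and urban operations
# shows why one equipment requirement cannot deliver a uniform ground risk.
```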
Abstract:
The authors present a Cause-Effect fault diagnosis model, which utilises the Root Cause Analysis approach and takes into account the technical features of a digital substation. Dempster-Shafer evidence theory is used to integrate different types of fault information in the diagnosis model so as to implement a hierarchical, systematic and comprehensive diagnosis based on the logical relationships between parent and child nodes, such as transformer/circuit-breaker/transmission-line, and between root and child causes. A real fault scenario is investigated in the case study to demonstrate the developed approach in diagnosing malfunctions of protective relays and/or circuit breakers, missed or false alarms, and other commonly encountered faults at a modern digital substation.
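The evidence-fusion step relies on Dempster's rule of combination; the sketch below implements the rule for two sources over a toy frame of discernment (transformer fault vs. breaker failure). The mass assignments are hypothetical, not taken from the paper's case study.

```python
# Minimal Dempster's rule of combination; masses are illustrative only.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame: transformer fault (T), breaker failure (B). Evidence from a
# protective-relay signal and a breaker-status signal (hypothetical masses).
relay = {frozenset("T"): 0.7, frozenset("TB"): 0.3}
breaker = {frozenset("B"): 0.2, frozenset("TB"): 0.8}
print(combine(relay, breaker))  # fused belief concentrates on {T}
```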