905 results for post-deformation softening modelling
Abstract:
This paper presents an accurate and robust geometric and material nonlinear formulation for predicting the structural behaviour of unprotected steel members at elevated temperatures. A fire analysis including large-displacement effects for frame structures is presented. The finite element formulation of beam-column elements is based on the plastic hinge approach to model elasto-plastic strain-hardening material behaviour. The Newton-Raphson method, allowing for thermal-time-dependent effects, is employed to solve the nonlinear governing equations for large deflection over the thermal history. A combined incremental and total formulation for determining member resistance is employed in this nonlinear solution procedure for efficient modelling of nonlinear effects. Degradation of material strength with increasing temperature is simulated by a set of temperature-stress-strain curves according to both ECCS and BS5950 Part 8, which implicitly allow for creep deformation. The effects of uniform and non-uniform temperature distributions over the section of the structural steel member are also considered. Several numerical and experimental verifications are presented.
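To make the solution strategy concrete, the sketch below shows the bare incremental-iterative idea on a single-degree-of-freedom member: at each step of the thermal history the member stiffness is reduced by an assumed temperature factor and Newton-Raphson iteration restores equilibrium. The reduction-factor table, the cubic softening term and all numerical values are illustrative placeholders, not the paper's formulation or the ECCS/BS5950 curves.

```python
# Hedged sketch: Newton-Raphson solution of a single-DOF nonlinear member at
# stepwise increasing temperature. Reduction factors and the cubic softening
# term are illustrative placeholders, not ECCS/BS5950 values.
import numpy as np

def stiffness_reduction(T):
    """Hypothetical strength/stiffness reduction factor vs temperature (deg C)."""
    temps = np.array([20.0, 200.0, 400.0, 600.0, 800.0])
    factors = np.array([1.0, 0.95, 0.75, 0.40, 0.10])
    return np.interp(T, temps, factors)

def residual(u, T, load, k0=50.0e3, beta=1.0e3):
    """Out-of-balance force: internal resistance minus applied load."""
    k_T = k0 * stiffness_reduction(T)
    return k_T * u + beta * u**3 - load

def tangent(u, T, k0=50.0e3, beta=1.0e3):
    """Consistent tangent stiffness dR/du."""
    return k0 * stiffness_reduction(T) + 3.0 * beta * u**2

def solve_thermal_history(load=100.0, temperatures=(20, 200, 400, 600), tol=1e-8):
    u = 0.0
    history = []
    for T in temperatures:                  # march through the thermal history
        for _ in range(50):                 # Newton-Raphson iterations at this T
            r = residual(u, T, load)
            if abs(r) < tol:
                break
            u -= r / tangent(u, T)
        history.append((T, u))
    return history

if __name__ == "__main__":
    for T, u in solve_thermal_history():
        print(f"T = {T:4.0f} C  ->  displacement = {u:.4f}")
```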
Abstract:
Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are being increasingly integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is 'how can we manage this constantly evolving grid?' The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact that the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM), to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimization (PSO), to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks have been modelled: three-phase and Single Wire Earth Return (SWER). Three-phase networks are usually used in dense networks such as urban areas, while SWER networks are widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: (a) assess what is already in place and how it performs under current and future loads, (b) determine what can be done to manage it and plan the future grid, and (c) evaluate how these upgrades/new installations will perform over time. The number of small-scale distributed generators, e.g. PV and batteries, is now sufficient (and expected to increase) to affect the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (Step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (Step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (Step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, using the impact that batteries and PV can have on the two types of networks mentioned above, three design conditions can be identified (amongst others):

Urban conditions:
- Feeders that have a low take-up of solar generators may benefit from adding solar panels.
- Feeders that need voltage support at specific times may be assisted by installing batteries.

Rural conditions (SWER network):
- Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.

This small example demonstrates that no single solution can be applied across all three conditions, and there is a need to be selective about which one is applied to each branch of the network.
This is currently the role of the engineer, who can define various scenarios against a configuration, test them and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
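As a rough illustration of the optimisation step (Step b), the sketch below runs a generic particle swarm optimisation over candidate PV and battery capacities for a single feeder. The cost function, prices and peak-demand figures are invented placeholders standing in for the project's network-upgrade model, and the ABM performance check (Step c) is omitted.

```python
# Hedged sketch of the PSO step: each particle encodes candidate PV and battery
# capacities for a feeder; the cost function is a simple placeholder for the
# real network-upgrade model.
import numpy as np

rng = np.random.default_rng(0)

def upgrade_cost(x):
    """Illustrative cost: capital cost of PV/battery plus a penalty for unmet peak demand."""
    pv_kw, batt_kwh = x
    capital = 1200.0 * pv_kw + 600.0 * batt_kwh
    peak_shortfall = max(0.0, 80.0 - 0.6 * pv_kw - 0.3 * batt_kwh)  # assumed 80 kW peak
    return capital + 5000.0 * peak_shortfall

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))          # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    best, best_cost = pso(upgrade_cost, bounds=[(0, 200), (0, 400)])
    print(f"PV = {best[0]:.1f} kW, battery = {best[1]:.1f} kWh, cost = {best_cost:.0f}")
```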
Abstract:
This research was commissioned by Metecno Pty Ltd, trading as Bondor®. The InsulLiving house was designed and constructed by Bondor®. The house instrumentation (electricity circuits, indoor environment, weather station) was provided by Bondor® and supplied and installed by independent contractors. This report contains analysis of data collected from the InsulLiving house at Burpengary during one year of occupancy by a family of four, for the period 1 April 2012 – 31 March 2013. The data show a daily average electricity consumption 48% lower than the regional average. The analysis confirms that the 9-star house performed slightly better thermally than its simulated performance. The home was 'near zero energy', with its modest 2.1 kW solar power system meeting all of the needs for space heating and cooling, lighting and most water heating.
Abstract:
Stigmergy is a biological term used when discussing a subset of insect swarm behaviour, describing the apparent organisation seen during their activities. Stigmergy describes a communication mechanism based on environment-mediated signals which trigger responses among the insects. This phenomenon is demonstrated in the behaviour of ants and their food-gathering process when following pheromone trails, where the pheromones are a form of environment-mediated communication. What is interesting about this phenomenon is that highly organised societies are achieved without an apparent management structure. Stigmergy is also observed in human environments, both natural and engineered. It is implicit in the Web, where sites provide a virtual environment supporting coordinative contributions. Researchers in varying disciplines appreciate the power of this phenomenon and have studied how to exploit it. As stigmergy becomes more widely researched, we see its definition mutate as papers citing the original work become referenced themselves. Each paper interprets these works in ways very specific to the research being conducted. Our own research aims to better understand what improves the collaborative function of a Web site when exploiting the phenomenon. However, when researching stigmergy to develop our understanding, we discovered the lack of a standardised and abstract model of the phenomenon. Papers frequently cite the same generic descriptions before becoming intimately focused on formal specifications of an algorithm, or on esoteric discussions regarding sub-facets of the topic. None provide a holistic and macro-level view to model and standardise the nomenclature. This paper provides a content analysis of influential literature, documenting the numerous theoretical and experimental papers that have focused on stigmergy. We establish that stigmergy is a phenomenon that transcends the insect world and is more than just a metaphor when applied to the human world. We present, from our own research, our general theory and abstract model of the semantics of stigma in stigmergy. We hope our model will clarify the nuances of the phenomenon into a useful road-map, and standardise vocabulary that we witness becoming confused and divergent. Furthermore, this paper documents the analysis on which we base our next paper: Special Theory of Stigmergy: A Design Pattern for Web 2.0 Collaboration.
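As a toy illustration of the mechanism described above (and not of the paper's abstract model), the sketch below has agents that only ever read and write a shared environment: each agent reinforces the site it visits, the marks evaporate over time, and preferred sites emerge without any direct agent-to-agent communication.

```python
# Toy illustration of stigmergy: agents leave marks in a shared environment,
# and those marks bias where later agents act, so coordination emerges with no
# direct agent-to-agent messaging. All parameters are arbitrary.
import random

random.seed(1)
SITES = 10
pheromone = [1.0] * SITES          # the shared environment
EVAPORATION, DEPOSIT = 0.95, 1.0

def choose_site():
    """Pick a site with probability proportional to its pheromone level."""
    total = sum(pheromone)
    r, acc = random.uniform(0, total), 0.0
    for i, level in enumerate(pheromone):
        acc += level
        if r <= acc:
            return i
    return SITES - 1

for step in range(200):
    site = choose_site()
    pheromone[site] += DEPOSIT                           # act and mark the environment
    pheromone[:] = [p * EVAPORATION for p in pheromone]  # marks decay over time

print("Final pheromone distribution:", [round(p, 2) for p in pheromone])
```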
Abstract:
Introduction: The culture in many team sports involves consumption of large amounts of alcohol after training/competition. The effect of such a practice on the recovery processes underlying protein turnover in human skeletal muscle is unknown. We determined the effect of alcohol intake on rates of myofibrillar protein synthesis (MPS) following strenuous exercise with carbohydrate (CHO) or protein ingestion. Methods: In a randomized cross-over design, 8 physically active males completed three experimental trials comprising resistance exercise (8 × 5 reps leg extension, 80% 1 repetition maximum) followed by continuous (30 min, 63% peak power output (PPO)) and high-intensity interval (10 × 30 s, 110% PPO) cycling. Immediately and 4 h post-exercise, subjects consumed either 500 mL of whey protein (25 g; PRO), alcohol (1.5 g·kg⁻¹ body mass, 12 ± 2 standard drinks) co-ingested with protein (ALC-PRO), or an energy-matched quantity of carbohydrate also with alcohol (25 g maltodextrin; ALC-CHO). Subjects also consumed a CHO meal (1.5 g CHO·kg⁻¹ body mass) 2 h post-exercise. Muscle biopsies were taken at rest and at 2 and 8 h post-exercise. Results: Blood alcohol concentration was elevated above baseline with ALC-CHO and ALC-PRO throughout recovery (P<0.05). Phosphorylation of mTOR at Ser2448 2 h after exercise was higher with PRO compared to ALC-PRO and ALC-CHO (P<0.05), while p70S6K phosphorylation was higher 2 h post-exercise with ALC-PRO and PRO compared to ALC-CHO (P<0.05). Rates of MPS increased above rest for all conditions (~29–109%, P<0.05). However, compared to PRO, there was a hierarchical reduction in MPS with ALC-PRO (24%, P<0.05) and with ALC-CHO (37%, P<0.05). Conclusion: We provide novel data demonstrating that alcohol consumption reduces rates of MPS following a bout of concurrent exercise, even when co-ingested with protein. We conclude that alcohol ingestion suppresses the anabolic response in skeletal muscle and may therefore impair recovery and adaptation to training and/or subsequent performance.
Abstract:
This thesis investigates how ownership structure and corporate governance relate to the post-listing liquidity of IPO firms. Using a sample of 1,049 Chinese IPOs from 2001 to 2010, the results show that firms with a broader shareholder base and higher ownership concentration have greater post-listing liquidity, as do firms with higher state ownership and lower institutional ownership. Corporate governance is also important; post-listing liquidity is higher for firms with CEO duality, a larger and more independent board, and more frequent board meetings. The 2005 Split Share Structure Reform, which increased the proportion of tradable shares, has a positive impact on liquidity.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. In reality, however, users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and has been widely utilised in fields such as machine learning and information retrieval; however, its effectiveness in information filtering has not been well explored. Patterns are generally thought to be more discriminative than single terms for describing documents. However, the enormous number of discovered patterns hinders their effective and efficient use in real applications; therefore, selecting the most discriminative and representative patterns from the huge number of discovered patterns becomes crucial. To deal with the above limitations and problems, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organised in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
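The sketch below illustrates the general pipeline idea rather than the MPBTM algorithm itself: topics are learned with LDA, each topic's top terms act as a much simplified stand-in for its patterns, and a candidate document is scored by pattern overlap weighted by the user's topic interest. All documents and parameters are toy placeholders.

```python
# Hedged sketch of a topic-plus-pattern filtering idea (not the MPBTM method):
# learn topics with LDA over the user's relevant documents, take each topic's
# top terms as a crude "pattern", and score a new document by weighted overlap.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

relevant_docs = [
    "stock market shares trading investors",
    "market prices trading volumes stocks",
    "football match players league goals",
]
candidate = "investors watch stock prices as trading volumes rise"

vec = CountVectorizer()
X = vec.fit_transform(relevant_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                 # document-topic distributions
terms = np.array(vec.get_feature_names_out())

# Top terms per topic as a crude stand-in for that topic's patterns.
patterns = [set(terms[np.argsort(row)[-4:]]) for row in lda.components_]

# User interest per topic = average topic weight over the relevant documents.
topic_interest = doc_topics.mean(axis=0)

# Score the candidate document by pattern overlap weighted by topic interest.
cand_terms = set(candidate.split())
score = sum(w * len(p & cand_terms) / len(p) for w, p in zip(topic_interest, patterns))
print(f"relevance score: {score:.3f}")
```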
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable, as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, across a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than the mean-field approximation, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations; thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made.
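For readers unfamiliar with the discrete model, the sketch below simulates a simple volume-excluding birth-migration process on a one-dimensional lattice, averages it over realisations, and compares the average density against a logistic mean-field update that ignores spatial correlations. The rates, lattice size and initial condition are illustrative choices, not the paper's parameter values.

```python
# Hedged sketch of a volume-excluding birth-migration process on a 1D lattice,
# averaged over realisations, with a logistic mean-field update for comparison.
import numpy as np

rng = np.random.default_rng(0)
L, P_MOVE, P_PROLIF, STEPS, REALISATIONS = 100, 1.0, 0.05, 200, 20

def simulate():
    lattice = np.zeros(L, dtype=bool)
    lattice[:20] = True                              # initially occupied region
    densities = []
    for _ in range(STEPS):
        for idx in rng.permutation(np.flatnonzero(lattice)):  # random sequential update
            if not lattice[idx]:
                continue
            target = idx + rng.choice((-1, 1))
            if 0 <= target < L and not lattice[target]:       # exclusion: target must be empty
                if rng.random() < P_PROLIF:                   # attempt proliferation
                    lattice[target] = True
                elif rng.random() < P_MOVE:                   # otherwise attempt migration
                    lattice[idx], lattice[target] = False, True
        densities.append(lattice.mean())
    return densities

discrete = np.mean([simulate() for _ in range(REALISATIONS)], axis=0)

# Mean-field approximation: ignore spatial correlations, giving logistic growth
# of the average density C, i.e. C_{t+1} = C_t + P_PROLIF * C_t * (1 - C_t).
C = np.empty(STEPS)
C[0] = 20 / L
for t in range(1, STEPS):
    C[t] = C[t - 1] + P_PROLIF * C[t - 1] * (1 - C[t - 1])

print(f"final average density: discrete ~ {discrete[-1]:.3f}, mean-field ~ {C[-1]:.3f}")
```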
Abstract:
Philosophical inquiry in the teaching and learning of mathematics has received continued, albeit limited, attention over many years (e.g., Daniel, 2000; English, 1994; Lafortune, Daniel, Fallascio, & Schleider, 2000; Kennedy, 2012a). The rich contributions these communities can offer school mathematics, however, have not received the deserved recognition, especially from the mathematics education community. This is a perplexing situation given the close relationship between the two disciplines and their shared values for empowering students to solve a range of challenging problems, often unanticipated, and often requiring broadened reasoning. In this article, I first present my understanding of philosophical inquiry as it pertains to the mathematics classroom, taking into consideration the significant work that has been undertaken on socio-political contexts in mathematics education (e.g., Skovsmose & Greer, 2012). I then consider one approach to advancing philosophical inquiry in the mathematics classroom, namely, through modelling activities that require interpretation, questioning, and multiple approaches to solution. The design of these problem activities, set within life-based contexts, provides an ideal vehicle for stimulating philosophical inquiry.
Abstract:
The number of office building retrofit projects is increasing. These projects are characterised by processes which have a close relationship with waste generation and therefore demand a high level of waste management. In a preliminary study reported separately, we identified seven critical factors of on-site waste generation in office building retrofit projects. Through semi-structured interviews and Interpretive Structural Modelling (ISM), this research further investigated the interrelationships among these critical waste factors, to identify each factor's level of influence on waste generation and to propose effective solutions for waste minimization. "Organizational commitment" was identified as the fundamental issue for waste generation in the ISM system. Factors related to the planning, design and construction processes were found to be located in the middle levels of the ISM model but still had significant impacts on the system as a whole. Based on the interview findings and ISM analysis results, some practical solutions were proposed for waste minimization in building retrofit projects: (1) reusable and adaptable fit-out design; (2) a system for as-built drawings and building information; (3) integrated planning for the retrofitting work process and waste management; and (4) waste benchmarking development for retrofit projects. This research will provide a better understanding of waste issues associated with building retrofit projects and facilitate enhanced waste minimization.
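The sketch below illustrates the standard ISM level-partitioning step used in this kind of analysis: given a final reachability matrix, factors whose reachability set equals the intersection of their reachability and antecedent sets are assigned to the current level and removed, and the procedure repeats. The four factor labels loosely echo the abstract, but the matrix itself is an invented placeholder, not the study's data (which involved seven factors).

```python
# Hedged sketch of ISM level partitioning from a (hypothetical) reachability matrix.
import numpy as np

factors = ["organisational commitment", "planning", "design", "construction"]
# reach[i][j] = 1 means factor i influences (reaches) factor j, including itself.
reach = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

def ism_levels(reach, names):
    remaining = list(range(len(names)))
    levels = []
    while remaining:
        level = []
        for i in remaining:
            reach_set = {j for j in remaining if reach[i][j]}
            antecedents = {j for j in remaining if reach[j][i]}
            if reach_set == reach_set & antecedents:   # top-level condition
                level.append(i)
        levels.append([names[i] for i in level])
        remaining = [i for i in remaining if i not in level]
    return levels

for depth, level in enumerate(ism_levels(reach, factors), start=1):
    print(f"Level {depth}: {level}")
```

With this placeholder matrix the fundamental factor ends up on the deepest level, mirroring the hierarchy the abstract describes.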
Abstract:
Object classification is plagued by the issue of session variation. Session variation describes any variation that makes one instance of an object look different to another, for instance due to pose or illumination changes. Recent work in the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations. However, for computer vision purposes, it has only been applied in the limited setting of face verification. In this paper we propose a local region-based inter-session variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database which includes images taken underwater, providing significant real-world session variations. The Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases. It also provides a relative performance improvement of 35% on our challenging fish image dataset.
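The sketch below is a deliberately simplified, generic illustration of the idea behind session-variability compensation, not the GMM-based ISV formulation used in the paper: it estimates the dominant within-class (session) direction from synthetic features and projects it out, which shrinks the within-class scatter relative to the total scatter.

```python
# Simplified illustration of session-variability compensation (not the paper's
# GMM-based ISV): estimate a dominant within-class direction and project it out.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLASSES, N_SESSIONS = 20, 5, 6

# Synthetic features: class mean + strong session nuisance along one direction.
session_dir = rng.normal(size=DIM)
session_dir /= np.linalg.norm(session_dir)
class_means = rng.normal(size=(N_CLASSES, DIM))
X, y = [], []
for c in range(N_CLASSES):
    for _ in range(N_SESSIONS):
        X.append(class_means[c]
                 + 3.0 * rng.normal() * session_dir       # session variation
                 + 0.1 * rng.normal(size=DIM))            # small residual noise
        y.append(c)
X, y = np.array(X), np.array(y)

# Estimate the session-variability subspace from within-class deviations.
within = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in range(N_CLASSES)])
_, _, Vt = np.linalg.svd(within, full_matrices=False)
U = Vt[:1].T                                   # leading within-class direction

def compensate(Z):
    """Remove the estimated session-variability component from each feature."""
    return Z - (Z @ U) @ U.T

def scatter_ratio(Z):
    """Average within-class variance divided by total variance (lower is better)."""
    within_var = np.mean([Z[y == c].var(axis=0).sum() for c in range(N_CLASSES)])
    return within_var / Z.var(axis=0).sum()

print(f"within/total variance, raw features:  {scatter_ratio(X):.3f}")
print(f"within/total variance, compensated:   {scatter_ratio(compensate(X)):.3f}")
```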
Abstract:
Terrorists usually target high-occupancy iconic and public buildings using vehicle-borne incendiary devices in order to claim a maximum number of lives and cause extensive damage to public property. While initial casualties are due to the direct shock of the explosion, collapse of structural elements may greatly increase the total figure. Most of these buildings have been, or are, built without consideration of their vulnerability to such events. Therefore, assessment of the vulnerability and residual capacity of buildings subjected to deliberately exploded bombs is important in providing mitigation strategies to protect the buildings' occupants and the property. Explosive loads and their effects on a building have therefore attracted significant attention in the recent past, and comprehensive and economical design strategies must be developed for future construction. This research investigates the response and damage of reinforced concrete (RC) framed buildings, together with their load-bearing key structural components, under a near-field blast event. Finite element method (FEM) based analysis was used to investigate the structural framing system and components for global stability, followed by a rigorous analysis of key structural components for damage evaluation, using the codes SAP2000 and LS-DYNA respectively. The research involved four important areas of structural engineering: blast load determination, numerical modelling with FEM techniques, material performance under high strain rates, and non-linear dynamic structural analysis. The response and damage of an RC framed building for different blast load scenarios were investigated. The blast influence region for a two-dimensional RC frame was investigated for different load conditions, and the critical region for each loading case was identified. Two types of design method are recommended for RC columns to provide superior residual capacities: RC columns detailed with multi-layer steel reinforcement cages, and composite columns including a central structural steel core. These provide post-blast gravity-load-resisting capacity, compared with a typical RC column, against catastrophic collapse. Overall, this research broadens the current knowledge of blast and residual capacity analysis of RC framed structures and recommends methods to evaluate and mitigate blast impact on key elements of multi-storey buildings.
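As a small illustration of the blast-load-determination step mentioned above, the sketch below computes the Hopkinson-Cranz scaled distance and builds an idealised Friedlander overpressure history. The peak overpressure, positive-phase duration and decay coefficient are placeholder assumptions for illustration, not values from the thesis or from a design manual.

```python
# Hedged sketch of blast-load determination: scaled distance plus an idealised
# Friedlander overpressure history. Peak pressure, duration and decay constant
# below are placeholder assumptions only.
import numpy as np

def scaled_distance(standoff_m, charge_kg_tnt):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

def friedlander(t, peak_kpa, duration_s, decay_b=1.5):
    """Positive-phase overpressure p(t) = p_peak * (1 - t/td) * exp(-b*t/td)."""
    p = peak_kpa * (1.0 - t / duration_s) * np.exp(-decay_b * t / duration_s)
    return np.where((t >= 0.0) & (t <= duration_s), p, 0.0)

if __name__ == "__main__":
    Z = scaled_distance(standoff_m=5.0, charge_kg_tnt=500.0)   # near-field vehicle bomb
    print(f"scaled distance Z = {Z:.2f} m/kg^(1/3)")

    # Placeholder peak pressure and duration; in practice these are read from
    # empirical charts/relations as functions of the scaled distance Z.
    t = np.linspace(0.0, 0.02, 201)                            # 0-20 ms
    p = friedlander(t, peak_kpa=2000.0, duration_s=0.010)
    dt = t[1] - t[0]
    impulse = np.sum((p[:-1] + p[1:]) * dt / 2.0)              # trapezoidal rule, kPa*s
    print(f"assumed peak = 2000 kPa, positive-phase impulse = {impulse:.2f} kPa*s")
```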
Abstract:
During the evolution of the music industry, developments in the media environment have required music firms to adapt in order to survive. Changes in broadcast radio programming during the 1950s; the Compact Cassette during the 1970s; and the deregulation of media ownership during the 1990s are all examples of changes which have heavily affected the music industry. This study explores similar contemporary dynamics, examines how decision makers in the music industry perceive and make sense of the developments, and reveals how they revise their business strategies, based on their mental models of the media environment. A qualitative system dynamics model is developed in order to support the reasoning brought forward by the study. The model is empirically grounded, but is also based on previous music industry research and a theoretical platform constituted by concepts from evolutionary economics and sociology of culture. The empirical data primarily consist of 36 personal interviews with decision makers in the American, British and Swedish music industrial ecosystems. The study argues that the model which is proposed, more effectively explains contemporary music industry dynamics than music industry models presented by previous research initiatives. Supported by the model, the study is able to show how “new” media outlets make old music business models obsolete and challenge the industry’s traditional power structures. It is no longer possible to expose music at one outlet (usually broadcast radio) in the hope that it will lead to sales of the same music at another (e.g. a compact disc). The study shows that many music industry decision makers still have not embraced the new logic, and have not yet challenged their traditional mental models of the media environment. Rather, they remain focused on preserving the pivotal role held by the CD and other physical distribution technologies. Further, the study shows that while many music firms remain attached to the old models, other firms, primarily music publishers, have accepted the transformation, and have reluctantly recognised the realities of a virtualised environment.
Abstract:
The world is increasingly moving towards more open models of publishing and communication. The UK government has demonstrated a firm commitment to ensuring that academic research outputs are made available to all who might benefit from access to them, and its open access policy attempts to make academic publications freely available to readers, rather than being locked behind pay walls or only available to researchers with access to well-funded university libraries. Open access policies have an important role to play in fostering an open innovation ecosystem and ensuring that maximum value is derived from investments in university-based research. But are we ready to embrace this change?
Abstract:
The aim of this study was to validate the Children's Eating Behaviour Questionnaire (CEBQ) in three ethnically and culturally diverse samples of mothers in Australia. Confirmatory factor analysis utilising structural equation modelling examined whether the established 8-factor model of the CEBQ was supported in our three populations: (i) a community sample of first-time mothers allocated to the control group of the NOURISH trial (mean child age = 24 months [SD = 1]; N = 244); (ii) a sample of immigrant Indian mothers of children aged 1–5 years (mean age = 34 months [SD = 14]; N = 203); and (iii) a sample of immigrant Chinese mothers of children aged 1–4 years (mean age = 36 months [SD = 14]; N = 216). The original 8-factor model provided an acceptable fit to the data in the NOURISH sample with minor post hoc re-specifications (two error covariances on Satiety Responsiveness and an item-factor covariance to account for a cross-loading of one item, Fussiness, on Satiety Responsiveness). The re-specified model showed reasonable fit in both the Indian and Chinese samples. Cronbach's α estimates ranged from .73 to .91 in the Australian sample and from .61 to .88 in the immigrant samples. This study supports the appropriateness of the CEBQ in the multicultural Australian context.
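As a small worked example of one reliability statistic reported above, the sketch below computes Cronbach's α for a set of questionnaire items using the standard formula; the item scores are randomly generated placeholders, not CEBQ data.

```python
# Hedged sketch: Cronbach's alpha for a block of questionnaire items.
# The simulated item scores are placeholders; only the formula is standard.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items), all on the same scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(244, 1))                    # shared trait per respondent
    items = latent + 0.8 * rng.normal(size=(244, 5))      # five noisy indicator items
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```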