241 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

Cold-formed steel wall frame systems using lipped or unlipped C-sections and gypsum plasterboard lining are commonly used in the construction of both load-bearing and non-load-bearing walls in residential, commercial and industrial buildings. However, the structural behaviour of unlined and lined stud wall frames is not well understood, and adequate design rules are not available. A detailed research program was therefore undertaken to investigate the behaviour of stud wall frame systems. As the first step in this research, the degree of end fixity of the studs was investigated. The studs are usually connected to the top and bottom tracks, and the degree of end fixity provided by these tracks is not adequately addressed by the design codes. A finite element model of unlined frames was therefore developed and validated using full-scale experimental results. It was then used in a detailed parametric study to develop appropriate design rules for unlined wall frames. This study has shown that, by using appropriate effective length factors, the ultimate loads and failure modes of unlined studs can be accurately predicted using the provisions of the Australian or American cold-formed steel structures design codes. This paper presents the details of the finite element analyses, the results and the recommended design rules for unlined wall frames.
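
A minimal sketch of how an effective length factor enters a stud capacity check of the kind discussed above. The section properties, the factor k = 0.75 and the use of the Euler elastic buckling expression are illustrative assumptions for this sketch, not values recommended by the study or taken from the Australian or American design codes.

```python
import math

# Illustrative stud properties (assumed values, not from the paper)
E = 200e9   # Young's modulus of steel, Pa
I = 1.5e-8  # second moment of area of the C-section, m^4
L = 2.4     # stud length between top and bottom tracks, m
k = 0.75    # assumed effective length factor reflecting partial end fixity from the tracks

# Elastic buckling load using the effective length k*L (Euler expression)
effective_length = k * L
P_cr = math.pi**2 * E * I / effective_length**2

print(f"Effective length: {effective_length:.2f} m")
print(f"Elastic buckling load: {P_cr / 1e3:.1f} kN")
```

Lowering k (more end fixity from the tracks) shortens the effective length and raises the predicted buckling load, which is the mechanism the parametric study quantifies.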

Relevance:

100.00%

Publisher:

Abstract:

The export of sediments from coastal catchments can have detrimental impacts on estuaries and near-shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and the often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
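
A minimal sketch of the general idea of regression-based load and trend estimation with a flow covariate. The synthetic data, the log-log form and the simple ordinary-least-squares fit are assumptions for illustration only; they do not reproduce the compounding-errors structure, the additional predictors or the optimised sampling strategy developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily record: log flow and a log concentration that declines ~20% over 20 years
n_years, days = 20, 365
t = np.arange(n_years * days) / days          # time in years
log_flow = rng.normal(3.0, 0.8, t.size)       # log of daily flow
true_trend = np.log(0.8) / 20                 # 20% decline spread over 20 years
log_conc = 0.5 + 0.6 * log_flow + true_trend * t + rng.normal(0, 0.4, t.size)

# Regression: log(concentration) ~ intercept + log(flow) + linear trend in time
X = np.column_stack([np.ones_like(t), log_flow, t])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)

trend_per_year = beta[2]
print(f"Estimated trend: {100 * (np.exp(20 * trend_per_year) - 1):.1f}% over 20 years")

# Load estimate: flow multiplied by predicted concentration, summed over the record
load = np.sum(np.exp(log_flow) * np.exp(X @ beta))
print(f"Estimated total load (arbitrary units): {load:.0f}")
```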

Relevance:

100.00%

Publisher:

Abstract:

The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model is developed that accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code to customise the solver in order to incorporate the models for sorption and insect extinction. Two types of fumigation delivery are studied, namely fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation, the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. However, the position of a leak does not affect the phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
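
A minimal sketch of an exposure-based insect mortality rule of the kind described above (insect death as a function of phosphine exposure). The concentration history and the lethal concentration-time threshold below are assumed for illustration and are not the model or parameters coupled to the FLUENT simulations in the paper.

```python
import numpy as np

# Assumed phosphine concentration history at one location in the grain bulk (mg/m^3)
hours = np.arange(0, 120)                        # 5 days of fumigation, 1-hour steps
concentration = 200 * (1 - np.exp(-hours / 24))  # illustrative rise towards 200 mg/m^3

# Simple concentration-time (Ct) rule: insects at this location are assumed
# killed once the accumulated Ct product exceeds a lethal threshold.
ct_product = np.cumsum(concentration) * 1.0      # mg h / m^3, 1-hour steps
lethal_ct = 10_000                               # assumed lethal Ct threshold

if ct_product[-1] >= lethal_ct:
    killed_at = np.argmax(ct_product >= lethal_ct)
    print(f"Lethal exposure reached after {hours[killed_at]} hours")
else:
    print("Exposure insufficient: insects at this location survive the fumigation")
```

Locations where the predicted concentration stays low, such as near the base of the silo during tablet fumigation, never accumulate a lethal Ct product, which is how incomplete eradication shows up in such a rule.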

Relevance:

100.00%

Publisher:

Abstract:

Scratch assays are difficult to reproduce. Here we identify a previously overlooked source of variability which could partially explain this difficulty. We analyse a suite of scratch assays in which we vary the initial degree of confluence (initial cell density). Our results indicate that the rate of re-colonisation is very sensitive to the initial density. To quantify the relative roles of cell migration and proliferation, we calibrate the solution of the Fisher–Kolmogorov model to cell density profiles to provide estimates of the cell diffusivity, D, and the cell proliferation rate, λ. This procedure indicates that the estimates of D and λ are very sensitive to the initial density. This dependence suggests that the Fisher–Kolmogorov model does not accurately represent the details of the collective cell spreading process, since this model assumes that D and λ are constants that ought to be independent of the initial density. Since higher initial cell density leads to enhanced spreading, we also calibrate the solution of the Porous–Fisher model to the data as this model assumes that the cell flux is an increasing function of the cell density. Estimates of D and λ associated with the Porous–Fisher model are less sensitive to the initial density, suggesting that the Porous–Fisher model provides a better description of the experiments.
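
A minimal sketch of the Fisher–Kolmogorov model referred to above, solved with an explicit finite-difference scheme. The domain, the initial scratch profile and the values of D and λ are assumed for illustration; they are not the calibrated estimates reported from the assays.

```python
import numpy as np

# Fisher-Kolmogorov model: dc/dt = D d2c/dx2 + lam * c * (1 - c),
# with c the cell density scaled by the carrying capacity.
D, lam = 500.0, 0.05        # assumed diffusivity (um^2/h) and proliferation rate (1/h)
L, nx = 2000.0, 201         # domain width (um) and number of grid points
dx = L / (nx - 1)
dt = 0.2 * dx**2 / (2 * D)  # well inside the explicit stability limit
x = np.linspace(0, L, nx)

# Initial condition: confluent cells with a central "scratch" of zero density
c = np.where(np.abs(x - L / 2) > 300, 0.6, 0.0)

t_end, t = 48.0, 0.0        # simulate 48 hours of re-colonisation
while t < t_end:
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    lap[0] = lap[-1] = 0.0  # crude zero-flux treatment at the boundaries
    c = c + dt * (D * lap + lam * c * (1 - c))
    t += dt

print(f"Minimum density in the scratch after {t_end:.0f} h: {c.min():.3f}")
```

Calibration amounts to adjusting D and λ so that profiles like c(x, t) match the measured cell density profiles; the Porous–Fisher variant replaces the constant D with a density-dependent diffusivity.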

Relevance:

100.00%

Publisher:

Abstract:

Improved forecasting of urban rail patronage is essential for effective policy development and efficient planning for new rail infrastructure. Past modelling and forecasting of urban rail patronage has been based on legacy modelling approaches and has often been conducted at the general level of public transport demand, rather than being specific to urban rail. This project canvassed current Australian practice and international best practice to develop and estimate time series and cross-sectional models of rail patronage for Australian mainland state capital cities. This involved a large online survey of rail riders and non-riders in each of the state capital cities, resulting in a comprehensive database of respondent socio-economic profiles, travel experience, and attitudes to rail and other modes of travel, together with stated preference responses to a wide range of urban travel scenarios. Estimation of the models demonstrated their ability to provide information on the major influences on the urban rail travel decision. Rail fares, congestion and rail service supply all have a strong influence on rail patronage, while a number of less significant factors, such as fuel price and access to a motor vehicle, are also influential. Of note, too, is the relative homogeneity of rail user profiles across the state capitals. Rail users tend to have higher incomes and education levels, and are younger and more likely to be in full-time employment than non-rail users. The analysis reported here represents only a small proportion of what could be accomplished using the survey database. More comprehensive investigation was beyond the scope of the project and has been left for future work.
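
A minimal sketch of the kind of disaggregate choice model that stated-preference data of this type could support: a binary logit in which fares, travel time and congestion enter the rail-versus-car decision. The utility specification, coefficient values and scenario numbers are illustrative assumptions only, not the models estimated in the project.

```python
import math

# Assumed coefficients for a rail-versus-car binary logit (illustrative only)
ASC_RAIL = -0.5     # alternative-specific constant for rail
BETA_COST = -0.08   # utility per dollar of travel cost
BETA_TIME = -0.03   # utility per minute of travel time

def rail_share(rail_fare, rail_time, car_cost, car_time, congestion_delay):
    """Probability of choosing rail under a simple binary logit model."""
    v_rail = ASC_RAIL + BETA_COST * rail_fare + BETA_TIME * rail_time
    v_car = BETA_COST * car_cost + BETA_TIME * (car_time + congestion_delay)
    return math.exp(v_rail) / (math.exp(v_rail) + math.exp(v_car))

# Fares and road congestion enter the predicted rail share directly:
print(f"Base scenario:    {rail_share(5.0, 40, 12.0, 35, 10):.2f}")
print(f"Higher rail fare: {rail_share(8.0, 40, 12.0, 35, 10):.2f}")
print(f"More congestion:  {rail_share(5.0, 40, 12.0, 35, 25):.2f}")
```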

Relevance:

100.00%

Publisher:

Abstract:

Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason for this is that, for the most part, they are assessed over time in an ever-changing environment. In addition, wide variations exist in development pipeline components: the pipeline typically spans anywhere from two to over sixteen years, even for projects located within the same geographical region. Determining the starting and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation, or interest rates, or a combination of both over time adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs vary widely and are in some instances ignored completely. In addition, ambiguities exist in terms of the inclusion of various elements of holding costs and the assessment of their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, and their quantification could maximise the opportunities for delivering affordable housing. This paper builds on earlier investigations into the elements related to holding costs, providing theoretical modelling of the size of their impact, specifically on the end user. At this point the research relies upon quantitative data sets; additional qualitative analysis (not included here) will be needed to account for variations between developers' expectations and the outcomes they actually achieve. Although this research stops short of a regional or international comparison study, it results in an improved understanding of the relationship between holding costs, regulatory charges and housing affordability.
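
A minimal sketch of one way holding costs can be made visible: compounding the capital tied up in a project over the development pipeline at an assumed carrying rate. The acquisition cost, the rate and the pipeline lengths are illustrative assumptions, not figures from the study, but they show why the length of the pipeline matters so much.

```python
# Illustrative holding-cost calculation for a greenfield lot (assumed figures)
land_cost = 200_000.0   # capital tied up at acquisition, $
annual_rate = 0.07      # assumed carrying rate (interest and/or inflation)

def holding_cost(capital, rate, years):
    """Cost of holding `capital` for `years` at a compound annual `rate`."""
    return capital * ((1 + rate) ** years - 1)

# The development pipeline can span anywhere from roughly 2 to over 16 years:
for years in (2, 8, 16):
    cost = holding_cost(land_cost, annual_rate, years)
    print(f"{years:>2} years: holding cost ${cost:,.0f} "
          f"({100 * cost / land_cost:.0f}% of the acquisition cost)")
```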

Relevance:

100.00%

Publisher:

Abstract:

Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods, for different particle sizes, and in different parts of the world. The choice of the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions.

Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions.

Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume).

Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.

Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data from which to derive emission factors.

Recommendations and perspectives: In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
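
A minimal sketch of how emission factors of this kind are applied to quantify fleet emissions on a road link: emissions are summed over vehicle types as the product of an emission factor and vehicle kilometres travelled. The vehicle categories, emission factor values and traffic volumes below are placeholders, not the factors derived by the study.

```python
# Quantify tailpipe particle number emissions for one road link as
# sum over vehicle types of (emission factor * vehicle kilometres travelled).
# All numbers are placeholders, not the emission factors from the study.
emission_factors = {            # particle number per vehicle-km
    "passenger_car": 2.0e14,
    "heavy_duty_diesel": 6.0e15,
    "bus": 4.0e15,
}
daily_vkt = {                   # vehicle kilometres travelled per day on the link
    "passenger_car": 30_000,
    "heavy_duty_diesel": 2_500,
    "bus": 800,
}

total = sum(emission_factors[v] * daily_vkt[v] for v in emission_factors)
print(f"Estimated particle number emissions: {total:.2e} particles/day")
for v in emission_factors:
    share = emission_factors[v] * daily_vkt[v] / total
    print(f"  {v:>18}: {share:.0%} of the total")
```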

Relevance:

100.00%

Publisher:

Abstract:

How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics and policy makers. It is broadly anticipated that agent-based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we discuss a recently reported case where a biased worm in an election debate led to significant distortions in participants' reports as to who won the debate (Davis et al. 2011). Participants in a different social context thus drew different conclusions about the perceived winner of the same debate, with significant differences between the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses. A modelling technology that could account for such strong social contextual effects would benefit regulatory bodies that need to navigate between multiple interests and concerns, and we present one viable avenue for constructing such a technology. A geometric approach is presented, in which the internal state of an agent is represented in a vector space and their social context is naturally modelled as a set of basis states chosen with reference to the problem space.
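
A minimal sketch of the geometric idea described above: an agent's internal state as a vector, with the social context represented as a choice of basis, so that the same internal state yields different response probabilities under different contexts. The two-dimensional state, the basis rotation angles and the resulting probabilities are illustrative assumptions only.

```python
import numpy as np

# Agent's internal state as a unit vector in a 2-D "attitude" space
state = np.array([np.cos(np.radians(35)), np.sin(np.radians(35))])

def response_probabilities(state, context_angle_deg):
    """Project the state onto a context-dependent orthonormal basis; the squared
    projections give the probabilities of the two available responses in that context."""
    theta = np.radians(context_angle_deg)
    basis = np.array([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
    amplitudes = basis @ state
    return amplitudes ** 2        # sums to 1 for an orthonormal basis

# The same internal state, judged in two different social contexts
print("Neutral context:", response_probabilities(state, 0.0))
print("Biased context: ", response_probabilities(state, 30.0))
```

Rotating the basis (changing the social context) changes the response probabilities without changing the agent's internal state, which is the kind of context effect the debate-worm example illustrates.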

Relevance:

100.00%

Publisher:

Abstract:

There is a growing need for parametric design software that communicates building performance feedback during early architectural exploration to support decision-making. This paper examines how the loop between the design and analysis processes can be closed to provide active and concurrent feedback between the architecture and services engineering domains. It presents the structure of an openly customisable design system that couples parametric modelling and energy analysis software to allow designers to assess the performance of early design iterations quickly. Finally, it discusses how user interactions with the system foster information exchanges that facilitate the sharing of design intelligence across disciplines.
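
A minimal sketch of the kind of coupling loop described: parametric variants are generated, each is passed to an energy-analysis step, and the ranked results are returned as feedback for the next design iteration. The parameter names and the surrogate energy function are placeholders; the paper's system couples actual parametric modelling and energy analysis software rather than a formula like the one below.

```python
# Close the loop between parametric design and energy analysis:
# generate variants, evaluate each one, and feed the results back to the designer.

def generate_variants():
    """Parametric variants: window-to-wall ratio and shading depth (placeholders)."""
    return [{"wwr": w / 10, "shade_depth": d / 10}
            for w in range(2, 9, 2) for d in range(0, 7, 3)]

def energy_analysis(variant):
    """Surrogate for an external energy simulation (illustrative formula only)."""
    heating = 40 * (1 - variant["wwr"])                          # notional heating demand
    cooling = 60 * variant["wwr"] * (1 - 0.5 * variant["shade_depth"])
    return heating + cooling                                     # kWh/m2.year, notional

# Concurrent feedback: rank variants so designers can assess early iterations quickly
results = sorted(generate_variants(), key=energy_analysis)
for variant in results[:3]:
    print(f"wwr={variant['wwr']:.1f} shade={variant['shade_depth']:.1f} "
          f"-> {energy_analysis(variant):.1f} kWh/m2.year")
```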

Relevance:

100.00%

Publisher:

Abstract:

The complex interaction of the bones of the foot has been explored in detail in recent years, which has led to the acknowledgement in the biomechanics community that the foot can no longer be considered a single rigid segment. With the advance of motion analysis technology, it has become possible to quantify the biomechanics of the simplified units, or segments, that make up the foot. Advances in technology, coupled with falling hardware prices, have resulted in the uptake of more advanced tools for clinical gait analysis. The increased use of these techniques in clinical practice requires defined standards for the modelling and reporting of foot and ankle kinematics. This systematic review aims to provide a critical appraisal of commonly used foot and ankle marker sets designed to assess kinematics, and thus to provide a theoretical background for the development of modelling standards.
