947 results for curved-layer fused deposition modelling (FDM)


Abstract:

Fire design is an essential part of the overall design procedure of structural steel members and systems. Conventionally, an increased fire rating is provided simply by adding more plasterboards to light gauge steel frame (LSF) stud walls, which is inefficient. Recently, however, Kolarkar & Mahendran (2008) developed a new composite wall panel system in which the insulation is located externally, between the plasterboards on both sides of the steel wall frame. Numerical and experimental studies were undertaken to investigate the structural and fire performance of LSF walls using the new composite panels under axial compression. This paper presents the details and results of the numerical studies of the new LSF walls, along with brief details of the experimental studies. Experimental and numerical results were compared to validate the developed numerical model. The paper also describes the structural and fire performance of the new LSF wall system in comparison to traditional wall systems using cavity insulation.

Abstract:

The exchange of design models in the design and construction industry is evolving away from 2-dimensional computer-aided design (CAD) and paper towards semantically rich 3-dimensional digital models. This approach, known as Building Information Modelling (BIM), is anticipated to become the primary means of information exchange between the various parties involved in construction projects. From a technical perspective, the domain represents an interesting study in model-based interoperability, since the models are large and complex, and the industry is one in which collaboration is a vital part of business. In this paper, we present our experiences with issues of model-based interoperability in exchanging building information models between various tools, and in implementing tools which consume BIM models, particularly using the industry-standard IFC data modelling format. We report on the successes and challenges in these endeavours, as the industry moves further towards fully digitised information exchange.
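
As a concrete illustration of consuming an IFC model (our sketch, not the paper's tooling), the open-source ifcopenshell library can load an IFC file and query entities by schema type; the file name here is hypothetical:

```python
# Illustrative sketch only: load an IFC model and list its wall entities.
import ifcopenshell

model = ifcopenshell.open("building.ifc")  # hypothetical file path

# IFC entities are queried by their schema type name.
for wall in model.by_type("IfcWall"):
    # GlobalId and Name are standard attributes of IFC rooted entities.
    print(wall.GlobalId, wall.Name)
```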

Abstract:

Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason for this is that, for the most part, they accrue over time in an ever-changing environment. In addition, wide variations exist in development pipeline durations: projects typically span anywhere from two to over sixteen years, even when located within the same geographical region. Determining the starting and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation, or interest rates, or a combination of both over time, adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs vary widely, and in some instances holding costs are ignored completely. In addition, it may be demonstrated that ambiguities exist in the inclusion of various elements of holding costs and the assessment of their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, and their quantification could maximise the opportunities for delivering affordable housing. This paper seeks to build on earlier investigations into the elements related to holding costs, providing theoretical modelling of the size of their impact, specifically on the end user. At this point the research relies upon quantitative data sets; additional qualitative analysis (not included here) will be relevant to account for certain variations between developers' expectations and actual outcomes. Although this research stops short of cross-referencing with a regional or international comparison study, it results in an improved understanding of the relationship between holding costs, regulatory charges, and housing affordability.
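
As a toy illustration of why holding costs compound with pipeline duration (our example with assumed figures, not data from the paper):

```python
# Illustrative only: compound holding cost on land acquisition capital,
# under assumed figures (not from the paper).
acquisition_cost = 500_000   # $ paid at the start of the pipeline
annual_rate = 0.07           # assumed financing/opportunity-cost rate
years = 5                    # assumed pipeline duration

holding_cost = acquisition_cost * ((1 + annual_rate) ** years - 1)
print(f"Holding cost over {years} years: ${holding_cost:,.0f}")
# -> roughly $201,276, i.e. ~40% of the upfront acquisition cost
```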

Abstract:

This paper reviews the main studies on transit users’ route choice in the context of transit assignment. The studies are categorized into three groups: static transit assignment, within-day dynamic transit assignment, and emerging approaches. The motivations and behavioural assumptions of these approaches are re-examined. The first group includes shortest-path heuristics in all-or-nothing assignment, random utility maximization route-choice models in stochastic assignment, and user-equilibrium-based assignment. The second group covers within-day dynamics in transit users’ route choice, transit network formulations, and dynamic transit assignment. The third group introduces the emerging studies on behavioural complexities, day-to-day dynamics, and real-time dynamics in transit users’ route choice. Future research directions are also discussed.
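
A minimal sketch of the random-utility idea behind stochastic assignment (our illustration; the route costs and scale parameter are assumed): a multinomial logit model turns generalised route costs into choice probabilities.

```python
# Sketch of a random-utility (multinomial logit) route-choice model of the
# kind used in stochastic transit assignment; costs and scale are assumed.
import math

def logit_route_probabilities(costs, theta=0.5):
    """P(route i) = exp(-theta * c_i) / sum_j exp(-theta * c_j)."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three alternative routes with generalised costs (minutes, assumed).
print(logit_route_probabilities([20.0, 22.0, 25.0]))
```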

Abstract:

With an increasing level of collaboration amongst researchers, software developers and industry practitioners over the past three decades, building information modelling (BIM) is now recognized as an emerging technological and procedural shift within the architecture, engineering and construction (AEC) industry. BIM is not only considered a way to make a profound impact on the AEC professions, but is also regarded as an approach to assist the industry to develop new ways of thinking and practice. Despite the widespread development and recognition of BIM, succinct and systematic reviews of existing BIM research and achievements are scarce. It is also necessary to take stock of existing applications and have a fresh look at where BIM should be heading and how it can benefit from the advances being made. This paper first presents a review of BIM research and achievements in the AEC industry. A number of suggestions are then made for future research in BIM. This paper maintains that the value of BIM during the design and construction phases has been well documented over the last decade, and that new research needs to expand the level of development and analysis from the design/build stage to post-construction and facility asset management. New research in BIM could also move beyond the traditional building type to managing a broader range of facilities and built assets and providing preventative maintenance schedules for sustainable and intelligent buildings.

Abstract:

Many infrastructure and utility systems, such as electricity and telecommunications in Europe and North America, used to be operated as monopolies, if not state-owned enterprises. However, they have now been disintegrated into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world. Continuous refinements are still conducted to allow better utilisation of railway resources and quality of service. There has been growing interest within the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders in terms of their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvement of the system, followed by a new cycle of post-evaluations. However, direct implementation of these changes is often costly and the consequences take a long period of time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means to evaluate a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system. In MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. It has been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore beneficial to investigate the suitability or feasibility of applying agent modelling to railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. The problems emerging from the restructuring process are then identified in Section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
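
A minimal MAS-style sketch (our illustration, not the paper's model; agent names and valuations are hypothetical): an infrastructure agent allocates a track slot among self-interested operator agents.

```python
# Toy multi-agent sketch: an infrastructure agent auctions a track slot and
# operator agents bid according to their own private objectives.
class OperatorAgent:
    def __init__(self, name, value_of_slot):
        self.name = name
        self.value_of_slot = value_of_slot  # private valuation of the slot

    def bid(self):
        # Proactive, self-interested behaviour: bid a fraction of the value.
        return 0.9 * self.value_of_slot

class InfrastructureAgent:
    def allocate(self, operators):
        # Reactive behaviour: award the slot to the highest bidder.
        return max(operators, key=lambda op: op.bid())

operators = [OperatorAgent("FreightCo", 1200), OperatorAgent("PassengerCo", 1500)]
winner = InfrastructureAgent().allocate(operators)
print(f"Slot awarded to {winner.name}")
```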

Abstract:

This paper presents the development of a simulation model of passenger flow in a metro station. The model allows studies of passenger flow in stations with different layouts and facilities, thus providing valuable information to the operators, such as passenger flow and passenger density at critical locations and passenger-handling facilities within a station. The adoption of the concept of Petri nets in the simulation model is discussed. Examples are provided to demonstrate its application to passenger flow analysis, train scheduling and the testing of alternative station layouts.
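
To make the Petri-net concept concrete (a toy sketch of ours, not the paper's model): places hold tokens (here, passengers) and a transition fires while its input places carry enough tokens.

```python
# Minimal Petri-net sketch: a transition is given as (inputs, outputs),
# two dicts mapping place names to arc weights.
def fire(marking, transition):
    """Fire the transition if enabled; return whether it fired."""
    inputs, outputs = transition
    if all(marking.get(p, 0) >= w for p, w in inputs.items()):
        for p, w in inputs.items():
            marking[p] -= w
        for p, w in outputs.items():
            marking[p] = marking.get(p, 0) + w
        return True
    return False  # transition not enabled

# Passengers move from the concourse through a gate to the platform.
marking = {"concourse": 10, "platform": 0}
gate = ({"concourse": 1}, {"platform": 1})
while fire(marking, gate):
    pass
print(marking)  # {'concourse': 0, 'platform': 10}
```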

Abstract:

Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigation into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with a focus on its distributed coordination function (DCF), for soft real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. The performance indices evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantees in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
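
The abstract does not give the model's structure, but as a generic illustration of the machinery behind such analyses, the sketch below computes the stationary distribution of a small discrete-time Markov chain (the states and transition probabilities are assumed, not the paper's DCF model); QoS indices such as throughput and loss ratio are then typically derived from these stationary probabilities.

```python
# Stationary distribution of an assumed 3-state channel model:
# idle, successful transmission, collision.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1],
              [0.7, 0.1, 0.2]])  # rows sum to 1

# Solve pi P = pi, sum(pi) = 1, via the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()
print(dict(zip(["idle", "success", "collision"], pi.round(3))))
```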

Abstract:

Real-world business processes are resource-intensive. In work environments human resources usually multitask, both human and non-human resources are typically shared between tasks, and multiple resources are sometimes necessary to undertake a single task. However, current Business Process Management Systems focus on task-resource allocation in terms of individual human resources only and lack support for a full spectrum of resource classes (e.g., human or non-human, application or non-application, individual or teamwork, schedulable or unschedulable) that could contribute to tasks within a business process. In this paper we develop a conceptual data model of resources that takes into account the various resource classes and their interactions. The resulting conceptual resource model is validated using a real-life healthcare scenario.
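
A toy rendering (ours, not the paper's validated model) of the resource classes named above, using Python dataclasses, with the healthcare flavour of the validation scenario:

```python
# Sketch of a resource data model spanning the classes listed in the abstract.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str
    human: bool          # human vs non-human
    application: bool    # application vs non-application
    schedulable: bool    # schedulable vs unschedulable

@dataclass
class Team:
    """Multiple resources jointly undertaking a single task (teamwork)."""
    members: List[Resource] = field(default_factory=list)

@dataclass
class Task:
    label: str
    allocated: List[Resource] = field(default_factory=list)  # shared resources

nurse = Resource("nurse", human=True, application=False, schedulable=True)
mri = Resource("MRI scanner", human=False, application=False, schedulable=True)
scan = Task("patient scan", allocated=[nurse, mri])
print(scan)
```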

Abstract:

In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; the sole reliance on these procedures has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
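
A minimal, illustrative example of the contrast Breiman drew (our choice of dataset and models, not from the article): the same regression task fitted with a classical statistical model and an algorithmic data-mining model in scikit-learn.

```python
# Compare a linear model with a random forest on the same prediction task.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    score = cross_val_score(model, X, y, cv=5).mean()  # mean R^2 across folds
    print(type(model).__name__, round(score, 3))
```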

Abstract:

The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles, produced by the chamber photo-oxidation of α-pinene in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict the cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured using a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of the water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data in predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and the relative humidity used to provide the model input. However, consistent results were observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. Given the differences between HTDMA data from validated and extensively used instruments, the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty. In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality assured. Because of the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of the solute concentration.
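
The abstract does not name the single-parameter model, but it is plausibly the κ-Köhler formulation of Petters & Kreidenweis (2007); that attribution is our assumption. For reference, that model expresses the saturation ratio over a growing droplet as:

```latex
% kappa-Koehler equation: saturation ratio S over an aqueous droplet of
% diameter D grown from a dry particle of diameter D_d with a single
% hygroscopicity parameter kappa.
\[
  S(D) \;=\; \frac{D^{3}-D_{d}^{3}}{D^{3}-D_{d}^{3}\,(1-\kappa)}
  \;\exp\!\left(\frac{4\,\sigma_{s/a}\,M_{w}}{R\,T\,\rho_{w}\,D}\right)
\]
% sigma_{s/a}: droplet surface tension; M_w, rho_w: molar mass and density
% of water; R: gas constant; T: temperature. The CCN activity follows from
% the maximum of S(D), the critical supersaturation.
```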

Abstract:

Software used by architectural and industrial designers has moved from being a tool for drafting towards use in verification, simulation, project management and remote project sharing. In more advanced models, parameters for the designed object can be adjusted so that a family of variations can be produced rapidly. With advances in computer-aided design technology, numerous design options can now be generated and analyzed in real time. However, the use of digital tools to support design as an activity is still at an early stage, and their functionality with regard to the design process has largely been limited. To date, major CAD vendors have not developed an integrated tool that is able both to leverage specialized design knowledge from various discipline domains (known as expert knowledge systems) and to support the creation of design alternatives that satisfy different forms of constraints. We propose that evolutionary computing and machine learning be linked with parametric design techniques to record and respond to a designer’s own way of working and design history. It is expected that this will lead to results that influence future work on design support systems (ergonomics and interface), as well as on implicit constraint and problem definition for problems that are difficult to quantify.
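
A toy sketch (ours, not the authors' proposal) of coupling evolutionary computing with a parametric model: a single design parameter is evolved against a fitness function that would, in a real system, encode constraints and learned designer preferences.

```python
# Simple evolutionary loop over one hypothetical design parameter (a width).
import random

def fitness(width):
    # Hypothetical objective: prefer widths near 3.2 m (e.g. a learned target).
    return -(width - 3.2) ** 2

population = [random.uniform(1.0, 6.0) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # selection
    children = [random.choice(parents) + random.gauss(0, 0.1)   # mutation
                for _ in range(10)]
    population = parents + children
print(f"best width after evolution: {max(population, key=fitness):.2f} m")
```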

Abstract:

Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods, for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions.

Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles, derived from measurement studies, was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors for modelling urban fleet emissions.

Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were vehicle type (all particle metrics), instrumentation (particle number and PM2.5), road type (PM10), and size range measured and speed limit on the road (particle volume).

Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.

Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors of their own.

Recommendations and perspectives: In urban areas, motor vehicles continue to be a major source of particulate matter pollution, and of ultrafine particles in particular. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
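
A sketch of the kind of meta-analysis described (not the authors' code; the file, column names and model form are placeholders): regressing published emission factors on categorical explanatory variables and reading off the variance explained.

```python
# Fit an OLS model of (log) emission factors on categorical predictors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("emission_factors.csv")  # hypothetical compiled dataset
model = smf.ols(
    "log_emission_factor ~ C(vehicle_type) + C(instrumentation) + C(road_type)",
    data=df,
).fit()
print(model.summary())  # R-squared indicates the variance explained
```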

Abstract:

The multi-criteria decision making methods Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA), and the two-way Positive Matrix Factorization (PMF) receptor model, were applied to airborne fine-particle compositional data collected at three sites in Hong Kong during two monitoring campaigns, held from November 2000 to October 2001 and November 2004 to October 2005. PROMETHEE/GAIA indicated that air quality at the three sites was worse during the later monitoring campaign, and that the ranking of air quality at the sites during each campaign was: rural site > urban site > roadside site. The PMF analysis, on the other hand, identified six common sources at all of the sites (diesel vehicles, fresh sea salt, secondary sulphate, soil, aged sea salt and oil combustion), which together accounted for approximately 68.8 ± 8.7% of the fine particle mass at the sites. In addition, road dust, gasoline vehicles, biomass burning, secondary nitrate, and metal processing were identified at some of the sites. Secondary sulphate was found to be the largest contributor to the fine particle mass at the rural and urban sites, with vehicle emissions a major contributor at the roadside site. The PMF results are broadly similar to those obtained in a previous analysis by PCA/APCS; however, the PMF analysis resolved more factors at each site than the PCA/APCS. In addition, the study demonstrated that combining results from multi-criteria decision making analysis and receptor modelling can provide more detailed information to formulate the scientific basis for mitigating air pollution in the region.
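
PMF itself minimises residuals weighted by measurement uncertainties; as a simplified analogue (our sketch with synthetic data, not the study's analysis), scikit-learn's non-negative matrix factorisation illustrates the X ≈ GF decomposition into source contributions and source profiles.

```python
# Factorise a (samples x species) composition matrix into non-negative
# source contributions G and source profiles F.
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.default_rng(0).normal(size=(200, 15)))  # synthetic data
nmf = NMF(n_components=6, init="nndsvda", random_state=0)    # 6 sources, as in the study
G = nmf.fit_transform(X)   # source contributions per sample
F = nmf.components_        # source profiles (species signatures)
print(G.shape, F.shape)    # (200, 6) (6, 15)
```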

Abstract:

Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km² in area, and the alluvial deposits are typically 1-3 km wide and up to 20-35 m deep in the main channels, reducing in size in the subcatchments. The alluvium is configured as a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers comes largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from the underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third of the valley. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley-wide Modflow model was developed in 2002 (KBR), but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a river confluence in a central area (Wilson, 2005), with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). These simulation models were all based on conceptual hydrogeological models of differing scale and detail.