Abstract:
Computational Fluid Dynamics (CFD) simulations are widely used in mechanical engineering. Although achieving a high level of confidence in numerical modelling is of crucial importance in the field of turbomachinery, verification and validation of CFD simulations are particularly challenging, especially for the complex flows encountered in radial turbines. Comprehensive studies of radial machines are available in the literature. Unfortunately, none of them include enough detailed geometric data to be properly reproduced, and so they cannot be considered for academic research and validation purposes. As a consequence, design improvements of such configurations are difficult. Moreover, it seems that well-developed analyses of radial turbines are used in commercial software but are not available in the open literature, especially at high pressure ratios. It is the purpose of this paper to provide a fully open set of data to reproduce the exact geometry of the high-pressure-ratio, single-stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. First, preliminary one-dimensional meanline design and analysis are performed using the commercial software RITAL from Concepts-NREC in order to establish a complete reference test case available for turbomachinery code validation. The proposed design of the existing turbine is then carefully and successfully checked against the geometrical and experimental data partially published in the literature. Then, three-dimensional Reynolds-Averaged Navier-Stokes simulations are conducted by means of the Axcent-PushButton CFDR CFD software. The effect of the tip clearance gap is investigated in detail for a wide range of operating conditions. The results confirm that the 3D geometry is correctly reproduced. They also reveal that the turbine operates with shocked flow, although it was designed for high-subsonic conditions, and they highlight the importance of the diffuser.
Abstract:
The present paper presents and discusses the use of different codes for the numerical simulation of a radial-inflow turbine. A radial-inflow turbine test case was selected from the published literature [1] and commercial codes (Fluent and CFX) were used to perform the steady-state numerical simulations. An in-house compressible-flow simulation code, Eilmer3 [2], was also adapted in order to make it suitable for turbomachinery simulations, and preliminary results are presented and discussed. The code itself as well as its adaptation, comprising the addition of terms for the rotating frame of reference, programmable boundary conditions for periodic boundaries and a mixing plane interface between the rotating and non-rotating blocks, are also discussed. Several cases with different orders of complexity in terms of geometry were considered and the results were compared across the different codes. The agreement between these results and published data is also discussed.
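As a rough illustration of the rotating-frame adaptation described above (the exact formulation used in the Eilmer3 modification is not reproduced here, so this is a generic sketch assuming a constant rotation rate), solving the momentum equations in a frame rotating with angular velocity Ω typically introduces Coriolis and centrifugal source terms of the form:

```latex
% Generic momentum source terms for a steadily rotating frame (illustrative only).
% rho: density, u_rel: velocity in the rotating frame, Omega: angular velocity,
% r: position vector relative to the rotation axis.
\[
  \mathbf{S}_{\mathrm{rot}}
    = -\rho \left[ 2\,\boldsymbol{\Omega} \times \mathbf{u}_{\mathrm{rel}}
      + \boldsymbol{\Omega} \times \left( \boldsymbol{\Omega} \times \mathbf{r} \right) \right]
\]
```

The mixing plane interface mentioned above then typically couples the rotating and stationary blocks by exchanging circumferentially averaged flow quantities.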
Abstract:
In order to obtain a more compact Superconducting Fault Current Limiter (SFCL), a special geometry of core and AC coil is required. This results in a unique magnetic flux pattern which differs from those associated with conventional round core arrangements. In this paper, the magnetic flux density within a Fault Current Limiter (FCL) is described. Both experimental and analytical approaches are considered. A small-scale prototype of an FCL was constructed in order to conduct the experiments. This prototype comprises a single phase. The analysis covers both the steady state and the short-circuit condition. Simulation results were obtained using commercial software based on the Finite Element Method (FEM). The magnetic flux saturating the cores, the leakage magnetic flux giving rise to electromagnetic forces, and the leakage magnetic flux flowing in the enclosing tank are computed.
Abstract:
Railroad corridors contain a large number of Insulated Rail Joints (IRJs) that act as safety-critical elements in the circuitries of the signaling and broken rail identification systems. IRJs are regarded as sources of excitation for the passage of loaded wheels, leading to high impact forces; these forces in turn cause dips, cross levels and twists in the railroad geometry in close proximity to the sections containing the IRJs, in addition to local damage to the railhead of the IRJs. Therefore, systematic monitoring of the IRJs in a railroad is prudent to mitigate the potential risk of their sudden failure (e.g., broken tie plates) under traffic. This paper presents a simple method of periodic recording of images using time-lapse photography and total station surveying measurements to understand the ongoing deterioration of the IRJs and their surroundings. Over a 500 day period, data were collected to examine the trends in narrowing of the joint gap due to plastic deformation of the railhead edges, and the dips, cross levels and twists caused to the railroad geometry by the settlement of ties (sleepers) around the IRJs. The results show that the average progressive settlement beneath the IRJs is larger than that under the continuously welded rail, which leads to excessive deviation of the railroad profile, cross levels and twists.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
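As a minimal illustration of the combination step described above (MCDTK itself is written in Java; this Python sketch, with assumed array shapes and a placeholder calibration factor, only shows the idea of weighting per-beam dose grids by their planned monitor units):

```python
import numpy as np

def combine_beam_doses(beam_doses, monitor_units, dose_per_mu=1.0):
    """Sum per-beam Monte Carlo dose grids (all the same shape, in dose per MU)
    weighted by the monitor units taken from the exported plan."""
    total = np.zeros_like(beam_doses[0])
    for dose, mu in zip(beam_doses, monitor_units):
        total += dose * mu * dose_per_mu  # dose_per_mu is an assumed calibration factor
    return total

# Hypothetical example: three beams on a small dose grid
beams = [np.random.rand(64, 64, 40) for _ in range(3)]  # placeholder per-beam dose grids
mus = [120.0, 95.0, 140.0]                               # monitor units from the plan
plan_dose = combine_beam_doses(beams, mus)               # combined 3D dose distribution
```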
Abstract:
A major factor in the stratospheric collection process is the relative density of particles at the collection altitude. With current aircraft-borne collector plate geometries, one potential extraterrestrial particle of about 10 micron diameter is collected approximately every hour. However, a new design for the collector plate, termed the Large Area Collector (LAC), allows a factor of 10 improvement in collection efficiency over the current conventional geometry. The implementation of the LAC design on future stratospheric collection flights will provide many opportunities for additional data on both terrestrial and extraterrestrial phenomena. With the improvement in collection efficiency, LACs may provide a suitable number of potential extraterrestrial particles in one short flight of between 4 and 8 hours duration. Alternatively, total collection periods of approximately 40 hours enhance the probability that rare particles can be retrieved from the stratosphere. This latter approach is of great value for the cosmochemist who may wish to perform sophisticated analyses on interplanetary dust particles of more than a picogram. The former approach, involving short-duration flights, may also provide invaluable data on the source of many extraterrestrial particles. The time dependence of particle entry to the collection altitude is an important parameter which may be correlated with specific global events (e.g., meteoroid streams), provided the collection time is known to an accuracy of 2 hours.
Abstract:
The LiteSteel Beam (LSB) is a new hollow flange section developed in Australia with a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. The LSB is subject to a relatively new Lateral Distortional Buckling (LDB) mode when used as a flexural member. Unlike the commonly observed lateral torsional buckling, lateral distortional buckling of LSBs is characterised by cross-sectional change due to web distortion. Lateral distortional buckling causes significant moment capacity reduction for LSBs with intermediate spans. Therefore a detailed investigation was undertaken to determine methods of reducing the effects of lateral distortional buckling in LSB flexural members. For this purpose the use of web stiffeners was investigated using finite element analyses of LSBs with different web stiffener spacings and sizes. It was found that the use of 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges at third-span points considerably reduced the lateral distortional buckling effects in LSBs. Suitable design rules were then developed to calculate the enhanced elastic lateral distortional buckling moments and the higher ultimate moment capacities of LSBs with the chosen web stiffener arrangement. This paper presents the details of this investigation and the results.
Abstract:
In recent years, there has been a growing interest from the design and construction community in adopting Building Information Models (BIM). BIM provides semantically-rich information models that explicitly represent both 3D geometric information (e.g., component dimensions) and non-geometric properties (e.g., material properties). While the richness of design information offered by BIM is evident, there are still tremendous challenges in getting construction-specific information out of BIM, limiting the usability of these models for construction. In this paper, we describe our approach for extracting construction-specific design conditions from a BIM model based on user-defined queries. This approach leverages an ontology of features we are developing to formalize the design conditions that affect construction. Our current implementation analyzes the component geometry and topological relationships between components in a BIM model represented using the Industry Foundation Classes (IFC) to identify construction features. We describe the reasoning process implemented to extract these construction features, and provide a critique of the IFCs' ability to support the querying process. We use examples from two case studies to illustrate the construction features, the querying process, and the challenges involved in deriving construction features from an IFC model.
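For illustration only (the paper's own implementation and query language are not reproduced here), a user-defined query over an IFC model can be sketched with the open-source ifcopenshell library; the file name is a placeholder and the "connected to a wall" criterion is simply an example of a topological condition:

```python
import ifcopenshell

model = ifcopenshell.open("building.ifc")  # placeholder IFC file

# Example query: report element-to-element connections that involve a wall,
# as a stand-in for a construction-specific topological feature.
for rel in model.by_type("IfcRelConnectsElements"):
    relating, related = rel.RelatingElement, rel.RelatedElement
    if relating.is_a("IfcWall") or related.is_a("IfcWall"):
        print(relating.is_a(), relating.Name, "--", related.is_a(), related.Name)
```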
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable and healthy environments for building occupants. Yet, there is still no consensus on how to assess what constitutes good daylighting design. Currently, amongst building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. Newer computer software for daylighting analysis implements more advanced, climate-based daylight metrics (CBDM). Yet these tools (new metrics and simulation tools) are not currently well understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most used by industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that can import a wide variety of file formats or can be integrated into current 3D modelling software or packages. These tools need to be able to calculate both point-in-time simulations and annual analyses. There is a current need among these software solutions for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Currently, plug-in-based tools are trying to meet this need through third-party analysis; however, some of these packages are heavily reliant on their host program. Such programs nevertheless allow dynamic daylighting simulation, making it easier to calculate accurate daylighting regardless of which modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
Abstract:
Small-angle and ultra-small-angle neutron scattering (SANS and USANS), low-pressure adsorption (N2 and CO2), and high-pressure mercury intrusion measurements were performed on a suite of North American shale reservoir samples, providing the first ever comparison of all these techniques for characterizing the complex pore structure of shales. The techniques were used to gain insight into the nature of the pore structure, including pore geometry, pore size distribution and accessible versus inaccessible porosity. Reservoir samples for analysis were taken from currently active shale gas plays including the Barnett, Marcellus, Haynesville, Eagle Ford, Woodford, Muskwa, and Duvernay shales. Low-pressure adsorption revealed strong differences in BET surface area and pore volumes for the sample suite, consistent with variability in composition of the samples. The combination of CO2 and N2 adsorption data allowed pore size distributions to be created for micro-, meso- and macroporosity up to a limit of ~1000 Å. Pore size distributions are either uni- or multi-modal. The adsorption-derived pore size distributions for some samples are inconsistent with mercury intrusion data, likely owing to a combination of grain compression during high-pressure intrusion, and the fact that mercury intrusion yields information about pore throat rather than pore body distributions. SANS/USANS scattering data indicate a fractal geometry (power-law scattering) for a wide range of pore sizes and provide evidence that nanometer-scale spatial ordering occurs in the lower mesopore–micropore range for some samples, which may be associated with inter-layer spacing in clay minerals. SANS/USANS pore radius distributions were converted to pore volume distributions for direct comparison with adsorption data. For the overlap region between the two methods, the agreement is quite good. Accessible porosity in the pore size (radius) range 5 nm–10 μm was determined for a Barnett shale sample using the contrast-matching method with pressurized deuterated methane fluid. The results demonstrate that accessible porosity is pore-size dependent.
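For context on the power-law scattering mentioned above (this is the standard small-angle scattering relationship for fractal systems, not a result specific to this paper), fractal pore geometry appears as a power-law dependence of intensity on the scattering vector magnitude Q:

```latex
% Power-law (fractal) small-angle scattering, standard form shown for context only.
% I(Q): scattered intensity, Q: scattering vector magnitude, B: flat background,
% D_m: mass-fractal dimension, D_s: surface-fractal dimension.
\[
  I(Q) \;\approx\; A\,Q^{-n} + B, \qquad
  n =
  \begin{cases}
    D_m     & \text{(mass fractal, } 1 < n < 3\text{)} \\
    6 - D_s & \text{(surface fractal, } 3 < n < 4\text{)}
  \end{cases}
\]
```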
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton–Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step but rather require computation of matrix–vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ R^(n×n) and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton–Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
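As a minimal illustration of the quantity involved (this is not the thesis' variable-stepsize Krylov algorithm, and it is only practical for small dense systems), φ(A)b can be evaluated exactly through an augmented matrix exponential, and one exponential Euler step for an autonomous system y' = f(y) with Jacobian J then reads y_{n+1} = y_n + h φ(hJ) f(y_n):

```python
import numpy as np
from scipy.linalg import expm

def phi_times_b(A, b):
    """Evaluate phi(A) b, with phi(z) = (exp(z) - 1)/z, using the identity
    expm([[A, b], [0, 0]])[:n, n] = phi(A) b (small dense systems only)."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    return expm(M)[:n, n]

def exponential_euler_step(f, jac, y, h):
    """One exponential Euler (Rosenbrock-Euler) step: y + h * phi(h J) f(y)."""
    return y + h * phi_times_b(h * jac(y), f(y))
```

For the large sparse Jacobians arising from the discretised TransPore equations, the thesis instead approximates φ(hJ)f(y) in a Krylov subspace rather than forming a matrix exponential explicitly.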
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results acquired using Monte Carlo simulation, however, often require orders of magnitude more calculation time to attain high precision, thereby reducing its utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high-performance computing environments and simpler, alternative, yet equivalent representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with increasing n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalent in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry as well as patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and present a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan, and representing them in a mesh-based form similar to those used in computer aided design, the above mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulation, like that made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative, equivalent geometry definitions to complex geometries for the purpose of improving Monte Carlo simulation performance. Additionally, these alternative geometry definitions allow manipulations to be performed on otherwise static and rigid geometry.
Abstract:
Effective digital human model (DHM) simulation of automotive driver packaging ergonomics, safety and comfort depends on accurate modelling of occupant posture, which is strongly related to the mechanical interaction between human body soft tissue and flexible seat components. This paper presents a finite-element study simulating the deflection of seat cushion foam and supportive seat structures, as well as human buttock and thigh soft tissue, when seated. The three-dimensional data used for modelling thigh and buttock geometry were taken on one 95th percentile male subject, representing the bivariate percentiles of the combined hip breadth (seated) and buttock-to-knee length distributions of a selected Australian and US population. A thigh-buttock surface shell based on these data was generated for the analytic model. A 6 mm neoprene layer was offset from the shell to account for the compression of body tissue expected through sitting in a seat. The thigh-buttock model is therefore made of two layers, covering thin to moderate thigh and buttock proportions, but not more fleshy sizes. To replicate the effects of skin and fat, the neoprene rubber layer was modelled as a hyperelastic material with viscoelastic behaviour in a Neo-Hookean material model. Finite element (FE) analysis was performed in ANSYS V13 WB (Canonsburg, USA). It is hypothesized that the presented FE simulation delivers a valid result compared to a standard SAE physical test and the real phenomenon of human-seat indentation. The analytical model is based on the CAD assembly of a Ford Territory seat. The optimized seat frame, suspension and foam pad CAD data were transformed and meshed into FE models and indented by the two-layer, soft-surface human FE model. Converging results with the least computational effort were achieved for a bonded connection between cushion and seat base as well as cushion and suspension, no separation between neoprene and indenter shell, and a frictional connection between cushion pad and neoprene. The result is compared to a previous simulation of an indentation with a hard-shell human finite-element model of equal geometry, and to the physical indentation result, which is approached with very high fidelity. We conclude that (a) SAE composite buttock form indentation of a suspended seat cushion can be validly simulated in an FE model of merely similar geometry, but using a two-layer hard/soft structure, and (b) human-seat indentation of a suspended seat cushion can be validly simulated with a simplified human buttock-thigh model for a selected anthropomorphism.
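For reference (this is the textbook form of the material model named above, not the specific parameter values used in the study), a compressible Neo-Hookean material is commonly defined by the strain energy density:

```latex
% Compressible Neo-Hookean strain energy density (general form, for context only).
% mu: initial shear modulus, d: incompressibility parameter,
% \bar{I}_1: first deviatoric strain invariant, J: elastic volume ratio.
\[
  W = \frac{\mu}{2}\left(\bar{I}_1 - 3\right) + \frac{1}{d}\left(J - 1\right)^{2}
\]
```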
Abstract:
This article reports on the design and implementation of a Computer-Aided Die Design System (CADDS) for sheet-metal blanks. The system is designed by considering several factors, such as the complexity of the blank geometry, reduction in scrap material, production requirements, availability of press equipment and standard parts, punch profile complexity, and the tool elements manufacturing method. The interaction among these parameters and how they affect designers' decision patterns is described. The system is implemented by interfacing AutoCAD with the higher-level languages FORTRAN 77 and AutoLISP. A database of standard die elements is created by parametric programming, which is an enhanced feature of AutoCAD. The greatest advantage achieved by the system is the rapid generation of the most efficient strip and die layouts, including information about the tool configuration.
Abstract:
Pressure feeder chutes are pieces of equipment used in sugar cane crushing to increase the amount of cane that can be put through a mill. The continuous pressure feeder was developed with the objective of providing a constant feed of bagasse under pressure to the mouth of the crushing mills. The pressure feeder chute is used in a sugarcane milling unit to transfer bagasse from one set of crushing rolls to a second set of crushing rolls. There have been many pressure feeder chute failures in the past. The pressure feeder chute is quite vulnerable, and if the bagasse throughput is blocked at the mill rollers, the pressure build-up in the chute can be enormous, which can ultimately result in failure. The result is substantial damage to the rollers, mill and chute construction, and downtimes of up to 48 hours can be experienced. Part of the problem is that the bagasse behaviour in the pressure feeder chute is not well understood. If the pressure feeder chute behaviour were understood, the chute geometry design could be modified in order to minimise the risk of failure. There are possible avenues for changing pressure feeder chute design and operations with a view to producing more reliable pressure feeder chutes in the future. There have been previous attempts to conduct experimental work to determine the causes of pressure feeder chute failures. Certain guidelines are available; however, pressure feeder chute failures continue, and pressure feeder chute behaviour still remains poorly understood. This thesis contains the work carried out between 14 April 2009 and 10 October 2012, which focuses on the design of an experimental apparatus to measure forces and visually observe bagasse behaviour, in an attempt to understand bagasse behaviour in pressure feeder chutes and minimise the risk of failure.