201 results for Computational Geometry and Object Modelling


Relevance:

100.00%

Publisher:

Abstract:

In this study, magnetohydrodynamic natural convection boundary layer flow of an electrically conducting and viscous incompressible fluid along a heated vertical flat plate with uniform heat and mass flux in the presence of a strong cross magnetic field has been investigated. For smooth integration, the boundary layer equations are transformed into a convenient dimensionless form using both the stream function formulation and the free variable formulation. The nonsimilar parabolic partial differential equations are integrated numerically for Pr ≪ 1, which is appropriate for liquid metals, against the local Hartmann parameter ξ. Further, asymptotic solutions are obtained near the leading edge using a regular perturbation method for smaller values of ξ. Solutions for values of ξ ≫ 1 are also obtained by employing the matched asymptotic technique. The results obtained for the small, large and all-ξ regimes are examined in terms of the shear stress, τw, the rate of heat transfer, qw, and the rate of mass transfer, mw, for the important physical parameters. Attention has been given to the influence of the Schmidt number, Sc, the buoyancy ratio parameter, N, and the local Hartmann parameter, ξ, on the velocity, temperature and concentration distributions, and it is noted that the velocity and temperature of the fluid achieve their asymptotic profiles for Sc ≥ 10.0.
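For context, a minimal sketch of the dimensional Boussinesq boundary-layer system that typically underlies this class of problem (not the paper's exact nonsimilar transformed equations) is:

\[
\begin{aligned}
&\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,\\
&u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y}
 = \nu\frac{\partial^{2} u}{\partial y^{2}}
 + g\beta_T\,(T - T_\infty) + g\beta_C\,(C - C_\infty)
 - \frac{\sigma B_0^{2}}{\rho}\,u,\\
&u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \alpha\frac{\partial^{2} T}{\partial y^{2}},\qquad
u\frac{\partial C}{\partial x} + v\frac{\partial C}{\partial y} = D_m\frac{\partial^{2} C}{\partial y^{2}},
\end{aligned}
\]

where u and v are the velocity components, T and C the temperature and concentration, B0 the applied magnetic field strength, and the local Hartmann parameter ξ is constructed from σ, B0, ρ and the streamwise coordinate.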

Relevance:

100.00%

Publisher:

Abstract:

Effective digital human model (DHM) simulation of automotive driver packaging ergonomics, safety and comfort depends on accurate modelling of occupant posture, which is strongly related to the mechanical interaction between human body soft tissue and flexible seat components. This paper presents a finite-element study simulating the deflection of seat cushion foam and supportive seat structures, as well as human buttock and thigh soft tissue when seated. The three-dimensional data used for modelling thigh and buttock geometry were taken from one 95th-percentile male subject, representing the bivariate percentiles of the combined hip breadth (seated) and buttock-to-knee length distributions of a selected Australian and US population. A thigh-buttock surface shell based on these data was generated for the analytic model. A 6 mm neoprene layer was offset from the shell to account for the compression of body tissue expected through sitting in a seat. The thigh-buttock model is therefore made of two layers, covering thin to moderate thigh and buttock proportions, but not more fleshy sizes. To replicate the effects of skin and fat, the neoprene rubber layer was modelled as a hyperelastic material with viscoelastic behaviour, using a Neo-Hookean material model. Finite element (FE) analysis was performed in ANSYS V13 WB (Canonsburg, USA). It is hypothesized that the presented FE simulation delivers a valid result compared to a standard SAE physical test and the real phenomenon of human-seat indentation. The analytical model is based on the CAD assembly of a Ford Territory seat. The optimized seat frame, suspension and foam pad CAD data were transformed and meshed into FE models and indented by the two-layer, soft-surface human FE model. Converging results with the least computational effort were achieved for a bonded connection between cushion and seat base as well as between cushion and suspension, no separation between neoprene and indenter shell, and a frictional connection between cushion pad and neoprene. The result is compared to a previous simulation of an indentation with a hard-shell human finite-element model of equal geometry, and to the physical indentation result, which is reproduced with very high fidelity. We conclude that (a) SAE composite buttock form indentation of a suspended seat cushion can be validly simulated in an FE model of merely similar geometry, but using a two-layer hard/soft structure; and (b) human-seat indentation of a suspended seat cushion can be validly simulated with a simplified human buttock-thigh model for a selected anthropometry.
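As background, a compressible Neo-Hookean strain-energy function of the kind used for such hyperelastic layers (a sketch of the general constitutive form, not the specific coefficients fitted in this study) is

\[
W = \frac{\mu}{2}\left(\bar{I}_1 - 3\right) + \frac{1}{d}\left(J - 1\right)^{2},
\]

where μ is the initial shear modulus, \(\bar{I}_1\) the first deviatoric strain invariant, J the volume ratio and d the material incompressibility parameter; viscoelastic behaviour is typically layered on top of such a model via a relaxation (e.g. Prony-series) formulation.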

Relevance:

100.00%

Publisher:

Abstract:

Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then describes an open standard through which the interchange of components could be implemented. As an illustrative example of generative design, Frazer's 'Reptiles' project from 1968 is reinterpreted.
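As a purely illustrative sketch (not the IFC schema itself), a parametric component could be exchanged as a set of named parameters plus a generative rule; everything below, including the component name and its parameters, is hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ParametricComponent:
        """Hypothetical exchange record: named parameters plus a generative rule."""
        name: str
        parameters: dict = field(default_factory=dict)

        def generate(self):
            # Example rule: derive a rectangular floor-plate outline from two parameters.
            w = self.parameters["width"]
            d = self.parameters["depth"]
            return [(0, 0), (w, 0), (w, d), (0, d)]

    plate = ParametricComponent("floor_plate", {"width": 6.0, "depth": 4.0})
    print(plate.generate())          # re-evaluating the rule regenerates the geometry
    plate.parameters["width"] = 7.5  # changing a parameter propagates on the next evaluation
    print(plate.generate())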

Relevance:

100.00%

Publisher:

Abstract:

Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to assess the appropriateness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of the SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position.
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
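A minimal sketch of the experimental-style SPR estimation described above, in the spirit of the Swindell and Evans approach (the field sizes and transmission values below are hypothetical placeholders, not measured data):

    import numpy as np

    # Hypothetical example: EPID transmission measured through one phantom thickness
    # for several square field sizes. Scatter grows with irradiated area, so
    # extrapolating the measured transmission back to zero field area estimates the
    # primary-only transmission, from which a scatter-to-primary ratio (SPR) follows.
    field_side_cm = np.array([5.0, 10.0, 15.0, 20.0])       # assumed field sizes
    measured_T    = np.array([0.512, 0.527, 0.543, 0.561])  # assumed transmissions

    area = field_side_cm ** 2
    slope, intercept = np.polyfit(area, measured_T, 1)  # linear fit T(A) ~ T_primary + k*A
    T_primary = intercept                               # zero-area extrapolation

    spr = measured_T / T_primary - 1.0                  # SPR(A) = T(A)/T_primary - 1
    for side, val in zip(field_side_cm, spr):
        print(f"{side:4.0f} cm field: SPR = {val:.3f}")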

Relevance:

100.00%

Publisher:

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems that are discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions. The second problem of camera placement emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen in a way that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
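A minimal sketch of the ground-plane mapping used for localisation above, assuming OpenCV is available; the image/world correspondences here are made-up placeholders (the dissertation's calibration and occupancy-map integration is more involved):

    import numpy as np
    import cv2

    # Hypothetical correspondences: the robot broadcasts its global (x, y) ground-plane
    # position while the camera records where it appears in the image (u, v).
    image_pts = np.array([[102, 330], [418, 345], [260, 190], [352, 248]], dtype=np.float32)
    world_pts = np.array([[1.0, 2.0], [3.5, 2.1], [2.2, 5.0], [3.0, 3.4]], dtype=np.float32)

    # Homography from the image plane to the global ground-plane coordinate frame.
    H, _ = cv2.findHomography(image_pts, world_pts, method=0)

    def localise(u, v):
        """Map an image-plane detection to global ground-plane coordinates."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]

    print(localise(300, 280))  # estimated (x, y) of an object detected at pixel (300, 280)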

Relevance:

100.00%

Publisher:

Abstract:

Lyngbya majuscula is a cyanobacterium (blue-green alga) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in northern Moreton Bay, Queensland, has a history of Lyngbya blooms, and forms a case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms, such as Lyngbya majuscula. This first phase of the project was predominantly of a scientific nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase of this project, the SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management focussed. As part of the first phase of the project, a Science model for the initiation of a Lyngbya bloom was built using Bayesian Networks (BN). The structure of the Science Bayesian Network was built by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines. The BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN. The third and final level concerned the specification of scenarios used to verify the model. The second phase of the project provides the opportunity to update the network with the newly collected detailed data obtained during the previous phase of the project. Specifically, the temporal nature of Lyngbya blooms is of interest. Management efforts need to be directed to the periods in which the Bay is most vulnerable to bloom initiation. To model the temporal aspects of Lyngbya we are using Object Oriented Bayesian Networks (OOBN) to create 'time slices' for each of the periods of interest during the summer. OOBNs provide a framework to simplify knowledge representation and facilitate reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks. Connectivity between OOBNs is an important feature and allows information flow between the time slices. This study demonstrates a more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorised using expert-defined thresholds) within an expert-defined model structure. Based on the results from the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. The time slices incorporate the data for that time period for each of the temporal nodes (instead of using the annual data from the previous static Science BN) and include lag effects to allow the effect from one time slice to flow to the next time slice. We demonstrate a steady increase across the successive time slices in the probability of initiation of a Lyngbya bloom and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders. This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can be readily extended to a dynamic OOBN, providing more comprehensive information for decision makers.
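A toy sketch of the time-sliced idea described above: the probability of bloom initiation in each summer period depends on that period's drivers plus a lag effect carried over from the previous slice. All numbers below are hypothetical illustrations, not values elicited for the Lyngbya OOBN:

    import numpy as np

    periods     = ["early summer", "mid summer", "late summer"]
    driver_risk = np.array([0.20, 0.45, 0.35])  # assumed per-period driver contribution
    lag_weight  = 0.5                           # assumed carry-over from the previous slice

    p_initiation = []
    previous = 0.0
    for risk in driver_risk:
        p = min(1.0, risk + lag_weight * previous)  # simple additive lag, capped at 1
        p_initiation.append(p)
        previous = p

    for name, p in zip(periods, p_initiation):
        print(f"{name}: P(initiation) = {p:.2f}")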

Relevance:

100.00%

Publisher:

Abstract:

An Artificial Neural Network (ANN) is a computational modelling tool which has found extensive acceptance in many disciplines for modelling complex real-world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN were evaluated in predicting the kinematic viscosity of biodiesels over a wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and chemical composition of the biodiesel were used as input variables. In order to obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory-standard testing equipment following internationally recognised testing procedures. The Neural Network Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised following a trial-and-error method to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore, the model developed in this study can be a useful tool for accurately predicting biodiesel fuel properties, instead of undertaking costly and time-consuming experimental tests.
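The study itself uses MATLAB's toolbox; as a rough open-source sketch of the same modelling idea, a small feed-forward network mapping temperature plus composition descriptors to kinematic viscosity could look like the following (the feature layout and data here are synthetic placeholders, not the measured dataset):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Placeholder inputs: temperature (deg C) plus three composition descriptors
    # (e.g. fatty-acid ester fractions); target is kinematic viscosity (mm^2/s).
    rng = np.random.default_rng(0)
    X = rng.uniform([10, 0.0, 0.0, 0.0], [90, 1.0, 1.0, 1.0], size=(200, 4))
    y = 8.0 * np.exp(-0.02 * X[:, 0]) + 2.0 * X[:, 1] + rng.normal(0, 0.1, 200)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print("R^2 on training data:", model.score(X, y))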

Relevance:

100.00%

Publisher:

Abstract:

The use of graphical processing unit (GPU) parallel processing is becoming a part of mainstream statistical practice. The reliance of Bayesian statistics on Markov Chain Monte Carlo (MCMC) methods makes the applicability of parallel processing not immediately obvious. It is illustrated that there are substantial reductions in computational time for MCMC and other methods of evaluation when the likelihood is computed using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010 and a likelihood based on the explicit convolution of two negative-binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
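The paper evaluates this likelihood on the GPU; a CPU-side sketch of the explicit convolution of two negative-binomial distributions is shown below (the counts and parameter values are placeholders, not the Colombia data or fitted parameters):

    import numpy as np
    from scipy.stats import nbinom

    def convolved_nbinom_logpmf(y, n1, p1, n2, p2):
        """log P(Y = y) where Y = Y1 + Y2, Y1 ~ NB(n1, p1), Y2 ~ NB(n2, p2),
        evaluated by explicit convolution over the ways y can be split."""
        k = np.arange(y + 1)
        terms = nbinom.pmf(k, n1, p1) * nbinom.pmf(y - k, n2, p2)
        return np.log(terms.sum())

    # Placeholder event counts and parameters.
    counts = [3, 0, 5, 2, 7]
    loglik = sum(convolved_nbinom_logpmf(y, n1=2.0, p1=0.4, n2=1.5, p2=0.6) for y in counts)
    print("log-likelihood:", loglik)

The inner sum over the split of y is the embarrassingly parallel step that maps naturally onto GPU threads.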

Relevance:

100.00%

Publisher:

Abstract:

The conflicts in Iraq and Afghanistan have been epitomized by the insurgents’ use of the improvised explosive device against vehicle-borne security forces. These weapons, capable of causing multiple severely injured casualties in a single incident, pose the most prevalent single threat to Coalition troops operating in the region. Improvements in personal protection and medical care have resulted in increasing numbers of casualties surviving with complex lower limb injuries, often leading to long-term disability. Thus, there exists an urgent requirement to investigate and mitigate against the mechanism of extremity injury caused by these devices. This will necessitate an ontological approach, linking molecular, cellular and tissue interaction to physiological dysfunction. This can only be achieved via a collaborative approach between clinicians, natural scientists and engineers, combining physical and numerical modelling tools with clinical data from the battlefield. In this article, we compile existing knowledge on the effects of explosions on skeletal injury, review and critique relevant experimental and computational research related to lower limb injury and damage and propose research foci required to drive the development of future mitigation technologies.

Relevance:

100.00%

Publisher:

Abstract:

Cells respond to various biochemical and physical cues during wound healing and tumour progression. In vitro assays used to study these processes are typically conducted in one particular geometry and it is unclear how the assay geometry affects the capacity of cell populations to spread, or whether the relevant mechanisms, such as cell motility and cell proliferation, are somehow sensitive to the geometry of the assay. In this work we use a circular barrier assay to characterise the spreading of cell populations in two different geometries. Assay 1 describes a tumour-like geometry where a cell population spreads outwards into an open space. Assay 2 describes a wound-like geometry where a cell population spreads inwards to close a void. We use a combination of discrete and continuum mathematical models and automated image processing methods to obtain independent estimates of the effective cell diffusivity, D, and the effective cell proliferation rate, λ. Using our parameterised mathematical model we confirm that our estimates of D and λ accurately predict the time evolution of the location of the leading edge and the cell density profiles for both Assay 1 and Assay 2. Our work suggests that the effective cell diffusivity is up to 50% lower for Assay 2 compared to Assay 1, whereas the effective cell proliferation rate is up to 30% lower for Assay 2 compared to Assay 1.
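Continuum models in this line of work are commonly of Fisher-Kolmogorov type; a sketch of the standard form (the paper's precise model, boundary and initial conditions may differ) is

\[
\frac{\partial C}{\partial t} = D\,\nabla^{2} C + \lambda\, C\left(1 - \frac{C}{K}\right),
\]

where C(x, t) is the cell density, D the effective cell diffusivity, λ the effective proliferation rate and K the carrying-capacity density (the logistic form and the symbol K are assumptions of this sketch).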

Relevance:

100.00%

Publisher:

Abstract:

This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

Relevance:

100.00%

Publisher:

Abstract:

The mining industry faces concurrent pressures to reduce water use, energy consumption and greenhouse gas (GHG) emissions in coming years. However, the interactions between water use, energy use and GHG emissions have largely been neglected in modelling studies to date. In addition, investigations tend to focus on the unit operation scale, with little consideration of whole-of-site or regional scale effects. This paper presents an application of a hierarchical systems model (HSM) developed to represent water, energy and GHG emissions fluxes at scales ranging from the unit operation, to the site level, to the regional level. The model allows the linkages between water use, energy use and GHG emissions to be examined in a flexible and intuitive way, so that mine sites can predict the energy and emissions impacts of water use reduction schemes and vice versa. This paper examines whether this approach can also be applied at the regional scale with multiple mine sites. The model is used to conduct a case study of several coal mines in the Bowen Basin, Australia, to compare the utility of centralised and decentralised mine water treatment schemes. The case study takes into account geographical factors (such as water pumping distances and elevations), economic factors (such as capital and operating cost curves for desalination treatment plants) and regional factors (such as regionally varying climates and the associated variance in mine water volumes and quality). The case study results indicate that treatment of saline mine water incurs a trade-off between water and energy use in all cases. However, significant cost differences between centralised and decentralised schemes can be observed in a simple economic analysis. Further research will examine the possibility of deriving model up-scaling algorithms to reduce computational requirements.
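An illustrative back-of-envelope comparison in the spirit of the centralised-versus-decentralised trade-off described above; every number below is a hypothetical placeholder, not a value from the Bowen Basin case study:

    # Centralised treatment adds pumping energy to move water to one plant;
    # decentralised treatment avoids most transfer pumping but forgoes economies of scale.
    RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)
    PUMP_EFF = 0.7                 # assumed pump efficiency
    TREAT_KWH_PER_M3 = 1.5         # assumed desalination specific energy (kWh/m^3)

    def pumping_kwh(volume_m3, head_m):
        """Energy to lift a volume of water through a static head, ignoring friction."""
        return RHO * G * volume_m3 * head_m / PUMP_EFF / 3.6e6   # J -> kWh

    site_volumes = [50_000.0, 30_000.0, 20_000.0]   # m^3/year per mine (placeholder)
    heads_to_plant = [120.0, 80.0, 60.0]            # m of lift to a central plant (placeholder)

    central = sum(pumping_kwh(v, h) for v, h in zip(site_volumes, heads_to_plant)) \
              + TREAT_KWH_PER_M3 * sum(site_volumes)
    decentral = TREAT_KWH_PER_M3 * sum(site_volumes)  # on-site treatment, negligible transfer pumping

    print(f"centralised:   {central:,.0f} kWh/year")
    print(f"decentralised: {decentral:,.0f} kWh/year")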

Relevance:

100.00%

Publisher:

Abstract:

A numerical time-dependent model of an active magnetic regenerator (AMR) was developed for cooling in the kilowatt range. Earlier numerical models have mostly been developed for cooling powers in the 0.4 kW range. In contrast, this paper reports on the applicability of magnetic refrigeration to the 50 kW range. A packed-bed active magnetic regenerator was modelled and the influence of geometry and operating parameters was studied for different bed geometries. The pressure drop as a function of AMR bed length and particle diameter was also studied. High cooling power and coefficient of performance (COP) were achieved by optimisation of the diameter of the magnetocaloric powder particles and the operating frequency. The optimum operating conditions of the AMR for a cooling capacity of 50 kW were determined for a temperature span of 15 K. The predicted COP was found to be ∼6, making it an attractive alternative to vapour-compression systems.
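Pressure drop across a packed bed of this kind is commonly estimated with the Ergun equation; a short sketch is given below (the porosity, fluid properties and operating point are illustrative assumptions, not the paper's values), showing why reducing particle diameter sharply increases the pumping penalty:

    def ergun_pressure_drop(length_m, d_particle_m, velocity_m_s,
                            porosity=0.36, mu=8.9e-4, rho=1000.0):
        """Ergun equation for pressure drop (Pa) across a packed bed.
        Defaults assume a water-like heat-transfer fluid and a typical packing porosity."""
        eps = porosity
        viscous = 150.0 * mu * (1 - eps) ** 2 / (eps ** 3 * d_particle_m ** 2) * velocity_m_s
        inertial = 1.75 * rho * (1 - eps) / (eps ** 3 * d_particle_m) * velocity_m_s ** 2
        return (viscous + inertial) * length_m

    for dp in (600e-6, 300e-6):   # halving the particle diameter
        dP = ergun_pressure_drop(0.2, dp, 0.05)
        print(f"d_p = {dp * 1e6:.0f} um -> dP = {dP / 1e3:.1f} kPa")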

Relevance:

100.00%

Publisher:

Abstract:

Computational fluid dynamics, analytical solutions and mathematical modelling approaches are used to gain insight into the distribution of fumigant gas within farm-scale grain storage silos. Both fan-forced and tablet fumigation are considered in this work, which develops new models for use by researchers, primary producers and silo manufacturers to assist in the eradication of grain storage pests.

Relevance:

100.00%

Publisher:

Abstract:

Computational models in physiology often integrate functional and structural information from a large range of spatio-temporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace and refine animal experiments. A fundamental requirement to fulfil these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims at informing strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing strategies for validation are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a pre-requisite to conducting physiological investigations using the MSE system; 3) definition and adoption of standards facilitates interoperability of experiments, models and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.