992 results for Process visualisation


Relevance:

30.00%

Publisher:

Abstract:

A combined mathematical model for predicting heat penetration and microbial inactivation in a solid body heated by conduction was tested experimentally by inoculating agar cylinders with Salmonella typhimurium or Enterococcus faecium and heating them in a water bath. Regions of growth where bacteria had survived after heating were measured by image analysis and compared with model predictions. Visualisation of the regions of growth was improved by incorporating chromogenic metabolic indicators into the agar. Preliminary tests established that the model performed satisfactorily with both test organisms and with cylinders of different diameters. The model was then used in simulation studies in which the parameters D, z, inoculum size, cylinder diameter and heating temperature were systematically varied. These simulations showed that the biological variables D, z and inoculum size had a relatively small effect on the time needed to eliminate bacteria at the cylinder axis in comparison with the physical variables heating temperature and cylinder diameter, which had a much greater relative effect. (c) 2005 Elsevier B.V. All rights reserved.
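The D and z parameters varied in these simulations belong to the classical log-linear (Bigelow) inactivation model. A minimal sketch of that kinetics, using hypothetical parameter values rather than those of the study:

```python
import math

def d_value(temp_c, d_ref, t_ref_c, z):
    """D-value (minutes per 1-log reduction) at temp_c, given a
    reference D-value d_ref at t_ref_c and the z-value (degrees C
    per 10-fold change in D). Classical Bigelow model."""
    return d_ref * 10 ** ((t_ref_c - temp_c) / z)

def time_to_eliminate(n0, temp_c, d_ref, t_ref_c, z):
    """Heating time (minutes) to reduce an inoculum of n0 cells to
    below one survivor, i.e. log10(n0) decimal reductions."""
    return math.log10(n0) * d_value(temp_c, d_ref, t_ref_c, z)

# Hypothetical values for illustration only (not the study's data).
t60 = time_to_eliminate(n0=1e6, temp_c=60.0, d_ref=1.0, t_ref_c=60.0, z=5.0)
t65 = time_to_eliminate(n0=1e6, temp_c=65.0, d_ref=1.0, t_ref_c=60.0, z=5.0)
```

Under this model a temperature rise of one z-value (5 degrees C here) cuts the required heating time tenfold, which is consistent with the finding that temperature dominates the biological parameters.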


Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested in numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model’s quality and usefulness: semantic richness, support for dynamic states and object behaviour, temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.


The demolition industry has recently become more concerned with the potential damage to the environment from the wastes it generates. Waste exchange is apparently the main means by which the problem is currently dealt with, and there is little or no consideration of wastes during the planning or design stage. By utilising a knowledge system and visualisation technologies, a waste management plan can be integrated into the 4D model so as to effectively promote interactions between demolition waste demanders and the demolition designer. As a result, the 4D visualisation provides not only the graphical schedule for the demolition process, but also the waste handling plan and waste production schedule. This research aims to analyse the integration of a waste management plan with the 4D visualisation model for a demolition project and to discuss the related technical and management issues. The integrated demolition visualisation facilitates waste handling during the demolition process, thus helping to achieve environmentally friendly demolition.


Compositional and structural changes within an electrolyte solution above an electrochemically active metal surface have been visualised using magnetic resonance imaging (MRI) for the first time. In these proof-of-concept experiments, zinc metal was galvanically corroded in a saturated lithium chloride solution. Magnetic resonance relaxation maps were taken during the corrosion process and spatial variations in both T1 and T2 relaxation times were observed to change with time. These changes were attributed to changes in the speciation of zinc ions in the electrolyte.
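The per-voxel relaxation times behind such maps are commonly obtained by fitting a mono-exponential decay to the measured signal; a minimal numpy sketch on synthetic data (not the study's measurements), using a log-linearised least-squares fit:

```python
import numpy as np

# Synthetic mono-exponential T2 decay: S(t) = S0 * exp(-t / T2).
t = np.linspace(5.0, 200.0, 20)      # echo times in ms (illustrative)
s0_true, t2_true = 100.0, 55.0       # hypothetical values
signal = s0_true * np.exp(-t / t2_true)

# Log-linearise: ln S = ln S0 - t / T2, then fit a straight line.
slope, intercept = np.polyfit(t, np.log(signal), 1)
t2_fit = -1.0 / slope
s0_fit = np.exp(intercept)
```

Repeating such a fit voxel by voxel over a time series of images yields a relaxation map; with noisy data a non-linear fit is usually preferred over the log-linear shortcut shown here.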


Climate change is predicted to impact countries, regions and localities differently. However, common to the predicted impacts is a global trend toward increased levels of carbon dioxide and rising sea levels. Governments and communities need to take into account the likely impacts of climate change on the landscape, both built and natural. There is a growing and significant body of climate change research. Much of this information, produced by domain experts for a range of disciplines, is complex and difficult for planners, decision makers and communities to act upon. The need to communicate often complex scientific information which can be used to assist in the planning cycle is a key challenge. This paper draws from a range of international examples of the use of visualisation in the context of landscape planning to communicate climate change impact and adaptation options within the context of the planning cycle. Missing from the literature, however, is a multi-scalar approach which allows decision makers, planners and communities to seamlessly explore scenarios at their spatial level of interest, as well as to collectively understand what is driving these at a larger scale, and what the implications are at ever more local levels. Visualisation tools such as digital globes provide one way to bring together multi-scaled spatial–temporal datasets. We present an initial development with this goal in mind. Future research is required to determine the best tools for communicating particular complex scientific data and also to better understand how visualisation can be used to improve the landscape planning process.


In the early 2000s, Information Systems researchers in Australia began to emphasise socio-technical approaches to the adoption of innovative technologies. The ‘essentialist' approaches to adoption (for example, Innovation Diffusion or TAM) suggest that an essence is largely responsible for the rate of adoption (Tatnall, 2011), or that a newly introduced technology may spark innovation. The socio-technical factors in implementing an innovation are largely overlooked by researchers and hospitals. Innovation Translation is an approach holding that any innovation needs to be customised and translated into context before it can be adopted. Equally, Actor-Network Theory (ANT) is an approach that embraces the differences in technical and human factors and socio-professional aspects in a non-deterministic manner. The research reported in this paper attempts to combine the two approaches in an effective manner, to visualise the socio-technical factors in RFID technology adoption in an Australian hospital. This investigation demonstrates RFID technology translation in an Australian hospital using a case approach (Yin, 2009). Data were collected through focus groups and interviews, and analysed with document analysis and concept mapping techniques. The data were then reconstructed in a ‘movie script' format, with Acts and Scenes funnelled to an ANT-informed abstraction at the end of each Act. The information visualisation at the end of each Act, using an ANT-informed lens, reveals the re-negotiation and improvement of network relationships between the people involved, including nurses, patient care orderlies and management staff, and non-human participants such as equipment and technology. The paper addresses current gaps in the literature regarding socio-technical approaches to technology adoption within the Australian healthcare context, which is transitioning from the non-integrated, nearly technophobic hospitals of the last decade to a tech-savvy, integrated era.
More importantly, the ANT visualisation addresses one of the criticisms of ANT, namely its insufficiency in explaining how relationships between participants form and change over the course of events in relationship networks (Greenhalgh & Stones, 2010).


We report that phosphoinositol-binding sorting nexin 5 (SNX5) associates with newly formed macropinosomes induced by EGF stimulation. We used the recruitment of GFP-SNX5 to macropinosomes to track their maturation. Initially, GFP-SNX5 is sequestered to discrete subdomains of the macropinosome; these subdomains are subsequently incorporated into highly dynamic, often branched, tubular structures. Time-lapse videomicroscopy revealed the highly dynamic extension of SNX5-labelled tubules and their departure from the macropinosome body to follow predefined paths towards the perinuclear region of the cell, before fusing with early endosomal acceptor membranes. The extension and departure of these tubular structures occurs rapidly over 5-10 minutes and is dependent upon intact microtubules. As the tubular structures depart from the macropinosome there is a reduction in the surface area and an increase in tension of the limiting membrane of the macropinosome. In addition to the recruitment of SNX5 to the macropinosome, Rab5, SNX1 and EEA1 are also recruited by newly formed macropinosomes, followed by the accumulation of Rab7. SNX5 forms heterodimers with SNX1 and this interaction is required for endosome association of SNX5. We propose that the departure of SNX5-positive tubules represents a rapid mechanism of recycling components from macropinosomes, thereby promoting their maturation into Rab7-positive structures. Collectively these findings provide a detailed real-time characterisation of the maturation process of the macropinocytic endosome.


The use of 3D visualisation of digital information is a recent phenomenon. It relies on users understanding 3D perspectival spaces. Questions about the universal accessibility of such spaces have been debated since perspective's inception in the European Renaissance. Perspective has since become a strong cultural influence in Western visual communication. Perspective imaging assists the process of experimentation through the sketching or modelling of ideas. In particular, the recent 3D modelling of an essentially non-dimensional cyberspace raises questions about how we think about information in general. While alternative methods (such as Chinese isometry) clearly exist, they are rarely explored within the 3D paradigm. This paper seeks to generate further discussion on the historical background of perspective and its role in underpinning this emergent field. © 2005 IEEE.


In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current generation 2½d GIS with several existing model codes. The focus for the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package that implemented a technique which brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task and the added functionality provided by the general purpose GIS - the data capture, manipulation and visualisation facilities - were of great benefit. The bulk of the project is concerned with examining the issues of the linkage of GIS and environmental process models. To this end a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS and a crop model was implemented within the GIS. A loose-linked approach was adopted and secondary and surrogate data were used wherever possible. 
The implications of this relate to: justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems, in order to model the total system; the advantages and disadvantages of using a current generation GIS as a medium for linking environmental process models; generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy, uncertainty and simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
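A loose-linked coupling of the kind described typically exchanges data through intermediate files rather than a shared data model. A minimal sketch of that idea, using a simplified two-line ASCII raster header invented for illustration (the real GRASS and Modflow file formats are considerably richer):

```python
import numpy as np

def write_ascii_grid(path, grid):
    """Export a 2-D array in a simplified ASCII raster format:
    a two-line header (rows, cols) followed by the cell values."""
    rows, cols = grid.shape
    with open(path, "w") as f:
        f.write(f"rows {rows}\ncols {cols}\n")
        for row in grid:
            f.write(" ".join(str(v) for v in row) + "\n")

def read_ascii_grid(path):
    """Parse the same format back into an array for the next model."""
    with open(path) as f:
        rows = int(f.readline().split()[1])
        cols = int(f.readline().split()[1])
        data = [float(v) for line in f for v in line.split()]
    return np.array(data).reshape(rows, cols)

# The GIS exports a groundwater-head grid; the process model
# re-imports it, with the file as the only point of contact.
heads = np.array([[10.0, 9.5], [9.8, 9.2]])
write_ascii_grid("heads.txt", heads)
restored = read_ascii_grid("heads.txt")
```

The file boundary is what makes the coupling "loose": either side can be replaced without recompiling the other, at the cost of explicit format-conversion code like the above.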


Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods such as generative topographic mapping (GTM) become computationally intractable. We propose variants of these methods, where we use log-transformations at certain steps of expectation maximisation (EM) based parameter learning, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants both for synthetic data and for an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way where appropriate noise models are used for each type of data in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM).
We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models both for synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known we also use the quality metrics of KL divergence and nearest neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models both for synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
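The log-transformation device mentioned above is, in spirit, the standard log-sum-exp trick for computing EM responsibilities without numerical underflow; a generic numpy sketch of that trick (not the thesis code):

```python
import numpy as np

def responsibilities(log_lik):
    """Posterior responsibilities from an (n_points, n_components)
    matrix of per-component log-likelihoods, computed in log space.
    Subtracting the row-wise maximum before exponentiating avoids
    the underflow that a direct exp() suffers in high dimensions."""
    m = log_lik.max(axis=1, keepdims=True)
    log_norm = m + np.log(np.exp(log_lik - m).sum(axis=1, keepdims=True))
    return np.exp(log_lik - log_norm)

# Log-likelihoods this negative would underflow to 0.0 if
# exponentiated directly, making naive normalisation 0/0.
ll = np.array([[-1000.0, -1001.0], [-2000.0, -1999.0]])
r = responsibilities(ll)
```

Each row of `r` sums to one even though every raw likelihood here is far below the smallest representable double, which is exactly the failure mode that makes unmodified GTM intractable in high dimensions.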


Most machine-learning algorithms are designed for datasets with features of a single type whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent variable formalism. This proposed model describes the data by type-specific distributions that are conditionally independent given the latent space and is called generalised generative topographic mapping (GGTM). It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated both for synthetic and real datasets.


The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still are able to represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space with each latent visualised point transformed to a multivariate Gaussian or T-distribution, using a feed-forward RBF network. Two types of uncertainty are then characterised, dependent on the data and mapping procedure. Data dependent uncertainty is the inherent observation uncertainty. Mapping uncertainty, in contrast, is defined by the Fisher information of a visualised distribution. This indicates how well the data has been interpolated, offering a level of ‘surprise’ for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF.
This new structure is tested on the MNIST dataset, achieving world record performance whilst avoiding the flaws seen in other Deep Learning Machines.


Popular dimension reduction and visualisation algorithms rely on the assumption that input dissimilarities are typically Euclidean, for instance Metric Multidimensional Scaling, t-distributed Stochastic Neighbour Embedding and the Gaussian Process Latent Variable Model. It is well known that this assumption does not hold for most datasets and often high-dimensional data sits upon a manifold of unknown global geometry. We present a method for improving the manifold charting process, coupled with Elastic MDS, such that we no longer assume that the manifold is Euclidean, or of any particular structure. We draw on the benefits of different dissimilarity measures allowing for the relative responsibilities, under a linear combination, to drive the visualisation process.
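The linear combination of dissimilarity measures can be sketched as follows; a minimal numpy illustration blending Euclidean and cosine dissimilarities with fixed weights (the method described above learns the relative responsibilities rather than fixing them):

```python
import numpy as np

def euclidean_dissim(X):
    """Pairwise Euclidean distance matrix."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(d2, 0.0))   # clamp tiny negatives from rounding

def cosine_dissim(X):
    """Pairwise cosine dissimilarity (1 - cosine similarity)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def blended_dissim(X, weights):
    """Convex combination of the two measures; the weights stand in
    for the learned relative responsibilities in the text."""
    w_euc, w_cos = weights
    return w_euc * euclidean_dissim(X) + w_cos * cosine_dissim(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
D = blended_dissim(X, weights=(0.7, 0.3))
```

The blended matrix remains symmetric with a zero diagonal, so it can be fed directly to any MDS variant that accepts a precomputed dissimilarity matrix.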


Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogenous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality by design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction and the volume median diameter taken as a response. DOE yielded information on the relationship between three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) and droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats.
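A response surface of the kind described (linear and quadratic terms plus a pump-rate by atomisation-pressure interaction) can be fitted by ordinary least squares; a sketch on synthetic data with made-up coefficients and ranges, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
pump = rng.uniform(5, 25, n)        # pump rate (arbitrary units)
pressure = rng.uniform(1, 3, n)     # atomisation pressure (arbitrary units)
conc = rng.uniform(5, 15, n)        # coating-polymer concentration (% w/w)

# Hypothetical "true" surface: quadratic in pump rate, with a
# pump-rate x atomisation-pressure interaction, as the modelling found.
droplet = (30 + 1.5 * pump - 8.0 * pressure + 0.8 * conc
           + 0.05 * pump ** 2 - 0.6 * pump * pressure)

# Design matrix: intercept, linear, quadratic and interaction terms.
A = np.column_stack([np.ones(n), pump, pressure, conc,
                     pump ** 2, pump * pressure])
coef, *_ = np.linalg.lstsq(A, droplet, rcond=None)

pred = A @ coef
r2 = 1 - ((droplet - pred) ** 2).sum() / ((droplet - droplet.mean()) ** 2).sum()
```

On noise-free data the fit recovers the generating coefficients exactly; real DOE software additionally reports the Q2 (cross-validated predictability) figure quoted above, which plain R2 does not capture.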


This article explores and discusses the development of a mapping tool inspired by Charles Renouvier’s philosophical novel Uchronie (l’utopie dans l’histoire) (1876). The article explains the research and design process of creating a uchronian map of a formerly empty site in Fish Island in East London and describes a participatory workshop titled ‘Hackney Wick and Fish Island: Future Perfect(s)’ (25 April 2015) that used uchronian mapping to explore past and future development imaginaries of two sites in the neighbourhood. Given a uchronian mapping template, a protocol and a dossier of planning and other documents, participants were encouraged to develop their own uchronian map of each site, and in doing so test and question the process of visualizing ‘what was supposed to happen’, ‘what actually happened’ and ‘what could have happened’. The article concludes with a reflection on uchronian mapping as a tool for researching, analysing and making visible urban alternatives.