37 results for Fluid and crystallized Intelligence


Relevance: 100.00%

Abstract:

A combination of the two-fluid and drift flux models has been used to model the transport of fibrous debris. This debris is generated during loss-of-coolant accidents in the primary circuit of pressurized or boiling water nuclear reactors, as high-pressure steam or water jets can damage adjacent insulation materials, including mineral wool blankets. Fibre agglomerates released from the mineral wools may reach the containment sump strainers, where they can accumulate and compromise the long-term operation of the emergency core cooling system. Single-effect experiments of sedimentation in a quiescent rectangular column and sedimentation in a horizontal flow are used to verify and validate this particular application of the multiphase numerical models. The utilization of both modeling approaches allows a number of pseudocontinuous dispersed phases of spherical wetted agglomerates to be modeled simultaneously. Key effects on the transport of the fibre agglomerates are particle size, density and turbulent dispersion, as well as the relative viscosity of the fluid-fibre mixture.
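The influence of particle size and density on transport can be illustrated with a simple Stokes-regime settling-velocity estimate. This is a generic single-particle calculation, not the abstract's two-fluid/drift-flux model, and all numerical values below are assumed for illustration only.

```python
# Illustrative Stokes-law settling velocity for a small spherical
# agglomerate in quiescent water. Generic single-particle estimate;
# the parameter values are assumptions, not data from the paper.

def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m) and
    density rho_p (kg/m^3) in a fluid of density rho_f (kg/m^3)
    and dynamic viscosity mu (Pa.s), valid at low Reynolds number."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

# A hypothetical 1 mm wetted fibre agglomerate slightly denser than water:
v = stokes_settling_velocity(d=1.0e-3, rho_p=1100.0)
print(f"settling velocity ~ {v * 1000:.2f} mm/s")
```

Doubling the diameter quadruples the settling velocity, which is why particle size dominates the transport behaviour in such estimates.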

Relevance: 100.00%

Abstract:

Multiscale systems that are characterized by a great range of spatial–temporal scales arise widely in many scientific domains. These range from the study of protein conformational dynamics to multiphase processes in, for example, granular media or haemodynamics, and from nuclear reactor physics to astrophysics. Despite the diversity in subject areas and terminology, there are many common challenges in multiscale modelling, including validation and design of tools for programming and executing multiscale simulations. This Theme Issue seeks to establish common frameworks for theoretical modelling, computing and validation, and to help practical applications to benefit from the modelling results. This Theme Issue has been inspired by discussions held during two recent workshops in 2013: ‘Multiscale modelling and simulation’ at the Lorentz Center, Leiden (http://www.lorentzcenter.nl/lc/web/2013/569/info.php3?wsid=569&venue=Snellius), and ‘Multiscale systems: linking quantum chemistry, molecular dynamics and microfluidic hydrodynamics’ at the Royal Society Kavli Centre. The objective of both meetings was to identify common approaches for dealing with multiscale problems across different applications in fluid and soft matter systems. This was achieved by bringing together experts from several diverse communities.

Relevance: 100.00%

Abstract:

The interaction of the wound dressing as a biomaterial with the wound bed is the central issue of this chapter. The interfacial phenomenon that encompasses the biological and biochemical consequences that arise when a biomaterial is introduced to a host biological environment is discussed. A great deal can be learned from observations arising from the behaviour of biomaterials at other body sites; one particularly relevant body site in the context of wound healing is the anterior eye. The cornea, tear film and posterior surface of the contact lens provide an informative model of the parallel interface that exists between the chronic wound bed, wound fluid and the dressing biomaterial. © 2011 Woodhead Publishing Limited. All rights reserved.

Relevance: 100.00%

Abstract:

Neural networks are statistical models and learning rules are estimators. In this paper a theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry. The performance of an estimator is measured by the information divergence between the true distribution and the estimate, averaged over the Bayesian posterior. This unifies the majority of error measures currently in use. The optimal estimators also reveal some intricate interrelationships among information geometry, Banach spaces and sufficient statistics.
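The information divergence used as the error measure can be sketched numerically for discrete distributions: the Kullback-Leibler divergence between a true distribution p and an estimate q. The distributions below are illustrative assumptions, not data from the paper.

```python
# Minimal numerical sketch of the information (KL) divergence
# D(p || q) between a true discrete distribution p and an
# estimate q, the kind of error measure the abstract averages
# over the Bayesian posterior. Distributions are illustrative.
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]         # "true" distribution
q = [0.4, 0.4, 0.2]         # estimated distribution
print(kl_divergence(p, q))  # small positive number
print(kl_divergence(p, p))  # 0.0: the divergence vanishes at the truth
```

The divergence is non-negative and zero only when the estimate matches the truth, which is what makes it usable as a performance measure for estimators.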

Relevance: 100.00%

Abstract:

We describe a method of recognizing handwritten digits by fitting generative models that are built from deformable B-splines with Gaussian "ink generators" spaced along the length of the spline. The splines are adjusted using a novel elastic matching procedure based on the Expectation Maximization (EM) algorithm that maximizes the likelihood of the model generating the data. This approach has many advantages. (1) After identifying the model most likely to have generated the data, the system not only produces a classification of the digit but also a rich description of the instantiation parameters, which can yield information such as the writing style. (2) During the process of explaining the image, generative models can perform recognition-driven segmentation. (3) The method involves a relatively small number of parameters, and hence training is relatively easy and fast. (4) Unlike many other recognition schemes, it does not rely on some form of pre-normalization of input images, but can handle arbitrary scalings, translations and a limited degree of image rotation. We have demonstrated that our method of fitting models to images does not get trapped in poor local minima. The main disadvantage of the method is that it requires much more computation than more standard OCR techniques.
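The EM alternation at the heart of the elastic matching can be illustrated with a much simpler model: soft assignment of data points to components (E-step) followed by re-estimation of component parameters (M-step). This is a generic one-dimensional two-Gaussian EM, not the paper's spline model; the data and initial values are illustrative assumptions.

```python
# Generic EM illustration: two equal-weight, unit-variance Gaussians
# in 1-D. Not the paper's B-spline model; data are synthetic.
import math
import random

random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])

mu = [-1.0, 1.0]                     # initial component means
for _ in range(50):
    # E-step: responsibility of component 0 for each point; for two
    # equal-weight unit-variance Gaussians this reduces to a sigmoid.
    mid = (mu[0] + mu[1]) / 2.0
    slope = mu[0] - mu[1]
    r0 = [1.0 / (1.0 + math.exp(-slope * (x - mid))) for x in data]
    # M-step: responsibility-weighted means raise the likelihood.
    mu[0] = sum(r * x for r, x in zip(r0, data)) / sum(r0)
    mu[1] = sum((1 - r) * x for r, x in zip(r0, data)) / sum(1 - r for r in r0)

print(sorted(mu))                    # means recovered near 0 and 5
```

Each iteration is guaranteed not to decrease the likelihood, which is the property the elastic matching procedure in the abstract exploits.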

Relevance: 100.00%

Abstract:

Visualization has proven to be a powerful and widely applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space, it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images.
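The two-level idea can be sketched with a toy example: project the full data set to 2-D at the top level, then re-project each cluster with its own local model. For brevity this sketch uses plain PCA as the local linear model rather than the probabilistic latent variable models fitted by EM in the paper; the data are synthetic and illustrative.

```python
# Toy sketch of two-level hierarchical visualization: a top-level 2-D
# projection of everything, plus a local re-projection per cluster.
# PCA stands in for the paper's probabilistic latent variable models.
import numpy as np

def pca_2d(x):
    """Project the rows of x onto their first two principal components."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:2].T

rng = np.random.default_rng(0)
# Two well-separated clusters in 12 dimensions (illustrative data).
a = rng.normal(0.0, 1.0, size=(100, 12))
b = rng.normal(8.0, 1.0, size=(100, 12))
data = np.vstack([a, b])

top = pca_2d(data)       # top-level view separates the clusters
local_a = pca_2d(a)      # deeper view reveals structure within cluster A
print(top.shape, local_a.shape)
```

The top-level plot shows the coarse cluster structure, while the per-cluster projections can devote both visualization dimensions to within-cluster variation, which is the motivation for the hierarchy.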

Relevance: 100.00%

Abstract:

It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system, PhiVis [Bishop98a], in several directions: (1) We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping (GTM). (2) We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. (3) Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the ancestor visualization plots which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 18-dimensional data sets.

Relevance: 100.00%

Abstract:

Expert systems, and artificial intelligence more generally, can provide a useful means for representing decision-making processes. By linking expert systems software to simulation software an effective means of including these decision-making processes in a simulation model can be achieved. This paper demonstrates how a commercial-off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.
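The architecture described can be sketched generically: a simulation loop that hands each decision off to a separate rule-based component. Witness and XpertRule are commercial tools, so plain Python functions stand in here for both packages and for the Visual Basic glue layer; every name and rule below is an illustrative assumption.

```python
# Generic sketch of a simulation loop delegating decisions to a
# rule-based ("expert system") component. Stand-in for the
# Witness <-> XpertRule link described; all rules are illustrative.

def expert_decision(queue_length):
    """Toy rule base: decide how many machines to run."""
    if queue_length > 20:
        return 3
    if queue_length > 10:
        return 2
    return 1

def simulate(arrivals, service_per_machine=4):
    """Step the queue forward, consulting the rule base each period."""
    queue = 0
    history = []
    for a in arrivals:
        queue += a
        machines = expert_decision(queue)   # hand off to the rule base
        queue = max(0, queue - machines * service_per_machine)
        history.append((machines, queue))
    return history

print(simulate([5, 15, 25, 0]))
```

The point of the separation is that the decision rules can be maintained in the expert system package without touching the simulation model, which is the benefit the paper attributes to linking the two tools.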

Relevance: 100.00%

Abstract:

We consider the problem of assigning an input vector x to one of m classes by predicting P(c|x) for c=1,...,m. For a two-class problem, the probability of class one given x is estimated by s(y(x)), where s(y) = 1/(1+e^(-y)). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multiclass problems (m>2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.
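The two link functions named in the abstract are easy to sketch: the logistic sigmoid s(y) = 1/(1+e^(-y)) for two classes, and its multiclass generalization, the softmax. The Gaussian process prior over y(x) and the Laplace approximation are not reproduced here; this only illustrates how a latent value y is mapped to class probabilities.

```python
# The link functions from the abstract: logistic sigmoid for two
# classes, softmax for m > 2. The GP prior and Laplace approximation
# are omitted; inputs below are illustrative latent values.
import math

def sigmoid(y):
    """s(y) = 1/(1 + e^(-y)): probability of class one given latent y."""
    return 1.0 / (1.0 + math.exp(-y))

def softmax(ys):
    """Map m latent values to m class probabilities summing to 1."""
    m = max(ys)                       # subtract the max for stability
    exps = [math.exp(y - m) for y in ys]
    z = sum(exps)
    return [e / z for e in exps]

print(sigmoid(0.0))                   # 0.5: undecided between the classes
print(softmax([2.0, 1.0, 0.0]))       # three probabilities summing to 1
```

For m = 2 the softmax reduces to the sigmoid of the difference of the two latent values, which is why the multiclass treatment is described as a generalization.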

Relevance: 100.00%

Abstract:

The design and synthesis of biomaterials covers a growing number of biomedical applications. The use of biomaterials in a biological environment is associated with a number of problems, the most important of which is biocompatibility. If the implanted biomaterial is not compatible with the environment, it will be rejected by the biological site. This may be manifested in many ways depending on the environment in which it is used. Adsorption of proteins takes place almost instantaneously when a biomaterial comes into contact with most biological fluids. The eye is a unique body site for the study of protein interactions with biomaterials, because of its ease of access and the deceptive complexity of the tears. The use of contact lenses, whether for vision correction and cosmetic reasons or as a route for controlled drug delivery, has significantly increased in recent years. It is relatively easy to introduce a contact lens into the tear fluid and remove it after a few minutes without surgery or trauma to the patient. A range of analytical techniques were used and developed to measure the proteins absorbed to some existing commercial contact lens materials and also to novel hydrogels synthesised within the research group. Analysis of the identity and quantity of proteins absorbed to biomaterials revealed the importance of many factors in the absorption process. The effects of biomaterial structure, protein nature in terms of size, shape and charge, and the pH of the environment on the absorption process were examined in order to determine the relative up-take of tear proteins. This study showed that both lysozyme and lactoferrin penetrate the lens matrix of ionic materials. Measurement of the mobility and activity of the protein deposited onto the surface and within the matrix of ionic lens materials demonstrated that the mobility is pH dependent and that, within the experimental errors, the biological activity of lysozyme remained unchanged after adsorption and desorption.
The study of the effect of different monomers copolymerised with hydroxyethyl methacrylate (HEMA) on protein up-take showed that monomers producing a positive charge on the copolymer can reduce the spoilation with lysozyme. The studies were extended to real cases in order to compare the patient-dependent factors. The in-vivo studies showed that the spoilation is patient-dependent, as well as dependent on other factors. Studies on extrinsic factors such as the dye used in coloured lenses showed that the addition of colourant affects protein absorption and that, in one case, its effect is beneficial to the wearer as it reduces the quantity of the protein absorbed.

Relevance: 100.00%

Abstract:

The ultimate aim of this project was to design new biomaterials which will improve the efficiency of ocular drug delivery systems. Initially, it was necessary to review the information available on the nature of the tear fluid and its relationship with the eye. An extensive survey of the relevant literature was made. There is a common belief in the literature that the ocular glycoprotein, mucin, plays an important role in tear film stability, and furthermore, that it exists as an adherent layer covering the corneal surface. If this belief is true, the muco-corneal interaction provides the ideal basis for the development of sustained release drug delivery. Preliminary investigations were made to assess the ability of mucin to adhere to polymer surfaces. The intention was to develop a synthetic model which would mimic the supposed corneal/mucin interaction. Analytical procedures included the use of microscopy (phase contrast and fluorescence), fluorophotometry, and mucin-staining dyes. Additionally, the physical properties of tears and tear models were assessed under conditions mimicking those of the preocular environment, using rheological and tensiometric techniques. The wetting abilities of these tear models and ophthalmic formulations were also investigated. Tissue culture techniques were employed to enable the surface properties of the corneal surface to be studied by means of cultured corneal cells. The results of these investigations enabled the calculation of interfacial and surface characteristics of tears, tear models, and the corneal surface. Overall, this work cast doubt on the accepted relationship of mucin with the cornea. A corneal surface model was designed, on the basis of the information obtained during this project, which would possess similar surface chemical properties (i.e. would be biomimetic) to the more complex original.
This model, together with the information gained on the properties of tears and solutions intended for ocular instillation, could be valuable in the design of drug formulations with enhanced ocular retention times. Furthermore, the model itself may form the basis for the design of an effective drug-carrier.

Relevance: 100.00%

Abstract:

The thesis reports a study into the effect upon organisations of co-operative information systems (CIS) incorporating flexible communications, group support and group working technologies. A review of the literature leads to the development of a model of effect based upon co-operative business tasks. CIS have the potential to change how co-operative business tasks are carried out, and their principal effect (or performance) may therefore be evaluated by determining to what extent they are being employed to perform these tasks. A significant feature of CIS use identified is the extent to which they may be designed to fulfil particular tasks, or by contrast, may be applied creatively by users in an emergent fashion to perform tasks. A research instrument is developed using a survey questionnaire to elicit users' judgements of the extent to which a CIS is employed to fulfil a range of co-operative tasks. This research instrument is applied to a longitudinal study of the introduction of Novell GroupWise at Northamptonshire County Council, during which qualitative as well as quantitative data were gathered. A method of analysis of questionnaire results using principles from fuzzy mathematics and artificial intelligence is developed and demonstrated. Conclusions from the longitudinal study include the importance of early experiences in setting patterns of use for CIS, the persistence of patterns of use over time, and the dominance of designed usage of the technology over emergent use.

Relevance: 100.00%

Abstract:

Light occlusions are one of the most significant difficulties of photometric stereo methods. When three or more images are available without occlusion, the local surface orientation is overdetermined so that shape can be computed and the shadowed pixels can be discarded. In this paper, we look at the challenging case when only two images are available without occlusion, leading to a one degree of freedom ambiguity per pixel in the local orientation. We show that, in the presence of noise, integrability alone cannot resolve this ambiguity and reconstruct the geometry in the shadowed regions. As the problem is ill-posed in the presence of noise, we describe two regularization schemes that improve the numerical performance of the algorithm while preserving the data. Finally, the paper describes how this theory applies in the framework of color photometric stereo where one is restricted to only three images and light occlusions are common. Experiments on synthetic and real image sequences are presented.
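The overdetermined three-image case mentioned at the start can be sketched for a single Lambertian pixel: with three known light directions L and measured intensities i, solving L(ρn) = i recovers the albedo-scaled normal. With only two unoccluded images the same system has one row fewer than unknowns, which is exactly the one-degree-of-freedom ambiguity the paper addresses. The light directions and surface values below are illustrative assumptions.

```python
# Standard three-image Lambertian photometric stereo for one pixel:
# solve L @ (rho * n) = i for the albedo-scaled normal. Dropping one
# row leaves the 1-DOF ambiguity discussed in the paper. Values are
# illustrative, not from the paper's experiments.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])      # three unit light directions
n_true = np.array([0.0, 0.0, 1.0])   # true surface normal
rho = 0.7                            # true albedo
i = L @ (rho * n_true)               # simulated pixel intensities

g = np.linalg.solve(L, i)            # recover the scaled normal rho * n
rho_est = np.linalg.norm(g)          # albedo = length of the solution
n_est = g / rho_est                  # unit normal = its direction
print(rho_est, n_est)
```

With noisy intensities one would minimise the residual in a least-squares sense instead of solving exactly, and with only two equations some extra constraint, such as the regularization schemes the paper proposes, is needed to pick a unique solution.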

Relevance: 100.00%

Abstract:

This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results.

Relevance: 100.00%

Abstract:

Integrated vehicle health management (IVHM) is the collection of data relevant to the present and future performance of a vehicle system and its transformation into information that can be used to support operational decisions. This design and operation concept embraces an integration of sensors, communication technologies, and artificial intelligence to provide vehicle-wide abilities to diagnose problems and recommend solutions. This article aims to report the state of the art of IVHM research by presenting a systematic review of the literature. The literature from different sources is collated and analysed, and the major emerging themes are presented. On this basis, the article describes the IVHM concept and its evolution, discusses configurations and existing applications along with the main drivers, potential benefits and barriers to adoption, summarizes design guidelines and available methods, and identifies future research challenges.