Abstract:
The ultimate aim of this project was to design new biomaterials which will improve the efficiency of ocular drug delivery systems. Initially, it was necessary to review the information available on the nature of the tear fluid and its relationship with the eye. An extensive survey of the relevant literature was made. There is a common belief in the literature that the ocular glycoprotein, mucin, plays an important role in tear film stability, and furthermore, that it exists as an adherent layer covering the corneal surface. If this belief is true, the muco-corneal interaction provides the ideal basis for the development of sustained release drug delivery. Preliminary investigations were made to assess the ability of mucin to adhere to polymer surfaces. The intention was to develop a synthetic model which would mimic the supposed corneal/mucin interaction. Analytical procedures included the use of microscopy (phase contrast and fluorescence), fluorophotometry, and mucin-staining dyes. Additionally, the physical properties of tears and tear models were assessed under conditions mimicking those of the preocular environment, using rheological and tensiometric techniques. The wetting abilities of these tear models and ophthalmic formulations were also investigated. Tissue culture techniques were employed to enable the surface properties of the corneal surface to be studied by means of cultured corneal cells. The results of these investigations enabled the calculation of interfacial and surface characteristics of tears, tear models, and the corneal surface. Overall, this work cast doubt on the accepted relationship of mucin with the cornea. A corneal surface model was designed, on the basis of the information obtained during this project, which would possess similar surface chemical properties (i.e. would be biomimetic) to the more complex original. This model, together with the information gained on the properties of tears and solutions intended for ocular instillation, could be valuable in the design of drug formulations with enhanced ocular retention times. Furthermore, the model itself may form the basis for the design of an effective drug-carrier.
Abstract:
The thesis reports a study into the effect upon organisations of co-operative information systems (CIS) incorporating flexible communications, group support and group working technologies. A review of the literature leads to the development of a model of effect based upon co-operative business tasks. CIS have the potential to change how co-operative business tasks are carried out and their principal effect (or performance) may therefore be evaluated by determining to what extent they are being employed to perform these tasks. A significant feature of CIS use identified is the extent to which they may be designed to fulfil particular tasks, or, by contrast, may be applied creatively by users in an emergent fashion to perform tasks. A research instrument is developed using a survey questionnaire to elicit users' judgements of the extent to which a CIS is employed to fulfil a range of co-operative tasks. This research instrument is applied to a longitudinal study of Novell GroupWise introduction at Northamptonshire County Council, during which qualitative as well as quantitative data were gathered. A method of analysis of questionnaire results using principles from fuzzy mathematics and artificial intelligence is developed and demonstrated. Conclusions from the longitudinal study include the importance of early experiences in setting patterns of use for CIS, the persistence of patterns of use over time, and the dominance of designed usage of the technology over emergent use.
Abstract:
Light occlusions are one of the most significant difficulties of photometric stereo methods. When three or more images are available without occlusion, the local surface orientation is overdetermined so that shape can be computed and the shadowed pixels can be discarded. In this paper, we look at the challenging case when only two images are available without occlusion, leading to a one-degree-of-freedom ambiguity per pixel in the local orientation. We show that, in the presence of noise, integrability alone cannot resolve this ambiguity and reconstruct the geometry in the shadowed regions. As the problem is ill-posed in the presence of noise, we describe two regularization schemes that improve the numerical performance of the algorithm while preserving the data. Finally, the paper describes how this theory applies in the framework of color photometric stereo, where one is restricted to only three images and light occlusions are common. Experiments on synthetic and real image sequences are presented.
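To make the per-pixel linear system concrete, here is a minimal Lambertian photometric stereo sketch in Python (an illustration of the standard three-light setup, not the authors' algorithm; the light directions and intensities are hypothetical):

```python
import numpy as np

# Minimal Lambertian photometric stereo (illustrative sketch only).
# Rows of L are light directions (approximately unit); I holds the
# intensities observed at one pixel under each light.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])   # three unoccluded lights: overdetermined
I = np.array([0.9, 0.4, 0.5])     # hypothetical intensities at one pixel

# Solve I = L @ (albedo * n) in the least-squares sense.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo               # recovered unit surface normal

# If one light is shadowed, only two rows of L remain and the system has
# a one-dimensional null space: any multiple of np.cross(L[0], L[1]) can
# be added to g, which is exactly the per-pixel ambiguity discussed above.
```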
Abstract:
This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results.
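As a rough illustration of the visual hull stage, here is a toy voxel-carving sketch in Python (a generic construction assuming known projection matrices; the paper's system additionally recovers the camera motion from the silhouettes themselves):

```python
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Keep the voxels whose projection falls inside every silhouette.

    voxels: (N, 3) world points; cameras: list of 3x4 projection matrices;
    silhouettes: list of boolean (H, W) masks, one per view.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                          # homogeneous pixel coords
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        uu, vv = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
        keep &= inside & mask[vv, uu]               # carve away outside voxels
    return voxels[keep]
```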
Abstract:
Integrated vehicle health management (IVHM) is the collection of data relevant to the present and future performance of a vehicle system and its transformation into information that can be used to support operational decisions. This design and operation concept embraces an integration of sensors, communication technologies, and artificial intelligence to provide vehicle-wide abilities to diagnose problems and recommend solutions. This article aims to report the state of the art of IVHM research by presenting a systematic review of the literature. The literature from different sources is collated and analysed, and the major emerging themes are presented. On this basis, the article describes the IVHM concept and its evolution, discusses configurations and existing applications along with main drivers, potential benefits and barriers to adoption, summarizes design guidelines and available methods, and identifies future research challenges.
Abstract:
We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can register both the surfaces and relax the homogeneous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and limitations of this simple multispectral capture system.
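The per-frame normal recovery can be pictured as one 3x3 inversion applied at every pixel, as in this minimal Python sketch (the mixing matrix is a hypothetical calibration result standing in for the actual light/camera/albedo calibration):

```python
import numpy as np

# Under simultaneous red, green and blue lights, each RGB value c relates
# to the scaled surface normal g by c = M @ g for a fixed 3x3 matrix M
# combining light directions, camera response and albedo, so one frame
# yields a dense normal field. M below is a made-up calibrated matrix.
M = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.3],
              [0.1, 0.2, 0.9]])
M_inv = np.linalg.inv(M)

frame = np.random.rand(480, 640, 3)   # stand-in for one video frame
g = frame @ M_inv.T                   # per-pixel scaled normals
normals = g / np.linalg.norm(g, axis=2, keepdims=True)
```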
Abstract:
Purpose: The paper aims to explore the nature and purpose of higher education (HE) in the twenty-first century, focussing on how it can help fashion a green knowledge-based economy by developing approaches to learning and teaching that are social, networked and ecologically sensitive. Design/methodology/approach: The paper presents a discursive analysis of the skills and knowledge requirements of an emerging green knowledge-based economy using a range of policy-focussed and academic research literature. Findings: The business opportunities that are emerging as a more sustainable world is developed require the knowledge and skills that can capture them and move them forward, but in a complex and uncertain world learning needs to be non-linear, creative and emergent. Practical implications: Sustainable learning and the attributes graduates will need to exhibit are prefigured in the activities and learning characterising the work and play facilitated by new media technologies. Social implications: Greater emphasis is required on higher learning, understood as the capability to learn, adapt and direct sustainable change; this requires interprofessional co-operation that must utilise the potential of new media technologies to enhance social learning and collective intelligence. Originality/value: The practical relationship between low-carbon economic development, social sustainability and HE learning is based on both normative criteria and actual and emerging projections in economic, technological and skills needs.
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when compared with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
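To give a flavour of the interval-inclusion specifications, here is a minimal interval-arithmetic sketch in Python (illustrative semantics only; PolyPaver proves such inclusions via polynomial approximation rather than by evaluating them directly):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def subset(self, other):
        return other.lo <= self.lo and self.hi <= other.hi

# An inclusion of the kind a specification might assert:
# for x in [0, 0.5], x*x + x lies within [-0.01, 0.76].
x = Interval(0.0, 0.5)
assert (x * x + x).subset(Interval(-0.01, 0.76))
```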
Abstract:
New sol-gel functionalized poly-ethylene glycol (PEGM)/SiO2
Abstract:
In the specific area of software engineering (SE) for self-adaptive systems (SASs) there is a growing research awareness about the synergy between SE and artificial intelligence (AI). However, just a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. A surprise measures how observed data affects the models or assumptions of the world during runtime. The key idea is that a "surprising" event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of the Bayesian theory of surprise for the case of self-adaptive systems using Bayesian dynamic decision networks. Copyright © 2014 ACM.
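A minimal sketch of such a surprise measure, assuming discrete belief distributions over candidate world models and using KL divergence as the prior-to-posterior divergence (the adaptation threshold is a made-up tuning constant):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions over the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

prior = [0.70, 0.20, 0.10]       # belief over world models before the event
posterior = [0.10, 0.30, 0.60]   # belief after observing the runtime data

surprise = kl_divergence(posterior, prior)
SURPRISE_THRESHOLD = 1.0         # assumed tuning constant
if surprise > SURPRISE_THRESHOLD:
    print(f"surprising event ({surprise:.2f} nats): adapt or flag anomaly")
```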
Abstract:
This thesis is concerned with the nature of biomaterial interactions with compromised host tissue sites. Both ocular and dermal tissues can be wounded, following injury, disease or surgery, and consequently require the use of a biomaterial. Clear analogies exist between the cornea/tear film/contact lens and the dermal wound bed/wound fluid/skin adhesive wound dressing. The work described in this thesis builds upon established biochemistry to examine specific aspects of the interaction of biomaterials with compromised ocular and dermal tissue sites, with a particular focus on the role of vitronectin. Vitronectin is a prominent cell adhesion glycoprotein present in both tear fluid and wound fluid, and has a role in the regulation and upregulation of plasmin. The interaction of contact lenses with the cornea was assessed by a novel on-lens cell-based vitronectin assay technique. Vitronectin mapping showed that vitronectin-mediated cell adhesion to contact lens surfaces was due to the contact lens-corneal mechanical interaction rather than deposition out of the tear film. This deposition is associated predominantly with the peripheral region of the posterior contact lens surface. The locus of vitronectin deposition on the contact lens surface, which is affected by material modulus, is potentially an important factor in the generation of plasmin in the posterior tear film. Use of the vitronectin mapping technique on ex vivo bandage contact lenses (BCLs) revealed greater vitronectin-mediated cell adhesion to the contact lens surfaces in comparison to lenses worn in the healthy eye. The results suggest that vitronectin is more readily deposited from the impaired corneal tissue bed than from the intact healthy tissue bed. Significantly, subjects with a deficient tear film were found to produce high levels of vitronectin-mediated cell adhesion on the BCL surface, thus highlighting the influence of the contact lens-tissue interaction upon deposition. Biomimetic principles imply that adhesive materials for wound applications, including hydrogels and hydrocolloids, should closely match the surface energy parameters of skin. The surface properties of hydrocolloid adhesives were found to be easily modified by contact with siliconised plastic release liners. In contrast, paper release liners did not significantly affect the adhesive surface properties. In order to characterise such materials in the actual wound environment, which is an extremely challenging task, preliminary considerations for the design of an artificial wound fluid model from an animal serum base were addressed.
Abstract:
In this paper a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement the fuzzy relative pixel value algorithm, which finds and highlights all the edges associated with an image by checking relative pixel values, thereby bridging the concepts of digital image processing and artificial intelligence. The image is scanned exhaustively using the windowing technique, and each window is subjected to a set of fuzzy conditions that compare the centre pixel value with those of adjacent pixels to check the pixel magnitude gradient in the window. After the fuzzy conditions are tested, appropriate values are allocated to the pixels in the window under test, producing an image highlighted with all the associated edges.
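A minimal Python sketch of the windowed, fuzzy relative-pixel-value idea (the membership function and its spread parameter are assumptions, not the paper's exact rule set):

```python
import numpy as np

def fuzzy_edges(img, spread=30.0):
    """Mark each pixel as an edge in proportion to how strongly its 3x3
    neighbourhood differs from it (fuzzy membership in the set 'edge')."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            diff = np.abs(window - img[y, x]).max()   # pixel magnitude gradient
            out[y, x] = 1.0 - np.exp(-diff / spread)  # 0 = flat, -> 1 = edge
    return out

edges = fuzzy_edges(np.random.randint(0, 256, (64, 64)))
```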
Abstract:
Liquid-level sensing technologies have gained great prominence, because such measurements are essential to industrial applications, such as fuel storage, flood warning and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid level sensors have been investigated which are based on different operating principles, such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression on the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, the use of linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity of 54 pm/cm, enhanced by more than a factor of 2 compared to a sensor head configuration based on a silica FBG published in the literature, resulting from the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
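The regression step described above can be sketched as follows in Python (the sensor depths, pressure readings, and ambient threshold are all hypothetical):

```python
import numpy as np

# Sensors below the surface read pressures increasing linearly with depth,
# so the surface height is the zero crossing of a line fitted to the
# sub-surface readings.
depths = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # sensor depths (cm)
pressure = np.array([0.0, 0.0, 0.8, 1.8, 2.8])      # gauge readings (kPa)

wet = pressure > 0.05                  # sensors reading above ambient
slope, intercept = np.polyfit(depths[wet], pressure[wet], 1)
surface_depth = -intercept / slope     # depth at which pressure would be 0
print(f"liquid surface at {surface_depth:.1f} cm below the sensor datum")
```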
Abstract:
Regional climate models (RCMs) provide reliable climatic predictions for the next 90 years with high horizontal and temporal resolution. In the 21st century a northward latitudinal and upward altitudinal shift in the distribution of plant species and phytogeographical units is expected. It is discussed how the modeling of phytogeographical units can be reduced to modeling plant distributions. The predicted shift of the Moesz line is studied as a case study (with three different modeling approaches) using 36 parameters of the REMO regional climate data-set, ArcGIS geographic information software, and the periods 1961-1990 (reference period), 2011-2040, and 2041-2070. The disadvantages of this relatively simple climate envelope modeling (CEM) approach are then discussed and several ways of improving the model are suggested. Several statistical and artificial intelligence (AI) methods (logistic regression, cluster analysis and other clustering methods, decision trees, evolutionary algorithms, artificial neural networks) are able to support development of the model. Among them, artificial neural networks (ANNs) seem to be the most suitable algorithm for this purpose, providing a black-box method for distribution modeling.
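One of the suggested statistical improvements, a climate envelope fitted by logistic regression, can be sketched minimally in Python (with synthetic stand-ins for the 36 REMO parameters and the observed distribution cells):

```python
import numpy as np

# Logistic-regression climate envelope: map climate parameters per grid
# cell to presence/absence of a species. All data here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 36))             # climate parameters per grid cell
w_true = rng.normal(size=36)
y = (X @ w_true + rng.normal(size=500) > 0).astype(float)  # presence/absence

w = np.zeros(36)
for _ in range(500):                       # plain gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(y)

# Projected suitability under a future-period climate grid would then be
# sigmoid(X_future @ w), thresholded to map the shifted distribution.
```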
Abstract:
The introduction of phase change material fluid and nanofluid in micro-channel heat sink design can significantly increase the cooling capacity of the heat sink because of the unique features of these two kinds of fluids. To better assist the design of a high performance micro-channel heat sink using phase change fluid and nanofluid, the heat transfer enhancement mechanism behind the flow with such fluids must be completely understood. A detailed parametric study is conducted to further investigate the heat transfer enhancement of the phase change material particle suspension flow, by using the two-phase non-thermal-equilibrium model developed by Hao and Tao (2004). The parametric study is conducted under normal conditions with Reynolds numbers of Re = 90-600 and phase change material particle concentrations of ϵ_p ≤ 0.25, as well as extreme conditions of very low Reynolds numbers (Re < 50) and high phase change material particle concentration (ϵ_p = 50%-70%) slurry flow. By using two newly-defined parameters, the effectiveness factor ϵ_eff and the performance index PI, it is found that there exists an optimal relation between the channel design parameters L and D, the particle volume fraction ϵ_p, the Reynolds number Re, and the wall heat flux q_w. The influence of the particle volume fraction ϵ_p, particle size d_p, and particle viscosity μ_p on the phase change material suspension flow is investigated and discussed. The model was validated against available experimental data. The conclusions will assist designers in making decisions that relate to the design or selection of a micro-pump suitable for micro- or mini-scale heat transfer devices. To understand the heat transfer enhancement mechanism of the nanofluid flow at the particle level, the lattice Boltzmann method is used because of its mesoscopic feature and its many numerical advantages. By using a two-component lattice Boltzmann model, the heat transfer enhancement of the nanofluid is analyzed, through incorporating the different forces acting on the nanoparticles into the two-component lattice Boltzmann model. It is found that the nanofluid has better heat transfer enhancement at low Reynolds numbers, and that the Brownian motion effect of the nanoparticles is weakened by the increase of flow speed.