99 results for knowledge-based systems
Abstract:
Modelling human interaction and decision-making within a simulation presents a particular challenge. This paper describes a methodology under development known as 'knowledge-based improvement'. The purpose of this methodology is to elicit decision-making strategies via a simulation model and to represent them using artificial intelligence techniques. Having identified an individual's decision-making strategy, the methodology then aims to look for improvements in that strategy. The methodology is being tested on unplanned maintenance operations at a Ford engine assembly plant.
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge play in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional as well as the national level. Facing competition both within and across nations, regional policy-makers view human capital development as key to strengthening the position of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at the regional level provides these territorial units with comparative advantages. The study reviews the literature in economics and economic geography on economic growth (Chapter 2). In the growth-model literature, human capital has gained increasing recognition as a key production factor alongside physical capital and labour. Although they leave technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus on research workers, who embody high-order human capital, as a key variable in their framework. One issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in the economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and to the urban environments that facilitate the exchange of such knowledge. Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including the provision of ICT services and the manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational upper secondary education) appears to have a significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas no such association is found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported by urban infrastructure and a public science base, which facilitate the exchange of tacit knowledge, and such regions also enjoy a low unemployment rate. However, the existing stock of human and physical capital in regions with a high level of urban infrastructure does not lead to a high rate of economic growth.
Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by the level of their existing stocks. We found no significant scale effects that would favour regions with a larger stock of human capital. The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than is required simply to replenish it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study), as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
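For context on how human capital enters the neoclassical framework discussed above, a standard human-capital-augmented Solow-Swan specification can be written as follows; the notation is illustrative and not necessarily that used in the study:

$$ Y_t = K_t^{\alpha} H_t^{\beta} (A_t L_t)^{1-\alpha-\beta}, \qquad \alpha + \beta < 1, $$

where $Y$ is output, $K$ physical capital, $H$ human capital, $L$ labour and $A$ exogenous technology. The stock-versus-accumulation question raised above amounts to asking whether growth depends on the level of $H$ or on its rate of change over time.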
Abstract:
In present-day knowledge societies, political decisions are often justified on the basis of scientific expertise. Traditionally, a linear relation between knowledge production and application was postulated, which would lead, with more and better science, to better policies. Empirical studies in Science and Technology Studies (STS) have essentially demolished this idea. However, it remains powerful, not least among practitioners working in fields where decision making is based on large doses of expert knowledge. Based on conceptual work in STS, I shall examine two cases of global environmental governance: ozone layer protection and global climate change. I will argue that hybridization and purification are important for two major forms of scientific expertise. One is delivered through scientific advocacy (by individual scientists or groups of scientists), the other through expert committees, i.e. institutionalized forms of collecting and communicating expertise to decision makers. Based on this analysis, lessons will be drawn, also with regard to the stalled efforts to establish an international forestry regime.
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets presents a number of problems, particularly when maximum likelihood estimation is used: although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches to geostatistics. Recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic datasets, and we go on to show how the methods allow maximum likelihood-based inference on the exhaustive Walker Lake dataset.
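As a rough illustration of the kind of likelihood approximation and parallelism the paper describes, the sketch below evaluates a block composite Gaussian log-likelihood across worker processes. It is a simplification of the conditional factorisations of Vecchia [1988], not the authors' implementation; the exponential covariance, the block splitting and all names are assumptions made purely for illustration.

# Illustrative sketch (not the authors' code): a block composite-likelihood
# approximation to the Gaussian log-likelihood, evaluated in parallel.
import numpy as np
from multiprocessing import Pool

def exp_cov(coords, sill, rng, nugget):
    """Exponential covariance matrix for a set of 2-D coordinates."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / rng) + nugget * np.eye(len(coords))

def block_loglik(args):
    """Exact Gaussian log-likelihood of one block of observations."""
    coords, z, sill, rng, nugget = args
    C = exp_cov(coords, sill, rng, nugget)
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, z)
    return -0.5 * (logdet + z @ alpha + len(z) * np.log(2 * np.pi))

def approx_loglik(coords, z, theta, n_blocks=8, n_procs=4):
    """Sum of per-block log-likelihoods, computed across worker processes."""
    sill, rng, nugget = theta
    idx = np.array_split(np.argsort(coords[:, 0]), n_blocks)  # crude spatial split
    tasks = [(coords[i], z[i], sill, rng, nugget) for i in idx]
    with Pool(n_procs) as pool:
        return sum(pool.map(block_loglik, tasks))

if __name__ == "__main__":
    rs = np.random.default_rng(0)
    xy = rs.uniform(0, 100, size=(2000, 2))
    obs = rs.normal(size=2000)
    print(approx_loglik(xy, obs, theta=(1.0, 10.0, 0.1)))

Each block's exact likelihood costs only cubic time in the block size rather than in the full dataset size, and the independent block terms map naturally onto separate cores, which is the source of the speed-up the paper reports in more refined form.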
Abstract:
This paper identifies the important limiting processes in transmission capacity for amplified soliton systems. Some novel control techniques are described for optimizing this capacity. In particular, dispersion compensation and phase conjugation are identified as offering good control of jitter without the need for many new components in the system. An advanced average soliton model is described and demonstrated to permit large amplifier spacing. The potential for solitons in high-dispersion land-based systems is discussed and results are presented showing 10 Gbit s$^{-1}$ transmission over 1000 km with significant amplifier spacing.
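For context on the average soliton regime that the paper builds on, the classical path-averaged (guiding-centre) soliton result requires the launch peak power to be enhanced relative to the lossless soliton power, and holds only when the amplifier spacing is much shorter than the soliton period; the symbols below are generic rather than the paper's notation:

$$ P_{\mathrm{in}} = P_{\mathrm{sol}}\,\frac{G \ln G}{G-1}, \qquad G = e^{\alpha L_a}, \qquad L_a \ll z_0, $$

where $G$ is the amplifier gain compensating the fibre loss $\alpha$ over the spacing $L_a$, $P_{\mathrm{sol}}$ is the peak power of the corresponding lossless soliton, and $z_0$ is the soliton period. The advanced average soliton model described in the paper is aimed at relaxing the $L_a \ll z_0$ restriction so that larger amplifier spacings become possible.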
Abstract:
Liposomes have been imaged using a plethora of techniques. However, few of these methods offer the ability to study these systems in their natural hydrated state without drying, staining and fixation of the vesicles. Imaging a liposome in its hydrated state is the ideal scenario for visualising these dynamic lipid structures, and environmental scanning electron microscopy (ESEM), with its ability to image wet systems without prior sample preparation, offers potential advantages over the above methods. In our studies, we have used ESEM not only to investigate the morphology of liposomes and niosomes but also to dynamically follow the changes in structure of lipid films and liposome suspensions as water condenses onto, or evaporates from, the sample. In particular, changes in liposome morphology were studied using ESEM in real time to investigate the resistance of liposomes to coalescence during dehydration, thereby providing an alternative assay of liposome formulation and stability. Based on this protocol, we have also studied niosome-based systems and cationic liposome/DNA complexes.
Abstract:
This book challenges the accepted notion that the transition from the command economy to market-based systems is complete across the post-Soviet space. While it is noted that different political economies have developed in such states, such as Russia's 'managed democracy', events such as Ukraine gaining 'market economy status' from the European Union and acceding to the World Trade Organisation in 2008 are taken as evidence that the reform period is over. Such thinking rests on numerous assumptions: specifically, that economic transition has defined start and end points, that the formal economy now has primacy over other forms of economic practice, and that national economic growth leads to the 'trickle down' of wealth to those marginalised by the transition process. Based on extensive ethnographic and quantitative research conducted in Ukraine and Russia between 2004 and 2007, this book questions these assumptions by showing that the economies operating across post-Soviet spaces are far from the textbook idea of a market economy. Through this, the whole notion of 'transition' is problematised and the importance of informal economies to everyday life is demonstrated. Using case studies of various sectors, such as entrepreneurial behaviour and the higher education system, it is also shown how corruption has invaded almost all sectors of the post-Soviet everyday.
Abstract:
This thesis presents a theoretical investigation of three topics concerned with nonlinear optical pulse propagation in optical fibres. The techniques used are mathematical analysis and numerical modelling. Firstly, dispersion-managed (DM) solitons in fibre lines employing a weak dispersion map are analysed by means of a perturbation approach. In the case of small dispersion map strengths, the average pulse dynamics is described by a perturbed nonlinear Schrödinger (NLS) equation. Applying a perturbation theory based on the Inverse Scattering Transform method, an analytic expression for the envelope of the DM soliton is derived. This expression correctly predicts the power enhancement arising from the dispersion management. Secondly, autosoliton transmission in DM fibre systems with periodic in-line deployment of nonlinear optical loop mirrors (NOLMs) is investigated. The use of in-line NOLMs is addressed as a general technique for all-optical passive 2R regeneration of return-to-zero data in high-speed transmission systems with strong dispersion management. By system optimisation, the feasibility of ultra-long single-channel and wavelength-division multiplexed data transmission at bit rates ≥ 40 Gbit s$^{-1}$ in standard fibre-based systems is demonstrated, and the tolerance limits of the results are defined. Thirdly, solutions of the NLS equation with gain and normal dispersion, which describes optical pulse propagation in an amplifying medium, are examined. A self-similar parabolic solution in the energy-containing core of the pulse is matched through Painlevé functions to the linear low-amplitude tails. The analysis provides a full description of the features of high-power pulses generated in an amplifying medium.
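For context on the third topic, a commonly used form of the NLS equation with distributed gain, together with the parabolic shape of the asymptotic self-similar core, is sketched below; the notation is generic and not necessarily that of the thesis:

$$ i\frac{\partial \psi}{\partial z} = \frac{\beta_2}{2}\frac{\partial^2 \psi}{\partial t^2} - \gamma|\psi|^2\psi + \frac{ig}{2}\psi, \qquad \beta_2 > 0,\ g > 0, $$

with, in the energy-containing core,

$$ |\psi(z,t)|^2 \approx P(z)\left[1 - \left(\frac{t}{T(z)}\right)^2\right] \quad \text{for } |t| \le T(z), $$

and $|\psi|^2 \approx 0$ outside it, where the peak power $P(z)$ and width $T(z)$ grow with propagation distance. The thesis's contribution, as described above, is the matching of this parabolic core through Painlevé functions to the linear low-amplitude tails.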
Abstract:
The two main objectives of the research work conducted were, firstly, to investigate the processing and rheological characteristics of a new-generation metallocene-catalysed linear low density polyethylene (m-LLDPE), in order to establish its thermal oxidative degradation mechanism, and, secondly, to examine the role of selected commercial stabilisers on the melt stability of the polymers. The unstabilised m-LLDPE polymer was extruded (pass 1) using a twin screw extruder at different temperatures (210-285°C) and screw speeds (50-200 rpm) and was subjected to multiple extrusions (passes 2-5) carried out under the same processing conditions used in the first pass. A traditional Ziegler/Natta-catalysed linear low density polyethylene (z-LLDPE) produced by the same manufacturer was also subjected to a similar processing regime in order to compare the processability and the oxidative degradation mechanism(s) of the new m-LLDPE with those of the more traditional z-LLDPE. The effect of some of the main extrusion characteristics of the polymers (m-LLDPE and z-LLDPE) on their melt rheological behaviour was investigated by examining their melt flow performance at two fixed low shear rates, and their rheological behaviour over the entire range of shear rates experienced during extrusion using a twin-bore capillary rheometer. Capillary rheometric measurements, which determine the viscous and elastic properties of polymers, showed that both polymers are shear thinning, but the m-LLDPE has a higher viscosity than the z-LLDPE, and the reduction in viscosity of the former when the extrusion temperature was increased from 210°C to 285°C was much greater than for the z-LLDPE polymer. This was supported by the finding that the m-LLDPE polymer required higher power consumption under all extrusion conditions examined. It was further revealed that the m-LLDPE undergoes a greater extent of melt fracture, the onset of which occurs at much lower shear rates than for the Ziegler-based polymer; this was attributed to its higher shear viscosity and narrower molecular weight distribution (MWD). Melt flow measurements and GPC showed that after the first extrusion pass the initially narrower MWD of m-LLDPE is retained (compared to z-LLDPE), but upon further multiple extrusion passes it undergoes much faster broadening of its MWD, which shifts to higher-Mw polymer fractions, particularly at the high screw speeds. The MWD of the z-LLDPE polymer, on the other hand, shifts towards the lower-Mw end. All the evidence therefore suggests that the m-LLDPE undergoes predominantly cross-linking reactions under all processing conditions, whereas the z-LLDPE undergoes both cross-linking and chain scission reactions, with the latter occurring predominantly under more severe processing conditions (higher temperatures and screw speeds, 285°C/200 rpm). The stabilisation of both polymers with synergistic combinations of a hindered phenol (Irganox 1076) and a phosphite (Weston 399) at low concentrations gave a high extent of melt stabilisation in both polymers (extrusion temperatures 210-285°C and screw speeds 50-200 rpm). The best Irganox 1076/Weston 399 system was found to be at an optimum 1:4 w/w ratio and was most effective in the z-LLDPE polymer. The melt stabilising effectiveness of a Vitamin E/Ultranox 626 system, used at a fraction of the total concentration of the Irganox 1076/Weston 399 system, was found to be higher in both polymers under all extrusion conditions.
It was found that antioxidants which operate primarily as alkyl (R•) radical scavengers are the most effective in inhibiting the thermal oxidative degradation of m-LLDPE in the melt; this polymer was shown to degrade in the melt primarily via alkyl radicals, resulting in crosslinking. Metallocene polymers stabilised with the single antioxidants Irganox HP 136 (a lactone) and Irganox E201 (vitamin E) produced the highest extent of melt stability and the least discolouration during processing (260°C/100 rpm). Furthermore, synergistic combinations of the Irganox HP 136/Ultranox 626 (XP-60) system produced very high levels of melt and colour stability (comparable to the Vitamin E-based systems) in the m-LLDPE polymer. The addition of Irganox 1076 to an Irganox HP 136/Ultranox 626 system was found not to increase melt stability, and gave rise to increasing discolouration of the m-LLDPE polymer. The blending of a hydroxylamine (Irgastab FS042) with a lactone and Vitamin E (in combination with a phosphite) did not increase melt stability but induced severe discolouration of the resultant polymer samples.
Abstract:
Xerox Customer Engagement activity is informed by the "Go To Market" strategy and the "Intelligent Coverage" sales philosophy. The realisation of this philosophy necessitates a sophisticated level of Market Understanding and the effective integration of the direct channels of Customer Engagement. Sophisticated Market Understanding requires the mapping and coding of the entire UK market at the DMU (Decision Making Unit) level, which in turn enables the creation of tailored coverage prescriptions. Effective Channel Integration is made possible by the organisation of Customer Engagement work according to a single, process-defined structure: the Selling Process. Organising by process facilitates the discipline of Task Substitution, which leads logically to the creation of Hybrid Selling models. Productive Customer Engagement requires Selling Process specialisation by industry sector, customer segment and product group. The research shows that Xerox's Market Database (MDB) plays a central role in delivering the Go To Market strategic aims. It is a tool for knowledge-based selling, enables productive SFA (Sales Force Automation) and, in sum, is critical to the efficient and effective deployment of Customer Engagement resources. Intelligent Coverage is not possible without the MDB. Analysis of the case evidence has resulted in the definition of 60 idiographic statements. These statements describe how Xerox organises and manages three direct channels of Customer Engagement: Face to Face, Telebusiness and Ebusiness. Xerox is shown to employ a process-oriented, IT-enabled, holistic approach to Customer Engagement productivity. The significance of the research is that it represents a detailed (perhaps unequalled) level of rich description of the interplay between IT and a holistic, process-oriented management philosophy.
Abstract:
A series of ethylene propylene terpolymer vulcanizates, prepared by varying termonomer type, cure system, cure time and cure temperature, are characterized by determining the number and type of cross-links present. The termonomers used represent the types currently available in commercial quantities. Characterization is carried out by measuring the C1 constant of the Mooney-Rivlin-Saunders equation before and after treatment with the chemical probes propane-2-thiol/piperidine and n-hexane thiol/piperidine, thus making it possible to calculate the relative proportions of mono-sulphidic, di-sulphidic and poly-sulphidic cross-links. The cure systems used included both sulphur and peroxide formulations. Specific physical properties are determined for each network, and an attempt is made to correlate observed changes in these with variations in network structure. A survey of the economics of each formulation, based on a calculated efficiency parameter for each cure system, is included. Values of C1 are calculated from compression modulus data after the reliability of the technique when used with ethylene propylene terpolymers had been established. This was done by comparing values from both compression and extension stress-strain measurements for natural rubber vulcanizates and by assessing the effects of sample dimensions and the degree of swelling. The compression modulus technique is much more widely applicable than previously thought. The basic structure of an ethylene propylene terpolymer network appears to be independent of the type of cure system used (sulphur-based systems only), the proportions of constituent cross-links being nearly constant.
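For readers unfamiliar with the characterization route described above, the relations that make the C1 constant a measure of cross-link density can be sketched as follows; the symbols are generic, and the second relation assumes a simple affine network model rather than the thesis's exact treatment:

$$ \sigma = 2\left(\lambda - \lambda^{-2}\right)\left(C_1 + C_2\,\lambda^{-1}\right), \qquad C_1 \approx \tfrac{1}{2}\,\frac{\rho R T}{M_c}, $$

where $\sigma$ is the stress, $\lambda$ the extension (or compression) ratio, $\rho$ the polymer density and $M_c$ the average molar mass between cross-links. Because the thiol/piperidine probes cleave specific sulphidic cross-links, the change in $C_1$ measured before and after each treatment reflects the change in cross-link density, from which the relative proportions of mono-, di- and poly-sulphidic links can be estimated.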
Abstract:
This work used novel polymer design and fabrication technology to generate bead-form polymer-based systems with variable yet controlled release properties, specifically for the delivery of macromolecules, essentially peptides of therapeutic interest. The work involved investigation of the potential interaction between matrix ultrastructural morphology, in vitro release kinetics, bioactivity and immunoreactivity of selected macromolecules with limited hydrolytic stability, delivered from controlled release vehicles. The underlying principle involved photo-polymerisation of the monomer, hydroxyethyl methacrylate, around frozen ice crystals, leading to the production of a macroporous hydrophilic matrix. Bead-form matrices were fabricated in controllable size ranges in the region of 100 µm - 3 mm in diameter. The initial stages of the project involved studying how two variables, the delivery speed of the monomer and the stirring speed of the non-solvent, affected the formation of macroporous bead-form matrices. From this, an optimal bench system for bead production was developed. Careful selection of monomer, solvents, crosslinking agent and polymerisation conditions led to a variable but controllable distribution of pore sizes (0.5 - 4 µm). Release of the surrogate macromolecules bovine serum albumin and FITC-linked dextrans enabled the effects of the size and solubility of the macromolecule on the rate of release to be studied. Incorporation of bioactive macromolecules allowed retained bioactivity to be determined (glucose oxidase and interleukin-2), whilst the release of insulin enabled determination of both bioactivity (using rat epididymal fat pad) and immunoreactivity (RIA). The work carried out has led to the generation of macroporous bead-form matrices, fabricated from a tissue-biocompatible hydrogel, capable of the sustained, controlled release of biologically active peptides, with potential use in the pharmaceutical and agrochemical industries.
Abstract:
Diagnosing faults in wastewater treatment, like diagnosis of most problems, requires bi-directional plausible reasoning. This means that both predictive (from causes to symptoms) and diagnostic (from symptoms to causes) inferences have to be made, depending on the evidence available, in reasoning towards the final diagnosis. The use of computer technology for diagnosing faults in the wastewater process was explored, and a rule-based expert system was initiated. It was found that such an approach has serious limitations in its ability to reason bi-directionally, which makes it unsuitable for diagnostic tasks under conditions of uncertainty. The probabilistic approach known as Bayesian Belief Networks (BBNs) was then critically reviewed and found to be well suited to diagnosis under uncertainty. The theory and application of BBNs are outlined. A full-scale BBN for the diagnosis of faults in a wastewater treatment plant based on the activated sludge system was developed in this research. Results from the BBN show good agreement with the predictions of wastewater experts. It can be concluded that BBNs are far superior to rule-based systems based on certainty factors in their ability to diagnose faults and make predictions in complex operating systems with inherently uncertain behaviour.
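The following minimal sketch (not the thesis's network) illustrates the bidirectional reasoning contrasted above, using a single hypothetical fault-symptom pair and invented probability values; in a full BBN the same computation is carried out over many interconnected nodes.

# Minimal sketch: a two-node "fault -> symptom" model showing predictive and
# diagnostic inference. All probability values are invented for illustration.
P_FAULT = 0.05                       # prior P(bulking fault)
P_SYMPTOM_GIVEN_FAULT = 0.90         # P(high effluent solids | fault)
P_SYMPTOM_GIVEN_NO_FAULT = 0.10      # P(high effluent solids | no fault)

def predictive():
    """Cause -> symptom: probability of the symptom before any observation."""
    return (P_SYMPTOM_GIVEN_FAULT * P_FAULT
            + P_SYMPTOM_GIVEN_NO_FAULT * (1 - P_FAULT))

def diagnostic():
    """Symptom -> cause: posterior probability of the fault via Bayes' rule."""
    return P_SYMPTOM_GIVEN_FAULT * P_FAULT / predictive()

if __name__ == "__main__":
    print(f"P(symptom)         = {predictive():.3f}")   # ~0.140
    print(f"P(fault | symptom) = {diagnostic():.3f}")   # ~0.321

Predictive inference runs from the prior on the cause to the probability of the symptom, while diagnostic inference inverts the same model with Bayes' rule once the symptom is observed, which is the bi-directional behaviour the abstract finds lacking in certainty-factor rule bases.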
Abstract:
Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features only produced poor results. LANDSAT MSS produced classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The higher accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum-distance-to-means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%; in the case of the LANDSAT MSS data, however, the use of texture measures did not provide any significant increase in accuracy. For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and to improve the visual appearance of the classified output by removing isolated misclassified pixels which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as the gravitational filter or minimal-area replacement methods. Generally, the larger the size of the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
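As a rough illustration of the per-pixel Gaussian maximum likelihood classification compared above with the minimum-distance and decision-tree procedures, the sketch below trains class statistics from labelled pixels and assigns new pixels to the class with the highest log-likelihood. The band values, class names and training data are invented examples, not data from the study.

# Illustrative sketch of a per-pixel Gaussian maximum likelihood classifier.
import numpy as np

def train(samples):
    """samples: {class_name: (n_pixels, n_bands) array of training pixels}."""
    stats = {}
    for name, x in samples.items():
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        stats[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify(pixels, stats):
    """Assign each pixel to the class with the largest Gaussian log-likelihood."""
    names = list(stats)
    scores = []
    for mean, inv_cov, logdet in (stats[n] for n in names):
        d = pixels - mean
        maha = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (logdet + maha))
    return np.array(names)[np.argmax(scores, axis=0)]

if __name__ == "__main__":
    rs = np.random.default_rng(1)
    training = {
        "sandstone": rs.normal([60, 80, 90], 5, size=(200, 3)),
        "shale":     rs.normal([40, 55, 70], 5, size=(200, 3)),
    }
    stats = train(training)
    print(classify(rs.normal([58, 78, 88], 5, size=(5, 3)), stats))

The per-pixel cost of applying each class covariance matrix is what makes this classifier slower than the minimum-distance rule, which uses only the class means; this corresponds to the accuracy-versus-processing-time trade-off noted in the abstract.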
Abstract:
The enhanced immune responses for DNA and subunit vaccines potentiated by the surfactant vesicle-based delivery systems outlined in the present study provide proof of principle for the beneficial aspects of vesicle-mediated vaccination. The dehydration-rehydration technique was used to entrap plasmid DNA or subunit antigens into lipid-based (liposome) or non-ionic surfactant-based (niosome) dehydration-rehydration vesicles (DRV). Using this procedure, it was shown that both these types of antigen can be effectively entrapped in DRV liposomes and DRV niosomes. The vesicle size of DRV niosomes was shown to be twice the diameter (~2 µm) of that of their liposome counterparts. Incorporation of cryoprotectants such as sucrose in the DRV procedure resulted in reduced vesicle sizes while retaining high DNA incorporation efficiency (~95%). Transfection studies in COS 7 cells demonstrated that the choice of cationic lipid, the helper lipid and the method of preparation all influenced transfection efficiency, indicating a strong interdependency of these factors. This phenomenon was further reinforced when 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE):cholesteryl 3β-[N-(N′,N′-dimethylaminoethane)-carbamoyl]cholesterol (DC-Chol)/DNA complexes were supplemented with non-ionic surfactants. Morphological analysis of these complexes using transmission electron microscopy and environmental scanning electron microscopy (ESEM) revealed the presence of heterogeneous structures which may be essential for efficient transfection in addition to the fusogenic properties of DOPE. In vivo evaluation of these DNA-incorporated vesicle systems in BALB/c mice showed weak antibody and cell-mediated immune (CMI) responses. A subsequent mock challenge with hepatitis B antigen demonstrated that 1-monopalmitoyl glycerol (MP)-based DRV is a more promising DNA vaccine adjuvant. Studying these DRV systems as adjuvants for the hepatitis B subunit antigen (HBsAg) revealed a balanced antibody/CMI response profile, on the basis of HBsAg-specific antibody and cytokine responses that were higher than those for the unadjuvanted antigen. The effect of the addition of MP, cholesterol and trehalose 6,6′-dibehenate (TDB) on the stability and immuno-efficacy of dimethyldioctadecylammonium bromide (DDA) vesicles was investigated. Differential scanning calorimetry showed a reduction in the transition temperature of DDA vesicles by ~12°C upon incorporation of the surfactants. ESEM of the MP-based DRV system indicated increased vesicle stability upon incorporation of antigen. The adjuvant activity of these systems was tested in C57BL/6j mice against three subunit antigens, the mycobacterial fusion protein Ag85B-ESAT-6 and two malarial antigens, merozoite surface protein-1 (MSP1) and glutamate-rich protein (GLURP), revealing that while MP- and DDA-based systems induced comparable antibody responses, DDA-based systems induced powerful CMI responses.