864 results for fate and effect modelling
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web, there will be a network of collaborating agents, each with their own ontologies or knowledge bases. Change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone – everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’ or ‘cheap test’ or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
Abstract:
When applying multivariate analysis techniques in information systems and social science disciplines, such as management information systems (MIS) and marketing, the assumption that the empirical data originate from a single homogeneous population is often unrealistic. When applying a causal modeling approach, such as partial least squares (PLS) path modeling, segmentation is a key issue in coping with the problem of heterogeneity in estimated cause-and-effect relationships. This chapter presents a new PLS path modeling approach which classifies units on the basis of the heterogeneity of the estimates in the inner model. If unobserved heterogeneity significantly affects the estimated path model relationships on the aggregate data level, the methodology will allow homogeneous groups of observations to be created that exhibit distinctive path model estimates. The approach will, thus, provide differentiated analytical outcomes that permit more precise interpretations of each segment formed. An application to a large data set from the American customer satisfaction index (ACSI) example substantiates the methodology's effectiveness in evaluating PLS path modeling results.
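To illustrate the heterogeneity problem this abstract addresses, the following sketch (with invented data, and a plain least-squares slope standing in for a PLS inner-model path estimate) shows how an aggregate-level estimate can mask two segments with distinct cause-and-effect coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical customer segments whose "satisfaction -> loyalty" path
# coefficients differ (values are illustrative, not from the ACSI example).
n = 500
x = rng.normal(size=2 * n)                      # exogenous construct scores
beta = np.r_[np.full(n, 0.8), np.full(n, 0.2)]  # segment-specific path coefficients
y = beta * x + rng.normal(scale=0.3, size=2 * n)

# Aggregate-level estimate: a single slope over the pooled data masks the
# heterogeneity and lands between the two segment-level values.
pooled = (x @ y) / (x @ x)

seg1 = (x[:n] @ y[:n]) / (x[:n] @ x[:n])
seg2 = (x[n:] @ y[n:]) / (x[n:] @ x[n:])
print(round(pooled, 2), round(seg1, 2), round(seg2, 2))
```

Segment-level estimation recovers coefficients close to 0.8 and 0.2, while the pooled estimate sits near their average and would mislead interpretation, which is the motivation for segmenting on inner-model heterogeneity.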
Abstract:
The use of digital communication systems is increasing very rapidly. This is due to the lower implementation cost compared with analogue transmission and, at the same time, the ease with which several types of data sources (data, digitised speech and video, etc.) can be mixed. The emergence of packet broadcast techniques as an efficient type of multiplexing, especially with the use of contention random multiple access protocols, has led to a wide-spread application of these distributed access protocols in local area networks (LANs) and a further extension of them to radio and mobile radio communication applications. In this research, a modified version of the distributed access contention protocol, which uses the packet broadcast switching technique, has been proposed. Carrier sense multiple access with collision avoidance (CSMA/CA) is found to be the most appropriate protocol, able to satisfy equally the operational requirements of local area networks as well as radio and mobile radio applications. The suggested version of the protocol is designed so that all desirable features of its predecessors are maintained. However, all the shortcomings are eliminated and additional features have been added to strengthen its ability to work with radio and mobile radio channels. Operational performance evaluation of the protocol has been carried out for the non-persistent and slotted non-persistent variants, through mathematical and simulation modelling of the protocol. The results obtained from the two modelling procedures validate each other's accuracy, and the protocol compares favourably with its predecessor, CSMA/CD (with collision detection). A further extension of the protocol operation has been suggested to operate with multichannel systems. Two multichannel systems based on the CSMA/CA protocol for medium access are therefore proposed.
These are: the dynamic multichannel system, which is based on two types of channel selection, random choice (RC) and idle choice (IC), and the sequential multichannel system. The latter has been proposed in order to suppress the hidden terminal effect, which always represents a major problem when contention random multiple access protocols are used over radio and mobile radio channels. Their operational performance has been evaluated using mathematical modelling for the dynamic system and simulation modelling for the sequential system. Both systems are found to improve system operation and fault tolerance when compared to single-channel operation.
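For context on the baseline family the thesis extends, the classic Kleinrock-Tobagi throughput expression for unslotted non-persistent CSMA (not the modified CSMA/CA protocol proposed here) can be evaluated as follows; it shows why the normalised propagation delay matters so much on radio channels:

```python
import math

def nonpersistent_csma_throughput(G, a):
    """Throughput S of unslotted non-persistent CSMA (classic Kleinrock-Tobagi
    result), where G is the normalised offered load and a is the propagation
    delay normalised to the packet transmission time."""
    return (G * math.exp(-a * G)) / (G * (1 + 2 * a) + math.exp(-a * G))

# Channel utilisation degrades as the normalised propagation delay grows,
# which is why radio and mobile radio channels stress these protocols more
# than short wired LAN segments.
for a in (0.0, 0.01, 0.1):
    print(a, round(nonpersistent_csma_throughput(1.0, a), 3))
```

At unit offered load the throughput falls from 0.5 at a = 0 toward noticeably lower values as a increases, the degradation that collision avoidance and the proposed multichannel schemes are designed to mitigate.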
Abstract:
The thesis presents an experimentally validated modelling study of the flow of combustion air in an industrial radiant tube burner (RTB). The RTB is used typically in industrial heat treating furnaces. The work has been initiated because of the need for improvements in burner lifetime and performance, which are related to the fluid mechanics of the combusting flow, and a fundamental understanding of this is therefore necessary. To achieve this, a detailed three-dimensional Computational Fluid Dynamics (CFD) model has been used, validated with experimental air flow, temperature and flue gas measurements. Initially, the work programme is presented, together with the theory behind RTB design and operation and the theory of swirling flows and methane combustion. NOx reduction techniques are discussed and numerical modelling of combusting flows is detailed in this section. The importance of turbulence, radiation and combustion modelling is highlighted, as well as the numerical schemes that incorporate discretization, finite volume theory and convergence. The study first focuses on the combustion air flow and its delivery to the combustion zone. An isothermal computational model was developed to allow the examination of the flow characteristics as it enters the burner and progresses through the various sections prior to the discharge face in the combustion area. Important features identified include the air recuperator swirler coil, the step ring, the primary/secondary air splitting flame tube and the fuel nozzle. It was revealed that the effectiveness of the air recuperator swirler is significantly compromised by the need for a generous assembly tolerance. Also, there is a substantial circumferential flow maldistribution introduced by the swirler, but this is effectively removed by the positioning of a ring constriction in the downstream passage.
Computations using the k-ε turbulence model show good agreement with experimentally measured velocity profiles in the combustion zone and validated the modelling strategy prior to the combustion study. Reasonable mesh independence was obtained with 200,000 nodes. Agreement was poorer with the RNG k-ε and Reynolds Stress models. The study continues to address the combustion process itself and the heat transfer process internal to the RTB. A series of combustion and radiation model configurations were developed, and the optimum combination of the Eddy Dissipation (ED) combustion model and the Discrete Transfer (DT) radiation model was used successfully to validate a burner experimental test. The previously cold-flow-validated k-ε turbulence model was used and reasonable mesh independence was obtained with 300,000 nodes. The combination showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with flue gas composition measured at the exhaust. The inner tube wall temperature predictions matched the experimental measurements at most of the thermocouple locations, highlighting a small flame bias to one side, although the model slightly over-predicts the temperatures towards the downstream end of the inner tube. NOx emissions were initially over-predicted; however, a combustion flame temperature limiting subroutine allowed convergence to the experimental value of 451 ppmv. With the validated model, the effectiveness of certain RTB features identified previously is analysed, and an analysis of the energy transfers throughout the burner is presented, to identify the dominant mechanisms in each region. The optimum turbulence-combustion-radiation model selection was then the baseline for further model development. One of these models, an eccentrically positioned flame tube model, highlights the failure mode of the RTB during long term operation.
Other models were developed to address NOx reduction and improvement of the flame profile in the burner combustion zone. These included a modified fuel nozzle design, with 12 circular section fuel ports, which demonstrates a longer and more symmetric flame, although with limited success in NOx reduction. In addition, a zero bypass swirler coil model was developed that highlights the effect of the stronger swirling combustion flow. A reduced diameter and a 20 mm forward displaced flame tube model shows limited success in NOx reduction; although the latter demonstrated improvements in the discharge face heat distribution and improvements in the flame symmetry. Finally, Flue Gas Recirculation (FGR) modelling attempts indicate the difficulty of the application of this NOx reduction technique in the Wellman RTB. Recommendations for further work are made that include design mitigations for the fuel nozzle and further burner modelling is suggested to improve computational validation. The introduction of fuel staging is proposed, as well as a modification in the inner tube to enhance the effect of FGR.
Abstract:
Topical and transdermal formulations are promising platforms for the delivery of drugs. A unit dose topical or transdermal drug delivery system that optimises the solubility of drugs within the vehicle, provides a novel dosage form for efficacious delivery and offers a simple manufacturing technique is desirable. This study used Witepsol® H15 wax as a base for the delivery system. One aspect of this project involved determination of the solubility of ibuprofen, flurbiprofen and naproxen in the wax using microscopy, Higuchi release kinetics, HyperDSC and mathematical modelling techniques. Correlations between the results obtained via these techniques were noted, with additional merits such as provision of valuable information on drug release kinetics and possible interactions between the drug and excipients. A second aspect of this project involved the incorporation of additional excipients: Tween 20 (T), Carbopol®971 (C) and menthol (M) to the wax formulation. On in vitro permeation through porcine skin, the preferred formulations were: ibuprofen (5% w/w) within Witepsol®H15 + 1% w/w T; flurbiprofen (10% w/w) within Witepsol®H15 + 1% w/w T; naproxen (5% w/w) within Witepsol®H15 + 1% w/w T + 1% w/w C; and sodium diclofenac (10% w/w) within Witepsol®H15 + 1% w/w T + 1% w/w C + 5% w/w M. Unit dose transdermal tablets containing ibuprofen and diclofenac were produced with improved flux compared to marketed products; Voltarol Emugel® demonstrated a flux of 1.68 × 10⁻³ cm/h compared with 123 × 10⁻³ cm/h for the optimised product detailed above; Ibugel Forte® demonstrated a permeation coefficient of 7.65 × 10⁻³ cm/h compared with 8.69 × 10⁻³ cm/h for the optimised product described above.
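The Higuchi release kinetics mentioned above amount to a square-root-of-time law, Q = kH·√t. A minimal sketch of fitting the Higuchi release constant is shown below; the release data are invented for illustration, not taken from the thesis:

```python
import numpy as np

# Hypothetical cumulative fraction released vs time (hours); illustrative only.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
Q = np.array([0.10, 0.14, 0.20, 0.28, 0.40, 0.49])

# Higuchi kinetics: Q = kH * sqrt(t). A straight-line fit of Q against
# sqrt(t) through the origin estimates the release constant kH; a high R^2
# supports diffusion-controlled release from the wax matrix.
sqrt_t = np.sqrt(t)
kH = (sqrt_t @ Q) / (sqrt_t @ sqrt_t)          # least squares through origin
r2 = 1 - np.sum((Q - kH * sqrt_t) ** 2) / np.sum((Q - Q.mean()) ** 2)
print(round(kH, 3), round(r2, 4))
```

Linearity of Q against √t (rather than against t) is the usual diagnostic for Higuchi-type matrix release when comparing candidate formulations.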
Abstract:
The revival of terracotta and faience in British architecture was widespread, dramatic in its results and, for two decades, the subject of intense debate. However, the materials have been frequently denigrated and more generally disregarded by both architects and historians. This study sets out to record and explain the rise and fall of interest in terracotta and faience, the extent and nature of the industry and the range of architectural usage in the Victorian, Edwardian and inter-war periods. The first two chapters record the faltering use of terracotta as an 'artificial stone', until the material gained its own identity, largely through the appreciation of Italian architecture. In the mid-Victorian period, terracotta will be seen to have become symbolic of the philosophy of the Victoria and Albert Museum and its Art School in attempting to reform both architecture and the decorative arts. The adoption of terracotta was furthered as much by industrial as aesthetic factors; three chapters examine how the exploitation of coal-measure clays, developments in the processes of manufacture, the changing motivation of industrialists and differing economics of production served to promote and then to hinder expansion and adaptation. The practical values of economy, durability and fire-resistance and the aesthetic potential, seen in terms of colour and decorative and sculptural modelling, became inter-related in the work of the architects who made extensive use of architectural ceramics. A correlation emerges between the free Gothic style, exemplified by the designs of Alfred Waterhouse, and the use of red terracotta supplied from Ruabon, and between the eclectic Renaissance style and a buff material produced by different manufacturers. These patterns were modified as a result of the adoption of faience for facing external walls as well as interiors, and because of the new architectural requirements and tastes of the twentieth century.
The general timidity in exploiting the scope for polychromatic decoration and the increasing opposition to architectural ceramics are contrasted with the most successful schemes produced for cinemas, chain-stores and factories. In the last chapter, those undertaken by the Hathern Station Brick and Terracotta Company between 1896 and 1939 are used as a case study; they confirm that manufacturers, architects and clients were all committed to creating a modern yet decorative architecture, appropriate for new building types, that would appeal to and be comprehensible to the public.
Abstract:
The confusion over the concept of accessibility in transport planning and the deficiencies of existing accessibility indices are examined by developing a conceptual framework of accessibility, with a fundamental distinction being drawn between the often conflicting theoretical and practical dimensions. The theoretical validity of alternative indices is assessed with reference to the problems and assumptions implicit in defining, measuring, valuing and aggregating the variables and components comprising accessibility. The major deficiencies of existing indices are identified as their inability to take account of the potential to link trips between more than one activity location, and the level of assumptions implicit in valuing and aggregating accessibility information. In this context, it is argued that accessibility information is more appropriately expressed on a comparative basis in the form of a profile rather than as a composite single-unit index, and that the present confines of accessibility measurement must be extended in line with current developments in disaggregate travel and activity modelling. The sensitivity of accessibility levels to the use of alternative value judgements, alternative forms and levels of aggregation and the inclusion of information on the potential to link trips is examined by undertaking a case study. Accessibility profiles are developed for 23 zones in the London Borough of Hammersmith and Fulham, showing the accessibility of the elderly to post offices and grocers. In a practical context, the profiles assist in identifying areas and individuals with relatively poor accessibility. The incidence and nature of linked trip-making and its significance and implications for accessibility measurement are explored further by analysing the results of a survey of the elderly's travel patterns.
It is concluded that future accessibility analysis should be undertaken at a disaggregate level, taking account of the potential opportunity available from nonhome as well as home origins.
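The profile-versus-composite-index distinction argued for above can be sketched as follows. The code uses a Hansen-type negative-exponential index as a stand-in for the thesis's measures, with invented travel costs and deterrence parameter, and reports one figure per activity type rather than collapsing them into a single unit:

```python
import math

# Hypothetical travel costs (minutes) from one zone to nearby post offices
# and grocers; illustrative, not from the Hammersmith & Fulham case study.
costs = {"post_office": [5, 12, 20], "grocer": [3, 8, 15]}

def hansen_index(c_list, beta=0.1):
    """Composite Hansen-type accessibility: opportunities weighted by a
    negative-exponential deterrence function of travel cost."""
    return sum(math.exp(-beta * c) for c in c_list)

# A profile keeps one figure per activity type instead of aggregating them
# into a single-unit index, so no inter-activity value judgement is imposed.
profile = {activity: round(hansen_index(c), 2) for activity, c in costs.items()}
print(profile)
```

Comparing profiles across zones then identifies relatively poor accessibility per activity type, without the implicit weighting that a composite index would force.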
Abstract:
This thesis covers both experimental and computer investigations into the dynamic behaviour of mechanical seals. The literature survey shows no investigations on the effect of vibration on mechanical seals of the type common in the various process industries. Typical seal designs are discussed. A form of Reynolds' equation has been developed that permits the calculation of stiffnesses and damping coefficients for the fluid film. The dynamics of the mechanical seal floating ring have been investigated using approximate formulae, and it has been shown that the floating ring will behave as a rigid body. Some elements, such as the radial damping due to the fluid film, are small and may be neglected. The equations of motion of the floating ring have been developed utilising the significant elements, and a solution technique described. The stiffness and damping coefficients of nitrile rubber o-rings have been obtained. These show a wide variation, with a constant stiffness up to 60 Hz. The importance of the effect of temperature on the properties is discussed. An unsuccessful test rig is described in the appendices. The dynamic behaviour of a mechanical seal has been investigated experimentally, including the effect of changes of speed, sealed pressure and seal geometry. The results, as expected, show that high vibration levels result in both high leakage and high seal temperatures. Computer programs have been developed to solve Reynolds' equation and the equations of motion. Two solution techniques were developed for this latter program; the unsuccessful technique is described in the appendices. Some stability problems were encountered, but despite these the solution shows good agreement with some of the experimental conditions. Possible reasons for the discrepancies are discussed. Various suggestions for future work in this field are given. These include the combining of the programs and more extensive experimental and computer modelling.
Abstract:
This thesis investigates the modelling of drying processes for the promotion of market-led Demand Side Management (DSM) as applied to the UK Public Electricity Suppliers. A review of DSM in the electricity supply industry is provided, together with a discussion of the relevant drivers supporting market-led DSM and energy services (ES). The potential opportunities for ES in a fully deregulated energy market are outlined. It is suggested that targeted industrial sector energy efficiency schemes offer significant opportunity for long term customer and supplier benefit. On a process level, industrial drying is highlighted as offering significant scope for the application of energy services. Drying is an energy-intensive process used widely throughout industry. The results of an energy survey suggest that 17.7 per cent of total UK industrial energy use derives from drying processes. Comparison with published work indicates that energy use for drying shows an increasing trend against a background of reducing overall industrial energy use. Airless drying is highlighted as offering potential energy saving and production benefits to industry. To this end, a comprehensive review of the novel airless drying technology and its background theory is made. Advantages and disadvantages of airless operation are defined and the limited market penetration of airless drying is identified, as are the key opportunities for energy saving. Limited literature has been found which details the modelling of energy use for airless drying. A review of drying theory and previous modelling work is made in an attempt to model energy consumption for drying processes. The history of drying models is presented as well as a discussion of the different approaches taken and their relative merits. The viability of deriving energy use from empirical drying data is examined. 
Adaptive neuro fuzzy inference systems (ANFIS) are successfully applied to the modelling of drying rates for 3 drying technologies, namely convective air, heat pump and airless drying. The ANFIS systems are then integrated into a novel energy services model for the prediction of relative drying times, energy cost and atmospheric carbon dioxide emission levels. The author believes that this work constitutes the first to use fuzzy systems for the modelling of drying performance as an energy services approach to DSM. To gain an insight into the 'real world' use of energy for drying, this thesis presents a unique first-order energy audit of every ceramic sanitaryware manufacturing site in the UK. Previously unknown patterns of energy use are highlighted. Supplementary comments on the timing and use of drying systems are also made. The limitations of such large scope energy surveys are discussed.
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
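The fourth-root summation rule described above is Minkowski pooling with exponent 4: the combined response to n regions is the fourth root of the sum of the fourth powers of the local responses, so detection threshold falls as n^(-1/4). A minimal sketch of that arithmetic:

```python
# Fourth-root (Minkowski, m = 4) summation over stimulus regions. This is an
# illustration of the summation rule only, not the thesis's full
# "noisy energy" model.

def combined_response(responses, m=4):
    """Pool local responses: (sum r_i^m)^(1/m)."""
    return sum(r ** m for r in responses) ** (1 / m)

def threshold_ratio(n, m=4):
    """Detection threshold for n equal regions, relative to one region.
    With n equal responses the pooled response grows as n^(1/m), so the
    contrast needed to reach a fixed criterion shrinks as n^(-1/m)."""
    return n ** (-1 / m)

print(threshold_ratio(16))  # 16 regions -> threshold halves
```

Quadrupling the stimulus area halves the slope of threshold reduction that linear summation (m = 1) would predict; the m = 4 exponent is what distinguishes the fourth-root rule from probability summation or linear pooling in the model fits.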
Abstract:
This paper considers the role of HR in ethics and social responsibility and questions why, despite an acceptance of a role in ethical stewardship, the HR profession appears to be reluctant to embrace its responsibilities in this area. The study explores how HR professionals see their role in relation to ethical stewardship of the organisation, and the factors that inhibit its execution. A survey of 113 UK-based HR professionals, working in both domestic and multinational corporations, was conducted to explore their perceptions of the role of HR in maintaining ethical and socially responsible action in their organisations, and to identify features of the organisational environment which might help or hinder this role being effectively carried out. The findings indicate that although there is a clear understanding of the expectations of ethical stewardship, HR professionals often face difficulties in fulfilling this role because of competing tensions and perceptions of their role within their organisations. A way forward is proposed, which draws on the positive individual factors highlighted in this research to explore how approaches to organisational development (through positive deviance) may reduce these tensions to enable the better fulfilment of ethical responsibilities within organisations. The involvement and active modelling of ethical behaviour by senior management, coupled with an open approach to surfacing organisational values and building HR procedures, which support socially responsible action, are crucial to achieving socially responsible organisations. Finally, this paper challenges the HR profession, through professional and academic institutions internationally, to embrace their role in achieving this. © 2013 Taylor & Francis.
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
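The interval idea behind the second proposed method can be sketched with elementary interval arithmetic: when each distance-function value is only known to lie in an interval, ratio and geometric-mean bounds give a lower and upper bound on a Malmquist-type index. The sketch below illustrates only that bounding step, with invented values, not the paper's DEA formulation:

```python
import math

def interval_ratio(num, den):
    """Bounds on num/den for positive intervals (lo, hi)."""
    (nl, nu), (dl, du) = num, den
    return (nl / du, nu / dl)

def interval_geomean(a, b):
    """Bounds on sqrt(a * b) for positive intervals."""
    return (math.sqrt(a[0] * b[0]), math.sqrt(a[1] * b[1]))

# Hypothetical interval distance-function values for one DMU, evaluated
# against period-t and period-(t+1) technologies (illustrative numbers).
d_t_t, d_t_t1   = (0.8, 0.9), (0.9, 1.0)    # period-t technology
d_t1_t, d_t1_t1 = (0.7, 0.8), (0.85, 0.95)  # period-(t+1) technology

mpi = interval_geomean(interval_ratio(d_t_t1, d_t_t),
                       interval_ratio(d_t1_t1, d_t1_t))
print(mpi)  # interval for the MPI; a lower bound above 1 signals growth
```

Classifying DMUs by where such intervals fall relative to 1 (entirely above, entirely below, or straddling) is the kind of grouping the paper's six-group scheme formalises for overall profit efficiency and MPIs.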
Abstract:
Background To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, 30 min, 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatographic assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr, weight 13.7–56 kg, were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), V1 (central volume of distribution), Q (inter-compartmental clearance), and V2 (peripheral volume of distribution) were: 16.51 × (WT/70)^0.75, 28.4 × (WT/70), 11.32 × (WT/70)^0.75, and 13.26 × (WT/70), respectively (CL and Q in litres per hour, V1 and V2 in litres, WT in kilograms). Conclusions In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but by total body weight, which, through allometric size scaling, significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
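The final population model reported above can be evaluated directly for a given body weight, with clearances scaled allometrically to a 70 kg standard (exponent 0.75) and volumes scaled linearly:

```python
# Evaluate the published final population PK model for i.v. paracetamol at a
# given body weight, using the parameter values reported in the abstract.

def paracetamol_pk(wt_kg):
    return {
        "CL": 16.51 * (wt_kg / 70) ** 0.75,  # clearance, litres/h
        "V1": 28.4 * (wt_kg / 70),           # central volume, litres
        "Q":  11.32 * (wt_kg / 70) ** 0.75,  # inter-compartmental clearance, litres/h
        "V2": 13.26 * (wt_kg / 70),          # peripheral volume, litres
    }

# Example: parameters for a 20 kg child.
print({k: round(v, 2) for k, v in paracetamol_pk(20).items()})
```

At 70 kg the model returns the standard-adult parameter values by construction; at lower weights the 0.75 exponent makes per-kilogram clearance higher in smaller children, which is the practical consequence of allometric rather than linear scaling.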