906 results for model-based reasoning processes
Abstract:
Feature detection is a crucial stage of visual processing. In previous feature-marking experiments we found that peaks in the 3rd derivative of the luminance profile can signify edges where there are no 1st-derivative peaks or 2nd-derivative zero-crossings (Wallis and Georgeson). 'Mach edges' (the edges of Mach bands) were nicely predicted by a new nonlinear model based on 3rd-derivative filtering. As a critical test of the model, we now use a new class of stimuli, formed by adding a linear luminance ramp to the blurred triangle waves used previously. The ramp has no effect on the second or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing only one edge as the added ramp gradient increases. In experiment 1, subjects judged whether one or two edges were visible on each trial. In experiment 2, subjects used a cursor to mark perceived edges and bars. The position and polarity of the marked edges were close to model predictions. Both experiments produced the predicted shift from two Mach edges to one, but the shift was less complete than predicted. We conclude that the model is a useful predictor of edge perception, but needs some modification.
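The core idea above, locating edge candidates at peaks of the 3rd derivative of a smoothed luminance profile, can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the authors' model: the Gaussian filter scale, the kernel width, and the peak threshold are arbitrary assumptions. Note how the added ramp leaves the 2nd and higher derivatives untouched.

```python
import numpy as np

def third_derivative_peaks(luminance, sigma=3.0, thresh=0.5):
    """Locate candidate edges as local peaks of |3rd derivative| of a
    Gaussian-smoothed luminance profile (illustrative parameters only)."""
    # Normalised Gaussian smoothing kernel (stand-in for a front-end filter)
    k = np.exp(-0.5 * (np.arange(-25, 26) / sigma) ** 2)
    k /= k.sum()
    smooth = np.convolve(luminance, k, mode="same")
    # Third derivative by repeated central differencing
    d3 = np.gradient(np.gradient(np.gradient(smooth)))
    mag = np.abs(d3)
    level = thresh * mag.max()
    # Indices of local maxima of |d3| above the threshold
    return [i for i in range(1, len(mag) - 1)
            if mag[i] >= level and mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]

# A blurred triangle wave plus a linear ramp: the ramp changes only the
# 1st derivative, so 2nd- and higher-derivative features are unaffected.
x = np.linspace(0, 4 * np.pi, 800)
triangle = np.abs((x % (2 * np.pi)) - np.pi)
ramp = 0.1 * x
peaks = third_derivative_peaks(triangle + ramp)
```

The ramp's invariance is easy to confirm numerically: its second difference vanishes, so any detector driven purely by 2nd or 3rd derivatives cannot see it, which is exactly what makes these stimuli a critical test.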
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since such systems are increasingly being used in applications with safety implications, it is crucial that they are designed and developed to operate correctly. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements; previous work has proposed a number of techniques ranging from the mathematically intensive to those with only some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding while the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between the two formalisms are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems; this thesis adapts them for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net.
A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered, and the resulting nets are analysed using concurrency sets.
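For readers unfamiliar with the reachability graph whose construction the abstract describes as expensive, a minimal breadth-first enumeration for a place/transition net can be sketched as follows. The tuple encoding of markings and the `limit` cut-off are illustrative assumptions; this is the naive full construction, not the thesis's concurrency-set technique.

```python
from collections import deque

def reachability_graph(pre, post, m0, limit=10_000):
    """Breadth-first enumeration of a Petri net's reachability graph.
    pre[t]/post[t]: tokens each transition t consumes/produces per place;
    m0: initial marking (tuple of token counts). Returns the set of
    reachable markings and the list of (marking, transition, marking) edges."""
    seen = {m0}
    edges = []
    frontier = deque([m0])
    while frontier and len(seen) < limit:
        m = frontier.popleft()
        for t, (need, prod) in enumerate(zip(pre, post)):
            if all(mi >= ni for mi, ni in zip(m, need)):  # t enabled at m
                m2 = tuple(mi - ni + pi for mi, ni, pi in zip(m, need, prod))
                edges.append((m, t, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen, edges

# Two-place cycle: t0 moves a token p0 -> p1, t1 moves it back.
pre = [(1, 0), (0, 1)]
post = [(0, 1), (1, 0)]
markings, edges = reachability_graph(pre, post, (1, 0))
```

Even this toy example shows why the full graph explodes for concurrent systems: the marking set grows with the product of independent token placements, which is the motivation for the partial, state-focused construction the thesis proposes.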
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of the process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
Very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (Single Input Single Output) system and as a centralised MIMO (Multi-Input Multi-Output) system, using conventional as well as model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capability and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with a controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The rotor speed-raffinate concentration and solvent flowrate-extract concentration loops showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
Abstract:
The preparation and characterisation of collagen:PCL, gelatin:PCL and gelatin/collagen:PCL biocomposites for the manufacture of tissue-engineered skin substitutes are reported. Films of collagen:PCL, gelatin:PCL (1:4, 1:8 and 1:20 w/w) and gelatin/collagen:PCL (1:8 and 1:20 w/w) biocomposites were prepared by impregnation of lyophilised collagen and/or gelatin mats with PCL solutions, followed by solvent evaporation. In vitro assays of total protein release from the collagen:PCL and gelatin:PCL biocomposite films revealed the expected inverse relationship between the collagen release rate and the content of synthetic polymer in the biocomposite samples, which may be exploited for controlled presentation and release of biopharmaceuticals such as growth factors. Good compatibility of all biocomposite groups was demonstrated by interaction with 3T3 fibroblasts, normal human epidermal keratinocytes (NHEK), and primary human epidermal keratinocytes (PHEK) and dermal fibroblasts (PHDF) in vitro. The 1:20 collagen:PCL materials, exhibiting good cell growth curves and mechanical characteristics, were selected for the engineering of skin substitutes in this work. A tissue-engineered skin model based on single-donor PHEK and PHDF, with a differentiated confluent epidermal layer and a fibrous porous dermal layer, was then developed successfully in vitro, as confirmed by SEM and immunohistochemistry assays. A subsequent in vivo study on athymic mice revealed complete wound healing within 10 days and good integration of the co-cultured skin substitutes with the adjacent mouse skin structures. Thus co-cultured skin substitutes based on 1:20 collagen:PCL biocomposite membranes were proven in principle. The approach to skin modelling reported here may find application in wound treatment, gene therapy and the screening of new pharmaceuticals.
Abstract:
In this thesis, I review the historical background of Zimbabwe to show the patterns of traditional life that existed prior to settlerism. The form, nature, pace and impact of settlerism and colonialism up to the time of independence are also discussed, to show how they affected the health of the population and the pace of development of the country. The political, social and economic underdevelopment of the African people that occurred in Zimbabwe prior to independence was the result of deliberate, politically motivated and controlled policy initiatives. These led to inequitable, inadequate, inappropriate and inaccessible health care provision. It is submitted that, since it was politics that determined the pace of underdevelopment, politics must be at the forefront of the development strategy adopted. In the face of the armed conflict that existed in Zimbabwe, existing frameworks of analysis are shown to be inadequate for planning purposes because of their inability to provide indications about the stability of future outcomes. The Metagame technique of analysis of options is proposed as a methodology that can be applied in such situations. It rejects deterministic predictive models as misleading, and advocates an interactive model based on objective and subjective valuation of human behaviour. In conclusion, the search for stable outcomes, rather than optimal 'best solution' strategies, is advocated in decision making in organisations of all sizes.
Abstract:
This thesis reports the results of DEM (Discrete Element Method) simulations of rotating drums operated in a number of different flow regimes; DEM simulations of drum granulation have also been conducted. The aim was to demonstrate that realistic simulation is possible, and to further the understanding of particle motion and granulation processes in a rotating drum. The simulation model has shown good qualitative and quantitative agreement with published experimental results. A two-dimensional bed of 5000 disc particles, with properties similar to glass, has been simulated in the rolling mode (Froude number 0.0076) with a fractional drum fill of approximately 30%. Particle velocity fields in the cascading layer, the bed cross-section and at the drum wall have shown good agreement with experimental PEPT data. Particle avalanches in the cascading layer have been shown to be consistent with single layers of particles cascading down the free surface towards the drum wall. Particle slip at the drum wall has been shown to depend on angular position, ranging from 20% at the toe and shoulder to less than 1% at the mid-point. Three-dimensional DEM simulations of a moderately cascading bed of 50,000 spherical elastic particles (Froude number 0.83) with a fractional fill of approximately 30% have also been performed. The drum axis was inclined at 5° to the horizontal, with periodic boundaries at the ends of the drum. The mean period of bed circulation was found to be 0.28 s. A liquid binder was added to the system using a spray model based on the concept of a wet surface energy. Granule formation and breakage processes have been demonstrated in the system.
Abstract:
Molecular dynamics (MD) has been used to identify the relative distribution of dysprosium in the phosphate glass DyAl0.30P3.05O9.62. The MD model has been compared directly with experimental data obtained from neutron diffraction, enabling a detailed comparison beyond the total-structure-factor level. The MD simulation gives Dy-Dy correlations at 3.80(5) and 6.40(5) angstroms with relative coordination numbers of 0.8(1) and 7.3(5), thus providing evidence of minority rare-earth clustering within these glasses. The nearest-neighbour Dy-O peak occurs at 2.30 angstroms, with each Dy atom having on average 5.8 nearest-neighbour oxygen atoms. The MD simulation is consistent with the phosphate network model based on interlinked PO4 tetrahedra, in which the addition of the network modifier Dy3+ depolymerizes the phosphate network through the breakage of P-(O)-P bonds whilst leaving the tetrahedral units intact. The role of aluminium within the network has been taken into explicit account, and Al is found to be predominantly (78%) tetrahedrally coordinated. In fact, all four Al bonds are found to be to P (via an oxygen atom), with negligible amounts of Al-O-Dy bonds present. This provides an important insight into the role of Al additives in improving the mechanical properties of these glasses.
Abstract:
Purpose – There appears to be an insatiable demand from markets for organisations to improve their products and services. To meet this demand, there is a need for business process improvement (BPI) methodologies that are holistic, structured and procedural. This paper therefore describes research that has formed and tested a generic and practical methodology, termed model-based and integrated process improvement (MIPI), to support the implementation of BPI and to validate its effectiveness in organisations. The methodology has been created as an aid for practitioners within organisations. Design/methodology/approach – The research objectives were achieved by: reviewing and analysing current methodologies, and selecting a few frameworks against key performance indicators; using a refined Delphi approach and semi-structured interviews with "experts" in the field; and adopting an intervention, case study and process research approach to evaluating the methodology. Findings – The BPI methodology was successfully formed and applied, by the researcher and directly by the companies involved, against the criteria of feasibility, usability and usefulness. Research limitations/implications – The paper demonstrates new knowledge on how to systematically assess a BPI methodology in practice. Practical implications – The MIPI methodology offers the practitioner (experienced and novice) a set of step-by-step aids necessary to make informed, consistent and efficient changes to business processes. Originality/value – The novelty of this research work is the creation of a holistic, workbook-based methodology with relevant tools and techniques. It extends the capabilities of existing methodologies.
Abstract:
Continuing advances in digital image capture and storage are resulting in a proliferation of imagery and associated problems of information overload in image domains. In this work we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and the domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed, so that human expertise, proficiency and knowledge can be leveraged to support other users in carrying out similar domain tasks using case-based reasoning techniques. In this article we present our framework for capturing task context and describe how we have implemented it as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks. © 2010 Springer Science+Business Media, LLC.
Abstract:
This paper proposes an ISE (Information goal, Search strategy, Evaluation threshold) user classification model, based on Information Foraging Theory, for understanding user interaction with content-based image retrieval (CBIR). The proposed model is verified by a multiple linear regression analysis based on 50 users' interaction features collected from a task-based user study of interactive CBIR systems. To the best of our knowledge, this is the first principled user classification model in CBIR verified by a formal and systematic quantitative analysis of extensive user interaction data. Copyright 2010 ACM.
Abstract:
Hospitals everywhere are integrating health data using electronic health record (EHR) systems, and disparate and multimedia patient data can be input by different caregivers at different locations as encapsulated patient profiles. Healthcare institutions are also using the flexibility and speed of wireless computing to improve quality and reduce costs. We are developing a mobile application that allows doctors to efficiently record and access complete and accurate real-time patient information. The system integrates medical imagery with textual patient profiles, as well as expert interactions by healthcare personnel, using knowledge management and case-based reasoning techniques. The application can assist other caregivers in searching large repositories of previous patient cases: a patient's symptoms can be input to a portable device, and the application can quickly retrieve similar profiles, which can be used to support effective diagnoses and prognoses by comparing symptoms, treatments, diagnoses, test results and other patient information. © 2007 Sage Publications.
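The case-retrieval step such a system relies on can be illustrated with a minimal sketch. The Jaccard similarity over symptom sets and the dictionary case structure below are illustrative assumptions, not the described application's actual similarity measure or data model.

```python
def retrieve_similar(query_symptoms, case_base, k=3):
    """Rank stored patient cases by Jaccard similarity of symptom sets
    and return the k most similar (a minimal case-based retrieval step)."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return sorted(case_base,
                  key=lambda c: jaccard(query_symptoms, c["symptoms"]),
                  reverse=True)[:k]

# Hypothetical case base for illustration only
cases = [
    {"id": 1, "symptoms": ["cough", "fever", "fatigue"], "diagnosis": "flu"},
    {"id": 2, "symptoms": ["rash", "itching"], "diagnosis": "dermatitis"},
    {"id": 3, "symptoms": ["cough"], "diagnosis": "cold"},
]
best = retrieve_similar(["cough", "fever"], cases, k=1)[0]  # matches case 1
```

A production system would combine many more features (treatments, test results, demographics) with weighted or learned similarity measures, but the retrieve-then-compare shape is the same.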
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low density lipoprotein-cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, and drug efficacy and drug costs, were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289 000 with atorvastatin, £315 000 with simvastatin, £333 000 with pravastatin and £167 000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. Incremental drug costs per extra patient treated to the LDL-C and TC targets, compared with fluvastatin, were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to the LDL-C and TC targets.
For a given drug budget, more patients would achieve NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
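The incremental cost figures follow directly from the per-1000-patient drug costs and target-attainment percentages reported above: extra cost divided by extra patients reaching target. The short check below reproduces the LDL-C figures to within a few pounds; the small residual differences presumably reflect rounding in the published inputs.

```python
# Figures per 1000 patients, taken from the abstract (2002 drug costs, GBP)
cost_per_1000 = {"atorvastatin": 289_000, "simvastatin": 315_000,
                 "pravastatin": 333_000, "fluvastatin": 167_000}
pct_at_ldl_target = {"atorvastatin": 74.4, "simvastatin": 46.4,
                     "pravastatin": 28.4, "fluvastatin": 13.2}

def incremental_cost(drug, baseline="fluvastatin"):
    """Incremental drug cost per extra patient reaching the LDL-C target,
    relative to the baseline statin."""
    extra_cost = cost_per_1000[drug] - cost_per_1000[baseline]
    # percentage points * 10 = extra patients per 1000 treated
    extra_patients = 10 * (pct_at_ldl_target[drug] - pct_at_ldl_target[baseline])
    return extra_cost / extra_patients

atorva = incremental_cost("atorvastatin")  # ~£199, vs the reported £198
```

For example, atorvastatin costs £122 000 more per 1000 patients than fluvastatin and brings 612 more of them to target, giving roughly £199 per extra patient treated.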
Abstract:
A methodology for the computer-aided investigation and assurance of safety for complex structures, and a prototype of the intelligent applied system that implements it, are considered. The methodology is determined by the model of the object under scrutiny, by the structure and functions of the safety investigation, and by a set of research methods. The methods are based on object-oriented database technology, expert systems and mathematical modelling. The prototype of the intelligent system is component software that supports decision making in the process of safety investigations and in the investigation of the causes of failure. Decision making is supported by analogy, through a determined search for precedents (cases) with respect to the predicted (at the design stage) and observed (at the operation stage) parameters of damage, destruction and malfunction of a complex hazardous construction.
Abstract:
This paper aims at the development of procedures and algorithms for the application of artificial intelligence tools to acquire, process and analyze various types of knowledge. The proposed environment integrates knowledge- and decision-process modelling techniques such as neural networks and fuzzy logic-based reasoning methods. The problem of identifying complex processes with the use of neuro-fuzzy systems is solved. The proposed classifier has been successfully applied in building a decision support system for solving a managerial problem.
Abstract:
Let v be an array. The range query problem concerns the design of data structures for implementing the following operations: the operation update(j, x) has the effect v_j ← v_j + x, and the query operation retrieve(i, j) returns the partial sum v_i + ... + v_j. These tasks are to be performed on-line. We define an algebraic model, based on the use of matrices, for the study of the problem. We also establish a lower bound for the sum of the average complexities of both kinds of operations, and demonstrate that this lower bound is near-optimal in terms of asymptotic complexity.
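A standard data structure realising exactly these two operations, though not the paper's matrix-based algebraic model, is the Fenwick (binary indexed) tree, which performs both update and retrieve in O(log n), consistent with the logarithmic lower bound the abstract refers to:

```python
class FenwickTree:
    """Fenwick (binary indexed) tree: point update and range sum,
    both in O(log n) time and O(n) space."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-based internal indexing

    def update(self, j, x):
        """v[j] <- v[j] + x (j is 0-based)."""
        j += 1
        while j <= self.n:
            self.tree[j] += x
            j += j & -j            # step to the next covering node

    def _prefix(self, j):
        """Sum of v[0..j-1] using the 1-based internal index j."""
        s = 0
        while j > 0:
            s += self.tree[j]
            j -= j & -j            # strip the lowest set bit
        return s

    def retrieve(self, i, j):
        """Return v[i] + ... + v[j] (inclusive, 0-based)."""
        return self._prefix(j + 1) - self._prefix(i)

ft = FenwickTree(8)
ft.update(2, 5)
ft.update(4, 3)
total = ft.retrieve(0, 7)  # 8
```

Each operation walks at most one root-to-leaf path of implicit binary-representation nodes, so the sum of the two operations' costs is Θ(log n) per request, matching the near-optimality claim in asymptotic terms.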