13 results for development of processes
in Aston University Research Archive
Abstract:
In 1974 Dr D M Bramwell published his research work at the University of Aston, part of which was the establishment of an elemental work study database covering drainage construction. The Transport and Road Research Laboratory decided to extend that work as part of their continuing research programme into the design and construction of buried pipelines by placing a research contract with Bryant Construction. This research may be considered under two broad categories. In the first, site studies were undertaken to validate and extend the database. The studies showed good agreement with the existing data, with the exception of the excavation, trench shoring and pipelaying data, which were amended to incorporate new construction plant and methods. An interactive on-line computer system for drainage estimating was developed. This system stores the elemental data, synthesizes the standard time of each drainage operation and is used to determine the required resources and construction method of the total drainage activity. The remainder of the research addressed the general topic of construction efficiency. An on-line, command-driven computer system was produced. This system uses a stochastic simulation technique, based on distributions of site efficiency measurements, to evaluate the effects of varying performance levels. The analysis of this performance data quantifies the variability inherent in construction and demonstrates how some of this variability can be accounted for by considering the characteristics of a contract. A long-term trend of decreasing efficiency with contract duration was also identified. The results obtained from the simulation suite were compared to site records collected from current contracts. This showed that the approach gives comparable answers, but that these are greatly affected by the site performance parameters.
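The abstract does not detail the simulation suite's internals; the following minimal sketch shows the general kind of stochastic technique described, in which elemental standard times are scaled by efficiency factors drawn from an empirical distribution. All element names, times and efficiency measurements here are assumed purely for illustration and are not taken from the data base.

```python
import random

# Hypothetical elemental standard times (hours) for one drainage operation;
# element names and values are illustrative only.
STANDARD_TIMES = {"excavate": 4.0, "shore": 1.5, "lay_pipe": 2.5, "backfill": 3.0}

# Assumed empirical site-efficiency measurements (1.0 = standard performance).
EFFICIENCY_SAMPLES = [0.72, 0.80, 0.85, 0.88, 0.90, 0.95, 0.97, 1.00, 1.05, 1.10]

def simulate_operation_duration(n_runs: int = 10_000) -> float:
    """Monte Carlo estimate of mean operation duration under variable efficiency."""
    total = 0.0
    for _ in range(n_runs):
        # Draw an efficiency factor for each element from the empirical distribution
        # and inflate (or deflate) its standard time accordingly.
        total += sum(t / random.choice(EFFICIENCY_SAMPLES)
                     for t in STANDARD_TIMES.values())
    return total / n_runs

if __name__ == "__main__":
    print(f"Estimated mean duration: {simulate_operation_duration():.2f} h")
```

Repeating the run with different efficiency distributions is one way such a suite can expose how sensitive predicted durations are to site performance parameters.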
Abstract:
Investment in transport infrastructure can be highly sensitive to uncertainty. The scale and lead time of strategic transport programmes are such that they require continuing policy support and accurate forecasting. Delay, cost escalation and abandonment of projects often result if these conditions are not present. In Part One the physical characteristics of infrastructure are identified as a major constraint on planning processes. The extent to which strategies and techniques acknowledge these constraints is examined. A simple simulation model is developed to evaluate the effects on system development of variations in the scale and lead time of investments. In Part Two, two case studies of strategic infrastructure investment are analysed. The absence of a policy consensus for airport location was an important factor in the delayed resolution of the Third London Airport issue. In London itself, the traffic and environmental effects of major highway investment ultimately resulted in the abandonment of plans to construct urban motorways. In both cases, the infrastructure implications of alternative strategies are reviewed with reference to the problems of uncertainty. In conclusion, the scale of infrastructure investment is considered the most important of the constraints on the processes of transport planning. Adequate appraisal of such constraints may best be achieved by evaluation more closely aligned to policy objectives.
Abstract:
Tissue transglutaminase (TG2) is a Ca2+-dependent enzyme and probably the most ubiquitously expressed member of the mammalian transglutaminase family. TG2 plays a number of important roles in a variety of biological processes. Via its transamidating function, it is responsible for the cross-linking of proteins by forming isopeptide bonds between glutamine and lysine residues. Intracellularly, Ca2+ activation of the enzyme is normally tightly regulated by the binding of GTP. However, upregulated levels of TG2 are associated with many disease states, such as celiac sprue, certain types of cancer, fibrosis, cystic fibrosis, multiple sclerosis, and Alzheimer's, Huntington's and Parkinson's diseases. Selective inhibitors of TG2, both cell-penetrating and non-cell-penetrating, would therefore serve as novel therapeutic tools for the treatment of these disease states. Moreover, they would provide useful tools to fully elucidate the cellular mechanisms in which TG2 is involved and to help understand how the enzyme is regulated at the cellular level. The current paper is intended to give an update on the recently discovered classes of TG2 inhibitors along with their structure-activity relationships. The biological properties of these derivatives, in terms of both activity and selectivity, will also be reported in order to assess their potential for future therapeutic developments. © 2011 Springer-Verlag.
Abstract:
In order to survive in an increasingly customer-oriented marketplace, organizations must pursue continuous quality improvement, which has become the hallmark of the fastest-growing quality organizations. In recent years, attention has been focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within the specified process rather than to focus on cooperation within the production workflow. This paper proposes an intelligent system with a newly designed algorithm and the universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature to provide all levels of employees with the ability to understand the relationships between processes, especially when any aspect of the process is going to degrade or fail. An example using generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
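As an illustration of how a fuzzy association rule of the kind mentioned above can be scored, the sketch below computes fuzzy support and confidence for a hypothetical rule "temperature is High → product is Defective". The membership functions, process records and thresholds are all assumed for illustration and are not those used in the paper.

```python
# Hypothetical fuzzy memberships for two process attributes.
def high_temp(t: float) -> float:
    """Assumed membership of 'temperature is High' (0 below 60, 1 above 80)."""
    return max(0.0, min(1.0, (t - 60.0) / 20.0))

def defective(score: float) -> float:
    """Assumed membership of 'product is Defective' from an inspection score."""
    return max(0.0, min(1.0, (score - 0.5) / 0.5))

# Illustrative process records: (temperature, defect score).
records = [(55, 0.1), (72, 0.6), (85, 0.9), (64, 0.3), (90, 0.95), (70, 0.7)]

# Fuzzy support: average joint membership; confidence: joint over antecedent.
antecedent = [high_temp(t) for t, _ in records]
joint = [min(high_temp(t), defective(s)) for t, s in records]

support = sum(joint) / len(records)
confidence = sum(joint) / sum(antecedent) if sum(antecedent) else 0.0
print(f"fuzzy support = {support:.2f}, confidence = {confidence:.2f}")
```

Rules whose support and confidence exceed chosen thresholds would then flag relationships between process parameters and quality problems.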
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of the state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
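The thesis's extended protocols are not reproduced here, but the baseline idea can be sketched: a two-phase commit round in which a late or missing vote is treated as an abort, which is one way to obtain the bounded decision time a real-time controller needs. The participant names, delays, timeout and voting behaviour below are all simulated assumptions, not the thesis's design.

```python
import random

def collect_vote(participant: str, timeout_s: float) -> str:
    """Simulate asking one participant to prepare; a slow reply counts as a timeout."""
    reply_delay = random.uniform(0.0, 0.2)   # assumed communication delay (seconds)
    if reply_delay > timeout_s:
        return "TIMEOUT"
    return random.choice(["COMMIT", "COMMIT", "COMMIT", "ABORT"])

def two_phase_commit(participants, timeout_s: float = 0.15) -> str:
    # Phase 1: solicit votes; any abort or timeout forces a global abort,
    # so the coordinator always decides within a bounded time.
    votes = {p: collect_vote(p, timeout_s) for p in participants}
    decision = "COMMIT" if all(v == "COMMIT" for v in votes.values()) else "ABORT"
    # Phase 2: broadcast the decision (here simply reported).
    print(f"votes={votes} -> global {decision}")
    return decision

if __name__ == "__main__":
    two_phase_commit(["press_controller", "conveyor_controller", "robot_controller"])
```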
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because it is considered that the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered, and the resulting nets are analysed using concurrency sets.
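By way of illustration, the sketch below builds the full reachability graph of a small place/transition net (two controllers sharing one resource). The concurrency-set technique described above, which limits the construction to a partial graph around a state of interest, is not reproduced; the net structure and names are assumed purely for illustration.

```python
from collections import deque

# Each transition maps a name to (preconditions, postconditions),
# given as {place: token count}. Illustrative mutual-exclusion net.
TRANSITIONS = {
    "start_A": ({"A_idle": 1, "resource": 1}, {"A_busy": 1}),
    "end_A":   ({"A_busy": 1}, {"A_idle": 1, "resource": 1}),
    "start_B": ({"B_idle": 1, "resource": 1}, {"B_busy": 1}),
    "end_B":   ({"B_busy": 1}, {"B_idle": 1, "resource": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def reachability_graph(initial):
    """Breadth-first construction of the reachability graph from an initial marking."""
    seen = {frozenset(initial.items())}
    edges, queue = [], deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in TRANSITIONS.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((m, name, m2))
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
    return edges

for src, t, dst in reachability_graph({"A_idle": 1, "B_idle": 1, "resource": 1}):
    print(src, f"--{t}-->", dst)
```

Even this tiny example makes the state-explosion concern concrete: each added controller multiplies the number of markings, which is what pruning via concurrency sets is intended to manage.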
Abstract:
A major application of computers has been to control physical processes in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme which is concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at their early stages of development, with a particular bias towards an application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
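As an informal illustration of the basic SFC evolution rule referred to above (a transition is cleared when all of its preceding steps are active and its condition is true, deactivating those steps and activating the following ones), the sketch below steps a small, assumed chart through one machine cycle. It is not the formal definition given in the thesis; step names and conditions are illustrative only.

```python
# Each transition of the assumed SFC fragment:
# (preceding steps, condition name, following steps).
SFC_TRANSITIONS = [
    ({"S_wait"},  "part_present", {"S_clamp"}),
    ({"S_clamp"}, "clamp_closed", {"S_press"}),
    ({"S_press"}, "cycle_done",   {"S_wait"}),
]

def evolve(active_steps, true_conditions):
    """One scan: clear every transition whose preceding steps are all active and
    whose condition holds, using the step activity at the start of the scan."""
    new_active = set(active_steps)
    for pre, cond, post in SFC_TRANSITIONS:
        if pre <= active_steps and cond in true_conditions:
            new_active -= pre
            new_active |= post
    return new_active

state = {"S_wait"}
for conditions in [{"part_present"}, {"clamp_closed"}, {"cycle_done"}]:
    state = evolve(state, conditions)
    print(state)   # S_clamp, then S_press, then back to S_wait
```

Evaluating all transitions against the step activity captured at the start of the scan mirrors the simultaneous-clearing behaviour that a Petri net translation of an SFC has to preserve.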
Abstract:
In vitro studies of drug absorption processes are undertaken to assess drug candidate or formulation suitability, to investigate mechanisms, and ultimately to develop predictive models. This study included each of these approaches, with the aim of developing novel in vitro methods for inclusion in a drug absorption model. Two model analgesic drugs, ibuprofen and paracetamol, were selected. The study focused on three main areas: the interaction of the model drugs with co-administered antacids, the elucidation of the mechanisms responsible for the increased absorption rate observed in a novel paracetamol formulation, and the development of novel ibuprofen tablet formulations containing alkalising excipients as dissolution promoters. Several novel dissolution methods were developed. A method to study the interaction of drug/excipient mixtures in the powder form was successfully used to select suitable dissolution-enhancing excipients. A method to study intrinsic dissolution rate using paddle apparatus was developed and used to study dissolution mechanisms. Methods to simulate stomach and intestine environments in terms of media composition and volume and drug/antacid doses were developed. Antacid addition greatly increased the dissolution of ibuprofen in the stomach model. Novel methods to measure drug permeability through rat stomach and intestine were developed, using sac methodology. The methods allowed direct comparison of the apparent permeability values obtained. Tissue stability, reproducibility and integrity were observed, with selectivity between paracellular and transcellular markers and between hydrophilic and lipophilic compounds within a homologous series of beta-blockers.
Abstract:
Soil erosion is one of the most pressing issues facing developing countries. The need for soil erosion assessment is paramount, as a successful and productive agricultural base is necessary for economic growth and stability. In Ghana, a country with an expanding population and high potential for economic growth, agriculture is an important resource; however, most of the crop production is restricted to low-technology shifting cultivation agriculture. The high-intensity seasonal rainfall coincides with the early growing period of many of the crops, meaning that plots are very susceptible to erosion, especially on steep-sided valleys in the region south of Lake Volta. This research investigated the processes of soil erosion by rainfall with the aim of producing a sediment yield model for a small semi-agricultural catchment in rural Ghana. Various types of modelling techniques were considered to discover those most applicable to the sub-tropical environment of Southern Ghana. Once an appropriate model had been developed and calibrated, the aim was to examine how the model could be scaled up, using sub-catchments, to calculate sedimentation rates of Lake Volta. An experimental catchment was located in Ghana, south-west of Lake Volta, where data on rainstorms and the associated streamflow, sediment loads and soils (moisture content, classification and particle size distribution) were collected to calibrate the model. Additional data were obtained from the Soil Research Institute in Ghana to explore calibration of the Universal Soil Loss Equation (USLE, Wischmeier and Smith, 1978) for Ghanaian soils and environment. It was shown that the USLE could be successfully converted to provide meaningful soil loss estimates in the Ghanaian environment. However, due to experimental difficulties, the proposed theory and methodology of the sediment yield model could only be tested in principle. Future work may include validation of the model and subsequent scaling up to estimate sedimentation rates in Lake Volta.
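For reference, the USLE cited above takes its standard multiplicative form, shown below; the symbols are the conventional factors from Wischmeier and Smith (1978), not values calibrated in this work.

```latex
% Standard form of the Universal Soil Loss Equation (Wischmeier and Smith, 1978).
A = R \, K \, L \, S \, C \, P
% A      mean annual soil loss per unit area
% R      rainfall (and runoff) erosivity factor
% K      soil erodibility factor
% L, S   slope length and slope steepness factors
% C      cover and management factor
% P      support (conservation) practice factor
```

Calibrating the equation for a new environment amounts to re-deriving these factors (notably R and K) from local rainfall and soil records rather than from the original US data.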
Abstract:
The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years, a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, thus allowing for the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, including macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared to that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
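As a rough illustration of the kind of regression described (relating a retention-curve parameter to an easily measured soil property), the sketch below fits a simple linear relation to assumed data. Neither the data, the chosen property (sand content) nor the functional form is taken from the thesis.

```python
import numpy as np

# Illustrative samples: percentage sand content and a fitted retention-curve
# parameter (e.g. an air-entry suction, in cm of water) from laboratory curves.
sand_pct = np.array([12, 25, 38, 47, 60, 72, 85], dtype=float)
air_entry = np.array([55, 42, 33, 27, 20, 15, 10], dtype=float)

# Least-squares linear regression: air_entry ~ a * sand% + b.
a, b = np.polyfit(sand_pct, air_entry, deg=1)
print(f"air_entry ≈ {a:.2f} * sand% + {b:.2f}")

# The fitted relation can then supply suction parameters for soils where only
# particle-size distribution data are available.
print("predicted air-entry suction at 50% sand:", a * 50 + b)
```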
Abstract:
There is currently great scientific and medical interest in the potential of tissue grown from stem cells. These cells present opportunities for generating model systems for drug screening and toxicological testing which would be expected to be more relevant to human outcomes than animal-based tissue preparations. Newly realised astrocytic roles in the brain have fundamental implications within the context of stem cell derived neuronal networks. If the aim of stem cell neuroscience is to generate functional neuronal networks that behave as networks do in the brain, then it becomes clear that we must include and understand all the cellular components that comprise that network and that are important to support synaptic integrity and cell-to-cell signalling. We have shown that stem cell derived neurons exhibit spontaneous and coordinated calcium elevations in clusters and in extended processes, indicating local and long-distance signalling (1). Tetrodotoxin-sensitive network activity could also be evoked by electrical stimulation. Similarly, astrocytes exhibit morphology and functional properties consistent with this glial cell type. Astrocytes also respond to neuronal activity and to exogenously applied neurotransmitters with calcium elevations and, in contrast to neurons, also exhibit spontaneous rhythmic calcium oscillations. Astrocytes also generate propagating calcium waves that are gap junction and purinergic signalling dependent. Our results show that stem cell derived astrocytes exhibit appropriate functionality and that stem cell neuronal networks interact with astrocytic networks in co-culture. Using mixed cultures of stem cell derived neurons and astrocytes, we have also shown that both cell types modulate their glucose uptake, glycogen turnover and lactate production in response to glutamate as well as to increased neuronal activity (2). This finding is consistent with neuron-astrocyte metabolic coupling, thus demonstrating a tractable human model which will facilitate the study of the metabolic coupling between neurons and astrocytes and its relationship with CNS functional issues ranging from plasticity to neurodegeneration. Indeed, cultures treated with oligomers of amyloid beta 1-42 (Aβ1-42) also display a clear hypometabolism, particularly with regard to utilisation of substrates such as glucose (3). Both co-cultures of neurons and astrocytes and purified cultures of astrocytes showed a significant decrease in glucose uptake after treatment with 2 and 0.2 μmol/L Aβ at all time points investigated (P < 0.01). In addition, a significant increase in the glycogen content of cells was also measured. Mixed neuron and astrocyte co-cultures as well as pure astrocyte cultures showed an initial decrease in glycogen levels at 6 hours compared with control at 0.2 μmol/L and 2 μmol/L (P < 0.01). These changes were accompanied by changes in NAD+/NADH (P < 0.05), ATP (P < 0.05) and glutathione levels (P < 0.05), suggesting a disruption in the energy-redox axis within these cultures. The high energy demands associated with neuronal functions such as memory formation and protection from oxidative stress put these cells at particular risk from Aβ-induced hypometabolism. As numerous cell types interact in the brain, it is important that any in vitro model developed reflects this arrangement. Our findings indicate that stem cell derived neuron and astrocyte networks can communicate, and so have the potential to interact in a tripartite manner as is seen in vivo.
This study therefore lays the foundation for further development of stem cell derived neurons and astrocytes into therapeutic cell replacement and human toxicology/disease models. More recently, our data provide evidence for a detrimental effect of Aβ on carbohydrate metabolism in both neurons and astrocytes. As a purely in vitro system, human stem cell models can be readily manipulated and maintained in culture for a period of months without the use of animals. In our laboratory, cultures can be maintained for up to 12 months, thus providing the opportunity to study the consequences of these changes over extended periods of time relevant to aspects of the disease progression time frame in vivo. In addition, their human origin provides a more realistic in vitro model as well as informing other human in vitro models such as patient-derived iPSC.
Abstract:
Small and Medium Enterprises (SMEs) play an important part in the economy of any country. Initially, a flat management hierarchy, quick response to market changes and cost competitiveness were seen as the competitive characteristics of an SME. Recently, in developed economies, technological capabilities (TCs) management, that is, managing existing and developing or assimilating new technological capabilities for continuous process and product innovation, has become important for both large organisations and SMEs in achieving sustained competitiveness. Therefore, various technological innovation capability (TIC) models have been developed at the firm level to assess firms' innovation capability. The output of these models helps policy makers and firm managers to devise policies for deepening a firm's technical knowledge generation, acquisition and exploitation capabilities for a sustained technological competitive edge. However, in developing countries TCs management is more a matter of TCs upgrading: the acquisition of TCs from abroad, followed by their assimilation, innovation and exploitation. Most of the TIC models for developing countries delineate the level of TIC required as firms move from the acquisition to the innovative level. However, these models do not provide tools for assessing the existing level of TIC of a firm and the various factors affecting TIC, to support practical interventions for TCs upgrading of firms for improved or new processes and products. Recently, the Government of Pakistan (GOP) has realised the importance of TCs upgrading in SMEs, especially export-oriented ones, for their sustained competitiveness. The GOP has launched various initiatives with local and foreign assistance to identify ways and means of upgrading local SMEs' capabilities. This research targets this gap and develops a TIC assessment model for identifying the existing level of TIC of manufacturing SMEs located in clusters in Sialkot, Pakistan. SME executives in three different export-oriented clusters at Sialkot were interviewed to analyse the technological capabilities development initiatives (CDIs) taken by them to develop and upgrade their firms' TCs. Data analysed at the CDI, firm, cluster and cross-cluster level first helped classify the interviewed firms as leaders, followers and reactors, with leader firms claiming to introduce mostly new CDIs to their cluster. Second, the data analysis showed that most of the interviewed leader firms exhibited 'learning by interacting' and 'learning by training' capabilities for expertise acquisition from customers and international consultants. However, these leader firms did not show much evidence of learning by using, reverse engineering or R&D capabilities, which according to the extant literature are necessary for upgrading the existing TIC level, and thus the TCs of a firm, for better value-added processes and products. The research results are supported by the extant literature on the Sialkot clusters. Thus, in sum, a TIC assessment model was developed in this research which qualitatively identified the interviewed firms' TIC levels and the factors affecting them, and which is validated by the existing literature on the interviewed Sialkot clusters. Further, the research gives policy-level recommendations for TIC, and thus TCs, upgrading at firm and cluster level for targeting better value-added markets.
Abstract:
The semantic model developed in this research arose in response to the difficulty a group of mathematics learners had with conventional mathematical language and with their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to include the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment introduced another layer of complexity, both in terms of the learning process and the underlying relationship between the four cognitive processes.