60 results for Process control -- Data processing


Relevance:

100.00%

Publisher:

Abstract:

In recent years there has been a remarkable increase in information exchange between organizations due to changes in market structures and new forms of business relationships. The increase in the volume of business-to-business (B2B) transactions has contributed significantly to the expanding need for electronic systems that could effectively support communication between collaborating organizations. Examples of such collaborating systems include those that offer various types of business-to-business services, e.g. electronic commerce, electronic procurement systems, electronic links between legacy systems, or outsourced systems providing data processing services via electronic media. The development and running of B2B electronic systems has not been problem-free. One of the most intractable issues found in B2B systems is the prevalence of inter-organisational conflict reported to exist and persist between the participants of inter-organisational electronic networks. There have been very few attempts, however, to prescribe any practical method of detecting the antecedents of such conflict early in B2B development to facilitate smooth construction and subsequent operation of B2B services. The research reported in this paper focuses on the identification and analysis of the antecedents of conflict in a joint process involving different organizations in a B2B venture. The proposed method involves identifying domain stakeholders, capturing and packaging their views and concerns into a reusable form, and applying the captured domain experience in B2B systems development. The concepts and methods introduced in this paper are illustrated with examples drawn from our study of six web-enabled payroll systems.

Relevance:

100.00%

Publisher:

Abstract:

The spray forming process is a novel method of rapidly manufacturing tools and dies for stamping and injection operations. The process sprays molten tool steel from a set of arc spray guns onto a ceramic former to build up a thick steel shell. The volumetric contraction that occurs as the steel cools is offset by a volumetric expansion taking place within the sprayed steel, which allows dimensionally accurate tools to be produced. To ensure that the required phase transformation takes place, the temperature of the steel is regulated during spraying. The sprayed metal acts both as a source of mass and a source of heat, and by adjusting the rate at which metal is sprayed, the temperature profile over the surface of the steel can be controlled. The temperature profile is measured using a thermal imaging camera and regulated by adjusting the rate at which the guns spray the steel. Because the temperature is regulated by adjusting the feed rate to an actuator that is moving over the surface, this is an example of mobile control, a class of distributed parameter control. The system has previously been controlled using a PI controller. This paper describes the application of an H∞ tracking-type controller, the aim being for the average temperature to follow a desired profile. The controllability of the underlying system is also studied.
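
As a rough illustration of the feed-rate-based temperature regulation described above (the earlier PI approach, not the paper's H∞ design), the sketch below closes a simple PI loop around a crude thermal model; the gains, actuator limits and heat-balance coefficients are assumptions chosen for demonstration only.

```python
# Minimal PI sketch of feed-rate temperature regulation (illustrative only).
# Gains, actuator limits and the toy heat balance are assumed values; the
# paper itself applies an H-infinity tracking-type controller.

def pi_step(setpoint_K, measured_K, integral, dt=1.0,
            kp=0.01, ki=0.005, u_min=0.0, u_max=1.0):
    """One PI update: returns (feed_rate_command, updated_integral)."""
    error = setpoint_K - measured_K
    # simple anti-windup: hold the integrator while the actuator is saturated
    if u_min < kp * error + ki * integral < u_max:
        integral += error * dt
    u = min(u_max, max(u_min, kp * error + ki * integral))
    return u, integral

# Toy closed loop: sprayed metal adds heat in proportion to feed rate while
# the shell loses heat to the surroundings; temperature is driven towards 700 K.
temp, integral = 400.0, 0.0
for _ in range(300):
    feed, integral = pi_step(setpoint_K=700.0, measured_K=temp, integral=integral)
    temp += 5.0 * feed - 0.01 * (temp - 300.0)
print(f"surface temperature after 300 steps: {temp:.1f} K")
```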

Relevance:

100.00%

Publisher:

Abstract:

Air-atomized pure aluminium powder with 15 at.% MgB2 was mechanically milled (MMed) by using a vibrational ball mill, and MMed powders were consolidated by spark plasma sintering (SPS) to produce composite materials with high specific strength. Solid-state reactions of MMed powders have been examined by X-ray diffraction (XRD), and mechanical properties of the SPSed materials have been evaluated by hardness measurements and compression tests. Orientation images of microstructures were obtained via the electron backscatter diffraction (EBSD) technique.

The solid-state reactions in the Al–15 at.% MgB2 composite materials occurred between the MMed powders and the process control agent (PCA) after heating at 773–873 K for 24 h. The products of the solid-state reactions were a combination of AlB2, Al3BC and spinel MgAl2O4. Mechanical milling (MM) processing time and heating temperature affected the characteristics of these intermetallic compounds. As a result of the solid-state reactions in the MMed powders, a hardness increase was observed after heating at 573–873 K for 24 h. Full density was attained in the SPSed materials produced from the 4 h or 8 h MMed powders of the Al–15 at.% MgB2 composite under an applied pressure of 49 MPa at 873 K for 1 h. The microstructure of the SPSed materials fabricated from the MMed powders presented a bimodal aluminium matrix grain structure with a random grain distribution. The Al–15 at.% MgB2 SPSed material produced from powder MMed for 8 h exhibited the highest compressive 0.2% proof strength, 846 MPa at room temperature.

Relevance:

100.00%

Publisher:

Abstract:

The current work used discrete event simulation techniques to model the economics of quality within an actual automotive stamping plant. Automotive stamping is a complex, capital-intensive process requiring part-specific tooling and specialised machinery. Quality control and quality improvement are difficult in the stamping environment due to the general lack of process understanding and the large number of interacting variables. These factors have prevented the widespread use of statistical process control. In this work, a model of the quality control techniques used at the Ford Geelong Stamping plant is developed and indirectly validated against results from production. To date, most discrete event models are of systems where the quality control process is clearly defined by the rules of statistical process control. However, the quality control technique used within the stamping plant is for the operator to perform a 100% visual inspection while unloading the finished panels. In the developed model, control is enacted after a cumulative count of defective items is observed, thereby approximating the operator, who allows a number of defective panels to accumulate before resetting the line. Analysis of this model found that the cost sensitivity to inspection error depends upon the level of control, and that the level of control determines line utilisation. Additional analysis demonstrated that additional inspection processes would lead to more stable cost structures, but these structures may not necessarily be lower cost. The model was subsequently applied to investigate the economics of quality improvement. The quality problem of panel blemishes, induced by slivers (small metal fragments), was chosen as a case study. Errors of 20-30% were observed during direct validation of the cost model, and it was concluded that the use of discrete event simulation models for applications requiring high accuracy would not be possible unless the production system was of low complexity. However, the model could be used to evaluate the sensitivity of input factors and to investigate the effects of a number of potential improvement opportunities. Therefore, the research concluded that it is possible to use discrete event simulation to determine the quality economics of an actual stamping plant. However, limitations imposed by the inability of the model to consider a number of external factors, such as continuous improvement, operator working conditions or wear, and by the lack of reliable quality data, result in low cost accuracy. Despite this, it can still be demonstrated that discrete event simulation has significant benefits over alternative modelling methods.
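
As a toy illustration of the control policy described above (not the validated Ford Geelong model), the sketch below simulates 100% visual inspection with an imperfect inspector and resets the line once a cumulative count of detected defects is reached; the defect rate, inspection error and reset threshold are illustrative assumptions.

```python
import random

# Toy discrete-event-style run of the quality control policy described above:
# every panel is visually inspected as it is unloaded, and the line is reset
# once a cumulative count of detected defects has accumulated.
# All rates and the threshold are illustrative assumptions, not plant data.

random.seed(1)
DEFECT_RATE = 0.05        # chance a panel is blemished (e.g. sliver-induced)
MISS_RATE = 0.2           # inspection error: defective panel passed as good
RESET_THRESHOLD = 5       # detected defects that trigger a line reset

detected, escaped, resets = 0, 0, 0
for panel in range(10_000):
    defective = random.random() < DEFECT_RATE
    if defective:
        if random.random() < MISS_RATE:
            escaped += 1            # inspection error: defect passes undetected
        else:
            detected += 1
            if detected % RESET_THRESHOLD == 0:
                resets += 1         # operator resets the line
print(f"detected: {detected}, escaped: {escaped}, line resets: {resets}")
```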

Relevance:

100.00%

Publisher:

Abstract:

A common characteristic among parallel/distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, allowing application designers to concentrate freely on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication. A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system that generates intermediate C code which is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message passing, distributed applications. These two languages were designed and developed as part of this doctorate in order to specify distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes. In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. The compilation system takes DAL and DAPL programs and generates machine-code object modules, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. The purpose of the DAS system is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes several operations such as deletion, addition, replacement, and movement of processes, and also detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell). This suite of languages and tools allows distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs for several processes may require only a few dozen lines, compared with the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system is successful at generating binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
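
DAL and DAPL syntax is not reproduced in the abstract; purely as a conceptual sketch of the same separation of concerns, the Python fragment below declares process connectivity in one place and process functionality in another, with the low-level message passing hidden behind queues. The names and structure are illustrative assumptions, not the thesis's languages.

```python
from multiprocessing import Process, Queue

# Conceptual sketch only (not DAL/DAPL): the behaviour of each process is
# written independently of the declaration of how processes are connected,
# echoing the separation of functionality and connectivity described above.

# --- functionality: what each process does ---
def producer(out_q):
    for i in range(5):
        out_q.put(i)            # asynchronous message passing via a queue
    out_q.put(None)             # end-of-stream marker

def consumer(in_q):
    while True:
        msg = in_q.get()
        if msg is None:
            break
        print("consumer received", msg)

if __name__ == "__main__":
    # --- connectivity: how the two processes are wired together ---
    channel = Queue()
    processes = [Process(target=producer, args=(channel,)),
                 Process(target=consumer, args=(channel,))]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```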

Relevance:

100.00%

Publisher:

Abstract:

One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever-increasing amount of data, predictive classification methods must be able to handle large amounts of data effectively and efficiently. However, it is understood that present requirements push existing algorithms to, and sometimes beyond, their limits, since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions. The first is whether different types of algorithms from those currently employed are required when using large data sets. This is answered by analysing how the bias plus variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms from those currently used. Some insight into the characteristics of suitable algorithms is provided, and this may give direction for the development of future classification prediction algorithms specifically designed for use with large data sets. The second question investigated is the role of sampling in machine learning with large data sets. Sampling has long been used as a means of avoiding the need to scale up algorithms to suit the size of the data set by scaling down the size of the data set to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time. The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little if any accuracy. Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies are proposed. These methodologies include using a varying proportion of instances to determine the next inference step and using a statistical calculation at each inference step to determine sufficient sample size. Experiments show that using a statistical calculation of sample size can not only substantially reduce execution time but can do so with only a small loss, and occasional gain, in accuracy. One of the common uses of sampling is in the construction of learning curves. Learning curves are often used in an attempt to determine the optimal training set size which will maximally reduce execution time while not being detrimental to accuracy. An analysis of the performance of methods for detecting convergence of learning curves is performed, with the focus on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed. It is shown that local accuracy plateaus are a common occurrence, and that ensuring a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt over the applicability of gradient-of-tangent methods for detecting convergence, and over the viability of learning curves for reducing execution time in general.
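
As a rough sketch of the learning-curve idea discussed above (not the dissertation's experiments), the fragment below trains on progressively larger sub-samples and stops once the gradient of the tangent to the accuracy curve falls below a threshold; the data set, learner, sample sizes and threshold are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative sketch: grow the training sub-sample, record test accuracy,
# and stop when the learning curve's gradient drops below a small threshold.
# Data, learner, sample sizes and threshold are assumptions for demonstration.

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

sizes = [250, 500, 1000, 2000, 4000, 8000, len(X_train)]
curve = []
for n in sizes:
    model = DecisionTreeClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    curve.append(accuracy_score(y_test, model.predict(X_test)))
    if len(curve) >= 2:
        # gradient of the tangent between the last two points of the curve
        grad = (curve[-1] - curve[-2]) / (sizes[len(curve) - 1] - sizes[len(curve) - 2])
        if abs(grad) < 1e-6:   # apparent convergence -- may be a local plateau
            print(f"stopping at n={n}, accuracy={curve[-1]:.3f}")
            break
else:
    print(f"no convergence detected; accuracy={curve[-1]:.3f}")
```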

Relevance:

100.00%

Publisher:

Abstract:

The impacts on the environment from human activities are of increasing concern. The need to reduce energy consumption is of particular interest, especially in the construction and operation of buildings, which accounts for between 30 and 40% of Australia's national energy consumption. Much past and more recent emphasis has been placed on methods for reducing the energy consumed in the operation of buildings. With the energy embodied in these buildings having been shown to account for an equally large proportion of a building's life cycle energy consumption, there is a need to look at ways of reducing the embodied energy of buildings and related products. Life cycle assessment (LCA) is considered to be the most appropriate tool for assessing the life cycle energy consumption of buildings and their products. The life cycle inventory analysis (LCIA) step of an LCA, where an inventory of material and energy inputs is gathered, may currently suffer from several limitations, mainly concerned with the use of incomplete and unreliable data sources and LCIA methods. These traditional methods of LCIA include process-based and input-output-based LCIA. Process-based LCIA uses process-specific data, whilst input-output-based LCIA uses data produced from an analysis of the flow of goods and services between sectors of the Australian economy, also known as input-output data. With the incompleteness and unreliability of these two respective methods in mind, hybrid LCIA methods combining process and input-output data have been developed to minimise the errors associated with traditional LCIA methods. Hybrid LCIA methods based on process data have been shown to be incomplete. Hybrid LCIA methods based on input-output data substitute available process data into the input-output model, minimising the errors associated with process-based hybrid LCIA methods. However, until now, this LCIA method had not been tested for its level of completeness and reliability. The aim of this study was to assess the reliability and completeness of hybrid life cycle inventory analysis, as applied to the Australian construction industry. A range of case studies was selected in order to apply the input-output-based hybrid LCIA method and evaluate the results obtained from each case study. These case studies included buildings (two commercial office buildings, two residential buildings and a recreational building) and building-related products (a solar hot water system, a building-integrated photovoltaic system and a washing machine). The range of building types and products selected assisted in testing the input-output-based hybrid LCIA method for its applicability across a wide range of product types. The input-output-based hybrid LCIA method was applied to each of the selected case studies in order to obtain their respective embodied energy results. These results were then evaluated using a number of evaluation methods. These included an analysis of the difference between the process-based and input-output-based hybrid LCIA results, as an evaluation of the completeness of the process-based LCIA method. The second method of evaluation was a comparison between equivalent process and input-output values used in the input-output-based hybrid LCIA method, as a measure of reliability. It was found that the results from a typical process-based LCIA and a process-based hybrid LCIA differ greatly (by up to 80%) from the input-output-based hybrid LCIA results. This gap shows that the quantity of process data currently available in Australia is insufficient. The comparison between equivalent process-based and input-output-based LCIA values showed that the input-output data do not provide a reliable representation of the equivalent process values, for material energy intensities, material inputs or whole products. Therefore, the use of input-output data to account for inadequate or missing process data is not reliable. However, as there is currently no other method for filling the gaps in traditional process-based LCIA, as input-output data are considered to be more complete than process data, and as the errors may be somewhat lower, using input-output data to fill the gaps in traditional process-based LCIA appears to be better than not using any data at all. The input-output-based hybrid LCIA method evaluated in this study has been shown to be the most sophisticated and complete LCIA method currently available for assessing the environmental impacts associated with buildings and building-related products. This finding is significant, as the construction and operation of buildings accounts for a large proportion of national energy consumption. The use of the input-output-based hybrid LCIA method for products other than those related to the Australian construction industry may be appropriate, especially if the material inputs of the product being assessed are similar to those typically used in the construction industry. The input-output-based hybrid LCIA method has been used to correct some of the errors and limitations associated with previous LCIA methods, without the introduction of any new errors. Improvements in current input-output models are also needed, particularly to account for the inclusion of capital equipment inputs (i.e. the energy required to manufacture the machinery and other equipment used in the production of building materials, products etc.). Although further improvements in the quantity of currently available process data are also needed, this study has shown that, with the embodied energy data currently available for LCIA, the input-output-based hybrid LCIA method appears to provide the most reliable and complete method for assessing the environmental impacts of the Australian construction industry.
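
As a simplified sketch of the input-output-based hybrid idea outlined above (not the study's actual model), available process data replace the corresponding input-output pathway values while the input-output total fills the remaining gaps; all quantities below are illustrative assumptions rather than Australian input-output or process data.

```python
# Simplified sketch of an input-output-based hybrid embodied-energy estimate:
# where reliable process data exist, they replace the corresponding
# input-output (IO) pathway values; the IO model fills the remainder.
# All numbers are illustrative assumptions, not real IO or process data.

def hybrid_embodied_energy(io_total_GJ, io_pathways_GJ, process_pathways_GJ):
    """io_total_GJ: whole-product embodied energy from the IO model.
    io_pathways_GJ / process_pathways_GJ: per-pathway values keyed by name."""
    result = io_total_GJ
    for pathway, process_value in process_pathways_GJ.items():
        # subtract the IO estimate for this pathway and substitute process data
        result += process_value - io_pathways_GJ.get(pathway, 0.0)
    return result

io_total = 5200.0                                      # GJ, IO estimate for a building
io_paths = {"concrete": 1400.0, "steel": 900.0}        # IO values for covered pathways
process_paths = {"concrete": 1150.0, "steel": 1020.0}  # available process data
print(f"hybrid embodied energy ≈ "
      f"{hybrid_embodied_energy(io_total, io_paths, process_paths):.0f} GJ")
```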

Relevance:

100.00%

Publisher:

Abstract:

The thesis reviews the literature relating to girls and computing within a framework structured around three specific questions. First, are there differences between girls and boys in their participation in class computing activities and/or in non-class computing activities? Second, do these differences in participation in computing activities have broader implications which justify the growing concern about the under-representation of girls? Third, why are girls under-represented in these activities? Although the available literature is predominantly descriptive, the underlying implicit theoretical model is essentially a social learning model. Girls' differential participation is attributed to learned attitudes towards computing rather than to differences between girls and boys in general ability. These attitudes, which stress the masculine, mathematical and technological aspects of computing, are developed through modelling, direct experience, intrinsic and extrinsic reinforcement, and generalisation from pre-existing attitudes to related curriculum areas. In the literature it is implicitly assumed that these attitudes underlie girls' decisions to self-select out of computing activities. In this thesis, predictions from a social learning model are complemented by predictions derived from expectancy-value, cognitive dissonance and self-perception theories. These are tested in three separate studies. Study one provides data from a pretest-posttest study of 24 children in a year four class learning BASIC. It examines pre- and posttest differences between girls and boys in computing experience, knowledge and achievement, as well as the factors relating to computing achievement. Study two uses a pretest-posttest control group design to study gender differences in the impact of the introduction of Logo into years 1, 3, 5 and 7 in both a coeducational and a single-sex setting, using a sample of 222 children from three schools. Study three utilises a larger sample of 1176 students, drawn from three secondary schools and five primary schools, enabling an evaluation of gender differences in relation to a wide range of class computing experiences and in a broader range of school contexts. The overall results are consistent across the three studies, supporting the contention that social factors, rather than ability differences, influence girls' participation and achievement in computing. The more global theoretical framework, drawing on social learning, expectancy-value, cognitive dissonance and self-perception theories, provides a more adequate explanation of gender differences in participation than does any one of these models alone.

Relevance:

100.00%

Publisher:

Abstract:

This study examines the extended family's impact on microenterprise growth at the individual level, where microenterprise operators have some control over the constraints affecting their operations. Beyond the individual level, microenterprise operators have little control over constraints such as government policies and regulations, competition from import-substitution industries and exploitation by corrupt officials. Therefore, it is at the individual level that the extended family serves as a crucial parameter of microenterprise growth and of the success with which microenterprises (MEs) graduate from the informal sector into the mainstream of small business. Within this domain, the author has examined the extended family and found that there is a need for policy makers and MED administrators to adopt a more culturally sensitive approach to microenterprise growth if the extended family is to be engaged as a partner in their efforts to support microenterprises as a source of income and employment generation. A central question posed is why most writers on microenterprise activities in Ghana have neglected the extended family as a factor that should be considered in the design of microenterprise growth strategies and policies. The answer to this question was explored in the process of data gathering for this thesis, and the results are presented here, especially in chapter 3. Suffice it to note that this neglect has many roots, not least of which is the proclivity of mainstream economics, modern administration practice and the objectivity of double-entry accounting-based documentation procedures to focus on measurable growth in the formal sectors of the economy and on structural constraints such as the lack of finance, lack of market demand, lack of access to technology and government regulations. Consequently, a noticeable trend among these writers is that they rightly advocate that finance be made accessible to microenterprises; however, few question whether the finance is effectively used towards microenterprise growth. This issue is crucial in the face of evidence from this study showing that finance accessed for microenterprise growth is often put to other uses that negate growth, thus keeping microenterprises within the bounds of the informal sector rather than allowing them to graduate out of it. As a result, these writers have neglected the intimate relations between the extended family and microenterprises and, most importantly, the constraint that the extended family imposes on microenterprise growth at the individual level of activity. This study, by targeting the growth of the individual microenterprise in the socio-cultural context in which this growth must be achieved, has highlighted the constraint that the extended family does pose on MED. However, the study also shows that these constraints are important not because there is anything inherently wrong with the extended family, but because the socio-economic and policy environment is not consistent with the positive role that the extended family can and should play in the graduation of microenterprises from the informal to the formal sector of the economy in Ghana.

Relevance:

100.00%

Publisher:

Abstract:

The refinement of microstructure is the most generally accepted approach to simultaneously improving the strength and toughness of steels. In the current study, the role of dynamic/static phase transformation in ferrite grain refinement was investigated using different thermomechanical processing routes. A Ni-30Fe austenitic model alloy was also used to investigate the substructure character formed during deformation. It was revealed that the microstructure of steel could be further refined to the nanoscale through control of both the processing route and the steel composition design.

Relevance:

100.00%

Publisher:

Abstract:

The aluminium-rich ternary aluminium borocarbide, Al3BC, was synthesised for the first time by solid-state reactions occurring during heat treatments after mechanical milling (MM) of pure aluminium with 15 or 50 at% MgB2 powder mixtures in the presence of a process control agent (PCA).

The solid-state reactions in the Al–15 and 50 at% MgB2 composite materials occurred between the MMed powders and the process control agent (PCA) after heating at 773–873 K for 24 h. The products of the solid-state reactions included Al3BC, AlB2, γ-Al2O3 and spinel MgAl2O4. MM processing time and heating temperature in the Al–15 and 50 at% MgB2 composite materials affected which of these compounds formed. When MM processing time was increased for a given composition, the formation of the Al3BC compound started at lower heat treatment temperatures. However, when the amount of MgB2 was increased for the 4 h MM processing regime, the formation of the Al3BC compound during heating was suppressed. As a result of the solid-state reactions in the MMed powders, the hardness was observed to increase after heating at 573–873 K for 24 h.

Fully dense bulk nano-composite materials were successfully obtained by combining MM with spark plasma sintering (SPS): the 4 h and 8 h MMed powders of the Al–15 at% MgB2 composite material were sintered under an applied pressure of 49 MPa at 873 K for 1 h.

Relevance:

100.00%

Publisher:

Abstract:

A novel agent-driven heuristic approach was developed to control the operational scheduling for a local manufacturer. This approach outperformed the traditional kanban control mechanism in numerous simulated benchmarking tests. Using this approach, individual machine loading was reduced by, on average, 28%, and the loading spread was reduced by 85%.

Relevance:

100.00%

Publisher:

Abstract:

This action research project set out to develop the competence of senior personnel from a private vocational college in Thailand in the use of administrative computer systems. The findings demonstrate the critical significance of progressive, incremental learning that is tailored to the professional and personal needs of learners. Learner competence was found to be dependent upon the creation of an environment promoting learner confidence.

Relevance:

100.00%

Publisher:

Abstract:

The research was a detailed investigation into a challenging analytical chemistry problem for the alumina industry. The successful outcomes were derived through innovative reagent chemistry and novel instrumental development. The resultant methodology and instrumentation, deployed on this most demanding sample matrix, are more robust, reliable and less expensive than anything currently used in this industry worldwide.