60 results for Process control -- Data processing


Relevance:

100.00%

Publisher:

Abstract:

The high-throughput experimental data from the new gene microarray technology has spurred numerous efforts to find effective ways of processing microarray data to reveal real biological relationships among genes. This work proposes an innovative data pre-processing approach to identify noise data in the data sets and to eliminate or reduce the impact of that noise on gene clustering. With the proposed algorithm, the pre-processed data sets make the clustering results stable across clustering algorithms with different similarity metrics, the important gene and feature information is retained, and the clustering quality is improved. Preliminary evaluation on real microarray data sets has shown the effectiveness of the proposed algorithm.
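A minimal sketch of the idea (not the paper's algorithm): one cheap proxy for "noise data that destabilise clustering" is to flag genes whose nearest neighbours disagree across similarity metrics, since the cluster assignment of such genes would depend on the metric chosen.

```python
# Sketch: flag genes whose k-nearest-neighbour sets under Euclidean distance
# and Pearson correlation overlap poorly -- a proxy for metric-sensitive,
# potentially noisy expression profiles. Toy data, illustrative thresholds.
import numpy as np

def noise_candidates(X, k=5):
    """X: genes x conditions expression matrix. Returns a boolean mask of
    genes whose neighbourhoods disagree between the two metrics."""
    n = X.shape[0]
    d_euc = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    d_cor = 1.0 - np.corrcoef(X)          # correlation "distance"
    np.fill_diagonal(d_euc, np.inf)
    np.fill_diagonal(d_cor, np.inf)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        nn_euc = set(np.argsort(d_euc[i])[:k])
        nn_cor = set(np.argsort(d_cor[i])[:k])
        # little overlap => clustering of this gene depends on the metric
        mask[i] = len(nn_euc & nn_cor) < k // 2
    return mask

X = np.random.rand(100, 12)           # toy expression data
keep = ~noise_candidates(X, k=5)      # cluster only the stable genes
```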

Relevance:

100.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, and so on. This thesis identifies the problems that past process migration facilities have faced and determines the differing strategies that can be used to resolve them. This analysis has led to a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel.

Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not migrate the resources of a process directly; instead, it requests the appropriate servers to transfer them. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain, and the design easily accommodates multiple migration strategies.

To verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System), a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the process, memory and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that the design is as fast as, or faster than, existing systems running on faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified.

This thesis also addresses heterogeneous process migration. The current trend is towards islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration appears too inefficient for general use; only homogeneous workstations should be used for process migration, which implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, can provide this resource discovery, enabling the detection of homogeneous workstations to which processes can be migrated.
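A minimal sketch of this coordination pattern (the server interfaces below are hypothetical, not the actual RHODOS API): the migration manager never moves the process itself; it asks each resource server to transfer the part it owns.

```python
# Sketch: a migration manager coordinating per-resource servers. Different
# migration strategies (e.g. eager vs. lazy address-space copying) would
# plug in at each server's transfer() step.
class MigrationManager:
    def __init__(self, process_mgr, memory_mgr, ipc_mgr):
        # one server per resource: process state, address space, comm state
        self.servers = [process_mgr, memory_mgr, ipc_mgr]

    def migrate(self, pid, destination):
        # freeze the process so its state is consistent during transfer
        for s in self.servers:
            s.suspend(pid)
        # each server ships only the resource it maintains
        for s in self.servers:
            s.transfer(pid, destination)
        # resume execution on the destination host
        for s in self.servers:
            s.resume(pid, destination)
```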

Relevance:

100.00%

Publisher:

Abstract:

This thesis provides a unified and comprehensive treatment of fuzzy neural networks as intelligent controllers. The work has been motivated by the need for solid control methodologies capable of coping with the complexity, nonlinearity, interactions and time variance of processes under control. In addition, the dynamic behavior of such processes is strongly influenced by disturbances and noise, and such processes are characterized by a large degree of uncertainty. It is therefore important to integrate an intelligent component that increases the control system's ability to extract functional relationships from the process and to adapt those relationships to improve control precision, that is, to display learning and reasoning abilities. The objective of this thesis was to develop a self-organizing learning controller for such processes using a combination of fuzzy logic and neural networks. To fulfil this objective, an on-line, direct fuzzy neural controller with both structural and parameter tuning was developed, using process input-output measurement data and a reference model. A number of practical issues were considered, including dynamic construction of the controller to alleviate the bias/variance dilemma, the universal approximation property, and the requirements of locality and linearity in the parameters. Several important issues in intelligent control were also considered, such as the overall control scheme and the persistency-of-excitation and bounded-learning-rate requirements for overall closed-loop stability. Other issues addressed include the dependence of generalization ability and optimization methods on the data distribution, and the requirements for on-line learning and the feedback structure of the controller. Fuzzy-inference-specific issues, such as the influence of the choice of defuzzification method, T-norm operator and membership function on overall controller performance, were also discussed, and the ε-completeness requirement and the use of fuzzy similarity measures were investigated.

The main emphasis of the thesis has been on applications to real-world problems such as industrial process control. The applicability of the proposed method has been demonstrated through empirical studies on several real-world control problems of industrial complexity, including temperature and number-average molecular weight control in a continuous stirred tank polymerization reactor, and torsional vibration, eccentricity, hardness and thickness control in cold rolling mills. Compared to traditional linear controllers and dynamically constructed neural networks, the proposed fuzzy neural controller shows the highest promise as an effective approach to such nonlinear multi-variable control problems in which disturbances and noise strongly influence the dynamic process behavior. The applicability of the proposed method beyond control, in particular to data mining and knowledge elicitation, has also been investigated. When compared to a decision tree method and a pruned neural network method for data mining, the proposed fuzzy neural network achieves comparable accuracy with a more compact set of rules. Moreover, its performance on classes with low occurrence in the data set is much better than the decision tree method's, so the proposed fuzzy neural network may be very useful in situations where the important information is contained in a small fraction of the available data.
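A minimal sketch of the kind of fuzzy-neural mapping such controllers build on (illustrative parameters, not the thesis's controller): Gaussian memberships give local rule firing, and the output is linear in the tunable consequent weights, matching the locality and linearity-in-the-parameters requirements mentioned above.

```python
# Sketch: a single-input fuzzy-neural mapping with Gaussian memberships,
# normalised firing strengths, and consequents linear in the weights.
import numpy as np

def fuzzy_neural_output(x, centres, widths, weights):
    """x: scalar input; centres/widths: Gaussian membership parameters;
    weights: linear consequent of each rule."""
    mu = np.exp(-((x - centres) ** 2) / (2 * widths ** 2))  # rule firing
    phi = mu / mu.sum()           # normalised firing strengths (locality)
    return phi @ weights          # output is linear in `weights`

centres = np.array([-1.0, 0.0, 1.0])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([-0.8, 0.1, 0.9])   # these are what learning would tune
print(fuzzy_neural_output(0.3, centres, widths, weights))
```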

Relevance:

100.00%

Publisher:

Abstract:

The overarching goal of this dissertation was to evaluate the contextual components of instructional strategies for the acquisition of complex programming concepts. A meta-knowledge processing model is proposed on the basis of the research findings, thereby facilitating the selection of media treatment for electronic courseware. When implemented, this model extends the work of Smith (1998), as a front-end methodology, for his glass-box interpreter Bradman, for teaching novice programmers. Technology now provides the means to produce individualized instructional packages with relative ease. Multimedia and Web courseware development accentuate a highly graphical (or visual) approach to instructional formats. Typically, little consideration is given to the effectiveness of screen-based visual stimuli, and curiously, students are expected to be visually literate despite the complexity of human-computer interaction; visual literacy is much harder for some people to acquire than for others (see Chapter Four: Conditions-of-the-Learner). An innovative research programme was devised to investigate the interactive effect of instructional strategies, enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style on the acquisition of a special category of abstract (process) programming concepts. This type of concept was chosen to focus on the role of analogic knowledge involved in computer programming. The results are discussed within the context of the internal/external exchange process, drawing on Ritchey's (1980) concepts of within-item and between-item encoding elaborations. The methodology developed for the doctoral project integrates earlier research knowledge in a novel, interdisciplinary, conceptual framework: concept learning models from instructional science in the USA; British cognitive psychology and human memory research, for defining the cognitive style construct; and Australian educational research, for the measurement tools for instructional outcomes. The experimental design consisted of a screening test to determine cognitive style, a pretest to determine prior domain knowledge of abstract programming knowledge elements, the instruction period, and a post-test to measure improved performance. This research design provides a three-level discovery process to articulate:

1) the fusion of strategic knowledge required by the novice learner for dealing with contexts within instructional strategies;

2) the acquisition of knowledge, using measurable instructional outcomes and learner characteristics;

3) knowledge of the innate environmental factors which influence instructional outcomes.

This research has successfully identified the interactive effect of instructional strategy, within an individual's cognitive style construct, on the acquisition of complex programming concepts. However, the significance of the three-level discovery process lies in the scope of the methodology to inform the design of a meta-knowledge processing model for instructional science. Firstly, the British cognitive style testing procedure is a low-cost, user-friendly computer application that effectively measures an individual's position on the two cognitive style continua (Riding & Cheema, 1991). Secondly, the QUEST Interactive Test Analysis System (Izard, 1995) allows a probabilistic determination of an individual's knowledge level, relative to other participants and relative to test-item difficulties. Test-items can be related to skill levels and consequently can be used by instructional scientists to measure knowledge acquisition. Finally, an Effect Size Analysis (Cohen, 1977) allows a direct comparison between treatment groups, giving a statistical measurement of how large an effect the independent variables have on the dependent outcomes. Combined with QUEST's hierarchical positioning of participants, this tool can assist in identifying preferred learning conditions for the evaluation of treatment groups. By combining these three assessment analysis tools in instructional research, a computerized learning shell customised for individuals' cognitive constructs can be created (McKay & Garner, 1999). While this approach has widespread application, individual researchers/trainers would nonetheless need to validate the interactive effects within their specific learning domain with an extensive pilot study programme (McKay, 1999a; McKay, 1999b). Furthermore, the instructional material need not be limited to a textual/graphical comparison; it could be applied to any two or more instructional treatments of any kind, for instance a structured versus an exploratory strategy. The possibilities and combinations are believed to be endless, provided the focus is maintained on linking the front-end identification of cognitive style with an improved performance outcome. My in-depth analysis provides a better understanding of the interactive effects of the cognitive style construct and instructional format on the acquisition of abstract concepts involving spatial relations and logical reasoning. In providing the basis for a meta-knowledge processing model, this research is expected to be of interest to educators, cognitive psychologists, communications engineers and computer scientists specialising in computer-human interactions.
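For the Effect Size Analysis step, a small worked example of Cohen's d (the scores below are hypothetical): the difference between treatment-group means is expressed in pooled-standard-deviation units, with 0.8 conventionally read as a large effect.

```python
# Worked example of Cohen's d (Cohen, 1977) for two treatment groups.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    # pooled sample variance across the two groups
    pooled_var = ((na - 1) * a.var(ddof=1) +
                  (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# hypothetical post-test scores for two instructional treatments
text_metaphor = [62, 70, 68, 75, 71, 66]
graphic_metaphor = [58, 64, 61, 69, 60, 63]
print(cohens_d(text_metaphor, graphic_metaphor))  # > 0.8 reads as "large"
```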

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we provide the optimal data fusion filter for linear systems subject to possible missing measurements. The noise covariance in the observation process is allowed to be singular, which requires the use of the generalized inverse. Data fusion is performed on the raw data provided by two sensors observing the same entity, each of which loses measurements at its own data loss rate. The data fusion filter is given in recursive form for ease of implementation in real-world applications.
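A minimal sketch of the ingredients (a static-state toy, not the paper's filter): two sensors with individual Bernoulli loss rates are fused recursively in information form, with the Moore-Penrose pseudo-inverse standing in for the generalized inverse needed when the observation noise covariance is singular.

```python
# Sketch: recursive two-sensor fusion of a constant state with missing
# measurements; pinv handles the singular noise covariance of sensor 2.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])                   # constant state to estimate
R = [np.diag([0.2, 0.2]), np.diag([0.5, 0.0])]   # sensor 2: singular R
loss_rate = [0.1, 0.3]                           # each sensor's own loss rate

info = np.zeros((2, 2))                          # information matrix
vec = np.zeros(2)                                # information vector
for t in range(200):
    for i in range(2):
        if rng.random() < loss_rate[i]:
            continue                             # measurement missing
        z = x_true + rng.multivariate_normal(np.zeros(2), R[i])
        Rinv = np.linalg.pinv(R[i])              # generalized inverse
        info += Rinv                             # recursive update
        vec += Rinv @ z
x_hat = np.linalg.pinv(info) @ vec
print(x_hat)                                     # approaches x_true
```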

Relevance:

100.00%

Publisher:

Abstract:

RFID is gaining significant thrust as the preferred choice for automatic identification and data collection. However, various data processing and management problems, such as missed readings and duplicate readings, hinder wide-scale adoption of RFID systems. To this end, we propose an approach that filters the captured data, performing both noise removal and duplicate elimination. Experimental results demonstrate that the proposed approach improves the missed-data restoration process compared with the existing method.
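A minimal sketch in the spirit of the abstract (not the authors' algorithm): duplicate reads of a tag are collapsed, and short gaps between sightings are treated as missed readings and restored.

```python
# Sketch: window-based RFID cleaning with duplicate elimination and
# restoration of short missed-read gaps. Thresholds are illustrative.
from collections import defaultdict

def clean_readings(readings, max_gap=2):
    """readings: list of (timestamp, tag_id), possibly with duplicates and
    gaps. Returns {tag_id: sorted timestamps} with duplicates removed and
    gaps of at most max_gap missing epochs filled in."""
    seen = defaultdict(set)
    for t, tag in readings:
        seen[tag].add(t)                     # set membership drops duplicates
    cleaned = {}
    for tag, stamps in seen.items():
        stamps = sorted(stamps)
        filled = [stamps[0]]
        for t in stamps[1:]:
            gap = t - filled[-1]
            if 1 < gap <= max_gap + 1:       # short gap: assume missed reads
                filled.extend(range(filled[-1] + 1, t))
            filled.append(t)
        cleaned[tag] = filled
    return cleaned

print(clean_readings([(1, "A"), (1, "A"), (2, "A"), (5, "A"), (1, "B")]))
# {'A': [1, 2, 3, 4, 5], 'B': [1]}
```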

Relevance:

100.00%

Publisher:

Abstract:

With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant operational expenditure factor. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is obeyed. This raises the opportunity, but also a challenge, to exploit inter-datacenter network cost diversity to optimize both VM placement and load balancing towards network cost minimization with guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on this framework, we then formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computation-efficient solution based on the MILP formulation. The high efficiency of our proposal is validated by extensive simulation-based studies.
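The paper's formulation is a MILP proved NP-hard; as a toy illustration of the underlying cost structure (task names, rates, prices and the capacity/pinning constraints below are invented), a tiny instance can simply be enumerated:

```python
# Sketch: brute-force VM placement minimising inter-datacenter traffic cost.
# Enumeration only works at this toy scale; the real problem is NP-hard.
from collections import Counter
from itertools import product

tasks = ["src", "filter", "agg"]                     # stream-processing tasks
traffic = {("src", "filter"): 10.0, ("filter", "agg"): 4.0}   # data rates
dcs = ["dc1", "dc2"]
price = {("dc1", "dc2"): 0.02, ("dc2", "dc1"): 0.05}  # per-unit inter-DC cost
pinned = {"src": "dc2"}      # e.g., the stream source is tied to a datacenter
capacity = {"dc1": 2, "dc2": 2}                       # VM slots per datacenter

def network_cost(place):
    return sum(rate * price[(place[a], place[b])]
               for (a, b), rate in traffic.items()
               if place[a] != place[b])               # intra-DC traffic is free

def feasible(place):
    if any(place[t] != d for t, d in pinned.items()):
        return False
    counts = Counter(place.values())
    return all(counts[d] <= capacity[d] for d in counts)

candidates = (dict(zip(tasks, p)) for p in product(dcs, repeat=len(tasks)))
best = min((pl for pl in candidates if feasible(pl)), key=network_cost)
print(best, network_cost(best))
```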

Relevance:

100.00%

Publisher:

Abstract:

Because of big data's strong demand for physical resources, storing and processing big data in clouds is an effective and efficient approach, as cloud computing allows on-demand resource provisioning. With the increasing requirements for the resources provisioned by cloud platforms, the Quality of Service (QoS) of cloud services for big data management is becoming significantly important. Big data is characteristically sparse, which leads to frequent data accessing and processing and thereby causes a huge amount of energy consumption. Energy cost plays a key role in determining the price of a service and should be treated as a first-class citizen alongside other QoS metrics, because energy-saving services can achieve cheaper service prices and environmentally friendly solutions. However, it remains a challenge to schedule Virtual Machines (VMs) efficiently for service QoS enhancement in an energy-aware manner. In this paper, we propose an energy-aware dynamic VM scheduling method for QoS enhancement in clouds over big data to address this challenge. Specifically, the method consists of two main VM migration phases in which computation tasks are migrated to servers with lower energy consumption or higher performance to reduce service prices and execution time. Extensive experimental evaluation demonstrates the effectiveness and efficiency of our method.
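A minimal greedy sketch of the two migration phases described above (the cost model and server figures are invented; this is not the authors' method):

```python
# Sketch: migrate each task to the server minimising a weighted
# energy-plus-time cost, standing in for the two migration phases.
servers = {
    "s1": {"energy_per_unit": 1.0, "speed": 1.0},
    "s2": {"energy_per_unit": 0.6, "speed": 0.9},   # lower energy draw
    "s3": {"energy_per_unit": 0.9, "speed": 1.5},   # higher performance
}
tasks = {"t1": {"host": "s1", "work": 100}, "t2": {"host": "s1", "work": 40}}

def task_cost(task, server, energy_weight=0.5):
    time = task["work"] / server["speed"]           # execution time
    energy = task["work"] * server["energy_per_unit"]
    return energy_weight * energy + (1 - energy_weight) * time

for name, task in tasks.items():
    current = task_cost(task, servers[task["host"]])
    best = min(servers, key=lambda s: task_cost(task, servers[s]))
    if task_cost(task, servers[best]) < current:
        task["host"] = best                         # migrate the task

print(tasks)
```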

Relevance:

100.00%

Publisher:

Abstract:

Purpose: In profile monitoring, a growing research area in the field of statistical process control, the relationship between response and explanatory variables is monitored over time. The purpose of this paper is to focus on process capability analysis of linear profiles; process capability indices give a quick indication of the capability of a manufacturing process. Design/methodology/approach: The proportion of nonconformance is employed to estimate the process capability index. The paper considers cases where the specification limits are constant as well as cases where they are a function of the explanatory variable X, and it covers both fixed and random design schemes for acquiring the explanatory variable within a profile. Profiles with deterministic design points are typically used in calibration applications, but there are other applications in which the design points within a profile are i.i.d. random variables from a given distribution. Findings: Simulation studies using simple linear profile processes, for both fixed and random explanatory variables with constant and functional specification limits, are used to assess the efficacy of the proposed method. Originality/value: In many industries, such as semiconductors, quality characteristics take the form of profiles. Although quite a few methods for monitoring profiles have recently been presented, there is no method in the literature for analyzing process capability for these processes. The proposed methods provide a framework for quality engineers and production engineers to evaluate and analyze the capability of profile processes. © Emerald Group Publishing Limited.
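A minimal Monte Carlo sketch of the proportion-of-nonconformance idea for a simple linear profile with random design points and functional specification limits; the index transform Φ⁻¹(1 − p)/3 is one common convention, used here only for illustration (all parameters below are invented):

```python
# Sketch: estimate the proportion of nonconformance for the profile
# y = A + B*x + e, with specification limits that depend on x.
import random
from statistics import NormalDist

A, B, sigma = 3.0, 2.0, 0.35          # in-control profile parameters

def lsl(x): return A + B * x - 1.0    # functional specification limits
def usl(x): return A + B * x + 1.0

n, nonconforming = 100_000, 0
for _ in range(n):
    x = random.uniform(0.0, 1.0)      # random design point within the profile
    y = A + B * x + random.gauss(0.0, sigma)
    if not (lsl(x) <= y <= usl(x)):
        nonconforming += 1

p = nonconforming / n
index = NormalDist().inv_cdf(1.0 - p) / 3.0 if p > 0 else float("inf")
print(f"p_nc = {p:.5f}, capability index ~ {index:.2f}")
```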

Relevance:

100.00%

Publisher:

Abstract:

Exploration with formal design systems comprises an iterative process of specifying problems, finding plausible and alternative solutions, judging the validity of solutions relative to problems, and reformulating problems and solutions. Recent advances in formal generative design have developed the mathematics and algorithms to describe and perform conceptual design tasks. However, design remains a human enterprise: formalisms are part of a larger equation comprising human-computer interaction. To support the user in designing with formal systems, shared representations that interleave the initiative of the designer and the design formalism are necessary. This paper reports on the problem of devising representational structures in which initiative is taken sometimes by the designer and sometimes by a computer working on a shared design task. To address this problem, the requirements, representation and implementation of a shared interaction construct, the feature node, are described. The feature node facilitates the sharing of initiative in formulating and reformulating problems, generating solutions, making choices and navigating the history of exploration.
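As a rough illustration only (the class and method names are hypothetical; the paper does not give an implementation), a feature-node-like construct can be sketched as a shared record that either party may extend while the exploration history stays navigable:

```python
# Sketch: a shared node that records who took the initiative at each step
# and links reformulations into a navigable exploration history.
class FeatureNode:
    def __init__(self, problem, solution=None, author="designer", parent=None):
        self.problem = problem      # current (re)formulation of the problem
        self.solution = solution    # a candidate solution, if any
        self.author = author        # who took the initiative: human or system
        self.parent = parent        # link for navigating exploration history
        self.children = []

    def branch(self, problem=None, solution=None, author="system"):
        """Either party reformulates the problem or proposes a solution."""
        node = FeatureNode(problem or self.problem, solution, author, self)
        self.children.append(node)
        return node

root = FeatureNode("span a 6m opening")
alt = root.branch(solution="truss A", author="system")      # system initiative
refined = alt.branch(problem="span 6m, height < 1m")        # designer reframes
```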

Relevance:

100.00%

Publisher:

Abstract:

The output of the sheet metal forming process is subject to much variation. This paper develops a method to measure shape variation in channel forming and relate it back to the corresponding process parameter levels of the manufacturing set-up, creating an inverse model. The shape variation in the channels is measured using a modified form of the point distribution model (also known as the active shape model), so each channel can be represented by a weighting vector of minimal linear dimension that contains all of its shape variation information relative to the average formed channel.

The inverse models were created using classifiers that relate the weighting vectors to the process parameter levels for the blank holder force (BHF), die radius (DR) and tool gap (TG) parameters. Several classifier types were tested: linear, quadratic Gaussian and artificial neural network. The quadratic Gaussian classifiers were the most accurate and the most consistent across all the parameters.
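A minimal sketch of this pipeline on toy data (the shape data, number of modes and parameter levels below are invented for illustration): a PCA over aligned channel profiles yields each channel's compact weighting vector, and a quadratic Gaussian classifier maps that vector back to a process-parameter level.

```python
# Sketch: point-distribution-model weighting vectors via PCA, then a
# quadratic Gaussian (QDA-style) classifier over those vectors.
import numpy as np

rng = np.random.default_rng(1)
# 60 channels x 20 section points; two BHF levels shift the mean shape
levels = rng.integers(0, 2, 60)
shapes = rng.normal(0, 0.05, (60, 20)) + np.outer(levels, np.linspace(0, 0.4, 20))

mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
W = (shapes - mean) @ Vt[:3].T        # weighting vectors: 3 modes of variation

def qda_fit(X, y):
    # per-class mean, covariance and prior
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c].T),
                np.mean(y == c)) for c in np.unique(y)}

def qda_predict(model, x):
    def score(m, C, prior):
        d = x - m   # quadratic Gaussian log-likelihood plus log-prior
        return -0.5 * (np.log(np.linalg.det(C)) + d @ np.linalg.solve(C, d)) \
               + np.log(prior)
    return max(model, key=lambda c: score(*model[c]))

model = qda_fit(W, levels)
print(qda_predict(model, W[0]), levels[0])   # classify one channel's BHF level
```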

Relevance:

100.00%

Publisher:

Abstract:

Many organizations struggle with the massive amounts of data they collect. Today, data do more than serve as the ingredients for churning out statistical reports: they support efficient operations in many organizations and, to some extent, provide the competitive intelligence organizations need to survive in today's economy. Data mining can't always deliver timely and relevant results because data are constantly changing; however, stream-data processing might be more effective, judging by the Matrix project.

Relevance:

100.00%

Publisher:

Abstract:

Recovering the control or implicit geometry underlying temple architecture requires bringing together fragments of evidence from field measurements, relating these to mathematical and geometric descriptions in canonical texts, and proposing "best-fit" constructive models. While scholars in the field have traditionally used manual methods, the innovative application of niche computational techniques can help extend the study of artefact geometry. This paper demonstrates the application of a hybrid computational approach to the problem of recovering the surface geometry of early temple superstructures. The approach combines field measurements of temples, close-range architectural photogrammetry, rule-based generation and parametric modelling. The computation of surface geometry comprises a rule-based global model governing the overall form of the superstructure, several local models for individual motifs derived using photogrammetry, and an intermediate geometry model that combines the two. To explain the technique and the different models, the paper examines an illustrative example of surface geometry reconstruction based on studies of a tenth-century stone superstructure from western India. The example demonstrates that a combination of computational methods yields sophisticated models of the constructive geometry underlying temple form, and that these digital artefacts can form the basis for in-depth comparative analysis of temples built with similar techniques but spread over geography, culture and time.
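As a rough, hypothetical illustration of how such a hybrid could be organised (the taper rule and motif function below are invented, not the canonical geometry): a rule-based global profile is modulated by local motif offsets of the kind photogrammetry might supply.

```python
# Sketch: an intermediate geometry model combining a rule-based global
# profile with local motif offsets. All rules and numbers are illustrative.
import math

def global_profile(h, height=10.0, base=4.0, exponent=1.5):
    """Rule-based width of the superstructure at height h: a simple
    power-law taper standing in for the canonical curve."""
    return base * (1.0 - h / height) ** (1.0 / exponent)

def local_motif(h, period=0.5, depth=0.08):
    """Local model: a repeating motif offset, as might be fitted from
    close-range photogrammetry of individual motifs."""
    return depth * math.sin(2.0 * math.pi * h / period)

# intermediate geometry model: global form modulated by local motifs
surface = [(h, global_profile(h) + local_motif(h))
           for h in [i * 0.25 for i in range(41)]]
print(surface[:3])
```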