985 results for Machine-shop practice.


Relevance:

30.00%

Publisher:

Abstract:

The determination of the overconsolidation ratio (OCR) of clay deposits is an important task in geotechnical engineering practice. This paper examines the potential of a support vector machine (SVM) for predicting the OCR of clays from piezocone penetration test data. SVM is based on statistical learning theory and a structural risk minimization principle that minimizes both the error and weight terms. The five input variables used in the SVM model for predicting OCR are the corrected cone resistance (q_t), vertical total stress (sigma_v), hydrostatic pore pressure (u_0), pore pressure at the cone tip (u_1), and pore pressure just above the cone base (u_2). A sensitivity analysis was performed to investigate the relative importance of each input parameter; it shows that q_t is the in situ measurement most strongly influenced by OCR, followed by sigma_v, u_0, u_2, and u_1. A comparison between SVM and some traditional interpretation methods is also presented. The results of this study show that the SVM approach has the potential to be a practical tool for determining the OCR.
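The paper does not include code; the following is a minimal sketch of how such an SVM regression could be set up with scikit-learn, assuming the five piezocone measurements are available as feature columns. The data and hyperparameters here are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for piezocone records: columns are
# q_t, sigma_v, u_0, u_1, u_2 (the paper's five inputs).
rng = np.random.default_rng(0)
X = rng.uniform(low=[0.5, 50, 20, 30, 25],
                high=[10.0, 500, 200, 400, 350],
                size=(200, 5))
# Hypothetical OCR targets; in the paper these come from reference tests.
y = 1.0 + 5.0 * rng.random(200)

# Epsilon-SVR with an RBF kernel; C, epsilon, gamma would need tuning.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# Predict the OCR for a new sounding record.
print(model.predict([[4.2, 240.0, 95.0, 180.0, 150.0]]))
```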

Relevance:

30.00%

Publisher:

Abstract:

Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any such algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. Satisfactory performance also requires a minimum number of measurements, and this minimum differs from algorithm to algorithm. In many applications the number of measurements is unlikely to meet this requirement, so any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, performance also depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms that work on different principles, we can improve sparse signal recovery. We present a theoretical analysis deriving sufficient conditions for the performance improvement of the proposed schemes, and we demonstrate the advantage of the proposed methods through numerical simulations on both synthetic and real signals.
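The abstract does not spell out the fusion rule; the sketch below shows one plausible reading, assuming a union-of-supports style of fusion: run two recovery algorithms built on different principles, take the union of their estimated supports, solve a least-squares problem on that union, and keep the k largest coefficients. The algorithm choices and problem sizes are illustrative, not the paper's.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)

# Two recovery algorithms working on different principles.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
lasso = Lasso(alpha=0.01, fit_intercept=False).fit(A, y)

# Fuse: union of estimated supports, least squares on the union,
# then prune back to the k largest entries.
support = np.union1d(np.flatnonzero(omp.coef_), np.flatnonzero(lasso.coef_))
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef
x_hat[np.argsort(np.abs(x_hat))[:-k]] = 0   # keep the k largest in magnitude

print("fusion error:", np.linalg.norm(x_hat - x))
```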

Relevance:

30.00%

Publisher:

Abstract:

(4pp.)

Relevance:

30.00%

Publisher:

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance is obtained by training with what we call the dual distribution, which depends on the test distribution set by the problem but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, and how to approximate it in a practical scenario. The benefits of using this distribution are illustrated on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
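Targeted Weighting itself is not specified in this abstract; below is only a sketch of the standard covariate-shift step it builds on: estimating importance weights w(x) ≈ p_test(x)/p_train(x) from unlabeled test samples with a probabilistic classifier, then fitting a weighted model. The classifier-based density-ratio trick is a common choice, not necessarily the thesis's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(2)
X_train = rng.normal(0.0, 1.0, size=(500, 3))   # labeled training inputs
X_test = rng.normal(0.5, 1.2, size=(500, 3))    # unlabeled test inputs
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(500)

# Discriminate train vs. test; the classifier's odds estimate the
# density ratio p_test(x) / p_train(x).
Z = np.vstack([X_train, X_test])
d = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
clf = LogisticRegression().fit(Z, d)
p = clf.predict_proba(X_train)[:, 1]
w = p / (1.0 - p)                    # importance weights for training points

# Fit the final model with the estimated weights.
model = Ridge().fit(X_train, y_train, sample_weight=w)
```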

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate shift literature.
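The thesis's matching algorithms are not described in this abstract; purely as a point of reference, here is a naive greedy nearest-neighbor matcher that selects a training subset resembling a set of samples from the desired distribution. A matcher usable at the scale described above would need to be far more efficient than this O(n·m) loop.

```python
import numpy as np

def greedy_match(X_train, X_target):
    """For each target sample, pick the nearest not-yet-used training
    point; the selected subset mimics the target distribution."""
    used = np.zeros(len(X_train), dtype=bool)
    chosen = []
    for t in X_target:
        d = np.linalg.norm(X_train - t, axis=1)
        d[used] = np.inf                 # each training point used once
        i = int(np.argmin(d))
        used[i] = True
        chosen.append(i)
    return X_train[chosen]
```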

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system for analyzing behavior in videos of animals with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method both summarizes the data and provides biologists with a mathematical tool for testing new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, and providing a means to discriminate groups of animals, for example according to their genetic line.

Relevance:

30.00%

Publisher:

Abstract:

The manufacturing industry is currently facing unprecedented challenges from changes and disturbances. The sources of these changes and disturbances are of different scope and magnitude. They can be of a commercial nature, linked to fast product development and design, or purely operational (e.g. a rush order, machine breakdown, or material shortage). To meet these requirements, it is increasingly important that a production operation be flexible and able to adapt to new and more suitable ways of operating. This paper focuses on a new strategy for enabling manufacturing control systems to adapt to changing conditions, both in terms of product variation and production system upgrades. The proposed approach is based on two key concepts: (1) an autonomous and distributed approach to manufacturing control based on multi-agent methods, in which so-called operational agents represent the key physical and logical elements in the production environment to be controlled (for example, products and machines) and the control strategies that drive them; and (2) an adaptation mechanism, based on the evolutionary concept of replicator dynamics, which updates the behaviour of newly formed operational agents from historical performance records so that they are better suited to the production environment. An application of this approach to route selection for similar products in manufacturing flow shops is developed and illustrated using an example based on the control of an automobile paint shop.
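The abstract only names replicator dynamics; the following is a minimal sketch of a discrete-time replicator update for route selection, assuming each route's "fitness" is a historical performance score (for example, the inverse of observed cycle time). The routes and scores are invented for illustration and are not the paper's model.

```python
import numpy as np

def replicator_step(shares, fitness):
    """Discrete-time replicator dynamics: a route's share of the product
    flow grows in proportion to its fitness relative to the mean."""
    mean_fitness = shares @ fitness
    return shares * fitness / mean_fitness

# Three alternative routes through the paint shop (hypothetical scores:
# higher is better, e.g. inverse of observed cycle time).
shares = np.array([1 / 3, 1 / 3, 1 / 3])
fitness = np.array([0.8, 1.0, 1.3])

for _ in range(20):
    shares = replicator_step(shares, fitness)
print(shares)      # flow concentrates on the best-performing route
```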

Relevance:

30.00%

Publisher:

Abstract:

A multi-channel complex machine tool (MCCM) is a versatile machining system equipped with more than two spindles and turrets for both turning and milling operations. Despite the potential of such a tool, the value of the hardware depends largely on how effectively the machine tools are programmed for machining. In this paper we consider a shop-floor programming system (called e-CAM) based on ISO 14649, the international standard for the interface between computer-aided manufacturing (CAM) and computer numerical control (CNC). A great deal of research has to be carried out before such a system can be deployed in practical industrial usage. In this paper we present: 1) design considerations for an e-CAM system, 2) the architecture design of e-CAM, 3) the major algorithms that fulfill the modules defined in the architecture, and 4) implementation details.

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time is required on each machine before the first job is processed, and whenever a machine switches from processing a job of one batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches, and we demonstrate that this heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.
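The heuristic itself is not given in the abstract. As background on the group technology idea, the sketch below treats each batch as a single composite job (setup plus total processing on each machine) and evaluates the classic two-machine open shop makespan bound max(load on M1, load on M2, max_j(a_j + b_j)), which the Gonzalez-Sahni algorithm attains for ordinary open shops. This is illustrative background, not the paper's algorithm.

```python
def gt_makespan_bound(batches, setup1, setup2):
    """Treat each batch as a composite job: a_j = setup1 + sum of its
    machine-1 times, b_j = setup2 + sum of its machine-2 times.
    Returns the two-machine open shop makespan bound
    max(total a, total b, max_j (a_j + b_j))."""
    a = [setup1 + sum(p1 for p1, _ in batch) for batch in batches]
    b = [setup2 + sum(p2 for _, p2 in batch) for batch in batches]
    return max(sum(a), sum(b), max(x + y for x, y in zip(a, b)))

# Two hypothetical batches of (machine-1 time, machine-2 time) jobs.
print(gt_makespan_bound([[(3, 2), (1, 4)], [(2, 2)]], setup1=1, setup2=1))
```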

Relevance:

30.00%

Publisher:

Abstract:

We study the special case of the m-machine flow shop problem in which the processing time of each operation of job j is equal to p_j; this variant is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance criterion, the search for an optimal schedule can be restricted to permutation schedules. Moreover, we show that the problem of minimizing total weighted completion time is solvable in O(n^2) time.
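The O(n^2) algorithm itself is not reproduced in the abstract. The sketch below only encodes the standard completion-time identity for a permutation schedule in a proportionate flow shop, C_[k] = sum_{i<=k} p_[i] + (m-1) * max_{i<=k} p_[i], which such algorithms build on; given job weights, it evaluates the total weighted completion time of a permutation.

```python
def weighted_completion(perm, p, w, m):
    """Evaluate the total weighted completion time of a permutation in an
    m-machine proportionate flow shop, using
    C_[k] = sum_{i<=k} p_[i] + (m - 1) * max_{i<=k} p_[i]."""
    prefix_sum, prefix_max, obj = 0.0, 0.0, 0.0
    for j in perm:
        prefix_sum += p[j]
        prefix_max = max(prefix_max, p[j])
        obj += w[j] * (prefix_sum + (m - 1) * prefix_max)
    return obj

# Three jobs on four machines (made-up data).
print(weighted_completion([2, 0, 1], p=[4, 2, 3], w=[1, 2, 1], m=4))
```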

Relevance:

30.00%

Publisher:

Abstract:

We survey recent results on the computational complexity of mixed shop scheduling problems. In a mixed shop, some jobs have fixed machine orders (as in the job shop), while the operations of the other jobs may be processed in arbitrary order (as in the open shop). The main attention is devoted to establishing the boundary between polynomially solvable and NP-hard problems. When the number of operations per job is unlimited, we focus on problems with a fixed number of jobs.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study a problem of scheduling and batching on two machines in flow-shop and open-shop environments. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions that specify a partition of the jobs into batches on each machine and a processing order of the batches on each machine, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two, or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived and various heuristic methods to be suggested.
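The heuristic's batch-selection rule is not detailed here; the sketch below only evaluates the makespan of a given sequence of consistent batches on the two flow-shop machines, assuming batch availability (a batch reaches machine 2 only once all of it is finished on machine 1) and that the setup on machine 2 starts only when that machine is free and the batch has arrived. The non-anticipatory setup assumption is mine, since the abstract leaves it unspecified.

```python
def flowshop_batch_makespan(batches, p1, p2, s1, s2):
    """batches: list of job-index lists (identical on both machines,
    i.e. consistent batches). p1, p2: per-job processing times;
    s1, s2: machine-dependent setup times. Returns the makespan."""
    done1 = 0.0   # time machine 1 finishes its current batch
    done2 = 0.0   # time machine 2 finishes its current batch
    for batch in batches:
        done1 += s1 + sum(p1[j] for j in batch)
        # machine 2 waits for both the previous batch and the arrival
        # of this batch from machine 1 before setting up
        done2 = max(done2, done1) + s2 + sum(p2[j] for j in batch)
    return done2

print(flowshop_batch_makespan([[0, 1], [2]],
                              p1=[2, 3, 4], p2=[1, 2, 5], s1=1, s2=1))
```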

Relevance:

30.00%

Publisher:

Abstract:

We study a two-machine flow shop scheduling problem with no-wait in process, in which one of the machines is not available during a specified time interval. We consider three scenarios for handling the operation affected by the nonavailability interval: its processing may (i) start from scratch after the interval, (ii) be resumed from the point of interruption, or (iii) be partially restarted after the interval. The objective is to minimize the makespan. We present an approximation algorithm that delivers a worst-case ratio of 3/2 for all three scenarios. For the second scenario, we offer a 4/3-approximation algorithm.
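As background on the no-wait constraint (setting the nonavailability interval aside), the sketch below computes the makespan of a job sequence in a two-machine no-wait flow shop: each job must start on machine 2 the moment it finishes on machine 1, so successive start times are pushed back accordingly. This is a textbook evaluation, not the paper's approximation algorithm.

```python
def no_wait_makespan(seq, a, b):
    """Two-machine no-wait flow shop: job j occupies M1 for a[j], then
    immediately M2 for b[j]. Start times on M1 are delayed so that M2
    is free when each job arrives."""
    start = 0.0
    for prev, nxt in zip(seq, seq[1:]):
        # M1 must be free, and M2 must be free when `nxt` finishes on M1
        start = max(start + a[prev],
                    start + a[prev] + b[prev] - a[nxt])
    last = seq[-1]
    return start + a[last] + b[last]

print(no_wait_makespan([0, 1, 2], a=[3, 2, 4], b=[2, 5, 1]))
```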

Relevance:

30.00%

Publisher:

Abstract:

This paper considers flow shop scheduling problems to minimize the makespan, provided that an individual precedence relation is specified on each machine. A fairly complete complexity classification of the problems with two and three machines is obtained.

Relevance:

30.00%

Publisher:

Abstract:

Background. Concept analysis has identified three domains in the competent use of birth technology (interpersonal skills, professional knowledge and clinical proficiency) and tentative criteria for birth technology competence. Aim. Fieldwork was undertaken to observe, confirm and explore pre-defined attributes of birth technology competence. Method. The Swartz-Barcott and Kim (2000) hybrid model of concept development was expanded to include an ethnographic observation of theory in action. Findings. Key attributes of birth technology competence found in "real-world" midwifery practice were skills in using the machines, decision-making and traditional midwifery skills. Conclusions. The confusion surrounding the use of technology in midwifery practice needs to be addressed by both professionals and educationalists. Midwives should be taught to value traditional midwifery skills alongside machine skills. The identification of a model of appropriate technology use in midwifery is needed.

Relevance:

30.00%

Publisher:

Abstract:

I am a part-time graduate student who works in industry. This study is my narrative about how six workers and I describe shop-floor learning activities, that is, learning activities that occur where work is done, outside a classroom. Because this study is narrative inquiry, you will learn about me, the narrator, more than you would in a more conventional study. This is a common approach in narrative inquiry, and it is important because my intentions shape the way that I tell these six workers' stories. I developed a typology of learning activities by synthesizing various theoretical frameworks. This typology categorizes shop-floor learning activities into five types: on-the-job training, participative learning, educational advertising, incidental learning, and self-directed learning. Although learning can occur in each of these activities in isolation, it often comprises a mixture of them. The literature review contains a number of cases developed from situations described in the literature. These cases make the similarities and differences between the types of learning activities they represent more understandable to the reader, and they ground the typology in practice as well as in theory. The findings are presented as reader's theatre, a dramatic presentation of these workers' narratives. The workers tell us that learning involves "being shown," and if this is not done properly they "learn the hard way." I found that many of their best-case learning activities involved on-the-job training, participative learning, incidental learning, and self-directed learning. Worst-case examples were typically lacking in properly designed and delivered participative learning activities and, to a lesser degree, in carefully planned and delivered on-the-job training activities. Included are two reflective chapters that describe two cases: Learning "Engels" (English), and Learning to Write. In these chapters you will read about how I came to see that my own shop-floor learning, learning to write this thesis, could be enhanced through participative learning activities. I came to see my thesis supervisor not only as my instructor who directed and judged my learning activities, but also as a more experienced researcher who was there to participate in this process with me and to help me begin to enter the research community. Shop-floor learning involves learners and educators participating in multi-stranded learning activities, which require the organizational factor of careful planning and delivery. Just as learning activities can be multi-stranded, so too there can be multiple orientations to learning on the shop floor. In our stories, you will see that these six workers and I didn't exhibit just one orientation to learning. Our stories demonstrate that we could be behaviorist and cognitivist and humanist and social and constructivist learners in our orientation to learning. Our stories show that learning is complex and involves multiple strands, orientations, and factors. They show that learning narratives capture the essence of learning: the learners, the educators, the learning activities, the organizational factors, and the learning orientations. Learning narratives can help learners and educators make sense of shop-floor learning.

Relevance:

30.00%

Publisher:

Abstract:

Despite constant progress in computing power, memory, and the amount of available data, machine learning algorithms must use these resources efficiently. Minimizing costs is obviously an important factor, but another motivation is the search for learning mechanisms capable of reproducing the behaviour of intelligent beings. This thesis addresses the problem of efficiency through several articles covering a variety of learning algorithms: the problem is viewed not only from the standpoint of computational efficiency (computation time and memory used), but also from that of statistical efficiency (the number of examples required to accomplish a given task). A first contribution of this thesis is to expose statistical inefficiencies in existing algorithms. We show that decision trees generalize poorly on certain kinds of tasks (Chapter 3), as do classical graph-based semi-supervised learning algorithms (Chapter 5), each being affected by a particular form of the curse of dimensionality. For a certain class of neural networks, called sum-product networks, we show that representing some functions with a single hidden layer can be exponentially less efficient than with deep networks (Chapter 4). Our analyses provide a better understanding of some intrinsic problems of these algorithms and point research in directions that could resolve them. We also identify computational inefficiencies in graph-based semi-supervised learning algorithms (Chapter 5) and in the learning of Gaussian mixtures in the presence of missing values (Chapter 6). In both cases, we propose new algorithms able to handle significantly larger data sets. The last two chapters address computational efficiency from a different angle. In Chapter 7, we theoretically analyze an existing algorithm for efficient learning in restricted Boltzmann machines (contrastive divergence) in order to better understand the reasons behind its success. Finally, in Chapter 8 we present an application of machine learning to video games, in which the problem of computational efficiency is tied to software and hardware engineering considerations that are often ignored in research but enormously important in practice.
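Chapter 7 analyzes contrastive divergence rather than introducing it; for readers unfamiliar with the algorithm, here is a minimal CD-1 update for a binary restricted Boltzmann machine in NumPy. Sizes, learning rate, and the single Gibbs step are illustrative defaults, not the thesis's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W):
    """One CD-1 update on a batch of binary visible vectors v0
    (biases omitted for brevity)."""
    # Positive phase: hidden activations given the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct visibles, recompute hiddens.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # Approximate gradient: data correlations minus reconstruction ones.
    return W + lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)

batch = (rng.random((16, n_visible)) < 0.5).astype(float)
W = cd1_step(batch, W)
```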