Abstract:
Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspection tasks because of the variability in the appearance of solder joints. Although much research and many techniques have been developed to classify defects in solder joints, these methods rely on complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is selecting the right classification method. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry requirements. This dissertation aims to provide a solution that overcomes some of the limitations of current inspection techniques. This research proposes two inspection stages for an automatic solder joint classification system. The “front-end” inspection stage includes illumination normalisation, localisation and segmentation. The illumination normalisation approach can effectively and efficiently eliminate the effect of uneven illumination while preserving the properties of the processed image. The “back-end” inspection stage involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality, with respect to the amount of solder paste, have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and to be resistant to misalignment. Further testing demonstrates the advantage of the Log Gabor filter over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates.
This proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera. In fact, the choice of suitable features makes it possible to overcome the problems introduced by the use of a non-complex illumination system. The new system proposed in this research can be incorporated into the development of an automated non-contact, non-destructive and low-cost solder joint quality inspection system.
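The Log Gabor features used in the back-end stage can be sketched with a minimal radial Log-Gabor filter applied in the frequency domain. This is an illustrative sketch only: the centre frequency `f0` and bandwidth parameter `sigma_on_f` below are assumed values, not the dissertation's settings.

```python
import numpy as np

def log_gabor_filter(shape, f0=0.1, sigma_on_f=0.55):
    """Radial Log-Gabor transfer function in the frequency domain:
    G(f) = exp(-(log(f/f0))^2 / (2 * log(sigma_on_f)^2)).
    f0 is the centre frequency; sigma_on_f controls the bandwidth."""
    rows, cols = shape
    # Normalised frequency coordinates, in the same ordering as np.fft.fft2
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    uu, vv = np.meshgrid(u, v)
    radius = np.sqrt(uu ** 2 + vv ** 2)
    radius[0, 0] = 1.0                 # avoid log(0) at the DC term
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    g[0, 0] = 0.0                      # a Log-Gabor filter has no DC component
    return g

def apply_log_gabor(image):
    """Filter an image and return the magnitude response as a feature map."""
    g = log_gabor_filter(image.shape)
    response = np.fft.ifft2(np.fft.fft2(image) * g)
    return np.abs(response)

# Example: feature extraction from a synthetic 64x64 "joint" image
img = np.random.default_rng(0).random((64, 64))
features = apply_log_gabor(img)
print(features.shape)  # (64, 64)
```

In a full pipeline such magnitude maps would feed the classifier (and classifier score fusion); here only the filtering step is shown.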
Abstract:
The late French philosopher Gilles Deleuze has enjoyed significant notoriety and acclaim in American academia over the last 20 years. The unique disciplinary focus of the contemporary discussion has derived from Deleuze the architectural possibilities of biotechnology, systems theory, and digital processualism. While the persistence of Deleuze’s theory of science and the formalist readings of Mille Plateaux and Le Bergsonisme have dominated the reception since the 1990s, few are aware of a much earlier encounter between Deleuze and architects, beginning at Columbia University in the 1970s, which converged on the radical politics of Anti-Oedipus and its American reception in the journal Semiotext(e). Through that journal, architecture engaged a much broader discourse alongside artists, musicians, filmmakers, and intellectuals in the New York aesthetic underground, of which Deleuze and Félix Guattari were themselves a part.
Abstract:
Information behaviour (IB) is an area within Library and Information Science that studies the totality of human behaviour in relation to information, both active and passive, along with the explicit and tacit mental states related to information. This study reports on recently completed dissertation research that integrates the different models of information behaviour using a diary study in which 34 participants maintained a daily journal for two weeks through a web log or paper diary. This resulted in thick descriptions of IB, which were manually analysed using the Grounded Theory method of inquiry, and then cross-referenced through both text-analysis and statistical-analysis programs. Among the many key findings of this study, one is the focus of this paper: how participants express their feelings about the information seeking process, and their mental and affective states related specifically to the sense-making component, which co-occurs with almost every other aspect of information behaviour. The paper title – Down the Rabbit Hole and Through the Looking Glass – refers to an observation that some of the participants made in their journals when they searched for, or avoided, information: they wrote that they felt as if they had fallen into a rabbit hole where nothing made sense, and reported both positive feelings of surprise and amazement, and negative feelings of confusion, puzzlement, apprehensiveness, frustration, stress, ambiguity, and fatigue. The study situates this sense-making aspect of IB within an overarching model of information behaviour that includes IB concepts such as monitoring information, encountering information, information seeking and searching, flow, multitasking, information grounds, information horizons, and more, and proposes an integrated model of information behaviour illuminating how these different concepts are interleaved and interconnected with each other, along with its implications for information services.
Abstract:
Purpose: To examine the ability of silver nanoparticles to prevent the growth of Pseudomonas aeruginosa and Staphylococcus aureus in solution or when adsorbed into contact lenses, and to examine their ability to prevent the growth of Acanthamoeba castellanii.
Methods: Etafilcon A lenses were soaked in various concentrations of silver nanoparticles. Bacterial cells were then exposed to these lenses, and the numbers of viable cells on the lens surface or in solution were compared to those for etafilcon A lenses not soaked in silver. Acanthamoeba trophozoites were exposed to silver nanoparticles and their ability to form tracks was examined.
Results: Silver nanoparticle-containing lenses reduced bacterial viability and adhesion. There was a dose-dependent response, with 10 ppm or 20 ppm silver showing a > 5 log reduction in bacterial viability in solution or on the lens surface. For Acanthamoeba, 20 ppm silver reduced the ability to form tracks by approximately 1 log unit.
Conclusions: Silver nanoparticles are effective antimicrobial agents and, once incorporated into the lens, can reduce the ability of viable bacterial cells to colonise contact lenses.
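As a reminder of what the "> 5 log reduction" figure means: the reduction is the base-10 logarithm of the ratio of viable counts. The CFU numbers below are illustrative only, not the study's data.

```python
import math

def log_reduction(control_cfu, treated_cfu):
    """Log10 reduction in viable counts relative to an untreated control."""
    return math.log10(control_cfu / treated_cfu)

# Illustrative numbers: a drop from 1e7 CFU to 50 CFU is a > 5 log reduction.
print(round(log_reduction(1e7, 50), 2))  # 5.3
```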
Abstract:
A number of instructors have recently adopted social network sites (SNSs) for learning. However, the learning design of SNSs often remains at a preliminary level similar to a personal log book because it does not properly include reflective learning elements such as individual reflection and collaboration. This article looks at the reflective learning process and the public writing process as a way of improving the quality of reflective learning on SNSs. It proposes a reflective learning model on SNSs based on two key pedagogical concepts for social networking: individual expression and collaborative connection. It is expected that the model would be helpful for instructors in designing a reflective learning process on SNSs in an effective and flexible way.
Abstract:
Throughout this workshop session we have looked at various configurations of Sage, as well as using the Sage UI to run Sage applications (e.g. the image viewer). More advanced usage of Sage has been demonstrated using a Sage-compatible version of Paraview, highlighting the potential of parallel rendering. The aim of this tutorial session is to give a practical introduction to developing visual content for a tiled display using the Sage libraries. After completing this tutorial you should have the basic tools required to develop your own custom Sage applications. This tutorial is designed for software developers; intermediate programming knowledge is assumed, along with some introductory OpenGL. You will be required to write small portions of C/C++ code to complete this worksheet. However, if you do not feel comfortable writing code (or have never written in C or C++), we will be on hand throughout this session, so feel free to ask for help. We have a number of machines in this lab running a VNC client to a virtual machine running Fedora 12. You should all be able to log in with the username “escience” and password “escience10”. Some of the commands in this worksheet require you to run them as the root user, so note the password as you may need to use it a few times. If you need to access the Internet, use the username “qpsf01” and password “escience10”.
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone in the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: Wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly increases face recognition performance for both eigen-face and fisher-face approaches.
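One of the three transforms mentioned, the Discrete Cosine Transform, can illustrate the frequency-partitioning idea: transform each face image, keep a frequency band, and only then apply dimensionality reduction. This is a minimal sketch under assumed parameters; the band size `k` and the toy images are illustrative, not the paper's protocol.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

def dct2(img):
    """2-D DCT-II via separable 1-D transforms."""
    d_r = dct_matrix(img.shape[0])
    d_c = dct_matrix(img.shape[1])
    return d_r @ img @ d_c.T

def band_features(img, k=8):
    """Keep the k x k low-frequency DCT block as a feature vector,
    a simple stand-in for frequency-domain partitioning performed
    before PCA (eigen-faces) or LDA (fisher-faces)."""
    return dct2(img)[:k, :k].ravel()

faces = np.random.default_rng(1).random((10, 32, 32))  # toy "face" images
X = np.stack([band_features(f) for f in faces])
print(X.shape)  # (10, 64)
```

A PCA or LDA stage would then operate on `X` instead of on raw pixels, which is the partition-then-reduce ordering the paper evaluates.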
Abstract:
Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. This paper proposes two inspection modules for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality with respect to the amount of solder paste have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. This proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera. This system could contribute to the development of automated non-contact, non-destructive and low cost solder joint quality inspection systems.
Abstract:
The importance of actively managing and analysing business processes is acknowledged more than ever in today's organisations. Business processes form an essential part of an organisation, and their application areas are manifold. Most organisations keep records of the various activities that have been carried out, for auditing purposes, but these records are rarely used for analysis. This paper describes the design and implementation of a process analysis tool that replays, analyses and visualises a variety of performance metrics using a process definition and its corresponding execution logs. The replayer uses a YAWL process model example to demonstrate its capacity to support advanced language constructs.
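One basic replay-style metric of the kind such a tool computes is average activity duration, derived by pairing start and complete events from an execution log. The event-log rows and field names below are hypothetical illustrations, not the tool's actual log format.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event-log rows: (case id, activity, event type, timestamp)
log = [
    ("c1", "Approve", "start",    "2024-01-01T09:00:00"),
    ("c1", "Approve", "complete", "2024-01-01T09:30:00"),
    ("c2", "Approve", "start",    "2024-01-01T10:00:00"),
    ("c2", "Approve", "complete", "2024-01-01T10:15:00"),
]

def avg_durations(events):
    """Average duration per activity in seconds, pairing each case's
    start event with its matching complete event."""
    open_events, totals = {}, defaultdict(list)
    for case, act, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "start":
            open_events[(case, act)] = t
        else:
            started = open_events.pop((case, act))
            totals[act].append((t - started).total_seconds())
    return {a: sum(d) / len(d) for a, d in totals.items()}

print(avg_durations(log))  # {'Approve': 1350.0}
```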
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
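Written out in standard notation, the abstract's bound (ignoring the log A and log m factors) is, with \hat{E}_m denoting the squared-error-based training estimate, m the number of training patterns, n the input dimension, and A the per-unit bound on summed weight magnitudes:

```latex
\Pr[\text{misclassification}] \;\le\; \hat{E}_m \;+\; A^{3}\,\sqrt{\frac{\log n}{m}}
```

This restates the sentence above; the dependence on A rather than on the number of weights is the point of the result.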
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
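The EG update itself is a multiplicative step followed by renormalisation, which keeps the variables on the probability simplex. The sketch below applies it to a toy convex objective with simplex constraints; the objective, step size, and iteration count are illustrative assumptions, not the paper's log-linear or max-margin duals.

```python
import numpy as np

def eg_step(w, grad, eta):
    """Exponentiated-gradient update: multiplicative step in the gradient
    direction, then renormalisation back onto the probability simplex."""
    w = w * np.exp(-eta * grad)
    return w / w.sum()

# Toy problem: minimise f(w) = ||w - t||^2 over the simplex, where the
# target t already lies on the simplex, so the minimiser is t itself.
t = np.array([0.6, 0.3, 0.1])
w = np.full(3, 1 / 3)
for _ in range(200):
    w = eg_step(w, 2 * (w - t), eta=0.5)   # gradient of ||w - t||^2
print(np.round(w, 3))  # converges to t ≈ [0.6, 0.3, 0.1]
```

Because the update is multiplicative, iterates stay strictly inside the simplex without any explicit projection, which is what makes EG convenient for the simplex-constrained duals described above.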
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d=VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n∙choose(n-1,≤d-1)/choose(n,≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic-topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d=1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
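In standard binomial-sum notation, with choose(n,≤d) written as a partial sum of binomial coefficients, the density bound stated above reads:

```latex
\frac{n \binom{n-1}{\le d-1}}{\binom{n}{\le d}} \;<\; d,
\qquad \text{where } \binom{n}{\le d} := \sum_{i=0}^{d} \binom{n}{i}
```

This is only a typeset restatement of the bound as given in the abstract.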
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
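To make the √T-versus-log T distinction concrete, the toy experiment below runs one-dimensional online gradient descent on quadratic (hence strongly convex) losses under two standard step-size schedules. This is a generic illustration of the two regret regimes, not the paper's Adaptive Online Gradient Descent algorithm; the loss sequence and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=2000)   # losses f_t(x) = (x - z_t)^2 on [-1, 1]

def regret(steps):
    """Cumulative regret of projected online gradient descent, under a
    step-size schedule, against the best fixed point in hindsight."""
    x, losses = 0.0, []
    for t, zt in enumerate(z, start=1):
        losses.append((x - zt) ** 2)
        x -= steps(t) * 2 * (x - zt)     # gradient of (x - z_t)^2
        x = min(max(x, -1.0), 1.0)       # project back to [-1, 1]
    best = z.mean()                       # minimiser of sum_t (x - z_t)^2
    return sum(losses) - ((z - best) ** 2).sum()

# 1/sqrt(t) steps give O(sqrt(T)) regret for general convex losses;
# 1/(lambda*t) steps exploit strong convexity (here lambda = 2) for O(log T).
print(round(regret(lambda t: 1 / np.sqrt(t)), 1))
print(round(regret(lambda t: 1 / (2 * t)), 1))
```

The second schedule yields markedly lower regret on this sequence; the paper's contribution is an algorithm that interpolates between the two regimes without knowing the curvature in advance.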
Abstract:
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis, with four key differences: OLP is simpler; it does not require knowledge of the supports of transition probabilities; the proof of the regret bound is simpler; but our regret bound is a constant factor larger than the regret of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner, but OLP is simpler and its regret bound has a better dependence on the size of the MDP.