971 results for intelligent e-mail analysis


Relevance:

100.00%

Publisher:

Abstract:

Implementing effective time analysis methods quickly and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure-oriented, knowledge-based approach for intelligent time analysis of aircraft assembly processes within a digital manufacturing framework. A knowledge system is developed so that design knowledge can be intelligently retrieved to carry out assembly time analysis automatically. A time estimation method based on MOST is reviewed and employed. Knowledge capture, transfer and storage within the digital manufacturing environment are discussed in depth. Configured plan types, GUIs and functional modules are designed and developed for the automated time analysis. An exemplar study using an aircraft panel assembly from a regional jet is also presented. Although the method currently focuses on aircraft assembly, it can be readily applied in other industry sectors, such as transportation, automotive and shipbuilding. The main contribution of the work is a methodology that facilitates the integration of time analysis with design and manufacturing on a digital manufacturing platform.
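
As a rough illustration of the kind of MOST-based estimate referred to above, the sketch below computes the time for a single General Move activity; the sequence model and conversion factor follow standard MOST practice, but the index values and the activity are hypothetical, not taken from the paper.

```python
# Minimal sketch of a MOST (Maynard Operation Sequence Technique) time estimate.
# The General Move sequence model is A-B-G-A-B-P-A; each parameter receives an
# index value, the indices are summed and multiplied by 10 to give TMU
# (1 TMU = 0.036 s). The index values below are illustrative only.

TMU_TO_SECONDS = 0.036

def general_move_tmu(indices):
    """indices: dict with keys A1, B1, G, A2, B2, P, A3 holding MOST index values."""
    return 10 * sum(indices.values())

# Hypothetical activity: fetch a fastener and place it in a panel hole.
activity = {"A1": 1, "B1": 0, "G": 1, "A2": 1, "B2": 0, "P": 3, "A3": 0}

tmu = general_move_tmu(activity)
print(f"{tmu} TMU = {tmu * TMU_TO_SECONDS:.2f} s")
```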

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a real application of Web-content mining using an incremental FP-Growth approach. We first restructure the semi-structured data retrieved from the web pages of the Chinese car market to fit the local database, and then employ an incremental algorithm to discover association rules for the identification of car preferences. To find more general regularities, a method of attribute-oriented induction is also used to uncover customers' consumption preferences. Experimental results show some interesting consumption preference patterns that may help the government make policies to encourage and guide car consumption.
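
A minimal sketch of association-rule mining with FP-Growth, assuming the scraped records have already been restructured into boolean transaction columns; it uses the standard (non-incremental) fpgrowth implementation from mlxtend, and the column names are illustrative rather than the paper's actual schema.

```python
# Mine frequent itemsets with FP-Growth and derive association rules from them.
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Hypothetical one-hot transaction table built from the scraped car-market records.
transactions = pd.DataFrame(
    [
        {"small_engine": True, "low_price": True, "domestic_brand": True},
        {"small_engine": True, "low_price": True, "domestic_brand": False},
        {"small_engine": False, "low_price": False, "domestic_brand": False},
        {"small_engine": True, "low_price": True, "domestic_brand": True},
    ]
)

frequent = fpgrowth(transactions, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```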

Relevance:

90.00%

Publisher:

Abstract:

The theory of nonlinear dynamic systems provides some new methods to handle complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and the electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It provides important insight into the state of health and the nature of disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool to observe the heart's ability to respond to normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects covering five different cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onset would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach to differentiate between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy measures, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
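
The sketch below gives a rough idea of the processing pipeline described above: a direct bispectrum estimate of an RR-interval series and an SVM trained on simple magnitude features derived from it. The segment length, features, data and labels are placeholders, not the thesis' actual feature set.

```python
# Direct (FFT-based) bispectrum estimate of an HRV series plus SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bispectrum(x, nfft=64):
    """Average B(f1, f2) = X(f1) X(f2) X*(f1+f2) over non-overlapping segments."""
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft, nfft), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg - np.mean(seg), nfft)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return np.abs(B) / max(len(segments), 1)

def hos_features(rr):
    B = bispectrum(rr)
    region = B[: B.shape[0] // 2, : B.shape[1] // 2]
    # Simple magnitude summaries of the bispectrum; illustrative only.
    return [region.mean(), region.max(), np.log1p(region).sum()]

# Synthetic stand-in data: rows of RR-interval series, one label per recording.
rng = np.random.default_rng(0)
X = np.array([hos_features(rng.normal(0.8, 0.05, 256)) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # e.g. normal vs. arrhythmic

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```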

Relevance:

90.00%

Publisher:

Relevance:

80.00%

Publisher:

Abstract:

Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool to observe the heart's ability to respond to normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are nonlinear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of nonlinear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and seven classes of arrhythmia. We present some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. We also extracted features from the HOS and performed an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification with a number of features yielding a p-value < 0.02 in the ANOVA test.
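
A minimal sketch of the ANOVA screening step mentioned above, testing whether a HOS-derived feature separates the HRV classes; the feature values are synthetic placeholders rather than data from the study.

```python
# One-way ANOVA on a single HOS-derived feature across HRV classes.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# One array of feature values (e.g. mean bispectral magnitude) per HRV class.
normal  = rng.normal(1.00, 0.10, 30)
class_a = rng.normal(1.20, 0.10, 30)
class_b = rng.normal(0.85, 0.10, 30)

stat, p_value = f_oneway(normal, class_a, class_b)
print(f"F = {stat:.2f}, p = {p_value:.4f}")  # features with p < 0.02 would be retained
```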

Relevance:

80.00%

Publisher:

Abstract:

This study presents a systematic analysis of the thinking of biochemist Michael Behe. Behe is a prominent defender of the Intelligent Design movement, which has gained influence particularly in the United States, but also elsewhere. At the core of his thinking is the idea of intelligent design, according to which the order of the cosmos and of living things is the handiwork of a non-human intelligence. This "design argument" had previously been popular in the tradition of natural theology. Behe, however, attempts to base his argument on the findings of 20th-century biology. Biochemistry has revealed that cells, formerly thought to be simple, in fact contain complex structures, for instance the bacterial flagellum, which are reminiscent of machines built by humans. According to Behe, these can be believably explained only by referring to intelligent design, not by invoking Darwinian natural laws. My analysis aims to understand Behe's thought on intelligent design, to bring out its connections to intellectual history and worldviews, and to study whether Behe has formulated his argument so as to avoid common criticisms directed against design arguments. I use a large amount of literature and refer to diverse writers participating in the intelligent design debate. The results of the analysis are as follows. Behe manages to avoid a large number of classical criticisms of the design argument, and new criticisms have to be developed to meet his argument. Secondly, positions on intelligent design appear to be linked to larger philosophical and religious worldviews.

Relevance:

80.00%

Publisher:

Abstract:

The effectiveness of the last-level shared cache is crucial to the performance of a multi-core system. In this paper, we observe and make use of the DelinquentPC - Next-Use characteristic to improve shared cache performance. We propose a new PC-centric cache organization, NUcache, for the shared last-level cache of multi-cores. NUcache logically partitions the associative ways of a cache set into MainWays and DeliWays. While all lines have access to the MainWays, only lines brought in by a subset of delinquent PCs, selected by a PC selection mechanism, are allowed to enter the DeliWays. The PC selection mechanism is an intelligent cost-benefit analysis based algorithm that uses Next-Use information to select the set of PCs that can maximize the hits experienced in the DeliWays. Performance evaluation reveals that NUcache improves performance over a baseline design by 9.6%, 30% and 33%, respectively, for dual-, quad- and eight-core workloads composed of SPEC benchmarks. We also show that NUcache is more effective than other well-known cache-partitioning algorithms.
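
The toy sketch below illustrates the insertion policy implied by the description above for a single cache set, with the ways split into MainWays and DeliWays and only blocks fetched by selected delinquent PCs allowed into the DeliWays; the way counts and the delinquent-PC set are placeholders, and the paper's Next-Use, cost-benefit PC selection mechanism is not reproduced.

```python
# Toy model of one NUcache-style set: MainWays plus DeliWays with LRU replacement.
from collections import OrderedDict

class NUCacheSet:
    def __init__(self, main_ways=8, deli_ways=8, delinquent_pcs=frozenset()):
        self.main = OrderedDict()              # LRU-ordered MainWays
        self.deli = OrderedDict()              # LRU-ordered DeliWays
        self.main_ways, self.deli_ways = main_ways, deli_ways
        self.delinquent_pcs = delinquent_pcs   # PCs allowed to use the DeliWays

    def access(self, tag, pc):
        for ways in (self.main, self.deli):
            if tag in ways:
                ways.move_to_end(tag)          # hit: update recency
                return True
        # Miss: blocks fetched by selected delinquent PCs fill into the DeliWays.
        target, limit = ((self.deli, self.deli_ways) if pc in self.delinquent_pcs
                         else (self.main, self.main_ways))
        if len(target) >= limit:
            target.popitem(last=False)         # evict the LRU victim
        target[tag] = pc
        return False

cache_set = NUCacheSet(delinquent_pcs=frozenset({0x400A10}))
print(cache_set.access(tag=0x1234, pc=0x400A10))  # miss, filled into a DeliWay
print(cache_set.access(tag=0x1234, pc=0x400A10))  # hit
```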

Relevance:

80.00%

Publisher:

Abstract:

This chapter describes an experimental system for the recognition of human faces from surveillance video. In surveillance applications, the system must be robust to changes in illumination, scale, pose and expression. The system must also be able to perform detection and recognition rapidly, in real time. Our system detects faces using the Viola-Jones face detector and then extracts local features to build a shape-based feature vector. The feature vector is constructed from ratios of lengths and differences in tangents of angles, so as to be robust to changes in scale and to in-plane and out-of-plane rotations. Consideration was given to improving the performance and accuracy of both the detection and recognition steps.
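
A brief sketch of the detection stage plus a scale-invariant shape descriptor of the kind described above; it relies on OpenCV's bundled Haar cascade for Viola-Jones detection, and the landmark points and specific ratios are illustrative assumptions rather than the chapter's actual feature definition.

```python
# Viola-Jones face detection and a simple scale-invariant shape feature vector.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def shape_vector(landmarks):
    """Ratios of distances and differences of angle tangents, so that uniform
    scaling of the face leaves the descriptor unchanged."""
    landmarks = np.asarray(landmarks, dtype=float)
    d = lambda i, j: np.linalg.norm(landmarks[i] - landmarks[j])
    eye_dist, eye_to_mouth = d(0, 1), d(0, 2)
    v1, v2 = landmarks[1] - landmarks[0], landmarks[2] - landmarks[0]
    tan1, tan2 = v1[1] / (v1[0] + 1e-9), v2[1] / (v2[0] + 1e-9)
    return np.array([eye_dist / (eye_to_mouth + 1e-9), tan1 - tan2])

# Example with three hypothetical landmarks: left eye, right eye, mouth centre.
print(shape_vector([(100, 120), (160, 118), (130, 180)]))
```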

Relevance:

80.00%

Publisher:

Abstract:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior with regard to sun exposure. In Brazil, non-melanoma skin cancer has the highest incidence in most regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological skin diseases. The use of computational tools to assist or follow up medical diagnosis of dermatological lesions is a very recent field. Several methods have been proposed for the automatic classification of skin pathologies from images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape and texture features, using the Wavelet Packet Transform (WPT) and the learning technique known as the Support Vector Machine (SVM). The Wavelet Packet Transform is applied for the extraction of texture features from the images. The WPT consists of a set of basis functions that represent the image in different frequency bands, each with distinct resolutions corresponding to each scale. In addition, color features of the lesion are computed; these depend on the visual context and are influenced by the colors present in its surroundings. Shape attributes are obtained through Fourier descriptors. The Support Vector Machine is used for the classification task; it is based on the principle of structural risk minimization from statistical learning theory. The SVM constructs optimal hyperplanes that represent the separation between classes. The generated hyperplane is determined by a subset of the training samples, called support vectors. For the database used in this work, the results revealed good performance, with an overall accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors and the SVM classifier constitute a method capable of recognizing and classifying the analyzed skin lesions.
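
A compact sketch of the texture-feature stage described above: subband energies from a 2-D Wavelet Packet Transform fed to an SVM. The wavelet, decomposition level, images and labels are placeholders, and the colour and Fourier shape descriptors are omitted.

```python
# Texture features from a 2-D Wavelet Packet Transform, classified with an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_energy_features(image, wavelet="db4", level=2):
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # Energy of each subband as a simple texture descriptor.
    return np.array([np.sum(np.square(node.data)) for node in nodes])

rng = np.random.default_rng(0)
# Synthetic grey-level "lesion" patches standing in for dermoscopy images.
images = rng.normal(size=(20, 64, 64))
labels = rng.integers(0, 2, size=20)   # e.g. melanoma vs. benign

X = np.array([wpt_energy_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))            # training accuracy of the toy example
```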

Relevance:

80.00%

Publisher:

Abstract:

This work proposes the development of an intelligent system for the analysis of digital mammograms, capable of detecting and classifying masses and microcalcifications. The digital mammograms are pre-processed using digital image processing techniques in order to adapt the image to the detection system and to the automatic classification of the calcifications present in the breasts. The model adopted for the detection and classification of the mammograms uses Kohonen's neural network through the Self-Organizing Map (SOM) algorithm. The vector quantization algorithm K-means is also used for the same purpose as the SOM. An analysis of the performance of the two algorithms in the automatic classification of digital mammograms is carried out. The developed system will aid the radiologist in the diagnosis and follow-up of the development of abnormalities.
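
The following sketch contrasts the two approaches mentioned above, a Kohonen SOM (here via the third-party minisom package) and K-means, applied to feature vectors that stand in for descriptors of candidate mammogram regions; the data are random placeholders.

```python
# Compare SOM and K-means groupings of region descriptors into two clusters.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 8))   # hypothetical descriptors of candidate regions

# K-means vector quantization into two groups (e.g. mass vs. microcalcification).
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# 2x1 SOM, so each sample maps to one of two prototype neurons.
som = MiniSom(2, 1, input_len=features.shape[1], sigma=0.5, learning_rate=0.5,
              random_seed=0)
som.train_random(features, num_iteration=500)
som_labels = np.array([som.winner(x)[0] for x in features])

print("K-means counts:", np.bincount(kmeans_labels))
print("SOM counts:    ", np.bincount(som_labels))
```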

Relevance:

80.00%

Publisher:

Abstract:

Three-phase induction motors are the main elements for converting electrical energy into mechanical driving power in several production sectors. Identifying a fault in an operating motor, before it fails, can provide greater safety in the decision-making process concerning machine maintenance, reduce costs and increase availability. This thesis first presents a literature review and the general methodology for reproducing the faults in the motors and applying the technique of discretizing the current and voltage signals in the time domain. A comparative study between pattern classification methods for identifying faults in these machines is also developed, including: Naive Bayes, k-Nearest Neighbor, Support Vector Machine (Sequential Minimal Optimization), Artificial Neural Network (Multilayer Perceptron), Repeated Incremental Pruning to Produce Error Reduction, and C4.5 Decision Tree. The concept of Multi-Agent Systems (MAS) was also applied to support the distributed use of multiple concurrent methods for recognizing fault patterns in defective bearings, broken bars in the rotor squirrel cage, and short circuits between the coils of the stator winding of three-phase induction motors. In addition, some strategies for defining the severity of the aforementioned motor faults were explored, including an investigation of the influence of voltage unbalance in the machine supply on the determination of these anomalies. The experimental data were acquired on a laboratory test bench with 1 hp and 2 hp motors driven directly from the power grid, operating under several voltage-unbalance conditions and variations of the mechanical load applied to the motor shaft.
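
A minimal sketch of the kind of classifier comparison described above, using scikit-learn stand-ins (DecisionTreeClassifier approximates C4.5; RIPPER has no direct scikit-learn equivalent and is omitted); the features and fault labels are synthetic placeholders rather than the thesis' measured current and voltage data.

```python
# Cross-validated comparison of several pattern classifiers on motor-fault features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))     # features from discretized current/voltage signals
y = rng.integers(0, 4, size=300)   # healthy, bearing, broken bar, stator short circuit

classifiers = {
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (SMO-like)": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:15s} accuracy = {score:.3f}")
```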

Relevance:

40.00%

Publisher:

Abstract:

The availability of innumerable intelligent building (IB) products, and the current dearth of inclusive building component selection methods, suggest that decision makers may be confronted with the quandary of forming a particular combination of components to suit the needs of a specific IB project. Despite this problem, few empirical studies have so far been undertaken to analyse the selection of IB systems and to identify key selection criteria for major IB systems. This study is designed to fill these research gaps. Two surveys, a general survey and an analytic hierarchy process (AHP) survey, are proposed to achieve these objectives. The general survey aims to collect general views from IB experts and practitioners in order to identify the perceived critical selection criteria, while the AHP survey was conducted to prioritize and assign importance weightings to the criteria perceived as critical in the general survey. Results generally suggest that each IB system is determined by a disparate set of selection criteria with different weightings. ‘Work efficiency’ is perceived to be the most important core selection criterion for the various IB systems, while ‘user comfort’, ‘safety’ and ‘cost effectiveness’ are also considered significant. Two sub-criteria, ‘reliability’ and ‘operating and maintenance costs’, are regarded as prime factors to be considered in selecting IB systems. The current study contributes to the industry and to IB research in at least two respects. First, it widens the understanding of the selection criteria of IB systems, as well as their degree of importance. Second, it adopts a multi-criteria AHP approach, which is a new method for analysing and selecting building systems in IB. Further research could investigate the inter-relationships amongst the selection criteria.
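
A short sketch of the AHP step described above: deriving criterion weights from a pairwise comparison matrix via the principal eigenvector and checking the consistency ratio; the criteria and judgement values are illustrative, not the survey's actual responses.

```python
# AHP priority weights from a Saaty-scale pairwise comparison matrix.
import numpy as np

criteria = ["work efficiency", "user comfort", "safety", "cost effectiveness"]
# Pairwise judgements (row criterion compared with column criterion); illustrative.
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1.0, 2.0],
    [1/2, 1.0, 1.0, 2.0],
    [1/4, 1/2, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # principal-eigenvector priorities

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index for n criteria
print(dict(zip(criteria, weights.round(3))), "CR =", round(ci / ri, 3))
```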

Relevance:

40.00%

Publisher:

Abstract:

Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature shows that costing processes are rarely undertaken within manufacturing organisations. Where cost analyses have been undertaken, they have generally valued only a small proportion of the affected costs, leading to an overly conservative estimate. This thesis aimed to develop a cost-of-downtime model, with particular emphasis on the application of the model to Australia Post’s Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on using the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime for determining machine performance. Because of this, the results of the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR. This is the first time that Post has endeavoured to examine the cost of downtime. It is also one of the very few methodologies for valuing downtime costs that has been proposed in the literature. The work undertaken has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this research are both the methodology for costing downtime and a list of areas for cost reduction. In doing so, this thesis has delivered the two key deliverables presented at the outset of the research.
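
A back-of-the-envelope sketch of how an annual downtime cost translates into an hourly rate, in the spirit of the model above; the cost breakdown and operating hours are hypothetical, with only the $5.7M and roughly $138-per-hour figures mirroring the thesis.

```python
# Aggregate hypothetical annual downtime cost components and derive an hourly rate.
cost_components = {                       # hypothetical annual downtime costs, AUD
    "lost_throughput": 3_900_000,
    "maintenance_labour": 1_200_000,
    "rework_and_penalties": 600_000,
}
annual_downtime_cost = sum(cost_components.values())   # 5,700,000

operational_hours_per_year = 41_300       # hypothetical; consistent with ~$138/hour
hourly_rate = annual_downtime_cost / operational_hours_per_year

print(f"${annual_downtime_cost:,.0f} per annum "
      f"= ${hourly_rate:.0f} per operational hour")
```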

Relevance:

40.00%

Publisher:

Abstract:

Reliability analysis is crucial to reducing unexpected downtime, severe failures and ever-tightening maintenance budgets for engineering assets. Hazard-based reliability methods are of particular interest, as the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the assumed form of the effects of covariates on the hazard holds. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations arising from these two assumptions in statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete-covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainties into reliability analysis, the developed NNHMs are extended to include the handling of incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that the future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
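
The sketch below is a generic illustration of a neural-network hazard model working with incomplete covariates (here via simple mean imputation); it is a stand-in under assumed data, not the thesis' NNHM formulation or its integrated treatment of missing covariates.

```python
# A toy neural-network hazard model: impute missing covariates, scale them,
# and regress a hazard estimate with a small MLP.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
covariates = rng.normal(size=(200, 3))     # e.g. vibration, temperature, flow rate
hazard = 0.05 * np.exp(0.8 * covariates[:, 0]) + rng.normal(0, 0.01, 200)

# Simulate inconsistent measurement frequencies / sensor dropout.
mask = rng.random(covariates.shape) < 0.2
covariates[mask] = np.nan

model = make_pipeline(
    SimpleImputer(strategy="mean"),        # naive handling of missing covariates
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(covariates, hazard)
print("Predicted hazard for a new asset state:",
      model.predict([[0.5, np.nan, -0.2]])[0])
```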