825 results for Modeling Non-Verbal Behaviors Using Machine Learning
Abstract:
The foundation construction process is a key factor in successful construction engineering. Among the many deep-excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the sequencing of diaphragm wall unit construction activities has been established phase by phase using heuristics. This approach, however, creates conflicts between the final engineering phase and unit construction, and it affects the planned construction time. To avoid this situation, we apply management science to diaphragm wall unit construction, formulating the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model is multi-objective and combinatorially explosive (the problem is NP-complete), a 2-type Self-Learning Neural Network (SLNN) is proposed to solve the sequencing problem for N = 12, 24, and 36 diaphragm wall units. To assess the reliability of the results, this study compares the SLNN with a random search method. The SLNN is found to be superior to random search in both solution quality and solving efficiency.
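The abstract does not detail the SLNN's internals, so as a minimal sketch of the baseline it is compared against, the following Python implements random search over construction sequences; the cost function here (toy_cost) is an invented stand-in for the paper's multi-objective cost:

    import random

    def random_search(n_units, evaluate, iterations=10_000, seed=0):
        """Baseline random search over construction sequences (permutations)."""
        rng = random.Random(seed)
        best_seq, best_cost = None, float("inf")
        for _ in range(iterations):
            seq = list(range(n_units))
            rng.shuffle(seq)          # sample a random construction order
            cost = evaluate(seq)      # scalarized multi-objective cost
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        return best_seq, best_cost

    # Hypothetical cost: penalize building adjacent units consecutively,
    # a stand-in for the sequencing constraints the abstract alludes to.
    def toy_cost(seq):
        return sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) == 1)

    print(random_search(12, toy_cost)[1])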
Abstract:
A procedure is presented for obtaining conformational parameters from oriented but non-crystalline polymers. This is achieved by comparing the experimental wide-angle X-ray scattering with that calculated from models, in such a way that foreknowledge of the orientation distribution function is not required. X-ray scattering intensity values for glassy isotactic poly(methyl methacrylate) are analysed by these techniques. The method could be usefully applied to other oriented molecular systems, such as liquid crystalline materials.
Abstract:
Fried products pose a health concern owing to the considerable amount of oil they contain. Producing snack foods with minimal oil content, and managing oil well during frying to minimise the formation of toxic compounds, remain challenging aims. This paper investigates the possibility of producing a fat-free snack food by replacing frying oil with a non-fat medium. Glucose was melted, brought to 185°C, and used to fry potato strips, yielding a product referred to here as glucose fries. The resulting product was compared with French fries prepared conventionally under conditions that gave a similar final moisture content. Both products were examined for crust formation, texture parameters, colour development, and glucose content. Stereo-microscope images showed that similar crusts formed in the glucose fries and the French fries. Texture parameters were similar for both products at 5 mm and 2 mm penetration depths. The maximum hardness at 2 mm penetration depth was also similar for both products, but different from that of cooked potato. The colour development characteristic of French fries was also observed in glucose fries. The glucose content of glucose fries was twice that of French fries, as expected, since glucose was absorbed by or adhered to the surface. In conclusion, glucose fries, with texture and colour characteristics similar to those of French fries, can be prepared using a non-fat frying medium.
Abstract:
Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from the biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools for understanding the biogeography of species and for predicting their presence or absence under a particular environmental scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique was able to extract a different representation of the relations between environmental conditions and the distribution profile of a species. The experimental results highlight the good performance of random tree classifiers, indicating this technique as a promising candidate for modelling species' potential distribution. (C) 2010 Elsevier Ltd. All rights reserved.
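As a rough illustration of the modelling setup described above, assuming scikit-learn and wholly synthetic data (the paper's 35-species environmental dataset is not reproduced here), a random-forest presence/absence model might look like this:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Hypothetical environmental predictors, e.g. temperature,
    # precipitation, elevation, for 500 sample locations.
    X = rng.normal(size=(500, 3))
    # Hypothetical presence/absence labels driven by a simple climate rule.
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())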
Abstract:
This thesis presents a color segmentation approach for traffic sign recognition based on LVQ neural networks. RGB images were converted into HSV color space and segmented with LVQ according to the hue and saturation values of each pixel. The LVQ network was used to segment the red, blue, and yellow of road markings and traffic signs in order to detect and recognize them. LVQ was applied to 536 sample images taken in different countries under different conditions, achieving 89% accuracy; the per-image execution time, measured over 31 images, ranged from 0.726 s to 0.844 s. The method was tested under varying environmental conditions, and LVQ showed its capacity to segment color reasonably well despite marked differences in illumination. The results showed high robustness.
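A minimal LVQ1 sketch of the hue-saturation segmentation described above; the prototypes, labels, and learning rate are invented for illustration, not the thesis's trained codebook:

    import numpy as np

    # Invented (hue, saturation) prototypes for red, blue, yellow, background.
    proto = np.array([[0.00, 0.8], [0.60, 0.8], [0.15, 0.8], [0.30, 0.1]])
    proto_labels = np.array(["red", "blue", "yellow", "background"])

    def lvq1_step(P, labels, x, y, lr=0.05):
        """One LVQ1 update: move the winning prototype toward the sample
        if the labels match, away from it otherwise."""
        k = np.argmin(np.linalg.norm(P - x, axis=1))
        P[k] += lr * (x - P[k]) if labels[k] == y else -lr * (x - P[k])
        return P

    def segment(pixels_hs, P, labels):
        """Assign each (hue, saturation) pixel to its nearest prototype."""
        d = np.linalg.norm(pixels_hs[:, None, :] - P[None, :, :], axis=2)
        return labels[np.argmin(d, axis=1)]

    print(segment(np.array([[0.58, 0.9], [0.02, 0.7]]), proto, proto_labels))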
Abstract:
In a global economy, manufacturers compete mainly on the cost efficiency of production, as raw material prices are similar worldwide. Heavy industry has two big issues to deal with: on the one hand, there is a great deal of data that needs to be analyzed effectively; on the other, making big improvements through investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers manufacturers a promising way to address both problems, as they are in an excellent position to employ learning techniques on their massive stores of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and that choice is the subject of this article. The article investigates the characteristics of the most popular classifiers used in industry today: Support Vector Machines, the Multilayer Perceptron, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting. Lessons from real-world implementations of these learners are provided, together with guidance on when each learner can be expected to perform well. The importance of feature selection, and relevant selection methods in an industrial setting, are also investigated. Performance metrics are discussed for completeness.
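A sketch of the kind of comparison the article describes, assuming scikit-learn, a stand-in dataset, and untuned default hyperparameters (a real study would tune them and use industrial data):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                  RandomForestClassifier)
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
        "Decision tree": DecisionTreeClassifier(),
        "Random forest": RandomForestClassifier(),
        "Bagging": BaggingClassifier(),
        "Boosting": AdaBoostClassifier(),
    }
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())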
Abstract:
Background: The genome-wide identification of both morbid genes, i.e., those genes whose mutations cause hereditary human diseases, and druggable genes, i.e., genes coding for proteins whose modulation by small molecules elicits phenotypic effects, requires experimental approaches that are time-consuming and laborious. Thus, a computational approach that could accurately predict such genes on a genome-wide scale would be invaluable for accelerating the discovery of causal relationships between genes and diseases, as well as the determination of the druggability of gene products. Results: In this paper we propose a machine learning-based computational approach to predict morbid and druggable genes on a genome-wide scale. For this purpose, we constructed a decision tree-based meta-classifier and trained it on datasets containing, for each morbid and druggable gene, network topological features, tissue expression profiles, and subcellular localization data as learning attributes. This meta-classifier correctly recovered 65% of known morbid genes with a precision of 66%, and correctly recovered 78% of known druggable genes with a precision of 75%. It was then used to assign morbidity and druggability scores to genes not known to be morbid or druggable, and we showed a good match between these scores and literature data. Finally, we generated decision trees by training the J48 algorithm on the morbidity and druggability datasets to discover cellular rules for morbidity and druggability; among the rules, we found that the number of regulating transcription factors and plasma membrane localization are the most important factors for morbidity and druggability, respectively. Conclusions: We demonstrated that network topological features, together with tissue expression profiles and subcellular localization, can reliably predict human morbid and druggable genes on a genome-wide scale. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing morbidity and druggability.
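The authors trained a decision tree-based meta-classifier and Weka's J48; as a loose scikit-learn analogue (not the authors' pipeline), tree learners can be stacked on gene features, which here are purely hypothetical stand-ins for network topology, expression breadth, and localization:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    # Hypothetical features: regulator count, tissue breadth, membrane flag.
    X = rng.normal(size=(400, 3))
    y = (X[:, 0] > 0.3).astype(int)  # hypothetical "morbid" label

    meta = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(max_depth=4)),
                    ("forest", RandomForestClassifier(n_estimators=100))],
        final_estimator=LogisticRegression())
    print(cross_val_score(meta, X, y, cv=5).mean())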
Abstract:
OBJECTIVE: To describe the communicative profile of individuals with Williams-Beuren syndrome. METHODS: The sample comprised 12 individuals with the syndrome, with chronological ages between 6;6 and 23;6 (Group 1), who were compared with 12 individuals without the syndrome, of similar mental age and without language/learning difficulties (Group 2). The individuals were assessed in a conversational setting in order to classify verbal and non-verbal behaviors according to pragmatic criteria, count the number of turns per minute and utterances per turn, compute the Mean Length of Utterance (MLU), survey the frequency and typology of speech disfluencies, and classify the types of filled pauses in discourse. RESULTS: The communicative profile of Group 1 showed ease in interacting in communicative situations, with variable structural and functional linguistic limitations when compared with the individuals of Group 2. The individuals of Group 1 frequently used communicative strategies in an attempt to fill the communicative space, such as clichés, sound effects, intonational resources, and filled pauses, which proved favorable from a socio-communicative point of view, whereas echolalic and perseverative verbal behaviors impaired their communicative performance. CONCLUSION: The more impaired communicative performance of Group 1 suggests that linguistic impairments may be present in this condition regardless of the difference between chronological and mental age. More comprehensive studies may answer the question of the dissociation between cognitive and linguistic abilities in this syndrome, and also clarify the complexity of this correlation within human communication disorders.
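For concreteness, the conversational measures named above reduce to simple ratios; the toy transcript below is invented, and MLU is counted here in words, whereas clinical practice often counts morphemes:

    # Each inner list is one conversational turn; each string one utterance.
    turns = [["you know", "it was fun"], ["yes"], ["we went there", "then home"]]
    duration_min = 2.0  # hypothetical length of the conversation sample

    utterances = [u for t in turns for u in t]
    turns_per_minute = len(turns) / duration_min
    utterances_per_turn = len(utterances) / len(turns)
    mlu = sum(len(u.split()) for u in utterances) / len(utterances)
    print(turns_per_minute, utterances_per_turn, mlu)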
Abstract:
The presence of precipitates in metallic materials affects their durability, resistance, and mechanical properties. Hence, automatic identification of precipitates by image processing and machine learning techniques may lead to reliable and efficient assessment of the materials. In this paper, we apply four widely used supervised pattern recognition techniques to segment metallic precipitates in scanning electron microscope images from dissimilar welding on a Hastelloy C-276 alloy: Support Vector Machines, Optimum-Path Forest, Self-Organizing Maps, and a Bayesian classifier. Experimental results demonstrated that all classifiers achieved similar recognition rates, with good results validated by an expert in metallographic image analysis. © 2011 Springer-Verlag Berlin Heidelberg.
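A minimal sketch of pixel-wise segmentation with one of the four listed classifiers (an SVM, via scikit-learn); the image and ground truth are synthetic stand-ins for SEM data, and the other classifiers would slot into the same pipeline:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))         # stand-in for an SEM image
    truth = (img > 0.7).astype(int)    # stand-in precipitate mask

    X = img.reshape(-1, 1)             # per-pixel intensity feature
    y = truth.ravel()
    clf = SVC().fit(X[::16], y[::16])  # train on a subsample of pixels
    mask = clf.predict(X).reshape(img.shape)
    print("pixel accuracy:", (mask == truth).mean())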
Abstract:
Applications of piezoelectric array transducers are becoming common in ultrasonic non-destructive testing. However, the number of elements can increase system complexity, owing to the need for multichannel circuitry and the large amount of data to be processed. Synthetic aperture techniques, in which only one or a few transmission and reception channels are needed and the data are post-processed, can be used to reduce system complexity. Another possibility is to use sparse arrays instead of a fully populated array: a sparse array has fewer elements, and the inter-element spacing is larger than half a wavelength. In this work, results are presented for the ultrasonic inspection of an aluminum plate with artificial defects using guided acoustic waves and sparse arrays. Synthetic aperture techniques are used to obtain a set of images that are then processed with an image compounding technique, previously evaluated only with fully populated arrays, in order to increase the resolution and contrast of the images. The results with sparse arrays are equivalent to those obtained with fully populated arrays in terms of resolution. Although there is an 8 dB reduction in contrast with sparse arrays, defect detection is preserved, and the number of transducer elements and the data volume are reduced. © 2013 Brazilian Society for Automatics - SBA.
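A minimal delay-and-sum sketch of the synthetic aperture imaging step, assuming full-matrix-capture data (every transmit-receive element pair recorded); the geometry, wave speed, and sampling below are placeholders, and guided-wave dispersion is ignored:

    import numpy as np

    def das_image(fmc, elem_x, grid_x, grid_z, c, fs):
        """fmc[i, j, t]: signal fired by element i, received by element j."""
        img = np.zeros((len(grid_z), len(grid_x)))
        for zi, z in enumerate(grid_z):
            for xi, x in enumerate(grid_x):
                d = np.sqrt((elem_x - x) ** 2 + z ** 2)  # element-to-pixel paths
                for i in range(len(elem_x)):
                    for j in range(len(elem_x)):
                        t = int((d[i] + d[j]) / c * fs)  # round-trip delay sample
                        if t < fmc.shape[2]:
                            img[zi, xi] += fmc[i, j, t]
        return np.abs(img)

    fmc = np.zeros((4, 4, 2000))        # placeholder FMC data, no real echoes
    elem_x = np.linspace(0.0, 0.06, 4)  # a 4-element sparse array, in metres
    img = das_image(fmc, elem_x, np.linspace(0.0, 0.06, 32),
                    np.linspace(0.01, 0.05, 32), c=5400.0, fs=5e6)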