899 results for Information Filtering, Pattern Mining, Relevance Feature Discovery, Text Mining


Relevance:

40.00%

Publisher:

Abstract:

The sexual system of the symbiotic shrimp Thor amboinensis is described, along with observations on the sex ratio and host-use pattern of different populations. We used a comprehensive approach to elucidate the previously unknown sexual system of this shrimp. Dissections, scanning electron microscopy, size-frequency distribution analysis, and laboratory observations demonstrated that T. amboinensis is a protandric hermaphrodite: shrimp first mature as males and change into females later in life. Thor amboinensis inhabited the large and structurally heterogeneous sea anemone Stichodactyla helianthus in large groups (up to 11 individuals) more frequently than expected by chance alone. Groups exhibited no particularly complex social structure and showed male-biased sex ratios more frequently than expected by chance alone. The adult sex ratio was male-biased in the four separate populations studied, one of them thousands of kilometers from the others. This study supports predictions central to theories of resource monopolization and sex allocation. Dissections demonstrated that unusually large males were parasitized by an undescribed species of isopod (family Entoniscidae). Infestation rates were similarly low in both sexes (approximately 11%-12%). The available information suggests that T. amboinensis uses pure-search promiscuity as a mating system. This hypothesis needs to be formally tested with mating behavior observations and field measurements of the movement patterns of both sexes. Further detailed studies on the lifestyle and sexual system of all the species within this genus, and the development of a molecular phylogeny, are necessary to elucidate the evolutionary history of gender expression in the genus Thor.

Relevance:

40.00%

Publisher:

Abstract:

Burst firing is ubiquitous in nervous systems and has been intensively studied in central pattern generators (CPGs). Previous work has described subtle intraburst spike patterns (IBSPs) that, despite being traditionally neglected for their lack of relation to CPG motor function, were shown to be cell-type specific and sensitive to CPG connectivity. Here we address this matter by investigating how a bursting motor neuron expresses information about other neurons in the network. We performed experiments on the crustacean stomatogastric pyloric CPG, both in control conditions and interacting in real time with computer model neurons. The sensitivity of postsynaptic to presynaptic IBSPs was inferred by computing their average mutual information along each neuron's burst. We found that details of input patterns are nonlinearly and inhomogeneously coded through a single synapse into the fine IBSP structure of the postsynaptic neuron's following burst. In this way, motor neurons are able to use different time scales to convey two types of information simultaneously: muscle contraction (related to the bursting rhythm) and the behavior of other CPG neurons (at a much shorter timescale, using IBSPs as information carriers). Moreover, the analysis revealed that the coding mechanism described takes part in a previously unsuspected information pathway from a CPG motor neuron to a nerve that projects to sensory brain areas, thus providing evidence of the general physiological role of information coding through IBSPs in the regulation of neuronal firing patterns in remote circuits by the CNS.
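The central quantity in the analysis above is the average mutual information between presynaptic and postsynaptic intraburst spike patterns. A minimal sketch of a plug-in mutual-information estimate over discretized interval symbols is shown below; the symbol names ('short'/'long') and the toy streams are illustrative, not the authors' actual binning procedure.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) symbol pairs."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of X
    py = Counter(y for _, y in pairs)    # marginal counts of Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy check: identical symbol streams share 1 bit; unrelated streams share ~0.
pre   = ['short', 'long'] * 50          # presynaptic interval symbols
post  = list(pre)                       # postsynaptic stream copies them exactly
indep = ['short'] * 50 + ['long'] * 50  # ordering unrelated to `pre`

print(mutual_information(list(zip(pre, post))))   # 1.0 bit
print(mutual_information(list(zip(pre, indep))))  # 0.0 bits
```

In the paper's setting the symbols would come from discretizing interspike intervals within each burst, and the average would be taken across many bursts.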

Relevance:

40.00%

Publisher:

Abstract:

The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique to create two-stage operators that circumvents that difficulty. We propose a novel algorithm, based on Information Theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined in a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
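An information-theoretic grouping of window pixels can be sketched as greedy forward selection: repeatedly add the pixel whose joint pattern with the pixels already chosen carries the most mutual information about the output value. This is an illustrative reading of the idea, not the paper's exact algorithm; the toy 3-pixel windows below are made up.

```python
from collections import Counter
from math import log2

def mi(xs, ys):
    """Mutual information (bits) between two equally long symbol sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def select_pixels(windows, labels, k):
    """Greedily pick k window positions whose joint pattern best predicts
    the output value."""
    chosen = []
    for _ in range(k):
        remaining = [p for p in range(len(windows[0])) if p not in chosen]
        best = max(remaining, key=lambda p: mi(
            [tuple(w[i] for i in chosen + [p]) for w in windows], labels))
        chosen.append(best)
    return chosen

# Toy data: a 3-pixel window where pixel 0 alone determines the output value.
windows = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)] * 10
labels  = [0, 0, 1, 1] * 10
print(select_pixels(windows, labels, 1))  # [0]
```

Each selected group would then define one first-level operator, with the groups' outputs combined by the second-stage (global) operator.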

Relevance:

40.00%

Publisher:

Abstract:

Condition monitoring of wooden railway sleepers is generally carried out by visual inspection, supplemented where necessary by impact acoustic examination performed intuitively by skilled personnel. In this work, a pattern recognition solution has been proposed to automate the process and achieve robust results. The study presents a comparison of several pattern recognition techniques together with various nonstationary feature extraction techniques for classification of impact acoustic emissions. Pattern classifiers such as the multilayer perceptron, learning vector quantization and Gaussian mixture models are combined with nonstationary feature extraction techniques such as the Short Time Fourier Transform, Continuous Wavelet Transform, Discrete Wavelet Transform and Wigner-Ville Distribution. Given the presence of several different feature extraction and classification techniques, data fusion has been investigated, mainly at two levels: the feature level and the classifier level. Fusion at the feature level demonstrated the best results, with an overall accuracy of 82% when compared to the human operator.
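Feature-level fusion simply means concatenating each signal's feature vectors from the different extractors into one vector before classification. A toy sketch follows; the one-dimensional "STFT" and "wavelet" features and the nearest-centroid classifier are stand-ins, not the study's actual pipeline.

```python
# Stand-in feature extractors: the real ones would be STFT, wavelet, etc.
stft_feats    = [[0.1], [0.2], [0.9], [0.8]]   # one vector per acoustic signal
wavelet_feats = [[1.0], [1.1], [0.1], [0.0]]
labels        = ['good', 'good', 'bad', 'bad']

def feature_level_fusion(feature_sets):
    """Concatenate each sample's vectors from all extractors into one vector."""
    return [sum(vecs, []) for vecs in zip(*feature_sets)]

def nearest_centroid(train_X, train_y, x):
    """Classify x by the nearest per-class mean of the fused feature vectors."""
    cents = {}
    for cls in set(train_y):
        rows = [v for v, y in zip(train_X, train_y) if y == cls]
        cents[cls] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(cents, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, cents[c])))

fused = feature_level_fusion([stft_feats, wavelet_feats])
print(nearest_centroid(fused, labels, [0.15, 1.05]))  # 'good'
```

Classifier-level fusion, by contrast, would train one classifier per feature set and combine their decisions (e.g. by voting).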

Relevance:

40.00%

Publisher:

Abstract:

The motivation for this thesis work is the need to improve equipment reliability and quality of service for railway passengers, as well as the requirement for cost-effective and efficient condition-maintenance management in rail transportation. This thesis develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is traditionally done manually by a human operator, relying on inference and assumptions to reach conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before it occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation and traffic safety, where safety is of prime importance, requires non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator moving along the sleepers and gathering information by visual and sound analysis to examine for the presence of cracks. Human inspectors working on the lines visually inspect wooden sleepers to judge their quality. In this project the machine vision system is developed from the manual visual analysis procedure, using digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, can be error prone, and the frequent changes in the inspected material make discrimination difficult even for a human operator. The machine vision system developed classifies the condition of the material by examining individual pixels of images, processing them, and attempting to draw conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge from the manual procedure.
The pattern recognition approach for this thesis was developed around a non-destructive testing method to identify the flaws missed in manually performed condition monitoring of sleepers. A test vehicle was designed to capture sleeper images in a way similar to visual inspection by a human operator, and the captured images of the wooden sleepers provide the raw data for the pattern recognition approach. The data from the NDT method were further processed and appropriate features extracted; the aim of this data collection is to achieve highly accurate and reliable classification results. A key idea is to use an unsupervised classifier, based on the extracted features, to discriminate the condition of wooden sleepers into either good or bad; a self-organising map is used as the classifier. To achieve greater integration, the data collected by the machine vision system were combined by a strategy called fusion, examined at two levels: sensor-level fusion and feature-level fusion. As the goal was to reduce the impact of human error on the classification of sleepers as good or bad, the results obtained by feature-level fusion, compared against the actual classification, were satisfactory.
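The unsupervised classification step can be sketched with a tiny self-organising map: unit weight vectors compete for each feature vector, and the winner is pulled toward it. This is a simplified illustration, seeded from data samples for determinism and with the cooperative (neighbourhood) phase already shrunk to the winner only; the two-dimensional "sleeper features" are invented.

```python
def bmu(x, units):
    """Index of the best-matching unit (closest weight vector)."""
    return min(range(len(units)),
               key=lambda u: sum((a - w) ** 2 for a, w in zip(x, units[u])))

def train_som(data, units, epochs=100, lr0=0.5):
    """Minimal SOM training loop: each sample pulls its winning unit toward
    itself, with a learning rate that decays to zero."""
    units = [list(u) for u in units]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        for x in data:
            b = bmu(x, units)
            units[b] = [w + lr * (a - w) for w, a in zip(units[b], x)]
    return units

# Hypothetical feature vectors: two "good" sleepers near (0, 0),
# two "bad" ones near (1, 1).
data = [(0.05, 0.1), (0.1, 0.05), (0.9, 0.95), (0.95, 0.9)]
units = train_som(data, units=[data[0], data[-1]])
print(bmu((0.0, 0.0), units) != bmu((1.0, 1.0), units))  # True: clusters split
```

After training, each map unit can be labelled good or bad by the samples it attracts, turning the unsupervised map into a two-class sleeper classifier.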

Relevance:

40.00%

Publisher:

Abstract:

The objective of this thesis work is to propose an algorithm to detect faces in a digital image with a complex background. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes or an open mouth; facial features thus form an important basis for detection. The current thesis work focuses on detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin color based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components within the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth, or a half-profile face. The experimental results demonstrated a detection accuracy of around 85.4% and a detection speed faster than the neural network method and other techniques.
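The segmentation and filtering phases can be sketched as a chrominance threshold followed by a morphological opening (erosion then dilation) that removes small non-skin speckle. The Cb/Cr ranges below are common literature values for skin in YCbCr, not necessarily the thesis' exact thresholds, and the 6x6 mask is a toy example.

```python
def skin_mask(cb, cr, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the Cb/Cr chrominance planes of a YCbCr image."""
    return [[cb_range[0] <= cb[i][j] <= cb_range[1] and
             cr_range[0] <= cr[i][j] <= cr_range[1]
             for j in range(len(cb[0]))] for i in range(len(cb))]

def _neighbourhood(m, i, j):
    """3x3 neighbourhood values, clipped at the image border."""
    h, w = len(m), len(m[0])
    return [m[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < h and 0 <= j + dj < w]

def opening(m):
    """Erosion followed by dilation: removes isolated false-positive pixels."""
    eroded = [[all(_neighbourhood(m, i, j)) for j in range(len(m[0]))]
              for i in range(len(m))]
    return [[any(_neighbourhood(eroded, i, j)) for j in range(len(m[0]))]
            for i in range(len(m))]

# A 6x6 mask: a solid 4x4 skin blob plus one isolated noise pixel at (0, 5).
mask = [[r in (1, 2, 3, 4) and c in (1, 2, 3, 4) for c in range(6)]
        for r in range(6)]
mask[0][5] = True
cleaned = opening(mask)
print(cleaned[0][5], cleaned[2][2])  # False True
```

The localization phase would then search the surviving skin regions for facial components.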

Relevance:

40.00%

Publisher:

Abstract:

Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often remain undiscovered, and advanced data mining techniques can remedy this. This thesis mainly deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database were first imported into Weka (3.6), and the Chi-Square method was used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Further, sensitivity and specificity tests were used as statistical measures to examine the performance of the binary classification: sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology was applied to build the mining models; it consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
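The two evaluation measures defined above follow directly from the confusion-matrix counts. A minimal sketch, with made-up predictions for six hypothetical patients:

```python
def sensitivity_specificity(y_true, y_pred, positive=1):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels for six patients (1 = chronic renal disease).
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # (2/3, 2/3)
```

Here one diseased patient is missed (a false negative) and one healthy patient is flagged (a false positive), so both measures come out at 2/3.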

Relevance:

40.00%

Publisher:

Abstract:

The accurate measurement of a vehicle’s velocity is an essential feature in adaptive vehicle-activated sign systems. Since vehicle velocities are acquired from a continuous-wave Doppler radar, data collection is challenging: accuracy is sensitive to the calibration of the radar on the road, yet clear methodologies for in-field calibration have not been carefully established. Signs are often installed by subjective judgment, which results in measurement errors. This paper develops a calibration method based on mining the collected data and matching individual vehicles travelling between two radars. The data were prepared in two ways: by cleaning and by reconstructing. The results showed that the proposed correction factor derived from the cleaned data corresponded well with the factor obtained experimentally on site. In addition, the proposed factor showed superior performance to the one derived from the reconstructed data.
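The matching idea can be sketched as follows: a vehicle seen at radar A should reach radar B after roughly distance / speed seconds, so each A record is paired with the B record nearest that implied arrival time, and the correction factor is the mean ratio of the paired speeds. This is an illustrative reconstruction, not the paper's exact procedure; the records, distance and tolerance are invented.

```python
def match_vehicles(obs_a, obs_b, distance_m, tol_s=2.0):
    """Pair (timestamp_s, speed_mps) records from radar A with the radar-B
    record closest to the arrival time implied by A's speed reading."""
    matches = []
    for t_a, v_a in obs_a:
        eta = t_a + distance_m / v_a          # expected arrival at B
        t_b, v_b = min(obs_b, key=lambda rec: abs(rec[0] - eta))
        if abs(t_b - eta) <= tol_s:           # accept only plausible pairings
            matches.append((v_a, v_b))
    return matches

def correction_factor(matches):
    """Mean ratio of the reference speed to the miscalibrated reading."""
    return sum(v_b / v_a for v_a, v_b in matches) / len(matches)

# Hypothetical data: radar A reads 10% low; B, 100 m downstream, is correct.
obs_a = [(0.0, 18.0), (60.0, 18.0), (120.0, 18.0)]
obs_b = [(5.0, 20.0), (65.0, 20.0), (125.0, 20.0)]
factor = correction_factor(match_vehicles(obs_a, obs_b, distance_m=100.0))
print(round(factor, 3))  # 1.111
```

Multiplying A's readings by the factor would bring them in line with the reference radar, which is the role the in-field calibration plays in the paper.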

Relevance:

40.00%

Publisher:

Abstract:

http://digitalcommons.winthrop.edu/dacusdocsnews/1017/thumbnail.jpg

Relevance:

40.00%

Publisher:

Relevance:

40.00%

Publisher:

Abstract:

The number of research papers available today is growing at a staggering rate, generating a huge amount of information that people cannot keep up with. According to a tendency indicated by the United States’ National Science Foundation, more than 10 million new papers will be published in the next 20 years. Because most of these papers will be available on the Web, this research focuses on issues in recommending research papers to users, in order to lead users directly to papers of their interest. Recommender systems recommend items to users from a huge stream of available items, according to the users’ interests. This research focuses on the two most prevalent techniques to date, namely Content-Based Filtering and Collaborative Filtering. The first explores the text of the paper itself, recommending items similar in content to the ones the user has rated in the past. The second explores the citation web existing among papers. As these two techniques have complementary advantages, we explored hybrid approaches to recommending research papers. We created standalone and hybrid versions of the algorithms and evaluated them through both offline experiments on a database of 102,295 papers and an online experiment with 110 users. Our results show that the two techniques can be successfully combined to recommend papers, and that the hybrid algorithms also raise coverage to 100%. In addition, we found that different algorithms are more suitable for recommending different kinds of papers. Finally, we verified that users’ research experience influences the way they perceive recommendations. We found no significant differences in recommending papers to users from different countries; however, users interacting with a research paper recommender system are much happier when the interface is presented in their native language, regardless of the language in which the papers are written. Therefore, the interface should be tailored to the user’s mother tongue.
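One simple way to hybridize the two techniques is a linear blend of their per-paper scores that falls back to whichever score exists, which is exactly what pushes coverage up: a paper scored by either technique can still be recommended. The weight, score values and paper ids below are illustrative, not the thesis' actual algorithm.

```python
def hybrid_scores(content, collaborative, alpha=0.5):
    """Blend two per-paper score dicts; fall back to the available score
    when only one technique covers a paper."""
    out = {}
    for paper in set(content) | set(collaborative):
        c = content.get(paper)        # content-based (text similarity) score
        f = collaborative.get(paper)  # collaborative (citation web) score
        out[paper] = c if f is None else f if c is None else \
            alpha * c + (1 - alpha) * f
    return out

content       = {'p1': 0.9, 'p2': 0.4}   # hypothetical text-similarity scores
collaborative = {'p2': 0.8, 'p3': 0.6}   # hypothetical citation-web scores
scores = hybrid_scores(content, collaborative)
print(sorted(scores))  # ['p1', 'p2', 'p3'] -- every paper is covered
```

Note that 'p1' and 'p3' are each covered by only one technique, yet both receive a score in the hybrid output.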

Relevance:

40.00%

Publisher:

Abstract:

Reporting volatile results without proper accounting disclosure can convey a negative image to investors and raise doubts about future results, transparency, and the risk-management capability of financial institutions’ managers. In recent decades, the use of hedge accounting for risk and earnings management has been prominent in large banks in Brazil and abroad. This follows the convergence of financial statements, in 2005 in Europe and in 2010 in Brazil, to the new international accounting standard (IFRS) issued by the IASB, a standard that has demanded great effort from banks to comply with the newly established rules. In this same vein, while hedge accounting in banks plays a prominent role in the management of risks and results, precise and concise disclosure in the financial statements provides shareholders, investors and other users with important information about the performance and conduct of the business. This gives the market a better basis for assessing the risks involved and estimating future results for investment decision-making. In this context, we evaluated the quality and degree of disclosure of the financial statements of the main Brazilian and European banks against the requirements of IFRS 7, IFRS 9, and further requirements devised by the author. All these requirements concern the disclosure of qualitative and quantitative information related to hedge accounting, and are therefore associated with risk- and earnings-management strategies. The degree of disclosure compliance with IFRS 7 and IFRS 9 was assessed through an exploratory study analyzing the IFRS explanatory notes of the ten largest banks in Brazil and in Europe by total assets.
The results indicate that 59.6% of the institutions analyzed comply with the requirements of IFRS 7. Another finding is that the compliance rate of Brazilian banks is higher than that of European banks: 68.3% vs. 50.8%. For IFRS 9 the percentage is only 23%, which is explained by the fact that the standard was not yet in force in either region, with only a few institutions having adopted it early on a voluntary basis. The quality of the explanatory notes on hedge accounting was assessed on a discretionary basis, by examining the information provided to meet the requirements of IFRS 7 and 9 and the additional requirements added by the author. The results indicate that the notes lack detail on the hedging instruments used, as well as on the objectives of each hedge, which would give the user of the information greater transparency about the risks hedged on the respective balance sheets. The growth in the volume of information provided in the explanatory notes of the large Brazilian and European banks after the adoption of IFRS did not translate into a proportional increase in informational content; form still prevails over substance. This opens room for future discussions with market agents about the appropriate size and informational content of explanatory notes, seeking a balance between the cost and the benefit of disclosing information from the standpoint of relevance and materiality.

Relevance:

40.00%

Publisher:

Abstract:

EMAp - Escola de Matemática Aplicada (School of Applied Mathematics)

Relevance:

40.00%

Publisher:

Abstract:

The domain of Knowledge Discovery (KD) and Data Mining (DM) is of growing importance at a time when more and more data is produced and knowledge is one of the most precious assets. Having explored the existing underlying theory, the results of ongoing research in academia, and industry practices in the domain of KD and DM, we found that this domain still lacks systematization. We also found that such systematization exists to a greater degree in the Software Engineering and Requirements Engineering domains, probably because they are more mature areas. We believe it is possible to improve and facilitate the participation of enterprise stakeholders in requirements engineering for KD projects by systematizing the requirements engineering process for such projects. This will, in turn, result in more projects ending successfully, that is, with satisfied stakeholders, including with respect to time and budget constraints. With this in mind, and based on the state of the art, we propose SysPRE - Systematized Process for Requirements Engineering in KD projects. We begin by proposing an encompassing generic description of the KD process, with the main focus on the Requirements Engineering activities. This description is then used as a base for applying the Design and Engineering Methodology for Organizations (DEMO), so that we can specify a formal ontology for the process. The resulting SysPRE ontology can serve as a base not only for making enterprises aware of their own KD process and of the requirements engineering process within their KD projects, but also for improving such processes in practice, namely in terms of success rate.