899 results for information bottleneck method
Abstract:
This paper presents the main achievements of the author's PhD dissertation. The work is dedicated to mathematical and semi-empirical approaches applied to the case of Bulgarian wildland fires. After the introductory explanations, each chapter is briefly summarized to cover the main parts of the obtained results. The methods used are described in brief and the main outcomes are listed. ACM Computing Classification System (1998): D.1.3, D.2.0, K.5.1.
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
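The segmented weighting idea can be sketched outside the LS-SVM machinery as a simple iteratively reweighted line fit: residuals that look normally distributed keep full weight, borderline residuals are tapered, and clear outliers are dropped. The thresholds `c1`, `c2`, the linear taper, and the MAD scale estimate below are illustrative assumptions, not the paper's exact weighting function.

```python
def weighted_linfit(x, y, w):
    """Closed-form weighted least-squares line fit: returns (slope, intercept)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    a = num / den
    return a, my - a * mx

def segmented_weight(r, s, c1=2.5, c2=3.0):
    """Keep residuals inside the normal range, taper suspect ones, drop outliers."""
    z = abs(r) / s
    if z <= c1:
        return 1.0
    if z >= c2:
        return 0.0
    return (c2 - z) / (c2 - c1)

def robust_fit(x, y, iters=5):
    """Iteratively reweighted fit: re-estimate weights from current residuals."""
    w = [1.0] * len(x)
    for _ in range(iters):
        a, b = weighted_linfit(x, y, w)
        res = [yi - (a * xi + b) for xi, yi in zip(x, y)]
        # robust scale estimate from the median absolute deviation
        med = sorted(abs(r) for r in res)[len(res) // 2]
        s = max(med / 0.6745, 1e-12)
        w = [segmented_weight(r, s) for r in res]
    return a, b
```

On data following y = 2x + 1 with one gross outlier, the reweighting drives the outlier's weight to zero and recovers the clean line, which is the robustness property the abstract claims for RLS-SVM.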
Abstract:
The purpose of this article is to evaluate the effectiveness of learning by doing as a practical tool for managing the training of students in "Library Management" at ULSIT, Sofia, Bulgaria, using the creation of the project Data Base "Bulgarian Revival Towns" (CD), financed by the Bulgarian Ministry of Education, Youth and Science (1/D002/144/13.10.2011) and headed by Prof. DSc Ivanka Yankova, which aims to create a new information resource on these towns to serve the needs of scientific research. By participating in generating the array in the database through searching, selection and digitization of documents from this period, students also get an opportunity to expand their skills in working effectively in a team, in finding interdisciplinary and causal connections between the studied items, objects and subjects, and, foremost, to gain practical experience in the fields of digitization, information behavior, strategies for information search, etc. This method achieves good results in the accumulation of sustainable knowledge and generates motivation to work in the library and information professions.
Abstract:
This paper presents a new, dynamic feature representation method for high value parts consisting of complex and intersecting features. The method first extracts features from the CAD model of a complex part. Then the dynamic status of each feature is established between the various operations to be carried out during the whole manufacturing process. Each manufacturing and verification operation can be planned and optimized using the real conditions of a feature, thus enhancing accuracy, traceability and process control. The dynamic feature representation is complementary to the design models used as the underlying basis in current CAD/CAM and decision support systems. © 2012 CIRP.
Abstract:
In this paper we propose a prototype size selection method for a set of sample graphs. Our first contribution is to show how approximate set coding can be extended from the vector domain to the graph domain. With this framework in hand, we show how prototype selection can be posed as optimizing the mutual information between two partitioned sets of sample graphs. We show how the resulting method can be used for prototype graph size selection. In our experiments, we apply our method to a real-world dataset and investigate its performance on prototype size selection tasks. © 2012 Springer-Verlag Berlin Heidelberg.
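The quantity being optimized, the mutual information between two partitionings, can be computed directly from the co-occurrence of partition labels. A minimal sketch of that core computation (the paper's graph-domain set-coding machinery is not reproduced; `a` and `b` are assumed to be the partition assignments of the same samples under two partitionings):

```python
from math import log
from collections import Counter

def mutual_information(a, b):
    """I(A;B) in nats from two aligned label sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # sum over observed joint cells: p(x,y) * log( p(x,y) / (p(x)p(y)) )
    return sum((c / n) * log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())
```

Identical partitionings give I = H(A) (log 2 for two balanced clusters), and independent ones give I = 0; a size-selection loop would evaluate this score over candidate prototype counts and keep the maximizer.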
Abstract:
Augmented reality is among the latest information technologies in the modern electronics industry. Its essence is the addition of advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and the approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.
Abstract:
The aim of this paper is to explore the management of information in an aerospace manufacturer's supply chain by analysing supply chain disruption risks. The social network perspective is used to examine the flows of information in the supply chain, and these flows are also explored in terms of push and pull information management. The supply chain risk management (SCRM) strategy is to assess the management of information so that companies can gather the information that allows them to mitigate risk before any disruption to the supply chain occurs. There is a shortage of models for analysing the supply chain risk associated with information flows, possibly due to the lack of appropriate modelling techniques in this area (Tang and Nurmaya, 2011). This paper uses an exploratory case study consisting of a multi-method qualitative approach with fifteen interviews and four focus groups.
Abstract:
In many e-commerce Web sites, product recommendation is essential to improve user experience and boost sales. Most existing product recommender systems rely on historical transaction records or Web-site-browsing history of consumers in order to accurately predict online users' preferences for product recommendation. As such, they are constrained by the limited information available on specific e-commerce Web sites. With the prolific use of social media platforms, it now becomes possible to extract product demographics from online product reviews and social networks built from microblogs. Moreover, users' public profiles available on social media often reveal their demographic attributes such as age, gender, and education. In this paper, we propose to leverage the demographic information of both products and users extracted from social media for product recommendation. Specifically, we frame recommendation as a learning-to-rank problem which takes as input the features derived from both product and user demographics. An ensemble method based on gradient-boosting regression trees is extended to make it suitable for our recommendation task. We have conducted extensive experiments to obtain both quantitative and qualitative evaluation results. Moreover, we have also conducted a user study to gauge the performance of our proposed recommender system in a real-world deployment. All the results show that our system is more effective in generating recommendation results better matching users' preferences than the competitive baselines.
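Gradient-boosted regression trees fit each new tree to the residuals of the current ensemble. A minimal single-feature sketch with depth-1 stumps shows that mechanism; the paper's ranking extension, multi-feature demographic inputs, and any regularisation are omitted, and all names here are illustrative.

```python
def fit_stump(x, y):
    """Best single-threshold split minimizing squared error; returns a predictor."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for j in range(1, len(x)):
        t = (x[order[j - 1]] + x[order[j]]) / 2
        left = [y[i] for i in range(len(x)) if x[i] <= t]
        right = [y[i] for i in range(len(x)) if x[i] > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda v, t=t, ml=ml, mr=mr: ml if v <= t else mr

def boost(x, y, rounds=50, lr=0.3):
    """Each round fits a stump to the residuals and adds it with shrinkage lr."""
    pred = [0.0] * len(x)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda v: sum(lr * s(v) for s in stumps)
```

With step-shaped targets the ensemble's predictions converge geometrically (residuals shrink by a factor of 1 - lr per round), which is the additive-correction behaviour the learning-to-rank extension builds on.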
Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still can represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space in which each latent visualised point is transformed to a multivariate Gaussian or T-distribution, using a feed-forward RBF network. Two types of uncertainty are then characterised, dependent on the data and mapping procedure. Data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution. This indicates how well the data has been interpolated, offering a level of 'surprise' for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
Abstract:
This study suggests a novel application of Inverse Data Envelopment Analysis (InvDEA) in strategic decision making about mergers and acquisitions in banking. Conventional DEA assesses the efficiency of banks based on information about the quantities of inputs used to realize the observed level of outputs produced. The decision maker of a banking unit willing to merge with or acquire another banking unit needs to decide on the input and/or output levels if an efficiency target for the new banking unit is set. In this paper, a new InvDEA-based approach is developed to suggest the required level of the inputs and outputs for the merged bank to reach a predetermined efficiency target. This study illustrates the novelty of the proposed approach through the case of a bank considering merging with or acquiring one of its competitors to synergize and realize a higher level of efficiency. A real data set of 42 banking units in Gulf Cooperation Council countries is used to show the practicality of the proposed approach.
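With a single input and a single output, the CCR efficiency score reduces to each unit's output/input ratio divided by the best ratio, which makes the inverse question easy to state: given a target efficiency and the merged unit's output, solve for the required input. A toy sketch under the (strong) assumption that the merged unit does not shift the efficient frontier; the paper's multi-input/multi-output InvDEA requires linear programming instead.

```python
def ccr_efficiency(units):
    """units: list of (input, output) pairs. With one input and one output,
    the CCR score is each unit's output/input ratio divided by the best ratio."""
    best = max(o / i for i, o in units)
    return [(o / i) / best for i, o in units]

def inverse_dea_input(target_eff, merged_output, units):
    """Smallest input letting a merged unit with the given output hit target_eff,
    assuming the existing frontier (best ratio) is unchanged by the merger."""
    best = max(o / i for i, o in units)
    return merged_output / (target_eff * best)
```

For example, if the best observed ratio is 2 units of output per unit of input, a merged bank targeting full efficiency with output 8 needs input 4; any larger input leaves it below the target.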
Abstract:
The sheer volume of available online product reviews makes it possible to derive implicit demographic information about product adopters from review documents. This paper proposes a novel approach to the extraction of product adopter mentions from online reviews. The extracted product adopters are then categorised into a number of different demographic user groups. The aggregated demographic information of many product adopters can be used to characterize both products and users, and can be incorporated into a recommendation method using weighted regularised matrix factorisation. Our experimental results on over 15 million reviews crawled from JINGDONG, the largest B2C e-commerce website in China, show the feasibility and effectiveness of our proposed framework for product recommendation.
Abstract:
In recent years, the boundaries between e-commerce and social networking have become increasingly blurred. Many e-commerce websites support the mechanism of social login, where users can sign on to the websites using their social network identities such as their Facebook or Twitter accounts. Users can also post their newly purchased products on microblogs with links to the e-commerce product web pages. In this paper, we propose a novel solution for cross-site cold-start product recommendation, which aims to recommend products from e-commerce websites to users at social networking sites in 'cold-start' situations, a problem which has rarely been explored before. A major challenge is how to leverage knowledge extracted from social networking sites for cross-site cold-start product recommendation. We propose to use the linked users across social networking sites and e-commerce websites (users who have social networking accounts and have made purchases on e-commerce websites) as a bridge to map users' social networking features to another feature representation for product recommendation. Specifically, we propose learning both users' and products' feature representations (called user embeddings and product embeddings, respectively) from data collected from e-commerce websites using recurrent neural networks, and then apply a modified gradient boosting trees method to transform users' social networking features into user embeddings. We then develop a feature-based matrix factorization approach which can leverage the learnt user embeddings for cold-start product recommendation. Experimental results on a large dataset constructed from the largest Chinese microblogging service Sina Weibo and the largest Chinese B2C e-commerce website JingDong show the effectiveness of our proposed framework.
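The pipeline can be caricatured in a few lines: a learnt transform maps a cold-start user's social features into the e-commerce embedding space, and products are ranked by inner product with the mapped vector. Here a fixed linear map `W` stands in for the paper's modified gradient-boosting trees, and the embeddings are hand-made toy vectors rather than RNN-learnt ones.

```python
def map_to_embedding(social_feats, W):
    """Linear stand-in for the learnt feature transform: emb = W @ feats."""
    return [sum(w * f for w, f in zip(row, social_feats)) for row in W]

def rank_products(user_emb, product_embs):
    """Rank product ids by dot-product score against the mapped user embedding."""
    scores = {pid: sum(u * v for u, v in zip(user_emb, emb))
              for pid, emb in product_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

The point of the bridge is that `rank_products` never needs the cold-start user's (nonexistent) purchase history, only the mapped embedding.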
Abstract:
We present in this article an automated framework that extracts product adopter information from online reviews and incorporates the extracted information into feature-based matrix factorization for more effective product recommendation. Specifically, we propose a bootstrapping approach for the extraction of product adopters from review text and categorize them into a number of different demographic categories. The aggregated demographic information of many product adopters can be used to characterize both products and users in the form of distributions over different demographic categories. We further propose a graph-based method to iteratively update user- and product-related distributions more reliably in a heterogeneous user-product graph, and incorporate them as features into the matrix factorization approach for product recommendation. Our experimental results on a large dataset crawled from JINGDONG, the largest B2C e-commerce website in China, show that our proposed framework outperforms a number of competitive baselines for product recommendation.
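The iterative update over the heterogeneous user-product graph can be sketched as alternating smoothing: each product's demographic distribution is pulled toward the average of its reviewers' distributions, and vice versa. The damping factor `alpha` and the symmetric averaging rule are illustrative assumptions, not the paper's exact update.

```python
def smooth(dists, neighbour_dists, nodes, alpha):
    """Blend each node's distribution with the mean of its neighbours'."""
    out = {}
    for v in nodes:
        nbrs = neighbour_dists(v)
        if not nbrs:
            out[v] = dists[v]
            continue
        k = len(dists[v])
        avg = [sum(d[i] for d in nbrs) / len(nbrs) for i in range(k)]
        out[v] = [alpha * dists[v][i] + (1 - alpha) * avg[i] for i in range(k)]
    return out

def propagate(user_dist, prod_dist, edges, alpha=0.5, iters=10):
    """Alternate smoothing over a bipartite user-product edge list."""
    users, prods = list(user_dist), list(prod_dist)
    for _ in range(iters):
        prod_dist = smooth(prod_dist,
                           lambda p: [user_dist[u] for u, q in edges if q == p],
                           prods, alpha)
        user_dist = smooth(user_dist,
                           lambda u: [prod_dist[p] for v, p in edges if v == u],
                           users, alpha)
    return user_dist, prod_dist
```

Because each update is a convex combination of probability distributions, the results remain valid distributions, and users sharing products drift toward each other's demographics, which is the reliability effect the graph step is meant to provide.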
Abstract:
In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Whereas current EDAs require much time in the statistical learning process because the relationships among the variables are complicated, this approach employs the Selfish Gene theory (SG) together with a Mutual Information and Entropy based Cluster (MIEC) model to optimize the probability distribution of the virtual population. The model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is used to optimize the parameters of the correlations of the variables. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
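HSMIEC sits in the general EDA template: estimate a probability model from selected individuals, then sample the next population from it. A univariate (UMDA-style) sketch on the OneMax problem shows that estimate-sample loop; the bivariate MIEC clustering, the Selfish Gene machinery, and the incremental resampling scheme of the paper are all omitted, and the parameter values are illustrative.

```python
import random

def umda_onemax(n=20, pop=50, elite=25, gens=60, seed=1):
    """Univariate EDA on OneMax: keep a per-bit probability vector p,
    sample a population, select the best half, re-estimate p from it."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(gens):
        popn = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop)]
        popn.sort(key=sum, reverse=True)          # truncation selection
        sel = popn[:elite]
        # re-estimate bit marginals, clamped to preserve some diversity
        p = [min(0.95, max(0.05, sum(ind[i] for ind in sel) / elite))
             for i in range(n)]
    return sum(max(popn, key=sum))                # best fitness found
```

Bivariate EDAs like HSMIEC replace the independent per-bit marginals with a model of pairwise dependencies, at the cost of the statistical learning time the abstract mentions.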
Abstract:
The Accounting Information System (AIS) is an important course in the Department of Accounting (DoAc) of universities in Taiwan. This course is required for seniors not only because it meets the needs of the profession, but also because it provides continual study for the department's students.

The scores of The National College and University Joint Entrance Examination (NUEE) show that students with high learning ability are admitted to public universities with high scores, while those with low learning ability are admitted only to private universities. The same situation was found by the researcher while teaching an AIS course in the DoAc of The Public Chun Shin University (CSU) and The Private Chinese Culture University (CCU).

The purpose of this study was to determine whether low-ability students enrolled in private universities in Taiwan in a mastery learning program could attain the same level as high-ability students from public universities enrolled in a traditional program. An experimental design was used. The mastery learning method was used to teach three groups of seniors with low learning ability studying in the DoAc at CCU. The traditional method was used to teach the control group, which consisted of senior students of the DoAc of CSU with high learning ability. As part of the mastery learning strategy, a formative test, quizzes, and homework were completed by the experimental group only, while the mid-term examination was completed by both groups as part of the course. The dependent variable was the summative test, the final examination, completed by both groups upon the course's completion.

As predicted, there were significant differences between the two groups' results on the pretest. There were no significant differences between the two groups' results on the posttest. These findings support the hypothesis of the study and reveal the effectiveness of mastery learning strategies with low-ability students.