35 results for non-negative matrix factorization
in Aston University Research Archive
Abstract:
This paper presents a fast part-based subspace selection algorithm, termed binary sparse nonnegative matrix factorization (B-SNMF). Both the training and testing processes of B-SNMF are much faster than those of binary principal component analysis (B-PCA). In addition, B-SNMF is more robust to occlusions in images. Experimental results on face images demonstrate the effectiveness and efficiency of the proposed B-SNMF.
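The abstract does not give B-SNMF's update rules; as background, a minimal sketch of standard NMF with Lee-Seung multiplicative updates (the factorization that B-SNMF additionally constrains with binary/sparse structure, not reproduced here) might look like this, assuming NumPy:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Approximate a non-negative matrix V as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H; ratios keep H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W; ratios keep W >= 0
    return W, H
```

Because the updates are multiplicative, non-negativity of the factors is preserved automatically, which is what makes the decomposition part-based.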
Abstract:
This paper draws on the use of data envelopment analysis (DEA) in helping a Portuguese bank to manage the performance of its branches. The bank wanted to set branch targets on variables such as growth in the number of clients, growth in funds deposited, and so on. Such variables can take positive and negative values but, apart from some exceptions, traditional DEA models have hitherto been restricted to non-negative data. We report on the development of a model to handle unrestricted data in a DEA framework and illustrate its use on data from the bank concerned.
Abstract:
Over the last few years Data Envelopment Analysis (DEA) has been gaining increasing popularity as a tool for measuring the efficiency and productivity of Decision Making Units (DMUs). Conventional DEA models assume non-negative inputs and outputs. However, in many real applications, some inputs and/or outputs can take negative values. Recently, Emrouznejad et al. [6] introduced a Semi-Oriented Radial Measure (SORM) for modelling DEA with negative data. This paper points out some issues in target setting with SORM models and introduces a modified SORM approach. An empirical study in the banking sector demonstrates the applicability of the proposed model. © 2014 Elsevier Ltd. All rights reserved.
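For context, the conventional (non-negative-data) input-oriented CCR model that SORM extends is a small linear program per DMU. A hedged sketch using `scipy.optimize.linprog` follows; variable names are illustrative, and the SORM modification for negative data is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form):
        min theta  s.t.  X.T @ lam <= theta * X[o],
                         Y.T @ lam >= Y[o],  lam >= 0.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs), all non-negative."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lam]
    A_in = np.hstack([-X[[o]].T, X.T])           # X.T lam - theta * X[o] <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # -Y.T lam <= -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

A DMU scoring theta = 1 is on the efficient frontier; theta < 1 gives the radial contraction of its inputs needed to reach it. This formulation breaks down when entries of X or Y are negative, which is the gap SORM addresses.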
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of the banking sector in GCC countries. Because the selected variables include negative data for some banks and positive data for others, and existing evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform the evaluation. Furthermore, since the SORM results alone provide limited information for decision makers (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to derive rules characterizing efficient banks; the results are thus useful to bankers seeking to improve their banks' performance and to investors seeking to maximize their returns. There are two main approaches to evaluating the performance of Decision Making Units (DMUs), each comprising different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming. The nonparametric approach includes two methods, Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH), while the parametric approach includes three: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The literature shows that DEA and SFA are the most widely applied methods in the banking sector, with DEA appearing to be the most popular among researchers.
However, DEA, like SFA, still faces many challenges. One of these is how to deal with negative data, since DEA requires the assumption that all input and output values are non-negative, whereas in many applications negative outputs can appear, e.g. losses in contrast with profits. Although a few DEA models have been developed to deal with negative data, we believe each has its own limitations; we therefore developed a Semi-Oriented Radial Model (SORM) that can handle negativity in DEA. The application of SORM shows that the overall performance of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007) owing to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; these two countries were the most affected by the second Gulf War. The results also show no statistical relationship between operating style (Islamic or conventional) and bank efficiency. Even so, Islamic banks appear to be more efficient than conventional banks, with an average efficiency score of 86.33% compared with 85.38% for conventional banks. Furthermore, Islamic banks appear to have been more affected by the political crisis (the second Gulf War), whereas conventional banks appear to have been more affected by the financial crisis.
Abstract:
Supply chain formation is the process by which a set of producers within a network determines the subset of those producers able to form a chain supplying goods to one or more consumers at the lowest cost. This problem has been tackled in a number of ways, including auctions, negotiations, and argumentation-based approaches. In this paper we show how it can be cast as the optimization of a pairwise cost function. Optimizing this class of energy functions is NP-hard, but efficient approximations to the global minimum can be obtained using loopy belief propagation (LBP). Here we detail a max-sum LBP-based approach to the supply chain formation problem, involving decentralized message passing between supply chain participants. Our approach is evaluated against a well-known decentralized double-auction method and an optimal centralized technique, showing several improvements over the auction method: it obtains better solutions for most network instances that allow for competitive equilibrium (in the sense of Walsh and Wellman, a set of producer costs permitting a Pareto-optimal state in which agents in the allocation receive non-negative surplus and agents outside it would acquire non-positive surplus by participating in the supply chain), while also optimally solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions. © 2012 Wiley Periodicals, Inc.
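To illustrate the pairwise-cost formulation, the toy sketch below brute-forces the minimum of a pairwise energy over binary participation variables; the paper's max-sum LBP solver approximates this same minimum by decentralized message passing on networks too large to enumerate. The energy tables in the example are invented for illustration:

```python
import itertools

def min_pairwise_energy(unary, pairwise):
    """Exhaustively minimise a pairwise energy over x_i in {0, 1}
    (x_i = 1 means producer i participates in the chain):
        E(x) = sum_i unary[i][x_i] + sum_{(i,j)} pairwise[(i,j)][x_i][x_j]
    Returns (minimum energy, argmin state)."""
    n = len(unary)
    best, best_x = float("inf"), None
    for x in itertools.product((0, 1), repeat=n):
        e = sum(unary[i][x[i]] for i in range(n))
        e += sum(tab[x[i]][x[j]] for (i, j), tab in pairwise.items())
        if e < best:
            best, best_x = e, x
    return best, best_x

# Invented 3-producer chain 0 -> 1 -> 2: activation costs 2, 3, 1;
# a large penalty if a producer runs without its upstream supplier,
# and a large penalty if the consumer-facing producer 2 stays inactive.
unary = [[0, 2], [0, 3], [100, 1]]
pairwise = {(0, 1): [[0, 100], [0, 0]], (1, 2): [[0, 100], [0, 0]]}
```

On this instance the minimum-energy state activates the whole chain, since any partial chain incurs a penalty term.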
Abstract:
With the reformation of spectrum policy and the development of cognitive radio, secondary users will be allowed to access spectrums licensed to primary users. Spectrum auctions can facilitate this secondary spectrum access in a market-driven way. To design an efficient auction framework, we first study the supply and demand pressures and the competitive equilibrium of the secondary spectrum market, considering the spectrum reusability. In well-designed auctions, competition among participants should lead to the competitive equilibrium according to the traditional economic point of view. Then, a discriminatory price spectrum double auction framework is proposed for this market. In this framework, rational participants compete with each other by using bidding prices, and their profits are guaranteed to be non-negative. A near-optimal heuristic algorithm is also proposed to solve the auction clearing problem of the proposed framework efficiently. Experimental results verify the efficiency of the proposed auction clearing algorithm and demonstrate that competition among secondary users and primary users can lead to the competitive equilibrium during auction iterations using the proposed auction framework. Copyright © 2011 John Wiley & Sons, Ltd.
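The paper's discriminatory-price, reuse-aware clearing algorithm is not reproduced here, but the textbook uniform-price double auction it generalizes can be sketched in a few lines; trading buyers pay no more than their bids and trading sellers receive no less than their asks, so participant surplus is non-negative:

```python
def clear_double_auction(bids, asks):
    """Uniform-price clearing: trade the largest k such that the k-th
    highest bid is at least the k-th lowest ask; all k trades execute
    at one price between the marginal bid and ask, so each trading
    buyer pays at most their bid and each trading seller receives at
    least their ask. Returns (k, clearing_price)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None  # no mutually acceptable trade
    return k, (bids[k - 1] + asks[k - 1]) / 2
```

A discriminatory-price variant, as in the proposed framework, would instead charge each winner an individual price; the non-negative-profit guarantee then has to be argued per participant rather than from a single clearing price.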
Abstract:
Conventional DEA models assume deterministic, precise and non-negative input and output observations. However, real applications may be characterized by observations that come in the form of intervals and include negative numbers. For instance, the electricity consumption of decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending, e.g., on external temperature and real-time outtake. Complementing earlier work that addressed the two problems (interval data and negative data) separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the interval bounds may have different signs. The proposed method determines upper and lower bounds on the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes: strictly efficient, weakly efficient and inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
Abstract:
In recent years, the boundaries between e-commerce and social networking have become increasingly blurred. Many e-commerce websites support social login, where users can sign in using their social network identities such as their Facebook or Twitter accounts. Users can also post their newly purchased products on microblogs with links to the e-commerce product web pages. In this paper, we propose a novel solution for cross-site cold-start product recommendation, which aims to recommend products from e-commerce websites to users at social networking sites in 'cold-start' situations, a problem which has rarely been explored before. A major challenge is how to leverage knowledge extracted from social networking sites for cross-site cold-start product recommendation. We propose to use the linked users across social networking sites and e-commerce websites (users who have social networking accounts and have made purchases on e-commerce websites) as a bridge to map users' social networking features to another feature representation for product recommendation. Specifically, we propose learning both users' and products' feature representations (called user embeddings and product embeddings, respectively) from data collected from e-commerce websites using recurrent neural networks, and then applying a modified gradient boosting trees method to transform users' social networking features into user embeddings. We then develop a feature-based matrix factorization approach that can leverage the learnt user embeddings for cold-start product recommendation. Experimental results on a large dataset constructed from the largest Chinese microblogging service Sina Weibo and the largest Chinese B2C e-commerce website JingDong demonstrate the effectiveness of our proposed framework.
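As a background sketch of the matrix factorization backbone (the paper's feature-based variant maps social-network features into the user factors, which is not reproduced here), a plain SGD-trained factorization over observed (user, item, rating) triples might look like:

```python
import numpy as np

def fit_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02,
           epochs=300, seed=0):
    """Fit r_ui ~ p_u . q_i by stochastic gradient descent on observed
    (user, item, rating) triples, with L2 regularisation on the factors."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            pu = P[u].copy()                     # use pre-update value below
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q
```

In a feature-based variant, a cold-start user's row of P would be produced from side features (here, the transformed social-networking features) instead of being learned from that user's nonexistent purchase history.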
Abstract:
We present in this article an automated framework that extracts product adopter information from online reviews and incorporates the extracted information into feature-based matrix factorization for more effective product recommendation. Specifically, we propose a bootstrapping approach for extracting product adopters from review text and categorizing them into a number of demographic categories. The aggregated demographic information of many product adopters can be used to characterize both products and users in the form of distributions over demographic categories. We further propose a graph-based method to iteratively update user- and product-related distributions more reliably in a heterogeneous user-product graph and incorporate them as features into the matrix factorization approach for product recommendation. Our experimental results on a large dataset crawled from JingDong, the largest B2C e-commerce website in China, show that our proposed framework outperforms a number of competitive baselines for product recommendation.
Abstract:
Conventional differential scanning calorimetry (DSC) techniques are commonly used to quantify the solubility of drugs within polymeric-controlled delivery systems. However, the nature of the DSC experiment, and in particular the relatively slow heating rates employed, limit its use to the measurement of drug solubility at the drug's melting temperature. Here, we describe the application of hyper-DSC (HDSC), a variant of DSC involving extremely rapid heating rates, to the calculation of the solubility of a model drug, metronidazole, in silicone elastomer, and demonstrate that the faster heating rates permit the solubility to be calculated under non-equilibrium conditions such that the solubility better approximates that at the temperature of use. At a heating rate of 400°C/min (HDSC), metronidazole solubility was calculated to be 2.16 mg/g compared with 6.16 mg/g at 20°C/min. © 2005 Elsevier B.V. All rights reserved.
Abstract:
The accounting profession has come under increased scrutiny in recent years over the growing level of non-audit fees received from audit clients and the possible negative impact of such fees on auditor independence. The argument advanced is that providing substantial amounts of non-audit services to clients may make it more likely that auditors concede to the wishes of client management when difficult judgments are made. Such concerns are particularly salient in the case of reporting decisions related to going-concern uncertainties for financially stressed clients. This study empirically examines audit reports provided to financially stressed companies in the United Kingdom and the magnitude of audit and non-audit service fees paid to the company's auditors. We find that the magnitudes of both audit fees and non-audit fees are significantly associated with the issuance of a going-concern modified audit opinion. In particular, financially stressed companies with high audit fees are more likely to receive a going-concern modified audit opinion, whereas companies with high non-audit fees are less likely to receive a going-concern modified audit opinion. Additional analyses indicate that the results are generally robust across alternative model and variable specifications. Overall, the evidence supports the contention that high non-audit fees have a detrimental effect on going-concern reporting judgments for financially stressed U.K. companies.
Abstract:
Many organisations are encouraging their staff to integrate work and non-work, but a qualitative study of young professionals found that many crave greater segregation rather than more integration. Most wished to build boundaries to separate the two and simplify a complex world. Where working practices render traditional boundaries of time and space ineffective, this population seems to create new idiosyncratic boundaries to segregate work from non-work. These idiosyncratic boundaries depended on age, culture and life-stage though for most of this population there was no appreciable gender difference in attitudes to segregating work and non-work. Gender differences only became noticeable for parents. A matrix defining the dimensions to these boundaries is proposed that may advance understanding of how individuals separate their work and personal lives. In turn, this may facilitate the development of policies and practices to integrate work and non-work that meet individual as well as organisational needs.
Abstract:
Patients with non-erosive reflux disease (NERD) report symptoms which commonly fail to improve on conventional antireflux therapies. Oesophageal visceral hyperalgaesia may contribute to symptom generation in NERD and we explore this hypothesis using oesophageal evoked potentials. Fifteen endoscopically confirmed NERD patients (four female, 29–56 years) plus 15 matched healthy volunteers (four female, 23–56 years) were studied. All patients had oesophageal manometry/24-h pH monitoring and all subjects underwent evoked potential and sensory testing, using electrical stimulation of the distal oesophagus. Cumulatively, NERD patients had higher sensory thresholds and increased evoked potential latencies when compared to controls (P = 0.01). In NERD patients, there was a correlation between pain threshold and acid exposure as determined by DeMeester score (r = 0.63, P = 0.02), with increased oesophageal sensitivity being associated with lower DeMeester score. Reflux negative patients had lower pain thresholds when compared to both reflux positive patients and controls. Evoked potentials were normal in reflux negative patients but significantly delayed in the reflux positive group (P = 0.01). We demonstrate that NERD patients form a continuum of oesophageal afferent sensitivity with a correlation between the degree of acid exposure and oesophageal pain thresholds. We provide objective evidence that increased oesophageal pain sensitivity in reflux negative NERD is associated with heightened afferent sensitivity as normal latency evoked potential responses could be elicited with reduced afferent input. Increased oesophageal afferent pain sensitivity may play an important role in a subset of NERD and could offer an alternate therapeutic target.
Abstract:
Biocomposite films comprising a non-crosslinked, natural polymer (collagen) and a synthetic polymer, poly(ε-caprolactone) (PCL), have been produced by impregnation of lyophilised collagen mats with a solution of PCL in dichloromethane followed by solvent evaporation. This approach avoids the toxicity problems associated with chemical crosslinking. Distinct changes in film morphology, from continuous surface coating to open porous format, were achieved by variation of processing parameters such as collagen:PCL ratio and the weight of the starting lyophilised collagen mat. Collagenase digestion indicated that the collagen content of 1:4 and 1:8 collagen:PCL biocomposites was almost totally accessible for enzymatic digestion indicating a high degree of collagen exposure for interaction with other ECM proteins or cells contacting the biomaterial surface. Much reduced collagen exposure (around 50%) was measured for the 1:20 collagen:PCL materials. These findings were consistent with the SEM examination of collagen:PCL biocomposites which revealed a highly porous morphology for the 1:4 and 1:8 blends but virtually complete coverage of the collagen component by PCL in the 1:20 samples. Investigations of the attachment and spreading characteristics of human osteoblast (HOB) cells on PCL films and collagen:PCL materials respectively, indicated that HOB cells poorly recognised PCL but attachment and spreading were much improved on the biocomposites. The non-chemically crosslinked collagen:PCL biocomposites described are expected to provide a useful addition to the range of biomaterials and matrix systems for tissue engineering.
Abstract:
Tissue transglutaminase (TG2) is a multifunctional Ca2+ activated protein crosslinking enzyme secreted into the extracellular matrix (ECM), where it is involved in wound healing and scarring, tissue fibrosis, celiac disease and metastatic cancer. Extracellular TG2 can also facilitate cell adhesion important in wound healing through a non-transamidating mechanism via its association with fibronectin (FN), heparan sulphates (HS) and integrins. Regulating the mechanism by which TG2 is translocated into the ECM therefore provides a strategy for modulating these physiological and pathological functions of the enzyme. Here, through molecular modelling and mutagenesis we have identified the HS binding site of TG2 as residues 202-222 (KFLKNAGRDCSRRSSPVYVGR). We demonstrate the requirement of this binding site for translocation of TG2 into the ECM through a mechanism involving cell surface shedding of HS. By synthesizing a peptide NPKFLKNAGRDCSRRSS corresponding to the HS binding site within TG2, we also demonstrate how this mimicking peptide can in isolation compensate for the RGD-induced loss of cell adhesion on FN via binding to syndecan-4, leading to activation of PKCα, pFAK-397 and ERK1/2 and the subsequent formation of focal adhesions and actin cytoskeleton organization. A novel regulatory mechanism for TG2 translocation into the extracellular compartment that depends upon TG2 conformation and the binding of HS is proposed.