936 results for Vector Space Model
Abstract:
Paraconsistent logic admits that a contradiction can be true. Let p be the truth value of a proposition P. In paraconsistent logic the truth value of the contradiction satisfies an equation that has no real roots but admits complex roots. This result leads to the development of a multivalued logic with complex truth values. Since the sum of truth values is isomorphic to a vector of the plane, it is natural to relate the valuation function V to the metric of the vector space R2. We adopt as valuations the norms of vectors. The main objective of this paper is to establish a theory of truth-value evaluation for paraconsistent logics, with the goal of using it to analyze ideological, mythical, religious and mystic belief systems.
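Read literally, the valuation scheme sketched above identifies a complex truth value with a plane vector and evaluates it by its Euclidean norm. A minimal restatement of that step (the concrete equation for the contradiction is not reproduced in the abstract, so only the norm-based valuation is shown):

```latex
v(P) = a + b\,i \;\longleftrightarrow\; (a,b) \in \mathbb{R}^{2},
\qquad
V(P) = \lVert (a,b) \rVert = \sqrt{a^{2} + b^{2}}.
```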
Abstract:
Given a convex optimization problem (P) in a locally convex topological vector space X with an arbitrary number of constraints, we consider three possible dual problems of (P), namely, the usual Lagrangian dual (D), the perturbational dual (Q), and the surrogate dual (Δ), the last one recently introduced in a previous paper of the authors (Goberna et al., J Convex Anal 21(4), 2014). As shown by simple examples, these dual problems may all be different. This paper provides conditions ensuring that inf(P)=max(D), inf(P)=max(Q), and inf(P)=max(Δ) (dual equality and existence of dual optimal solutions) in terms of the so-called closedness regarding a set. Sufficient conditions guaranteeing min(P)=sup(Q) (dual equality and existence of primal optimal solutions) are also provided, for the nominal problems and also for their perturbational relatives. The particular cases of convex semi-infinite optimization problems (in which either the number of constraints or the dimension of X, but not both, is finite) and linear infinite optimization problems are analyzed. Finally, some applications to the feasibility of convex inequality systems are described.
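For orientation, a schematic primal problem with an arbitrary index set T and its Lagrangian dual, in a standard textbook form (not copied from the paper; the perturbational and surrogate duals are defined differently):

```latex
(P)\;\; \inf_{x \in X} f(x) \ \ \text{s.t.}\ f_t(x) \le 0,\ t \in T,
\qquad
(D)\;\; \sup_{\lambda \in \mathbb{R}^{(T)}_{+}} \; \inf_{x \in X}\Big( f(x) + \sum_{t \in T} \lambda_t f_t(x) \Big),
```

where R^(T)_+ collects the nonnegative multiplier families with finitely many nonzero entries, so the inner sum is finite even when T is infinite.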
Abstract:
This note provides an approximate version of the Hahn–Banach theorem for not necessarily convex, extended-real-valued, positively homogeneous functions of degree one. Given such a function p : X → R∪{+∞} defined on the real vector space X, and a linear function ℓ defined on a subspace V of X and dominated by p (i.e. ℓ(x) ≤ p(x) for all x ∈ V), we say that ℓ can approximately be p-extended to X if ℓ is the pointwise limit of a net of linear functions on V, every one of which can be extended to a linear function defined on X and dominated by p. The main result of this note proves that ℓ can approximately be p-extended to X if and only if ℓ is dominated by p∗∗, the pointwise supremum over the family of all the linear functions on X which are dominated by p.
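In this notation the characterization can be restated compactly; p∗∗ is exactly the upper envelope of the linear functions on X dominated by p, as defined in the abstract:

```latex
p^{**}(x) = \sup\{\, \ell(x) : \ell \ \text{linear on}\ X,\ \ell \le p \,\},
\qquad
\ell \ \text{approximately $p$-extendable from}\ V
\iff
\ell(x) \le p^{**}(x) \ \ \text{for all}\ x \in V.
```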
Abstract:
This paper explores the effects of non-standard monetary policies on international yield relationships. Based on a descriptive analysis of international long-term yields, we find evidence that long-term rates followed a global downward trend prior to as well as during the financial crisis. Comparing interest rate developments in the US and the eurozone, it is difficult to detect a distinct impact of the first round of the Fed’s quantitative easing programme (QE1) on US interest rates for which the global environment – the global downward trend in interest rates – does not account. Motivated by these findings, we analyse the impact of the Fed’s QE1 programme on the stability of the US-euro long-term interest rate relationship by using a CVAR (cointegrated vector autoregressive) model and, in particular, recursive estimation methods. Using data gathered between 2002 and 2014, we find limited evidence that QE1 broke up or destabilised the transatlantic interest rate relationship. Taking global interest rate developments into account, we thus find no significant evidence that QE had any independent, distinct impact on US interest rates.
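A minimal sketch of how such a cointegrated VAR could be estimated in Python with statsmodels. The yield series here are synthetic stand-ins and the lag order and deterministic terms are illustrative choices, not the paper's specification; recursive re-estimation over expanding windows would wrap the final two lines in a loop.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Synthetic stand-in for monthly US and euro-area long-term yields, 2002-2014
# (the paper's actual data are not reproduced here).
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(0, 0.1, 156))                      # shared stochastic trend
yields = pd.DataFrame({
    "us_10y": 4.0 + common + rng.normal(0, 0.05, 156),
    "ea_10y": 4.2 + common + rng.normal(0, 0.05, 156),
})

# Choose the cointegration rank with the Johansen trace test.
rank = select_coint_rank(yields, det_order=0, k_ar_diff=2, method="trace", signif=0.05)

# Estimate the VECM; 'ci' places the constant inside the cointegration relation (illustrative choice).
res = VECM(yields, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci").fit()
print(res.summary())   # adjustment loadings (alpha) and cointegrating vector (beta)
```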
The North Sea autumn spawning Herring (Clupea harengus L.) Spawning Component Abundance Index (SCAI)
Abstract:
The North Sea autumn-spawning herring (Clupea harengus) stock consists of a set of different spawning components. The dynamics of the entire stock have been well characterized, but although time-series of larval abundance indices are available for the individual components, study of the dynamics at the component level has historically been hampered by missing observations and high sampling noise. A simple state-space statistical model is developed that is robust to these problems, gives a good fit to the data, and proves capable of both handling and predicting missing observations well. Furthermore, the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney-Shetland component appears to have recovered faster from historic depletion events than the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual components of the North Sea herring stock.
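A minimal sketch of a univariate local-level state-space filter that simply skips the measurement update when an observation is missing, which is the basic mechanism that makes such models robust to gaps in larval abundance series. The variance values are placeholders and this is not the authors' multi-component model; their approach would extend this to several components jointly.

```python
import numpy as np

def local_level_filter(y, obs_var=1.0, state_var=0.1):
    """Kalman filter for y[t] = mu[t] + eps, mu[t] = mu[t-1] + eta.
    NaN entries in y are treated as missing: prediction only, no update."""
    n = len(y)
    mu, P = np.zeros(n), np.zeros(n)        # filtered mean and variance
    m, p = 0.0, 1e6                          # near-diffuse initialisation
    for t in range(n):
        p = p + state_var                    # prediction step
        if not np.isnan(y[t]):               # update only when observed
            k = p / (p + obs_var)            # Kalman gain
            m = m + k * (y[t] - m)
            p = (1.0 - k) * p
        mu[t], P[t] = m, p
    return mu, P

# Example: a (hypothetical) larval abundance index with missing surveys.
index = np.array([2.1, np.nan, 2.4, 2.6, np.nan, np.nan, 3.0])
filtered_mean, filtered_var = local_level_filter(index)
```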
Abstract:
We present two methods of estimating the trend, seasonality and noise in time series of coronary heart disease events. In contrast to previous work we use a non-linear trend, allow multiple seasonal components, and carefully examine the residuals from the fitted model. We show the importance of estimating these three aspects of the observed data to aid insight into the underlying process, although our major focus is on the seasonal components. For one method we allow the seasonal effects to vary over time and show how this helps the understanding of the association between coronary heart disease and varying temperature patterns. Copyright (C) 2004 John Wiley & Sons, Ltd.
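One common way to separate trend, seasonality and residual noise in a monthly event series is an STL decomposition. This is a generic illustration on synthetic data, not the two estimation methods developed in the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic monthly CHD event counts (placeholder data): gentle trend + winter peak + noise.
rng = np.random.default_rng(0)
months = pd.date_range("1990-01", periods=180, freq="MS")
counts = (200 - 0.2 * np.arange(180)                       # slowly declining trend
          + 25 * np.cos(2 * np.pi * np.arange(180) / 12)   # annual seasonal component
          + rng.normal(0, 10, 180))
events = pd.Series(counts, index=months)

fit = STL(events, period=12, robust=True).fit()
trend, seasonal, residual = fit.trend, fit.seasonal, fit.resid
print(residual.describe())    # examine the residuals from the fitted model
```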
Abstract:
In this paper, we present ICICLE (Image ChainNet and Incremental Clustering Engine), a prototype system that we have developed to efficiently and effectively retrieve WWW images based on image semantics. ICICLE has two distinguishing features. First, it employs a novel image representation model called Weight ChainNet to capture the semantics of the image content. A new formula, called list space model, for computing semantic similarities is also introduced. Second, to speed up retrieval, ICICLE employs an incremental clustering mechanism, ICC (Incremental Clustering on ChainNet), to cluster images with similar semantics into the same partition. Each cluster has a summary representative and all clusters' representatives are further summarized into a balanced and full binary tree structure. We conducted an extensive performance study to evaluate ICICLE. Compared with some recently proposed methods, our results show that ICICLE provides better recall and precision. Our clustering technique ICC facilitates speedy retrieval of images without sacrificing recall and precision significantly.
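A generic sketch of threshold-based incremental clustering in the spirit described above: each new image's semantic vector is assigned to the closest cluster representative, or a new cluster is opened when nothing is similar enough. The cosine similarity and threshold are placeholders; this is not ICICLE's ICC algorithm or its ChainNet representation.

```python
import numpy as np

def incremental_cluster(vectors, threshold=0.8):
    """Assign each semantic feature vector to the nearest cluster representative
    (by cosine similarity), or open a new cluster when nothing is close enough."""
    centroids, members = [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        v = v / np.linalg.norm(v)
        if centroids:
            sims = np.array([c @ v for c in centroids])
            best = int(sims.argmax())
            if sims[best] >= threshold:
                members[best].append(v)
                rep = np.mean(members[best], axis=0)
                centroids[best] = rep / np.linalg.norm(rep)   # refresh the representative
                continue
        centroids.append(v)        # no sufficiently similar cluster: start a new one
        members.append([v])
    return centroids, members
```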
Abstract:
For determining functionality dependencies between two proteins, both represented as 3D structures, it is an essential condition that they have one or more matching structural regions called patches. As 3D structures for proteins are large, complex and constantly evolving, it is computationally expensive and very time-consuming to identify possible locations and sizes of patches for a given protein against a large protein database. In this paper, we address a vector-space-based representation for protein structures, where a patch is formed by the vectors within the region. Based on our previous work, a compact representation of the patch, named a patch signature, is applied here. A similarity measure of two patches is then derived based on their signatures. To achieve fast patch matching in large protein databases, a match-and-expand strategy is proposed. Given a query patch, a set of small k-sized matching patches, called candidate patches, is generated in the match stage. The candidate patches are further filtered by enlarging k in the expand stage. Our extensive experimental results demonstrate encouraging performance on this biologically critical but previously computationally prohibitive problem.
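A rough sketch of the match-and-expand idea as described in the abstract: cheap matching on small k-sized signatures first, then filtering the survivors with an enlarged region. The signature vectors and the cosine similarity are stand-ins, not the paper's patch-signature definition or similarity measure.

```python
import numpy as np

def cosine(a, b):
    """Placeholder similarity between two patch signatures (vectors)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_and_expand(query, database, threshold=0.9):
    """Two-stage patch search sketch. Each patch is a dict with a cheap 'small'
    (k-sized) signature and a 'large' (enlarged-k) signature, both numpy vectors."""
    # Match stage: keep database patches whose small signature resembles the query's.
    candidates = [p for p in database
                  if cosine(query["small"], p["small"]) >= threshold]
    # Expand stage: re-check the surviving candidates on the enlarged signature.
    return [p for p in candidates
            if cosine(query["large"], p["large"]) >= threshold]
```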
Abstract:
Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations, but at peaks in the 3rd derivative. This new phenomenon, termed ‘Mach edges’, persisted when a luminance ramp was added to the blurred triangle-wave. Modelling of these Mach edge detection data required the addition of a physiologically plausible filter prior to the 3rd derivative computation. A viable alternative model was examined, on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than 4 competing models that combined information across scale in a different manner, or computed edge or bar features at a single scale. The perception of luminance bars was examined in 2 experiments. Data for one image set suggested a simple rule for the perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d’) has typically been reported to be a power function of contrast, where the exponent (p) is 2 to 3. However, using bar, grating, and Gaussian edge stimuli, with several methodologies, values of p were obtained that ranged from 1 to 1.7 across 6 experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
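A small numerical illustration of the central observation (candidate edge locations at peaks of the 3rd derivative of a blurred triangle wave). The blur width and wave parameters are arbitrary, and no physiologically plausible front-end filter is included:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

x = np.linspace(0, 4, 4000)
triangle = 2 * np.abs((x % 1) - 0.5)                # triangle-wave luminance profile
blurred = gaussian_filter1d(triangle, sigma=40)     # arbitrary Gaussian blur

d1 = np.gradient(blurred, x)                        # 1st derivative
d2 = np.gradient(d1, x)                             # 2nd derivative
d3 = np.gradient(d2, x)                             # 3rd derivative

mach_edge_locations, _ = find_peaks(np.abs(d3))     # peaks in the 3rd derivative
print(x[mach_edge_locations])
```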
Abstract:
Text classification is essential for narrowing down the number of documents relevant to a particular topic for further perusal, especially when searching through large biomedical databases. Protein-protein interactions are an example of such a topic, with databases being devoted specifically to them. This paper proposes a semi-supervised learning algorithm via local learning with class priors (LL-CP) for biomedical text classification, where unlabeled data points are classified in a vector space based on their proximity to labeled nodes. The algorithm has been evaluated on a corpus of biomedical documents to identify abstracts containing information about protein-protein interactions, with promising results. Experimental results show that LL-CP outperforms traditional semi-supervised learning algorithms such as SVM, and it also performs better than local learning without incorporating class priors.
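As a point of reference for the proximity-based idea (not the LL-CP algorithm itself), a graph-based semi-supervised baseline in scikit-learn labels unlabeled abstracts from their labeled neighbours in the document vector space. The three-document corpus and labels are purely illustrative:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

# Hypothetical corpus: -1 marks unlabeled abstracts, 1/0 mark interaction / no interaction.
abstracts = ["protein A binds protein B in vitro",
             "unrelated study of cell morphology",
             "kinase X phosphorylates substrate Y"]
labels = np.array([1, 0, -1])

X = TfidfVectorizer().fit_transform(abstracts).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)
print(model.transduction_)   # inferred labels for all documents, including the unlabeled one
```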
Abstract:
While many offline retailers have developed informational websites that offer information on products and prices, the key question for such informational websites is whether they can increase revenues via web-to-store shopping. The current paper draws on the information search literature to specify and test hypotheses regarding the offline revenue impact of adding an informational website. Explicitly considering marketing efforts, a latent class model distinguishes consumer segments with different short-term revenue effects, while a Vector Autoregressive model on these segments reveals different long-term marketing response. We find that the offline revenue impact of the informational website critically depends on the product category and customer segment. The lower online search costs are especially beneficial for sensory products and for customers distant from the store. Moreover, offline revenues increase most for customers with high web visit frequency. We find that customers in some segments buy more and more expensive products, suggesting that online search and offline purchases are complements. In contrast, customers in a particular segment reduce their shopping trips, suggesting their online activities partially substitute for experiential shopping in the physical store. Hence, offline retailers should use specific online activities to target specific product categories and customer segments.
Abstract:
The aim of this paper is to examine the short term dynamics of foreign exchange rate spreads. Using a vector autoregressive model (VAR) we show that most of the variation in the spread comes from the long run dependencies between past and future spreads rather than being caused by changes in inventory, adverse selection, cost of carry or order processing costs. We apply the Integrated Cumulative Sum of Squares (ICSS) algorithm of Inclan and Tiao (1994) to discover how often spread volatility changes. We find that spread volatility shifts are relatively uncommon and shifts in one currency spread tend not to spillover to other currency spreads. © 2013.
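A compact sketch of the centered cumulative-sum-of-squares statistic behind the Inclan and Tiao (1994) ICSS test, applied here to a single hypothetical series of spread changes; the full algorithm iterates this search over sub-segments to locate multiple variance shifts.

```python
import numpy as np

def icss_statistic(a):
    """Return max_k sqrt(T/2)*|D_k| and its location, where
    D_k = C_k / C_T - k / T and C_k is the cumulative sum of squares."""
    a = np.asarray(a, dtype=float)
    T = len(a)
    C = np.cumsum(a ** 2)
    k = np.arange(1, T + 1)
    D = C / C[-1] - k / T
    stat = np.sqrt(T / 2.0) * np.abs(D)
    return stat.max(), int(stat.argmax())

# Hypothetical demeaned spread changes with a variance shift halfway through;
# a statistic above roughly 1.36 (asymptotic 5% level) suggests a shift.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 300)])
print(icss_statistic(series))
```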
Abstract:
Given a differentiable action of a compact Lie group G on a compact smooth manifold V, there exists [3] a closed embedding of V into a finite-dimensional real vector space E so that the action of G on V may be extended to a differentiable linear action (a linear representation) of G on E. We prove an analogous equivariant embedding theorem for compact differentiable spaces (∞-standard in the sense of [6, 7, 8]).
Abstract:
As is well known, the convergence theorem for recurrent neural networks is based on Lyapunov's second method, which states that, associated with any given net state, there always exists a real number, in other words an element of the one-dimensional Euclidean space R, in such a way that when the state of the net changes this associated real number decreases. In this paper we introduce the two-dimensional Euclidean space R2 as the space associated with the net, and we define a pair of real numbers (x, y) associated with any given state of the net. We prove that when the net changes its state, the product x ⋅ y decreases. All the states whose projections onto the energy field lie on the same hyperbolic surface are considered as points with the same energy level. On the other hand, we prove that if the states are classified according to their distances to the zero vector, only one pattern in each of the different classes may be at the same energy level. The retrieving procedure is analyzed through the projection of the states on that plane. The geometrical properties of the synaptic matrix W may be used for classifying the n-dimensional state-vector space into n classes. A pattern to be recognized is seen as a point belonging to one of these classes, and depending on the class to which the pattern to be retrieved belongs, different weight parameters are used. The capacity of the net is improved and the spurious states are reduced. In order to clarify and corroborate the theoretical results, together with the formal theory, an application is presented.
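For context, a tiny demonstration of the classical convergence argument the abstract starts from: under asynchronous updates with a symmetric, zero-diagonal synaptic matrix W, the usual scalar energy never increases. This is the standard one-dimensional Lyapunov function, not the (x, y) pair and hyperbolic energy levels proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 16))
W = patterns.T @ patterns / 16.0
np.fill_diagonal(W, 0.0)                        # symmetric synaptic matrix, zero diagonal

def energy(s):
    return -0.5 * s @ W @ s                     # classical scalar Lyapunov energy

s = rng.choice([-1, 1], size=16)                # arbitrary initial state
for _ in range(100):                            # asynchronous updates
    i = rng.integers(16)
    e_before = energy(s)
    s[i] = 1 if W[i] @ s >= 0 else -1
    assert energy(s) <= e_before + 1e-12        # energy is non-increasing
```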
Abstract:
Neural networks have been successfully employed in different biomedical settings. They have been useful for feature extraction from images and biomedical data in a variety of diagnostic applications. In this paper, they are applied as a diagnostic tool for classifying different levels of gastric electrical uncoupling in controlled acute experiments on dogs. Data were collected from 16 dogs using six bipolar electrodes inserted into the serosa of the antral wall. Each dog underwent three recordings under different conditions: (1) basal state, (2) mild surgically-induced uncoupling, and (3) severe surgically-induced uncoupling. For each condition, half-hour recordings were made. The neural network was implemented according to the Learning Vector Quantization model, a supervised variant of the Kohonen Self-Organizing Maps. The majority of the recordings collected from the dogs were used for network training. The remaining recordings served as a testing tool to examine the validity of the training procedure. Approximately 90% of the dogs from the neural network training set were classified properly. However, only 31% of the dogs not included in the training process were accurately diagnosed. The poor neural-network-based diagnosis of recordings that did not participate in the training process might have been caused by inappropriate representation of the input data. Previous research has suggested characterizing signals according to certain features of the recorded data. This method, if employed, would reduce the noise and possibly improve the diagnostic abilities of the neural network.
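A bare-bones sketch of the LVQ1 update rule underlying the Learning Vector Quantization model mentioned above: the nearest prototype is pulled toward a sample when the class labels match and pushed away otherwise. The learning rate, epochs and feature dimensionality are placeholders, not the study's configuration.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1: move the nearest prototype toward a sample if the labels match,
    away from it otherwise. `prototypes` is a float array, e.g. class means."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))   # nearest prototype
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (x - P[j])
    return P

def predict(x, P, proto_labels):
    """Classify a feature vector by the label of its nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))]
```

Diagnosing a half-hour recording would then amount to extracting a feature vector from the signal and reporting the label of its nearest prototype.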