86 results for Classification Rules
Abstract:
Landscape classification tackles issues related to the representation and analysis of continuous and variable ecological data. In this study, a methodology is created in order to define topo-climatic landscapes (TCL) in the north-west of Catalonia (north-east of the Iberian Peninsula). TCLs relate the ecological behaviour of a landscape in terms of topography, physiognomy and climate, which constitute the main drivers of an ecosystem. The selected variables are derived from different sources, such as remote sensing and climatic atlases. The proposed methodology combines unsupervised iterative cluster classification with a supervised fuzzy classification. As a result, 28 TCLs have been found for the study area, which may be differentiated in terms of vegetation physiognomy and vegetation altitudinal range type. Furthermore, a hierarchy among TCLs is set, enabling the merging of clusters and allowing for changes of scale. Through the topo-climatic landscape map, managers may identify patches with similar environmental conditions and assess at the same time the uncertainty involved.
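The two-stage scheme this abstract describes, unsupervised iterative clustering followed by a fuzzy assignment that exposes per-pixel uncertainty, can be sketched roughly as follows. This is an illustrative NumPy stand-in, not the authors' actual workflow; the function and variable names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal iterative cluster classification (k-means) on
    standardized topo-climatic variables (rows = pixels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

def fuzzy_membership(X, centers, m=2.0):
    """Fuzzy-c-means-style memberships: each pixel gets a degree of
    belonging to every cluster, which expresses the classification
    uncertainty the abstract mentions."""
    d = np.sqrt(((X[:, None] - centers) ** 2).sum(-1)) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

Each row of the membership matrix sums to 1, so a pixel whose largest membership is far below 1 sits near a boundary between topo-climatic landscapes.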
Abstract:
The objective of this paper is to identify empirically the logic behind short-term interest rate setting.
Abstract:
We consider negotiations selecting one-dimensional policies. Individuals have single-peaked preferences, and they are impatient. Decisions arise from a bargaining game with random proposers and (super) majority approval, ranging from the simple majority up to unanimity. The existence and uniqueness of stationary subgame perfect equilibrium is established, and its explicit characterization provided. We supply an explicit formula to determine the unique alternative that prevails, as impatience vanishes, for each majority. As an application, we examine the efficiency of majority rules. For symmetric distributions of peaks, unanimity is the unanimously preferred majority rule. For asymmetric populations, the rules maximizing social surplus are characterized.
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual countries of the EU before and after the launch of the European Monetary Union. In particular, we have estimated an augmented version of the Taylor rule (TR) for 25 EU countries over two periods (1992-1998, 1999-2006). While single-equation estimation methods have been used to identify the policy rules of individual central banks, a dynamic panel setting has been employed for the rule of the European Central Bank. We have found that most central banks did follow some interest rate rule, but its form was usually different from the original TR (which proposes that the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries have been the presence of interest rate smoothing as well as a response to the foreign interest rate. Any response to domestic macroeconomic variables has been missing from the rules of countries with inflexible exchange rate regimes, whose rules consisted in mimicking foreign interest rates. While we have found a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices has been generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) has been confirmed only in large economies and in economies troubled with unsustainable inflation rates. Finally, the deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with actual turbulent periods).
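An augmented Taylor rule with interest rate smoothing is typically written i_t = rho*i_{t-1} + (1-rho)*(c + b_pi*pi_t + b_y*y_t) + e_t, and the Taylor principle requires the long-run inflation response b_pi > 1. A minimal OLS sketch of this kind of estimation (not the paper's estimator, which also includes foreign rates and uses panel methods for the ECB; names are illustrative):

```python
import numpy as np

def estimate_taylor_rule(i, pi, gap):
    """OLS on i_t = c + rho*i_{t-1} + a_pi*pi_t + a_y*gap_t + e_t.
    With smoothing, the long-run responses are a_pi/(1-rho) and
    a_y/(1-rho); the Taylor principle asks for a_pi/(1-rho) > 1."""
    X = np.column_stack([np.ones(len(i) - 1), i[:-1], pi[1:], gap[1:]])
    beta, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
    c, rho, a_pi, a_y = beta
    return {"rho": rho,
            "long_run_pi": a_pi / (1 - rho),
            "long_run_y": a_y / (1 - rho)}
```

On simulated data generated from a known rule, the long-run inflation response is recovered closely, which is how such estimators are usually sanity-checked.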
Abstract:
This paper has three objectives. First, it aims at revealing the logic of interest rate setting pursued by the monetary authorities of 12 new EU members. Using an estimation of an augmented Taylor rule, we find that this setting was not always consistent with official monetary policy. Second, we seek to shed light on the inflation process of these countries. To this end, we carry out an estimation of an open-economy Phillips curve (PC). Our main finding is that inflation rates were driven not only by backward persistence but also by a forward-looking component. Finally, we assess the viability of existing monetary arrangements for price stability, using the conditional inflation variance obtained from a GARCH estimation of the PC. We conclude that inflation targeting is preferable to an exchange rate peg because it allowed the inflation rate to decrease and anchored its volatility.
Abstract:
The II Workshop on Computed Tomography (CT) was held in Monells. The first day was devoted entirely to the use of CT in pig carcass classification, and the second day was open to other CT applications, whether in live animals or in various aspects of the quality of meat and meat products. The workshop was attended by 45 people from 12 EU countries.
Abstract:
We examine whether and how major central banks responded to episodes of financial stress over the last three decades. We employ a new methodology for monetary policy rule estimation, which allows for time-varying response coefficients and corrects for endogeneity. This flexible framework, applied to the U.S., U.K., Australia, Canada and Sweden together with a new financial stress dataset developed by the International Monetary Fund, makes it possible not only to test whether the central banks responded to financial stress but also to detect the periods and types of stress that were the most worrying for monetary authorities, and to quantify the intensity of the policy response. Our findings suggest that central banks often change policy
Abstract:
Descriptive set theory is mainly concerned with studying subsets of the space of all countable binary sequences. In this paper we study the generalization where countable is replaced by uncountable. We explore properties of generalized Baire and Cantor spaces, equivalence relations and their Borel reducibility. The study shows that descriptive set theory looks very different in this generalized setting compared to the classical, countable case. We also draw a connection between the stability-theoretic complexity of first-order theories and the descriptive set-theoretic complexity of their isomorphism relations. Our results suggest that Borel reducibility on uncountable structures is a model-theoretically natural way to compare the complexity of isomorphism relations.
Abstract:
This paper provides a natural way of reaching an agreement between two prominent proposals in a bankruptcy problem. In particular, using the fact that such problems can be approached from two different points of view, awards and losses, we justify the average of any pair of dual bankruptcy rules through the definition of a double recursive process. Finally, by considering three possible sets of equity principles that a particular society may agree on, we retrieve the average of well-known bankruptcy rules: the Constrained Equal Awards and Constrained Equal Losses rules, Piniles’ rule and its dual, and the Constrained Egalitarian rule and its dual. Keywords: Bankruptcy problems, Midpoint, Bounds, Duality, Recursivity. JEL classification: C71, D63, D71.
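The two dual rules named in this abstract, and the midpoint between them, are simple to compute directly. A small Python sketch (not the paper's double recursive process; it uses the standard duality CEL(E, c) = c - CEA(sum(c) - E, c)):

```python
def cea(E, claims, tol=1e-9):
    """Constrained Equal Awards: each claimant receives min(claim, lam),
    with lam chosen (here by bisection) so the awards exhaust the
    estate E."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < E:
            lo = lam
        else:
            hi = lam
    return [min(c, (lo + hi) / 2) for c in claims]

def cel(E, claims):
    """Constrained Equal Losses via duality: apply CEA to the total
    loss sum(claims) - E and subtract from the claims."""
    losses = cea(sum(claims) - E, claims)
    return [c - l for c, l in zip(claims, losses)]

def midpoint_rule(E, claims):
    """Average of the dual pair, the kind of compromise the paper
    justifies."""
    return [(a + b) / 2 for a, b in zip(cea(E, claims), cel(E, claims))]
```

For claims (100, 200, 300) and an estate of 300, CEA gives (100, 100, 100), CEL gives (0, 100, 200), and their midpoint (50, 100, 150) still distributes exactly the estate.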
Abstract:
Reaching a commitment among agents has always been a difficult task, especially when they have to decide how to distribute the available amount of a scarce resource among themselves. On the one hand, there is a multiplicity of possible ways of assigning the available amount; on the other hand, each agent will propose the distribution that provides her with the highest possible award. In this paper, with the purpose of making this agreement easier, we first use two different sets of basic properties, called Commonly Accepted Equity Principles, to delimit what agents can propose as reasonable allocations. Second, we extend the results obtained by Chun (1989) and Herrero (2003), obtaining new characterizations of well-known bankruptcy rules. Finally, using the fact that bankruptcy problems can be analyzed from the perspectives of awards and losses, we define a mechanism which provides a new justification of the convex combinations of bankruptcy rules. Keywords: Bankruptcy problems, Unanimous Concessions procedure, Diminishing Claims mechanism, Piniles’ rule, Constrained Egalitarian rule. JEL classification: C71, D63, D71.
Abstract:
Land cover classification is a key research field in remote sensing and land change science, as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land cover classification remains a difficult task, and it is especially challenging in heterogeneous tropical landscapes where nonetheless such maps are of great importance. The present study aims to establish an efficient classification approach to accurately map all broad land cover classes in a large, heterogeneous tropical area of Bolivia, as a basis for further studies (e.g., land cover-land use change). Specifically, we compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbour and four different support vector machines, SVM), and hybrid classifiers, using both hard and soft (fuzzy) accuracy assessments. In addition, we test whether the inclusion of a textural index (homogeneity) in the classifications improves their performance. We classified Landsat imagery for two dates corresponding to dry and wet seasons and found that non-parametric, and particularly SVM classifiers, outperformed both parametric and hybrid classifiers. We also found that the use of the homogeneity index along with reflectance bands significantly increased the overall accuracy of all the classifications, but particularly of SVM algorithms. We observed that improvements in producer's and user's accuracies through the inclusion of the homogeneity index differed depending on land cover classes. Early-growth/degraded forests, pastures, grasslands and savanna were the classes most improved, especially with the SVM radial basis function and SVM sigmoid classifiers, though with both classifiers all land cover classes were mapped with producer's and user's accuracies of around 90%.
Our approach seems very well suited to accurately map land cover in tropical regions, thus having the potential to contribute to conservation initiatives, climate change mitigation schemes such as REDD+, and rural development policies.
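The homogeneity index added to the reflectance bands here is commonly computed from a grey-level co-occurrence matrix (GLCM), as sum of P(i,j) / (1 + |i-j|) over all grey-level pairs. A minimal NumPy sketch for a single image window (quantization and offset handling simplified relative to standard remote-sensing toolchains):

```python
import numpy as np

def glcm_homogeneity(window, levels=8, offset=(0, 1)):
    """GLCM homogeneity for one image window: quantize to `levels`
    grey levels, count co-occurring pairs at the given pixel offset,
    normalize to probabilities, then weight by 1 / (1 + |i - j|)."""
    w = np.asarray(window, dtype=float)
    mx = w.max()
    q = (np.floor(w / mx * (levels - 1)).astype(int) if mx > 0
         else np.zeros(w.shape, dtype=int))
    dr, dc = offset
    a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()  # reference pixels
    b = q[dr:, dc:].ravel()                            # neighbour pixels
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)                          # co-occurrence counts
    P /= P.sum()
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

A perfectly flat window scores 1.0 (all co-occurring pairs share a grey level), while a maximally alternating checkerboard with two levels scores 0.5, which is why the index helps separate smooth classes like pasture from textured ones like degraded forest.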
Abstract:
A table showing a comparison and classification of tools (intelligent tutoring systems) for e-learning of Logic at a college level.
Abstract:
We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently, training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual word representation, in all cases, using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
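The pLSA step that produces the topic distribution vector per image can be sketched with a plain EM loop on the images-by-visual-words count matrix. This is a minimal illustrative implementation, not the authors' code, and the array names are hypothetical.

```python
import numpy as np

def plsa(counts, n_topics, iters=100, seed=0):
    """Fit pLSA by EM on a (documents x words) count matrix.
    Returns (P(z|d), P(w|z)); each row of P(z|d) is the per-image
    topic distribution fed to the multiway classifier."""
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_w_z = rng.random((n_topics, W))
    p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, n_topics))
    p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibilities P(z | d, w) for every (d, w) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]   # (D, Z, W)
        resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate both factors from expected counts
        nz = counts[:, None, :] * resp                  # (D, Z, W)
        p_w_z = nz.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = nz.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

On a toy corpus where two groups of "images" use disjoint visual words, the fitted topic distributions reconstruct each group's word usage, which is the low-dimensional signal the downstream k-NN or SVM classifier exploits.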
Abstract:
A recent trend in digital mammography is computer-aided diagnosis systems, which are computerised tools designed to assist radiologists. Most of these systems are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases. This dependence is method specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was assessed on a large set of digitised mammograms. The evaluation involves different classifiers and uses a leave-one-out methodology. The results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
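The leave-one-out methodology used in this evaluation has a very compact form: each mammogram is classified by a model trained on all the others. A NumPy sketch with a 1-nearest-neighbour classifier standing in for the texture-based classifiers actually compared (the feature vectors and classifier choice here are illustrative):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy with a 1-NN classifier: for each sample,
    hold it out, find its nearest neighbour among the rest, and check
    whether the neighbour's class label matches."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the held-out sample
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / len(X)
```

Leave-one-out is attractive for medium-sized medical image sets like this one because every case serves as a test sample while the training set stays as large as possible.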