199 results for Curse
Abstract:
This research seeks to provide an exegetical assessment of the biblical text of Numbers 22-24, which tells of a character known as Balaam. The research also takes as its object the pantheon of deities reported in the same text, as well as the texts discovered at Deir Alla, in Jordan, which feature a character named Balaam, possibly the same figure as in Num 22-24. The motivation behind this research was the encounter with the concepts of the various divine names displayed in the text, together with the question of prophecy outside Israel and the hermeneutical possibilities that open up for the reading of this biblical text. The general view has always been that Israel was the only nation with "true" prophets and the worship of a single God, "monotheism". What aroused interest was the realization, especially through the reading of the biblical books, that prophecy was not restricted to Israel: it precedes the formation of ancient Israel and already existed across the lands of the ancient Near East, and Israel itself took a long time to become monotheistic. Who is this Balaam, son of Beor? We study his person and his mission. We examine the Deir Alla texts about Balaam and his nature as a mediating figure between the divine and the human; he is presented as a great prophet, famous as an interpreter of divine omens. We analyse the important question of the pantheon of gods presented in the Balaam narrative, named El, Elyon, Elohim and Shaddai, besides Yahweh. We understand, in principle, that the text is connected with the society in which it was created and, using exegetical methodology, we analyse the narrative in question, seeking to grasp the meaning of the text within its historical and social setting.
It is this setting that presents us with a non-Israelite prophet who pronounces the gods' blessings over Israel and who, beyond that, utters curses against that same Israel's enemies. We note that part of the text studied is presented from Israel's perspective on the other nations. The research therefore argues that the text of Num 22-24, besides presenting us with a prophet outside Israel equal to the prophets of the Bible, holds that this pantheon of deities was also worshipped by Israel and that these names are epithets of one and the same deity, namely YHWH. It also argues that the text outlines a project of political and military dominion of Israel over the neighbouring nations.
Abstract:
In this paper, we propose a novel filter for feature selection. The filter relies on estimating the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graph approximation of the Rényi entropy, and the subsequent approximation of the Shannon entropy. The complexity of this bypassing process depends not on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal-relevance minimal-redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
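The entropic-graph idea can be illustrated with a minimal pure-Python sketch (not the authors' implementation): the total length of the Euclidean minimum spanning tree over the samples yields an estimator of a Rényi entropy of order alpha = (d-1)/d, up to an additive constant that cancels in comparisons. Note that the cost depends only on the number of samples, never on a grid over the d-dimensional space.

```python
import math
import random

def mst_length(points):
    """Total edge length of the Euclidean MST (Prim's algorithm, O(n^2))."""
    n = len(points)
    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest edge connecting each point to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return total

def renyi_entropy_mst(points):
    """MST-based Rényi entropy estimate of order alpha = (d-1)/d.

    The dimension-dependent additive constant is omitted, so only
    differences between estimates are meaningful here.
    """
    n, d = len(points), len(points[0])
    alpha = (d - 1) / d
    return (1.0 / (1.0 - alpha)) * math.log(mst_length(points) / n ** alpha)

random.seed(1)
spread = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
tight = [(0.1 * x, 0.1 * y) for x, y in spread]
# A concentrated cloud has lower estimated entropy than the same cloud spread out.
print(renyi_entropy_mst(tight) < renyi_entropy_mst(spread))  # True
```

In a filter for feature selection, estimates of this kind would be compared across candidate feature subsets; the omitted constant drops out of those comparisons.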
Abstract:
The knowledge of the current state of the economy is crucial for policy makers, economists and analysts. However, a key economic variable, the gross domestic product (GDP), is typically collected on a quarterly basis and released with substantial delays by the national statistical agencies. The first aim of this paper is to use a dynamic factor model to forecast current Russian GDP using a set of timely monthly information. This approach can cope with the typical data-flow problems of non-synchronous releases, mixed frequency and the curse of dimensionality. Given that the Russian economy is largely dependent on the commodity market, our second motivation relates to studying the effects of innovations in Russian macroeconomic fundamentals on commodity price predictability. We identify these innovations through a news index which summarizes deviations of official data releases from the expectations generated by the DFM, and we perform a forecasting exercise comparing the performance of different models.
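The mixed-frequency mechanics can be sketched in pure Python with hypothetical simulated data (a toy stand-in for the paper's dynamic factor model): extract one common factor from a monthly panel via power iteration on the sample covariance, then average it within quarters to align it with quarterly GDP.

```python
import math
import random

random.seed(0)

# --- simulate a monthly panel driven by one common AR(1) factor (toy data) ---
T, N = 120, 8                       # 120 months, 8 monthly indicators
f = [0.0]
for _ in range(T - 1):
    f.append(0.8 * f[-1] + random.gauss(0, 1))
loadings = [random.uniform(0.5, 1.5) for _ in range(N)]
x = [[loadings[i] * f[t] + random.gauss(0, 0.5) for i in range(N)]
     for t in range(T)]

# --- estimate the factor as the first principal component (power iteration) ---
mean = [sum(col) / T for col in zip(*x)]
xc = [[x[t][i] - mean[i] for i in range(N)] for t in range(T)]
cov = [[sum(xc[t][i] * xc[t][j] for t in range(T)) / T for j in range(N)]
       for i in range(N)]
v = [1.0] * N
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    v = [wi / norm for wi in w]
fhat = [sum(xc[t][i] * v[i] for i in range(N)) for t in range(T)]

# --- mixed frequency: average the monthly factor within each quarter ---
def to_quarters(series):
    return [sum(series[3 * q:3 * q + 3]) / 3 for q in range(len(series) // 3)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    da = math.sqrt(sum((p - ma) ** 2 for p in a))
    db = math.sqrt(sum((q - mb) ** 2 for q in b))
    return num / (da * db)

# The quarterly factor tracks the unobserved true factor closely;
# the sign of a principal component is arbitrary, so compare |corr|.
print(abs(corr(to_quarters(fhat), to_quarters(f))) > 0.7)
```

A real DFM would add a state-space step (Kalman filtering) to handle ragged edges from non-synchronous releases; the quarterly averaging above only shows the frequency-bridging part.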
Abstract:
Appendix: index of the quotations (Curse) of all railway lines that came up for listing in the official quotation sheets.
Abstract:
Behind the times -- His first operation -- A straggler of '15 -- The third generation -- A false start -- The curse of Eve -- Sweethearts -- A physiologist's wife -- The case of Lady Sannox -- A question of diplomacy -- A medical document -- Lot no. 249 -- The Los Amigos fiasco -- The doctors of Hoyland -- The surgeon talks.
Abstract:
--v. 19 Periodical criticism:-v. 3, Miscellaneous: Tales of my landlord; Thornton's Sporting tour; Two cookery books; Johne's Translation of Froissart; Miseries of human life; Carr's Caledonian sketches; Lady Suffolk's correspondence; Kirkton's church history; Life & works of John Home.--v. 20 Periodical criticism:-v. 4 Miscellaneous: The Culloden papers; Pepy's Memoirs; Life of Kemble; Kelly's Reminiscences; Davy's Salmonia; Ancient history of Scotland.--v. 21 Periodical criticism:-v. 5 Miscellaneous: On planting waste lands--Monteath's Forester's guide; On landscape gardening--Sir H. Steuart's Planter's guide; Tytler's History of Scotland: Pitcairn's Criminal trials; Letters of Malachi Malagrowther on the currency.--v. 22-26 Tales of a grandfather: v. 1-5 Scotland.--v. 27-28 Tales of a grandfather: v. 6-7 France.
Abstract:
Added title pages, engraved.
Abstract:
With this is bound, as issued, the author's The curse and the cross ... Baltimore, 1887.
Abstract:
"Author's edition. This volume is published in England under the title of 'Poems before Congress'."
Abstract:
Vols. 1-4 are reissues of the four volumes of the edition of 1830.
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming the degradation in performance with increasing dimensions is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimension. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
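The role of the Mahalanobis metric in handling elliptical clusters can be seen in a small 2-D sketch (illustrative only, not the MMDR algorithm itself): under an elongated covariance, a point far along the cluster's long axis is "nearer" than an equally Euclidean-distant point off-axis.

```python
import math

def mahalanobis_2d(p, mean, cov):
    """Mahalanobis distance of point p from a 2-D elliptical cluster."""
    (a, b), (c, d) = cov
    det = a * d - b * c                                  # invert the 2x2 covariance
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (p[0] - mean[0], p[1] - mean[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

# Cluster stretched along the x-axis: variance 4 in x, 0.25 in y.
cov = ((4.0, 0.0), (0.0, 0.25))
on_axis = mahalanobis_2d((2.0, 0.0), (0.0, 0.0), cov)    # -> 1.0
off_axis = mahalanobis_2d((0.0, 2.0), (0.0, 0.0), cov)   # -> 4.0
print(on_axis, off_axis)
```

Both test points are at Euclidean distance 2 from the cluster center, yet the on-axis point is four times closer in the Mahalanobis sense, which is why cluster-aligned axes permit a more faithful low-dimensional projection.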
Abstract:
In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in each individual space is used as the indexing key, with similarity values from different features distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To remove the dimensionality curse present in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer Theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices over multiple features. Our LBS method outperforms sequential scan on high-dimensional spaces by an order of magnitude, and Certainty Factor and Dempster-Shafer Theory perform best in combining multiple similarities from the corresponding multiple features.
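The bit-stream filtering idea can be sketched as follows (a simplified stand-in for LBS; the one-bit-per-dimension encoding relative to a partition center is an assumption of this sketch, not the paper's exact scheme): encode each feature vector as a compact bit signature, then prune candidates whose signatures share too few bits with the query's before any exact distance is computed.

```python
def bit_signature(vec, center):
    """One bit per dimension: 1 where the coordinate exceeds the center's."""
    sig = 0
    for i, (v, c) in enumerate(zip(vec, center)):
        if v > c:
            sig |= 1 << i
    return sig

def common_bits(a, b, width):
    """Number of bit positions on which two signatures agree."""
    return width - bin(a ^ b).count("1")

center = (0.5, 0.5, 0.5, 0.5)              # local partition's center
query = (0.9, 0.1, 0.8, 0.2)
database = {
    "near": (0.8, 0.2, 0.9, 0.1),          # same side of the center everywhere
    "far": (0.1, 0.9, 0.2, 0.8),           # opposite side everywhere
}
q = bit_signature(query, center)
# Cheap bit operations filter candidates before any exact comparison.
survivors = [name for name, vec in database.items()
             if common_bits(q, bit_signature(vec, center), 4) >= 3]
print(survivors)  # ['near']
```

The surviving shortlist would then be ranked with the full similarity measure; the signatures stay small (one machine word for up to 64 dimensions here) and comparison is a single XOR plus popcount.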
Abstract:
One of the most pressing issues facing the global conservation community is how to distribute limited resources between regions identified as priorities for biodiversity conservation(1-3). Approaches such as biodiversity hotspots(4), endemic bird areas(5) and ecoregions(6) are used by international organizations to prioritize conservation efforts globally(7). Although identifying priority regions is an important first step in solving this problem, it does not indicate how limited resources should be allocated between regions. Here we formulate how to optimally allocate conservation resources between regions identified as priorities for conservation, the 'conservation resource allocation problem'. Stochastic dynamic programming is used to find the optimal schedule of resource allocation for small problems but is intractable for large problems owing to the curse of dimensionality(8). We identify two easy-to-use and easy-to-interpret heuristics that closely approximate the optimal solution. We also show the importance of both correctly formulating the problem and using information on how investment returns change through time. Our conservation resource allocation approach can be applied at any spatial scale. We demonstrate the approach with an example of optimal resource allocation among five priority regions in Wallacea and Sundaland, the transition zone between Asia and Australasia.
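A deterministic miniature of the allocation problem (with hypothetical return schedules; the paper's model is stochastic and dynamic) shows how dynamic programming searches over budget splits between regions with diminishing returns:

```python
import functools

# Hypothetical diminishing returns: conservation value per units of budget spent.
RETURNS = {
    "A": [0.0, 5.0, 8.0, 10.0, 11.0],
    "B": [0.0, 4.0, 7.5, 10.5, 12.0],
}

@functools.lru_cache(maxsize=None)
def best(budget, regions):
    """Optimal value and plan for splitting `budget` units among `regions`."""
    if not regions:
        return 0.0, ()
    name, rest = regions[0], regions[1:]
    options = []
    for spend in range(min(budget, len(RETURNS[name]) - 1) + 1):
        value, plan = best(budget - spend, rest)
        options.append((RETURNS[name][spend] + value, ((name, spend),) + plan))
    return max(options)

value, plan = best(4, ("A", "B"))
print(value, plan)  # 15.5 (('A', 2), ('B', 2))
```

With many regions, many budget levels and stochastic returns, this state space grows combinatorially, which is the curse of dimensionality the abstract refers to and the reason simple allocation heuristics become attractive.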
Abstract:
Indexing high-dimensional datasets has attracted extensive attention from many researchers in the last decade. Since R-tree-type index structures are known to suffer from the curse of dimensionality, Pyramid-tree-type index structures, which are based on the B-tree, have been proposed to break the curse of dimensionality. However, for high-dimensional data, the number of pyramids is often insufficient to discriminate data points when the number of dimensions is high, and effectiveness degrades dramatically as dimensionality increases. In this paper, we focus on one particular aspect of the curse of dimensionality: the surface of a hypercube in a high-dimensional space approaches 100% of the total hypercube volume as the number of dimensions approaches infinity. We propose a new indexing method based on this surface property. We prove that the Pyramid-tree technique is a special case of our method. The results of our experiments demonstrate the clear superiority of our novel method.
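The volume-concentration fact the abstract cites is easy to verify numerically: the fraction of a unit hypercube's volume lying within eps of its surface is 1 - (1 - 2*eps)^d, which approaches 1 as d grows.

```python
def surface_fraction(d, eps=0.05):
    """Fraction of the unit hypercube's volume within eps of its surface."""
    return 1.0 - (1.0 - 2.0 * eps) ** d

for d in (2, 10, 100):
    print(d, round(surface_fraction(d), 5))   # 2 0.19 / 10 0.65132 / 100 0.99997
```

At 100 dimensions, essentially all of the data lies in the thin shell near the surface, which is why surface-oriented partitioning schemes remain discriminative where interior-oriented ones collapse.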
Abstract:
Conventionally, document classification research focuses on improving the learning capabilities of classifiers. Nevertheless, according to our observation, the effectiveness of classification is limited by the suitability of the document representation. Intuitively, the more features that are used in a representation, the more comprehensively documents are represented. However, if a representation contains too many irrelevant features, the classifier suffers not only from the curse of high dimensionality, but also from overfitting. To address this problem of the suitability of document representations, we present a classifier-independent approach to measuring the effectiveness of document representations. Our approach utilises a labelled document corpus to estimate the distribution of documents in the feature space. By looking at documents in this way, we can clearly identify the contributions made by different features toward document classification. Experiments have been performed to show how this effectiveness is evaluated. Our approach can be used as a tool to assist feature selection, dimensionality reduction and document classification.
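One classifier-independent way to estimate a feature's contribution from a labelled corpus is the mutual information between the feature's presence and the class label (a generic sketch, not the paper's specific measure):

```python
import math
from collections import Counter

def feature_score(docs, labels, feature):
    """Mutual information between presence of `feature` and the class label."""
    n = len(docs)
    joint = Counter((feature in doc, lab) for doc, lab in zip(docs, labels))
    p_feat = Counter(feature in doc for doc in docs)
    p_lab = Counter(labels)
    mi = 0.0
    for (present, lab), count in joint.items():
        # count*n / (p_feat * p_lab) equals p(x, y) / (p(x) * p(y))
        mi += (count / n) * math.log(count * n / (p_feat[present] * p_lab[lab]))
    return mi

# Tiny hypothetical corpus: documents as word sets with class labels.
docs = [{"ball", "goal"}, {"ball", "team"}, {"vote", "law"}, {"vote", "court"}]
labels = ["sport", "sport", "politics", "politics"]
# "ball" marks the sport class in every document; "goal" in only one.
print(feature_score(docs, labels, "ball")
      > feature_score(docs, labels, "goal"))  # True
```

Ranking features by such a score, estimated from the labelled corpus alone, supports feature selection and dimensionality reduction without committing to any particular classifier.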