949 results for Symbolic Computation
Abstract:
This thesis examines the manufacture, use, exchange (including gift exchange), collecting and commodification of German medals and badges from the early 18th century until the present day, with particular attention given to the symbols that were deployed by the National Socialist German Workers' Party (NSDAP) between 1919 and 1945. It does so by focusing in particular on the construction of value through insignia, and how such badges and their symbolic and monetary value changed over time. In order to achieve this, the thesis adopts a chronological structure, which encompasses the creation of Prussia in 1701, the Napoleonic wars and the increased democratisation of military awards such as the Iron Cross during the Great War. The collapse of the Kaiserreich in 1918 was the major factor that led to the creation of the NSDAP under the eventual stranglehold of Hitler, a fundamentally racist and anti-Semitic movement that continued the German tradition of awarding and wearing badges. The traditional symbols of Imperial Germany, such as the eagle, were then infused with the swastika, an emblem that was meant to signify anti-Semitism, thus creating a hybrid identity. This combination was then replicated en masse, and eventually eclipsed all the symbols that had possessed symbolic significance in Germany's past. After Hitler was appointed Chancellor in 1933, millions of medals and badges were produced in an effort to create a racially based "People's Community", but the steel and iron that were required for munitions eventually led to substitute materials being utilised and developed in order to manufacture millions of politically oriented badges. The Second World War unleashed Nazi terror across Europe, and the conscripts and volunteers who took part in this fight for living space were rewarded with medals that were modelled on those that had been instituted during Imperial times.
The colonial conquest and occupation of the East by the Wehrmacht, the Order Police and the Waffen-SS surpassed the brutality of former wars and finally culminated in the Holocaust, and some of these horrific crimes and their perpetrators were perversely rewarded with medals and badges. Despite Nazism being thoroughly discredited, many of the Allied soldiers who occupied Germany took part in the age-old practice of obtaining trophies of war, which reconfigured the meaning of Nazi badges as souvenirs and began the process of their increased commodification on an emerging secondary collectors' market. In order to analyse the dynamics of this market, a "basket" of badges is examined that enables a discussion of the role that aesthetics, scarcity and authenticity play in determining the price of the artefacts. In summary, this thesis demonstrates how the symbolic, socio-economic and exchange value of German military and political medals and badges has changed substantially over time, provides a stimulus for scholars to conduct research in this under-developed area, and encourages collectors to investigate the artefacts that they collect in a more historically contextualised manner.
Abstract:
Within the last few years, disabled people have become the target of government austerity measures through drastic cuts to welfare, justified through the portrayal of benefit claimants as inactive, problem citizens who are wilfully unemployed. For all that is wrong with these cuts, they are only one of the many aspects of exclusion that disabled people face. Attitudes towards disability are deteriorating (Scope, 2011), and disabled people are devalued and negatively positioned in a myriad of ways, meaning that an understanding of the perceptions and positioning of disability, and of the power of disabling practices, is critical. This thesis examines how Bourdieu's theoretical repertoire may be applied to Disability Studies in order to discern how society produces oppressive and exclusionary systems of classification which structure the social position and perceptions of disability. The composite nature of disability, and the multiple forms of exclusion and inequality associated with it, calls for a multipronged approach which acknowledges the personal, embodied and psychological aspects of disability alongside socio-political and cultural conceptualisations. Bourdieu's approach brings the micro and macro aspects of social life together through their meso interplay, and so provides a thorough analysis of the many aspects of disability.
Abstract:
Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity: current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. In real-world deployments, however, these security notions are often too strong, or the number of parties running a protocol is smaller. In this thesis we take several steps towards bridging the efficiency gap of secure computation by constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:
- We show an efficient (when amortized over multiple runs) maliciously secure two-party computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it is caught then the honest party obtains a certificate proving that that party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
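To make the opening definition concrete, here is a toy additive-secret-sharing sum in Python. It is only an illustration of the core idea (parties jointly compute a function while no single share reveals an input); it is not one of the thesis's protocols, and all function names are my own.

```python
import random

def share(secret, n, modulus=2**32):
    """Split an integer into n additive shares that sum to it mod `modulus`."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def secure_sum(inputs, modulus=2**32):
    """Each party shares its input with every other party; each party then
    publishes only the sum of the shares it holds. Individual shares are
    uniformly random, so no single message reveals any input."""
    n = len(inputs)
    all_shares = [share(x, n, modulus) for x in inputs]
    # party j holds one share of every input and publishes its local sum
    partials = [sum(all_shares[i][j] for i in range(n)) % modulus
                for j in range(n)]
    return sum(partials) % modulus
```

This toy scheme is only passively secure; the malicious and covert models discussed in the thesis additionally defend against parties who deviate from the protocol.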
Abstract:
We here present a sample MATLAB program for the numerical evaluation of the confluent hypergeometric function Φ2. This program is based on the calculation of the inverse Laplace transform using the algorithm suggested by Simon and Alouini in their reference textbook [1].
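The MATLAB routine itself is not reproduced here. As a rough cross-check of any numerical scheme, Φ2 (Humbert's confluent hypergeometric function of two variables) can also be evaluated directly from its double power series; the following Python sketch is my own illustration, with an assumed truncation level, and is not the Laplace-inversion algorithm of the paper.

```python
from math import factorial

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def phi2(b1, b2, c, x, y, terms=40):
    """Truncated double series: Phi_2(b1,b2;c;x,y) =
    sum_{m,n} (b1)_m (b2)_n / (c)_{m+n} * x^m y^n / (m! n!)."""
    s = 0.0
    for m in range(terms):
        for n in range(terms):
            s += (poch(b1, m) * poch(b2, n) / poch(c, m + n)
                  * x**m * y**n / (factorial(m) * factorial(n)))
    return s
```

Useful sanity checks: with y = 0 the series collapses to 1F1(b1; c; x), and the function is symmetric under swapping (b1, x) with (b2, y).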
Abstract:
Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, regardless of whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research only uses nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance remain unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to foster performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provided evidence that symbolic approximation might be equally and in some cases even better suited to generate predictions and foster more formal math skills, independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and among both approximate tasks it was the better predictor of math achievement at the end of first grade.
In part, the strong connection seems to come about from mediation through ordinal skills. In two further experiments, we tested the suitability of both approximation formats to induce an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age. However, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, but that beyond that they have their own specific value with regard to didactic-educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot be disentangled easily in first graders' numerical understanding, and that children's SES also influences the existing interrelations between the different abilities tested here.
Abstract:
We consider a system described by the linear heat equation with adiabatic boundary conditions which is perturbed periodically. This perturbation is nonlinear and is characterized by a one-parameter family of quadratic maps. The system, depending on the parameters, presents very complex behaviour. We introduce a symbolic framework to analyze the system and summarize its most important features.
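The symbolic-coding idea behind such frameworks can be sketched in a few lines of Python. As an assumption, the quadratic family is taken here in the logistic normal form f_a(x) = a·x·(1−x) (the abstract does not specify a normal form), and each iterate is coded 'L' or 'R' relative to the critical point:

```python
def itinerary(a, x0, n):
    """Symbolic itinerary of the logistic map f(x) = a*x*(1-x):
    'L' if the orbit point lies left of the critical point 1/2, else 'R'."""
    x, syms = x0, []
    for _ in range(n):
        syms.append('L' if x < 0.5 else 'R')
        x = a * x * (1 - x)
    return ''.join(syms)
```

The resulting symbol strings (kneading data) are what a symbolic framework orders and compares in order to classify the parameter-dependent dynamics.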
Abstract:
We consider piecewise defined differential dynamical systems which can be analysed through symbolic dynamics and transition matrices. We have a continuous regime, where the time flow is characterized by an ordinary differential equation (ODE) which has explicit solutions, and the singular regime, where the time flow is characterized by an appropriate transformation. The symbolic codification is given through the association of a symbol for each distinct regular system and singular system. The transition matrices are then determined as linear approximations to the symbolic dynamics. We analyse the dependence on initial conditions, parameter variation and the occurrence of global strange attractors.
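The step from a symbolic codification to a transition matrix can be illustrated concretely. The sketch below (my own minimal Python example, not the paper's construction) builds the 0/1 matrix of observed transitions from an itinerary over a given symbol alphabet:

```python
def transition_matrix(symbol_seq, alphabet):
    """0/1 matrix A with A[i][j] = 1 iff symbol alphabet[i] is ever
    followed by alphabet[j] in the observed symbolic sequence."""
    idx = {s: k for k, s in enumerate(alphabet)}
    n = len(alphabet)
    A = [[0] * n for _ in range(n)]
    for a, b in zip(symbol_seq, symbol_seq[1:]):
        A[idx[a]][idx[b]] = 1
    return A
```

Matrices of this kind act as linear approximations to the symbolic dynamics; for instance, the growth rate of admissible words (and hence a bound on topological entropy) is governed by the matrix's spectral radius.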
Abstract:
In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended towards many DS-related directions.
Abstract:
My doctoral research is about the modelling of symbolism in the cultural heritage domain, and about connecting artworks based on their symbolism through knowledge extraction and representation techniques. In particular, I participated in the design of two ontologies: one models the relationships between a symbol, its symbolic meaning, and the cultural context in which the symbol symbolizes that meaning; the second models artistic interpretations of a cultural heritage object from an iconographic and iconological (thus also symbolic) perspective. I also converted several sources of unstructured data (a dictionary of symbols and an encyclopaedia of symbolism) and semi-structured data (DBpedia and WordNet) to create HyperReal, the first knowledge graph dedicated to conventional cultural symbolism. Making use of HyperReal's content, I showed how linked open data about cultural symbolism can be used to initiate a series of quantitative studies that analyse (i) similarities between cultural contexts based on their symbologies, (ii) broad symbolic associations, and (iii) specific case studies of symbolism, such as the relationship between symbols, their colours, and their symbolic meanings. Moreover, I developed a system that can infer symbolic, cultural context-dependent interpretations from artworks according to what they depict, envisioning potential use cases for museum curation. I then re-engineered the iconographic and iconological statements of Wikidata, a widely used general-domain knowledge base, creating ICONdata, an iconographic and iconological knowledge graph. ICONdata was then enriched with automatic symbolic interpretations. Subsequently, I demonstrated the significance of enhancing artwork information through alignment with linked open data related to symbolism, resulting in the discovery of novel connections between artworks. Finally, I contributed to the creation of a software application.
This application leverages established connections, allowing users to investigate the symbolic expression of a concept across different cultural contexts through the generation of a three-dimensional exhibition of artefacts symbolising the chosen concept.
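The symbol-meaning-context pattern at the heart of the first ontology can be mimicked with plain triples. The Python sketch below is purely schematic: the example symbols, meanings and contexts are hypothetical illustrations, not actual HyperReal identifiers or content.

```python
# Hypothetical (symbol, meaning, cultural context) triples -- illustrative
# data only, not taken from HyperReal.
triples = [
    ("dove",    "peace",      "Western art"),
    ("dove",    "the soul",   "Christian iconography"),
    ("serpent", "rebirth",    "Ancient Egypt"),
    ("serpent", "temptation", "Christian iconography"),
]

def meanings_of(symbol, context=None):
    """All symbolic meanings of `symbol`, optionally filtered by context."""
    return [m for s, m, c in triples
            if s == symbol and (context is None or c == context)]

def shared_symbols(ctx1, ctx2):
    """Symbols that carry some meaning in both cultural contexts."""
    in1 = {s for s, _, c in triples if c == ctx1}
    in2 = {s for s, _, c in triples if c == ctx2}
    return in1 & in2
```

Queries of exactly this shape (meanings per context, overlap between symbologies) underlie the kinds of quantitative cross-cultural comparisons the abstract describes.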
Abstract:
This dissertation investigates the relations between logic and theoretical computer science (TCS) in the probabilistic setting. It is motivated by two main considerations. On the one hand, since their appearance in the 1960s-1970s, probabilistic models have become increasingly pervasive in several fast-growing areas of CS. On the other, the study and development of (deterministic) computational models has considerably benefitted from the mutual interchanges between logic and CS; nevertheless, probabilistic computation was only marginally touched by such fruitful interactions. The goal of this thesis is precisely to begin bridging this gap by developing logical systems corresponding to specific aspects of randomized computation and, thereby, generalizing standard achievements to the probabilistic realm. To do so, our key ingredient is the introduction of new, measure-sensitive quantifiers associated with quantitative interpretations. The dissertation is tripartite. In the first part, we focus on the relation between logic and counting complexity classes. We show that, by means of our classical counting propositional logic, it is possible to generalize to counting classes the standard results by Cook and by Meyer and Stockmeyer linking propositional logic and the polynomial hierarchy. Indeed, we show that the validity problem for counting-quantified formulae captures the corresponding level in Wagner's hierarchy. In the second part, we consider programming language theory. Type systems for randomized λ-calculi, also guaranteeing various forms of termination properties, were introduced in the last decades, but these are not "logically oriented" and no Curry-Howard correspondence is known for them. Following intuitions coming from counting logics, we define the first probabilistic version of the correspondence. Finally, we consider the relationship between arithmetic and computation.
We present a quantitative extension of the language of arithmetic able to formalize basic results from probability theory. This language is also our starting point to define randomized bounded theories and, so, to generalize canonical results by Buss.
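The semantics of a counting quantifier can be made concrete by brute force over boolean valuations. The Python sketch below is a schematic illustration of the idea (true iff the fraction of satisfying valuations meets a threshold), not the dissertation's proof system or notation:

```python
from itertools import product

def counting_holds(formula, variables, threshold):
    """Counting-quantifier semantics, brute force: true iff the fraction of
    boolean valuations of `variables` satisfying `formula` is >= threshold."""
    valuations = list(product([False, True], repeat=len(variables)))
    satisfied = sum(1 for v in valuations
                    if formula(dict(zip(variables, v))))
    return satisfied / len(valuations) >= threshold
```

For example, x ∨ y holds in 3 of 4 valuations, so it passes a 0.75 threshold but fails a 0.8 one; validity-style questions about such thresholded formulae are what the abstract relates to Wagner's counting hierarchy.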
Abstract:
In this work, we develop a randomized bounded arithmetic for probabilistic computation, following the approach adopted by Buss for non-randomized computation. This work relies on a notion of representability inspired by Buss's, but depending on a non-standard quantitative and measurable semantics. We then establish that the representable functions are exactly the ones in PPT. Finally, we extend the language of our arithmetic with a measure quantifier, which is true if and only if the quantified formula's semantics has measure greater than a given threshold. This allows us to define purely logical characterizations of standard probabilistic complexity classes such as BPP, RP, co-RP and ZPP.
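A standard operational counterpart of the BPP-style classes mentioned here is error reduction by majority voting, which is why a measure threshold like 2/3 is as good as any. The Python sketch below illustrates that classical amplification trick only; it is background to the complexity classes named in the abstract, not part of the paper's arithmetic.

```python
import random

def amplify(randomized_decide, x, runs=101, rng=None):
    """Majority vote over independent runs of a two-sided-error (BPP-style)
    randomized decision procedure; by Chernoff bounds, the error probability
    shrinks exponentially in the number of runs."""
    rng = rng or random.Random()
    votes = sum(1 for _ in range(runs) if randomized_decide(x, rng))
    return votes * 2 > runs
```

For instance, a decision procedure that answers correctly with probability 0.75 becomes, after 101 majority-voted runs, wrong only with negligible probability.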
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.
Abstract:
In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
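The defining component of any random-key genetic algorithm is the decoder that turns a vector of keys in [0,1) into a feasible solution. The Python sketch below shows one plausible decoder for winner determination (visit bids in key order, greedily accept non-conflicting ones) together with a mutation-only random search; it is a simplified illustration of the random-key mechanism, not one of the paper's six BRKGA variants, which also use biased crossover and elite sets.

```python
import random

def decode(keys, bids):
    """Decode random keys into a feasible allocation: visit bids in the
    order induced by their keys and greedily accept any bid whose items
    are still unallocated. bids = [(item_set, price), ...]."""
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    taken, profit, chosen = set(), 0, []
    for i in order:
        items, price = bids[i]
        if taken.isdisjoint(items):
            taken |= set(items)
            profit += price
            chosen.append(i)
    return profit, chosen

def random_key_search(bids, iters=200, rng=None):
    """Toy random-key search: resample keys each iteration (no crossover)."""
    rng = rng or random.Random(0)
    best = (float("-inf"), [])
    for _ in range(iters):
        keys = [rng.random() for _ in bids]
        best = max(best, decode(keys, bids))
    return best
```

Because every key vector decodes to a feasible allocation, all the genetic machinery (crossover, elitism, the LP-relaxation seeding described above) can operate on unconstrained real vectors.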
Abstract:
This work studies the forced-air cooling of strawberries by numerical simulation. The mathematical model describes the heat transfer process, based on Fourier's law, in spherical coordinates, simplified to a one-dimensional process. To solve the model equation, an algorithm based on the explicit finite-difference scheme was developed and implemented in the scientific computation program MATLAB 6.1. The mathematical model was validated by comparing theoretical predictions with experimental data from strawberries cooled with forced air. The results showed that the convective heat transfer coefficient can be determined by fitting the numerical results to the experimental data. Numerical simulation proved to be a promising tool for supporting decisions on whether to use or develop equipment for the forced-air cooling of spherical fruits.
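The scheme described (explicit finite differences for 1-D radial conduction in a sphere with a convective surface condition) can be sketched as follows. This is my own Python rendition of the general method, not the paper's MATLAB code, and the default grid and the thermal parameters used below are illustrative assumptions rather than the paper's fitted values.

```python
def cool_sphere(T0, T_air, alpha, k, h, R, nr=21, dt=0.05, steps=2000):
    """Explicit finite differences for dT/dt = alpha*(T_rr + (2/r)*T_r)
    in a sphere of radius R, with symmetry at the centre and a convective
    boundary -k*T_r = h*(T_surface - T_air) at r = R. Returns the final
    radial temperature profile from centre (index 0) to surface."""
    dr = R / (nr - 1)
    assert alpha * dt / dr**2 <= 0.5, "explicit-scheme stability limit"
    T = [T0] * nr
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nr - 1):
            r = i * dr
            Tn[i] = T[i] + alpha * dt * (
                (T[i + 1] - 2 * T[i] + T[i - 1]) / dr**2
                + (2 / r) * (T[i + 1] - T[i - 1]) / (2 * dr))
        Tn[0] = Tn[1]                                         # centre symmetry
        Tn[-1] = (k * Tn[-2] + h * dr * T_air) / (k + h * dr)  # convection
        T = Tn
    return T
```

Fitting the convective coefficient h, as the abstract describes, amounts to rerunning this simulation for candidate h values and minimizing the mismatch with the measured cooling curves.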
Abstract:
This paper discusses theoretical results of the research project Linguistic Identity and Identification: A Study of Functions of Second Language in Enunciating Subject Constitution. It focuses on non-cognitive factors that have a crucial incidence on the degree of success and the ways of accomplishment of the second language acquisition process. A transdisciplinary perspective is adopted, mobilising categories from Discourse Analysis and Psychoanalysis. The most relevant ones are: discursive formation, intradiscourse, interdiscourse, forgetting n° 1, forgetting n° 2 (Pêcheux, 1982), identity, identification (Freud, 1966; Lacan, 1977; Nasio, 1995). Revuz's views (1991) are discussed. Her main claim is that during the process of learning a foreign language, the foundations of psychical structure, and consequently the first language, are required. After examining how nomination and predication processes work in first and second languages, components of identity and identification processes are examined, in an attempt to show how second language acquisition strategies depend on them. It is argued that methodological issues of language teaching, the learner's explicit motivation and the like are subordinated to the comprehension of deeper non-cognitive factors that determine the accomplishment of the second language acquisition process. It is also pointed out that those factors are best approached by questioning the bipolar biological-social conception of subjectivity in the study of language acquisition and use, and by including in the analysis the symbolic and significant dimensions of the discourse constitution process.