922 results for logic formula


Relevance:

20.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to show how the self-archiving of journal papers is a major step towards providing open access to research. However, copyright transfer agreements (CTAs) that are signed by an author prior to publication often indicate whether, and in what form, self-archiving is allowed. The SHERPA/RoMEO database enables easy access to publishers' policies in this area and uses a colour-coding scheme to classify publishers according to their self-archiving status. The database is currently being redeveloped and renamed the Copyright Knowledge Bank. However, it will still assign a colour to individual publishers indicating whether pre-prints can be self-archived (yellow), post-prints can be self-archived (blue), both pre-prints and post-prints can be archived (green), or neither (white). The nature of CTAs means that these decisions are rarely as straightforward as they may seem, and this paper describes the thinking and considerations that were used in assigning these colours in the light of the underlying principles and definitions of open access. Approach – Detailed analysis of a large number of CTAs led to the development of a controlled vocabulary of terms, which was carefully analysed to determine how these terms equate to the definition and “spirit” of open access. Findings – The paper reports on how conditions outlined by publishers in their CTAs, such as how or where a paper can be self-archived, affect the assignment of a self-archiving colour to the publisher. Value – The colour assignment is widely used by authors and repository administrators in determining whether academic papers can be self-archived. This paper provides a starting point for further discussion and development of publisher classification in the open access environment.
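The colour scheme described here reduces to two yes/no policy dimensions. A minimal sketch of that mapping is shown below; the function and field names are illustrative assumptions, not part of the SHERPA/RoMEO or Copyright Knowledge Bank software:

```python
def romeo_colour(preprint_allowed: bool, postprint_allowed: bool) -> str:
    """Assign a RoMEO-style colour from a publisher's self-archiving policy.

    green  -> both pre-print and post-print may be archived
    blue   -> post-print only
    yellow -> pre-print only
    white  -> neither
    """
    if preprint_allowed and postprint_allowed:
        return "green"
    if postprint_allowed:
        return "blue"
    if preprint_allowed:
        return "yellow"
    return "white"


# Example: a CTA that permits only post-print archiving is classified blue.
print(romeo_colour(preprint_allowed=False, postprint_allowed=True))
```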

Relevance:

20.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, in which models are used to replicate the behaviour of the actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments were to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (Section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5).
To evaluate ZSIM, two types of test circuits were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; and 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open source files were available. The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that ZSIM running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results also show that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, its automatic cache management handles the on-chip local store without any explicit treatment of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was demonstrating that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
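The lock-free, SIMD-friendly data layout described above can be illustrated with a small levelised gate simulator: gates are stored in flat structure-of-arrays form and a whole level is evaluated in one vectorised gather/compute/scatter pass. This is a sketch of the general technique under assumed encodings (gate opcodes, index arrays), not the actual ZSIM code:

```python
import numpy as np

# Gate opcodes; gates are topologically ordered so that a whole level can be
# evaluated in one vectorised pass (no locks needed: gates within a level
# never depend on each other).
AND, OR, XOR, NOT = 0, 1, 2, 3

def simulate(levels, values):
    """levels: list of (op, in_a, in_b, out) index arrays, one tuple per logic level.
    values: 1-D uint8 array holding the value of every net (primary inputs pre-set)."""
    for op, in_a, in_b, out in levels:
        a, b = values[in_a], values[in_b]            # gather operands
        res = np.where(op == AND, a & b,
              np.where(op == OR,  a | b,
              np.where(op == XOR, a ^ b, 1 - a)))    # NOT ignores its second operand
        values[out] = res                            # scatter results
    return values

# Tiny example: net0, net1 are inputs; net2 = net0 AND net1; net3 = NOT net2.
values = np.array([1, 1, 0, 0], dtype=np.uint8)
levels = [
    (np.array([AND]), np.array([0]), np.array([1]), np.array([2])),
    (np.array([NOT]), np.array([2]), np.array([2]), np.array([3])),
]
print(simulate(levels, values))  # [1 1 1 0]
```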

Relevance:

20.00%

Publisher:

Abstract:

Fondo Margaritainés Restrepo

Relevance:

20.00%

Publisher:

Abstract:

Fuzzy logic admits infinitely many logical values between false and true. Building on this principle, this work develops a fuzzy rule-based system that indicates the body mass index of ruminant animals, with the aim of identifying the best moment for slaughter. The fuzzy system takes the variables mass and height as inputs and outputs a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a system for detecting the moment of slaughter of cattle, comparing animals with one another through the linguistic variables Very Low, Low, Medium, High and Very High. To demonstrate and apply the system, 147 Nelore cows were analysed, determining the Fuzzy BMI value for each animal and indicating the body mass situation of the whole herd. The system was validated through a statistical analysis, which yielded a Pearson correlation coefficient of 0.923, representing a high positive correlation and indicating that the proposed method is adequate. The method therefore makes it possible to evaluate the herd by comparing each animal with its peers in the group, providing a quantitative decision-making tool for the rancher. It can also be concluded that this work established a computational method based on fuzzy logic capable of imitating part of human reasoning and of interpreting the body mass index of any type of bovine breed in any region of the country.
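A minimal sketch of a fuzzy rule-based index of this kind, with triangular membership functions and weighted-average defuzzification; the breakpoints and the rule table are illustrative assumptions, not the values fitted in the study:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_bmi(mass_kg, height_m):
    """Map (mass, height) to a fuzzy body-mass index score in [0, 100]."""
    # Illustrative input membership degrees.
    mass_low, mass_med, mass_high = (tri(mass_kg, 250, 350, 450),
                                     tri(mass_kg, 350, 450, 550),
                                     tri(mass_kg, 450, 550, 650))
    hgt_low, hgt_high = tri(height_m, 1.1, 1.3, 1.5), tri(height_m, 1.3, 1.5, 1.7)

    # Illustrative rule base: rule strength = min of antecedents (Mamdani AND),
    # each rule pointing to a representative output level.
    rules = [
        (min(mass_low,  hgt_high), 20.0),   # light and tall  -> low index
        (min(mass_med,  hgt_high), 50.0),   # medium and tall -> medium index
        (min(mass_high, hgt_low),  90.0),   # heavy and short -> very high index
        (min(mass_high, hgt_high), 70.0),   # heavy and tall  -> high index
    ]
    strengths = np.array([w for w, _ in rules])
    outputs = np.array([o for _, o in rules])
    if strengths.sum() == 0:
        return 0.0
    return float((strengths * outputs).sum() / strengths.sum())  # weighted average

print(round(fuzzy_bmi(mass_kg=480.0, height_m=1.45), 1))
```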

Relevance:

20.00%

Publisher:

Abstract:

Insights from innovation adoption and diffusion studies have long been imperative to the success of novel introductions. However, perceptions and deductions of current innovation understandings have been changing over time. The paradigm shift from the goods-dominant (G-D) logic to the service-dominant (S-D) logic potentially makes the distinction between product (goods) innovation and service innovation redundant, as the S-D logic lens views all innovations as service innovations (Vargo and Lusch, 2004; 2008; Lusch and Nambisan, 2015). From this perspective, product innovations are in essence service innovations, as goods serve as mere distribution mechanisms to deliver service. Nonetheless, the transition to such a broadened and transcending view of service innovation necessitates a concurrent change in the underlying models used to investigate innovation and its subsequent adoption. The present research addresses this gap by engendering a novel model for the most crucial period of service diffusion within the S-D logic context – the post-initial adoption phase, which demarcates an individual's behavior after the initial adoption decision of a service. As a well-founded understanding of service diffusion and the complementary innovation adoption still lingers in its infancy, the current study develops a model based on interdisciplinary domain mapping: knowledge of the relatively established viral source domain is mapped to the comparatively undetermined target domain of service innovation adoption. To assess the model and test the importance of the explanatory variables, survey data from 750 respondents of a bank in Northern Germany is scrutinized by means of Structural Equation Modeling (SEM). The findings reveal, first, that the continuance intention of a customer, actual usage of the service and the customer influencer value all constitute important post-initial adoption behavior with meaningful implications for a successful service adoption. Second, the four constructs customer influencer value, organizational commitment, perceived usefulness and service customization are evidenced to have a differential impact on a customer's post-initial adoption behavior. Third, this study indicates that post-initial adoption behavior further underlies the influence of a user's age and is also provoked by the internal and external environments of service adoption. Finally, this research amalgamates the broad view of service innovation by Lusch and Nambisan (2015) with the findings ensuing from this enquiry's model to arrive at a framework that is both generalizable and practically applicable. Implications for academia and practitioners are captured along with avenues for future research.

Relevance:

20.00%

Publisher:

Abstract:

In this work, we present a sound and complete axiomatic system for conditional attribute implications (CAIs) in Triadic Concept Analysis (TCA). Our approach is strongly based on the Simplification paradigm, which offers a more suitable way for automated reasoning than the one based on Armstrong's Axioms. We also present an automated method to prove the derivability of a CAI from a set of CAIs.
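Derivability of an ordinary attribute implication from a set is classically decided by closing the premise under the set and checking containment; the sketch below shows that standard closure test (the conditional, triadic case treated in the paper adds a condition component that is not modelled here):

```python
def closure(attrs, implications):
    """Close a set of attributes under a set of implications (premise -> conclusion pairs)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def derivable(implication, implications):
    """An implication A -> B follows from a set iff B is contained in the closure of A."""
    premise, conclusion = implication
    return conclusion <= closure(premise, implications)

# Example: from {a}->{b} and {b}->{c} we can derive {a}->{c}.
sigma = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(derivable(({"a"}, {"c"}), sigma))  # True
```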

Relevance:

20.00%

Publisher:

Abstract:

Ecological models written in a mathematical language L(M), or model language, with a given style or methodology, can be considered as a text. Statistical linguistic laws can be applied to them, and the experimental results demonstrate that a mathematical model behaves like a literary text in any natural language. Such a text has the following characteristics: (a) the variables, their transformed functions and the parameters are the lexical units (LUN) of ecological models; (b) syllables are constituted by a LUN, or a chain of them, separated by operating or ordering LUNs; (c) the flow equations are words; and (d) the distribution of words (LUN and composed LUN) according to their lengths follows a Poisson distribution, Chebanov's law. It is founded on Vakar's formula, which is calculated in the same way as the linguistic entropy of L(M). We apply these ideas to practical examples using the MARIOLA model. This paper studies the problem of the lengths of the simple lexical units, composed lexical units and words of text models, expressing these lengths in numbers of primitive symbols and syllables. The use of these linguistic laws makes it possible to indicate the degree of information given by an ecological model.
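Chebanov's law treats word length (here, length minus one) as approximately Poisson distributed; below is a minimal sketch of fitting and inspecting that distribution for the lexical units of a model text, using illustrative counts rather than data from the MARIOLA model, and plain Shannon entropy rather than Vakar's formula:

```python
import math
from collections import Counter

# Illustrative word lengths (number of syllables per word of the model text).
lengths = [1, 2, 2, 3, 1, 2, 4, 3, 2, 1, 2, 3, 2, 5, 2, 3, 1, 2]

# Chebanov's law: the shifted length (length - 1) is approximately Poisson,
# so the maximum-likelihood estimate of the parameter is the shifted mean.
lam = sum(x - 1 for x in lengths) / len(lengths)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

observed = Counter(lengths)
n = len(lengths)
for length in sorted(observed):
    expected = n * poisson_pmf(length - 1, lam)
    print(f"length {length}: observed {observed[length]}, expected {expected:.1f}")

# Shannon entropy of the empirical length distribution (bits per word),
# a crude indicator of the information carried by the model's vocabulary.
entropy = -sum((c / n) * math.log2(c / n) for c in observed.values())
print(f"entropy: {entropy:.2f} bits")
```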

Relevance:

20.00%

Publisher:

Abstract:

In this article, the authors propose a theory of the truth value of propositions from a logical-mathematical point of view. What is it to exist, and how do we define existence? The work presented here is an attempt to address these questions from an epistemological, linguistic, and logical-mathematical point of view, and its main objective is an approach to the first of them. We leave a more thorough treatment of the problem of existence for future works.

Relevance:

20.00%

Publisher:

Abstract:

Distinguishing argumentation from explanation is a complicated but necessary task, for several reasons. One of them is the need to incorporate explanation into a dialogue move as the result of a dialectical obligation. Various dialogue systems have been proposed that explore the distinction with an emphasis on pragmatic aspects. In this work I address structural aspects of explanation, analysed within the framework of default logic, which makes it possible to characterise certain objections in the dialogue. I also argue that the operational version of default logic constitutes an adequate approach to the construction of explanations and to the representation of the dialogue instance in the dialectical exchange.
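The operational reading of default logic mentioned here can be sketched, for the simple case of normal defaults over propositional literals, as an iterative process that fires a default whenever its prerequisite is established and its conclusion is consistent with what has been derived so far; this is an illustrative simplification, not the dialogue framework of the paper:

```python
def extension(facts, defaults):
    """Compute one extension for normal defaults over propositional literals.

    facts:    set of literals, e.g. {"bird"}; a literal "~p" is the negation of "p".
    defaults: list of (prerequisite, conclusion) pairs, read as the normal
              default "prerequisite : conclusion / conclusion".
    """
    def negated(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, concl in defaults:
            # A default fires if its prerequisite holds and its conclusion is
            # consistent with (and not already in) what has been derived.
            if pre in derived and negated(concl) not in derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

# Classic example: penguins are birds, birds normally fly, penguins normally do not.
# Defaults are tried in list order; a different order can yield a different extension,
# which is exactly the multiple-extension phenomenon of default logic.
facts = {"bird", "penguin"}
defaults = [("penguin", "~flies"), ("bird", "flies")]
print(extension(facts, defaults))  # {'bird', 'penguin', '~flies'}
```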

Relevance:

20.00%

Publisher:

Abstract:

This work reports the design process of a chassis for a Formula Student vehicle. The report covers regulatory aspects, technical requirements and objectives, the interconnection with the vehicle's other systems, aspects related to the design of the structure, modelling methods for analysis using the finite element method, and finally the analysis of the loads applied to the chassis and its validation.

Relevance:

20.00%

Publisher:

Abstract:

We modelled the distributions of two toads (Bufo bufo and Epidalea calamita) in the Iberian Peninsula using the favourability function, which makes predictions directly comparable for different species and allows fuzzy logic operations to relate different models. The fuzzy intersection between individual models, representing favourability for the presence of both species simultaneously, was compared with another favourability model built on the presences shared by both species. The fuzzy union between individual models, representing favourability for the presence of either of the two species, was compared with another favourability model based on the presences of either or both of them. The fuzzy intersections between favourability for each species and the complement of favourability for the other (corresponding to the logical operation “A and not B”) were compared with models of exclusive presence of one species versus the exclusive presence of the other. The results of modelling combined species data were highly similar to those of fuzzy logic operations between individual models, showing that fuzzy logic and the favourability function are valuable for comparative distribution modelling. We highlight several advantages of fuzzy logic over other forms of combining distribution models, including the possibility of combining multiple species models for management and conservation planning.
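A minimal sketch of the favourability transformation and the fuzzy operations described, following the usual formulation of the favourability function, F = (P/(1−P)) / (n1/n0 + P/(1−P)), where P is a model's predicted probability and n1, n0 are the numbers of presences and absences; the species predictions and sample sizes below are placeholders:

```python
import numpy as np

def favourability(p, n_presences, n_absences):
    """Convert predicted presence probabilities into favourability values,
    which are comparable across species because they factor out prevalence."""
    odds = p / (1.0 - p)
    return odds / (n_presences / n_absences + odds)

# Placeholder predictions for two species on the same grid cells.
p_bufo = np.array([0.80, 0.40, 0.10])
p_epidalea = np.array([0.30, 0.60, 0.05])

f_bufo = favourability(p_bufo, n_presences=120, n_absences=380)
f_epidalea = favourability(p_epidalea, n_presences=60, n_absences=440)

both = np.minimum(f_bufo, f_epidalea)            # fuzzy intersection: A and B
either = np.maximum(f_bufo, f_epidalea)          # fuzzy union: A or B
only_bufo = np.minimum(f_bufo, 1 - f_epidalea)   # fuzzy "A and not B"

print(both, either, only_bufo)
```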

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of the logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended towards many DS-related directions.

Relevance:

20.00%

Publisher:

Abstract:

In Prior Analytics 1.1–22, Aristotle develops his proof system of non-modal and modal propositions. This system is given in the language of propositions, and Aristotle is concerned with establishing some properties and relations that the expressions of this language enjoy. However, modern scholarship has found some of his results inconsistent with positions defended elsewhere. The set of rules of inference of this system has also caused perplexity: there does not seem to be a single interpretation that validates all the rules which Aristotle is explicitly committed to using in his proofs. Some commentators have argued that these and other problems cannot be successfully addressed from the viewpoint of the traditional, ‘first-order’ interpretation of Aristotle’s syllogistic, whereby propositions are taken to involve quantification over individuals only. On this view, the first-order interpretation is not only inadequate for formal analysis but also stems from a misunderstanding of Aristotle’s ideas about quantification. On the contrary, in this study I purport to vindicate the adequacy and plausibility of the first-order interpretation. Together with some assumptions about the language of propositions and an appropriate regimentation, the first-order interpretation yields promising solutions to many of the problems raised by the modal syllogistic. Thus, I present a reconstruction of the language of propositions and a formal interpretation thereof which will prove respectful and responsive to most of the views endorsed by Aristotle in the ‘modal’ chapters of the Analytics.
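On the first-order interpretation defended here, categorical propositions quantify over individuals only; a routine illustration of that reading (not the paper's own regimentation) renders the four assertoric forms as:

```latex
\begin{align*}
\text{Every A is B:}     &\quad \forall x\,(Ax \rightarrow Bx)\\
\text{No A is B:}        &\quad \forall x\,(Ax \rightarrow \lnot Bx)\\
\text{Some A is B:}      &\quad \exists x\,(Ax \land Bx)\\
\text{Some A is not B:}  &\quad \exists x\,(Ax \land \lnot Bx)
\end{align*}
```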