355 results for Correctness
Abstract:
In this paper we give a compositional (or inductive) construction of monitoring automata for LTL formulas. Our construction is similar in spirit to the compositional construction of Kesten and Pnueli [5]. We introduce the notion of hierarchical Büchi automata and phrase our constructions in the framework of these automata. We give detailed constructions for all the principal LTL operators including past operators, along with proofs of correctness of the constructions.
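As a minimal sketch of the inductive flavour of such constructions (not the paper's hierarchical Büchi construction itself): for the past-time operators mentioned above, a monitor for a formula can be assembled from the monitors of its subformulas, each remembering only its previous verdict. The Monitor classes and the trace below are invented for illustration.

```python
# Compositional past-time LTL monitors: each operator is built inductively
# from the monitors of its subformulas; every subformula is stepped exactly
# once per state. An illustrative sketch, not the paper's construction.

class Atom:
    def __init__(self, name): self.name = name
    def step(self, state): return state[self.name]

class Not:
    def __init__(self, sub): self.sub = sub
    def step(self, state): return not self.sub.step(state)

class Yesterday:                         # Y p: p held in the previous state
    def __init__(self, sub): self.sub, self.prev = sub, False
    def step(self, state):
        out, self.prev = self.prev, self.sub.step(state)
        return out

class Since:                             # p S q: q once held, and p ever since
    def __init__(self, left, right):
        self.left, self.right, self.prev = left, right, False
    def step(self, state):
        l, r = self.left.step(state), self.right.step(state)   # step both subs
        self.prev = r or (l and self.prev)
        return self.prev

# Monitor for "alarm S request", stepped over a finite trace of states.
monitor = Since(Atom("alarm"), Atom("request"))
trace = [{"alarm": True,  "request": True},
         {"alarm": True,  "request": False},
         {"alarm": False, "request": False}]
print([monitor.step(s) for s in trace])  # [True, True, False]
```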
Abstract:
Frequent episode discovery is a popular framework for pattern discovery from sequential data. It has found many applications in domains such as alarm management in telecommunication networks, fault analysis in manufacturing plants, and prediction of user behaviour in web click streams. In this paper, we address the discovery of serial episodes. In the episode framework, multiple ways have been proposed to quantify the frequency of an episode. Most of the current algorithms for episode discovery under the various frequencies are apriori-based level-wise methods, which essentially perform a breadth-first search of the pattern space. However, there are currently no depth-first methods of pattern discovery in the frequent episode framework under many of the frequency definitions. In this paper, we try to bridge this gap. We provide new depth-first algorithms for serial episode discovery under the non-overlapped and total frequencies. Under the non-overlapped frequency, we present algorithms that can handle span and gap constraints on episode occurrences; under the total frequency, we present an algorithm that can handle a span constraint. We provide proofs of correctness for the proposed algorithms and demonstrate their effectiveness through extensive simulations. We also give detailed run-time comparisons with the existing apriori-based methods and illustrate scenarios under which the proposed pattern-growth algorithms perform better than their apriori counterparts. (C) 2013 Elsevier B.V. All rights reserved.
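As a hedged sketch of the non-overlapped frequency referred to above (only the counting step for a single, fixed serial episode, not the proposed discovery algorithm): occurrences are counted by a greedy left-to-right scan, and a pattern-growth miner would invoke such a counter while extending candidate episodes one event type at a time. The function and event names are invented.

```python
# Non-overlapped frequency of one serial episode via a greedy scan.
# Illustrative sketch; the paper's algorithms additionally handle span
# and gap constraints, which are omitted here.

def non_overlapped_frequency(episode, sequence):
    """Count non-overlapping occurrences of a serial episode.

    episode  -- tuple of event types that must occur in order, e.g. ('A', 'B')
    sequence -- iterable of event types (the event stream)
    """
    count, pos = 0, 0            # pos: index of the next episode event awaited
    for event in sequence:
        if event == episode[pos]:
            pos += 1
            if pos == len(episode):   # one full occurrence completed
                count += 1
                pos = 0               # restart after it: no overlap allowed
    return count

print(non_overlapped_frequency(('A', 'B'), ['A', 'A', 'B', 'C', 'A', 'B']))  # 2
```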
Self-organized public key management in MANETs with enhanced security and without certificate-chains
Abstract:
In self-organized public key management approaches, public key verification is achieved through verification routes constituted by the transitive trust relationships among the network principals. Most of the existing approaches do not distinguish among the different available verification routes. Moreover, to ensure stronger security, it is important to choose an appropriate metric to evaluate the strength of a route. Besides, all of the existing self-organized approaches use certificate-chains for authentication, which are highly resource-consuming. In this paper, we present a self-organized certificate-less on-demand public key management (CLPKM) protocol, which aims at providing the strongest verification routes for authentication purposes. It restricts the compromise probability of a verification route by restricting its length. In addition, we evaluate the strength of a verification route using its end-to-end trust value. The other important aspect of the protocol is that it uses a MAC function instead of RSA certificates to perform public key verifications. By doing this, the protocol saves considerable computation power, bandwidth, and storage space. We have used an extended strand space model to analyze the correctness of the protocol. The analytical, simulation, and testbed implementation results confirm the effectiveness of the proposed protocol. (c) 2014 Elsevier B.V. All rights reserved.
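A hedged sketch of two of the ideas above, with invented parameter names: (i) scoring a verification route by its end-to-end trust subject to a maximum route length, and (ii) checking a public key with a MAC instead of an RSA certificate. The trust-combination rule (product of per-hop trust values) is an assumption for illustration, not necessarily the CLPKM rule.

```python
# Route scoring and MAC-based key verification: an illustrative sketch.
import hmac, hashlib

MAX_ROUTE_LEN = 3   # hypothetical cap bounding the compromise probability

def route_trust(hop_trusts):
    """End-to-end trust of a route given per-hop trust values in [0, 1]."""
    if len(hop_trusts) > MAX_ROUTE_LEN:
        return 0.0                      # route too long: rejected outright
    trust = 1.0
    for t in hop_trusts:                # assumed rule: multiply per-hop trust
        trust *= t
    return trust

def key_tag(shared_secret, node_id, public_key):
    """MAC binding a node's identity to its public key (in place of a certificate)."""
    return hmac.new(shared_secret, node_id + public_key, hashlib.sha256).digest()

def verify_key(shared_secret, node_id, public_key, tag):
    return hmac.compare_digest(key_tag(shared_secret, node_id, public_key), tag)

tag = key_tag(b"k", b"node-17", b"pubkey-bytes")
print(route_trust([0.9, 0.8]), verify_key(b"k", b"node-17", b"pubkey-bytes", tag))
```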
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing scheduler implementations in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks in all the cores with priorities lower than those of real-time tasks, but the interrupts and softirqs associated with these non-real-time tasks can execute in any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which, in turn, affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose, in this paper, an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which executes real-time and non-real-time tasks concurrently in all the cores of a Linux system. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making SchedISA suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
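As a hedged, user-level illustration of the P-EDF policy referred to above (not the SchedISA kernel code; task parameters are hypothetical): tasks are first statically partitioned onto cores, for instance by first-fit on utilization, and each core then independently runs the ready job with the earliest absolute deadline.

```python
# Partitioned EDF in two steps: static partitioning, then per-core EDF.
# Illustrative sketch under assumed task parameters.

def partition_first_fit(tasks, n_cores):
    """Assign each task to the first core whose total utilization stays <= 1."""
    load, assignment = [0.0] * n_cores, {}
    for name, (wcet, period) in tasks.items():
        u = wcet / period                      # task utilization
        for core in range(n_cores):
            if load[core] + u <= 1.0:
                load[core] += u
                assignment[name] = core
                break
        else:
            raise ValueError(f"task {name} does not fit on {n_cores} cores")
    return assignment

def edf_pick(ready_jobs):
    """On one core, pick the ready job with the earliest absolute deadline."""
    return min(ready_jobs, key=lambda job: job["deadline"]) if ready_jobs else None

# Hypothetical task set: name -> (worst-case execution time, period).
tasks = {"T1": (2, 10), "T2": (3, 15), "T3": (5, 20)}
print(partition_first_fit(tasks, n_cores=2))
print(edf_pick([{"task": "T1", "deadline": 12}, {"task": "T2", "deadline": 7}]))
```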
Abstract:
We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of the optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process, we relax some of the restrictions they placed on the objective function.
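For reference, a hedged statement of the problem family (the notation below is generic and not copied from the paper): a separable convex objective is minimized subject to "ascending" constraints on the partial sums, with the total fixed.

```latex
% Generic form of separable convex minimization under linear ascending
% constraints (notation assumed for illustration).
\begin{align*}
\min_{x \in \mathbb{R}^n_{\ge 0}} \quad & \sum_{i=1}^{n} f_i(x_i),
    \qquad f_i \ \text{convex}, \\
\text{s.t.} \quad & \sum_{j=1}^{l} x_j \ \ge\ \alpha_l, \quad l = 1, \dots, n-1, \\
& \sum_{j=1}^{n} x_j \ =\ \alpha_n,
    \qquad 0 \le \alpha_1 \le \cdots \le \alpha_n .
\end{align*}
```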
Abstract:
Computer Assisted Assessment (CAA) has existed for several years now. While some forms of CAA do not require sophisticated text understanding (e.g., multiple choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, correctness of grammar, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.
Abstract:
Piezoelectric actuators are mounted on both sides of a rectangular wing model, and the possibility of improving aircraft rolling power is investigated. All the experimental steps, including designing the wind tunnel model, checking the material constants, measuring the natural frequencies, and checking the effects of the actuators, guarantee the correctness and precision of the finite element model. The wind tunnel experiment results show that the calculations coincide with the experiments. The feasibility of the fictitious control surface is validated.
Abstract:
At high temperature-rise rates, the mechanical properties of 10# steel were determined experimentally over a very wide range of temperatures and strain rates. A new constitutive relationship is put forward which fits the experimental results and describes the various phenomena observed in our experiments. Some interesting characteristics of the temperature-rise rate, strain and strain-rate hardening, and thermal softening are also shown in this paper. Finally, the reliability of the constitutive law and the correctness of the constitutive parameters were verified by comparing the calculated results with the experimental data.
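For context only: the abstract does not state the new law, but a widely used constitutive form combining exactly the three effects mentioned (strain hardening, strain-rate hardening, thermal softening) is the Johnson-Cook model, shown below with its standard symbols; this is not the relationship proposed in the paper.

```latex
% Johnson-Cook flow stress: three multiplicative factors for strain
% hardening, strain-rate hardening, and thermal softening.
\[
\sigma =
\underbrace{\left(A + B\,\varepsilon^{\,n}\right)}_{\text{strain hardening}}
\underbrace{\left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)}_{\text{strain-rate hardening}}
\underbrace{\left(1 - {T^{*}}^{\,m}\right)}_{\text{thermal softening}},
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}} .
\]
```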
Abstract:
To understand the correctness of the simulations, the behaviour of the numerical solutions is analysed. In order to improve the accuracy of the solutions, three methods are presented. The method with GVC (group velocity control) is used to simulate coherent structures in compressible mixing layers. The effect of the initial conditions on the coherent structures in a mixing layer with convective Mach number 0.8 is discussed. For the given initial conditions, two types of coherent structures are obtained in the mixing layer.
Abstract:
This article examines one of the central issues in the contemporary debate in legal philosophy, namely the place of reason in legal interpretation. It approaches the question of objectivity and rationality in legal interpretation through the thought of Yale professor Owen Fiss, presenting his ideas on adjudication and the defining features of interpretation. It then considers four major questions that constitute the core of Fiss's proposal: the rationality of interpretation, its objective character, its criteria of correctness, and the role of moral evaluations in interpretation. It concludes with a balanced assessment of the strengths and weaknesses of Fiss's ideas.
Abstract:
In this project, a system to detect and control traffic through Arduino has been designed and developed. The system is divided into three parts. First, a software simulator has been designed and developed to manage the traffic from a computer. The simulator is written in the Java language and is able to control four different types of crossroads, offering the user several options for each one of them. Second, regarding the hardware, an Arduino board has been used to build a scale model of one of the crossroads controlled by the application. This Arduino receives and processes the messages sent from the computer and then drives the traffic lights of the scale model so that they match what is shown in the simulator. Finally, so that the system can detect traffic, a traffic sensor has also been designed and developed using another Arduino. To connect the simulator on the computer with the Arduino that controls the hardware of the scale model, and to share information between them, the serial communication of each of them has been used. Once each part of the system was completely developed, several tests were carried out to validate the correctness of both software and hardware.
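A hedged sketch of the computer-side serial link described above: the actual simulator is written in Java, so this Python/pySerial version only illustrates the idea, and the message format ("crossroad:light:colour") and port name are invented for the example.

```python
# Computer -> Arduino serial link: illustrative sketch only.
import serial   # pip install pyserial

def send_light_state(port, crossroad, light, colour):
    """Send one traffic-light state to the Arduino driving the scale model."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(f"{crossroad}:{light}:{colour}\n".encode("ascii"))
        return link.readline().decode("ascii").strip()   # board's acknowledgement

# Example with a hypothetical port: set light 2 of crossroad 1 to green.
# print(send_light_state("/dev/ttyACM0", 1, 2, "GREEN"))
```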
Abstract:
Linguistic norms are the usages instituted by the speakers of a language. There are norms consecrated by the literary tradition, for example, and there are norms consecrated by the traditions of speech communities. When the latter are not accepted, conflict may arise, motivated by the rejection of the new norm, or of the different norm, generally accompanied by negative evaluations. Taking the accusative personal pronouns ele/lhe, clause-initial me, and subject se as reference points (drawing on other categories when the situation is favourable), we seek to investigate the reasons that lead to such conflicts. A body of ideas from sociolinguistics, functionalism, historical linguistics, and the grammatical tradition is used, which together sustain the discussion, without assembling an exhaustive corpus for the description and explanation of rules of the language; for this reason these theories are drawn upon, emphatically, only in their general theoretical foundations. The examples are gathered sparsely from diverse sources: books, songs, literary texts, and essays, but mainly from news items published on the internet. This suffices for a critical examination of the question addressed. To this end, the positions of normativity (the ideal, homogeneous language) were compared with those of normality (the language in use, heterogeneous). In between lie the causes of the conflicts: the idea that writing represents the correct model, resistance to change and variation, the social imaginary that decides what is right and wrong, an ideology averse to the evolution of the language, and advice of the "do not use" and "avoid" kind, all of which wear down the conception of language. Such stances do not accept stigmatized usages, even though these are used abundantly in the media of social communication.
Abstract:
Segmenting names into their constituent parts is a fundamental step in the process of integrating databases by means of record-linkage techniques. This separation of names can be performed in different ways. This study aimed to evaluate the use of Hidden Markov Models (HMM) for segmenting the names and addresses of persons and the efficiency of this segmentation in the record-linkage process. The databases of the Mortality Information System (SIM) and of the High-Complexity Procedures Information Subsystem (APAC) of the state of Rio de Janeiro for the period from 1999 to 2004 were used. A methodology composed of eight phases was proposed for name and address segmentation, using routines implemented in PL/SQL and the JAHMM library, a Java implementation of HMM algorithms. A random sample of 100 records from each database was used to verify the correctness of the segmentation process performed by the HMM. To verify the effect of name segmentation via the HMM, three linkage processes were applied to a sample of the two databases mentioned above, each using a different segmentation strategy, namely: (1) splitting names into first part, last part, and middle-name initials; (2) splitting the name into five parts; (3) segmentation according to the HMM. The application of the HMM as a segmentation mechanism showed good agreement with a human observer. The different segmentation strategies produced quite similar record-linkage results, with strategy 1 performing slightly better than the others. This study suggests that segmenting Brazilian names by means of a Hidden Markov Model is no more effective than traditional segmentation methods.
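A toy sketch of the HMM idea used in the study (the study itself used the JAHMM Java library): hidden states are name parts, observations are tokens, and the Viterbi algorithm recovers the most likely segmentation. All probabilities and the emission rule below are made up for illustration.

```python
# HMM name segmentation with Viterbi decoding: illustrative sketch.

STATES = ["FIRST", "MIDDLE", "LAST"]
START  = {"FIRST": 0.9, "MIDDLE": 0.05, "LAST": 0.05}
TRANS  = {"FIRST":  {"FIRST": 0.1, "MIDDLE": 0.5, "LAST": 0.4},
          "MIDDLE": {"FIRST": 0.0, "MIDDLE": 0.4, "LAST": 0.6},
          "LAST":   {"FIRST": 0.0, "MIDDLE": 0.0, "LAST": 1.0}}

def emission(state, token):
    """Toy emission score: single initials tend to be middle names."""
    is_initial = len(token.rstrip(".")) == 1
    if state == "MIDDLE":
        return 0.7 if is_initial else 0.3
    return 0.1 if is_initial else 0.9

def viterbi(tokens):
    """Most likely state sequence for the observed tokens."""
    best = {s: (START[s] * emission(s, tokens[0]), [s]) for s in STATES}
    for token in tokens[1:]:
        best = {s: max(((p * TRANS[prev][s] * emission(s, token), path + [s])
                        for prev, (p, path) in best.items()),
                       key=lambda x: x[0])
                for s in STATES}
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["Maria", "C.", "Silva"]))   # ['FIRST', 'MIDDLE', 'LAST']
```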
Abstract:
Social and educational inequalities contribute to the reproduction of class in Brazil, and youth is at the heart of this issue as one of the groups most affected by this unequal distribution. Starting from the principle that there are different ways of experiencing youth in contemporary society and that social class is an important factor for thinking about the different forms of transition to adult life, this work aims to discuss the question of youth and young people in the context of inequality. To that end, a quantitative survey was carried out with young students in Youth and Adult Education (EJA) in the municipality of Mesquita (RJ), concerning their characteristics and ways of life, while seeking a possible generalization of this "particular case of the possible". The concepts of youth condition and youth position were employed in order to build a map of inequality and situate these young students within it. The results indicate differences between the generational cohorts that make up this youth with respect to their school trajectories (adolescent youth, 15 to 17 years old; young youth, 18 to 24; and adult youth, 25 to 29 years old). The hypothesis is that these cohorts show distinct indicators because of the educational policies in force at their respective times of entry into and permanence in regular schooling. Thinking about youth in this context can thus contribute to a better understanding of who the new public occupying EJA classrooms in recent years is and, further, to interpreting the impact of flow-correction policies in concrete terms: in the lives of these young people.
Abstract:
The Dependency Structure Matrix (DSM) has proved to be a useful tool for system structure elicitation and analysis. However, as with any modelling approach, the insights gained from analysis are limited by the quality and correctness of input information. This paper explores how the quality of data in a DSM can be enhanced by elicitation methods which include comparison of information acquired from different perspectives and levels of abstraction. The approach is based on comparison of dependencies according to their structural importance. It is illustrated through two case studies: creation of a DSM showing the spatial connections between elements in a product, and a DSM capturing information flows in an organisation. We conclude that considering structural criteria can lead to improved data quality in DSM models, although further research is required to fully explore the benefits and limitations of our proposed approach.
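As a small, hedged illustration of ranking dependencies by a structural-importance criterion (an invented instance of the general idea, not the paper's method): here a dependency is scored by how many reachability relations are lost when it is removed. The 0/1 matrix and element indices are hypothetical.

```python
# DSM dependency ranking by reachability loss: illustrative sketch.

def reachable_pairs(dsm):
    """Number of ordered pairs (i, j) with a directed path i -> j."""
    n = len(dsm)
    reach = [row[:] for row in dsm]
    for k in range(n):                      # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return sum(sum(row) for row in reach)

def dependency_importance(dsm):
    """Reachability lost when each dependency is deleted, highest first."""
    base = reachable_pairs(dsm)
    scores = []
    for i, row in enumerate(dsm):
        for j, dep in enumerate(row):
            if dep:
                row[j] = 0                              # temporarily remove edge
                scores.append((base - reachable_pairs(dsm), i, j))
                row[j] = dep                            # restore it
    return sorted(scores, reverse=True)

# dsm[i][j] == 1 means element i depends on element j.
dsm = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(dependency_importance(dsm))   # both edges score 2: each breaks the chain
```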