880 results for Time-memory attacks
Abstract:
Faithful to the recurring themes of his deeply felt elegiac verse, in his last book Miradas al último espejo Fernando Ortiz recalls the people and places of his life, sings of the effects of the passage of time and the precariousness of existence, and reflects on poetry, emphasizing the fundamental role of tradition. What belongs to the past survives here in words full of beauty and emotion that seek to tear down the barrier between today and yesterday: the different eras merge in Ortiz's sensibility and together guide him toward the final point. The soothing power of verse, sarcasm, and irony ease his resignation before the flow of time and help him accept the disintegration of the human being in its rapid passage through life. The pilgrim's only certainty is his journey, which is why he must make the most of it, showing gratitude and enjoying the pleasures of this world, foremost among which is undoubtedly love. Conceiving the poet's arduous task as a search for the truth to be transmitted in his verses, in Miradas al último espejo Ortiz warns his readers of the fate we all share, advises them on how to act along the way, offers the solace of his poetry, and bids farewell 'with Cervantine gratitude toward life'.
Abstract:
In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we have presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower and upper bound resource usage functions. In order to statically check such specifications, both upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, this paper provides a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
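The interval-valued verdicts described above can be illustrated with a small sketch (hypothetical function names, not the Ciao/CiaoPP implementation): given inferred lower/upper resource-usage bounds and a specification's bounds, each input data size is classified as proved, disproved, or unknown.

```python
def check_intervals(inferred_lb, inferred_ub, spec_lb, spec_ub, sizes):
    """Classify each input data size n as 'proved', 'disproved', or 'unknown'
    by comparing inferred resource bounds against specification bounds."""
    verdicts = []
    for n in sizes:
        lo, hi = inferred_lb(n), inferred_ub(n)
        s_lo, s_hi = spec_lb(n), spec_ub(n)
        if s_lo <= lo and hi <= s_hi:
            verdicts.append((n, "proved"))     # every possible usage lies inside the spec
        elif hi < s_lo or lo > s_hi:
            verdicts.append((n, "disproved"))  # no possible usage can satisfy the spec
        else:
            verdicts.append((n, "unknown"))    # inferred bounds straddle the spec boundary
    return verdicts

# Toy example: actual usage between n and n^2, spec allows up to 2n.
verdicts = dict(check_intervals(lambda n: n, lambda n: n * n,
                                lambda n: 0, lambda n: 2 * n, range(1, 6)))
```

With these toy bounds the spec is proved for n = 1, 2 (n² ≤ 2n there) and undecided for larger n, mirroring the per-interval outcomes the abstract describes.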
Abstract:
A correlational study was designed to examine the general processing speed and orthographic processing speed accounts of the association between continuous naming speed and word reading skill in children from fourth to sixth grade. Children were given two tests of each of the following constructs: word reading skill, alphanumeric symbol naming speed, nonsymbol naming speed, alphanumeric processing speed, and nonsymbol processing speed. Results were not completely consistent with either the general processing speed or the orthographic processing speed accounts. Although an alphanumeric symbol processing efficiency component is clearly involved, it is argued that the particularly strong association between naming speed and word reading also reflects the efficiency of phonological processing in children of this age.
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested both on a local computer cluster and on virtual machines in the cloud. By varying the number of computers and the amount of memory we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluation of generic and graph-based analytic algorithms over graph processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with increasing computing resources. While GraphLab largely outperforms Spark for graph algorithms, its running time performance is close to Spark's for non-graph algorithms. Additionally, the running time with Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
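The running-time and memory measurements such a benchmark collects can be gathered per run with Python's standard library alone; the harness below is only an illustrative stand-in for the paper's benchmark suite, and `degree_count` is a toy graph-style workload invented here.

```python
import time
import tracemalloc

def profile(fn, *args):
    """Measure wall-clock running time and peak Python heap allocation of fn(*args)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def degree_count(edges):
    """Count out-degrees in an edge list (a toy stand-in for a graph algorithm)."""
    deg = {}
    for u, _ in edges:
        deg[u] = deg.get(u, 0) + 1
    return deg

# Synthetic edge list: 10,000 edges over 100 nodes.
edges = [(i % 100, (i * 7) % 100) for i in range(10_000)]
result, secs, peak_bytes = profile(degree_count, edges)
```

Repeating `profile` while scaling the input (or the number of workers, in a real cluster) gives the scalability curves the abstract refers to; real engines would additionally need cluster-level monitoring for network and I/O overhead.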
Abstract:
The article analyses how Paul Celan, using the a-referential and anti-mimetic devices characteristic of poésie pure and absolute poetry, develops a poetic model in which temporality and memory are decisive. In this new model, language does not constitute an autonomous, immanent reality; rather, it takes concrete form in a dialogic action and refers to an extralinguistic reality. The poetic text itself configures a space of remembrance, establishing itself as a site of memory that, through the historical sediment carried by language, bears witness to what happened.
Abstract:
This study examines the environmental disaster that occurred in March 2011 in São Lourenço do Sul, a flash flood that struck the southern part of the state of Rio Grande do Sul. In São Lourenço do Sul more than half of the town was hit, affecting entire neighbourhoods, leaving much of the population homeless, and even causing deaths. The perspectives of the town's public school teachers, conveyed through their painful memories of that moment, together with other sources, are of great importance for coming closer to the event. Environmental History, together with Oral History, within the domain of the History of the Present Time, provide the guiding framework for the research.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still a matter of debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA) which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found to be present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for models of this type is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics is described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method, and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
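The standard DFA (the q = 2 case that MF-DFA generalises) can be sketched in pure Python: integrate the demeaned series, detrend it linearly in windows of size s, and read the scaling exponent alpha off the slope of log F(s) versus log s. This is a minimal illustration, not the thesis's implementation; exponents near 0.5 indicate short memory (white-noise-like), larger values indicate persistence.

```python
import math
import random

def dfa_exponent(x, scales):
    """Standard DFA (q = 2): linear detrending in non-overlapping windows,
    scaling exponent from the log-log slope of F(s) against s."""
    mean = sum(x) / len(x)
    profile, c = [], 0.0
    for v in x:                      # integrated, demeaned series ("profile")
        c += v - mean
        profile.append(c)
    logs, logF = [], []
    for s in scales:
        n_win = len(profile) // s
        sq = 0.0
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            # least-squares line fit within the window
            tm, ym = (s - 1) / 2, sum(seg) / s
            cov = sum((t - tm) * (y - ym) for t, y in enumerate(seg))
            var = sum((t - tm) ** 2 for t in range(s))
            b = cov / var
            a = ym - b * tm
            sq += sum((y - (a + b * t)) ** 2 for t, y in enumerate(seg))
        logs.append(math.log(s))
        logF.append(math.log(math.sqrt(sq / (n_win * s))))  # RMS fluctuation F(s)
    # slope of log F(s) vs log s = scaling exponent alpha
    lm, fm = sum(logs) / len(logs), sum(logF) / len(logF)
    num = sum((l - lm) * (f - fm) for l, f in zip(logs, logF))
    den = sum((l - lm) ** 2 for l in logs)
    return num / den

rng = random.Random(42)
noise = [rng.gauss(0, 1) for _ in range(4096)]
alpha = dfa_exponent(noise, [16, 32, 64, 128, 256])  # ~0.5 in theory for white noise
```

MF-DFA replaces the second moment in F(s) with the q-th-order moment and repeats the fit for a range of q, which is what lets it distinguish monofractal from multifractal scaling.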
Abstract:
Multiple-time signatures are digital signature schemes in which the signer is able to sign a predetermined number of messages. They are interesting cryptographic primitives because they make it possible to solve many important cryptographic problems while offering a substantial efficiency advantage over ordinary digital signature schemes such as RSA. Multiple-time signature schemes have found numerous applications in ordinary, on-line/off-line, and forward-secure signatures, as well as in multicast/stream authentication. We propose a multiple-time signature scheme with very efficient signing and verifying. Our construction is based on a combination of one-way functions and cover-free families, and it is secure against adaptive chosen-message attacks.
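The construction itself is not given in the abstract; a HORS-style sketch (a one-way function plus subset selection, the same ingredients as one-way functions and cover-free families) conveys the flavour. Parameter choices and names here are illustrative, not the authors' scheme.

```python
import hashlib
import os

T, K = 256, 16  # T one-way secrets; each signature reveals K of them (few-time use)

def keygen():
    """Secret key: T random strings; public key: their hashes (the one-way images)."""
    sk = [os.urandom(32) for _ in range(T)]
    pk = [hashlib.sha256(s).digest() for s in sk]
    return sk, pk

def _indices(msg):
    """The first K bytes of the message digest select which secrets to reveal."""
    return hashlib.sha256(msg).digest()[:K]

def sign(sk, msg):
    """Signature = the K selected preimages."""
    return [sk[i] for i in _indices(msg)]

def verify(pk, msg, sig):
    """Check each revealed preimage hashes to the matching public-key entry."""
    return all(hashlib.sha256(s).digest() == pk[i]
               for i, s in zip(_indices(msg), sig))
```

Signing is just list indexing and verifying is K hash evaluations, which is the efficiency advantage over RSA-style schemes the abstract highlights; security degrades as secrets are revealed, which is why such schemes sign only a predetermined number of messages.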
Abstract:
The Distributed Network Protocol v3.0 (DNP3) is one of the most widely used protocols for controlling national infrastructure. Widely used interactive packet manipulation tools, such as Scapy, have not yet been augmented to parse and create DNP3 frames (Biondi 2014). In this paper we extend Scapy to include DNP3, thus allowing us to perform attacks on DNP3 in real time. Our contribution builds on East et al. (2009), who proposed a range of possible attacks on DNP3. We implement several of these attacks to validate our DNP3 extension to Scapy, and then execute the attacks on real-world equipment. We present our results, showing that many of these theoretical attacks would be unsuccessful in an Ethernet-based network.
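To give a sense of the framing such an extension must handle, here is a minimal pure-Python sketch (not the paper's Scapy extension) of a DNP3 link-layer header block: the 0x05 0x64 start bytes, a length octet, a control octet, little-endian destination and source addresses, and a CRC-16/DNP checksum over the eight header octets.

```python
def crc16_dnp(data):
    """CRC-16/DNP: reflected polynomial 0x3D65 (0xA6BC bit-reversed),
    init 0x0000, final XOR 0xFFFF."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def dnp3_link_header(ctrl, dest, src):
    """Build the 10-byte DNP3 link-layer header block.
    LEN = 5 covers CTRL + DEST + SRC with no user data following."""
    body = (bytes([0x05, 0x64, 5, ctrl])
            + dest.to_bytes(2, "little")
            + src.to_bytes(2, "little"))
    return body + crc16_dnp(body).to_bytes(2, "little")

# Example control octet value; real frames set DIR/PRM/FCV bits and a function code.
frame = dnp3_link_header(0xC4, dest=1, src=1024)
```

Because the link layer has no authentication, anyone who can inject frames with a valid CRC is accepted, which is the property the attacks in East et al. (2009) exploit.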
Abstract:
An optical and irreversible temperature sensor (e.g., a time-temperature integrator) is reported based on a mechanically embossed chiral-nematic polymer network. The polymer consists of a chemical and a physical (hydrogen-bonded) network and has a reflection band in the visible wavelength range. The sensors are produced by mechanical embossing at elevated temperatures. A relatively large compressive deformation (up to 10%) is obtained, inducing a shift of the reflection band to shorter wavelengths (>30 nm). After embossing, a temperature sensor is obtained that exhibits an irreversible optical response. A permanent color shift to longer wavelengths (red) is observed upon heating of the polymer material to temperatures above the glass transition temperature. It is shown that the observed permanent color shift is related to shape memory in the polymer material. The films can be printed on a foil, thus showing that these sensors are potentially interesting as time-temperature integrators for applications in food and pharmaceutical products. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
A time-varying, controllable, fault-tolerant field associative memory model and its realization algorithms are proposed. On the one hand, the model simulates the time-dependent variability of the fault-tolerant field of the human brain's associative memory. On the other hand, the fault-tolerant fields of the model's memory samples can be controlled, so that appropriate fault-tolerant fields can be designed for memory samples at different times according to the importance of each sample. Moreover, the model realizes nonlinear association of infinite-value (continuous) patterns from n-dimensional space to m-dimensional space, and the fault-tolerant fields of the memory samples fill the whole real space R^n. Simulations show that the model has the above properties and that its associative recall is fast.
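No equations are given in the abstract; as a familiar point of reference only, a classical Hopfield-style associative memory (a standard technique, not the authors' time-varying fault-tolerant-field model) shows the basic store-and-recall mechanism that such models extend.

```python
def train(patterns):
    """Hebbian weights for a classical Hopfield associative memory
    over ±1 patterns (zero diagonal)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (or a step limit);
    a noisy cue is attracted back to the nearest stored pattern."""
    for _ in range(steps):
        nxt = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
               for row in W]
        if nxt == state:
            break
        state = nxt
    return state
```

In Hopfield terms, the "fault-tolerant field" of a stored pattern corresponds to its basin of attraction; the model in the abstract makes such basins time-varying and individually controllable, which the fixed Hebbian rule above does not.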
Abstract:
The spacing effect in list learning occurs because identical massed items suffer encoding deficits and because spaced items benefit from retrieval and increased time in working memory. Requiring the retrieval of identical items produced a spacing effect for recall and recognition, both for intentional and incidental learning. Not requiring retrieval produced spacing only for intentional learning because intentional learning encourages retrieval. Once-presented words provided baselines for these effects. Next, massed and spaced word pairs were judged for matches on their first three letters, forcing retrieval. The words were not identical, so there was no encoding deficit. Retrieval could and did cause spacing only for the first word of each pair; time in working memory, only for the second.