27 results for Cryptography, Discrete Logarithm, Extension Fields, Karatsuba Multiplication, Normal Basis


Relevance:

20.00%

Publisher:

Abstract:

Introduction and Objectives – Electromagnetic fields are naturally present in the Universe. Until a few years ago, the levels of electromagnetic fields were relatively constant. With the development of technology, exposure to new sources of electromagnetic radiation has increased. It is therefore natural that public concern, particularly about the potential health risks of electromagnetic fields, has also grown. The objective of this work was to understand and analyse individuals' concern about, and perception of, electromagnetic radiation, on the premise that one of the main factors in the adoption of precautionary measures is how the risk is perceived by the individual. Methodology – This is a descriptive study of a quantitative nature. The sample, composed of 320 individuals, is a non-probabilistic convenience sample. Results – Respondents reported being "not very concerned" about exposure to electromagnetic fields, were unaware of the sources of electromagnetic radiation present in their daily lives, and took no precautions regarding exposure to electromagnetic fields. Conclusions – Individuals show an immature awareness of the issue of electromagnetic radiation, partly explained by the absence of sensory mechanisms that would allow them to detect it. Investment in education and awareness-raising can help secure a future with a better quality of life, and it is important to combine the efforts of several sectors (health, the media, and education). Schools, through children and young people, are a privileged channel for conveying this information.

Relevance:

20.00%

Publisher:

Abstract:

To study a flavour model with a non-minimal Higgs sector one must first define the symmetries of the fields, then identify which types of vacua exist and how they may break the symmetries, and finally determine whether the remnant symmetries are compatible with the experimental data. Here we address all these issues in the context of flavour models with any number of Higgs doublets. We stress the importance of analysing the Higgs vacuum expectation values that are pseudo-invariant under the generators of all subgroups. It is shown that the only way to obtain a physical CKM mixing matrix and, simultaneously, non-degenerate and non-zero quark masses is to require the vacuum expectation values of the Higgs fields to break the full flavour group completely, except possibly for some symmetry belonging to baryon number. The application of this technique to some illustrative examples, such as the flavour groups Delta(27), A_4 and S_3, is also presented.
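
As a hedged numerical sketch of the pseudo-invariance check described above (not the authors' actual computation): a VEV v is pseudo-invariant under a group element S when S v = e^(i alpha) v, i.e., when v is an eigenvector of S with a unimodular eigenvalue. The Delta(27) generators below are one standard three-dimensional representation, and the candidate VEVs are illustrative choices.

```python
import numpy as np

# A standard 3-dimensional representation of two Delta(27) generators
# (illustrative choice; conventions vary across the literature).
w = np.exp(2j * np.pi / 3)
A = np.diag([1, w, w**2])                      # diagonal phase generator
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)       # cyclic permutation generator

def is_pseudo_invariant(S, v, tol=1e-12):
    """True if S v = exp(i*alpha) v for some phase alpha,
    i.e. v is an eigenvector of S with a unimodular eigenvalue."""
    Sv = S @ v
    k = np.argmax(np.abs(v))          # read the phase off a non-zero entry
    phase = Sv[k] / v[k]
    return (abs(abs(phase) - 1) < tol and
            np.allclose(Sv, phase * v, atol=tol))

for label, v in [("(1,1,1)", np.array([1, 1, 1], dtype=complex)),
                 ("(1,0,0)", np.array([1, 0, 0], dtype=complex)),
                 ("(1,w,w^2)", np.array([1, w, w**2]))]:
    print(label,
          "A:", is_pseudo_invariant(A, v),
          "B:", is_pseudo_invariant(B, v))
```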

Relevance:

20.00%

Publisher:

Abstract:

With very few exceptions, M > 4 tectonic earthquakes in the Azores show normal-fault solutions and occur away from the islands. Exceptionally, the 1998 shock was pure strike-slip and occurred within the northern edge of the Pico-Faial Ridge. Fault plane solutions show two possible planes of rupture, striking ENE-WSW (dextral) and NNW-SSE (sinistral). The former has not been recognised in the Azores, but is parallel to the transform direction related to the relative motion between the Eurasia and Nubia plates. Therefore, the main question we address in the present study is: do transform faults related to the Eurasia/Nubia plate boundary exist in the Azores? Knowing that the main source of strain is related to plate kinematics, we conclude that the sinistral strike-slip NNW-SSE fault plane solution is not consistent with either the fault dip (ca. 65°, which is typical of a normal fault) or the roughly ENE-WSW direction of maximum extension; both are consistent with a normal fault, as observed in most major earthquakes on faults striking around NNW-SSE in the Azores. In contrast, the dextral strike-slip ENE-WSW fault plane solution is consistent with the transform direction related to the anticlockwise rotation of Nubia relative to Eurasia. Altogether, tectonic data, measured ground motion, observed destruction, and modelling are consistent with a dextral strike-slip source fault striking ENE-WSW. Furthermore, the bulk clockwise rotation measured by GPS is typical of the bookshelf block rotations observed at the termination of such master strike-slip faults. Therefore, we suggest that the 1998 earthquake can be related to the WSW termination of a transform (ENE-WSW fault plane solution) associated with the Nubia-Eurasia diffuse plate boundary.

Relevance:

20.00%

Publisher:

Abstract:

We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions we give an estimate of the time of convergence to the equilibrium distribution using the second-largest eigenvalue of some matrices associated with the system.
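
The paper's setting is non-autonomous, but the role of the second-largest eigenvalue is easiest to see in the autonomous special case. The sketch below, a simplified illustration rather than the paper's construction, builds the transition matrix of a random walk on a small graph and estimates the number of steps to reach equilibrium within a tolerance eps from |lambda_2|^n < eps.

```python
import numpy as np

# Adjacency matrix of a small connected, non-bipartite graph (illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Row-stochastic transition matrix of the simple random walk.
P = A / A.sum(axis=1, keepdims=True)

# The largest eigenvalue of P is 1; the second-largest modulus
# controls the geometric rate of convergence to equilibrium.
eigvals = np.linalg.eigvals(P)
lam2 = sorted(np.abs(eigvals), reverse=True)[1]

eps = 1e-6
n_est = int(np.ceil(np.log(eps) / np.log(lam2)))
print(f"|lambda_2| = {lam2:.4f}, estimated convergence time ~ {n_est} steps")
```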

Relevance:

20.00%

Publisher:

Abstract:

The study of transient dynamical phenomena near bifurcation thresholds has attracted the interest of many researchers due to the relevance of bifurcations in different physical and biological systems. In the context of saddle-node bifurcations, where two or more fixed points collide and annihilate each other, it is known that the dynamics can undergo the so-called delayed transition. This phenomenon arises when the system spends a long time before reaching the remaining stable equilibrium, found after the bifurcation, because of the presence of a saddle remnant in phase space. Some works have tackled this phenomenon analytically, especially in continuous-time dynamical systems, showing that the time delay tau scales according to an inverse square-root power law, tau ~ (mu - mu_c)^(-1/2), as the bifurcation parameter mu is driven further away from its critical value mu_c. In this work, we first characterize this scaling law analytically, using complex-variable techniques, for a family of one-dimensional maps called the normal form of the saddle-node bifurcation. We then apply our general analytic results to a single-species ecological model with harvesting given by a unimodal map, characterizing the delayed transition and the scaling law arising due to the harvesting constant. For both systems we show that the numerical results are in perfect agreement with the analytical solutions we provide. The procedure presented in this work can be used to characterize the scaling laws of one-dimensional discrete dynamical systems with saddle-node bifurcations.
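
A minimal numerical sketch of the scaling law, using a commonly quoted convention for the normal form, x_{n+1} = x_n + x_n^2 + mu with mu_c = 0 (which may differ from the paper's exact parametrization): measure the number of iterations spent crossing the bottleneck for several values of mu - mu_c and fit the exponent.

```python
import numpy as np

def escape_time(mu, x0=-1.0, x_max=1.0, n_max=10**7):
    """Iterations for x_{n+1} = x_n + x_n**2 + mu (saddle-node
    normal form, mu_c = 0) to crawl through the bottleneck near
    x = 0 and escape beyond x_max."""
    x, n = x0, 0
    while x < x_max and n < n_max:
        x = x + x * x + mu
        n += 1
    return n

mus = np.logspace(-6, -3, 8)          # distances mu - mu_c above threshold
taus = np.array([escape_time(mu) for mu in mus])

# Fit log(tau) = a*log(mu - mu_c) + b; the scaling law predicts a = -1/2.
slope, _ = np.polyfit(np.log(mus), np.log(taus), 1)
print(f"fitted exponent: {slope:.3f} (theory: -0.5)")
```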

Relevance:

20.00%

Publisher:

Abstract:

The effect of monopolar and bipolar shaped pulses on the additional yield of apple juice extraction is evaluated. The applied electric field strength, pulse width, and number of pulses are assessed for both pulse types, and divergences are analyzed. The electric field strength is varied from 100 to 1300 V/cm, the pulse width from 20 to 300 μs, and the number of pulses from 10 to 200, at a frequency of 200 Hz. Two pulse trains separated by 1 s are applied to apple cubes. Results are plotted against untreated reference samples for all assays. The specific energy consumption is calculated for each experiment, as well as qualitative indicators of apple juice quality: total soluble dry matter and absorbance at a 390 nm wavelength. Bipolar pulses demonstrate higher efficiency, and the specific energy consumption has a threshold beyond which higher energy inputs do not result in higher juice extraction when the electric field is varied. Total soluble dry matter and absorbance results do not show significant differences between the application of monopolar and bipolar pulses, but all values are within the limits proposed for apple juice intended for human consumption.
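
As a hedged back-of-the-envelope companion to the reported specific energy consumption: for square pulses a common estimate is w = N * sigma * E^2 * dt / rho (J/kg). The conductivity and density values below are illustrative assumptions for apple tissue, not figures from the paper.

```python
# Hedged estimate of specific energy input for square PEF pulses:
# w = N * sigma * E^2 * dt / rho  [J/kg].
sigma = 0.08      # S/m, assumed apple-tissue conductivity (illustrative)
rho = 840.0       # kg/m^3, assumed apple density (illustrative)

def specific_energy(E_field_V_per_cm, pulse_width_us, n_pulses):
    E = E_field_V_per_cm * 100.0        # V/cm -> V/m
    dt = pulse_width_us * 1e-6          # us  -> s
    return n_pulses * sigma * E**2 * dt / rho   # J/kg

# Corners of the parameter ranges reported in the abstract.
for E, tw, n in [(100, 20, 10), (1300, 300, 200)]:
    w = specific_energy(E, tw, n)
    print(f"E={E} V/cm, {tw} us, {n} pulses -> {w/1000:.3f} kJ/kg")
```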

Relevance:

20.00%

Publisher:

Abstract:

Internship report presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Journalism.

Relevance:

20.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics such as performance and area efficiency, where the designer tries to achieve the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms, considering the main architectural aspects, and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
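
The paper's cycle-count equation is not reproduced in the abstract, so the sketch below is a generic roofline-style model of blocked dense matrix multiplication, illustrating how execution cycles can be related to the number of cores, per-core local memory, and external memory bandwidth; the parameter names and blocking scheme are assumptions, not the paper's equation.

```python
# Hedged roofline-style cycle model for blocked dense matrix
# multiplication C = A*B (n x n) on a many-core chip.

def matmul_cycles(n, cores, flops_per_core_per_cycle,
                  local_mem_words, mem_words_per_cycle):
    # Tile size limited by per-core local memory: three b x b tiles.
    b = int((local_mem_words / 3) ** 0.5)
    compute_cycles = 2 * n**3 / (cores * flops_per_core_per_cycle)
    # Classic blocked-matmul traffic: ~2*n^3/b words moved from
    # external memory (the lower-order n^2 terms are ignored).
    memory_cycles = (2 * n**3 / b) / mem_words_per_cycle
    # The slower of the two phases dominates.
    return max(compute_cycles, memory_cycles)

cycles = matmul_cycles(n=4096, cores=256, flops_per_core_per_cycle=2,
                       local_mem_words=8192, mem_words_per_cycle=4)
print(f"estimated cycles: {cycles:.3e}")
```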

Relevance:

20.00%

Publisher:

Abstract:

For an interval map, the poles of the Artin-Mazur zeta function provide topological invariants which are closely connected to topological entropy. It is known that for a time-periodic nonautonomous dynamical system F with period p, the p-th power [zeta_F(z)]^p of its zeta function is meromorphic in the unit disk. Unlike in the autonomous case, where the zeta function zeta_f(z) only has poles in the unit disk, in the p-periodic nonautonomous case [zeta_F(z)]^p may have zeros. In this paper we introduce the concept of spectral invariants of p-periodic nonautonomous discrete dynamical systems and study the role played by the zeros of [zeta_F(z)]^p in this context. As we will see, these zeros play an important role in the spectral classification of these systems.
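
For orientation, the autonomous Artin-Mazur zeta function is zeta_f(z) = exp(sum_{n>=1} N_n z^n / n), where N_n counts period-n points. The sketch below, a textbook autonomous example rather than the paper's p-periodic setting, truncates this series for the full tent map (N_n = 2^n), where the closed form is 1/(1 - 2z) with a single pole at z = 1/2, giving topological entropy log 2.

```python
import numpy as np

# Artin-Mazur zeta function zeta(z) = exp(sum_{n>=1} N_n z^n / n),
# illustrated for the full tent map, which has N_n = 2^n periodic
# points; then zeta(z) = 1/(1 - 2z), with one pole at z = 1/2.
N_terms = 30
N = [2**n for n in range(1, N_terms + 1)]

def zeta_truncated(z):
    return np.exp(sum(Nn * z**n / n for n, Nn in enumerate(N, start=1)))

# Compare the truncated series to the closed form inside |z| < 1/2.
for z in [0.1, 0.2, 0.3, 0.4]:
    print(z, zeta_truncated(z), 1 / (1 - 2 * z))
```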

Relevance:

20.00%

Publisher:

Abstract:

Sparse matrix-vector multiplication (SMVM) is a fundamental operation in many scientific and engineering applications. In many cases sparse matrices have thousands of rows and columns, where most of the entries are zero and the non-zero data are spread over the matrix. This lack of data locality reduces the effectiveness of the data cache in general-purpose processors, considerably reducing their performance efficiency when compared to what is achieved with dense matrix multiplication. In this paper, we propose a parallel processing solution for SMVM in a many-core architecture. The architecture is tested with known benchmarks using a ZYNQ-7020 FPGA. The architecture is scalable in the number of core elements and limited only by the available memory bandwidth. It achieves performance efficiencies of up to almost 70% and better performance than previous FPGA designs.
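
The FPGA design itself cannot be reproduced here; as a functional reference, the sketch below is the standard CSR (compressed sparse row) matrix-vector product that such architectures parallelize, with rows being the natural unit of work to distribute across cores.

```python
import numpy as np

def smvm_csr(values, col_idx, row_ptr, x):
    """Reference CSR sparse matrix-vector product y = A @ x.
    In a many-core design each core would process a slice of rows;
    this sequential version is the functional specification."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Small example: the 3x3 matrix [[4, 0, 9], [0, 7, 0], [0, 8, 5]].
values = np.array([4.0, 9.0, 7.0, 8.0, 5.0])
col_idx = np.array([0, 2, 1, 1, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(smvm_csr(values, col_idx, row_ptr, x))   # [31. 14. 31.]
```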

Relevance:

20.00%

Publisher:

Abstract:

Motivated by the dark matter and baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z_2 symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find that it requires the existence of a new scalar state mixing with the Higgs, with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10^10 GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition require stability at least up to the grand unified theory scale, finding that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.
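
As a toy illustration of the stability criterion (the quartic coupling must remain positive along the running up to the cutoff), one can integrate a schematic one-loop-style RGE and check the sign of the coupling. The beta-function coefficients below are illustrative placeholders, not the two-loop RGEs of the complex-singlet model studied in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-loop-style running of a quartic coupling lambda(t),
# t = log(mu/mu_0). Coefficients are schematic placeholders, NOT
# the two-loop RGEs of the complex-singlet model.
def beta(t, lam, yt=0.94, g=0.65):
    # Positive lambda^2 term, destabilizing top-Yukawa term, small
    # gauge contribution; 16*pi^2 is the one-loop factor.
    return [(24 * lam[0]**2 - 6 * yt**4 + 0.3 * g**4) / (16 * np.pi**2)]

t_max = np.log(1e10 / 173.0)          # run from ~top mass to 1e10 GeV
sol = solve_ivp(beta, [0, t_max], [0.13], rtol=1e-8)

lam_end = sol.y[0, -1]
stable = (sol.y[0] > 0).all()
print(f"lambda at the cutoff: {lam_end:.4f}; positive along the run: {stable}")
```

With these SM-like placeholder values the coupling runs negative, illustrating the instability that a new scalar mixing with the Higgs is invoked to cure.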

Relevance:

20.00%

Publisher:

Abstract:

In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium- and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques to the discrete data, keeping the most relevant features while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
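
The authors' specific criteria are not given in the abstract; the sketch below illustrates the pipeline it describes: equal-width discretization, bin-class histograms, and ranking features by a relevance score, here mutual information, which the paper uses as a baseline for comparison.

```python
import numpy as np

def discretize(x, n_bins=8):
    """Equal-width discretization of one feature into integer bins."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def mutual_information(xd, y):
    """MI(X;Y) computed from the bin-class histogram of a discrete feature."""
    joint = np.zeros((xd.max() + 1, y.max() + 1))
    for xi, yi in zip(xd, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])).sum()

# Toy data: feature 0 is informative about the class, feature 1 is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = np.column_stack([y + 0.3 * rng.normal(size=500),
                     rng.normal(size=500)])

scores = [mutual_information(discretize(X[:, j]), y) for j in range(2)]
print("relevance scores:", np.round(scores, 3))   # feature 0 >> feature 1
```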