990 results for Efficient elliptic curve arithmetic
Abstract:
A hypersonic waverider forebody is designed in this paper. For the present waverider, the undersurface is carved out as a stream surface of the hypersonic inviscid flow field around a wedge-elliptic cone, and the upper surface is taken to be a freestream surface. A finite-volume code is used to generate the three-dimensional flow field. The leading edge is determined by requiring that the lip lie on the intersection line of the shocks.
Abstract:
Boron nitride is a promising material for nanotechnology applications due to its two-dimensional graphene-like, insulating, and highly resistant structure. Recently it has received a lot of attention as a substrate to grow and isolate graphene, as well as for its intrinsic UV lasing response. Similar to carbon, one-dimensional boron nitride nanotubes (BNNTs) have been theoretically predicted and later synthesised. Here we use first-principles simulations to unambiguously demonstrate that i) BN nanotubes inherit the highly efficient UV luminescence of hexagonal BN; ii) the application of an external perpendicular field closes the electronic gap while preserving the UV lasing, albeit with lower yield; iii) defects in BNNTs are responsible for tunable light emission from the UV to the visible, controlled by a transverse electric field (TEF). Our present findings pave the way towards optoelectronic applications of BN-nanotube-based devices that are simple to implement because they do not require any special doping or complex growth.
Abstract:
The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades several learning algorithms have been proposed to learn probability distributions based on decomposable models due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
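Since the fractal tree algorithms extend Chow and Liu's procedure from k = 2 to larger clique sizes, a minimal sketch of the k = 2 base case may help fix ideas: estimate pairwise mutual information from data and take a maximum-weight spanning tree. The function names and the use of NumPy here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Chow-Liu step (the k = 2 base case of the fractal
# tree algorithms); assumes discrete data in a (samples x variables) array.
from itertools import combinations

import numpy as np


def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in nats from two discrete samples."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in joint.items())


def chow_liu_edges(data):
    """Edges of a maximum-likelihood tree (a maximal 2-order decomposable graph)."""
    n_vars = data.shape[1]
    weight = np.zeros((n_vars, n_vars))
    for i, j in combinations(range(n_vars), 2):
        weight[i, j] = weight[j, i] = mutual_information(data[:, i], data[:, j])
    # Prim's algorithm for a maximum-weight spanning tree.
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        i, j = max(((a, b) for a in in_tree for b in range(n_vars)
                    if b not in in_tree), key=lambda e: weight[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

The fractal tree algorithms then grow such a tree into maximal i-order decomposable graphs for i = 3, ..., k; that step is not sketched here.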
Abstract:
Micro-scale gas flows are usually low-speed flows that exhibit rarefied gas effects. It is challenging to simulate these flows because traditional CFD methods are unable to capture rarefied gas effects and the direct simulation Monte Carlo (DSMC) method is very inefficient for low-speed flows. In this study we combine two techniques to improve the efficiency of the DSMC method. The information preservation technique is used to reduce the statistical noise, and the cell-size relaxed technique is employed to increase the effective cell size. The new cell-size relaxed IP method is found capable of simulating micro-scale gas flows, as demonstrated for 2D lid-driven cavity flows.
Abstract:
Numerous investigations have utilized various semi-purified and purified diets to estimate the protein and amino acid requirements of several temperate fishes. The vast literature on the protein and amino acid requirements of fishes has continued to omit the tropical warm-water species. The net effect is that fish feed formulation in Nigeria has relied on requirements established for temperate species. This paper attempts to review the state of knowledge on the protein and amino acid requirements of fishes with emphasis on warm-water species, the methods used to determine protein and amino acid requirements, and the influence of various factors on nutritional requirement studies. Finally, evidence is presented, with specific examples, that the requirements of warm-water fishes differ from those of temperate species, and this is used to explain why fish feed formulation in Nigeria is far from efficient.
Abstract:
We propose a highly efficient content-lossless compression scheme for Chinese document images. The scheme combines morphologic analysis with pattern matching to cluster patterns. In order to obtain error maps with the fewest errors, morphologic analysis is applied to decompose and recompose the Chinese character patterns. In the pattern matching, the criteria are adapted to the characteristics of Chinese characters. Since small-size components can sometimes be inserted into the blank spaces of large-size components, we can achieve small pattern library images. Arithmetic coding is applied for the final compression. Our method achieves much better compression performance than most alternative methods and assures content-lossless reconstruction. (c) 2006 Society of Photo-Optical Instrumentation Engineers.
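As an illustration of the pattern-matching step, the sketch below clusters binary character components against a pattern library by counting mismatched pixels (an "error map"). The XOR-based match criterion and the error threshold are illustrative assumptions, not the exact criteria used in the paper.

```python
# Illustrative sketch of symbol clustering for pattern-based compression:
# each binary component is matched against a library of templates, and a
# new template is added only when no existing one is close enough.
import numpy as np


def match_error(patch, template):
    """Number of mismatched pixels between two equally sized binary patches."""
    if patch.shape != template.shape:
        return np.inf
    return int(np.count_nonzero(patch ^ template))


def cluster_patterns(components, max_errors=8):
    """Assign each binary component to a library template (its cluster id)."""
    library, assignments = [], []
    for comp in components:
        best_id, best_err = None, np.inf
        for idx, tmpl in enumerate(library):
            err = match_error(comp, tmpl)
            if err < best_err:
                best_id, best_err = idx, err
        if best_err <= max_errors:
            assignments.append(best_id)        # reuse an existing pattern
        else:
            library.append(comp)               # start a new cluster
            assignments.append(len(library) - 1)
    return library, assignments
```

The library images, the per-component residual error maps, and the component positions would then be the inputs to the final arithmetic coding stage.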
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
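A behavioral sketch of the eye-scan decision is given below: the scan clock's samples are compared against the data clock's samples at each delay-line tap, and the data phase is moved to the centre of the widest agreeing window. The function names and the 64-tap granularity are assumptions for illustration, not details of the actual circuit.

```python
# Behavioral sketch of the ping-pong eye scan: 'sample_at(tap)' is an
# assumed hook that returns a block of sampled bits at a given delay-line tap.
def best_data_phase(sample_at, data_phase, n_taps=64):
    """Sweep the scan clock across the delay line (calibrated to 2 UI),
    mark taps whose samples agree with the current data phase, and return
    the centre of the widest run of agreeing taps."""
    reference = sample_at(data_phase)
    good = [sample_at(tap) == reference for tap in range(n_taps)]

    best_start, best_len, start = 0, 0, None
    for tap, ok in enumerate(good + [False]):   # sentinel closes the last run
        if ok and start is None:
            start = tap
        elif not ok and start is not None:
            if tap - start > best_len:
                best_start, best_len = start, tap - start
            start = None
    return best_start + best_len // 2           # new tap for the data clock
```

After the data phase is updated, the roles of the two clocks swap, so the sweep can continue indefinitely without running off the end of the calibrated range.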
On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
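The alignment-sensing path can be summarized by a simple signal-flow model: plate overlap sets the coupling, the rectified stimulus sets a bias, the bias sets a delay, and a TDC quantizes that delay into a digital word. All constants and the linear, monotonic relations in the sketch below are placeholder assumptions, not measured characteristics of the array.

```python
# Toy model of the alignment-sensing chain described above; every number
# is a placeholder, chosen only to make the signal flow explicit.
def tdc_alignment_word(overlap_fraction,
                       full_coupling_ff=50.0,   # assumed fully aligned capacitance
                       delay_per_ff_ps=4.0,     # assumed coupling-to-delay gain
                       tdc_lsb_ps=10.0,         # assumed TDC resolution
                       tdc_bits=6):
    """Map plate overlap (0..1) to the digital alignment word."""
    coupling_ff = overlap_fraction * full_coupling_ff
    # Assumption: a stronger rectified bias speeds up the delay generator,
    # so less overlap (less coupling) means a longer delay and a larger code.
    delay_ps = (full_coupling_ff - coupling_ff) * delay_per_ff_ps
    code = int(delay_ps / tdc_lsb_ps)
    return min(code, 2 ** tdc_bits - 1)          # saturate at full scale
```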
Abstract:
Let l be any odd prime, and ζ a primitive l-th root of unity. Let C_l be the l-Sylow subgroup of the ideal class group of Q(ζ). The Teichmüller character w : Z^*_l → Z^*_l is given by w(x) ≡ x (mod l), where w(x) is an (l − 1)-st root of unity and x ∈ Z^*_l. Under the action of this character, C_l decomposes as a direct sum of the C^((i))_l, where C^((i))_l is the eigenspace corresponding to w^i. Let the order of C^((3))_l be l^(h_3). The main result of this thesis is the following: for every n ≥ max(1, h_3), the equation x^(l^n) + y^(l^n) + z^(l^n) = 0 has no integral solutions (x, y, z) with l ∤ xyz. The same result is also proven for n ≥ max(1, h_5), under the assumption that C^((5))_l is a cyclic group of order l^(h_5). Applications of the methods used to prove the above results to the second case of Fermat's Last Theorem and to a Fermat-like equation in four variables are given.
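For readers unfamiliar with the notation, the eigenspace decomposition referred to above is the standard one obtained from the orthogonal idempotents of the group ring; the idempotent notation ε_i below is not taken from the abstract.

```latex
C_l \;=\; \bigoplus_{i=0}^{l-2} C_l^{(i)},
\qquad
C_l^{(i)} \;=\; \varepsilon_i\, C_l,
\qquad
\varepsilon_i \;=\; \frac{1}{l-1}\sum_{a=1}^{l-1} w^{i}(a)\,\sigma_a^{-1}
\;\in\; \mathbb{Z}_l\bigl[\mathrm{Gal}(\mathbb{Q}(\zeta)/\mathbb{Q})\bigr],
```

where σ_a denotes the automorphism ζ ↦ ζ^a; the Galois group acts on C^((i))_l through the character w^i.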
The proof uses a series of ideas of H. S. Vandiver ([V1], [V2]) along with a theorem of M. Kurihara [Ku] and some consequences of the proof of Iwasawa's main conjecture for cyclotomic fields by B. Mazur and A. Wiles [MW]. In [V1] Vandiver claimed that the first case of Fermat's Last Theorem held for l if l did not divide the class number h^+ of the maximal real subfield of Q(e^(2πi/l)). The crucial gap in Vandiver's attempted proof, which has been known to experts, is explained, and complete proofs of all the results used from his papers are given.