405 results for Hamming Cube
Abstract:
Physical urticaria comprises a heterogeneous group of disorders characterized by the development of urticarial lesions and/or angioedema after exposure to certain physical stimuli. The authors present the case of a child with severe acquired cold urticaria secondary to infectious mononucleosis. Avoidance of exposure to cold was recommended, prophylactic treatment with ketotifen and cetirizine was begun, and a self-administered epinephrine kit was prescribed. The ice cube test results and the symptoms improved significantly. Physical urticaria, which involves complex pathogenesis, clinical course and therapy, may be potentially life-threatening; evaluation and diagnosis are especially important in children. To our knowledge this is the first description of persistent severe cold-induced urticaria associated with infectious mononucleosis in a child.
Abstract:
Contains: « Pratique de la fortification des places, par M. SAUVEUR » ; « Le Directeur des fortifications, de M. DE VAUBAN » ; « Le petit Directeur des fortifications », by the same author ; « Instruction de M. de Vauban pour servir au règlement du transport et remüement des terres » ; « État des différens prix de la toise cube de terre pour la foüille » ; « De la pesanteur de divers matériaux, suivant un mémoire de M. Sauveur »
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six test cases using edit codes. It matched the best known results on four of the seven Hamming codes and on three of the three covering codes. The results suggest that the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
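For illustration, the quantity such searches optimize for Hamming distance codes can be computed by brute force. The sketch below is a generic, hypothetical example (it is not the Salmon Algorithm, and the candidate codewords are invented):

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    """Smallest pairwise Hamming distance over all codeword pairs.
    A code search tries to maximize this for a fixed length and size."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Hypothetical candidate code: 4 binary words of length 7.
code = ["0000000", "1110100", "0111010", "1001110"]
```

A search heuristic would score candidate codes by `minimum_distance` (or by the number of codewords achievable at a target distance) and perturb them iteratively.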
Abstract:
This thesis concerns the reconstruction of a 3D model from several images. The 3D model is built with a hierarchical voxel representation in the form of an octree. A cube enclosing the 3D model is computed from the camera positions; this cube contains the voxels and defines the positions of virtual cameras. The 3D model is initialized with a convex hull based on the uniform background colour of the images, which allows the periphery of the model to be carved away. A weighted cost is then computed to evaluate how well each voxel belongs to the object's surface; this cost accounts for the similarity of the pixels coming from each image associated with the virtual camera. Finally, for each virtual camera, a surface is computed from this cost using the SGM method. SGM takes the neighbourhood into account when computing depth, and this thesis presents a variation of the method that accounts for voxels previously excluded from the model by the initialization step or carved away by another surface. The computed surfaces are then used to carve and finalize the 3D model. This thesis presents an innovative combination of steps for creating a 3D model from an existing set of images, or from a sequence of images captured in series, potentially leading to real-time 3D model creation.
Abstract:
A human action in a video sequence can be viewed as a spatio-temporal volume induced by concatenating silhouettes over time. We present a spatio-temporal approach to human action recognition that exploits global features generated by the MDS dimensionality-reduction technique, together with a sub-block decomposition, to model the dynamics of actions. The goal is to provide a method that is simple, inexpensive and robust for recognizing simple actions. The process is fast, requires no video alignment, and is applicable to many scenarios. We also demonstrate the robustness of our method to partial occlusions, shape deformations, changes in scale and viewpoint, irregularities in the execution of an action, and low resolution.
Abstract:
Some dietary strategies are currently being considered to replace the use of antimicrobials on pig farms. The objectives of this study were to evaluate the effect of feed particle size and texture on intestinal volatile fatty acid (VFA) concentrations, on the composition of pathogenic and commensal E. coli populations, and on pig growth performance. Finishing pigs (n = 840) received one of six diets: mash feed at 500, 750 and 1250 µm, and pelleted feed at 500, 750 and 1250 µm. Weight gain was measured at each change of feed formulation. At the slaughterhouse, the caecal and colonic contents of 165 pigs were sampled for E. coli enumeration by quantitative PCR (qPCR) and for VFA quantification. The yccT gene was used to enumerate total E. coli. A decrease in feed conversion ratio was associated with pelleted feed and/or the 500 µm feed. Propionic and butyric acid concentrations, in both the caecum and the colon, were higher in pigs receiving mash feed than in those receiving pelleted feed. With respect to particle size, caecal and colonic butyric acid concentrations were higher in pigs fed the 1250 µm feed than in those fed the 500 µm feed. On the other hand, intestinal levels of total E. coli were higher in pigs fed pelleted feed than in those fed mash feed. The results showed that mash feed is associated with lower growth performance but with favourable intestinal changes.
Abstract:
This thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm, and its iterative counterpart, is presented for the raster-to-octree conversion of CAT scan slices. To improve the speed of generating the octree from the slices, the thesis also explores the possibility of exploiting the inherent parallelism in the conversion program. An octree node, which stores the volume information of a cube, often stores only the average density, which can lead to a "patchy" distribution of density during image reconstruction. To alleviate this problem, the thesis explores the use of vector quantization (VQ) to represent the information contained within a cube. Given the ease of compressing the information during the generation of octrees from CAT scan slices, the use of wavelet transforms is proposed to generate the compressed information in a cube; the modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the octree is a complex task, chiefly because of the requirement to display volumetric information. Rays traced through each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
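The subdivision principle behind octree construction can be sketched as follows. This is a top-down toy version for illustration only (the thesis's algorithm works bottom-up from CAT scan slices, and the function name is hypothetical):

```python
import numpy as np

def build_octree(vol):
    """Recursively build an octree node from a cubic numpy array whose
    side is a power of two. A node is either a uniform leaf value or a
    tuple of eight child nodes (one per octant of the cube)."""
    if vol.min() == vol.max():          # uniform cube -> single leaf
        return int(vol.flat[0])
    h = vol.shape[0] // 2               # split each axis in half
    return tuple(build_octree(vol[x:x + h, y:y + h, z:z + h])
                 for x in (0, h) for y in (0, h) for z in (0, h))
```

Uniform regions collapse to single leaves, which is what makes the representation compact for volumes with large homogeneous areas.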
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single error correction – double error detection (SEC-DED). The design uses a new 4 x 4 reversible gate called 'HCG' to implement Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance for the Hamming error coding and detection circuits.
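For reference, the classical Hamming SEC-DED behaviour that such circuits implement can be sketched in software. This is a hypothetical Hamming(7,4)-plus-overall-parity illustration, not the paper's reversible HCG gate design:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a Hamming(7,4)
    codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def secded_encode(d):
    """Append an overall parity bit, giving an 8-bit SEC-DED word."""
    c = hamming74_encode(d)
    return c + [sum(c) % 2]

def secded_decode(word):
    """Decode an 8-bit SEC-DED word. Returns (data_bits, status) with
    status in {'ok', 'corrected', 'double_error'}."""
    c = list(word[:7])
    overall = word[7]
    # Each syndrome bit checks the 1-based positions whose index has
    # the corresponding binary digit set.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    parity_ok = (sum(c) + overall) % 2 == 0
    if syndrome == 0 and parity_ok:
        return [c[2], c[4], c[5], c[6]], 'ok'
    if syndrome != 0 and not parity_ok:
        c[syndrome - 1] ^= 1            # single error: flip the bad bit
        return [c[2], c[4], c[5], c[6]], 'corrected'
    if syndrome != 0 and parity_ok:
        return None, 'double_error'     # two errors: detect, cannot fix
    # syndrome == 0 but parity wrong: the overall parity bit flipped
    return [c[2], c[4], c[5], c[6]], 'corrected'
```

The syndrome points directly at the erroneous position, while the overall parity bit distinguishes single errors (correctable) from double errors (detectable only).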
Abstract:
Biometrics has become important in security applications. Compared with many other biometric features, iris recognition has very high recognition accuracy because it depends on the iris, which remains stable throughout human life, and because the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most critical one. Current segmentation methods still have limitations in localizing the iris because they assume a circular pupil shape. In this research, the Daugman method is applied to investigate segmentation techniques. Eyelid detection is a further step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. The Hamming distance is used to compare iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge-detector operators is performed; the Canny operator is observed to be best suited to extracting most of the edges for generating the iris code. A recognition rate of 89% and a rejection rate of 95% are achieved.
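The template-comparison stage described above can be sketched as a fractional Hamming distance between binary iris codes. The masks and function name below are illustrative assumptions (Daugman-style systems typically exclude bits occluded by eyelids or reflections via noise masks):

```python
import numpy as np

def masked_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two boolean iris codes,
    counting only bit positions valid (unmasked) in both templates."""
    valid = mask_a & mask_b
    n = valid.sum()
    if n == 0:
        return 1.0                      # nothing comparable: no match
    return float(((code_a ^ code_b) & valid).sum()) / n
```

Templates whose fractional distance falls below a decision threshold (tuned per system, commonly around 0.3 in the literature) would be declared a match.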
Abstract:
The median problem is a classical problem in Location Theory: one searches for a location that minimizes the average distance to the sites of the clients. This is appropriate for desired facilities, such as a distribution center for a set of warehouses. More recently, for obnoxious facilities, the antimedian has been studied: here one maximizes the average distance to the clients. In this paper the mixed case is studied. Clients are represented by a profile, which is a sequence of vertices with repetitions allowed. In a signed profile each element is provided with a sign from {+, −}. Thus one can take into account whether the client prefers the facility (with a + sign) or rejects it (with a − sign). The graphs for which all median sets, or all antimedian sets, are connected are characterized. Various consensus strategies for signed profiles are studied, amongst which Majority, Plurality and Scarcity. Hypercubes are the only graphs on which Majority produces the median set for all signed profiles. Finally, the antimedian sets are found by the Scarcity strategy on, e.g., Hamming graphs, Johnson graphs and halfcubes.
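On hypercubes, where vertices are 0/1 vectors and distance is the Hamming distance, the (unsigned) median set of a profile can be computed coordinate by coordinate via majority. A minimal sketch under those assumptions (hypothetical function name, profile given as 0/1 tuples):

```python
from itertools import product

def hypercube_median(profile):
    """Median set of a profile on the hypercube: the vertices minimizing
    total Hamming distance to the profile. Coordinatewise, this is the
    majority bit; on a tie both bits are optimal."""
    n = len(profile[0])
    half = len(profile) / 2
    choices = []
    for i in range(n):
        ones = sum(v[i] for v in profile)
        if ones > half:
            choices.append((1,))
        elif ones < half:
            choices.append((0,))
        else:
            choices.append((0, 1))      # tie: either bit minimizes cost
    return {bits for bits in product(*choices)}
```

Tied coordinates are exactly why median sets can contain more than one vertex (indeed a subcube) for even-length profiles.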
Abstract:
This paper presents the results of an experimental program and an analytical assessment of the influence of the addition of fibers on the mechanical properties of concrete. Models derived from regression analysis of 60 test data for various mechanical properties of steel fiber-reinforced concrete are presented. The strength properties studied are cube and cylinder compressive strength, split tensile strength, modulus of rupture and post-cracking performance, modulus of elasticity, Poisson's ratio, and the strain corresponding to peak compressive stress. The variables considered are the grade of concrete, namely normal strength (35 MPa), moderately high strength (65 MPa) and high-strength concrete (85 MPa), and the volume fraction of the fiber (Vf = 0.0, 0.5, 1.0, and 1.5%). The strength of steel fiber-reinforced concrete predicted using the proposed models has been compared with the test data from the present study and with various other test data reported in the literature; the proposed models predicted the test data quite accurately. The study indicates that the fiber-matrix interaction contributes significantly to the enhancement of mechanical properties caused by the introduction of fibers, which is at variance with both existing models and formulations based on the law of mixtures.
Abstract:
Research in the area of geopolymers has gained momentum during the past 20 years. Studies confirm that geopolymer concrete has good compressive strength, tensile strength, flexural strength, modulus of elasticity and durability, and that these properties are comparable with those of OPC concrete. There are many occasions where concrete is exposed to elevated temperatures, such as fire, exposure from thermal processors and furnaces, nuclear exposure, etc. In such cases, understanding the behaviour of concrete and structural members exposed to elevated temperatures is vital. Although many research reports are available on the behaviour of OPC concrete at elevated temperatures, only limited information is available on the behaviour of geopolymer concrete after exposure to elevated temperatures. A preliminary study was carried out for the selection of a mix proportion. The important variables considered in the present study include the alkali/fly ash ratio, the percentage of total aggregate content, the fine aggregate to total aggregate ratio, the molarity of sodium hydroxide, the sodium silicate to sodium hydroxide ratio, the curing temperature and the curing period. The influence of these variables on the engineering properties of geopolymer concrete was investigated. A study on the interface shear strength of reinforced and unreinforced geopolymer concrete, as well as of OPC concrete, was also carried out. Engineering properties of fly ash based geopolymer concrete after exposure to elevated temperatures (ambient to 800 °C) were studied and the results were compared with those of conventional concrete. Scanning Electron Microscope analysis, Fourier Transform Infrared analysis, X-ray powder Diffractometer analysis and Thermogravimetric analysis of geopolymer mortar or paste at ambient temperature and after exposure to elevated temperature were also carried out in the present research work.
An experimental study was conducted on geopolymer concrete beams after exposure to elevated temperatures (ambient to 800 °C). Load-deflection characteristics, ductility and moment-curvature behaviour of the geopolymer concrete beams after exposure to elevated temperatures were investigated. Based on the present study, the major conclusions can be summarized as follows. There is a definite proportion of the various ingredients that achieves maximum strength properties: geopolymer concrete with a total aggregate content of 70% by volume, a fine aggregate to total aggregate ratio of 0.35, NaOH molarity of 10, a Na2SiO3/NaOH ratio of 2.5 and an alkali to fly ash ratio of 0.55 gave the maximum compressive strength in the present study. Early strength development in geopolymer concrete can be achieved by proper selection of the curing temperature and curing period: with 24 hours of curing at 100 °C, 96.4% of the 28th-day cube compressive strength was achieved in 7 days in the present study. The interface shear strength of geopolymer concrete is lower than that of OPC concrete: a reduction of 33% and 29% was observed for unreinforced and reinforced geopolymer specimens respectively. The interface shear strength of geopolymer concrete can be approximately estimated as 50% of the value obtained from the available equations for the interface shear strength of ordinary Portland cement concrete (the method used by Mattock and ACI). Fly ash based geopolymer concrete undergoes a high rate of strength loss (compressive strength, tensile strength and modulus of elasticity) during its early heating period (up to 200 °C) compared to OPC concrete. At temperature exposures beyond 600 °C, the unreacted crystalline materials in geopolymer concrete transform into an amorphous state and undergo polymerization.
As a result, there is no further strength loss (compressive strength, tensile strength and modulus of elasticity) in geopolymer concrete, whereas OPC concrete continues to lose its strength properties at a faster rate beyond a temperature exposure of 600 °C. At present no equation is available to predict the strength properties of geopolymer concrete after exposure to elevated temperatures. Based on the study carried out, new equations have been proposed to predict the residual strengths (cube compressive strength, split tensile strength and modulus of elasticity) of geopolymer concrete after exposure to elevated temperatures (up to 800 °C). These equations could be used for material modelling until better refined equations are available. Compared to OPC concrete, geopolymer concrete shows better resistance to surface cracking when exposed to elevated temperatures: in the present study, while OPC concrete started developing cracks at 400 °C, geopolymer concrete did not show any visible cracks up to 600 °C and developed only minor cracks at an exposure temperature of 800 °C. Geopolymer concrete beams develop cracks at early load stages if they are exposed to elevated temperatures. Even though the material strength of geopolymer concrete does not decrease beyond 600 °C, the flexural strength of the corresponding beams reduces rapidly after exposure to temperatures above 600 °C, primarily due to the rapid loss of strength of the steel. With increasing temperature, the curvature at the yield point of a geopolymer concrete beam increases and the ductility thereby reduces; in the present study, compared to the ductility at ambient temperature, the ductility of geopolymer concrete beams reduced by 63.8% after exposure to 800 °C. Appropriate equations have been proposed to predict the service-load crack width of geopolymer concrete beams exposed to elevated temperatures.
These equations could be used to limit the service load on geopolymer concrete beams exposed to elevated temperatures (up to 800 °C) for a predefined crack width (between 0.1 mm and 0.3 mm), or vice versa. The moment-curvature relationship of geopolymer concrete beams at ambient temperature is similar to that of RCC beams and can be predicted using the strain compatibility approach. Once exposed to an elevated temperature, however, the strain compatibility approach underestimates the curvature of geopolymer concrete beams between the first cracking and yielding points.
Abstract:
In algebraic cryptanalysis, modern cryptosystems are represented as polynomial, non-linear systems of equations. Solving such systems is NP-hard, so there is no algorithm that solves an arbitrary non-linear system of equations in polynomial time. Nevertheless, the systems of equations generated from modern cryptosystems have a great deal of structure: with suitable modelling they are quadratic and sparse, and thus far from arbitrary. Special algorithms exist that find solutions of such systems. One example is the ElimLin algorithm, which iteratively simplifies the system of equations using linear equations. Based on this algorithm, the dissertation presents a new solver for quadratic, sparse systems of equations and uses it to attack two symmetric cryptosystems. The techniques for modelling the ciphers are of crucial importance here, so new techniques for representing cryptosystems are developed. The idea for the model comes from cube attacks, which are particularly effective against stream ciphers. The thesis classifies different variants of these attacks and presents possible extensions. The resulting model, moreover, can also be extended successfully to block ciphers and to other scenarios, requiring only minor changes to the model.
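One ElimLin iteration — substituting a discovered linear equation into the rest of the system — can be sketched over GF(2). This is a simplified toy (the real algorithm also performs Gaussian elimination to discover linear polynomials; the representation and names are assumptions): a polynomial is a set of monomials, a monomial a frozenset of variable indices, and the empty frozenset is the constant 1.

```python
def xor_in(poly, mono):
    """Add a monomial to a GF(2) polynomial; over GF(2) it cancels
    if already present."""
    if mono in poly:
        poly.remove(mono)
    else:
        poly.add(mono)

def substitute(poly, var, value):
    """Replace variable `var` by the polynomial `value` inside `poly`."""
    out = set()
    for mono in poly:
        if var in mono:
            rest = mono - {var}
            for t in value:             # distribute: rest * value
                xor_in(out, frozenset(rest | t))
        else:
            xor_in(out, mono)
    return out

def elimlin_step(system):
    """One ElimLin pass: find a linear equation x + value = 0 and
    substitute x = value into all other equations."""
    for i, poly in enumerate(system):
        lin_vars = [next(iter(m)) for m in poly if len(m) == 1]
        if poly and all(len(m) <= 1 for m in poly) and lin_vars:
            x = lin_vars[0]
            value = {m for m in poly if x not in m}
            return [substitute(p, x, value)
                    for j, p in enumerate(system) if j != i]
    return system                        # no linear equation found
```

For instance, given x0*x1 + x2 = 0 and x0 + 1 = 0, the step substitutes x0 = 1 and reduces the system to x1 + x2 = 0, trading a quadratic equation for a simpler one.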
Abstract:
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance.
If separate controllers for processing protocol requests are included, the protocol scales to 32k processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be able to be satisfied locally; at most 35% of the global references are allowed to reach the top level of the hierarchy.
Abstract:
Considers bandpass filters, Huffman coding, arithmetic coding and Hamming coding.