985 results for Lattice-based cryptography
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on the way to becoming routine applications. Their further potential lies in the environmental field, e.g. for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is frequently used to examine the internal structure of geological samples. The first goal of this dissertation was the implementation of an image-processing technique that removes the beam hardening of X-ray computed tomography and simplifies the segmentation of its data. The second goal of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modelling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the centre, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected with a surface-fitting algorithm. The least-squares support vector machine (LSSVM) is characterized by a modular structure and is very well suited to pattern recognition and classification. For this reason, the LSSVM was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in such cases requires longer computation times because multidimensional training data sets must be used. The dynamics of the immiscible phases air and water are investigated with a combination of the pore-morphology method and the lattice Boltzmann method for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although the pore-morphology method is a simple approach of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis is observed for the capillary pressure and the hydraulic conductivity, caused by the predominantly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water saturation level and is compared with macroscopic calculations from empirical models; the data agree well, especially at high water saturations. In order to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric and long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and by the heterogeneity of the flow field. It was observed that the biocolloid residence time is a function of both the pressure gradient and the colloid size.
Our modelling results agree very well with previously published data.
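The pore-morphology (morphological drainage) step described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the code used in the dissertation; the array `pore_mask`, the inlet-face convention (z = 0) and the use of scipy.ndimage are assumptions. It fits spheres of decreasing radius into the pore space via a Euclidean distance transform and keeps only the non-wetting phase that stays connected to the inlet, which is the basic mechanism behind the capillary pressure-saturation curves discussed above.

```python
import numpy as np
from scipy import ndimage

def drainage_saturation(pore_mask, radii_voxels):
    """Minimal pore-morphology drainage sketch (assumed conventions).

    pore_mask    : 3D boolean array, True where voxels belong to the pore space.
    radii_voxels : decreasing sequence of sphere radii (in voxels), each radius
                   corresponding to one capillary-pressure step.
    Returns one wetting-phase saturation per radius.
    """
    dist = ndimage.distance_transform_edt(pore_mask)   # largest inscribed sphere radius per voxel
    pore_volume = pore_mask.sum()
    struct = ndimage.generate_binary_structure(3, 1)
    saturations = []
    for r in radii_voxels:
        # Voxels whose distance value exceeds r are centres of spheres of radius r.
        centres = dist >= r
        # Grow the centres back by r iterations (approximating a sphere of radius r).
        nonwetting = ndimage.binary_dilation(
            centres, structure=struct, iterations=max(1, int(round(r)))) & pore_mask
        # Keep only non-wetting regions connected to the assumed inlet face at z = 0.
        labels, _ = ndimage.label(nonwetting)
        inlet_labels = np.unique(labels[0][labels[0] > 0])
        connected = np.isin(labels, inlet_labels)
        saturations.append(1.0 - connected.sum() / pore_volume)  # wetting-phase saturation
    return saturations
```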
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as aero- and hydrodynamic systems, which dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines to capture flow patterns and how to pick good viewpoints to observe flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnections in the transformed space. Most viewpoint selection methods only consider external viewpoints outside of the flow field, which do not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints, providing users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
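The internal-viewpoint exploration in [54] traverses a set of selected viewpoints with a B-spline camera path. As a rough, self-contained illustration of that idea (not the authors' implementation; the viewpoint coordinates and the use of scipy are assumptions), one can fit a smooth parametric spline through the chosen internal viewpoints and sample camera positions along it:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical internal viewpoints (x, y, z) selected near flow features.
viewpoints = np.array([[0.2, 0.5, 0.1],
                       [0.4, 0.6, 0.3],
                       [0.5, 0.4, 0.5],
                       [0.7, 0.5, 0.6],
                       [0.8, 0.3, 0.8]])

# Fit a cubic B-spline through the viewpoints (s=0 forces exact interpolation).
tck, _ = splprep(viewpoints.T, s=0, k=3)

# Sample 200 camera positions along the path for a smooth fly-through.
u = np.linspace(0.0, 1.0, 200)
path = np.stack(splev(u, tck), axis=1)   # shape (200, 3)
```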
Abstract:
Traditional transportation fuel, petroleum, is limited and nonrenewable, and it also causes pollution. Hydrogen is considered one of the best alternative fuels for transportation. The key issue for using hydrogen as a transportation fuel is hydrogen storage. Lithium nitride (Li3N) is an important material which can be used for hydrogen storage. The decompositions of lithium amide (LiNH2) and lithium imide (Li2NH) are important steps for hydrogen storage in Li3N. The effect of anions (e.g. Cl-) on the decomposition of LiNH2 has never been studied. Li3N can react with LiBr to form lithium nitride bromide Li13N4Br, which has been proposed as a solid electrolyte for batteries. The decompositions of LiNH2 and Li2NH with and without promoter were investigated using temperature-programmed decomposition (TPD) and X-ray diffraction (XRD) techniques. It was found that the decomposition of LiNH2 produced Li2NH and NH3 via two steps: LiNH2 into a stable intermediate species (Li1.5NH1.5) and then into Li2NH. The decomposition of Li2NH produced Li, N2 and H2 via two steps: Li2NH into an intermediate species, Li4NH, and then into Li. The kinetic analysis of Li2NH decomposition showed that the activation energies are 533.6 kJ/mol for the first step and 754.2 kJ/mol for the second step. Furthermore, XRD demonstrated that the Li4NH generated in the decomposition of Li2NH formed a solid solution with Li2NH. In the solid solution, Li4NH possesses a cubic structure similar to that of Li2NH. The lattice parameter of the cubic Li4NH is 0.5033 nm. The decompositions of LiNH2 and Li2NH can be promoted by the chloride ion (Cl-). The introduction of Cl- into LiNH2 resulted in the generation of a new NH3 peak at a low temperature of 250 °C besides the original NH3 peak at 330 °C in TPD profiles. Furthermore, Cl- can decrease the decomposition temperature of Li2NH by about 110 °C. The degradation of Li3N was systematically investigated with XRD, Fourier transform infrared (FT-IR) spectroscopy, and UV-visible spectroscopy. It was found that O2 does not affect Li3N at room temperature. However, H2O in air can cause the degradation of Li3N due to the reaction between H2O and Li3N to form LiOH. The produced LiOH can further react with CO2 in air to form Li2CO3 at room temperature. Furthermore, it was revealed that α-Li3N is more stable in air than β-Li3N. The chemical stability of Li13N4Br in air was investigated by XRD, TPD-MS, and UV-vis absorption as a function of time. The aging process finally leads to the degradation of Li13N4Br into Li2CO3 and lithium bromite (LiBrO2) and the release of gaseous NH3. A reaction order of n = 2.43 gives the best fit for the degradation of Li13N4Br in air. The energy gap of Li13N4Br was calculated to be 2.61 eV.
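The abstract does not state which kinetic model yields the activation energies of 533.6 and 754.2 kJ/mol. Purely as a representative illustration (the actual analysis in the work may differ), a common way to extract an activation energy from temperature-programmed decomposition data is the Kissinger analysis, which relates the peak temperature T_p at heating rate β to the activation energy E_a:

```latex
\ln\!\left(\frac{\beta}{T_p^{2}}\right) \;=\; -\,\frac{E_a}{R\,T_p} \;+\; \ln\!\left(\frac{A\,R}{E_a}\right),
```

so that a plot of ln(β/T_p²) versus 1/T_p over several heating rates gives E_a from the slope −E_a/R.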
Abstract:
In this paper a superelement formulation for geometrically nonlinear finite element analysis is proposed. The element formulation is based on matrices generated by the static condensation algorithm. After defining the element characteristics, a method for the calculation of the element forces in a large displacement and rotation analysis is developed. In order to use the element in the solution of stability problems, the formulation of the geometric stiffness matrix is derived. An example shows the benefits of the element for the calculation of lattice-boom cranes.
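For reference, the static condensation step underlying the superelement matrices can be written compactly. Partitioning the element degrees of freedom into retained boundary (external) DOFs u_e and eliminated internal DOFs u_i, the standard condensed stiffness matrix and load vector are (notation here is generic rather than taken from the paper):

```latex
\begin{bmatrix} \mathbf{K}_{ee} & \mathbf{K}_{ei} \\ \mathbf{K}_{ie} & \mathbf{K}_{ii} \end{bmatrix}
\begin{Bmatrix} \mathbf{u}_e \\ \mathbf{u}_i \end{Bmatrix}
=
\begin{Bmatrix} \mathbf{f}_e \\ \mathbf{f}_i \end{Bmatrix}
\quad\Longrightarrow\quad
\tilde{\mathbf{K}} = \mathbf{K}_{ee} - \mathbf{K}_{ei}\,\mathbf{K}_{ii}^{-1}\,\mathbf{K}_{ie},
\qquad
\tilde{\mathbf{f}} = \mathbf{f}_e - \mathbf{K}_{ei}\,\mathbf{K}_{ii}^{-1}\,\mathbf{f}_i .
```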
Abstract:
Using ultracold alkaline-earth atoms in optical lattices, we construct a quantum simulator for U(N) and SU(N) lattice gauge theories with fermionic matter based on quantum link models. These systems share qualitative features with QCD, including chiral symmetry breaking and restoration at nonzero temperature or baryon density. Unlike classical simulations, a quantum simulator does not suffer from sign problems and can address the corresponding chiral dynamics in real time.
Abstract:
We carry out lattice simulations of a cosmological electroweak phase transition for a Higgs mass mh ≈ 126 GeV. The analysis is based on a dimensionally reduced effective theory for an MSSM-like scenario including a relatively light coloured SU(2)-singlet scalar, referred to as a right-handed stop. The non-perturbative transition is stronger than in 2-loop perturbation theory, and may offer a window for electroweak baryogenesis. The main remaining uncertainties concern the physical value of the right-handed stop mass which according to our analysis could be as high as mR ≈ 155 GeV; a more precise effective theory derivation and vacuum renormalization than available at present are needed for confirming this value.
Abstract:
This note is based on our recent results on QCD with a varying number of flavors of fundamental fermions. Topics include unusual, strong dynamics in the preconformal, confining phase, the physics of the conformal window, and the role of ab-initio lattice simulations in establishing our current knowledge of the phases of many-flavor QCD.
Abstract:
We present a lattice QCD calculation of the up, down, strange and charm quark masses performed using the gauge configurations produced by the European Twisted Mass Collaboration with Nf=2+1+1 dynamical quarks, which include in the sea, besides two light mass-degenerate quarks, also the strange and charm quarks with masses close to their physical values. The simulations are based on a unitary setup for the two light quarks and on a mixed action approach for the strange and charm quarks. The analysis uses data at three values of the lattice spacing and pion masses in the range 210-450 MeV, allowing for an accurate continuum limit and controlled chiral extrapolation. The quark mass renormalization is carried out non-perturbatively using the RI′-MOM method. The results for the quark masses converted to the MS-bar scheme are: mud(2 GeV)=3.70(17) MeV, ms(2 GeV)=99.6(4.3) MeV and mc(mc)=1.348(46) GeV. We also obtain the quark mass ratios ms/mud=26.66(32) and mc/ms=11.62(16). By studying the mass splitting between the neutral and charged kaons and using available lattice results for the electromagnetic contributions, we evaluate mu/md=0.470(56), leading to mu=2.36(24) MeV and md=5.03(26) MeV.
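As a quick consistency check of the quoted numbers (using only the values given in the abstract and the standard definition m_ud = (m_u + m_d)/2), the individual light-quark masses follow from m_ud and the ratio r = m_u/m_d:

```latex
m_u = \frac{2r}{1+r}\,m_{ud} = \frac{2(0.470)}{1.470}\,(3.70~\mathrm{MeV}) \simeq 2.37~\mathrm{MeV},
\qquad
m_d = \frac{2}{1+r}\,m_{ud} = \frac{2}{1.470}\,(3.70~\mathrm{MeV}) \simeq 5.03~\mathrm{MeV},
```

in agreement with the quoted m_u = 2.36(24) MeV and m_d = 5.03(26) MeV.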
Abstract:
After reviewing how simulations employing classical lattice gauge theory make it possible to test a conjectured Euclideanization property of a light-cone Wilson loop in a thermal non-Abelian plasma, we show how Euclidean data can in turn be used to estimate the transverse collision kernel, C(k⊥), characterizing the broadening of a high-energy jet. First results, based on data produced recently by Panero et al., suggest that C(k⊥) is enhanced over the known NLO result in the soft regime k⊥ < a few T. The shape of k⊥³ C(k⊥) is consistent with a Gaussian at small k⊥.
Abstract:
The behavior of bottomonium state correlators at non-zero temperature, 140.4 MeV (β = 6.664) ≤ T ≤ 221 MeV (β = 7.280), where the transition temperature is 154(9) MeV, is studied using lattice NRQCD on 48³ × 12 HotQCD HISQ-action configurations with light dynamical Nf = 2+1 (m_{u,d}/m_s = 0.05) staggered quarks. In order to understand finite-temperature effects on quarkonium states, the zero-temperature behavior of bottomonium correlators is compared, based on 32⁴ (β = 6.664, 6.800 and 6.950) and 48³ × 64 (β = 7.280) lattices. We find that temperature effects on S-wave bottomonium states are small, but P-wave bottomonium states show a noticeable temperature dependence above the transition temperature.
Abstract:
We present results on the nucleon scalar, axial, and tensor charges as well as on the momentum fraction, and the helicity and transversity moments. The pion momentum fraction is also presented. The computation of these key observables is carried out using lattice QCD simulations at a physical value of the pion mass. The evaluation is based on gauge configurations generated with two degenerate sea quarks of twisted mass fermions with a clover term. We investigate excited-state contributions with the nucleon quantum numbers by analyzing three sink-source time separations. We find that excited states contribute significantly to the scalar charge and, to a lesser degree, to the nucleon momentum fraction and helicity moment. Our result for the nucleon axial charge agrees with the experimental value. Furthermore, we predict a value of 1.027(62) in the MS-bar scheme at 2 GeV for the isovector nucleon tensor charge directly at the physical point. The pion momentum fraction is found to be ⟨x⟩^{π±}_{u−d} = 0.214(15)(+12/−9) in the MS-bar scheme at 2 GeV.
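The excited-state analysis with three sink-source separations is typically carried out on the ratio of three-point to two-point functions; a standard two-state form (given here only as a generic illustration, since the abstract does not spell out the fit ansatz) is

```latex
R(t_s,\tau) \;=\; g \;+\; c_1\, e^{-\Delta E\,(t_s-\tau)} \;+\; c_2\, e^{-\Delta E\,\tau} \;+\; c_3\, e^{-\Delta E\, t_s},
```

where t_s is the sink-source separation, τ the operator insertion time, g the desired ground-state matrix element (e.g. a charge or moment), and ΔE the gap to the first excited state; comparing fits across the three values of t_s exposes the size of the excited-state contamination.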
Abstract:
The objective of this thesis is the development of a new concept of label-free optical biosensor, based on a combination of vertically interrogated optical characterization techniques and sub-micrometre structures fabricated on silicon chips. The most important features of this device are its simplicity, both from the point of view of the optical measurement and of the introduction of the samples into the sensing area, aspects that are often critical in the majority of sensors found in the literature. Each of the aspects involved in the design of a biosensor, which are essentially four (photonic design, optical characterization, fabrication, and fluidics/chemical immobilization), is developed in detail in the corresponding chapters. The first part of the thesis introduces the concept of a biosensor: what it consists of, which types exist, and which parameters are most commonly used to quantify its behaviour. Subsequently, an analysis of the state of the art is presented, focused in particular on label-free optical biosensors; the biochemical reactions to be studied (immunoassays) are also introduced. The second part first describes the optical techniques used for characterization: reflectometry, ellipsometry and spectrometry, together with the reasons that led to their use. Several designs of the so-called "optofluidic cells", the devices in which the biochemical interaction takes place, are then introduced. Four different devices are presented and, together with them, several methods for the theoretical calculation of the expected optical response are proposed. The expected sensitivity of each cell is then calculated, and the fabrication processes and fluidic behaviour of each of them are analysed. Once all the critical aspects of the biosensor behaviour have been analysed, an optimization of its design can be carried out. This is done using a simplified calculation model (a 1.5-D model) that allows parameters such as the sensitivity and the limit of detection of a large number of devices to be obtained in a relatively short time. Two of the proposed optofluidic cells are chosen for this process. The final part of the thesis presents the experimental results. First, a cell based on sub-micrometre holes is characterized as a refractive-index sensor using different organic liquids; these experimental results show good agreement with the previous theoretical calculations, which validates the conceptual model presented. Finally, an immunoassay is performed on another of the proposed cells (nanometric pillars of SU-8 polymer), using bovine serum albumin (BSA) and its antibody (antiBSA).
The fabrication of the cell, the functionalization of the surface with the bioreceptors (in this case, BSA) and the biorecognition process (antiBSA) are described in detail. This gives a first estimate of the limit of detection that can be expected for this type of sensor in a standard immunoassay; in this case, a value of 2.3 ng/mL is reached, which is competitive with other similar assays found in the literature. The main conclusion of the thesis is that this type of device can be used as an immunosensor, and offers certain advantages over existing ones. These advantages are again associated with its simplicity, both in the optical measurement and in the introduction of the bioanalytes into the sensing area (by simply depositing a droplet on the micro/nano-structure). The theoretical calculations performed during the optimization suggest that the sensor performance, expressed in quantities such as the biological limit of detection, can be greatly improved by a denser packing of the pillar lattice, reaching a minimum value of 0.59 ng/mL.
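The limit-of-detection figures quoted above (2.3 ng/mL measured, 0.59 ng/mL predicted) can be related to the sensor response in the usual way; as a generic reminder (not necessarily the exact definition used in the thesis, whose 1.5-D model is not reproduced here), the detection limit follows from the system noise and the sensitivity of the optical readout:

```latex
\mathrm{LOD} \;=\; \frac{3\,\sigma_{\text{noise}}}{S},
\qquad
S \;=\; \frac{\partial(\text{optical signal})}{\partial(\text{analyte concentration})},
```

which is why a denser pillar lattice, by increasing the sensitivity S of the vertically interrogated response, lowers the achievable LOD.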
Abstract:
One of the main obstacles to the widespread adoption of quantum cryptography has been the difficulty of integration into standard optical networks, largely due to the tremendous difference in power between classical signals and the single quanta used for quantum key distribution. This makes the technology expensive and hard to deploy. In this letter, we show an easy and straightforward method for integrating quantum cryptography into optical access networks. In particular, we analyze how a quantum key distribution system can be seamlessly integrated into a standard access network based on the passive optical and time-division multiplexing paradigms. The novelty of this proposal lies in the selective post-processing that allows for the distillation of secret keys while avoiding the noise produced by other network users. Importantly, the proposal requires neither the modification of the quantum or classical hardware specifications nor the use of any synchronization mechanism between the network and the quantum cryptography devices.
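A rough way to picture selective post-processing in a time-division-multiplexed access network (this is an illustrative guess at the mechanism, not the scheme from the letter; the slot length, slot assignment and data layout are invented for the example) is to keep, before sifting, only those detection events that fall inside the time slots assigned to the intended user:

```python
def keep_own_slots(detections, slot_assignment, slot_length_ns, frame_slots):
    """Keep detection events falling in the time slots assigned to this user.

    detections      : list of (timestamp_ns, bit) tuples from the QKD receiver.
    slot_assignment : set of slot indices within a frame assigned to this user.
    slot_length_ns  : duration of one TDM slot in nanoseconds.
    frame_slots     : number of slots per TDM frame.
    (All conventions here are hypothetical, for illustration only.)
    """
    kept = []
    for timestamp_ns, bit in detections:
        slot = (timestamp_ns // slot_length_ns) % frame_slots
        if slot in slot_assignment:      # discard events caused by other users' traffic
            kept.append((timestamp_ns, bit))
    return kept
```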
Abstract:
Nowadays, publish-subscribe middleware systems with content-based message filtering are becoming popular, and such a system requires encoding its messages as combinations of elements drawn from disjoint sets. Several possible predicates over the domains of these sets form a filter, and the core of the filtering algorithm is to select the matching filters as quickly as possible. However, the set formed by the filters is extremely indeterminate and extensible, which constrains the filtering algorithm. To address this extensibility, the algebraic characteristics of the filter set were studied, showing that it is a specific lattice. Therefore, the property that lattices form a partially ordered set (poset) with bounds is used to reduce the size of the filter set (equivalent compression). For these reasons, it is necessary to implement an abstract lattice container and to evaluate its performance both in theory and in practice, as a solution to the extensibility of the filter set. The lattice is an important structure in abstract algebra; it is commonly used to solve theoretical problems but is rarely available as an abstract container in software, because its implementation is complex even though it is trivial in algebra, and this is what makes the work difficult. In order to avoid the complex theory of the practical system, only its core algorithm, the counting algorithm, is introduced, together with the problem it faces: the extensibility of the filter set. Next, a possible lattice-based solution is investigated theoretically, and the implementation design, xUnit test criteria and evaluation parameters are derived. Finally, the environment, results, analysis and conclusions of the performance test are reported.
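The counting algorithm mentioned above can be sketched compactly. The version below is a minimal, generic illustration (the attribute/predicate representation, names and event format are assumptions, not the thesis code): each filter is a conjunction of predicates; for an incoming message, every satisfied predicate increments a counter for its filter, and a filter matches when its counter reaches its total number of predicates.

```python
from collections import defaultdict

def match_filters(message, filters):
    """Minimal counting-algorithm sketch for content-based filtering.

    message : dict mapping attribute name -> value, e.g. {"topic": "ipo", "price": 120}.
    filters : dict mapping filter id -> list of (attribute, test) pairs, where test is
              a one-argument boolean function.
    Returns the ids of the filters whose predicates are all satisfied.
    """
    counts = defaultdict(int)
    for fid, predicates in filters.items():
        for attribute, test in predicates:
            if attribute in message and test(message[attribute]):
                counts[fid] += 1                  # count satisfied predicates per filter
    return [fid for fid, predicates in filters.items() if counts[fid] == len(predicates)]

# Example: two filters over disjoint attribute domains.
filters = {
    "f1": [("topic", lambda v: v == "ipo"), ("price", lambda v: v > 100)],
    "f2": [("topic", lambda v: v == "earnings")],
}
print(match_filters({"topic": "ipo", "price": 120}, filters))   # -> ['f1']
```

(A production implementation would index predicates so that only the predicates relevant to the message's attributes are evaluated, rather than iterating over every filter.)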
Abstract:
The postprocessing or secret-key distillation process in quantum key distribution (QKD) mainly involves two well-known procedures: information reconciliation and privacy amplification. Information or key reconciliation has customarily been studied in terms of efficiency. During this procedure, some information needs to be disclosed in order to reconcile the discrepancies between the exchanged keys. The leakage of information is lower bounded by a theoretical limit and is usually parameterized by the reconciliation efficiency (or inefficiency), i.e. the ratio of the information disclosed to the Shannon limit. Most techniques for reconciling errors in QKD try to optimize this parameter. For instance, the well-known Cascade protocol (probably the most widely used procedure for reconciling errors in QKD) was recently shown to have an average efficiency of 1.05, at the cost of high interactivity (number of exchanged messages). Modern coding techniques, such as rate-adaptive low-density parity-check (LDPC) codes, were also shown to achieve similar efficiency values while exchanging only one message, or even better values with little interactivity and shorter block-length codes.
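The efficiency figures above translate directly into disclosed bits. The short sketch below (illustrative only; the key length and error rate are made-up inputs) computes the Shannon limit n·h(QBER) and the information leaked for a given reconciliation efficiency f, e.g. the f ≈ 1.05 quoted for Cascade:

```python
import math

def binary_entropy(p):
    """Shannon binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def reconciliation_leakage(n, qber, efficiency):
    """Bits disclosed during information reconciliation: leak = f * n * h(QBER)."""
    return efficiency * n * binary_entropy(qber)

# Example with made-up numbers: 10^6 sifted bits, 2% QBER.
n, qber = 1_000_000, 0.02
print(reconciliation_leakage(n, qber, 1.00))   # Shannon limit, about 141440 bits
print(reconciliation_leakage(n, qber, 1.05))   # about 148512 bits disclosed at f = 1.05
```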