956 results for Geometry of numbers
Abstract:
This work consists of the conception, development, and implementation of a CAE computational routine with algorithms suitable for stress and strain analysis. The system was integrated into an academic software package named OrtoCAD. The expansion algorithms for the CAE interface generated by this work were developed in FORTRAN with the objective of extending the applications of two earlier works of PPGEM-UFRN: the design and fabrication of an electromechanical reader, and the OrtoCAD software. OrtoCAD is an interface that originally included the visualization of prosthetic sockets from data obtained by an electromechanical reader (LEM). The LEM is essentially a three-dimensional scanner based on reverse engineering. First, the geometry of a residual limb (i.e., the remaining part of an amputated leg on which the prosthesis is fitted) is obtained from the data generated by the LEM using reverse-engineering concepts. The proposed FEA core uses shell theory, in which a 2D surface is generated from a 3D part coming from OrtoCAD. The shell-analysis program uses the well-known Finite Element Method to describe the geometry and the behavior of the material. The program is based on square nine-node Lagrangian elements with a higher-order displacement field for a better description of the stress field through the thickness. As a result, the new FEA routine provides clear advantages by adding new features to OrtoCAD: independence from high-cost commercial software; new routines added to the OrtoCAD library for more realistic problems using failure criteria for composite materials; improved FEA performance through a specific mesh element with a higher number of nodes; and, finally, the advantages of an open-source project, offering intrinsic versatility and wide possibilities for editing and/or optimization that may be necessary in the future.
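As an illustration only (not the OrtoCAD/FORTRAN routine itself), the sketch below shows the shape functions of the nine-node Lagrangian quadrilateral element mentioned above, built as tensor products of 1-D quadratic Lagrange polynomials on the reference square.

```python
# A minimal sketch, assuming the standard Q9 Lagrangian element on the
# reference square [-1, 1] x [-1, 1]; not the thesis' FORTRAN code.
import numpy as np

def lagrange_quadratic_1d(s):
    """Quadratic Lagrange basis at nodes s = -1, 0, +1."""
    return np.array([0.5 * s * (s - 1.0),
                     (1.0 - s) * (1.0 + s),
                     0.5 * s * (s + 1.0)])

def shape_functions_q9(xi, eta):
    """Nine shape functions of the Lagrangian Q9 element at (xi, eta)."""
    return np.outer(lagrange_quadratic_1d(xi),
                    lagrange_quadratic_1d(eta)).ravel()

# Partition of unity: the nine functions sum to 1 anywhere in the element.
print(shape_functions_q9(0.3, -0.7).sum())   # ~1.0
```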
Abstract:
The present work aims to show a possible relationship between the use of the History of Mathematics and Information and Communication Technologies (ICT) in teaching Mathematics, through activities based on geometric constructions from the "Geometry of the Compass" (1797) by Lorenzo Mascheroni (1750-1800). To this end, a qualitative study was carried out, characterized by a historical, bibliographic exploration followed by an empirical intervention based on the use of the History of Mathematics combined with ICT through Mathematical Investigation. Papers dealing with the topic were studied, and a survey was conducted to identify problems and/or episodes from the history of mathematics that can be solved with the help of ICT, allowing the production of a notebook of activities addressing the resolution of historical problems in a computational environment. In this search, we came across the geometry problems presented by Mascheroni in the aforementioned work, for which we propose solutions and investigations using the GeoGebra software. The research resulted in the elaboration of an educational product, a notebook of activities, structured so that, during its implementation, students can conduct historical and/or mathematical investigations; we therefore present the procedures for carrying out each construction, followed at some points by the original solution from the work. At the same time, we encourage students to investigate and reflect on each construction (in GeoGebra), in addition to comparing it with Mascheroni's solution. This notebook was applied to two classes of Didactics of Mathematics I (MAT0367) in the Mathematics degree program at UFRN in 2014. Aware of certain unfavorable arguments regarding the use of the history of mathematics, such as loss of time, we found that this factor can be mitigated with the aid of computational resources, because checks can be made using only the dynamism of the software, without repeating the construction. It is noteworthy that the reduced time does not mean a loss of reflection or maturation of ideas when the process of historical and/or mathematical investigation is adopted.
Abstract:
Several materials are currently under study for the CO2 capture process, such as metal oxides and mixed metal oxides, zeolites, carbonaceous materials, metal-organic frameworks (MOFs), organosilicas, and modified silica surfaces. In this work, we evaluated the CO2 adsorption capacity of mesoporous materials with different structures, such as MCM-48 and SBA-15, unimpregnated and impregnated with nickel in proportions of 5%, 10%, and 20% (m/m), denoted 5Ni-MCM-48, 10Ni-MCM-48, 20Ni-MCM-48 and 5Ni-SBA-15, 10Ni-SBA-15, 20Ni-SBA-15. The materials were characterized by X-ray diffraction (XRD), thermal analysis (TG and DTG), Fourier-transform infrared spectroscopy (FT-IR), N2 adsorption and desorption (BET), and scanning electron microscopy (SEM) with EDS. The adsorption experiments were performed varying the pressure from 100 to 4000 kPa while keeping the temperature constant at 298 K. At a pressure of 100 kPa, the highest adsorption capacities were obtained for 5Ni-MCM-48 (0.795 mmol g-1) and unimpregnated SBA-15 (0.914 mmol g-1), and at 4000 kPa for unimpregnated MCM-48 (14.89 mmol g-1) and SBA-15 (9.97 mmol g-1). The results showed that the adsorption capacity increases with the specific surface area but also depends directly on the type and geometry of the porous channel structure. The data were fitted using the Langmuir and Freundlich models, and the thermodynamic parameters Gibbs free energy and entropy of the adsorption system were evaluated.
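For orientation, the sketch below shows one common way to fit adsorption data with the Langmuir and Freundlich isotherm models by least squares; the pressure/uptake values are placeholders, not the measured data reported above, and the fitting procedure used in the study is not specified here.

```python
# A minimal sketch, assuming standard Langmuir/Freundlich forms and
# illustrative placeholder data (not the study's measurements).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    """q = q_max * b * p / (1 + b * p)"""
    return q_max * b * p / (1.0 + b * p)

def freundlich(p, k_f, n):
    """q = k_f * p**(1/n)"""
    return k_f * p ** (1.0 / n)

p = np.array([100, 500, 1000, 2000, 3000, 4000], dtype=float)  # kPa
q = np.array([0.8, 3.1, 5.4, 8.9, 11.2, 13.0])                 # mmol/g (illustrative)

(q_max, b), _ = curve_fit(langmuir, p, q, p0=[15.0, 1e-3])
(k_f, n), _ = curve_fit(freundlich, p, q, p0=[0.1, 1.5])
print(f"Langmuir:   q_max = {q_max:.2f} mmol/g, b = {b:.2e} 1/kPa")
print(f"Freundlich: k_f = {k_f:.3f}, n = {n:.2f}")
```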
Abstract:
Understanding the occurrence and flow of groundwater in the subsurface is of fundamental importance for water exploitation, as is knowledge of the whole associated hydrogeological context. Given the nature of sedimentary aquifers, these factors are primarily controlled by the geometry of the pore system. Thus, microstructural characterization, including the interconnectivity of the pore system, is essential to determine the macroscopic properties porosity and permeability of the reservoir rock, which can be obtained through statistical characterization based on two-dimensional analysis. The latter is carried out on a computational platform, using images of thin sections of the reservoir rock and allowing the prediction of effective porosity and hydraulic conductivity. For the Barreiras Aquifer, such parameters are usually derived from the interpretation of aquifer tests, a practice that involves fairly complex logistics in terms of equipment and personnel, in addition to high operating costs. Digital image analysis and processing is therefore presented as an alternative tool for the characterization of hydraulic parameters, proving to be a practical and inexpensive method. The methodology follows a workflow involving sampling, preparation of thin sections and their respective images, segmentation, geometric characterization, three-dimensional reconstruction, and flow simulation. In this research, computational image analysis of rock thin sections indicated aquifer storage coefficients ranging from 0.035 to 0.12, with an average of 0.076, while its hydrogeological substrate (associated with the top of the carbonate sequence, which does not outcrop in the region) presents effective porosities on the order of 2%. For the transport regime, the methodology yields hydraulic conductivity values below those found in the literature, with a mean of 1.04 x 10^-6 m/s and fluctuations between 3.61 x 10^-8 m/s and 2.94 x 10^-6 m/s, probably due to the difference in scale of the study and the heterogeneity of the medium studied.
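As a rough illustration of the image-based step in this workflow (not the platform used in the study), porosity can be estimated from a segmented thin-section image as the fraction of pore pixels; the image and threshold below are synthetic stand-ins.

```python
# A minimal sketch, assuming a simple threshold segmentation of a
# thin-section image; synthetic data, not the study's samples.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a grayscale thin-section image; darker pixels (below the
# threshold) are taken as pore space after segmentation.
image = rng.integers(0, 256, size=(512, 512))
threshold = 60                      # illustrative segmentation threshold

pore_mask = image < threshold       # True where the pixel is pore
porosity_2d = pore_mask.mean()      # pore area fraction of the section
print(f"estimated 2-D porosity: {porosity_2d:.3f}")
```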
Abstract:
Einstein's equations with a negative cosmological constant possess the so-called anti-de Sitter space, AdSd+1, as one of their solutions. We will refer to this space as the "bulk". The holographic principle states that quantum gravity in AdSd+1 can be encoded by a d-dimensional quantum field theory on the boundary of AdSd+1 that is invariant under conformal transformations, a CFTd. In the most famous example, the precise statement is the duality between type IIB string theory in the space AdS5 × S5 and the 4-dimensional N = 4 supersymmetric Yang-Mills theory. Another example is provided by a relation between Einstein's equations in the bulk and the hydrodynamic equations describing the effective theory on the boundary, the so-called fluid/gravity correspondence. An extension of the AdS/CFT duality to CFTs with a boundary was proposed by Takayanagi and dubbed the AdS/BCFT correspondence. The boundary of the CFT extends into the bulk and bounds a region of AdSd+1. Neumann conditions imposed on this extension of the boundary yield a dynamical equation that determines its shape. From the perspective of the fluid/gravity correspondence, the shape of the Neumann boundary and the geometry of the bulk are sourced by the energy-momentum tensor Tµν of a fluid residing on this boundary. Clarifying the relation of Takayanagi's proposal to the fluid/gravity correspondence, we study the consistency of AdS/BCFT with finite-temperature CFTs, or equivalently, black hole geometries in the bulk.
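For orientation, and with signs and normalizations that depend on convention, the Neumann condition on the boundary extension Q referred to above takes the schematic form below, where h_{ab} is the induced metric, K_{ab} the extrinsic curvature, and T^Q_{ab} the boundary matter stress tensor; this is a sketch of the standard statement, not a derivation from this thesis.

```latex
% Schematic Neumann condition on the boundary extension Q
% (sign and normalization conventions vary between references):
K_{ab} - K\, h_{ab} = 8\pi G_N\, T^{Q}_{ab},
\qquad
\text{and for a pure-tension boundary (tension absorbed into } T\text{):}
\quad
K_{ab} = (K - T)\, h_{ab}.
```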
Abstract:
Rift evolution in the Brazilian Equatorial margin during the South America-Africa breakup in the Jurassic/Cretaceous has been the focus of much research, but rift evolution based on the development and growth of faults has not been well explored. In this context, we investigated the Cretaceous Potiguar Basin in the Equatorial margin of Brazil to understand the geometry of major faults and the influence of crustal heterogeneity and preexisting structural fabric on the evolution of the basin's internal architecture. Previous studies pointed out that the rift is an asymmetrical half-graben elongated along the NE-SW direction. We used 2D seismic data, well logs, and 3D gravity modeling to analyze four major border fault segments and determine their maximum displacement (Dmax) to length (L) ratios in the Potiguar Rift. We constrained the 3D gravity modeling with well data and the interpretation of seismic sections; the fault displacements measured in the gravity model differ from the seismic and well data by on the order of 10%. The fault-growth curves allowed us to divide the faulted rift border into four main fault segments, which yield roughly similar Dmax/L ratios, suggesting that a uniform regional tectonic mechanism controlled the growth of the rift fault segments. The variation of displacement along the fault segments indicates that the segments formed independently during rift initiation and were later linked by hard and soft linkages, the latter forming relay ramps. In the interconnection zones, the Dmax/L ratios are highest due to interference between fault segment motions. We divided the evolution of the Potiguar Rift into five stages based on these ratios and correlated them with the major tectonic stages of the breakup between South America and Africa in the Early Cretaceous.
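The displacement-length comparison described above amounts to computing, for each mapped border-fault segment, the ratio of maximum throw to along-strike length; the sketch below shows this calculation with placeholder values, not the Potiguar Rift measurements.

```python
# A minimal sketch of the Dmax/L scaling check; segment values are
# illustrative placeholders, not data from the study.
segments = {
    "segment_1": {"dmax_m": 1800.0, "length_m": 40_000.0},
    "segment_2": {"dmax_m": 2200.0, "length_m": 55_000.0},
    "segment_3": {"dmax_m": 1500.0, "length_m": 35_000.0},
    "segment_4": {"dmax_m": 2600.0, "length_m": 60_000.0},
}

for name, seg in segments.items():
    ratio = seg["dmax_m"] / seg["length_m"]
    print(f"{name}: Dmax/L = {ratio:.3f}")

# Roughly similar ratios across segments would be consistent with a
# single regional tectonic mechanism driving fault growth, as argued above.
```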
Abstract:
Highlights of Data Expedition:
• Students explored daily observations of local climate data spanning the past 35 years.
• Topological Data Analysis (TDA) provides cutting-edge tools for studying the geometry of data in arbitrarily high dimensions.
• Using TDA tools, students discovered intrinsic dynamical features of the data and learned how to quantify periodic phenomena in a time series.
• Since nature invariably produces noisy data that rarely has exact periodicity, students also considered the theoretical basis of almost-periodicity and even invented and tested new mathematical definitions of almost-periodic functions.

Summary
The dataset used for this data expedition comes from the Global Historical Climatology Network. "GHCN (Global Historical Climatology Network)-Daily is an integrated database of daily climate summaries from land surface stations across the globe." Source: https://www.ncdc.noaa.gov/oa/climate/ghcn-daily/ We focused on the daily maximum and minimum temperatures from January 1, 1980 to April 1, 2015 collected at RDU International Airport. Through a guided series of exercises designed to be performed in Matlab, students explore these time series, initially by direct visualization and basic statistical techniques. Then students are guided through a special sliding-window construction which transforms a time series into a high-dimensional geometric curve. These high-dimensional curves can be visualized by projecting down to lower dimensions (Figure 1); however, our focus here was to use persistent homology to study the high-dimensional embedding directly. The shape of these curves carries meaningful information, but how one describes the "shape" of data depends on the scale at which the data is considered, and choosing the appropriate scale is rarely obvious. Persistent homology overcomes this obstacle by allowing us to quantitatively study geometric features of the data across multiple scales. Through this data expedition, students are introduced to numerically computing persistent homology using the Rips collapse algorithm and interpreting the results. In the specific context of sliding-window constructions, 1-dimensional persistent homology can reveal the nature of periodic structure in the original data. I created a special technique to study how these high-dimensional sliding-window curves form loops in order to quantify the periodicity. Students are guided through this construction and learn how to visualize and interpret this information. Climate data is extremely complex (as anyone who has suffered from a bad weather prediction can attest), and numerous variables play a role in determining our daily weather and temperatures. This complexity, coupled with imperfections of measuring devices, results in very noisy data, which causes the annual seasonal periodicity to be far from exact. To this end, I have students explore existing theoretical notions of almost-periodicity and test them on the data. They find that some existing definitions are inadequate in this context. Hence I challenged them to invent new mathematics by proposing and testing their own definitions. The students rose to the challenge and suggested a number of creative definitions. While autocorrelation and spectral methods based on Fourier analysis are often used to explore periodicity, the construction here provides an alternative paradigm to quantify periodic structure in almost-periodic signals using tools from topological data analysis.
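To make the sliding-window construction concrete, the sketch below (in Python rather than the Matlab exercises described above) embeds a scalar time series into a high-dimensional point cloud whose loops reflect periodicity; the signal, window dimension, and delay are illustrative choices, not those used in the expedition.

```python
# A minimal sketch of a sliding-window (delay) embedding; synthetic
# almost-periodic signal standing in for the daily temperature data.
import numpy as np

rng = np.random.default_rng(0)

def sliding_window(x, dim, tau):
    """Embed the 1-D series x into R^dim using delay tau."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i : i + dim * tau : tau] for i in range(n)])

t = np.arange(0, 10 * 365)                                   # ten "years"
temps = 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)

cloud = sliding_window(temps, dim=20, tau=10)                # points in R^20

# 1-dimensional persistent homology of this point cloud (e.g. via the
# `ripser` package, if installed) would show a prominent loop for
# periodic data:
# from ripser import ripser
# h1 = ripser(cloud, maxdim=1)['dgms'][1]
```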
Abstract:
The central dogma of molecular biology relies on the correct Watson-Crick (WC) geometry of canonical deoxyribonucleic acid (DNA) dG•dC and dA•dT base pairs to replicate and transcribe genetic information with speed and an astonishing level of fidelity. In addition, the Watson-Crick geometry of canonical ribonucleic acid (RNA) rG•rC and rA•rU base pairs is highly conserved to ensure that proteins are translated with high fidelity. However, numerous other potential nucleobase tautomeric and ionic configurations are possible that can give rise to entirely new pairing modes between the nucleotide bases. Very early on, James Watson and Francis Crick recognized their importance and in 1953 postulated that if bases adopted one of their energetically less favored tautomeric forms (and, as later proposed, ionic forms) during replication, it could lead to the formation of a mismatch with a Watson-Crick-like geometry and could give rise to "natural mutations."
Since then, numerous studies have provided evidence in support of this hypothesis and have expanded upon it: computational studies have addressed the energetic feasibility of different nucleobase tautomeric and ionic forms in silico, and crystallographic studies have trapped different mismatches with WC-like geometries in polymerase or ribosome active sites. However, no direct evidence has been presented for (i) the existence of these WC-like mismatches in canonical DNA duplexes, RNA duplexes, or non-coding RNAs, or (ii) which, if any, tautomeric or ionic form stabilizes the WC-like geometry. This thesis utilizes nuclear magnetic resonance (NMR) spectroscopy and rotating-frame relaxation dispersion (R1ρ RD) in combination with density functional theory (DFT), biochemical assays, and targeted chemical perturbations to show that (i) dG•dT mismatches in DNA duplexes, as well as rG•rU mismatches in RNA duplexes and non-coding RNAs, transiently adopt a WC-like geometry that is stabilized by (ii) an interconnected network of rapidly interconverting rare tautomers and anionic bases. These results support Watson and Crick's tautomer hypothesis, additionally support subsequent hypotheses invoking anionic mismatches, and ultimately tie them together. This dissertation shows that a common mismatch can adopt a Watson-Crick-like geometry globally, in both DNA and RNA, stabilized by a kinetically linked network of rare tautomeric and anionic bases. The studies herein also provide compelling evidence for the involvement of these species in spontaneous replication and translation errors.
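For orientation only, the transient WC-like species detected by R1ρ relaxation dispersion can be described generically as a lowly populated excited state (ES) in exchange with the ground state (GS); the standard two-state relations below are illustrative textbook expressions, not fitted values from this work.

```latex
% Generic two-state exchange relations (illustrative):
p_{\mathrm{ES}}
  = \frac{k_{\mathrm{GS\to ES}}}{k_{\mathrm{GS\to ES}} + k_{\mathrm{ES\to GS}}}
  = \frac{e^{-\Delta G^{\circ}/RT}}{1 + e^{-\Delta G^{\circ}/RT}},
\qquad
k_{\mathrm{ex}} = k_{\mathrm{GS\to ES}} + k_{\mathrm{ES\to GS}}.
```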
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
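To illustrate the CTMC picture described above (not the dissertation's implementation), the sketch below simulates exciton hopping on a small chromophore network as a continuous-time Markov chain, with the time to photon emission following a phase-type distribution; the rate values are illustrative, not measured parameters.

```python
# A minimal sketch, assuming a 3-state transient CTMC with an extra
# absorbing "emission" state; rates are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Sub-generator over transient states {0, 1, 2}; the row-sum deficit is
# the rate of absorption (fluorescence emission) from each state.
Q = np.array([
    [-3.0,  2.0,  0.5],   # state 0
    [ 1.0, -4.0,  2.0],   # state 1
    [ 0.5,  1.0, -2.5],   # state 2
])
emit = -Q.sum(axis=1)      # emission rate from each state

def sample_emission_time(start=0):
    """Simulate one exciton trajectory and return its emission time."""
    t, s = 0.0, start
    while True:
        rates = np.append(np.clip(Q[s], 0, None), emit[s])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        nxt = rng.choice(len(rates), p=rates / total)
        if nxt == len(rates) - 1:      # absorbed: photon emitted
            return t
        s = nxt

samples = [sample_emission_time() for _ in range(10_000)]
print(np.mean(samples))   # empirical mean of the phase-type distribution
```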
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
Miniaturized, self-sufficient bioelectronics powered by unconventional micropower sources may lead to a new generation of implantable, wireless, minimally invasive medical devices, such as pacemakers, defibrillators, drug-delivery pumps, sensor transmitters, and neurostimulators. Studies have shown that micro enzymatic biofuel cells (EBFCs) are among the most intuitive candidates for in vivo micropower. In the first part of this thesis, a prototype design of an EBFC chip with 3D interdigitated microelectrode arrays was proposed to obtain an optimum design of 3D microelectrode arrays for carbon microelectromechanical systems (C-MEMS) based EBFCs. A detailed model solving partial differential equations (PDEs) by finite element techniques was developed in COMSOL Multiphysics to study the effects of 1) the dimensions of the microelectrodes, 2) the spatial arrangement of the 3D microelectrode arrays, and 3) the geometry of the microelectrodes on EBFC performance. In the second part of this thesis, in order to investigate the performance of an EBFC, the behavior of an EBFC chip inside an artery was studied. COMSOL Multiphysics was also used to analyze mass transport for different orientations of an EBFC chip inside a blood artery; two orientations, horizontal position (HP) and vertical position (VP), were analyzed. The third part of this thesis focused on experimental work towards a high-performance EBFC. This work integrated graphene/enzyme onto three-dimensional (3D) micropillar arrays in order to obtain efficient enzyme immobilization, enhance enzyme loading, and facilitate direct electron transfer. The developed 3D graphene/enzyme network based EBFC generated a maximum power density of 136.3 μW cm-2 at 0.59 V, which is almost 7 times the maximum power density of the bare 3D carbon micropillar array based EBFC. In the fourth part of this thesis, reduced graphene oxide (rGO)/carbon nanotubes (CNTs) were integrated onto the 3D micropillar arrays to further improve EBFC performance; the developed rGO/CNT based EBFC generated twice the maximum power density of the rGO based EBFC. Through a comparison of experimental and theoretical results, the cell performance efficiency is found to be 67%.
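As a back-of-the-envelope check using only the numbers quoted in this abstract (not results taken from the thesis itself), the power-density comparison implies the following orders of magnitude.

```python
# A minimal arithmetic sketch based solely on the figures stated above.
p_graphene = 136.3          # uW/cm^2, 3D graphene/enzyme EBFC at 0.59 V
ratio_vs_bare = 7           # "almost 7 times" the bare micropillar EBFC
p_bare_est = p_graphene / ratio_vs_bare
print(f"bare 3D carbon micropillar EBFC ~ {p_bare_est:.1f} uW/cm^2")

# Areal power density is current density times cell voltage, so the
# graphene/enzyme device delivers roughly this current density at peak:
voltage = 0.59                          # V
j_est = p_graphene / voltage            # uA/cm^2
print(f"implied current density ~ {j_est:.0f} uA/cm^2")
```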