977 results for Integrable Equations in Physics
Abstract:
The surge of interest in graphene, epitomized by the 2010 Nobel Prize in Physics, is attributed to its extraordinary properties. Graphene is ultrathin, mechanically tough, and has amenable surface chemistry. These features make graphene and graphene-based nanostructures ideal candidates for molecular mass manipulation. Controllable and programmable molecular mass manipulation is crucial for enabling future graphene-based applications, but is challenging to achieve. This dissertation studies several aspects of molecular mass manipulation, including mass transportation, patterning and storage. For molecular mass transportation, two methods based on carbon nanoscrolls are demonstrated to be effective: torsional-buckling-instability-assisted transportation and surface-energy-induced radial shrinkage. To achieve more controllable transportation, a fundamental law of directional transport of molecular mass by straining the basal graphene is studied. For molecular mass patterning, we reveal a barrier effect of line defects in graphene, which enables molecular confinement and patterning in a domain of desirable geometry. Such a strategy makes controllable patterning feasible for various types of molecules. For molecular mass storage, we propose a novel partially hydrogenated bilayer graphene structure with a large capacity for mass uptake. Mass release can be achieved by simply stretching the structure, so the mass uptake and release are reversible. This kind of structure is crucial for enabling hydrogen-fuel-based technology. Lastly, spontaneous nanofluidic channel formation enabled by patterned hydrogenation is studied. This novel strategy enables programmable channel formation with pre-defined complex geometry.
Abstract:
Social networks are a recent communication phenomenon with a high prevalence of young users. This concept serves as the motto for a multidisciplinary project that aims to create a simple communication network using light as the transmission medium. Mixed teams, composed of students from secondary schools and higher-education institutions, partner in the development of an optical transceiver. An LED lamp array and a small photodiode serve as the optical transmitter and receiver, respectively. With several transceivers aligned with each other, this configuration creates a ring communication network, enabling the exchange of messages between users. Through this project, some concepts addressed in secondary-school physics classes (e.g. photoelectric phenomena and the properties of light) are experimentally verified and used to communicate, in a classroom or a laboratory.
Abstract:
Finding equilibration times is a major unsolved problem in physics, with few analytical results. Here we look at equilibration times for quantum gases of bosons and fermions in the regime of negligibly weak interactions, a setting which not only includes paradigmatic systems such as gases confined to boxes, but also Luttinger liquids and the free superfluid Hubbard model. To do this, we focus on two classes of measurements: (i) coarse-grained observables, such as the number of particles in a region of space, and (ii) few-mode measurements, such as phase correlators. We show that, in this setting, equilibration occurs quite generally despite the fact that the particles are not interacting. Furthermore, for coarse-grained measurements the timescale is generally at most polynomial in the number of particles N, which is much faster than previous general upper bounds, which were exponential in N. For local measurements on lattice systems, the timescale is typically linear in the number of lattice sites. In fact, for one-dimensional lattices, the scaling is generally linear in the length of the lattice, which is optimal. Additionally, we look at a few specific examples, one of which consists of N fermions initially confined on one side of a partition in a box. The partition is removed and the fermions equilibrate extremely quickly, on a timescale O(1/N).
Abstract:
Object-oriented modeling is spreading in the current simulation of wastewater treatment plants, using the individual components of the process and their relations to define the underlying dynamic equations. In this paper, we describe the use of the free-software OpenModelica simulation environment for the object-oriented modeling of an activated sludge process under feedback control. The performance of the controlled system was analyzed both under normal conditions and in the presence of disturbances. The described object-oriented approach represents a valuable teaching tool that provides practical insight into the field of wastewater process control.
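The kind of feedback loop studied above can be mimicked, far more crudely than in OpenModelica, by a PI controller acting on a first-order process. The sketch below is a minimal stand-in, not the paper's model: the time constant, gains, and the dissolved-oxygen interpretation are all assumed for illustration.

```python
# Minimal PI feedback loop on a first-order process (illustrative stand-in
# for an activated-sludge control loop; all constants are assumptions).
def simulate(setpoint=2.0, kp=1.5, ki=0.5, dt=0.01, t_end=20.0):
    x = 0.0                 # controlled variable (e.g. dissolved oxygen, mg/L)
    integral = 0.0          # integral of the tracking error
    tau, gain = 2.0, 1.0    # assumed process time constant and gain
    t = 0.0
    while t < t_end:
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral          # control action (e.g. aeration)
        x += dt * (-x + gain * u) / tau         # first-order process response
        t += dt
    return x
```

With these values the closed loop is stable (continuous-time poles at -0.25 and -1), so after 20 time units the output has settled essentially at the setpoint; the integral term removes the steady-state offset a pure proportional controller would leave.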
Abstract:
In this paper, we use the shallow water wave approximation (Margaritondo G 2005 Eur. J. Phys. 26 401) to understand the behaviour of a tsunami in variable depth. We deduce the shallow water wave equation and the continuity equation that must be satisfied when a wave encounters a discontinuity in the sea depth. A short explanation of how the tsunami hit the west coast of India is given, based on the refraction phenomenon. Our procedure also includes a simple numerical calculation suitable for undergraduate students in physics and engineering.
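The two textbook relations behind this picture can be checked with a short calculation: the shallow-water speed c = sqrt(g h), and Green's law A ∝ h^(-1/4) for amplitude growth over slowly varying depth. Green's law and the numerical values below are standard shallow-water results chosen for illustration, not taken from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth):
    """Shallow-water phase speed c = sqrt(g h), depth in metres."""
    return math.sqrt(G * depth)

def green_amplitude(a0, h0, h):
    """Green's law: amplitude scales as h^(-1/4) for slowly varying depth."""
    return a0 * (h0 / h) ** 0.25

# In 4000 m of open ocean the wave travels at ~198 m/s (about 713 km/h);
# approaching a 10 m deep shelf it slows to ~10 m/s while a 0.5 m offshore
# amplitude grows by a factor (4000/10)^(1/4) ≈ 4.5.
c_deep, c_shallow = wave_speed(4000.0), wave_speed(10.0)
a_coast = green_amplitude(0.5, 4000.0, 10.0)
```

This is why a tsunami that is barely noticeable at sea becomes destructive at the coast: the energy flux is conserved while the wave slows down, forcing the amplitude up.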
Abstract:
This article presents a numerical method that can be used by undergraduate students to solve few-body problems in quantum physics. The method is applied to two two-body problems usually seen by students: the hydrogen atom and the deuteron. The method, however, can be extended to three or more particles.
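As a sketch of the kind of numerical treatment the article describes (the specific method used there is not reproduced here), the code below finds the hydrogen ground-state energy in atomic units by integrating the radial Schrödinger equation outward and bisecting on the energy; grid spacing and box size are illustrative choices.

```python
import math

def u_at_rmax(energy, h=0.005, r_max=30.0):
    """Integrate u'' = -2*(E + 1/r)*u outward (atomic units, l = 0).

    The regular solution behaves as u ~ r - r^2 near the origin,
    which supplies the two starting values for the recurrence.
    """
    n_steps = int(r_max / h)
    u_prev = h - h * h             # u at r = h
    u_curr = 2 * h - 4 * h * h     # u at r = 2h
    for n in range(2, n_steps):
        r = n * h
        f = -2.0 * (energy + 1.0 / r)
        u_prev, u_curr = u_curr, 2 * u_curr - u_prev + h * h * f * u_curr
    return u_curr

def ground_state_energy(e_lo=-0.6, e_hi=-0.4):
    """Bisect on the sign of u(r_max), which flips as E crosses an eigenvalue."""
    f_lo = u_at_rmax(e_lo)
    for _ in range(60):
        e_mid = 0.5 * (e_lo + e_hi)
        f_mid = u_at_rmax(e_mid)
        if f_lo * f_mid > 0:
            e_lo, f_lo = e_mid, f_mid
        else:
            e_hi = e_mid
    return 0.5 * (e_lo + e_hi)
```

The result converges to the exact value of -0.5 hartree (-13.6 eV); swapping the Coulomb potential for a finite-range nuclear potential gives the deuteron analogue.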
Abstract:
We extend previous papers in the literature concerning the homogenization of Robin type boundary conditions for quasilinear equations, in the case of microscopic obstacles of critical size: here we consider nonlinear boundary conditions involving some maximal monotone graphs which may correspond to discontinuous or non-Lipschitz functions arising in some catalysis problems.
Abstract:
Rural electrification is characterized by the geographical dispersion of the population, low consumption, high investment per consumer and high cost. Solar radiation, moreover, constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the I-100 rural photovoltaic system from ISOFOTON, with a power of 300 Wp, located at the Experimental Farm Lageado of FCA/UNESP. To develop these equations, the circuitry of the photovoltaic cells was studied, and iterative numerical methods were applied to determine the electrical parameters and to assess the errors made when the equations from the literature are fitted to reality. A simulation of a photovoltaic panel was therefore proposed through mathematical equations adjusted to the local radiation data. The resulting equations provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit guarantees the supply of the generated energy. This realistic sizing helps establish the possible applications of solar energy for the rural producer and informs the real possibilities of generating electricity from the sun.
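The iterative determination of electrical parameters mentioned above typically starts from the single-diode cell model, whose I-V equation is implicit in the current. The sketch below solves it by fixed-point iteration; all parameter values are illustrative assumptions for a generic 36-cell module, not the fitted values for the I-100 system.

```python
import math

# Illustrative single-diode parameters (assumed, not the paper's fit).
I_PH = 3.5      # photogenerated current [A]
I_0 = 1e-7      # diode saturation current [A]
N = 1.3         # diode ideality factor
NS = 36         # cells in series
VT = 0.02585    # thermal voltage at 25 °C [V]
R_S = 0.3       # series resistance [ohm]
R_SH = 300.0    # shunt resistance [ohm]

def panel_current(v, tol=1e-9, max_iter=500):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    by fixed-point iteration on I."""
    i = I_PH  # initial guess: short-circuit-like current
    for _ in range(max_iter):
        vd = v + i * R_S
        i_new = I_PH - I_0 * (math.exp(vd / (N * NS * VT)) - 1.0) - vd / R_SH
        if abs(i_new - i) < tol:
            return i_new
        i = i_new
    return i
```

Sweeping `panel_current` over the voltage range and maximizing V*I gives the maximum power point; fitting the six parameters to measured field data is the step the study performs with its iterative methods.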
Abstract:
This lecture course covers the theory of so-called duality-based a posteriori error estimation of DG finite element methods. In particular, we formulate consistent and adjoint consistent DG methods for the numerical approximation of both the compressible Euler and Navier-Stokes equations; in the latter case, the viscous terms are discretized based on employing an interior penalty method. By exploiting a duality argument, adjoint-based a posteriori error indicators will be established. Moreover, application of these computable bounds within automatic adaptive finite element algorithms will be developed. Here, a variety of isotropic and anisotropic adaptive strategies, as well as $hp$-mesh refinement will be investigated.
Abstract:
At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp collisions per bunch crossing. To operate in such an environment, the CMS experiment will need an L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used in the future to process tracker data is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track finding algorithm is based on the Hough Transform method. The algorithm has been tested using simulated pp-collision data. Results show a very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.
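The core of a Hough-transform track finder is a voting step: each tracker stub votes for every track hypothesis consistent with it, and real tracks show up as peaks in the accumulator. The toy below uses a linearized track model phi = phi0 + k*r (k playing the role of q/pT up to a constant); binning, ranges and stub values are assumptions for illustration, not the CMS implementation.

```python
import math

def hough_vote(stubs, k_bins=64, phi0_bins=64,
               k_range=(-0.01, 0.01), phi0_range=(0.0, 2 * math.pi)):
    """Fill a (k, phi0) accumulator: each stub (r, phi) votes once per k bin."""
    acc = [[0] * phi0_bins for _ in range(k_bins)]
    k_lo, k_hi = k_range
    p_lo, p_hi = phi0_range
    for r, phi in stubs:
        for ik in range(k_bins):
            k = k_lo + (ik + 0.5) * (k_hi - k_lo) / k_bins
            phi0 = (phi - k * r) % (2 * math.pi)   # hypothesis consistent with stub
            ip = int((phi0 - p_lo) / (p_hi - p_lo) * phi0_bins)
            if 0 <= ip < phi0_bins:
                acc[ik][ip] += 1
    return acc

def best_cell(acc):
    """Return (votes, k_bin, phi0_bin) of the accumulator maximum."""
    return max((v, ik, ip) for ik, row in enumerate(acc)
               for ip, v in enumerate(row))

# One simulated track (phi0 = 1.0, k = 0.004) with stubs on six layers:
true_phi0, true_k = 1.0, 0.004
stubs = [(r, true_phi0 + true_k * r) for r in (25, 35, 50, 68, 88, 108)]
votes, ik, ip = best_cell(hough_vote(stubs))
```

In the Time Multiplexed Trigger version of this idea, one FPGA receives all stubs of a given bunch crossing and fills such an accumulator in parallel; here the peak cell collects all six stubs and reproduces the track parameters to within a bin width.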
Abstract:
The transverse momentum dependent parton distribution/fragmentation functions (TMDs) are essential in the factorization of a number of processes like Drell-Yan scattering, vector boson production, semi-inclusive deep inelastic scattering, etc. We provide a comprehensive study of unpolarized TMDs at next-to-next-to-leading order, which includes an explicit calculation of these TMDs and an extraction of their matching coefficients onto their integrated analogues, for all flavor combinations. The obtained matching coefficients are important for any kind of phenomenology involving TMDs. In the present study each individual TMD is calculated without any reference to a specific process. We recover the known results for parton distribution functions and provide new results for the fragmentation functions. The results for the gluon transverse momentum dependent fragmentation functions are presented for the first time at one and two loops. We also discuss the structure of singularities of TMD operators and TMD matrix elements, crossing relations between TMD parton distribution functions and TMD fragmentation functions, and renormalization group equations. In addition, we consider the behavior of the matching coefficients at threshold and make a conjecture on their structure to all orders in perturbation theory.
Abstract:
Master's dissertation in Physics, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2004
Abstract:
Currently, because much of the world's attention is focused on petroleum, many research efforts have advanced to make production possible in reservoirs previously classified as unviable. Because of the geological and operational challenges presented by oil recovery, increasingly efficient and economically successful methods have been sought. In this context, steam flooding stands out, mainly when combined with other procedures, for offering low costs and high recovery factors. This work used nitrogen as an alternative fluid after steam flooding, to find the best combination of alternation between these fluids in terms of injection time and rate. To describe a simplified economic profile, several analyses based on cumulative liquid production were performed. The completion interval and fluid injection rates were fixed, and the oil viscosity ranged over 300 cP, 1,000 cP and 3,000 cP. For each viscosity, the results defined a specific model indicating the best time to stop the injection of steam and introduce nitrogen, once the first injected fluid had reached its economic limit. Simulations on a physical model defined as one eighth of an inverted nine-spot were performed using the commercial Steam, Thermal and Advanced Processes Reservoir Simulator (STARS) of the Computer Modelling Group (CMG).
Abstract:
The study of photophysical and photochemical processes cuts across many fields of research in physics, chemistry and biology. In particular, the photophysical and photochemical reactions that follow light absorption by a photosynthetic pigment-protein complex are among the fastest events in biology, taking place on timescales ranging from tens of femtoseconds to a few nanoseconds. Among the experimental approaches developed for this purpose, ultrafast transient absorption spectroscopy has become a powerful and widely used technique.[1,2] Photosynthesis relies upon the efficient absorption and conversion of the radiant energy from the Sun, with chlorophylls and carotenoids as the main players in the process. Photosynthetic pigments are typically arranged in a highly organized fashion to constitute antennas and reaction centers, supramolecular devices where light harvesting and charge separation take place. The very early steps in the photosynthetic process take place after the absorption of a photon by an antenna system, which harvests light and eventually delivers its energy to the reaction center. In order to compete with internal conversion, intersystem crossing, and fluorescence, which inevitably lead to energy loss, the energy and electron transfer processes that fix the excited-state energy in photosynthesis must be extremely fast. To investigate these events, ultrafast techniques with sub-100 fs resolution must be used. In this way, energy migration within the system, as well as the formation of new chemical species such as charge-separated states, can be tracked in real time. This can be achieved with ultrafast transient absorption spectroscopy. The basic principles of this notable technique, its instrumentation, and some recent applications to photosynthetic systems[3] will be described.
Acknowledgements: M. Moreno Oliva thanks the MINECO for a “Juan de la Cierva-Incorporación” research contract.
References
[1] U. Megerle, I. Pugliesi, C. Schriever, C. F. Sailer and E. Riedle, Appl. Phys. B 96, 215–231 (2009).
[2] R. Berera, R. van Grondelle and J. T. M. Kennis, Photosynth. Res. 101, 105–118 (2009).
[3] T. Nikkonen, M. Moreno Oliva, A. Kahnt, M. Muuronen, J. Helaja and D. M. Guldi, Chem. Eur. J. 21, 590–600 (2015).
Abstract:
The study of complex systems has become a prestigious, although relatively young, area of science. Its importance is demonstrated by the diversity of applications that such studies have already provided to various fields, such as biology, economics and climatology. In physics, the complex-systems approach is creating paradigms that markedly influence new methods, bringing to statistical physics macroscopic-level problems no longer restricted to classical studies such as those of thermodynamics. The present work aims to compare and statistically verify clustering in the sonic (DT), gamma ray (GR), induction (ILD), neutron (NPHI) and density (RHOB) well logs, physical quantities measured during exploratory drilling that are of fundamental importance for locating, identifying and characterizing oil reservoirs. The software packages Statistica, Matlab R2006a, Origin 6.1 and Fortran were used to compare and verify the well-log data from the Namorado School Field, provided by the ANP (National Petroleum Agency). It was possible to demonstrate the importance of the DFA method, which proved quite satisfactory in this work, leading to the conclusion that the Hurst exponent H identifies regions of greater spatial clustering in the data. We therefore find that it is possible to detect spatial patterns using the Hurst coefficient. The logs of 56 wells confirmed the existence of spatial patterns in the Hurst exponents (the parameter B). The logs were not directly checked against catalogued geological lithology, but they reveal a non-random spatial distribution.
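The DFA procedure applied to these well logs can be sketched in a few lines: integrate the demeaned series into a profile, detrend it in boxes of size n, and read the scaling exponent off the slope of log F(n) versus log n. The scale choices below are illustrative; for uncorrelated noise the exponent comes out near 0.5, while persistent series give larger values.

```python
import math

def dfa_alpha(series, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:                 # integrate the demeaned series
        s += x - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        n_boxes = len(profile) // n
        ms_sum = 0.0
        for b in range(n_boxes):     # least-squares linear detrend per box
            seg = profile[b * n:(b + 1) * n]
            tm = (n - 1) / 2.0
            sm = sum(seg) / n
            beta = (sum((t - tm) * (y - sm) for t, y in enumerate(seg))
                    / sum((t - tm) ** 2 for t in range(n)))
            ms_sum += sum((y - sm - beta * (t - tm)) ** 2
                          for t, y in enumerate(seg)) / n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(ms_sum / n_boxes)))
    m = len(log_n)                   # slope of the log-log fit = alpha
    xm, ym = sum(log_n) / m, sum(log_f) / m
    return (sum((x - xm) * (y - ym) for x, y in zip(log_n, log_f))
            / sum((x - xm) ** 2 for x in log_n))
```

Run on a depth-ordered log (DT, GR, ILD, NPHI or RHOB), an exponent well above 0.5 signals the persistent, non-random spatial structure reported above.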