976 results for lp-lattice Summing Operator
Abstract:
We consider a two-parameter family of Z(2) gauge theories on a lattice discretization T(M) of a three-manifold M and their relation to topological field theories. Familiar models such as the spin-gauge model are curves in a parameter space Γ. We show that there is a region Γ₀ ⊂ Γ where the partition function and the expectation value ⟨W_R(γ)⟩ of the Wilson loop can be computed exactly. Depending on the point of Γ₀, the model behaves as topological or quasi-topological. The partition function is, up to a scaling factor, a topological number of M. The Wilson loop, on the other hand, does not depend on the topology of γ. However, for a subset of Γ₀, ⟨W_R(γ)⟩ depends on the size of γ and follows a discrete version of an area law. In the zero-temperature limit, the spin-gauge model approaches the topological or the quasi-topological region depending on the sign of the coupling constant.
Antioxidant and inflammatory aspects of lipoprotein-associated phospholipase A2 (Lp-PLA2): a review
Abstract:
The association of cardiovascular events with Lp-PLA2 continues to be studied today. The enzyme has been strongly associated with several cardiovascular risk markers and events. Its discovery was directly related to the hydrolysis of platelet-activating factor and oxidized phospholipids, which are considered protective functions. However, the hydrolysis of bioactive lipids generates lysophospholipids, compounds that have a pro-inflammatory function. Therefore, the evaluation of the distribution of Lp-PLA2 among the lipid fractions emphasized the dual role of the enzyme in the inflammatory process, since HDL-Lp-PLA2 contributes to the reduction of atherosclerosis, while LDL-Lp-PLA2 stimulates this process. Recently, it has been verified that diet components and drugs can influence the enzyme's activity and concentration. Thus, the effects of these treatments on Lp-PLA2 may represent a new approach to the prevention of cardiovascular disease. Moreover, the association of the enzyme with the traditional assessment of cardiovascular risk may help to predict these diseases more accurately.
Abstract:
Abstract Background Lipoprotein-associated phospholipase A2 (Lp-PLA2) activity is a good marker of cardiovascular risk in adults. It is strongly associated with stroke and many other cardiovascular events. Despite this, the impact of obesity on this enzyme's activity and its relation to biomarkers of cardiovascular disease in adolescents has not been well investigated. The purpose of this article is to evaluate the influence of obesity and cardiometabolic markers on Lp-PLA2 activity in adolescents. Results This cross-sectional study included 242 adolescents (10–19 years) of both genders. These subjects were classified into Healthy Weight (n = 77), Overweight (n = 82) and Obese (n = 83) groups. Lipid profile, glucose, insulin, HDL size, LDL(−) and anti-LDL(−) antibodies were analyzed. Lp-PLA2 activity was determined with a colorimetric commercial kit. Body mass index (BMI), waist circumference and body composition were monitored. Food intake was evaluated using three 24-hour diet recalls. Lp-PLA2 activity changed as a function of high BMI, waist circumference and fat mass percentage. It was also positively associated with HOMA-IR, glucose, insulin and almost all variables of the lipid profile. Furthermore, it was negatively related to Apo AI (β = −0.137; P = 0.038) and strongly positively associated with Apo B (β = 0.293; P < 0.001) and with the Apo B/Apo AI ratio (β = 0.343; P < 0.001). The best predictor model for enzyme activity, on multivariate analysis, included Apo B/Apo AI (β = 0.327; P < 0.001), HDL size (β = −0.326; P < 0.001), waist circumference (β = 0.171; P = 0.006) and glucose (β = 0.119; P = 0.038). Logistic regression analysis demonstrated that changes in the Apo B/Apo AI ratio were associated with a 73.5 times higher risk of elevated Lp-PLA2 activity. Conclusions Lp-PLA2 activity changes as a function of obesity and shows important associations with markers of cardiovascular risk, in particular waist circumference, glucose, HDL size and the Apo B/Apo AI ratio. These results suggest that Lp-PLA2 activity can be a cardiovascular biomarker in adolescence.
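To make the reported modelling strategy concrete, the following is a minimal sketch of the two analyses described above (a multivariate linear model for Lp-PLA2 activity and a logistic model for elevated activity) using statsmodels on synthetic placeholder data; the column names, simulated coefficients and the tertile cut-off are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data only: it illustrates the model structure, not the
# study's measurements. Column names mirror the predictors reported above.
rng = np.random.default_rng(42)
n = 242
df = pd.DataFrame({
    "apob_apoa1": rng.normal(0.7, 0.2, n),     # Apo B / Apo AI ratio
    "hdl_size":   rng.normal(9.0, 0.6, n),     # nm
    "wc":         rng.normal(80.0, 12.0, n),   # waist circumference, cm
    "glucose":    rng.normal(90.0, 8.0, n),    # mg/dL
})
df["lp_pla2"] = (10*df.apob_apoa1 - 2*df.hdl_size + 0.05*df.wc
                 + 0.02*df.glucose + rng.normal(0, 1, n))

# multivariate linear model for enzyme activity
ols = smf.ols("lp_pla2 ~ apob_apoa1 + hdl_size + wc + glucose", data=df).fit()
print(ols.params)

# logistic model for 'elevated' activity (upper tertile used as an arbitrary cut-off)
df["elevated"] = (df.lp_pla2 > df.lp_pla2.quantile(2/3)).astype(int)
logit = smf.logit("elevated ~ apob_apoa1 + hdl_size + wc + glucose", data=df).fit(disp=0)
print(np.exp(logit.params))   # odds ratios
```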
Abstract:
We employ the approach of stochastic dynamics to describe the dissemination of vector-borne diseases such as dengue, and we focus our attention on the characterization of the epidemic threshold. The coexistence space comprises two representative spatial structures for the human and mosquito populations. The human population has its evolution described by a process similar to the Susceptible-Infected-Recovered (SIR) dynamics. The population of mosquitoes follows a dynamics of the Susceptible-Infected-Susceptible (SIS) type. The coexistence space is a bipartite lattice constituted by two structures representing the human and mosquito populations. We develop a truncation scheme to solve the evolution equations for the densities and the two-site correlations, from which we get the threshold of the disease and the reproductive ratio. We present a precise definition of the reproductive ratio which reveals the importance of the correlations developed in the early stage of the disease. According to our definition, the reproductive ratio is directly related to the conditional probability of the occurrence of a susceptible human (mosquito) given the presence of an infected mosquito (human) in the neighborhood. The threshold of the epidemic as well as the phase transition between the epidemic and non-epidemic states are also obtained by performing Monte Carlo simulations. References: [1] David R. de Souza, Tânia Tomé, Suani R. T. Pinho, Florisneide R. Barreto and Mário J. de Oliveira, Phys. Rev. E 87, 012709 (2013). [2] D. R. de Souza, T. Tomé and R. M. Ziff, J. Stat. Mech. P03006 (2011).
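As an illustration of the kind of coupled dynamics described above, the sketch below runs a toy synchronous Monte Carlo simulation of SIR humans and SIS mosquitoes living on two stacked square lattices; the transmission and recovery probabilities are placeholder values, and the update rule is a simplification that does not implement the authors' pair-approximation truncation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 64                 # linear size of each square lattice
beta_h = 0.3           # mosquito -> human transmission probability per contact
beta_m = 0.3           # human -> mosquito transmission probability per contact
gamma_h = 0.1          # human recovery probability per step (SIR)
gamma_m = 0.05         # mosquito recovery probability per step (SIS)

# 0 = susceptible, 1 = infected, 2 = recovered (humans only)
human = np.zeros((L, L), dtype=int)
mosq = np.zeros((L, L), dtype=int)
mosq[L // 2, L // 2] = 1   # seed one infected mosquito

def infected_count(state):
    """Infected sites at each position plus its four nearest neighbours
    (periodic boundaries); applied to the *other* population's lattice."""
    inf = (state == 1).astype(int)
    return (inf
            + np.roll(inf, 1, 0) + np.roll(inf, -1, 0)
            + np.roll(inf, 1, 1) + np.roll(inf, -1, 1))

for t in range(400):
    n_im = infected_count(mosq)    # infected mosquitoes around each human
    n_ih = infected_count(human)   # infected humans around each mosquito

    # humans: S -> I with prob 1-(1-beta_h)^n, I -> R with prob gamma_h
    p_inf_h = 1.0 - (1.0 - beta_h) ** n_im
    new_h_inf = (human == 0) & (rng.random((L, L)) < p_inf_h)
    new_h_rec = (human == 1) & (rng.random((L, L)) < gamma_h)

    # mosquitoes: S -> I with prob 1-(1-beta_m)^n, I -> S with prob gamma_m
    p_inf_m = 1.0 - (1.0 - beta_m) ** n_ih
    new_m_inf = (mosq == 0) & (rng.random((L, L)) < p_inf_m)
    new_m_rec = (mosq == 1) & (rng.random((L, L)) < gamma_m)

    human[new_h_inf] = 1
    human[new_h_rec] = 2
    mosq[new_m_inf] = 1
    mosq[new_m_rec] = 0

print("final fraction of recovered humans:", (human == 2).mean())
```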
Abstract:
Pulmonary crackling and the formation of liquid bridges are problems that have attracted the attention of scientists for centuries. To study these phenomena, a canonical cubic lattice-gas-like model was developed to explain the rupture of liquid bridges in lung airways [A. Alencar et al., 2006, PRE]. Here, we further develop this model and add entropy analysis to study thermodynamic properties such as free energy and force. The simulations were performed using the Monte Carlo method with the Metropolis algorithm. Exchanges between gas and liquid particles were performed randomly according to Kawasaki dynamics and weighted by the Boltzmann factor. Each particle, which can be solid (s), liquid (l) or gas (g), has 26 neighbors: 6 + 12 + 8, at distances 1, √2 and √3, respectively. The energy of a lattice site m is calculated by the expression E_m = ∑_{k=1}^{26} J_{i(m)j(k)}, where i, j = g, l or s. Specifically, we studied the surface free energy of a liquid bridge trapped between two planes as its height is changed. Two methods were considered: first, only the internal energy was calculated; then the entropy was included. No difference in the surface free energy was found between these two methods. We calculated the force exerted by the liquid bridge between the two planes using the numerical surface free energy. This force is strong at small heights and decreases as the distance between the two planes is increased. The liquid-gas system was also characterized by studying the variation of internal energy and heat capacity with temperature. For that, simulations were performed with the same proportion of liquid and gas particles but different lattice sizes. The scaling of the liquid-gas system was also studied, at low temperature, using different values of the interaction J_ij.
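The following is a minimal sketch of the ingredients named above: a cubic lattice of gas/liquid/solid sites, the 26-neighbour site energy E_m, and Kawasaki exchanges accepted with the Metropolis rule. The coupling matrix J, the temperature and the geometry are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative labels: 0 = gas, 1 = liquid, 2 = solid (placeholder, symmetric couplings)
J = np.array([[0.0,  0.0,  0.0],
              [0.0, -1.0, -1.0],
              [0.0, -1.0,  0.0]])   # J[i, j]: interaction between species i and j
T = 1.0                             # temperature in units of the coupling
L = 12

# random half-liquid, half-gas box with solid walls on the two bounding planes
lattice = rng.integers(0, 2, size=(L, L, L))
lattice[0, :, :] = 2
lattice[-1, :, :] = 2

# offsets of the 26 neighbours (6 faces + 12 edges + 8 corners)
offsets = [(dx, dy, dz)
           for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
           if (dx, dy, dz) != (0, 0, 0)]

def site_energy(lat, x, y, z):
    """E_m = sum over the 26 neighbours of J[species(m), species(k)],
    periodic in y and z, open in x (where the solid walls sit)."""
    s = lat[x, y, z]
    e = 0.0
    for dx, dy, dz in offsets:
        xn = x + dx
        if xn < 0 or xn >= L:
            continue
        e += J[s, lat[xn, (y + dy) % L, (z + dz) % L]]
    return e

def kawasaki_step(lat):
    """Attempt one Kawasaki exchange of a randomly chosen liquid/gas pair,
    accepted with the Metropolis probability min(1, exp(-dE / T))."""
    a = tuple(rng.integers([1, 0, 0], [L - 1, L, L]))   # stay off the solid walls
    b = tuple(rng.integers([1, 0, 0], [L - 1, L, L]))
    if lat[a] == lat[b] or 2 in (lat[a], lat[b]):
        return
    e_old = site_energy(lat, *a) + site_energy(lat, *b)
    lat[a], lat[b] = lat[b], lat[a]
    e_new = site_energy(lat, *a) + site_energy(lat, *b)
    dE = e_new - e_old          # exact for symmetric J, even for adjacent sites
    if dE > 0 and rng.random() >= np.exp(-dE / T):
        lat[a], lat[b] = lat[b], lat[a]   # reject: swap back

for _ in range(20000):
    kawasaki_step(lattice)
print("liquid fraction:", (lattice == 1).mean())
```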
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use some general-purpose techniques. The second one occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general-purpose cutting planes) can be useful to improve on branch-and-cut methods proposed in the literature.
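As a toy illustration of the "LP relaxation plus integrality requirements" setting discussed in the first part, the sketch below solves a small 0-1 knapsack with SciPy's general-purpose MILP interface and compares the integer optimum with the bound given by its LP relaxation; the instance data are made up, and the snippet has nothing to do with the specific cutting planes or routing algorithms of the thesis (it assumes SciPy >= 1.9 for scipy.optimize.milp).

```python
import numpy as np
from scipy.optimize import milp, linprog, LinearConstraint, Bounds

# Toy 0-1 knapsack written as a MIP: maximize v.x  s.t.  w.x <= W, x in {0,1}^3
values = np.array([60.0, 100.0, 120.0])
weights = np.array([[10.0, 20.0, 30.0]])
capacity = 50.0
c = -values                      # scipy minimizes, so negate the objective

# LP relaxation: drop integrality, keep 0 <= x <= 1
lp = linprog(c, A_ub=weights, b_ub=[capacity], bounds=[(0, 1)] * 3)
print("LP relaxation bound :", -lp.fun, "x =", lp.x)     # 240, fractional

# The MIP: same data plus the integrality requirements
mip = milp(c,
           constraints=LinearConstraint(weights, -np.inf, capacity),
           bounds=Bounds(0, 1),
           integrality=np.ones(3))
print("Integer optimum     :", -mip.fun, "x =", mip.x)   # 220, integral
```

The gap between the two values (240 vs. 220) is exactly the slack that cutting planes such as the disjunctive cuts discussed above try to close.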
Abstract:
In the present work, the role of the SLA/LP protein in autoimmune hepatitis was investigated. First, the hypothesis that aberrant expression of the SLA/LP molecule triggers the autoimmune reaction against SLA/LP was examined. To this end, the expression of the SLA/LP molecule was determined in liver and lymphocytes of patients with various hepatic diseases and of healthy individuals. The quantitative expression analyses were carried out by real-time PCR using SLA/LP-specific oligonucleotides. It was found that SLA/LP is expressed ubiquitously in the body, with elevated expression in the pancreas. The results of the SLA/LP expression analyses in peripheral blood mononuclear cells and liver parenchymal cells of patients with autoimmune hepatitis gave no indication of aberrant expression of SLA/LP. It could be shown that SLA/LP tends to be expressed at somewhat elevated levels in the liver parenchyma of patients, but no difference between the various hepatic diseases was detectable. Thus, this work showed that aberrant expression is not responsible for triggering the disease. Second, this work examined whether an autoimmune reaction against SLA/LP can lead to inflammation in the liver. For this purpose, mice of different strains were immunized with SLA/LP protein in complete Freund's adjuvant and examined for liver damage and liver inflammation. It could be shown that SLA/LP autoimmunity can trigger liver inflammation and liver damage. The induction of hepatitis, however, depended on the mouse strain and on interleukin-10 deficiency. Thus, under certain immunological conditions, an immune reaction against SLA/LP appears to lead to liver inflammation and liver damage.
Abstract:
How eclogites find their way from deep to mid-crustal levels during exhumation is a matter of lively debate. Different exhumation models for high-pressure and ultrahigh-pressure rocks have been suggested in previous studies, based mainly on field observations and less on microstructural studies of the exhumed rocks. The development and improvement of electron microscopy techniques makes it possible to focus on direct investigations of microstructures and crystallographic properties in eclogites. In this context, it is important to study the applicability of crystallographic measurements on eclogites to exhumation processes and to unravel which processes affect eclogite textures. Previous studies suggested a strong relationship between deformation and lattice preferred orientation (LPO) in omphacite, but it is still unclear whether the deformation is related to the exhumation of eclogites. This study focuses on the questions of which processes affect omphacite LPO and whether textural investigations of omphacite are applicable to studying eclogite exhumation. Therefore, eclogites from two examples in the Alps and in the Caledonides were collected systematically and investigated with respect to omphacite LPO using the electron backscatter diffraction (EBSD) technique. Omphacite textures of the Tauern Window (Austria) and the Western Gneiss Region (Norway) were studied to compare lattice preferred orientation with field observations and exhumation models suggested in previous studies. The interpretation of omphacite textures with regard to the deformation regime is mainly based on numerical simulations in previous studies. Omphacite LPO patterns of the Eclogite Zone are clearly independent of any kind of exhumation process. The textures were generated during omphacite growth on the prograde path of eclogite development up to peak metamorphic conditions. Field observations in the Eclogite Zone show that kinematics in the garnet mica schist surrounding the eclogites strongly indicate an extrusion-wedge geometry. Stretching lineations show top-N thrusting at the base and top-S normal faulting with a sinistral shear component at the top of the Eclogite Zone. The different shear sense on the two sides of the unit does not affect the omphacite textures in any way. The omphacite lattice preferred orientation patterns of the Western Gneiss Region cannot be connected with any exhumation model. The textures were probably generated during the metamorphic peak and reflect the change from subduction to exhumation. The Eclogite Zone and the Western Gneiss Region differ significantly in size and especially in metamorphic conditions. While the Eclogite Zone is characterized by constant P-T conditions (600-650°C, 20-25 kbar), the Western Gneiss Region covers a wide P-T range from high- to ultrahigh-pressure conditions (400-800°C, 20-35 kbar). In contrast, the omphacite textures of both units are very similar. This means that omphacite LPO is independent of P-T conditions and therefore of burial depth. Further, in both units, omphacite LPO is independent of grain and subgrain size as well as of any shape preferred orientation (SPO) on the grain and subgrain scale. Overall, omphacite lattice preferred orientations are generated on the prograde part of omphacite development. Therefore, textural investigations of omphacite LPO are not applicable to studying eclogite exhumation.
Abstract:
The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations able to complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced-symmetry model. In the case of Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
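For readers unfamiliar with the method, the sketch below shows the deterministic core of a lattice Boltzmann update: BGK collision followed by streaming on a periodic D2Q9 lattice. It contains none of the thermal fluctuations or boundary conditions developed in the thesis, and the relaxation time and grid size are arbitrary illustrative values.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, BGK relaxation time (illustrative values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                       # relaxation time; kinematic viscosity = (tau - 0.5)/3
nx, ny, nsteps = 64, 64, 200

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann expansion on the D2Q9 lattice."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

# initial state: fluid at rest with a small density bump in the middle
rho = np.ones((nx, ny)); rho[nx//2, ny//2] += 0.01
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()

for _ in range(nsteps):
    # collision: relax the populations toward the local equilibrium (BGK operator)
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau
    # streaming: each population propagates along its lattice link (periodic box)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("relative mass drift:", abs(f.sum() - mass0) / mass0)   # ~1e-16: mass is conserved
```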
Abstract:
In this thesis, a strategy to model the behavior of fluids and their interaction with deformable bodies is proposed. The fluid domain is modeled using the lattice Boltzmann method, thus analyzing the fluid dynamics from a mesoscopic point of view. It has been proved that the solution provided by this method is equivalent to solving the Navier-Stokes equations for an incompressible flow with second-order accuracy. Slender elastic structures idealized through beam finite elements are used. Large displacements are accounted for by using the corotational formulation. Structural dynamics is computed using the Time Discontinuous Galerkin method. Therefore, two different solution procedures are used, one for the fluid domain and one for the structural part. These two solvers need to communicate and to exchange several pieces of information, i.e., stresses, velocities, and displacements. In order to guarantee a continuous, effective, and mutual exchange of information, a coupling strategy, consisting of three different algorithms, has been developed and numerically tested. In particular, the effectiveness of the three algorithms is shown in terms of the interface energy artificially produced by the approximate fulfillment of the compatibility and equilibrium conditions at the fluid-structure interface. The proposed coupled approach is used to solve different fluid-structure interaction problems, i.e., cantilever beams immersed in a viscous fluid, the impact of a ship's hull on the marine free surface, blood flow in deformable vessels, and even flapping wings simulating the take-off of a butterfly. The good results achieved in each application highlight the effectiveness of the proposed methodology and of the developed C++ software in successfully approaching several two-dimensional fluid-structure interaction problems.
Abstract:
A permutation is said to avoid a pattern if it does not contain any subsequence which is order-isomorphic to it. Donald Knuth, in the first volume of his celebrated book "The Art of Computer Programming", observed that the permutations that can be computed (or, equivalently, sorted) by some particular data structures can be characterized in terms of pattern avoidance. In more recent years, the topic has been reopened several times, though often in terms of sortable permutations rather than computable ones. The idea of sorting permutations by using one of Knuth's devices suggests looking for a deterministic procedure that decides, in linear time, whether there exists a sequence of operations able to convert a given permutation into the identity. In this thesis we show that, for the stack and the restricted deques, there exists a unique way to implement such a procedure. Moreover, we use these sorting procedures to create new sorting algorithms, and we prove some unexpected commutation properties between these procedures and the base step of bubblesort. We also show that the permutations that can be sorted by a combination of the base steps of bubblesort and its dual can be expressed, once again, in terms of pattern avoidance. In the final chapter we give an alternative proof of some enumerative results, in particular for the classes of permutations that can be sorted by the two restricted deques. It is well known that the permutations that can be sorted through a restricted deque are counted by the Schröder numbers. In the thesis, we show how the deterministic sorting procedures yield a bijection between sortable permutations and Schröder paths.
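As a concrete example of such a deterministic procedure in the simplest (single-stack) case, the sketch below runs the classical greedy stack-sorting pass and declares a permutation sortable when the pass returns the identity, which is equivalent to the permutation avoiding the pattern 231; the restricted-deque procedures studied in the thesis are not reproduced here.

```python
def stack_sort_pass(perm):
    """Greedy single-pass stack sort: pop whenever keeping the current top
    on the stack would force a larger element out before a smaller one."""
    stack, output = [], []
    for x in perm:
        while stack and stack[-1] < x:
            output.append(stack.pop())
        stack.append(x)
    while stack:
        output.append(stack.pop())
    return output

def is_stack_sortable(perm):
    """A permutation is sortable by a single stack iff the greedy pass
    returns the identity -- equivalently, iff it avoids the pattern 231."""
    return stack_sort_pass(perm) == sorted(perm)

print(is_stack_sortable([3, 1, 2]))   # True  (avoids 231)
print(is_stack_sortable([2, 3, 1]))   # False (it *is* the pattern 231)
```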
Abstract:
The energy released during a seismic crisis in volcanic areas is strictly related to the physical processes in the volcanic structure. In particular, Long Period (LP) seismicity, which seems to be related to the oscillation of a fluid-filled crack (Chouet, 1996; Chouet, 2003; McNutt, 2005), can precede or accompany an eruption. The present doctoral thesis is focused on the study of the LP seismicity recorded at the Campi Flegrei volcano (Campania, Italy) during the October 2006 crisis. Campi Flegrei is an active caldera; the combination of an active magmatic system and a densely populated area makes Campi Flegrei a critical volcano. The source dynamics of LP seismicity is thought to be very different from that of other kinds of seismicity (Tectonic or Volcano-Tectonic): it is characterized by a time-sustained source and a low frequency content. These features imply that the duration magnitude, commonly used for VT events and sometimes for LPs as well, is not suited to LP magnitude evaluation. The main goal of this doctoral work was to develop a method for determining the magnitude of LP seismicity; it is based on comparing the energy of VT and LP events, linking the energy to the VT moment magnitude. The magnitude of an LP event is thus the moment magnitude of a VT event with the same energy as the LP. We applied this method to the LP data set recorded at the Campi Flegrei caldera in 2006, to an LP data set of Colima volcano recorded in 2005-2006, and to an event recorded at Etna volcano. By applying this method to many waveforms recorded at different volcanoes, we tested its ease of application and, consequently, its usefulness in the routine and quasi-real-time work of a volcanological observatory.
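A rough sketch of the idea of an energy-equivalent magnitude is given below: the radiated energy is estimated from a velocity seismogram and then converted to a magnitude through a Gutenberg-Richter-type energy-magnitude relation. The isotropic-source energy formula, the constants (density, wave speed, distance) and the synthetic waveform are illustrative assumptions, not the calibration against VT moment magnitudes actually used in the thesis.

```python
import numpy as np

def radiated_energy(velocity, dt, rho=2500.0, c=2000.0, r=3000.0):
    """Very rough radiated-energy estimate from one velocity seismogram (m/s),
    treating the source as isotropic: E ~ 4*pi*r^2 * rho * c * sum(v^2) * dt.
    Density rho, wave speed c and hypocentral distance r are placeholder values."""
    return 4.0 * np.pi * r**2 * rho * c * float(np.sum(velocity**2)) * dt

def energy_magnitude(E_joules):
    """Invert a Gutenberg-Richter-type relation log10(E) = 1.5*M + 4.8 (E in J),
    so the LP event gets the magnitude a VT event of equal energy would have."""
    return (np.log10(E_joules) - 4.8) / 1.5

# synthetic placeholder waveform: a slowly decaying low-frequency oscillation
dt = 0.01
t = np.arange(0.0, 40.0, dt)
v = 1e-5 * np.exp(-t / 10.0) * np.sin(2 * np.pi * 0.8 * t)

E = radiated_energy(v, dt)
print(f"energy ~ {E:.2e} J, energy-equivalent magnitude ~ {energy_magnitude(E):.2f}")
```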
Abstract:
This thesis reports on the realization, characterization and analysis of ultracold bosonic and fermionic atoms in three-dimensional optical lattice potentials. Ultracold quantum gases in optical lattices can be regarded as ideal model systems to investigate quantum many-body physics. In this work interacting ensembles of bosonic 87Rb and fermionic 40K atoms are employed to study equilibrium phases and nonequilibrium dynamics. The investigations are enabled by a versatile experimental setup, whose core feature is a blue-detuned optical lattice that is combined with Feshbach resonances and a red-detuned dipole trap to allow for independent control of tunneling, interactions and external confinement. The Fermi-Hubbard model, which plays a central role in the theoretical description of strongly correlated electrons, is experimentally realized by loading interacting fermionic spin mixtures into the optical lattice. Using phase-contrast imaging, the in-situ size of the atomic density distribution is measured, which allows the global compressibility of the many-body state to be extracted as a function of interaction and external confinement. Thereby, metallic and insulating phases are clearly identified. At strongly repulsive interaction, a vanishing compressibility and suppression of doubly occupied lattice sites signal the emergence of a fermionic Mott insulator. In a second series of experiments interaction effects in bosonic lattice quantum gases are analyzed. Typically, interactions between microscopic particles are described as two-body interactions. As such they are also contained in the single-band Bose-Hubbard model. However, our measurements demonstrate the presence of multi-body interactions that effectively emerge via virtual transitions of atoms to higher lattice bands. These findings are enabled by the development of a novel atom-optical measurement technique: in quantum phase revival spectroscopy, periodic collapse-and-revival dynamics of the bosonic matter-wave field are induced. The frequencies of the dynamics are directly related to the on-site interaction energies of atomic Fock states and can be read out with high precision. The third part of this work deals with mixtures of bosons and fermions in optical lattices, in which the interspecies interactions are accurately controlled by means of a Feshbach resonance. Studies of the equilibrium phases show that the bosonic superfluid to Mott insulator transition is shifted towards lower lattice depths when bosons and fermions interact attractively. This observation is further analyzed by applying quantum phase revival spectroscopy to few-body systems consisting of a single fermion and a coherent bosonic field on individual lattice sites. In addition to the direct measurement of Bose-Fermi interaction energies, Bose-Bose interactions are proven to be modified by the presence of a fermion. This renormalization of bosonic interaction energies can explain the shift of the Mott insulator transition. The experiments of this thesis lay important foundations for future studies of quantum magnetism with fermionic spin mixtures as well as for the realization of complex quantum phases with Bose-Fermi mixtures. They furthermore point towards physics that reaches beyond the single-band Hubbard model.
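The sketch below illustrates the ideal single-band prediction behind quantum phase revival spectroscopy: an on-site coherent state evolving under a pure two-body interaction U collapses and revives with period 2πħ/U. The multi-body corrections that the experiments actually resolve are not included, and the parameter values are arbitrary.

```python
import numpy as np

hbar = 1.0          # work in units where hbar = 1
U = 1.0             # on-site two-body interaction energy (illustrative value)
nbar = 3.0          # mean atom number of the on-site coherent state
t = np.linspace(0, 3 * 2*np.pi*hbar/U, 1000)

# For a coherent state evolving under H = (U/2) n (n-1), the matter-wave
# amplitude is <a(t)> = alpha * exp(nbar * (exp(-i U t / hbar) - 1)),
# so |<a>|^2 collapses and then revives with period 2*pi*hbar/U.
amp = np.sqrt(nbar) * np.exp(nbar * (np.exp(-1j * U * t / hbar) - 1.0))
visibility = np.abs(amp)**2 / nbar

print("first revival near t =", t[np.argmax(visibility[100:]) + 100])  # ~2*pi
```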
Abstract:
Lattice models with SU(n) symmetry are currently under study both experimentally and theoretically; a particular boost to research in this field has come from recent experimental developments in the trapping of ultracold atoms in optical lattices. In this thesis, the generalization of the Heisenberg model on a one-dimensional lattice to SU(3) symmetry is studied, both with analytical techniques and with numerical simulations. In particular, a mapping is proposed between the SU(3) Heisenberg model and the SU(2)-symmetric spin-1 bilinear-biquadratic Hamiltonian. New numerical results obtained with the DMRG algorithm are also presented, confirming the theoretical predictions in the literature on the model under study. Finally, an approach is proposed for formulating the partition function of the spin-1 bilinear-biquadratic Hamiltonian using SU(3) coherent states.
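As a small numerical companion to the mapping mentioned above, the sketch below builds the spin-1 bilinear-biquadratic chain H = Σ_i [cos θ (S_i·S_{i+1}) + sin θ (S_i·S_{i+1})²] for a few sites and diagonalizes it exactly at θ = π/4, the SU(3)-symmetric (Uimin-Lai-Sutherland) point reported in the literature; this is a toy exact-diagonalization check, not the DMRG calculation of the thesis, and the chain length is an arbitrary choice.

```python
import numpy as np
from functools import reduce

# spin-1 operators
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Sx = (Sp + Sp.T) / 2.0
Sy = (Sp - Sp.T) / 2.0j
I3 = np.eye(3)

def site_op(op, i, N):
    """Embed a single-site operator at position i of an N-site chain."""
    ops = [I3] * N
    ops[i] = op
    return reduce(np.kron, ops)

def bilinear_biquadratic(N, theta):
    """H = sum_i [cos(theta) (S_i . S_{i+1}) + sin(theta) (S_i . S_{i+1})^2],
    open boundary conditions; theta = pi/4 is the SU(3)-symmetric (ULS) point."""
    dim = 3 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N - 1):
        SS = sum(site_op(S, i, N) @ site_op(S, i + 1, N) for S in (Sx, Sy, Sz))
        H += np.cos(theta) * SS + np.sin(theta) * (SS @ SS)
    return H

H = bilinear_biquadratic(N=6, theta=np.pi / 4)
E = np.linalg.eigvalsh(H)
print("ground-state energy per bond:", E[0] / 5)
```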