947 results for Generalized entropy
Abstract:
We study the von Neumann and Rényi entanglement entropy of long-range harmonic oscillators (LRHO) by both theoretical and numerical means. We show that the entanglement entropy in massless harmonic oscillators increases logarithmically with the sub-system size as S ~ (c_eff/3) log l. Although the entanglement entropy of LRHOs shares some similarities with the entanglement entropy at conformal critical points, we show that the Rényi entanglement entropy presents some deviations from the expected conformal behaviour. In the massive case we demonstrate that the behaviour of the entanglement entropy with respect to the correlation length is also logarithmic, as in the short-range case. Copyright (c) EPLA, 2012
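The scaling law quoted above, S ~ (c_eff/3) log l, can be checked numerically by fitting the slope of S against log l. The sketch below uses synthetic data, not the paper's actual LRHO results, and `fit_effective_central_charge` is an illustrative helper name, not from the paper:

```python
import math

def fit_effective_central_charge(ls, entropies):
    """Least-squares fit of S(l) = (c_eff/3) * ln(l) + const;
    returns the estimated c_eff (three times the fitted slope)."""
    xs = [math.log(l) for l in ls]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(entropies) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, entropies)) / \
            sum((x - mx) ** 2 for x in xs)
    return 3.0 * slope

# Synthetic entropies generated with c_eff = 1 plus a constant offset:
ls = [4, 8, 16, 32, 64]
S = [(1.0 / 3.0) * math.log(l) + 0.7 for l in ls]
print(round(fit_effective_central_charge(ls, S), 6))  # 1.0
```

In practice one would fit against numerically computed block entropies and read off the effective central charge from the slope.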
Abstract:
Background: Prostate cancer is a serious public health problem that affects quality of life and has a significant mortality rate. The aim of the present study was to quantify the fractal dimension and Shannon’s entropy in the histological diagnosis of prostate cancer. Methods: Thirty-four patients with prostate cancer, aged 50 to 75 years, who had undergone radical prostatectomy participated in the study. Histological slides of normal (N), hyperplastic (H) and tumor (T) areas of the prostate were digitally photographed with three different magnifications (40x, 100x and 400x) and analyzed. The fractal dimension (FD), Shannon’s entropy (SE) and number of cell nuclei (NCN) in these areas were compared. Results: FD analysis demonstrated the following significant differences between groups: T vs. N and H vs. N groups (p < 0.05) at a magnification of 40x; T vs. N (p < 0.01) at 100x and H vs. N (p < 0.01) at 400x. SE analysis revealed the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 100x; and T vs. H and T vs. N (p < 0.001) at 400x. NCN analysis demonstrated the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 40x; T vs. H and T vs. N (p < 0.0001) at 100x; and T vs. H and T vs. N (p < 0.01) at 400x. Conclusions: The quantification of the FD and SE, together with the number of cell nuclei, has potential clinical applications in the histological diagnosis of prostate cancer.
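The abstract does not detail how SE was computed from the photographs; one common choice, sketched here as an assumption rather than the study's actual pipeline, is the Shannon entropy of the pixel-intensity histogram of the digitised slide:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (in bits) of a pixel-intensity histogram:
    H = -sum_i p_i * log2(p_i) over observed intensity levels."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A patch with four equally frequent grey levels carries exactly 2 bits:
print(shannon_entropy([0, 64, 128, 255] * 10))  # 2.0
```

Higher entropy indicates a more heterogeneous intensity distribution, which is the kind of texture difference such studies quantify between normal, hyperplastic and tumor areas.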
Abstract:
Abstract Background The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties have not been extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by the usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. per-allele odds ratio) for variants acting multiplicatively, but slightly increases the power to detect variants with a dominant mode of action, while reducing the probability of detecting recessive variants. Although there were differences between the GOR and the usual approaches in terms of bias and type-I error rates, both simulation- and real data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
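The abstract does not write out the GOR itself. One common reading of the measure, assumed in the sketch below, compares random case/control pairs with genotypes ordered by risk-allele dose: the GOR is the ratio of concordant pairs (case carries the higher-dose genotype) to discordant pairs, with ties ignored:

```python
def generalized_odds_ratio(case_counts, control_counts):
    """Genetic-model-free generalized odds ratio from genotype counts
    ordered by risk-allele dose (e.g. [aa, aA, AA]): the ratio of
    case>control pairs to case<control pairs; tied pairs are ignored.
    This is one plausible reading of the measure, not taken verbatim
    from the abstract."""
    num = den = 0
    for i, ci in enumerate(case_counts):
        for j, cj in enumerate(control_counts):
            if i > j:
                num += ci * cj
            elif i < j:
                den += ci * cj
    return num / den

# Genotype counts (aa, aA, AA); cases enriched for the A allele:
print(round(generalized_odds_ratio([10, 30, 60], [60, 30, 10]), 4))  # 10.2857
```

A GOR of 1 indicates no association; values above 1 mean a randomly chosen case tends to carry more risk alleles than a randomly chosen control.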
Abstract:
An out-of-equilibrium Ising model subjected to an irreversible dynamics is analyzed by means of a stochastic dynamics, in an effort to understand the observed critical behavior as a consequence of the intrinsic microscopic characteristics. The study focuses on the kinetic phase transitions that take place in a lattice model with inversion symmetry under the influence of two competing Glauber dynamics, and aims to describe the stationary states using the entropy production, which characterizes the system behavior and clarifies its reversibility conditions. Thus, we consider a square lattice formed by two interconnected sublattices, each of which is in contact with a heat bath at a different temperature from the other. Analytical and numerical treatments are carried out, using mean-field approximations and Monte Carlo simulations. For the one-dimensional model exact results for the entropy production were obtained, though in this case the phase transition that takes place in the two-dimensional counterpart is not observed, a fact which is in accordance with the behavior shared by lattice models presenting inversion symmetry. Results found for the stationary state show a critical behavior of the same class as the equilibrium Ising model, with a second-order phase transition, which is evidenced by a divergence of the entropy production derivative with an exponent µ = 0.003.
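The basic ingredients of such an analysis, the Glauber flip rate and the per-flip entropy flux to the bath, can be sketched in a few lines. This is a generic stochastic-thermodynamics illustration (J = 1 local field, log rate ratio for the flux), not code from the work above:

```python
import math

def glauber_rate(s, h, beta):
    """Glauber rate for flipping spin s (=+1 or -1) in local field h
    at inverse temperature beta."""
    return 0.5 * (1.0 - s * math.tanh(beta * h))

def flip_entropy_flux(s, h, beta):
    """Entropy delivered to the bath by one flip: the log ratio of the
    forward to the reverse Glauber rate.  For a single bath this equals
    -beta * dE, so its stationary average vanishes in equilibrium; with
    two competing baths at different temperatures the stationary average
    is strictly positive (ongoing entropy production)."""
    return math.log(glauber_rate(s, h, beta) / glauber_rate(-s, h, beta))

# Flipping s=+1 in field h costs dE = 2h, so the flux is -2*beta*h:
print(round(flip_entropy_flux(1, 1.0, 0.5), 9))  # -1.0
```

Summing this flux along a simulated trajectory and dividing by time gives the entropy production rate whose derivative diverges at the transition.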
Abstract:
The first part of my thesis presents an overview of the different approaches used in the past two decades in the attempt to forecast epileptic seizures on the basis of intracranial and scalp EEG. Past research has revealed some value of linear and nonlinear algorithms in detecting EEG features that change over the different phases of the epileptic cycle. However, their exact value for seizure prediction, in terms of sensitivity and specificity, is still debated and remains to be evaluated. In particular, the monitored EEG features may fluctuate with the vigilance state and lead to false alarms. Recently, such a dependency on vigilance states has been reported for some seizure prediction methods, suggesting a reduced reliability. An additional factor limiting the application and validation of most seizure-prediction techniques is their computational load. For the first time, the reliability of permutation entropy (PE) was verified for seizure prediction on scalp EEG data, while simultaneously controlling for its dependency on different vigilance states. PE was recently introduced as an extremely fast and robust complexity measure for chaotic time series and is thus suitable for online application even in portable systems. The capability of PE to distinguish between preictal and interictal states was demonstrated using Receiver Operating Characteristic (ROC) analysis. Correlation analysis was used to assess the dependency of PE on vigilance states. Scalp EEG data from two right temporal lobe epilepsy (RTLE) patients and from one patient with right frontal lobe epilepsy were analysed. The last patient was included only in the correlation analysis, since no datasets including seizures were available for him. The ROC analysis showed a good separability of interictal and preictal phases for both RTLE patients, suggesting that PE could be sensitive to EEG modifications, not visible on visual inspection, that might occur well in advance of the EEG and clinical onset of seizures.
However, the simultaneous assessment of the changes in vigilance showed that: a) all seizures occurred in association with transitions of vigilance states; b) PE was sensitive in detecting different vigilance states, independently of seizure occurrences. Due to the limitations of the datasets, these results cannot rule out the capability of PE to detect preictal states. However, the good separability between pre- and interictal phases might depend exclusively on the coincidence of epileptic seizure onset with a transition from a state of low vigilance to a state of increased vigilance. The dependency of PE on vigilance state is an original finding, not previously reported in the literature, and suggests the possibility of classifying vigilance states by means of PE in an automatic and objective way. The second part of my thesis describes a novel behavioral task based on motor imagery skills, first introduced in Bruzzo et al. (2007), designed to study the mental simulation of biological and non-biological movement in paranoid schizophrenics (PS). Immediately after the presentation of a real movement, participants had to imagine or re-enact the very same movement. By key release and key press, respectively, participants indicated when they started and ended the mental simulation or the re-enactment, making it possible to measure the duration of the simulated or re-enacted movements. The proportional error between the duration of the re-enacted/simulated movement and that of the template movement was compared across conditions, as well as between PS and healthy subjects. Results revealed a double dissociation between the mechanisms of mental simulation involved in biological and non-biological movement simulation: PS made large errors when simulating biological movements, while being more accurate than healthy subjects when simulating non-biological movements.
Healthy subjects showed the opposite pattern, making errors during simulation of non-biological movements but being most accurate during simulation of biological movements. However, the good timing precision during re-enactment of the movements in all conditions and in both groups of participants suggests that perception, memory and attention, as well as motor control processes, were not affected. Based upon a long history of literature reporting the existence of psychotic episodes in epileptic patients, a longitudinal study, using a slightly modified behavioral paradigm, was carried out with two RTLE patients, one patient with idiopathic generalized epilepsy and one patient with extratemporal lobe epilepsy. The results provide strong evidence that upcoming seizures in RTLE patients can be predicted behaviorally. In the last part of the thesis, a behavioural strategy based on neurobiofeedback training was validated for voluntarily controlling seizures and reducing their frequency. Three epileptic patients were included in this study. The biofeedback was based on monitoring slow cortical potentials (SCPs) extracted online from scalp EEG. Patients were trained to produce positive shifts of SCPs. After a training phase, patients were monitored for 6 months in order to validate the ability of the learned strategy to reduce seizure frequency. Two of the three refractory epileptic patients recruited for this study showed improvements in self-management and a reduction of ictal episodes, even six months after the last training session.
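Permutation entropy itself is simple enough to state in a few lines. The sketch below follows the standard Bandt-Pompe construction; the `order` and `delay` parameters are generic defaults, not the settings used in the thesis:

```python
import math

def permutation_entropy(series, order=3, delay=1):
    """Normalised permutation entropy of a 1-D series: the Shannon
    entropy of the distribution of ordinal patterns (rank orderings)
    of `order` samples spaced `delay` apart, divided by log(order!)
    so the result lies in [0, 1]."""
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

# A strictly increasing series has a single ordinal pattern, so PE vanishes:
print(permutation_entropy(list(range(100))) == 0.0)  # True
```

Because it needs only comparisons and a histogram, PE is cheap enough for the online, portable-system scenario the thesis describes.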
Abstract:
A group G has finite Prüfer rank (resp. co-central rank) at most r if for every finitely generated subgroup H: H (resp. H modulo its centre) can be generated by r elements. In the present work the known theorems on groups of finite Prüfer rank (X-groups, for short) are generalized, as far as possible, to the substantially larger class of groups of finite co-central rank (R-groups, for short). For locally nilpotent R-groups that are torsion-free or p-groups, it is shown that the central quotient group must be an X-group. It follows that hypercentrality and local nilpotency are identical conditions for R-groups. Analogously, R-groups are locally soluble exactly when they are hyperabelian. Central to the structure theory of hyperabelian R-groups is the fact that such groups possess an ascending normal series of abelian X-groups. A Sylow theory for periodic hyperabelian R-groups is developed. Torsion-free hyperabelian R-groups are shown to be soluble. Furthermore, locally finite R-groups are virtually hyperabelian. For R-groups, very large classes of groups coincide with the virtually hyperabelian groups. To this end, the notion of a section cover is introduced and it is shown that R-groups with a virtually hyperabelian section cover are themselves virtually hyperabelian.
Abstract:
The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the very same scalar field (the "inflaton") driving inflation, while taking on the value measured today from the end of inflation onwards. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test a pair of variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar-field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in the case of projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in the case of non-projectable HL Gravity the extra mode can be made well-behaved for suitable choices of a pair of free dimensionless parameters and, moreover, turns out to decouple from the low-energy Physics.
Abstract:
The present thesis is concerned with certain aspects of differential and pseudodifferential operators on infinite dimensional spaces. We aim to generalize classical operator-theoretical concepts of pseudodifferential operators on finite dimensional spaces to the infinite dimensional case. At first we summarize some facts about the canonical Gaussian measures on infinite dimensional Hilbert space riggings. Considering the natural unitary group actions in $L^2(H_-,\gamma)$ given by weighted shifts and multiplication with $e^{i\langle t,\cdot\rangle_0}$, we obtain a unitary equivalence $F$ between them. In this sense $F$ can be considered as an abstract Fourier transform. We show that $F$ coincides with the Fourier-Wiener transform. Using the Fourier-Wiener transform we define pseudodifferential operators in Weyl and Kohn-Nirenberg form on our Hilbert space rigging. In the case of this Gaussian measure $\gamma$ we discuss several possible Laplacians, at first the Ornstein-Uhlenbeck operator and then pseudodifferential operators with negative definite symbol. In the second case, these operators are generators of $L^2_\gamma$-sub-Markovian semigroups and $L^2_\gamma$-Dirichlet forms. In 1992 Gramsch, Ueberberg and Wagner described a construction of generalized Hörmander classes by commutator methods. Following this concept and the classical finite dimensional description of $\Psi_{\rho,\delta}^0$ ($0\leq\delta\leq\rho\leq 1$, $\delta<1$) in the $C^*$-algebra $L(L^2)$ by Beals and Cordes, we construct in both cases generalized Hörmander classes, which are $\Psi^*$-algebras. These classes act on a scale of Sobolev spaces generated by our Laplacian. In the case of the Ornstein-Uhlenbeck operator, we prove that a large class of continuous pseudodifferential operators considered by Albeverio and Dalecky in 1998 is contained in our generalized Hörmander class. Furthermore, in the case of a Laplacian with negative definite symbol, we develop a symbolic calculus for our operators.
We show some Fredholm criteria for them and prove that these Fredholm operators are hypoelliptic. Moreover, in the finite dimensional case, using the Gaussian measure instead of the Lebesgue measure, the index of these Fredholm operators is still given by Fedosov's formula. Considering an infinite dimensional Heisenberg group rigging, we discuss the connection of some representations of the Heisenberg group to pseudodifferential operators on infinite dimensional spaces. We use these connections to calculate the spectrum of pseudodifferential operators and to construct generalized Hörmander classes given by smooth elements which are spectrally invariant in $L^2(H_-,\gamma)$. Finally, given a topological space $X$ with Borel measure $\mu$, a locally compact group $G$ and a representation $B$ of $G$ in the group of all homeomorphisms of $X$, we construct a Borel measure $\mu_s$ on $X$ which is invariant under $B(G)$.
Abstract:
In this thesis we presented the calculation of the entanglement entropy of an integrable one-dimensional quantum system whose statistical representation is given by the RSOS model, whose critical point is a lattice realization of all the minimal conformal models. Exploiting the integrability of these models, we carried out the calculation using the Corner Transfer Matrix (CTM) technique. The result obtained deviates slightly from the prediction of J. Cardy and P. Calabrese derived using the conformal field theory describing the critical point. This difference has been attributed to the non-unitarity of the model under study: the CTM technique probes the ground state, whereas the Cardy-Calabrese prediction focuses on the conformal vacuum of the model, and for non-unitary systems these two states do not coincide, but can be seen as excitations of one another. Since entanglement is a genuinely quantum phenomenon while the RSOS model describes a two-dimensional classical statistical system, we proposed an integrable one-dimensional quantum Hamiltonian whose statistical representation is given by the RSOS model.
Abstract:
A 2D Unconstrained Third Order Shear Deformation Theory (UTSDT) is presented for the evaluation of tangential and normal stresses in moderately thick functionally graded conical and cylindrical shells subjected to mechanical loadings. Several types of graded materials are investigated. The functionally graded material consists of ceramic and metallic constituents. A four-parameter power-law function is used. The UTSDT allows the presence of a finite transverse shear stress at the top and bottom surfaces of the graded shell. In addition, the initial curvature effect included in the formulation leads to the generalization of the present theory (GUTSDT). The Generalized Differential Quadrature (GDQ) method is used to discretize the derivatives in the governing equations, the external boundary conditions and the compatibility conditions. Transverse and normal stresses are also calculated by integrating the three-dimensional equations of equilibrium in the thickness direction. In this way, the six components of the stress tensor at a point of the conical or cylindrical shell or panel can be given. The initial curvature effect and the role of the power-law functions are shown for a wide range of functionally graded conical and cylindrical shells under various loading and boundary conditions. Finally, numerical examples from the available literature are worked out.
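The abstract names a four-parameter power-law function without writing it out. One form widely used in the functionally graded shell literature is assumed in the sketch below; the exact expression used in the paper may differ:

```python
def ceramic_fraction(z, h, a=1.0, b=0.0, c=1.0, p=1.0):
    """Four-parameter power-law ceramic volume fraction through the
    thickness z in [-h/2, h/2] (an assumed common form, not quoted
    from the abstract): V_c = (1 - a*zeta + b*zeta**c)**p with
    zeta = 1/2 + z/h."""
    zeta = 0.5 + z / h
    return (1.0 - a * zeta + b * zeta ** c) ** p

def effective_modulus(z, h, e_ceramic, e_metal, **params):
    """Rule-of-mixtures Young's modulus for a ceramic/metal graded shell."""
    vc = ceramic_fraction(z, h, **params)
    return e_metal + (e_ceramic - e_metal) * vc

# For the classical one-parameter profile (a=1, b=0, p=1) the shell is
# fully ceramic at z = -h/2 and fully metallic at z = +h/2:
print(effective_modulus(-0.5, 1.0, 380e9, 70e9))  # 380000000000.0
print(effective_modulus(+0.5, 1.0, 380e9, 70e9))  # 70000000000.0
```

Varying a, b, c and p moves material between the surfaces and reshapes the through-thickness gradient, which is exactly the parameter study the abstract refers to.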
Abstract:
Network Theory is a prolific and lively field, especially where it meets Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary to accomplish such an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advancement and human effort, wider and wider datasets can be analysed. The aim of this paper is twofold. Firstly, to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature, and our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Finally, the algorithm is also used to analyse high-throughput data coming from biology.
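As a concrete instance of the entropy of a network ensemble, the Gibbs entropy of the Erdős-Rényi ensemble G(n, p) factorises over independent edges. This is a textbook special case included for orientation, not the paper's parallel algorithm:

```python
import math

def binary_entropy(p):
    """Shannon entropy (nats) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def er_ensemble_entropy(n, p):
    """Gibbs entropy of the Erdos-Renyi ensemble G(n, p): each of the
    n(n-1)/2 possible edges is an independent Bernoulli(p) variable,
    so the ensemble entropy is n(n-1)/2 times the binary entropy of p."""
    pairs = n * (n - 1) // 2
    return pairs * binary_entropy(p)

# p = 1/2 makes every graph on n nodes equally likely, so the entropy
# in bits equals the number of possible edges:
print(round(er_ensemble_entropy(4, 0.5) / math.log(2), 6))  # 6.0
```

Ensembles with structural constraints (fixed degree sequences, communities) have lower entropy than this independent-edge baseline, which is what makes the quantity informative about real networks.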
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown, and several researchers have contributed significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result it has been termed the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
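The weighted-sum idea above can be made concrete with Shu's explicit formula for the first-derivative DQ weights. This is a standard construction sketched for illustration, not tied to the specific implementation discussed here:

```python
def dq_weights(x):
    """First-derivative Differential Quadrature weights on nodes x via
    Shu's explicit formula: f'(x_i) ~= sum_j a[i][j] * f(x_j), exact
    for polynomials of degree < len(x).  For i != j,
    a_ij = M(x_i) / ((x_i - x_j) * M(x_j)) with M(x_k) the product of
    (x_k - x_l) over l != k; the diagonal follows from row sums = 0."""
    n = len(x)
    m = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                m[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = m[i] / ((x[i] - x[j]) * m[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# Differentiating f(x) = x**2 on 4 nodes is exact (degree 2 < 4):
x = [0.0, 0.3, 0.7, 1.0]
f = [xi ** 2 for xi in x]
a = dq_weights(x)
df = [sum(a[i][j] * f[j] for j in range(4)) for i in range(4)]
print(max(abs(df[i] - 2 * x[i]) for i in range(4)) < 1e-9)  # True
```

The polynomial exactness shown here is what gives DQ its high accuracy on smooth problems; the failures the abstract mentions arise precisely when the solution is not globally smooth.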
Abstract:
The aim of this thesis is the study of certain properties of general theories of gravity in relation to black-hole mechanics and thermodynamics. In particular, the treatment that follows aims to provide a self-consistent path leading to the notion of the entropy of a horizon, described in terms of the Noether charge associated with the invariance of the action functional, which describes the gravitational theory under consideration, under general coordinate transformations. Particular attention will be paid to certain geometric properties of the Lagrangian, properties that are independent of the particular form of the theory being considered; that is, they are not dynamical properties, tied to the form of the equations of motion of the gravitational field, but rather features proper to any manifold representing a curved spacetime. These features ensure that every general theory of gravity possesses certain quantities defined locally on the spacetime, in particular a Noether current and its associated charge. The explicit form of the current and the charge depends instead on the Lagrangian one chooses to adopt to describe the gravitational field. The thesis will first describe how this Noether current emerges in any theory of gravity invariant under general transformations and how it is made explicit for particular Lagrangians, and will then identify the associated charge as a quantity connected to the entropy of a horizon in any general theory of gravity.
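For a diffeomorphism-invariant Lagrangian, the horizon entropy obtained from this Noether-charge construction is usually written in the form of Wald's formula; the abstract does not spell it out, so the following standard expression is supplied for orientation:

```latex
S_{\text{Wald}} \;=\; -2\pi \oint_{\mathcal{H}}
  \frac{\partial \mathcal{L}}{\partial R_{abcd}}\,
  \epsilon_{ab}\,\epsilon_{cd}\,\sqrt{h}\,\mathrm{d}^{D-2}x ,
\qquad
\mathcal{L} = \frac{R}{16\pi G} \;\Longrightarrow\; S = \frac{A}{4G},
```

where the integral runs over a horizon cross-section $\mathcal{H}$ and $\epsilon_{ab}$ is its binormal; the Einstein-Hilbert case recovers the Bekenstein-Hawking area law.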
Abstract:
In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the most well-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and contributes an additional description of cross-section warping that is variable along the wall thickness in addition to along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. According to the new kinematics proposed, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Later, a procedure for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of three-dimensional equilibrium equations and of the RCP procedure. Finally, once the stress reconstruction procedure is presented, a study of several open issues concerning the constitutive relations in the GBT is carried out. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. Later on, an analogous procedure is presented for the case of laminated beams. Lastly, as a way to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to demonstrate the validity of all the proposed contributions to the field.