972 results for Kikuchi approximations


Relevance:

10.00%

Publisher:

Abstract:

In this work, experimental investigations of grafted polymer films were carried out. End-grafted poly(methyl methacrylate) (PMMA) brushes prepared by "grafting from" methods and polystyrene (PS)/poly(vinyl methyl ether) (PVME) polymer films grafted onto UV-sensitive surfaces were studied. The structure of the prepared systems was characterized by scanning probe microscopy (SPM), X-ray and neutron reflectivity measurements, and grazing-incidence small-angle X-ray scattering (GISAXS). It was shown that a model known from transmission scattering can also be used for the GISAXS analysis of polydisperse polymer domains and colloid systems; the maximum error introduced by the approximations made was estimated to be below 20%. The results of the structural analysis were linked to the mechanical properties of the films. For this purpose, mechanical stress experiments were performed, in which the films under investigation were deposited selectively onto individual micro-cantilever sensors (MCS) of MCS arrays using masking techniques and micro-contact printing. Phase-transition experiments on the grafted PS/PVME films showed that the possibility of polymer/polymer phase separation depends strongly on the grafting density of the polymer chains bound to the surface. PS/PVME film systems with high grafting densities showed no phase transition, whereas polymer/polymer phase separation was observed in sparsely grafted film systems. It was concluded that grafted polymer systems require a sufficient number of entropic degrees of freedom to undergo phase separation; the mechanical stress experiments made it possible to understand the phase-separation mechanisms. From swelling experiments on densely grafted PMMA brushes, solvent-polymer interaction parameters (χ parameters) were determined. The parameters obtained were found to differ substantially from the calculated bulk values because of film wetting and entropic effects. In addition, irreversible chain-entanglement effects were observed.

Relevance:

10.00%

Publisher:

Abstract:

Coupled-cluster theory in its single-reference formulation represents one of the most successful approaches in quantum chemistry for the description of atoms and molecules. To extend the applicability of single-reference coupled-cluster theory to systems with degenerate or near-degenerate electronic configurations, multireference coupled-cluster methods have been suggested. One of the most promising formulations of multireference coupled-cluster theory is the state-specific variant suggested by Mukherjee and co-workers (Mk-MRCC). Unlike other multireference coupled-cluster approaches, Mk-MRCC is a size-extensive theory, and results obtained so far indicate that it has the potential to develop into a standard tool for high-accuracy quantum-chemical treatments. This work deals with developments to overcome the limitations in the applicability of the Mk-MRCC method. To this end, an efficient Mk-MRCC algorithm has been implemented in the CFOUR program package to perform energy calculations within the singles and doubles (Mk-MRCCSD) and singles, doubles, and triples (Mk-MRCCSDT) approximations. This implementation exploits the special structure of the Mk-MRCC working equations, which allows existing efficient single-reference coupled-cluster codes to be adapted. The algorithm has the correct computational scaling of d*N^6 for Mk-MRCCSD and d*N^8 for Mk-MRCCSDT, where N denotes the system size and d the number of reference determinants. For the determination of molecular properties such as the equilibrium geometry, the theory of analytic first derivatives of the energy for the Mk-MRCC method has been developed using a Lagrange formalism. The Mk-MRCC gradients within the CCSD and CCSDT approximations have been implemented and their applicability has been demonstrated for various compounds such as 2,6-pyridyne, the 2,6-pyridyne cation, m-benzyne, ozone and cyclobutadiene. The development of analytic gradients for Mk-MRCC offers the possibility of routinely locating minima and transition states on the potential energy surface and can be considered a key step towards the routine investigation of multireference systems and the calculation of their properties. As the full inclusion of triple excitations in Mk-MRCC energy calculations is computationally demanding, a parallel implementation is presented in order to circumvent limitations due to the required execution time. The proposed scheme is based on the adaptation of a highly efficient serial Mk-MRCCSDT code by parallelizing the time-determining steps. A first application to 2,6-pyridyne demonstrates the efficiency of the current implementation.
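
Purely as an illustration of the scalings quoted above (the cost prefactor and the example values of N and d are made up, not taken from the thesis), a short Python sketch comparing the asymptotic cost of Mk-MRCCSD and Mk-MRCCSDT:

```python
# Rough cost-scaling comparison for Mk-MRCCSD (d*N^6) and Mk-MRCCSDT (d*N^8),
# using only the asymptotic scalings quoted in the abstract. Absolute numbers
# are in arbitrary units; only ratios between rows are meaningful.

def relative_cost(n_basis: int, n_refs: int, exponent: int) -> float:
    """Asymptotic operation-count estimate d * N^exponent (arbitrary units)."""
    return n_refs * n_basis ** exponent

for n in (100, 200, 400):                          # system size N (hypothetical)
    sd  = relative_cost(n, n_refs=2, exponent=6)   # Mk-MRCCSD
    sdt = relative_cost(n, n_refs=2, exponent=8)   # Mk-MRCCSDT
    print(f"N={n:4d}:  CCSDT/CCSD cost ratio = {sdt / sd:.0f}  (= N^2)")
```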

Relevance:

10.00%

Publisher:

Abstract:

In recent years, handling credit risk has become increasingly important. Credit risk is the risk associated with the possibility of bankruptcy: if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, the payment cannot actually be made at maturity, so the owner of the contract loses all or part of it. This means that the payoff of the derivative, and consequently its price, depends both on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and to hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods such as the Monte Carlo method in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate and have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
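
The following is a minimal Monte Carlo sketch of defaultable-bond pricing in a deliberately simplified reduced-form model with a constant short rate and a constant default intensity; it is not the JDCEV model studied in the thesis, and all parameter values are hypothetical:

```python
import numpy as np

# Toy Monte Carlo pricing of a defaultable zero-coupon bond with zero recovery,
# assuming a constant short rate r and a constant default intensity lam.
# This only illustrates the simulation idea, not the thesis' JDCEV model.

rng = np.random.default_rng(0)

r, lam, T = 0.03, 0.02, 5.0                 # hypothetical parameters
n_paths = 1_000_000

tau = rng.exponential(1.0 / lam, n_paths)   # simulated default times
payoff = (tau > T).astype(float)            # bond pays 1 only if no default by T
mc_price = np.exp(-r * T) * payoff.mean()

exact = np.exp(-(r + lam) * T)              # closed form for this toy model
print(f"Monte Carlo: {mc_price:.5f}   exact: {exact:.5f}")
```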

Relevance:

10.00%

Publisher:

Abstract:

In 3D human movement analysis performed using stereophotogrammetric systems and skin markers, bone pose can only be estimated in an indirect fashion. During a movement, soft tissue deformations make the markers move with respect to the underlying bone, generating the soft tissue artefact (STA). STA has devastating effects on bone pose estimation and its compensation remains an open question. The aim of this PhD thesis was to contribute to the solution of this crucial issue. Modelling STA using measurable, trial-specific variables is a fundamental prerequisite for its removal from marker trajectories. Two STA model architectures are proposed. Initially, a thigh marker-level artefact model is presented, in which STA is modelled as a linear combination of the joint angles involved in the movement. This model was calibrated using ex-vivo and in-vivo invasive STA measures. The considerable number of model parameters led to the definition of STA approximations. Three definitions were proposed to represent STA as a series of modes: individual marker displacements, marker-cluster geometrical transformations (MCGT), and skin envelope shape variations. Modes were selected using two criteria: one based on modal energy and one based on modes chosen a priori. The MCGT allows either rigid or non-rigid STA components to be selected. It was also empirically demonstrated that only the rigid component affects joint kinematics, regardless of the amplitude of the non-rigid component. Therefore, a cluster-level model of the rigid STA component of the thigh and shank was then defined. An acceptable trade-off between STA compensation effectiveness and number of parameters can be obtained, improving the accuracy of the joint kinematics. The results obtained lead to two main potential applications: the proposed models can generate realistic STAs for simulation purposes, to compare different skeletal kinematics estimators; and, more importantly, by focusing only on the rigid STA component, the model attains a satisfactory STA reconstruction with fewer parameters, facilitating its incorporation into a bone pose estimator.
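
A minimal sketch of the kind of joint-angle-driven linear artefact model described above, calibrated by least squares on synthetic data; the angles, sensitivities and noise level are invented for illustration and do not come from the invasive STA measurements:

```python
import numpy as np

# Minimal sketch of a marker-level soft tissue artefact (STA) model: each marker
# displacement is approximated as a linear combination of the joint angles
# driving the movement. Data here are synthetic and purely illustrative.

rng = np.random.default_rng(1)

n_frames, n_angles = 500, 3                  # e.g. hip flexion/abduction/rotation
angles = rng.uniform(-0.5, 0.5, (n_frames, n_angles))        # [rad]

true_W = rng.normal(0, 2.0, (n_angles, 3))   # hypothetical sensitivity [mm/rad]
sta = angles @ true_W + rng.normal(0, 0.3, (n_frames, 3))    # "measured" artefact [mm]

# Calibrate the linear model (plus an offset) by least squares.
A = np.hstack([angles, np.ones((n_frames, 1))])
coeffs, *_ = np.linalg.lstsq(A, sta, rcond=None)

sta_hat = A @ coeffs
rmse = np.sqrt(np.mean((sta - sta_hat) ** 2))
print(f"RMSE of the fitted linear STA model: {rmse:.3f} mm")
```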

Relevance:

10.00%

Publisher:

Abstract:

This thesis details the development of quantum chemical methods for the accurate theoretical description of molecular systems with a complicated electronic structure. In simple cases, a single Slater determinant, in which the electrons occupy a number of energetically lowest molecular orbitals, offers a qualitatively correct model. The widely used coupled-cluster method CCSD(T) efficiently includes electron correlation effects starting from this determinant and provides reaction energies in error by only a few kJ/mol. However, the method often fails when several electronic configurations are important, as, for instance, in the course of many chemical reactions or in transition metal compounds. Internally contracted multireference coupled-cluster methods (ic-MRCC methods) cure this deficiency by using a linear combination of determinants as a reference function. Despite their theoretical elegance, the ic-MRCC equations involve thousands of terms and are therefore derived by the computer. Calculations of energy surfaces of BeH2, HF, LiF, H2O, N2 and Be3 unveil the theory's high accuracy compared to other approaches and the quality of various hierarchies of approximations. New theoretical advances include size-extensive techniques for removing linear dependencies in the ic-MRCC equations and a multireference analog of CCSD(T). Applications of the latter method to O3, Ni2O2, benzynes, C6H7NO and Cr2 underscore its potential to become a new standard method in quantum chemistry.

Relevance:

10.00%

Publisher:

Abstract:

A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow the different sources generating the observed displacements to be discerned and characterised. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which allows the dimensionality of the data space to be reduced while retaining most of the explained variance of the dataset. However, PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Independent Component Analysis (ICA) is a popular technique adopted to approach this problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, I use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here I present the application of the vbICA technique to GPS position time series. First, I use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise) and a volcanic source, and I study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, I apply vbICA to different tectonically active scenarios, such as the 2009 L'Aquila (central Italy) earthquake, the 2012 Emilia (northern Italy) seismic sequence, and the 2006 Guerrero (Mexico) Slow Slip Event (SSE).
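
As a rough illustration of the blind source separation task described above, the sketch below mixes a synthetic seasonal signal and a coseismic step into several fictitious stations and recovers them with scikit-learn's FastICA, used here only as a readily available stand-in for vbICA (which is not part of scikit-learn); the mixing matrix, noise level and signals are invented:

```python
import numpy as np
from sklearn.decomposition import FastICA   # stand-in for vbICA

# Blind source separation on synthetic GPS-like displacement series:
# a seasonal signal plus a coseismic step, mixed into several "stations".

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 800)                          # time in years

seasonal = np.sin(2 * np.pi * t)                    # annual signal
coseismic = (t > 2.0).astype(float)                 # step at an "earthquake"
S = np.c_[seasonal, coseismic]                      # true sources

A = rng.uniform(0.5, 2.0, (5, 2))                   # hypothetical mixing onto 5 stations
X = S @ A.T + 0.05 * rng.normal(size=(len(t), 5))   # observed displacements + noise

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                        # recovered sources (up to scale/sign)

# Compare each estimated source with the true ones via correlation.
for i in range(2):
    corrs = [abs(np.corrcoef(S_est[:, i], S[:, j])[0, 1]) for j in range(2)]
    print(f"estimated source {i}: best |corr| with a true source = {max(corrs):.2f}")
```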

Relevance:

10.00%

Publisher:

Abstract:

The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in the oceans, rivers and the atmosphere, conserving mass and momentum. Two characteristic velocities are distinguished: the advection velocity, i.e. the speed of mass transport, and the gravity-wave velocity, i.e. the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless number given by the ratio of the reference advection velocity to the reference gravity-wave velocity. For the applications mentioned above it is typically very small, e.g. 0.01. Time-explicit finite volume methods are most often used for the numerical solution of hyperbolic balance laws. They must satisfy the CFL stability condition, so the time increment is roughly proportional to the Froude number. For small Froude numbers, say below 0.2, this leads to a high computational cost; moreover, the numerical solutions are dissipative. It is well known that, under suitable conditions, the solutions of the SWE converge to the solutions of the lake equations (zero-Froude-number SWE) as the Froude number tends to zero. In this limit the equations change type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may drop or the numerical scheme may break down. In particular, time-explicit schemes have been observed to exhibit incorrect asymptotic behaviour (with respect to the Froude number), which may cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. Numerical schemes for balance laws should preserve certain equilibrium states exactly, since otherwise they may generate spurious flows; the approximation of the source term is therefore essential. Schemes that preserve equilibrium states are called well-balanced.

In this thesis we split the SWE into a stiff, linear part and a non-stiff part in order to avoid the severe time-step restriction imposed by the CFL condition. The stiff part is approximated implicitly and the non-stiff part explicitly, using IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is carried out with the finite volume method. The stiff part is approximated either with finite differences or in a genuinely multidimensional manner; for the multidimensional approximation we use approximate evolution operators that take into account all of the infinitely many directions of information propagation. The explicit terms are approximated with standard numerical fluxes. As a result, we obtain a stability condition analogous to that of a purely advective flow, i.e. the time increment increases by a factor of the reciprocal of the Froude number. The schemes derived in this work are asymptotic preserving and well-balanced. The asymptotic-preserving property ensures that the numerical solution exhibits the "correct" asymptotic behaviour with respect to small Froude numbers. We present first- and second-order schemes. Numerical results confirm the order of convergence as well as stability, well-balancedness and the asymptotic-preserving property. In particular, for some of the schemes we observe that the order of convergence is almost independent of the Froude number.
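
A small back-of-the-envelope sketch of the time-step argument above: for an explicit scheme the CFL condition is governed by the gravity-wave speed, which scales like the advection speed divided by the Froude number, while an IMEX scheme that treats the gravity waves implicitly is restricted only by advection. The grid spacing, advection speed and CFL number below are illustrative values, not from the thesis:

```python
# Explicit vs IMEX time-step restriction as a function of the Froude number.
# dt_explicit is limited by the fastest wave u + c (with c = u / Fr), whereas
# dt_imex only needs to resolve the advection speed u.

dx, u, cfl = 1.0e3, 1.0, 0.5          # grid spacing [m], advection speed [m/s], CFL number

for fr in (0.2, 0.05, 0.01):
    c = u / fr                        # gravity-wave speed implied by the Froude number
    dt_explicit = cfl * dx / (u + c)  # explicit scheme: limited by gravity waves
    dt_imex = cfl * dx / u            # IMEX scheme: limited by advection only
    print(f"Fr={fr:5.2f}:  dt_explicit={dt_explicit:8.1f} s   dt_IMEX={dt_imex:8.1f} s"
          f"   gain = {dt_imex / dt_explicit:.0f}x")
```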

Relevance:

10.00%

Publisher:

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether through automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make heavy use of these alignments, hence the need for fast implementations. The work is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for local alignment scores under the z-normalized Euclidean distance that can be used on any parallel hardware with support for fast Fourier transforms. In addition, we present an SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under dynamic time warping. Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat an SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. An extension of GEM to multi-shape segmentation is also discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on it, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated, and memory-efficient representations and group-compatible extensions of elastic measures are discussed.
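
A minimal NumPy sketch (my own, not the thesis code) of the FFT-based computation of z-normalized Euclidean distance profiles that the UCR-suite port relies on; the helper names and the toy data are invented for illustration:

```python
import numpy as np

# z-normalized Euclidean distance of a query against every subsequence of a
# series, using an FFT-based sliding dot product plus running mean/std.

def sliding_dot_product(q, t):
    """Dot products of q (length m) with every length-m window of t, via FFT."""
    m, n = len(q), len(t)
    conv = np.fft.irfft(np.fft.rfft(t, 2 * n) * np.fft.rfft(q[::-1], 2 * n), 2 * n)
    return conv[m - 1:n]

def sliding_mean_std(t, m):
    """Mean and standard deviation of every length-m window of t."""
    c1 = np.cumsum(np.concatenate(([0.0], t)))
    c2 = np.cumsum(np.concatenate(([0.0], t * t)))
    mu = (c1[m:] - c1[:-m]) / m
    var = (c2[m:] - c2[:-m]) / m - mu * mu
    return mu, np.sqrt(np.maximum(var, 1e-12))

def znorm_distance_profile(q, t):
    """z-normalized Euclidean distance of q against every window of t."""
    m = len(q)
    q = (q - q.mean()) / q.std()                 # z-normalize the query once
    qt = sliding_dot_product(q, t)
    mu, sigma = sliding_mean_std(t, m)
    corr = qt / (m * sigma)                      # Pearson correlation per window
    return np.sqrt(np.maximum(2.0 * m * (1.0 - corr), 0.0))

# Toy usage: locate a noisy copy of a snippet inside a longer series.
rng = np.random.default_rng(6)
t = np.sin(np.linspace(0, 40, 4000)) + 0.1 * rng.normal(size=4000)
q = t[1500:1600] + 0.05 * rng.normal(size=100)
profile = znorm_distance_profile(q, t)
print("best match starts at index", int(np.argmin(profile)))
```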

Relevance:

10.00%

Publisher:

Abstract:

The search for exact solutions of mixed integer problems is an active topic in the scientific community. State-of-the-art MIP solvers exploit a floating-point numerical representation, thereby introducing small approximations. Although such MIP solvers yield reliable results for the majority of problems, there are cases in which a higher accuracy is required. Indeed, it is known that for some applications floating-point solvers provide falsely feasible solutions, i.e. solutions marked as feasible because of approximations, which would not pass a check with exact arithmetic and cannot be practically implemented. The framework of the current dissertation is SCIP, a mixed integer programming solver mainly developed at the Zuse Institute Berlin. There, we considered a new approach for solving MIPs exactly. Specifically, we developed a constraint handler to plug into SCIP, with the aim of analysing the accuracy of the floating-point solutions provided and computing exact primal solutions starting from floating-point ones. We conducted a few computational experiments to test the exact primal constraint handler using two main settings. Analysis mode allowed us to collect statistics about the reliability of current SCIP solutions. Our results confirm that floating-point solutions are accurate enough for many instances; however, our analysis highlighted the presence of numerical errors of varying magnitude. In enforce mode, our constraint handler is able to suggest exact solutions starting from the integer part of a floating-point solution. With the latter setting, results show a general improvement in the quality of the final solutions provided, without a significant loss of performance.
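
The toy sketch below illustrates the basic idea of such an exact primal check: snap a floating-point solution to exact rationals, fix the integer variables to rounded values, and verify every constraint with exact arithmetic. The tiny constraint system is made up, and the code is plain Python with the fractions module, not the SCIP constraint handler itself:

```python
from fractions import Fraction

# Verify a floating-point MIP solution with exact rational arithmetic.

def to_frac(x: float, denom: int = 10**9) -> Fraction:
    """Convert a float to a nearby exact rational (snap to a fixed denominator)."""
    return Fraction(round(x * denom), denom)

# Constraints of the form  a . x <= b, with exact rational data (hypothetical model).
constraints = [
    ([Fraction(1, 3), Fraction(2, 3)], Fraction(1)),   # x/3 + 2y/3 <= 1
    ([Fraction(-1), Fraction(1)],      Fraction(0)),   # -x + y    <= 0
]
integer_vars = [0]                        # say x must be integral

x_float = [0.9999999998, 0.4999999999]    # "feasible" floating-point solution

x_exact = [to_frac(v) for v in x_float]
for i in integer_vars:                    # enforce integrality exactly
    x_exact[i] = Fraction(round(x_float[i]))

for k, (a, b) in enumerate(constraints):
    lhs = sum(ai * xi for ai, xi in zip(a, x_exact))
    status = "satisfied" if lhs <= b else f"VIOLATED by {lhs - b}"
    print(f"constraint {k}: lhs = {lhs} <= {b} ?  {status}")
```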

Relevance:

10.00%

Publisher:

Abstract:

We give a brief review of the functional renormalization method in quantum field theory, which is intrinsically nonperturbative, in terms of both the Polchinski equation for the Wilsonian action and the Wetterich equation for the generator of the proper vertices. For the latter case we show a simple application to a theory with one real scalar field within the LPA and LPA' approximations. For the former case, instead, we give a covariant "Hamiltonian" version of the Polchinski equation, which consists in performing a Legendre transform of the flow for the corresponding effective Lagrangian, replacing arbitrarily high-order derivatives of the fields with momentum fields. This approach is suitable for studying new truncations in the derivative expansion. We apply this formulation to a theory with one real scalar field and, as a novel result, derive the flow equations for a theory with N real scalar fields with O(N) internal symmetry. Within this new approach we analyse numerically the scaling solutions for N=1 in d=3 (the critical Ising model), at leading order in the derivative expansion with an infinite number of couplings encoded in two functions V(phi) and Z(phi), obtaining an estimate of the quantum anomalous dimension with 10% accuracy (compared with Monte Carlo results).
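
For orientation (standard textbook forms, not equations reproduced from the thesis), the Wetterich flow equation for the effective average action and its reduction to the local potential approximation (LPA) with the optimized regulator are commonly written as:

```latex
% Wetterich equation for the effective average action \Gamma_k:
\partial_t \Gamma_k[\phi] \;=\; \tfrac12\,
  \mathrm{Tr}\!\left[\bigl(\Gamma_k^{(2)}[\phi] + R_k\bigr)^{-1}\,\partial_t R_k\right],
\qquad t = \ln(k/\Lambda).

% In the LPA (uniform field, Z_k \equiv 1) with the optimized regulator
% R_k(q) = (k^2 - q^2)\,\theta(k^2 - q^2), the flow of the potential reduces to
\partial_t V_k(\phi) \;=\; c_d\,\frac{k^{d+2}}{k^{2} + V_k''(\phi)},
\qquad c_d = \frac{1}{(4\pi)^{d/2}\,\Gamma\!\left(\tfrac{d}{2}+1\right)}
\quad\bigl(c_3 = \tfrac{1}{6\pi^2}\bigr).
```

The LPA' refinement mentioned above additionally keeps a running field renormalization Z_k, which is what gives access to the anomalous dimension estimated in the thesis.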

Relevance:

10.00%

Publisher:

Abstract:

The CBS-QB3 method was used to calculate the gas-phase free energy difference between 20 phenols and their respective anions, and the CPCM continuum solvation method was applied to calculate the free energy differences of solvation for the phenols and their anions. The CPCM solvation calculations were performed on both gas-phase and solvent-phase optimized structures. Absolute pKa calculations with solvated phase optimized structures for the CPCM calculations yielded standard deviations and root-mean-square errors of less than 0.4 pKa unit. This study is the most accurate absolute determination of the pKa values of phenols, and is among the most accurate of any such calculations for any group of compounds. The ability to make accurate predictions of pKa values using a coherent, well-defined approach, without external approximations or fitting to experimental data, is of general importance to the chemical community. The solvated phase optimized structures of the anions are absolutely critical to obtain this level of accuracy, and yield a more realistic charge separation between the negatively charged oxygen and the ring system of the phenoxide anions.
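
A small sketch of the thermodynamic cycle that such absolute pKa calculations are typically built on: the aqueous deprotonation free energy is assembled from the gas-phase deprotonation free energy and the solvation free energies of the acid, its anion and the proton, and converted to a pKa via RT ln 10. Every number below is a placeholder (the proton term in particular is only a commonly quoted, literature-style value used for illustration), not a value from the study:

```python
import math

R = 1.98720425e-3          # gas constant [kcal mol^-1 K^-1]
T = 298.15                 # temperature [K]

dG_gas     = 342.0         # gas-phase  HA -> A- + H+       [kcal/mol] (hypothetical)
dG_solv_HA = -6.0          # solvation free energy of HA    [kcal/mol] (hypothetical)
dG_solv_A  = -70.0         # solvation free energy of A-    [kcal/mol] (hypothetical)
dG_solv_H  = -265.9        # proton solvation free energy   [kcal/mol] (illustrative literature-style value)

# Thermodynamic cycle: aqueous deprotonation free energy.
dG_aq = dG_gas + dG_solv_A + dG_solv_H - dG_solv_HA
pKa = dG_aq / (R * T * math.log(10))
print(f"dG_aq = {dG_aq:.1f} kcal/mol  ->  pKa = {pKa:.1f}")
```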

Relevance:

10.00%

Publisher:

Abstract:

The measurement of fluid volumes in cases of pericardial effusion is a necessary procedure during autopsy. With the increased use of virtual autopsy methods in forensics, the need for a quick volume measurement method on computed tomography (CT) data arises, especially since methods such as CT angiography can potentially alter the fluid content in the pericardium. We retrospectively selected 15 cases with hemopericardium, which underwent post-mortem imaging and autopsy. Based on CT data, the pericardial blood volume was estimated using segmentation techniques and downsampling of CT datasets. Additionally, a variety of measures (distances, areas and 3D approximations of the effusion) were examined to find a quick and easy way of estimating the effusion volume. Segmentation of CT images as shown in the present study is a feasible method to measure the pericardial fluid amount accurately. Downsampling of a dataset significantly increases the speed of segmentation without losing too much accuracy. Some of the other methods examined might be used to quickly estimate the severity of the effusion volumes.
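
A minimal voxel-counting sketch of the volume estimation and of the effect of downsampling; the voxel spacing and the random "segmentation" mask are invented, and a real mask would come from the CT segmentation step described above:

```python
import numpy as np

# Volume of a segmented effusion = number of segmented voxels * voxel volume.
# Downsampling trades a little accuracy for much faster segmentation.

rng = np.random.default_rng(3)
voxel_size_mm = (0.6, 0.6, 1.0)                    # hypothetical in-plane and slice spacing

mask = rng.random((512, 512, 200)) < 0.01          # fake binary segmentation mask
voxel_vol_ml = float(np.prod(voxel_size_mm)) / 1000.0   # mm^3 -> ml
print(f"full-resolution volume: {mask.sum() * voxel_vol_ml:.1f} ml")

# Downsample by a factor of 2 in each direction: ~8x fewer voxels to process,
# each counted voxel now represents 8x the volume.
ds = mask[::2, ::2, ::2]
print(f"downsampled estimate:   {ds.sum() * voxel_vol_ml * 8:.1f} ml")
```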

Relevance:

10.00%

Publisher:

Abstract:

When analysing blood spatter, traces often occur which, judging from their impact angle, cannot be assigned to any presumed centre of origin. Such traces are usually formed by drops that followed strongly curved (ballistic) trajectories. The reconstruction of such trajectories requires knowledge of the mass, the diameter (for which approximations are known) and the velocity of the blood drops. This article provides an upper bound on the velocity as a function of the diameter of the blood drops, based on physical laws, which is very helpful in analysing ballistic trajectories.
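
One physically motivated bound of this kind, though not necessarily the specific relation derived in the article, is the terminal velocity obtained by balancing gravity against aerodynamic drag; the drag coefficient and densities below are generic assumptions:

```python
import math

# Terminal velocity of a spherical drop from the drag-gravity balance, as a
# generic upper bound on impact velocity versus diameter. c_d is a hypothetical
# constant drag coefficient; real drops have a Reynolds-number-dependent drag.

g, rho_blood, rho_air, c_d = 9.81, 1060.0, 1.2, 0.5   # SI units

def terminal_velocity(diameter_m: float) -> float:
    """Terminal velocity [m/s] of a spherical drop of the given diameter."""
    return math.sqrt(4.0 * g * diameter_m * rho_blood / (3.0 * c_d * rho_air))

for d_mm in (0.5, 1.0, 2.0, 4.0):
    print(f"d = {d_mm:3.1f} mm  ->  v_max = {terminal_velocity(d_mm / 1000):.1f} m/s")
```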

Relevance:

10.00%

Publisher:

Abstract:

Patients with panic disorder (PD) have a bias to respond to normal stimuli in a fearful way. This may be due to the preactivation of fear-associated networks prior to stimulus perception. Based on EEG, we investigated the difference between patients with PD and normal controls in resting state activity using features of transiently stable brain states (microstates). EEGs from 18 drug-naive patients and 18 healthy controls were analyzed. Microstate analysis showed that one class of microstates (with a right-anterior to left-posterior orientation of the mapped field) displayed longer durations and covered more of the total time in the patients than controls. Another microstate class (with a symmetric, anterior-posterior orientation) was observed less frequently in the patients compared to controls. The observation that selected microstate classes differ between patients with PD and controls suggests that specific brain functions are altered already during resting condition. The altered resting state may be the starting point of the observed dysfunctional processing of phobic stimuli.
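
As a small illustration of the microstate parameters compared above (mean duration and time coverage per class), the sketch below computes them from a synthetic sequence of per-sample microstate labels; the sampling rate and the labels are invented:

```python
import numpy as np
from itertools import groupby

# Microstate duration = mean length of uninterrupted runs of one class;
# coverage = fraction of total recording time spent in that class.

fs = 250                                        # sampling rate [Hz], hypothetical
rng = np.random.default_rng(4)
labels = rng.integers(0, 4, 5000)               # 4 microstate classes, fake data

for cls in range(4):
    runs = [len(list(g)) for k, g in groupby(labels) if k == cls]
    mean_dur_ms = 1000.0 * np.mean(runs) / fs
    coverage = np.mean(labels == cls) * 100.0
    print(f"class {cls}: mean duration {mean_dur_ms:5.1f} ms, coverage {coverage:4.1f} %")
```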

Relevance:

10.00%

Publisher:

Abstract:

Schizophrenia has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical circuitry. Alterations in gamma band oscillations have attracted a great deal of interest as they appear to represent a pathophysiological process of cortical dysfunction in schizophrenia. Gamma band oscillations reflect local cortical activities, and the synchronization of these activities among spatially distributed cortical areas has been suggested to play a central role in the formation of networks. To assess global coordination across spatially distributed brain regions, Omega complexity (OC) in multichannel EEG was proposed. Using OC, we investigated global coordination of resting-state EEG activities in both gamma (30–50 Hz) and below-gamma (1.5–30 Hz) bands in drug-naïve patients with schizophrenia and investigated the effects of neuroleptic treatment. We found that gamma band OC was significantly higher in drug-naïve patients with schizophrenia compared to control subjects and that a right frontal electrode (F3) contributed significantly to the higher OC. After neuroleptic treatment, reductions in the contribution of frontal electrodes to global OC in both bands correlated with the improvement of schizophrenia symptomatology. The present study suggests that frontal brain processes in schizophrenia were less coordinated with activity in the remaining brain. In addition, beneficial effects of neuroleptic treatment were accompanied by improvement of brain coordination predominantly due to changes in frontal regions. Our study provides new evidence of improper intrinsic brain integration in schizophrenia by investigating the resting-state gamma band activity.
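
Omega complexity is commonly computed as the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix of the multichannel EEG. The sketch below implements that textbook definition on synthetic data; the channel count, sampling and the two test signals are invented, and band-pass filtering into the gamma and below-gamma bands would precede this step in the analysis described above:

```python
import numpy as np

# Omega complexity (OC): ranges from 1 (all channels driven by one spatial
# component) up to the number of channels (fully uncorrelated channels).

def omega_complexity(eeg: np.ndarray) -> float:
    """eeg: array of shape (n_channels, n_samples), assumed already band-pass filtered."""
    x = eeg - eeg.mean(axis=1, keepdims=True)     # remove per-channel mean
    cov = x @ x.T / x.shape[1]                    # spatial covariance matrix
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > 1e-12] / lam.sum()            # normalized eigenvalues
    return float(np.exp(-(lam * np.log(lam)).sum()))

rng = np.random.default_rng(5)
n_ch, n_s = 19, 5000
print("uncorrelated noise:  ", round(omega_complexity(rng.normal(size=(n_ch, n_s))), 2))
shared = np.outer(np.ones(n_ch), rng.normal(size=n_s)) + 0.01 * rng.normal(size=(n_ch, n_s))
print("single shared source:", round(omega_complexity(shared), 2))
```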