613 results for Histogram quotient
Abstract:
A sample scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument allows time-resolved fluorescence, Raman scattering and light scattering to be measured from the same diffraction-limited spot. Fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This permits different detection schemes and experimental geometries to be compared in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, due to the mediation of surface plasmons, single-molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single-molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited-state lifetime) of single molecules was studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and the autocorrelation methods for the analysis of single-molecule fluorescence blinking were presented and compared via the analysis of Monte-Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory. The presence of the gold had no influence on the intersystem crossing (ISC) rate from the excited state to the triplet, but increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se quantum dots (QDs) on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte-Carlo simulations. At low P, the probability of a given on- or off-time follows a negative power law with exponent close to 1.6. As P increased, the on-time fraction decreased on both substrates, whereas the off-times did not change. A weak residual memory effect was observed between consecutive on-times and between consecutive off-times, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability, and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime t_PI of the on-state in the simulations. The QDs on glass presented a t_PI proportional to P^-1, suggesting a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired.
The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
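The Monte-Carlo treatment of the power-law blinking statistics described above can be illustrated with a minimal sketch. The following Python snippet is an illustration only; the exponent, truncation limits and bin width are assumed values, not the parameters used in the thesis. It draws on- and off-times from a truncated negative power law with exponent 1.6 and assembles them into a binary intensity trace.

```python
import numpy as np

def power_law_times(n, alpha=1.6, t_min=1e-3, t_max=1e2, rng=None):
    """Draw n dwell times from p(t) ~ t**(-alpha), truncated to [t_min, t_max],
    via inverse-transform sampling (all parameters are illustrative)."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a = 1.0 - alpha
    return (t_min**a + u * (t_max**a - t_min**a)) ** (1.0 / a)

def blinking_trace(n_events=5000, bin_width=0.1, rng=None):
    """Alternate power-law distributed on- and off-periods and sample the
    resulting emission state (on = 1, off = 0) on a regular time grid."""
    rng = rng or np.random.default_rng(0)
    on = power_law_times(n_events, rng=rng)
    off = power_law_times(n_events, rng=rng)
    # interleave on/off periods into a single timeline
    durations = np.empty(2 * n_events)
    durations[0::2], durations[1::2] = on, off
    states = np.tile([1.0, 0.0], n_events)
    edges = np.concatenate(([0.0], np.cumsum(durations)))
    t_bins = np.arange(0.0, edges[-1], bin_width)
    # state at the start of each bin (coarse sampling of the trace)
    idx = np.searchsorted(edges, t_bins, side="right") - 1
    intensity = states[np.clip(idx, 0, len(states) - 1)]
    return t_bins, intensity

t, I = blinking_trace()
print("on-time fraction:", I.mean())
```

Histogramming the drawn on- and off-durations reproduces the power-law dwell-time statistics, while the binned trace can be fed to the trace-histogram or autocorrelation analysis mentioned above.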
Abstract:
Comparison of data sets created with the programs "Volume©", "Pulmo©", "Yacta©" and PulmoFUNC (ILab). The lung and emphysema volumes determined with the four programs were compared. In addition, the mean lung density was determined as the average of all lung voxels. Furthermore, the emphysema index was calculated as the quotient of emphysema and lung volume. The programs differed in how user-friendly they were to operate: the largely manual programs Volume© and Pulmo© required considerably more processing time than the predominantly automatic programs Yacta and PulmoFUNC (ILab).
Abstract:
Background: Brain cooling (BC) represents the elective treatment in asphyxiated newborns. Amplitude-integrated electroencephalography (aEEG) and near-infrared spectroscopy (NIRS) monitoring may help to evaluate changes in cerebral electrical activity and cerebral hemodynamics during hypothermia. Objectives: To evaluate the prognostic value of the aEEG time course and NIRS data in asphyxiated cooled infants. Methods: 12 term neonates admitted to our NICU with moderate-severe hypoxic-ischemic encephalopathy (HIE) underwent selective BC. aEEG and NIRS monitoring were started as soon as possible and maintained throughout the hypothermic treatment. Follow-up was scheduled at regular intervals; adverse outcome was defined as death, cerebral palsy (CP) or a global quotient < 88.7 on the Griffiths' Scale. Results: 2/12 infants died, 2 developed CP, 1 was normal at 6 months of age and was then lost to follow-up, and 7 showed a normal outcome at least up to 1 year of age. The aEEG background pattern at 24 hours of life was abnormal in 10 newborns; only 4 of them developed an adverse outcome, whereas the 2 infants with a normal aEEG developed normally. In infants with an adverse outcome, NIRS showed a higher Tissue Oxygenation Index (TOI) than in those with a normal outcome (80.0±10.5% vs 66.9±7.0%, p=0.057; 79.7±9.4% vs 67.1±7.9%, p=0.034; 80.2±8.8% vs 71.6±5.9%, p=0.069 at 6, 12 and 24 hours of life, respectively). Conclusions: The aEEG background pattern at 24 hours of life loses its positive predictive value after BC implementation; TOI could be useful for the early identification of infants who may benefit from other innovative therapies.
Abstract:
The measurement of the magnitude of sensations has a long tradition in psychology, reaching back to the emergence of psychology as an independent science towards the end of the 19th century. Gustav Theodor Fechner combined Weber's observation that the ratio of the just-noticeable difference to the comparison intensity is constant (the so-called "Weber quotient") with the assumption of a sensory threshold, and from this developed the first scale for the magnitude of sensations. The Fechner scale uses the number of successive threshold steps as a natural, psychological unit. The magnitude of a sensation for a given stimulus intensity is expressed as the number of threshold steps one has to take to get from no sensation to the sensation in question. The function relating stimulus intensity to the number of required threshold steps is always logarithmic and can be determined via successive threshold measurements for stimuli from the most diverse sensory modalities. Scalings obtained in this way are called "indirect", because the stimulus intensity in question is not itself rated by the observer. The observer only compares intensities with other intensities in terms of "stronger" or "weaker", i.e. ordinally. Indirect scaling methods are particularly suitable when the stimulus impression is fleeting and its absolute magnitude is difficult for the observer to quantify. A typical example is the conspicuity (salience) of visual objects that are embedded in randomly changing backgrounds and are presented to the observer only as a brief spatio-temporal flash. The magnitude of the difference in features such as brightness, colour, orientation, shading, shape, curvature, or motion determines the degree of salience of objects. Although a wealth of studies exists on the question of which features and feature combinations automatically generate strong salience ("pop-out") without prior knowledge of the location of their presentation, there have so far been no systematic attempts to measure the salience of features over a wide range of feature differences and to make it comparable across features. Indirect scalings exist for the features contrast (Legge and Foley, 1980) and orientation (Motoyoshi and Nishida, 2001). A comparison of salience across several features, and proof that salience is a sensory quality of its own, independent of the feature dimension, is however still missing. The present work shows that the difference between objects and their embedding surround with respect to visual features leads to salience, and that this salience can be scaled in magnitude independently of the feature that produces it. It is further shown that the units of the indirect scaling functions obtained for two features are equal in an absolute sense, provided it is ensured that (i) no alternative cues exist and only the pure feature difference between object and surround is judged, and (ii) the sensory noise in the activated feature channels is the same for both features. For this demonstration the features orientation and spatial frequency were chosen as examples, and the salience of their feature contrasts was scaled indirectly via Naka-Rushton functions obtained from the underlying salience increment-threshold measurements.
For the feature spatial frequency this provides the first indirect scaling. For this purpose a special measurement technique had to be developed which ensures that pure spatial-frequency differences are judged, free of the confounding influence of the absolute spatial frequencies. The method is described in Chapter 7. Experiments demonstrating the confounding effect of absolute feature values on the salience measurement are described in Chapter 6. Chapter 8 contains an empirical comparison of the results of increment- and decrement-threshold measurements, a technique that is needed to capture difference thresholds in the extreme range of orientation differences of 90°. Chapter 9 contains the empirical demonstration of the transitivity of the equality relation for salience measurements of orientation and spatial frequency by comparison with a third feature, and thereby provides the evidence that conspicuity is captured in a feature-independent way by the indirect scaling method. Furthermore, the effect of the baseline salience of the patterns, produced by external noise in the features (so-called "feature jitter"), on the shift of the zero point of the scaling function is demonstrated there. In the last experiment (Chapter 10) the scaling of orientation and spatial frequency is compared at equal baseline salience of the patterns, and it is shown that both scales have units that are equal in an absolute sense (i.e. equal scale values indicate equal sensory conspicuity even though they originate from different features), provided that the effect of the sensory noise, which for the feature orientation is not constant across the different threshold steps, is compensated. The inconstancy of the effect of sensory noise in the feature orientation becomes apparent through the change in slope of the psychometric preference function for the comparison judgements of orientation salience against a fixed spatial-frequency salience, and the effect of the slope change exactly compensates the nonlinearity in the salience matching function measured for both features. The last chapter gives an outlook on a possible modelling of the salience functions using classical multi-channel feedforward models. The first five chapters provide an introduction to the fields of indirect scaling, feature salience, and texture segregation in the human visual system.
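The logic behind Fechner's construction sketched at the beginning of this abstract can be summarised in two standard textbook formulas (generic notation, not taken from the thesis itself):

```latex
\frac{\Delta I}{I} = k \quad\text{(Weber quotient)},
\qquad
dS = c\,\frac{dI}{I}
\;\;\Longrightarrow\;\;
S(I) = c\,\ln\frac{I}{I_0},
```

where each threshold step counts as one unit of sensation $dS$ and $I_0$ is the absolute-threshold intensity at which the scale starts; this is why the indirect scaling function is logarithmic in the stimulus intensity.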
Abstract:
Let $\pi:X\rightarrow S$ be a family of Calabi-Yau varieties of dimension three defined over $\Z$. Suppose there exists a submodule $M\subset H^3_{DR}(X/S)$ of rank four, invariant under the Gauss-Manin connection, such that the Picard-Fuchs operator $P$ on $M$ is a so-called {\em Calabi-Yau} operator of order four. Let $k$ be a finite field of characteristic $p$, and let $\pi_0:X_0\rightarrow S_0$ be the reduction of $\pi$ over $k$. For the ordinary fibres $X_{t_0}$ of the family we derive an explicit formula for the computation of the characteristic polynomial of the Frobenius endomorphism, the {\em Frobenius polynomial}, on the corresponding submodule $M_{cris}\subset H^3_{cris}(X_{t_0})$. Now let $f_0(z)$ be the power series solution of the differential equation $Pf=0$ in a neighbourhood of zero. Since a reciprocal zero of the Frobenius polynomial at a Teichmüller point $t$ is given by $f_0(z)/f_0(z^p)|_{z=t}$, a crucial step in the computation of the Frobenius polynomial is the construction of a $p$-adic analytic continuation of the quotient $f_0(z)/f_0(z^p)$ to the boundary of the $p$-adic unit disc. If the coefficients of $f_0$ can be expressed in terms of the constant terms of the powers of a Laurent polynomial whose Newton polyhedron contains the origin as its only interior lattice point, we prove certain congruence properties among the coefficients of $f_0$. These are crucial for the construction of the analytic continuation. If the fibre $X_{t_0}$ contains an ordinary double point, we expect in the limit that the Frobenius polynomial factors into two factors of degree one and one factor of degree two. The degree-two factor is uniquely determined by a coefficient $a_p$. As $p$ runs through the set of all primes, the modularity theorem leads us to expect that there is a modular form of weight four whose coefficients are given by the coefficients $a_p$. This expectation has been confirmed by our extensive computations. In addition, we derive further formulas for the determination of the Frobenius polynomial, in which the non-holomorphic solutions of the equation $Pf=0$ in a neighbourhood of zero also play a role.
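For orientation, the degree-two factor mentioned above can be written in the normalisation that matches weight-four modularity (a standard textbook form; the exact normalisation used in the thesis may differ):

```latex
1 - a_p\,T + p^{3}\,T^{2},
\qquad
f = \sum_{n\ge 1} a_n\,q^{n} \in S_4\bigl(\Gamma_0(N)\bigr),
```

so that this quadratic is precisely the local Euler factor of the L-function of the weight-four form $f$ at a good prime $p$, with $T = p^{-s}$.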
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original (unbiased) free energy of the system starting from the N atomic trajectories. The parallelization of WHAM was carried out in CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can speed up the WHAM execution considerably compared with previous serial CPU implementations, whereas the serial CPU code becomes time-critical for very large numbers of iterations. The algorithm was written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computing. A further performance increase is likely if the algorithm were executed on GPU clusters built for high-performance computing. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm together with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
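As a reminder of what the WHAM post-processing step computes, here is a minimal single-threaded Python sketch of the standard self-consistent WHAM equations for umbrella sampling along one reaction coordinate. Array shapes, the convergence tolerance and the gauge fixing are illustrative choices, not the thesis implementation.

```python
import numpy as np

def wham(hist, bias, n_samples, beta, tol=1e-7, max_iter=10000):
    """Self-consistent WHAM iteration.

    hist      : (n_windows, n_bins) biased histograms n_i(xi_b)
    bias      : (n_windows, n_bins) bias energies V_i(xi_b)
    n_samples : (n_windows,) total samples N_i per window
    beta      : 1 / (kB * T)
    Returns the unbiased probability P(xi_b) and free energy F(xi_b).
    """
    n_windows, n_bins = hist.shape
    f = np.zeros(n_windows)                    # per-window free-energy shifts
    numer = hist.sum(axis=0)                   # sum_i n_i(xi_b)
    for _ in range(max_iter):
        # denominator: sum_j N_j exp(beta * (f_j - V_j(xi_b)))
        denom = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = numer / denom
        p /= p.sum()
        # update the shifts: f_j = -kT ln sum_b p(xi_b) exp(-beta V_j(xi_b))
        f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        f_new -= f_new[0]                      # fix the arbitrary additive constant
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    free_energy = -np.log(p) / beta
    return p, free_energy
```

Each histogram bin in the denominator sum is independent of the others, which is what makes this step a natural candidate for a CUDA kernel with one thread per bin.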
Abstract:
This thesis reports on the experimental realization, characterization and application of a novel microresonator design. The so-called "bottle microresonator" sustains whispering-gallery modes in which light fields are confined near the surface of the micron-sized silica structure by continuous total internal reflection. While whispering-gallery-mode resonators in general exhibit outstanding properties in terms of both temporal and spatial confinement of light fields, their monolithic design makes tuning of their resonance frequency difficult. This impedes their use, e.g., in cavity quantum electrodynamics (CQED) experiments, which investigate the interaction of single quantum mechanical emitters of predetermined resonance frequency with a cavity mode. In contrast, the highly prolate shape of the bottle microresonator gives rise to a customizable mode structure, enabling full tunability. The thesis is organized as follows: In chapter I, I give a brief overview of different types of optical microresonators. Important quantities, such as the quality factor Q and the mode volume V, which characterize the temporal and spatial confinement of the light field, are introduced. In chapter II, a wave equation calculation of the modes of a bottle microresonator is presented. The intensity distribution of different bottle modes is derived and their mode volume is calculated. A brief description of light propagation in ultra-thin optical fibers, which are used to couple light into and out of bottle modes, is given as well. The chapter concludes with a presentation of the fabrication techniques for both structures. Chapter III presents experimental results on highly efficient, nearly lossless coupling of light into bottle modes as well as their spatial and spectral characterization. Ultra-high intrinsic quality factors exceeding 360 million as well as full tunability are demonstrated. In chapter IV, the bottle microresonator in add-drop configuration, i.e., with two ultra-thin fibers coupled to one bottle mode, is discussed. The highly efficient, nearly lossless coupling characteristics of each fiber, combined with the resonator's high intrinsic quality factor, enable resonant power transfers between both fibers with efficiencies exceeding 90%. Moreover, the favorable ratio of absorption and the nonlinear refractive index of silica yields optical Kerr bistability at record low powers on the order of 50 µW. Combined with the add-drop configuration, this allows one to route optical signals between the outputs of both ultra-thin fibers simply by varying the input power, thereby enabling applications in all-optical signal processing. Finally, in chapter V, I discuss the potential of the bottle microresonator for CQED experiments with single atoms. Its Q/V ratio, which determines the ratio of the atom-cavity coupling rate to the dissipative rates of the subsystems, aligns with the values obtained for state-of-the-art CQED microresonators. In combination with its full tunability and the possibility of highly efficient light transfer to and from the bottle mode, this makes the bottle microresonator a unique tool for quantum optics applications.
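The role of the Q/V ratio invoked in the last chapter summary follows from the standard cavity-QED figures of merit (textbook conventions; numerical prefactors depend on how the decay rates are defined):

```latex
g = \sqrt{\frac{\mu^{2}\,\omega}{2\,\hbar\,\epsilon_0\,V}}\,,
\qquad
\kappa = \frac{\omega}{2Q}\,,
\qquad
C \equiv \frac{g^{2}}{2\,\kappa\,\gamma} \;\propto\; \frac{Q}{V}\,,
```

where $\mu$ is the atomic dipole moment, $\gamma$ the atomic decay rate and $C$ the single-atom cooperativity: increasing Q or decreasing V pushes the coupled system deeper into the strong-coupling regime.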
Abstract:
Given a reductive group G acting on an affine scheme X over C and a Hilbert function h: Irr G → N_0, we construct the moduli space M_Θ(X) of Θ-stable (G,h)-constellations on X, which is a common generalisation of the invariant Hilbert scheme after Alexeev and Brion and the moduli space of Θ-stable G-constellations for finite groups G introduced by Craw and Ishii. Our construction of a morphism M_Θ(X) → X//G makes this moduli space a candidate for a resolution of singularities of the quotient X//G. Furthermore, we determine the invariant Hilbert scheme of the zero fibre of the moment map of an action of SL_2 on (C²)⁶ as one of the first examples of invariant Hilbert schemes with multiplicities. In doing so, we present a general procedure for carrying out such calculations. We also consider questions of smoothness and connectedness and thereby show that our Hilbert scheme gives a resolution of singularities of the symplectic reduction of the action.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for rainfall positive amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
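The two-part semicontinuous structure described above can be summarised as follows (generic notation; subscripts, covariates and parameterisation are illustrative, not the exact specification of the thesis):

```latex
Y(s) = Z(s)\,W(s), \qquad
Z(s)\mid \pi(s) \sim \mathrm{Bernoulli}\bigl(\pi(s)\bigr), \qquad
\Phi^{-1}\bigl(\pi(s)\bigr) = \alpha_0 + \alpha_1 \log R(s) + u(s),
```
```latex
W(s)\mid Z(s)=1 \sim \mathrm{Gamma}\bigl(\nu,\; \nu/\mu(s)\bigr), \qquad
\log \mu(s) = \beta_0 + \beta_1 \log R(s) + v(s),
```

where $R(s)$ is the radar value associated with location $s$, $\Phi^{-1}$ is the probit link, and $u(s)$, $v(s)$ are spatially correlated Gaussian effects capturing the residual structure; the Gamma is parameterised so that $\mu(s)$ is the conditional mean of positive rainfall.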
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both at zero and non-zero temperature. Important in this context is the control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in the scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
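The connection between the form factor at small $q^2$ and the pion charge radius used above is the standard low-momentum expansion (textbook normalisation):

```latex
F_\pi(q^2) \;=\; 1 + \tfrac{1}{6}\,\langle r_\pi^2\rangle\, q^2 + \mathcal{O}(q^4),
\qquad
\langle r_\pi^2\rangle \;=\; 6\,\left.\frac{\partial F_\pi(q^2)}{\partial q^2}\right|_{q^2=0},
```

which is why measuring the form factor as close as possible to $q^2=0$ allows a model-independent extraction of the charge radius from the slope.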
Abstract:
If the generic fibre f^{-1}(c) of a Lagrangian fibration f : X → B on a complex Poisson variety X is smooth, compact, and connected, it is isomorphic to the compactification of a complex abelian Lie group. For affine Lagrangian fibres it is not clear what the structure of the fibre is. Adler and van Moerbeke developed a strategy to prove that the generic fibre of a Lagrangian fibration is isomorphic to the affine part of an abelian variety. We extend their strategy to verify that the generic fibre of a given Lagrangian fibration is the affine part of a (C*)^r-extension of an abelian variety. This strategy turned out to be successful for all examples we studied. Additionally, we studied examples of Lagrangian fibrations that have the affine part of a ramified cyclic cover of an abelian variety as generic fibre. We obtained an embedding into a Lagrangian fibration that has the affine part of a C*-extension of an abelian variety as generic fibre. This embedding is not an embedding in the category of Lagrangian fibrations. The C*-quotient of the new Lagrangian fibration defines in a natural way a deformation of the cyclic quotient of the original Lagrangian fibration.
Abstract:
Mutations in the dystrophin gene have long been recognised as a cause of mental retardation. However, for reasons that are unclear, some boys with dystrophin mutations do not show general cognitive deficits. To investigate the relationship between dystrophin mutations and cognition, the general intellectual abilities of a group of 25 boys with genetically confirmed Duchenne muscular dystrophy were evaluated. Furthermore, a subgroup underwent additional detailed neuropsychological assessment. The results showed a mean full-scale intelligence quotient (IQ) of 88 (standard deviation 24). Patients performed very poorly on various neuropsychological tests, including arithmetic, digit span and verbal fluency. No simple relationship between dystrophin mutations and cognitive functioning could be detected. However, our analysis revealed that patients who lack the dystrophin isoform Dp140 have significantly greater cognitive problems.
Abstract:
Background: Resonance frequency analysis (RFA) is a noninvasive technique for the quantitative assessment of implant stability. Information on the implant stability quotient (ISQ) of transmucosally inserted implants is limited. Purpose: The aim of this investigation was to compare the ISQ of implants inserted conventionally by raising a mucoperiosteal flap with that of implants inserted using a flapless procedure. Materials and Methods: Forty elderly patients with a completely edentulous maxilla were consecutively admitted for treatment with implant-supported prostheses. A computed tomography scan was obtained for computer-assisted implant planning. One hundred ten implants were placed conventionally in 23 patients (flap group) and 85 implants in 17 patients by means of the flapless method (flapless group) using a stereolithographic template. RFA measurements were performed after implant placement (baseline) and after a healing time of 12 weeks (reentry). Results: All implants exhibited clinically and radiographically successful osseointegration. Bone level did not change significantly with either gender or type of surgical protocol. Mean ISQ values of the flapless group were significantly higher at baseline (p < .001) and at reentry (p < .001) compared with the flap group. The ISQ values were significantly lower at reentry compared with baseline for the flap group (p = .028) but not for the flapless group, which showed a moderate but nonsignificant increase. RFA measurements of males resulted in ISQ values that were consistently higher than those of females at both time points in both groups. No correlation between RFA and bone level was found. Conclusions: The flapless procedure showed favorable conditions with regard to implant stability and crestal bone level. Some changes of the ISQ values, which represent primary (mechanical) and secondary (bone remodeling) implant stability, were observed in slight favor of the flapless method and of male patients. In properly planned and well-selected cases, the minimally invasive transmucosal technique using a drill guide is a safe procedure.
Abstract:
Evidence suggests that the social cognition deficits prevalent in autism spectrum disorders (ASDs) are widely distributed in first-degree and extended relatives. This "broader autism phenotype" (BAP) can be extended into non-clinical populations and shows wide distributions of social behaviors such as empathy and social responsiveness, with ASDs exhibiting these behaviors at the lower ends of the distributions. Little evidence has previously shown relationships between self-report measures of social cognition and more objective tasks such as face perception in functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). In this study, three specific hypotheses were addressed: a) increased social ability, as measured by an increased Empathy Quotient, decreased Social Responsiveness Scale (SRS-A) score, and increased Social Attribution Task score, will predict increased activation of the fusiform gyrus in response to faces as compared to houses; b) these same measures will predict N170 amplitude and latency, showing decreased latency and increased amplitude for faces as compared to houses with increased social ability; c) increased amygdala volume will predict increased fusiform gyrus activation when viewing faces as compared to houses. Findings supported all of the hypotheses. Empathy scores significantly predicted both right FFG activation [F(1,20) = 4.811, p = .041, β = .450, R2 = 0.20] and left FFG activation [F(1,20) = 7.70, p = .012, β = .537, R2 = 0.29]. Based on ERP results, an increased right-lateralized face-related N170 was significantly predicted by the EQ [F(1,54) = 6.94, p = .011, β = .338, R2 = 0.11]. Finally, total amygdala volume significantly predicted right [F(1,20) = 7.217, p = .014, β = .515, R2 = 0.27] and left [F(1,20) = 36.77, p < .001, β = .805, R2 = 0.65] FFG activation. Consistent with the a priori hypotheses, traits attributed to the BAP can significantly predict neural responses to faces in a non-clinical population. This is consistent with the face processing deficits seen in ASDs. The findings presented here contribute to the extension of the BAP from unaffected relatives of individuals with ASDs to the general population. These findings also give continued evidence in support of a continuous distribution of traits found in psychiatric illnesses, in place of a traditional, dichotomous "all-or-nothing" diagnostic framework of neurodevelopmental and neuropsychiatric disorders.
Abstract:
BACKGROUND: Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study, we examine the neural substrates of social cognition and face processing in a group of healthy young adults. METHODS: Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) was administered a face-perception task during functional magnetic resonance imaging. RESULTS: Variance in the N170 EEG component was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. CONCLUSIONS: When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population.