937 results for high-order reasoning
Abstract:
The boundary layer over concave surfaces can be unstable due to centrifugal forces, giving rise to Goertler vortices. These vortices create two regions in the spanwise direction: the upwash and downwash regions. The downwash region compresses the boundary layer toward the wall, increasing the heat transfer rate; the upwash region does the opposite. In the nonlinear development of the Goertler vortices, the upwash region becomes narrow and the spanwise-averaged heat transfer rate is higher than that of a Blasius boundary layer. This paper analyzes the influence of the spanwise wavelength of the Goertler vortices on the heat transfer. The governing equation is written in vorticity-velocity formulation. The time integration is done via a classical fourth-order Runge-Kutta method. The spatial derivatives are calculated using high-order compact finite difference and spectral methods. Three different wavelengths are analyzed. The results show that steady Goertler flow can increase the heat transfer rates to values close to those of turbulent flow, without the existence of a secondary instability. The geometry (and computational domain) is presented.
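The abstract above mentions classical fourth-order Runge-Kutta time integration. As a generic sketch (not the paper's actual vorticity-velocity solver), one RK4 step for an ODE y' = f(t, y) looks like:

```python
def rk4_step(f, t, y, dt):
    """Advance y' = f(t, y) by one classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Sanity check on y' = -y, y(0) = 1: after t = 1 the exact solution is exp(-1)
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```

In a PDE solver of the kind described, `f` would be the spatial right-hand side evaluated with the compact finite-difference and spectral operators, applied to the whole field rather than a scalar.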
Abstract:
Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; a model able to describe such a process would allow prediction of the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transitions: diffusive (second-order) and displacive (first-order). In this thesis, an attempt is made to describe both in a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
Abstract:
The upgrade of the CERN accelerator complex has been planned in order to further increase the LHC performance in exploring new physics frontiers. One of the main limitations to the upgrade is represented by collective instabilities. These are intensity-dependent phenomena triggered by electromagnetic fields excited by the interaction of the beam with its surroundings. These fields are represented via wake fields in the time domain or impedances in the frequency domain. Impedances are usually studied assuming ultrarelativistic bunches, while we mainly explored low and medium energy regimes in the LHC injector chain. In a non-ultrarelativistic framework we carried out a complete study of the impedance structure of the PSB, which accelerates proton bunches up to 1.4 GeV. We measured the imaginary part of the impedance, which creates a betatron tune shift. We introduced a parabolic bunch model which, together with dedicated measurements, allowed us to identify the resistive wall impedance as the source of one of the main PSB instabilities. These results are particularly useful for the design of efficient transverse instability dampers. We developed a macroparticle code to study the effect of space charge on intensity-dependent instabilities. Carrying out the analysis of the bunch modes, we proved that the damping effects caused by space charge, which has been modelled with semi-analytical methods and using symplectic high-order schemes, can increase the bunch intensity threshold. Numerical libraries have also been developed in order to study, via numerical simulations of the bunches, the impedance of the whole CERN accelerator complex. On a different note, the CNGS experiment at CERN requires high-intensity beams. We calculated the interpolating Hamiltonian of the beam for highly non-linear lattices. These calculations provide the ground for theoretical and numerical studies aiming to improve the CNGS beam extraction from the PS to the SPS.
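The symplectic schemes mentioned above preserve the phase-space structure of Hamiltonian systems, which is why they keep the long-term energy error bounded in tracking simulations. As a hypothetical illustration (the thesis uses high-order schemes; only the second-order leapfrog is sketched here, on a harmonic oscillator rather than a beam lattice):

```python
def leapfrog(q, p, force, dt, n_steps):
    """Kick-drift-kick leapfrog: a second-order symplectic integrator
    for a separable Hamiltonian H = p^2/2 + V(q) with unit mass."""
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)  # half kick
        q += dt * p               # full drift
        p += 0.5 * dt * force(q)  # half kick
    return q, p

# Harmonic oscillator q'' = -q: the energy stays close to 0.5 even over
# many periods, unlike with a non-symplectic explicit Euler scheme.
q, p = leapfrog(1.0, 0.0, lambda q: -q, 0.01, 10000)
energy = 0.5 * (p * p + q * q)
```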
Abstract:
In this work we are concerned with the analysis and numerical solution of Black-Scholes type equations arising in the modeling of incomplete financial markets and an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation which models transaction costs arising in option pricing is discretized by a new high order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm that is based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions. The main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
Abstract:
Recognizing one’s body as separate from the external world plays a crucial role in detecting external events, and thus in planning adequate reactions to them. In addition, recognizing one’s body as distinct from others’ bodies allows remapping the experiences of others onto one’s sensory system, providing improved social understanding. In line with these assumptions, two well-known multisensory mechanisms demonstrate modulations of somatosensation when viewing one’s own or someone else’s body: the Visual Enhancement of Touch (VET) and the Visual Remapping of Touch (VRT) effects. Vision of the body, in the former, and vision of the body being touched, in the latter, enhance tactile processing. The present dissertation investigated the multisensory nature of these mechanisms and their neural bases. Further experiments compared these effects for viewing one’s own body and viewing another person’s body. These experiments showed important differences in multisensory processing for one’s own body and for other bodies, and also highlighted interactions between the VET and VRT effects. The present experimental evidence demonstrated that a multisensory representation of one’s body, underpinned by a high-order fronto-parietal network, sends rapid modulatory feedback to primary somatosensory cortex, thus functionally enhancing tactile processing. These effects were highly spatially specific and depended on current body position. In contrast, vision of another person’s body can drive mental representations able to modulate tactile perception without any spatial constraint. Finally, these modulatory effects sometimes seem to interact with high-order information, such as the emotional content of a face. This allows one’s somatosensory system to adequately modulate perception of external events on the body surface as a function of its interaction with the emotional state expressed by another individual.
Abstract:
This thesis involves various aspects of crystal engineering. Chapter 1 focuses on crystals containing crown ether complexes. Aspects such as the possibility of preparing these materials by non-solution methods, i.e. by direct reaction of the solid components, their thermal behavior, and isomorphism and interconversion between hydrates are taken into account. In chapter 2 a study is presented aimed at understanding the relationship between the hydrogen bonding capability and the shape of the building blocks chosen to construct crystals. The focus is on the control exerted by shape on the organization of sandwich cations such as cobalticinium, decamethylcobalticinium and bis(benzene)chromium(I), and on the aggregation of monoanions, all containing carboxylic and carboxylate groups, into 0-D, 1-D, 2-D and 3-D networks. Reactions conducted in multi-component molecular assemblies, or co-crystals, have been recognized as a way to control reactivity in the solid state. The [2+2] photodimerization of olefins is a successful demonstration of how templated solid-state synthesis can efficiently produce unique materials with remarkable stereoselectivity under environmentally friendly conditions. A demonstration of this synthetic strategy is given in chapter 3. The combination of various types of intermolecular linkages, leading to the formation of high-order aggregates and crystalline materials or to random aggregation resulting in an amorphous precipitate, may not go to completion. In such rare cases an aggregation process intermediate between crystalline and amorphous materials is observed, resulting in the formation of a gel, i.e. a viscoelastic solid-like or liquid-like material. In chapter 4 the design of new Low Molecular Weight Gelators is presented. Aspects such as the relationships between molecular structure, crystal packing and gelation properties, and the application of this kind of gel as a medium for crystal growth of organic molecules, such as APIs, are also discussed.
Abstract:
Geochemical mapping is a valuable tool for territorial management that can be used not only in the identification of mineral resources and in geological, agricultural and forestry studies, but also in the monitoring of natural resources, giving solutions to environmental and economic problems. Stream sediments are widely used in the sampling campaigns carried out by governments and research groups worldwide for their broad representativeness of rocks and soils, for their ease of sampling and for the possibility of conducting very detailed sampling. In this context, the environmental role of stream sediments provides a good basis for the implementation of environmental management measures: the composition of river sediments is an important factor in understanding the complex dynamics that develop within catchment basins, and they therefore represent a critical environmental compartment, since they can persistently incorporate pollutants after a contamination event and release them into the biosphere if environmental conditions change. It is essential to determine whether the concentrations of certain elements, in particular heavy metals, are the result of natural erosion of rocks containing high concentrations of specific elements or are generated as residues of human activities in a given study area. This PhD thesis aims to extract the widest spectrum of information from an extensive database on the stream sediments of the Romagna rivers. The study involved low- and high-order streams in the mountain and hilly areas, as well as the sediments of the floodplain, where intensive agriculture is active. The geochemical signals recorded by the stream sediments are interpreted in order to reconstruct the natural variability related to bedrock and soil contributions, the effects of river dynamics and the anomalous sites, and, through the calculation of background values, to evaluate their level of degradation and predict the environmental risk.
Abstract:
Plasma-based X-ray lasers are attractive diagnostic instruments for a wide range of potential applications, for example in spectroscopy, microscopy and EUV lithography, owing to their short wavelength and narrow spectral bandwidth. Nevertheless, X-ray lasers are not yet widely used today, mainly because of insufficient pulse energy and, for some applications, inadequate beam quality. Significant progress has been made in this respect in recent years. The simultaneous development of pump laser systems and pumping mechanisms has made it possible to operate compact X-ray laser sources at repetition rates of up to 100 Hz. To obtain higher pulse energies, higher beam quality and full spatial coherence at the same time, intensive theoretical and experimental studies have been carried out. In this context, an experimental setup combining two X-ray laser targets, the so-called butterfly configuration, was developed in the present work. The first X-ray laser serves as a seed for the second X-ray laser medium, which acts as an amplifier (injection seeding). In this way, detrimental effects are avoided that arise during the generation of the X-ray laser through the amplification of spontaneous emission. Using the double-pulse grazing-incidence pumping scheme, also developed at GSI, the concept presented here makes it possible for the first time to pump both X-ray laser targets efficiently and with travelling-wave excitation. In a first experimental implementation, amplified silver X-ray laser pulses of 1 µJ at a wavelength of 13.9 nm were generated. From the acquired data, in addition to demonstrating amplification, the lifetime of the population inversion was determined to be 3 ps. In a follow-up experiment, the properties of a molybdenum X-ray laser plasma were investigated in more detail.
In addition to the pumping scheme previously used at GSI, a further technique based on an additional pump pulse was employed in this beam time. In both schemes, besides the demonstration of amplification, the amplifier medium was characterized in time and space. X-ray laser pulses of up to 240 nJ at a wavelength of 18.9 nm were detected. The brilliance of the amplified pulses was about two orders of magnitude above that of the original seed and more than one order of magnitude above the brilliance of an X-ray laser generated from a single target. The concept developed and experimentally verified in this work thus has the potential to produce extremely brilliant plasma-based X-ray lasers with full spatial and temporal coherence. The results discussed in this work are an essential contribution to the development of an X-ray laser intended for spectroscopic studies of highly charged heavy ions. These experiments are planned at the experimental storage ring of GSI and, in the future, also at the High-Energy Storage Ring of the FAIR facility.
Abstract:
In this work, a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization and Runge-Kutta methods for the time discretization. This makes a high-order method straightforward to realize and provides local conservation properties for the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for the modelling of orography and couples the DG method with a Kessler scheme for warm rain. The fall velocity of the rain is not, as usual, discretized implicitly within the Kessler scheme, but explicitly in the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are fully decoupled, so that very large time steps can be used for the parameterization. The coupling is realized both for operator splitting and for process splitting.
Using idealized test cases, the convergence and the global conservation properties of the newly developed dynamical core are validated. Mass is conserved globally up to machine precision. The orography modelling is validated by means of flow over mountains. The combination of DG methods and terrain-following coordinates used here allows the treatment of steeper mountains than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The influence of the order of the method, the parameterization time step and the splitting strategy on the simulation results is investigated. Finally, it is shown that, for the same time step, the DG methods are competitive in runtime with finite-difference methods owing to their better scalability.
Abstract:
We give a brief review of the functional renormalization method in quantum field theory, which is intrinsically non-perturbative, in terms of both the Polchinski equation for the Wilsonian action and the Wetterich equation for the generator of the proper vertices. For the latter we show a simple application to a theory with one real scalar field within the LPA and LPA' approximations. For the former, instead, we give a covariant "Hamiltonian" version of the Polchinski equation, which consists in performing a Legendre transform of the flow for the corresponding effective Lagrangian, replacing arbitrarily high-order derivatives of the fields with momentum fields. This approach is suitable for studying new truncations in the derivative expansion. We apply this formulation to a theory with one real scalar field and, as a novel result, derive the flow equations for a theory with N real scalar fields with O(N) internal symmetry. Within this new approach we analyze numerically the scaling solutions for N=1 in d=3 (critical Ising model), at leading order in the derivative expansion with an infinite number of couplings, encoded in two functions V(phi) and Z(phi), obtaining an estimate of the quantum anomalous dimension with 10% accuracy (compared with Monte Carlo results).
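For reference, the Wetterich equation mentioned above has the standard one-loop structure, and the leading-order derivative-expansion truncation retaining the two functions V(φ) and Z(φ) from the abstract reads (a generic statement of the method, not the thesis's specific covariant formulation):

```latex
\partial_k \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_k R_k\right],
\qquad
\Gamma_k[\phi] = \int d^d x\,\left[\tfrac{1}{2}\,Z_k(\phi)\,(\partial\phi)^2 + V_k(\phi)\right],
```

where R_k is the infrared regulator and Γ_k^(2) is the second functional derivative of the effective average action.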
Abstract:
One of the most intriguing phenomena in glass-forming systems is the dynamic crossover (T(B)), occurring well above the glass temperature (T(g)). So far, it has been estimated mainly from the linearized derivative analysis of the primary relaxation time τ(T) or viscosity η(T) experimental data, originally proposed by Stickel et al. [J. Chem. Phys. 104, 2043 (1996); J. Chem. Phys. 107, 1086 (1997)]. However, this formal procedure is based on the general validity of the Vogel-Fulcher-Tammann equation, which has been strongly questioned recently [T. Hecksher et al., Nature Phys. 4, 737 (2008); P. Lunkenheimer et al., Phys. Rev. E 81, 051504 (2010); J. C. Martinez-Garcia et al., J. Chem. Phys. 134, 024512 (2011)]. We present a qualitatively new way to identify the dynamic crossover based on the apparent enthalpy space (H(a)(') = dlnτ/d(1/T)) analysis via a new plot of lnH(a)(') vs. 1∕T, supported by the Savitzky-Golay filtering procedure for getting an insight into the noise-distorted high-order derivatives. It is shown that, depending on the ratio between the "virtual" fragility in the high-temperature dynamic domain (m(high)) and the "real" fragility at T(g) (the low-temperature dynamic domain, m = m(low)), glass formers can be split into two groups related to f < 1 and f > 1 (f = m(high)∕m(low)). The link of this phenomenon to the ratio between the apparent enthalpy and activation energy, as well as to the behavior of the configurational entropy, is indicated.
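The apparent-enthalpy analysis described above can be sketched with SciPy's Savitzky-Golay filter, which evaluates a smoothed derivative directly. The VFT parameters and temperature grid below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic Vogel-Fulcher-Tammann data (hypothetical parameters):
# ln tau = ln tau0 + D*T0 / (T - T0)
tau0, D, T0 = 1e-14, 10.0, 170.0
inv_T = np.linspace(1.0 / 400.0, 1.0 / 220.0, 200)  # uniform grid in 1/T
T = 1.0 / inv_T
ln_tau = np.log(tau0) + D * T0 / (T - T0)

# Apparent activation enthalpy H_a' = d ln(tau) / d(1/T), computed with a
# Savitzky-Golay smoothing derivative (robust against measurement noise)
delta = inv_T[1] - inv_T[0]
H_a = savgol_filter(ln_tau, window_length=11, polyorder=3, deriv=1, delta=delta)

# Analytic check for VFT: H_a' = D*T0*T^2 / (T - T0)^2
H_exact = D * T0 * T**2 / (T - T0) ** 2
```

On real, noisy τ(T) data the window length and polynomial order trade smoothing against bias; the paper's lnH(a)' vs. 1/T plot would then be built from `np.log(H_a)`.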
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
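The integral-conservation requirement described above can be illustrated with a simple flux-conserving 1-D re-binning sketch. This uses only piecewise-constant redistribution by bin overlap; the paper's parametrized Hermitian interpolation of the integrated data is not reproduced here:

```python
import numpy as np

def rebin_conservative(edges_src, values, edges_dst):
    """Re-bin histogrammed data onto new bin edges, conserving the integral.

    Treats `values` as bin integrals of a piecewise-constant density and
    redistributes them according to the fractional overlap of old and new bins.
    """
    out = np.zeros(len(edges_dst) - 1)
    for i in range(len(values)):
        a, b = edges_src[i], edges_src[i + 1]
        density = values[i] / (b - a)
        for j in range(len(out)):
            lo = max(a, edges_dst[j])
            hi = min(b, edges_dst[j + 1])
            if hi > lo:
                out[j] += density * (hi - lo)
    return out

# Three coarse bins redistributed onto six finer ones
src_edges = np.array([0.0, 1.0, 2.0, 3.0])
vals = np.array([2.0, 4.0, 1.0])
dst = rebin_conservative(src_edges, vals, np.linspace(0.0, 3.0, 7))
# The total integral is conserved: dst.sum() equals vals.sum()
```

Because each source bin's content is split by geometric overlap, the scheme also never produces negative values from positive data, which is one of the stated requirements; overshoot control, however, needs the Hermitian interpolation of the original work.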
Abstract:
Purpose: Development of an interpolation algorithm for re‐sampling spatially distributed CT‐data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re‐sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT‐data has to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re‐distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over‐ or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re‐gridding a series of X‐ray CT‐images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re‐sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re‐sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
Abstract:
Among the optical structures investigated for optical sensing purposes, a significant amount of research has been conducted on photonic crystal based sensors. A particular advantage of photonic crystal based sensors is that they show superior sensitivity for ultra-small volume sensing. In this study we investigate polarization changes in response to changes in the cover index of magneto-optically active photonic band gap structures. One-dimensional photonic band gap structures fabricated on iron garnet materials yield large polarization rotations at the band gap edges. The enhanced polarization effects serve as an excellent tool for chemical sensing, showing a high degree of sensitivity to photonic crystal cover refractive index changes. The one-dimensional waveguide photonic crystals are fabricated on single-layer bismuth-substituted rare earth iron garnet films ((Bi, Y, Lu)3(Fe, Ga)5O12) grown by liquid phase epitaxy on gadolinium gallium garnet substrates. Band gaps have been observed where Bragg scattering conditions link forward-going fundamental waveguide modes to backscattered high-order waveguide modes. Large near-band-edge polarization rotations, which increase progressively with backscattered-mode order, have been experimentally demonstrated for multiple samples with different composition, film thickness and fabrication parameters. Experimental findings are supported by theoretical analysis of Bloch mode polarization states, showing that large near-stop-band-edge rotations are induced by the magneto-photonic crystal. Theoretical and experimental analysis of the polarization rotation sensitivity to waveguide photonic crystal cover refractive index changes shows a monotonic enhancement of the rotation with cover index. The sensor is further developed for selective chemical sensing by employing polypyrrole as the photonic crystal cover layer. Polypyrrole is one of the most extensively studied conducting polymers for selective analyte detection.
Successful detection of aqueous ammonia and methanol has been achieved with Polypyrrole deposited magneto-photonic crystals.
Abstract:
The IDA model of cognition is a fully integrated artificial cognitive system reaching across the full spectrum of cognition, from low-level perception/action to high-level reasoning. Extensively based on empirical data, it accurately reflects the full range of cognitive processes found in natural cognitive systems. As a source of plausible explanations for very many cognitive processes, the IDA model provides an ideal tool to think with about how minds work. This online tutorial offers a reasonably full account of the IDA conceptual model, including background material. It also provides a high-level account of the underlying computational “mechanisms of mind” that constitute the IDA computational model.