887 results for Problem Behavior Theory


Relevance:

30.00%

Publisher:

Abstract:

Through the analysis of "centripetal" and "centrifugal" theories of reading, ranging across phenomenology, semiotics and the theory of aesthetic response, this research aims to define reading as an aesthetic experience of a variable and plural literariness or, more precisely, as an aesthetic relation to a function in language which in turn becomes immanent and transcendent with respect to language: immanent in the expressive perceptibility of the sign and transcendent in its restricted fictionality or fictiveness, open to the dimension of sense. Literariness is thus seen, from the standpoint of a theory of reading, as a function that negates or subverts ordinary language, understood as the normal context, but also as a function that allows language its supplement of sense. This makes the definition of what literature is, and of which texts can be regarded as literary, a definition dependent on reading, and it also calls into question the classical dichotomy between standard language and deviant, second-degree, figurative language that would supposedly distinguish literature. These four essays aim to show that reading, as an aesthetic practice, is the expression of an oscillation between a Fiction, variable in its effects, and a Reception, which is an aesthetic response controlled by the text but also an aesthetic relation to the verbal artefact. Only in this way can the paradoxical character of reading be understood: its standing between passive perception and active performance, between aspectual attention and intentional comprehension. These modalities are also reflected in the dialectical nature of reading, as a dialectic of openness and closure, but also of freedom and fidelity, a response to a stimulus that can be interpreted as a question, and that presents reading itself as a premise of interpretation, as an aesthetic moment. A theory of reading therefore necessarily depends on a theory of art that presents itself as functional, concerned more with When is there art?/How does it work? than with What is Art?, making the latter problem dependent on the former. Moreover, this When of Art, which defines the work of art as art-at-work, depends in turn, in the literary field, on the question When is there literary aesthetic experience? and on its conditions, those of fiction and reception.

Relevance:

30.00%

Publisher:

Abstract:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution, which I conjecture to be a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which this prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set out, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an Instrumental Regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
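As a hedged illustration of the regularization idea mentioned above (a generic linear ill-posed problem in standard notation, not the thesis' own construction), Tikhonov regularization replaces an unstable inversion by a penalized one; the "regularized posterior distribution" applies the same idea to the posterior in function space:

```latex
% Illustrative sketch only: a linear ill-posed problem K x = y, with K a
% compact operator between Hilbert spaces, regularized à la Tikhonov.
\begin{equation}
  \hat{x}_{\alpha}
    = \arg\min_{x}\; \|K x - y\|^{2} + \alpha \|x\|^{2}
    = \left( K^{*}K + \alpha I \right)^{-1} K^{*} y ,
  \qquad \alpha > 0 .
\end{equation}
% Letting alpha -> 0 at a suitable rate with the noise level restores
% consistency, which is the role the regularization plays in the posterior.
```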

Relevance:

30.00%

Publisher:

Abstract:

Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity has been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of Einstein's equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism assumptions about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, unlike the points of homogeneous Newtonian space, they depend on a NIF (non-inertial frame) and are embedded in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality", but the inevitably relational elements involved in the physical identification of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common to both the absolutist and substantivalist tradition and the relationalist one. The aim of this thesis is to propose a method of approaching the problem that is, at least initially, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by the work of Lusanna and Pauri and following their underlying philosophical assumptions, we intend to converge partially on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered apart from the empirical necessity of a theory of measurement.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact do not have a tensorial, and therefore not a geometric, nature, but can be analyzed at three levels: a first one, the potential level (which the theory identifies with the components of the metric tensor); a second one, the connection level (which in the theory determines the forces acting on masses and, as such, offers a level of description analogous to the one that Newtonian gravitation provides in terms of the components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on this "third level" seems to present an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the works analyzed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the geometric structure of spacetime; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part, the Weyl tensor. The convergence with Lusanna and Pauri's claim that there exists a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their intrinsic identification), although reached through different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). Ultimately, matter, regarded as the physical counterpart of spacetime curvature, whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational, and non-local structure of spacetime also through the "behavior" of the Weyl tensor relative to the Riemann tensor. This "behavior", which leads to geometric effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that matter located elsewhere acts indefinitely far away. Finally, we think that the characteristic relationality of the spacetime structure should be identified with this "primitive level of organization" of spacetime.
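As a hedged reminder (standard differential geometry in four dimensions, not a result of the thesis), the "third level" quantities mentioned above are related by the decomposition of the Riemann tensor into its Ricci part, fixed locally by matter through Einstein's equations, and its trace-free Weyl part, which survives in empty spacetimes:

```latex
% Four-dimensional decomposition of the Riemann tensor into Weyl and Ricci parts
% (square brackets denote antisymmetrization with the usual factor 1/2).
\begin{equation}
  R_{abcd}
    = C_{abcd}
    + \left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right)
    - \tfrac{1}{3}\, R \, g_{a[c} g_{d]b} .
\end{equation}
% In vacuum (R_{ab} = 0) the full curvature reduces to the Weyl tensor C_{abcd},
% which is why its invariants can encode the "primitive level of objectivity"
% discussed above.
```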

Relevance:

30.00%

Publisher:

Abstract:

In the recent decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put in serious jeopardy the economic system of a region as well as of a country. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as the 9/11 attacks or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it appears very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for a correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In Civil Engineering, this field addresses the techniques needed to recover mechanical characteristics such as stiffness or mass starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. The knowledge of these parameters is helpful in the Model Updating procedure, which permits the definition of corrected theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data. Therefore, the updated model becomes a very effective control practice when it comes to rehabilitation of structures or damage assessment. Instrumenting an entire structure is sometimes an unfeasible procedure, either because of the high cost involved or because it is not possible to physically reach every point of the structure. Therefore, numerous scholars have tried to address this problem. In general, two main methods are involved. Given the limited number of sensors, in the first case it is possible to gather time histories only for some locations, then move the instruments to other locations and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the principal first modes. These two problems are well presented in the works of Balsamo [1] for the application to a simple system and Jun [2] for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the requested functions. Once again, the objective of this work is to present a general methodology to analyze large structures using a limited amount of instrumentation while, at the same time, obtaining the most information about the identified structure without resorting to methodologies that are difficult to interpret. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab.
Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation that allows extensive use of substructuring results is developed and implemented.
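As a hedged illustration of the identification step described above (a minimal textbook-style Eigensystem Realization Algorithm in Python, not the thesis' own Matlab implementation), a discrete-time state-space model can be recovered from impulse-response Markov parameters as follows:

```python
import numpy as np

def era(markov, n_states, p=20, q=20):
    """Minimal Eigensystem Realization Algorithm sketch.

    markov   : sequence of impulse-response (Markov) parameters Y_k,
               each of shape (n_outputs, n_inputs), k = 1 .. p + q
    n_states : model order to retain
    Returns the discrete-time state-space matrices (A, B, C).
    """
    ny, nu = markov[0].shape
    # Block Hankel matrices H(0) and H(1) built from the Markov parameters.
    H0 = np.block([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.block([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    # Truncated SVD of H(0).
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
    S_sqrt = np.diag(np.sqrt(s))
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    # Balanced realization.
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt
    B = (S_sqrt @ Vt)[:, :nu]
    C = (U @ S_sqrt)[:ny, :]
    return A, B, C
```

Natural frequencies and damping ratios would then follow from the eigenvalues of A together with the sampling interval.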

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It focuses on the optimization of the physical layer of digital communication systems and provides innovations for both multi-carrier and single-carrier systems. For the former we first addressed the problem of capacity in the presence of several nuisances. Moreover, we extended the concept of Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate associated with a given constellation, achieving improved accuracy with respect to the traditional Union Bound at no additional complexity. Finally, we designed a low-complexity SNR estimator, which saves one half of the multiplications with respect to the ML estimator while providing similar estimation accuracy.
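As a hedged illustration of the PAPR metric mentioned above (a generic oversampled OFDM example, not the mapping scheme proposed in the thesis), the peak-to-average power ratio of a time-domain OFDM symbol can be estimated as follows:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Example: one OFDM symbol with 256 QPSK-mapped subcarriers, 4x oversampled
# IFFT to approximate the continuous-time peak of the envelope.
rng = np.random.default_rng(0)
n_sc, oversample = 256, 4
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
spectrum = np.concatenate([qpsk, np.zeros((oversample - 1) * n_sc)])
time_signal = np.fft.ifft(spectrum)
print(f"PAPR = {papr_db(time_signal):.1f} dB")
```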

Relevance:

30.00%

Publisher:

Abstract:

A complete understanding of the glass transition is still a challenging problem. Some researchers attribute it to the (hypothetical) occurrence of a static phase transition, others emphasize the dynamical transition of mode-coupling theory from an ergodic to a non-ergodic state. A class of disordered spin models has been found which unifies both scenarios. One of these models is the p-state infinite-range Potts glass with p > 4, which exhibits in the thermodynamic limit both a dynamical phase transition at a temperature T_D and a static one at T_0 < T_D. In this model every spin interacts with all the others, irrespective of distance. Interactions are drawn from a Gaussian distribution. In order to better understand its behavior for a finite number N of spins and the approach to the thermodynamic limit, we have performed extensive Monte Carlo simulations of the p=10 Potts glass up to N=2560. The time-dependent spin-autocorrelation function C(t) shows strong finite-size effects and does not show a plateau even for temperatures around the dynamical critical temperature T_D. We show that the N- and T-dependence of the relaxation time for T > T_D can be understood by means of a dynamical finite-size scaling Ansatz. The behavior in the spin-glass phase down to a temperature T=0.7 (about 60% of the transition temperature) is studied. Well-equilibrated configurations are obtained with the parallel tempering method, which is also useful for properly establishing static properties, such as the order parameter distribution function P(q). Evidence is given for compatibility with a one-step replica symmetry breaking scenario. The study of the cumulants of the order parameter does not permit a reliable estimation of the static transition temperature. The autocorrelation function at low T exhibits a two-step decay, and a scaling behavior typical of supercooled liquids, the time-temperature superposition principle, is observed. In this region the dynamics is governed by Arrhenius relaxations, with barriers growing like N^{1/2}. We analyzed the single-spin dynamics down to temperatures much lower than the dynamical transition temperature. We found strong dynamical heterogeneities, which explain the non-exponential character of the spin-autocorrelation function. The spins seem to relax in dynamical clusters. The model in three dimensions tends to acquire ferromagnetic order for equal concentrations of ferro- and antiferromagnetic bonds. The ordering has different characteristics from the pure ferromagnet. The spin-glass susceptibility behaves like chi_{SG} proportional to 1/T in the region where a spin glass is predicted to exist in mean field. The analysis of the cumulants is also consistent with the absence of spin-glass ordering at finite temperature. The dynamics shows multi-scale relaxations if a bimodal distribution of bonds is used. We propose to understand this with a model based on the local spin configuration. This is consistent with the absence of plateaus if Gaussian interactions are used.
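As a hedged sketch of the kind of simulation described above (a minimal single-temperature Metropolis routine for a mean-field Potts glass with Gaussian couplings, using a simplified energy convention; the thesis itself relies on parallel tempering and far larger systems), one sweep could look like this:

```python
import numpy as np

def potts_glass_sweep(spins, J, beta, p, rng):
    """One Metropolis sweep of a mean-field p-state Potts glass.

    spins : (N,) integer array with values in {0, ..., p-1}
    J     : (N, N) symmetric Gaussian coupling matrix with zero diagonal
    Simplified energy convention: H = -sum_{i<j} J_ij * delta(s_i, s_j).
    """
    N = len(spins)
    for i in rng.permutation(N):
        new = rng.integers(p)
        if new == spins[i]:
            continue
        # Energy change from updating spin i: only bonds touching i matter.
        dE = -(J[i] @ (spins == new)) + (J[i] @ (spins == spins[i]))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = new
    return spins

# Tiny usage example (N far smaller than in the simulations described above).
rng = np.random.default_rng(1)
N, p = 64, 10
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1)
J = J + J.T                      # symmetric couplings, zero diagonal
spins = rng.integers(p, size=N)
for _ in range(100):
    potts_glass_sweep(spins, J, beta=1.0, p=p, rng=rng)
```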

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, atomistic simulations are performed to investigate hydrophobic solvation and hydrophobic interactions in cosolvent/water binary mixtures. Many cosolvent/water binary mixtures exhibit non-ideal behavior caused by aggregation at the molecular scale, although they are stable and homogeneous at the macroscopic scale. Force-field based atomistic simulations provide routes to relate atomistic-scale structure and interactions to thermodynamic solution properties. The predicted solution properties are, however, sensitive to the parameters used to describe the molecular interactions. In this thesis, a force field for tertiary butanol (TBA) and water mixtures is parameterized by making use of the Kirkwood-Buff theory of solutions. The new force field is capable of describing the alcohol-alcohol, water-water and alcohol-water clustering in the solution, as well as the chemical potential derivatives of the solution components, in agreement with experimental data. With the new force field, the preferential solvation and the solvation thermodynamics of a hydrophobic solute in TBA/water mixtures have been studied. First, methane solvation at various TBA/water concentrations is discussed in terms of solvation free energy, enthalpy and entropy changes, which have been compared to experimental data. We observe that the methane solvation free energy varies smoothly with the alcohol/water composition, while the solvation enthalpies and entropies vary non-monotonically. The latter occurs because of structural solvent-reorganization contributions, which are not present in the free energy change due to exact enthalpy-entropy compensation. It is therefore concluded that the enthalpy and entropy of solvation provide more detailed information on the reorganization of solvent molecules around the inserted solute. Hydrophobic interactions in binary urea/water mixtures are discussed next. This system is particularly relevant in biology (protein folding/unfolding); however, changes in the hydrophobic interaction induced by urea molecules are not well understood. In this thesis, this interaction has been studied by calculating the free energy (potential of mean force), enthalpy and entropy changes as a function of the solute-solute distance in water and in aqueous urea (6.9 M) solution. In chapter 5, the potential of mean force in both solution systems is analyzed in terms of its enthalpic and entropic contributions. In particular, the contributions of solvent reorganization to the enthalpy and entropy changes are studied separately to better understand which changes in the interactions in the system contribute to the free energy of association of the nonpolar solutes. We observe that in aqueous urea the association between nonpolar solutes remains thermodynamically favorable (i.e., as is the case in pure water). This observation contrasts with the long-standing belief that clusters of nonpolar molecules dissolve completely in the presence of urea molecules. The consequences of our observations for the stability of proteins in concentrated urea solutions are discussed in chapter 6 of the thesis.
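As a hedged reminder of the quantities underlying the parameterization and analysis described above (standard statistical-mechanics definitions in generic notation, not results of the thesis), the Kirkwood-Buff integrals and the potential of mean force are obtained from pair distribution functions:

```latex
% Kirkwood-Buff integral between species i and j, computed from the radial
% distribution function g_ij(r) in the open-system (grand-canonical) limit:
\begin{equation}
  G_{ij} = 4\pi \int_{0}^{\infty} \left[ g_{ij}(r) - 1 \right] r^{2}\, \mathrm{d}r .
\end{equation}
% Potential of mean force between two solutes at separation r, decomposed
% (relative to infinite separation) into enthalpic and entropic contributions:
\begin{equation}
  w(r) = -k_{\mathrm{B}} T \ln g(r) = \Delta H(r) - T\, \Delta S(r) .
\end{equation}
```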

Relevance:

30.00%

Publisher:

Abstract:

This thesis examines the literature on local home bias, i.e. investor preference for geographically nearby stocks, and investigates the role of a firm's visibility, profitability, and opacity in explaining such behavior. While a firm's visibility is expected to proxy for the behavioral root of such a preference, its profitability and opacity are expected to capture the informational one. I find that less visible, and more profitable and opaque, firms, conditionally on demand, benefit from being headquartered in regions characterized by a scarcity of listed firms (local supply of stocks). Specifically, the estimates suggest that firms headquartered in regions with a poor supply of stocks would be worth i) 11 percent more if non-visible, non-profitable and non-opaque; ii) 16 percent more if profitable; and iii) 28 percent more if both profitable and opaque. Overall, as these features are able to explain most, albeit not all, of the local home bias effect, I argue and then assess that most of the preference for local stocks is determined by a successful attempt to exploit a local information advantage (60 percent), while the rest is determined by a mere (irrational) feeling of familiarity with the local firm (40 percent). Several significant methodological, theoretical, and practical implications follow.

Relevance:

30.00%

Publisher:

Abstract:

Biological membranes are lipid bilayers that behave like two-dimensional fluids. The energy of such a fluid surface can often be described by a Hamiltonian that is invariant under reparametrizations of the surface and depends only on its geometry. Contributions from internal degrees of freedom and from the environment can be included in the formalism. In the present work, this approach is used to study the mechanics of fluid membranes and similar surfaces. Stresses and torques in the surface can be expressed in terms of covariant tensors. These can then be used, for example, to determine the equilibrium position of the contact line at which two adhering surfaces detach from each other. With the exception of capillary phenomena, the surface energy depends not only on translations of the contact line but also on changes in slope or even curvature. The resulting boundary conditions correspond to the equilibrium conditions on forces and torques if the contact line can move freely. If one of the surfaces is rigid, the variation must locally follow that surface. Stresses and torques then contribute to a single equilibrium condition; their contributions can no longer be identified separately. To make quantitative statements about the behavior of a fluid surface, its elastic properties must be known. The "nanodrum" experimental setup makes it possible to probe membrane properties locally: it consists of a pore-spanning membrane that is pushed into the pore by the tip of an atomic force microscope during the experiment. The linear course of the resulting force-distance curves can be reproduced with the theory developed in this work if the influence of adhesion between tip and membrane is neglected. Including this effect in the calculations changes the result considerably: force-distance curves are no longer linear, and hysteresis and nonvanishing detachment forces appear. The predictions of the calculations could be used in future experiments to determine parameters such as the bending rigidity of the membrane with nanometer resolution. Once the material properties are known, problems of membrane mechanics can be examined more closely. Surface-mediated interactions are an interesting example in this context. With the help of the stress tensor mentioned above, analytical expressions for the curvature-mediated force between two particles, representing for instance proteins, can be derived. In addition, the balance of forces and torques is used to derive several conditions on the geometry of the membrane. For the case of two infinitely long cylinders on the membrane, these conditions are combined with profile calculations to make quantitative statements about the interaction. Theory and experiment reach their limits when it comes to correctly assessing the relevance of curvature-mediated interactions in the biological cell. In such a case, computer simulations offer an alternative approach: the simulations presented here predict that proteins can aggregate and form membrane vesicles as soon as each protein induces a minimum curvature in the membrane. The radius of the vesicles depends strongly on the locally imposed curvature. The result of the simulations is qualitatively confirmed in this work by an approximate theoretical model.
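As a hedged example of the kind of geometric, reparametrization-invariant Hamiltonian referred to above (the standard Canham-Helfrich form, quoted for orientation rather than taken from the thesis), one may write:

```latex
% Canham-Helfrich energy of a fluid membrane patch Sigma: bending rigidity
% kappa, Gaussian rigidity kappa_bar, tension sigma; H and K are the mean
% and Gaussian curvature, C_0 a spontaneous curvature.
\begin{equation}
  E[\Sigma] = \int_{\Sigma} \mathrm{d}A
  \left[ \frac{\kappa}{2} \left( 2H - C_{0} \right)^{2}
       + \bar{\kappa}\, K + \sigma \right] .
\end{equation}
```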

Relevance:

30.00%

Publisher:

Abstract:

The subject of this work is the diffusion of turbulence into a non-turbulent flow. Such a phenomenon can be found in almost every practical case of turbulent flow: all types of shear flows (wakes, jets, boundary layers) present some boundary between the turbulence and the non-turbulent surroundings; all transients from a laminar flow to turbulence must account for turbulent diffusion; the mixing of flows often involves the injection of a turbulent solution into a non-turbulent fluid. The mechanism of what Phillips defined as "the erosion by turbulence of the underlying non-turbulent flow" is called entrainment. It is usually considered to operate on two scales with different mechanics: the small-scale nibbling, which is the entrainment of fluid by viscous diffusion of turbulence, and the large-scale engulfment, which entraps large volumes of flow to be "digested" subsequently by viscous diffusion. The exact role of each of them in the overall entrainment rate is still not well understood, nor is the interplay between these two mechanisms of diffusion. It is nevertheless accepted that the entrainment rate scales with the large-scale properties of the flow, while it is not understood how the large-scale inertial behavior can affect an intrinsically viscous phenomenon such as the diffusion of vorticity. In the present work we therefore address the problem of turbulent diffusion through pseudo-spectral DNS simulations of the interface between a volume of decaying turbulence and quiescent flow. Such simulations give us first-hand measurements of the velocity, vorticity and strain fields at the interface; moreover, the framework of unforced decaying turbulence permits the study of both the spatial and the temporal evolution of these fields. The analysis shows that for this kind of flow the overall production of enstrophy, i.e. the square of the vorticity omega^2, is dominated near the interface by the local inertial transport of "fresh vorticity" coming from the turbulent flow. Viscous diffusion, instead, plays a major role in enstrophy production on the outer side of the interface, where the nibbling process is dominant. The data from our simulations seem to confirm the theory of an inertially stirred viscous phenomenon proposed by other authors and provide new data about the inertial diffusion of turbulence across the interface.
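As a hedged reference for the budget analyzed above (the standard enstrophy transport equation for incompressible flow, not a result specific to this work), the competing inertial and viscous contributions can be written as:

```latex
% Enstrophy (omega^2/2) budget for incompressible flow: advection, vortex
% stretching by the strain rate S_ij, viscous diffusion, viscous dissipation.
\begin{equation}
  \frac{\partial}{\partial t}\!\left(\frac{\omega_i \omega_i}{2}\right)
  + u_j \frac{\partial}{\partial x_j}\!\left(\frac{\omega_i \omega_i}{2}\right)
  = \omega_i S_{ij} \omega_j
  + \nu \frac{\partial^{2}}{\partial x_j \partial x_j}\!\left(\frac{\omega_i \omega_i}{2}\right)
  - \nu \frac{\partial \omega_i}{\partial x_j}\frac{\partial \omega_i}{\partial x_j} .
\end{equation}
```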

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GP). In analogy to classical electrodynamics the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams - 100 one-loop diagrams need to be calculated - several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulations of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
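As a hedged illustration of the classical analogy invoked above (the elementary induced-dipole relations; the precise normalization of the nucleon generalized polarizabilities differs and is fixed by the multipole expansion of the VCS amplitude), one has:

```latex
% Classical induced electric and magnetic dipole moments of a polarizable
% medium in weak static fields: alpha_E and beta_M play the role that the
% two scalar generalized polarizabilities generalize for the nucleon.
\begin{equation}
  \vec{p} = \alpha_{E}\, \vec{E}, \qquad \vec{m} = \beta_{M}\, \vec{H} .
\end{equation}
```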

Relevance:

30.00%

Publisher:

Abstract:

We consider the heat flux through a domain with subregions in which the thermal capacity approaches zero. In these subregions the parabolic heat equation degenerates to an elliptic one. We show the well-posedness of such parabolic-elliptic differential equations for general non-negative L-infinity-capacities and study the continuity of the solutions with respect to the capacity, thus giving a rigorous justification for modeling a small thermal capacity by setting it to zero. We also characterize weak directional derivatives of the temperature with respect to capacity as solutions of related parabolic-elliptic problems.
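As a hedged sketch of the setting described above (generic notation, not necessarily that of the paper), the parabolic-elliptic problem can be written with a non-negative capacity coefficient multiplying the time derivative:

```latex
% Heat equation with a non-negative L^infinity thermal capacity c(x) >= 0:
% where c > 0 the equation is parabolic, where c = 0 it degenerates to an
% elliptic equation in the space variables.
\begin{equation}
  \partial_t \bigl( c(x)\, u(x,t) \bigr)
  - \nabla \!\cdot\! \bigl( \kappa(x)\, \nabla u(x,t) \bigr) = f(x,t)
  \qquad \text{in } \Omega \times (0,T) .
\end{equation}
```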

Relevance:

30.00%

Publisher:

Abstract:

Fatigue life in metals is predicted using regression analysis of large sets of experimental data, thus representing the material's macroscopic response. Furthermore, a high variability in the short crack growth (SCG) rate has been observed in polycrystalline materials, in which the evolution and distribution of local plasticity are strongly influenced by microstructural features. The present work serves to (a) identify the relationship between the crack driving force and the local microstructure in the proximity of the crack tip and (b) define the correlation between the scatter observed in the SCG rates and variability in the microstructure. A crystal plasticity model based on the fast Fourier transform formulation of the elasto-viscoplastic problem (CP-EVP-FFT) is used, since the ability to account for both the elastic and the plastic regime is critical in fatigue. Fatigue is governed by slip irreversibility, resulting in crack growth, which starts to occur during the local elasto-plastic transition. To investigate the effects of microstructure variability on the SCG rate, sets of different microstructure realizations are constructed, in which cracks of different lengths are introduced to mimic quasi-static SCG in engineering alloys. From these results, the behavior of characteristic variables at different length scales is analyzed: (i) von Mises stress fields, (ii) resolved shear stress/strain in the pertinent slip systems, and (iii) slip accumulation/irreversibilities. Through fatigue indicator parameters (FIPs), the scatter within the SCG rates is related to variability in the microstructural features; the results demonstrate that this relationship between microstructure variability and uncertainty in fatigue behavior is critical for accurate fatigue life prediction.
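As a hedged reminder of quantity (ii) above (the standard Schmid relation used in crystal plasticity, in generic notation), the resolved shear stress on a slip system follows from the local stress tensor:

```latex
% Resolved shear stress on slip system alpha with slip direction s^alpha and
% slip-plane normal n^alpha, given the local (symmetric) Cauchy stress sigma:
\begin{equation}
  \tau^{\alpha}
  = \boldsymbol{\sigma} : \left( \mathbf{s}^{\alpha} \otimes \mathbf{n}^{\alpha} \right)
  = s^{\alpha}_{i}\, \sigma_{ij}\, n^{\alpha}_{j} .
\end{equation}
```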

Relevance:

30.00%

Publisher:

Abstract:

The present work is motivated by biological questions regarding the behavior of membrane potentials in neurons. A widely considered model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs; the potential is then reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In the present work four different cases are discussed, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as i.i.d. hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly. Moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter the efficiency of the minimum-distance estimators is examined on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
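As a hedged illustration of the spiking model described above (a simple Euler-Maruyama simulation of an Ornstein-Uhlenbeck potential with threshold S and reset x_0; the parameter values are made up for the example), the inter-spike intervals can be generated as follows:

```python
import numpy as np

def simulate_interspike_times(n_spikes, x0, S, theta, mu, sigma, dt=1e-4, rng=None):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck membrane potential
    dX_t = theta*(mu - X_t) dt + sigma dB_t, reset to x0 whenever X crosses S.
    Returns the simulated inter-spike intervals (hitting times of S from x0)."""
    rng = rng or np.random.default_rng()
    intervals = []
    for _ in range(n_spikes):
        x, t = x0, 0.0
        while x < S:
            x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        intervals.append(t)   # spike occurred, potential is reset to x0
    return np.array(intervals)

# Example with illustrative (made-up) parameters: the resting level mu lies
# above the threshold S, so spikes occur in finite time.
isi = simulate_interspike_times(n_spikes=50, x0=0.0, S=1.0,
                                theta=1.0, mu=1.2, sigma=0.3)
print(f"mean ISI = {isi.mean():.3f}, sd = {isi.std():.3f}")
```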

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. As extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. There is a common belief that only if we can solve the interoperability problem can we have a real Internet of Things. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application-protocol level. Moreover, the primitives are network- and transport-independent and make no assumption in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data: after an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Finally, this dissertation presents our approach to increasing the throughput of filtering semantic data by a factor of ten.
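As a hedged, hypothetical example of the kind of semantic payload advocated above (a JSON-LD document using the public SOSA vocabulary; this is not the actual Ponte data model), a sensor reading might be represented as:

```python
import json

# Hypothetical JSON-LD document for a temperature reading; the @context maps
# short keys to full IRIs so that generic filters can reason on the semantics.
reading = {
    "@context": {
        "sosa": "http://www.w3.org/ns/sosa/",
        "value": "sosa:hasSimpleResult",
        "observedProperty": "sosa:observedProperty",
    },
    "@id": "urn:example:sensor-42:obs:1",
    "@type": "sosa:Observation",
    "observedProperty": "http://example.org/properties/temperature",
    "value": 21.7,
}

print(json.dumps(reading, indent=2))
```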