930 results for Higher order wave moments
Abstract:
In the conceptual framework of affective neuroscience, this thesis intends to advance the understanding of the plasticity mechanisms underlying the representation of others' emotional facial expressions. Chapter 1 outlines the neurophysiological bases of Hebbian plasticity, reviews influential studies that adopted paired associative stimulation procedures, and introduces new lines of research investigating the impact of cortico-cortical paired associative stimulation (ccPAS) protocols on higher-order cognitive functions. The experiments in Chapter 2 tested the modulatory influence of a perceptual-motor training, based on the execution of emotional expressions, on subsequent emotion intensity judgements of others' high-intensity (i.e., fully visible) and low-intensity (i.e., masked) emotional expressions. As a result of the training-induced learning, participants showed a significant congruence effect, indicated by relatively higher expression intensity ratings for the emotion that had previously been trained. Interestingly, although masked expressions were judged as less emotionally intense overall, surgical facemasks did not prevent the emotion-specific effects of the training from occurring, suggesting that covering the lower part of another's face does not interact with the training-induced congruence effect. Chapter 3 implemented a transcranial magnetic stimulation study targeting neural pathways that carry re-entrant input from higher-order brain regions into lower levels of the visual processing hierarchy. We focused on cortical visual networks within the temporo-occipital stream that underpin the processing of emotional faces and are susceptible to plastic adaptations. Importantly, we tested the plasticity-induced effects in a state-dependent manner, administering ccPAS while presenting different facial expressions all conveying a specific emotion. Results indicated that the discrimination accuracy of emotion-specific expressions was enhanced following ccPAS, suggesting that a multi-coil TMS intervention may be a suitable tool to drive brain remodeling at the neural network level and consequently influence a specific behavior.
Abstract:
In this thesis, I study the notion of program equivalence, i.e., proving that two programs can be used interchangeably without altering the overall observable behaviour. This notion depends heavily on the contexts in which the programs can be used: does the context allow exceptions, parallelism, and so on? Proofs therefore need to be adapted to the expressiveness of those contexts. This thesis focuses on the pi-calculus, a concurrent programming language, under various typing constraints. Types allow us to impose different disciplines, like forcing a sequential execution or ensuring linearity, meaning an object is used exactly once. In each case, bisimulation, a standard proof technique for the pi-calculus, needs to be adapted accordingly to obtain a suitable equivalence. We then test how the modified bisimulations can be used to reason about a language with higher-order functions and references which, once translated into the pi-calculus, satisfies the typing constraints.
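For reference, the standard untyped definition that such work adapts (a textbook formulation, stated here in LaTeX; the thesis refines it under each typing discipline) is:

% A standard definition of (strong) bisimulation on labelled transition
% systems, given for reference; the typed variants constrain the
% contexts and hence the transitions that must be matched.
\begin{definition}[Bisimulation]
A symmetric relation $\mathcal{R}$ on processes is a \emph{bisimulation} if,
whenever $P \,\mathcal{R}\, Q$ and $P \xrightarrow{\alpha} P'$, there exists
$Q'$ such that $Q \xrightarrow{\alpha} Q'$ and $P' \,\mathcal{R}\, Q'$.
Processes $P$ and $Q$ are \emph{bisimilar}, written $P \sim Q$, if
$P \,\mathcal{R}\, Q$ for some bisimulation $\mathcal{R}$.
\end{definition}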
Abstract:
Fear of Missing Out (FoMO) is a pervasive apprehension that others might be having rewarding experiences from which one is absent. Consequently, individuals experiencing FoMO wish to stay constantly in contact with what others are doing and engage with social networking sites for this purpose. In recent times, FoMO has received increased attention from psychological research, as a minority of users experiencing high levels of FoMO, particularly young people, might develop problematic social networking site use, defined as the maladaptive and excessive use of social networking sites, resulting in symptoms associated with other addictions. According to the theoretical framework of the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, FoMO and certain motives for use may foster problematic use in individuals who display unmet psychosocial needs. However, to date, the I-PACE model has only conceptualized the general higher-order mechanisms related to the development of problematic use. Accordingly, the overall purpose of this dissertation was to deepen the understanding of the mediating role of FoMO between specific predisposing variables and problematic social networking site use. Adopting a psychological approach, two empirical and exploratory cross-sectional studies, conceived as independent research, were conducted through path analysis.
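In generic form (a textbook single-mediator path model, not the specific models estimated in these studies), the mediating role of FoMO corresponds to:

% Hedged sketch: a single-mediator path model where X is a predisposing
% variable, M is FoMO, and Y is problematic social networking site use.
\[
  M = aX + \varepsilon_1, \qquad
  Y = c'X + bM + \varepsilon_2,
\]
% The indirect (mediated) effect is the product ab, the direct effect is
% c', and the total effect is c = c' + ab; path analysis estimates such
% coefficients simultaneously across several predictors and outcomes.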
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of this data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and the dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks requires accounting for their temporal dimension and incorporating higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address current challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also growing concern about their responsible use. In particular, there has been an increased emphasis on addressing the limitations of interpretability in graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
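As a toy illustration of embedding-based link prediction (not the models developed in the thesis; a plain spectral embedding stands in for the learned representations, and the graph is hypothetical):

# Minimal sketch: embed nodes in a low-dimensional space and score
# candidate edges by the similarity of their endpoint embeddings.
import numpy as np

def spectral_embedding(adj: np.ndarray, dim: int) -> np.ndarray:
    """Embed nodes using the dominant eigenvectors of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj.astype(float))
    order = np.argsort(-np.abs(vals))[:dim]      # keep dominant spectrum
    return vecs[:, order] * np.abs(vals[order])  # scale by eigenvalue magnitude

def link_score(emb: np.ndarray, u: int, v: int) -> float:
    """Dot-product score: higher means a link (u, v) is more plausible."""
    return float(emb[u] @ emb[v])

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the bridge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

emb = spectral_embedding(A, dim=2)
# The within-community pair (0, 1) should outscore the cross pair (0, 5).
print(link_score(emb, 0, 1), link_score(emb, 0, 5))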
Abstract:
We show how to include moments of any order in the CAPM, extending the mean-variance and mean-variance-skewness versions available until now. Then, we present a simple way to modify the formulae in order to avoid the appearance of utility parameters. The results can be easily applied to practical portfolio design, with econometric inference and testing based on generalised method of moments procedures. An empirical application to the Brazilian stock market is discussed.
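As a schematic of the idea (a generic K-moment pricing relation in the spirit of higher-moment CAPMs; the paper's exact formulae may differ), the familiar market beta generalizes to co-moment betas:

% Hedged sketch: a K-moment extension of the CAPM, where the k-th
% "co-moment beta" generalizes the covariance beta (the k = 2 case).
\[
  \mathbb{E}[R_i] - R_f \;=\; \sum_{k=2}^{K} \lambda_k\, \beta_{i,k},
  \qquad
  \beta_{i,k} \;=\;
  \frac{\mathbb{E}\!\left[(R_i - \mathbb{E}[R_i])\,(R_m - \mathbb{E}[R_m])^{k-1}\right]}
       {\mathbb{E}\!\left[(R_m - \mathbb{E}[R_m])^{k}\right]},
\]
% K = 2 recovers the classical CAPM; K = 3 adds coskewness and K = 4
% cokurtosis. The risk premia lambda_k can be estimated by GMM from the
% implied moment conditions on excess returns.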
Abstract:
In this paper, we present an analysis of the resonant response of modified triangular metallic nanoparticles with polynomial sides. The particles are illuminated by an incident plane wave, and the method of moments is used to solve the electromagnetic scattering problem numerically. We investigate the spectral response and near-field distribution as a function of the length and polynomial order of the nanoparticles. Our results show that, in the analyzed wavelength range (0.5-1.8 µm), these particles possess a smaller number of resonances, and their resonant wavelengths, near-field enhancement and field confinement are higher than those of the conventional triangular particle with linear sides.
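In outline (the standard method-of-moments discretization; the choice of basis and testing functions depends on the specific formulation used), the scattering problem reduces to a linear system:

% Sketch of the method of moments: expand the induced surface current
% in basis functions f_n and test the integral equation to get Z I = V.
\[
  \mathbf{J}(\mathbf{r}) \approx \sum_{n=1}^{N} I_n\, \mathbf{f}_n(\mathbf{r}),
  \qquad
  \sum_{n=1}^{N} Z_{mn} I_n = V_m, \quad m = 1, \dots, N,
\]
% where Z_mn couples basis and testing functions through the Green's
% function and V_m comes from the incident plane wave; resonances appear
% as peaks of the scattered response over the wavelength sweep.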
Abstract:
Very high field (29)Si-NMR measurements using a fully (29)Si-enriched URu(2)Si(2) single crystal were carried out in order to microscopically investigate the hidden order (HO) state and adjacent magnetic phases in the high field limit. At the lowest measured temperature of 0.4 K, a clear anomaly reflecting a Fermi surface instability near 22 T inside the HO state is detected by the (29)Si shift, (29)K(c). Moreover, a strong enhancement of (29)K(c) develops near a critical field H(c) ≃ 35.6 T, and the (29)Si-NMR signal disappears suddenly at H(c), indicating the total suppression of the HO state. Nevertheless, a weak and shifted (29)Si-NMR signal reappears for fields higher than H(c) at 4.2 K, providing evidence for a magnetic structure within the magnetic phase caused by the Ising-type anisotropy of the uranium ordered moments.
Abstract:
We report measurements of the kurtosis (κ), skewness (S) and variance (σ²) of net-proton multiplicity distributions at midrapidity for Au+Au collisions at √s_NN = 19.6, 62.4 and 200 GeV, corresponding to baryon chemical potentials (μ_B) between 200 and 20 MeV. Our measurements of the products κσ² and Sσ, which can be related to theoretical calculations sensitive to baryon number susceptibilities and long-range correlations, are constant as functions of collision centrality. We compare these products with results from lattice QCD and various models without a critical point and study the √s_NN dependence of κσ². From the measurements at the three beam energies, we find no evidence for a critical point in the QCD phase diagram for μ_B below 200 MeV.
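For reference, the measured products are the standard ratios of cumulants C_n of the multiplicity distribution:

% Standard relations between moments and cumulants C_n of the
% net-proton multiplicity distribution, as used in beam-energy scans.
\[
  \sigma^2 = C_2, \qquad S = \frac{C_3}{C_2^{3/2}}, \qquad \kappa = \frac{C_4}{C_2^{2}},
  \qquad\Longrightarrow\qquad
  S\sigma = \frac{C_3}{C_2}, \qquad \kappa\sigma^2 = \frac{C_4}{C_2}.
\]
% These ratios cancel the leading volume dependence and connect directly
% to ratios of baryon-number susceptibilities computed in lattice QCD.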
Abstract:
We propose a model for D⁺ → π⁺π⁻π⁺ decays following experimental results which indicate that the two-pion interaction in the S wave is dominated by the scalar resonances f₀(600)/σ and f₀(980). The weak decay amplitude for D⁺ → Rπ⁺, where R is a resonance that subsequently decays into π⁺π⁻, is constructed in a factorization approach. In the S wave, we implement the strong decay R → π⁺π⁻ by means of a scalar form factor. This provides a unitary description of the pion-pion interaction in the entire kinematically allowed mass range m²(ππ), from threshold to about 3 GeV². In order to reproduce the experimental Dalitz plot for D⁺ → π⁺π⁻π⁺, we include contributions beyond the S wave. For the P wave, dominated by the ρ(770)⁰, we use a Breit-Wigner description. Higher waves are accounted for by using the usual isobar prescription for the f₂(1270) and ρ(1450)⁰. The major achievement is a good reproduction of the experimental m²(ππ) distribution, and of the partial as well as the total D⁺ → π⁺π⁻π⁺ branching ratios. Our values are generally smaller than the experimental ones. We discuss this shortcoming and, as a by-product, we predict a value for the poorly known D → σ transition form factor at q² = m²(π).
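For the P wave, the usual relativistic Breit-Wigner line shape (a standard form; the paper's exact parametrization of the mass-dependent width may differ) is:

% Standard relativistic Breit-Wigner amplitude with a mass-dependent
% width, as commonly used for the rho(770) in Dalitz-plot analyses.
\[
  BW_R(s) = \frac{1}{m_R^2 - s - i\, m_R\, \Gamma_R(s)},
  \qquad
  \Gamma_R(s) = \Gamma_R^{0}
  \left(\frac{p(s)}{p(m_R^2)}\right)^{\!3} \frac{m_R}{\sqrt{s}},
\]
% where s = m^2(pi pi), p(s) is the pion momentum in the dipion rest
% frame, and the cubed momentum ratio reflects the P-wave barrier factor.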
Abstract:
We consider the gravitational recoil due to nonreflection-symmetric gravitational wave emission in the context of axisymmetric Robinson-Trautman spacetimes. We show that regular initial data evolve generically into a final configuration corresponding to a Schwarzschild black hole moving with constant speed. For the case of (reflection-)symmetric initial configurations, the mass of the remnant black hole and the total energy radiated away are completely determined by the initial data, allowing us to obtain analytical expressions for some recent numerical results that have appeared in the literature. Moreover, by using the Galerkin spectral method to analyze the nonlinear regime of the Robinson-Trautman equations, we show that the recoil velocity can be estimated with good accuracy from some asymmetry measures (namely the first odd moments) of the initial data. The extension for the nonaxisymmetric case and the implications of our results for realistic situations involving head-on collision of two black holes are also discussed.
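Schematically (a hedged sketch of what such asymmetry measures can look like; the paper defines its own), the lowest odd Legendre moment of axisymmetric initial data described by a function K(θ) quantifies the forward-backward asymmetry that sources the recoil:

% Hedged sketch: the first odd Legendre moment of axisymmetric initial
% data K(theta), obtained with x = cos(theta) and P_1(x) = x.
\[
  b_1 = \frac{3}{2}\int_{0}^{\pi} K(\theta)\, \cos\theta\, \sin\theta \,\mathrm{d}\theta,
\]
% with the final recoil velocity of the remnant black hole correlating
% with b_1, and higher odd moments supplying corrections.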
Abstract:
In this paper, two different approaches for estimating the directional wave spectrum based on a vessel's first-order motions are discussed, and their predictions are compared to those provided by a wave buoy. The full-scale data were obtained in an extensive monitoring campaign based on an FPSO unit operating at Campos Basin, Brazil. Data included vessel motions, heading and tank loadings. Wave field information was obtained by means of a heave-pitch-roll buoy installed in the vicinity of the unit. Two of the methods most widely used for this kind of analysis are considered, one based on Bayesian statistical inference, the other consisting of a parametric representation of the wave spectrum. The performance of both methods is compared, and their sensitivity to input parameters is discussed. This analysis complements a set of previous validations based on numerical and towing-tank results and allows for a preliminary evaluation of reliability when applying the methodology at full scale.
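Both approaches rest on the same standard forward relation linking the cross-spectra of the measured first-order motions to the directional wave spectrum through the vessel's response amplitude operators (a textbook relation, stated here for orientation):

% Standard forward model for motion-based wave inference: the
% cross-spectrum of motions i and j integrates the directional wave
% spectrum E weighted by the transfer functions (RAOs) H_i, H_j.
\[
  S_{ij}(\omega) = \int_{-\pi}^{\pi}
    H_i(\omega, \theta)\, H_j^{*}(\omega, \theta)\,
    E(\omega, \theta)\, \mathrm{d}\theta.
\]
% Estimating E from a handful of measured S_ij is ill-posed, which is
% why Bayesian smoothness priors or parametric spectral forms are used.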
Abstract:
The higher education system in Europe is currently under stress, and the debates over its reform and future are gaining momentum. Now that, for most countries, this is a time of change in the overall society and the whole education system, the legal and political dimensions have gained prominence. This has not been matched by a more integrative approach to the problem of order, its reform and the issue of regulation, beyond the typical static and classical cost-benefit analyses. The two classical approaches for studying (and for designing the policy measures of) the reform of the higher education system - cost-benefit analysis and legal scholarship description - have to be integrated. This is the argument of our paper: the very integration of economic and legal approaches, what Warren Samuels called the legal-economic nexus, is meaningful and necessary, especially if we want to address the problem of order (as formulated by Joseph Spengler) and the overall regulation of the system. On the one hand, without neglecting the interest and insights gained from cost-benefit analysis, or other approaches to value-for-money assessment, we focus our study on the legal, social and political aspects of the regulation of the higher education system and its reform in Portugal. On the other hand, the economic and financial problems have to be taken into account, but in a more inclusive way with regard to the indirect and other socio-economic costs not contemplated in traditional or standard assessments of policies for the tertiary education sector. In the first section of the paper, we discuss the theoretical and conceptual underpinnings of our analysis, focusing on the evolutionary approach, the role of critical institutions, the legal-economic nexus and the problem of order. All these elements are related to the institutionalist tradition, from Veblen and Commons to Spengler and Samuels. The second section states the problem of regulation in the higher education system and the issue of policy formulation for tackling it. The current situation is clearly one of crisis, with the expansion of the cohorts of young students coming to an end and recurrent scandals in private institutions. In the last decade, after a protracted period of extension or expansion of the system, i.e., the continuous growth in student numbers, universities and other institutions have been competing harder to gain students and have seen their financial situation put at risk. It seems that we are entering a period of radical uncertainty and higher competition, and the new configuration that is slowly building up is one of growth in intensity, which means upgrading the quality of higher learning and getting more involved in vocational training and life-long learning. With this change, along with other deep changes in Portuguese society and the economy, the current regulation has shown signs of maladjustment. The third section presents our conclusions on the current issue of regulation and the policy challenge. First, we underline the importance of an evolutionary approach to a process of change that is essentially dynamic. Special attention is given to the issues related to an evolutionary construal of policy analysis and formulation. Second, the integration of law and economics, through the notion of the legal-economic nexus, allows us to better define the issues of regulation and the concrete problems that the universities are facing.
One such problem is the instability of the political measures regarding the public administration, on which the higher education system depends financially, legally and institutionally, to say the least. A corollary is the lack of a clear strategy in the policy reforms. Third, our research criticizes several studies, such as the one made by the OECD in late 2006 for the Ministry of Science, Technology and Higher Education, for being too static and for neglecting fundamental aspects of regulation, such as the logic of the actors, groups and organizations who are major players in the system. Finally, simply changing the legal rules will not, per se, change the behaviours that the authorities want to change. By this we mean that it is remiss of the policy maker to ignore some of the critical issues of regulation, namely the continued non-respect, by academic management and administrative bodies of universities, of the legal rules that were once promulgated. Changing the rules does not change the problem, especially without the necessary debates from the different relevant quarters that make up the higher education system; the issues of social interaction remain intact. Our treatment of the matter is organized in the following way. In the first section, the theoretical principles are developed in order to study the transformation of higher education more adequately, with a modest evolutionary theory and a legal-economic nexus of the interactions of the system and the policy challenges. After describing, in the second section, the recent evolution and current working of higher education in Portugal, we analyze the legal framework and the current regulatory practices and problems in light of the theoretical framework adopted. We end with some conclusions on the current problems of regulation and the policy measures that have been discussed in recent years.
Abstract:
An improved class of nonlinear bidirectional Boussinesq equations of sixth order, using a wave surface elevation formulation, is derived. Exact travelling wave solutions for the proposed class of nonlinear evolution equations are deduced. A new exact travelling wave solution is found which is the uniform limit of a geometric series. The ratio of this series is proportional to a classical soliton-type solution of the form of the square of a hyperbolic secant function. This happens for some values of the wave propagation velocity; for other values of this velocity the new type of soliton still appears, but the classical soliton structure vanishes in some regions of the domain. Exact solutions of the form of the square of the classical soliton are also deduced. In some cases, we find that the ratio between the amplitude of this wave and the amplitude of the classical soliton is equal to 35/36. It is shown that different families of travelling wave solutions are associated with different values of the parameters introduced in the improved equations.
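To make the "uniform limit of a geometric series" concrete (a schematic form consistent with the abstract, not necessarily the paper's exact expression), with ξ = x − ct:

% Hedged sketch: a travelling-wave profile obtained as the uniform limit
% of a geometric series whose ratio is proportional to the classical
% sech^2 soliton profile.
\[
  u(\xi) = a \sum_{n=1}^{\infty} \left( q\, \mathrm{sech}^{2}(k\xi) \right)^{n}
         = \frac{a\, q\, \mathrm{sech}^{2}(k\xi)}{1 - q\, \mathrm{sech}^{2}(k\xi)},
  \qquad |q| < 1.
\]
% The series converges uniformly because |q sech^2(k xi)| <= |q| < 1 for
% all xi, and the n = 1 term recovers the classical soliton shape.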
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing one to study, in an analogous manner, processes on scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
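In outline (a schematic of the model-reduction idea; the thesis defines the exact parametrization), the plume's conductivity anomaly is expanded in low-order Legendre polynomials over a normalized box whose location is also estimated:

% Hedged sketch of the Legendre-moment model reduction: MCMC samples a
% few coefficients c_lmn (plus the box position) instead of voxel values.
\[
  \Delta\sigma(\tilde{x}, \tilde{y}, \tilde{z}) \;\approx\;
  \sum_{l=0}^{L}\sum_{m=0}^{M}\sum_{n=0}^{N}
  c_{lmn}\, P_l(\tilde{x})\, P_m(\tilde{y})\, P_n(\tilde{z}),
  \qquad \tilde{x}, \tilde{y}, \tilde{z} \in [-1, 1].
\]
% This reduces thousands of voxel unknowns to a handful of moments,
% making three-dimensional time-lapse MCMC inversion tractable.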