880 results for theory and modeling
Abstract:
This dissertation explores the entanglement between the visionary capacity of feminist theory to shape sustainable futures and the active contribution of feminist speculative fiction to the conceptual debate about the climate crisis. Over the last few years, increasing critical attention has been paid to ecofeminist perspectives on climate change, which identify a core cause of the climate crisis in the patriarchal domination of nature, considered to go hand in hand with the oppression of women. What remains to be thoroughly scrutinised is the linkage between ecofeminist theories and other ethical stances capable of countering colonising epistemologies of mastery and dominion over nature. This dissertation intervenes in the debate about the master narrative of the Anthropocene, and about the one-dimensional perspective that often characterises its literary representations, from a feminist perspective that also aims at decolonising the imagination; it looks at literary texts that consider patriarchal domination of nature in its intersections with other injustices that play out within the Anthropocene, with a particular focus on race, colonialism, and capitalism. After an overview of the linkages between gender and climate change and between feminism and the environmental humanities, it introduces the genre of climate fiction, examining its main tropes. In an attempt to find alternatives to the mainstream narrative of the Anthropocene (namely to its gender-neutrality, colour-blindness, and anthropocentrism), it focuses on contemporary works of speculative fiction by four Anglophone women authors that specifically address the inequitable impacts of climate change experienced not only by women, but also by sexualised, racialised, and naturalised Others. These texts were chosen for their specific engagement with the relationship between climate change, global capitalism, and an uncritical trust in techno-fixes on the one hand, and structural inequalities generated by patriarchy, racism, and intersecting systems of oppression on the other.
Assessing brain connectivity through electroencephalographic signal processing and modeling analysis
Abstract:
Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
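As an illustration of the kind of estimator evaluated in the first part, the following is a minimal NumPy sketch of time-domain Granger causality for a pair of signals: y Granger-causes x if including the past of y reduces the residual variance of an autoregressive model of x. This is a generic textbook implementation with illustrative parameters, not the thesis's actual pipeline.

    import numpy as np

    def granger_causality(x, y, order=5):
        # GC from y to x: log ratio of residual variances of the
        # restricted (past of x only) vs. full (past of x and y) models.
        n = len(x)
        lags = lambda s: np.column_stack(
            [s[order - k:n - k] for k in range(1, order + 1)])
        target = x[order:]
        def resid_var(A):
            coef, *_ = np.linalg.lstsq(A, target, rcond=None)
            return np.var(target - A @ coef)
        return np.log(resid_var(lags(x)) /
                      resid_var(np.column_stack([lags(x), lags(y)])))

    # Coupled test signals: y drives x with a one-sample delay
    rng = np.random.default_rng(0)
    n = 2000
    y = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.standard_normal()
    print(granger_causality(x, y))  # clearly positive: y -> x detected
    print(granger_causality(y, x))  # near zero: no reverse coupling

Spectral variants of the same idea underlie the frequency-domain analyses mentioned above.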
Abstract:
This PhD thesis focuses on studying the classical scattering of massive/massless particles off black holes, and on investigating double copy relations between classical observables in gauge theories and gravity. This is done in the Post-Minkowskian approximation, i.e. a perturbative expansion of observables controlled by the gravitational coupling κ = √(32πG_N), with G_N being Newton's constant. The investigation is performed using the Worldline Quantum Field Theory (WQFT), which combines a worldline path integral describing the scattering objects with a QFT path integral, in the Born approximation, describing the intermediate bosons exchanged by the massive/massless particles in the scattering event. We introduce the WQFT by deriving a relation between the Kosower-Maybee-O'Connell (KMOC) limit of amplitudes and worldline path integrals, and then use it to study the classical Compton amplitude and higher-point amplitudes. We also present an application of our formulation to the case of Hard Thermal Loops (HTL), by explicitly evaluating hard thermal currents in gauge theory and gravity. Next we move to the investigation of the classical double copy (CDC), a powerful tool to generate integrands for classical observables related to the binary inspiral problem in General Relativity. In order to use a Bern-Carrasco-Johansson (BCJ)-like prescription directly at the classical level, one has to identify a double copy (DC) kernel encoding the locality structure of the classical amplitude. Such a kernel is evaluated using a theory in which scalar particles interact through bi-adjoint scalars. We show here how to push the classical double copy forward so as to account for spinning particles, in the framework of the WQFT. Here the quantization procedure on the worldline allows us to fully reconstruct the quantum theory on the gravitational side. Finally, we investigate how to describe the scattering of massless particles off black holes in the WQFT.
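Schematically, and as a sketch consistent with the conventions above rather than a quotation from the thesis, the post-Minkowskian expansion of a classical observable O (e.g. the impulse of one of the scattering particles) reads

    \kappa = \sqrt{32\pi G_N}, \qquad
    O = \sum_{n \ge 1} \kappa^{2n}\, O^{(n)} = \sum_{n \ge 1} (32\pi G_N)^{n}\, O^{(n)},

where the order-n term is the nPM contribution, the n = 1 term reproducing the leading gravitational deflection.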
Abstract:
This thesis offers an introduction to geometric deep learning. The first part presents the main concepts of graph theory and introduces a diffusion dynamic on graphs, in analogy with the heat equation. Next, starting from the linear classifier, the architectures that led to the development of graph convolutional networks are introduced. Finally, examples of some algorithms used in geometric deep learning are analysed, and an implementation on the Cora dataset, a graph-structured dataset, is presented.
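To give a flavour of the implementations discussed in the final part, here is a minimal NumPy sketch of a single graph-convolution step in the Kipf-Welling form H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); a toy 5-node cycle graph stands in for the Cora citation graph, and all values are illustrative.

    import numpy as np

    def gcn_layer(A, H, W):
        # One graph-convolution step: normalise the adjacency matrix
        # with self-loops, aggregate neighbour features, apply ReLU.
        A_hat = A + np.eye(A.shape[0])              # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0, 0, 1],                  # 5-node cycle graph
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    H = rng.standard_normal((5, 3))                 # node features
    W = rng.standard_normal((3, 2))                 # learnable weights
    print(gcn_layer(A, H, W).shape)                 # (5, 2)

Stacking such layers, with the weights trained by gradient descent on labelled nodes, yields the semi-supervised node classifier applied to Cora.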
Abstract:
Scalar-tensor theories of gravity are a class of alternatives to general relativity in which the gravitational interaction is described both by the metric and by a scalar field. A characteristic example is Brans-Dicke theory, introduced as an extension of general relativity intended to make it consistent with Mach's principle. This thesis presents an analysis of the main aspects of this theory, studying its theoretical foundations and the cosmological model that derives from it, and highlighting its limits and critical issues; the results of the experiments carried out so far to test the foundations and predictions of the model are then presented.
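For reference, the defining formula of the theory analysed here is the Brans-Dicke action in the Jordan frame, in its standard textbook form:

    S = \frac{1}{16\pi} \int d^4x \, \sqrt{-g} \left( \phi R
        - \frac{\omega}{\phi}\, g^{\mu\nu} \partial_\mu\phi \, \partial_\nu\phi \right)
        + S_m[g_{\mu\nu}, \psi],

where the scalar field φ plays the role of an inverse gravitational coupling, ω is the dimensionless Brans-Dicke parameter (general relativity is recovered in the limit ω → ∞), and the matter fields ψ couple only to the metric.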
Abstract:
Ripple-based controls can strongly reduce the required output capacitance in PowerSoC converters thanks to a very fast dynamic response. Unfortunately, these controls are prone to sub-harmonic oscillations, and several parameters affect the stability of these systems. This paper derives and validates a simulation-based modeling and stability analysis of a closed-loop V²Ic control applied to a 5 MHz buck converter, using discrete modeling and Floquet theory to predict stability. This allows the derivation of a sensitivity analysis for designing robust systems. The work is extended to different V² architectures using the same methodology.
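A minimal sketch of the stability test this methodology implies: once a cycle-to-cycle (discrete) small-signal map of the converter is extracted from simulation, stability is decided by the Floquet multipliers, i.e. the eigenvalues of that map; sub-harmonic oscillation appears when a multiplier leaves the unit circle through -1. The matrix below is illustrative, not taken from the paper.

    import numpy as np

    def floquet_stable(monodromy):
        # Stable iff all Floquet multipliers (eigenvalues of the
        # cycle-to-cycle map) lie strictly inside the unit circle.
        multipliers = np.linalg.eigvals(monodromy)
        return bool(np.all(np.abs(multipliers) < 1.0)), multipliers

    # Illustrative 2-state sampled model (inductor current, capacitor voltage)
    Phi = np.array([[0.2, -0.5],
                    [0.3, -0.9]])
    stable, mults = floquet_stable(Phi)
    print(stable, np.abs(mults))
    # A real multiplier approaching -1 signals period-doubling,
    # i.e. the onset of sub-harmonic oscillation.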
Abstract:
We present a new approach accounting for the nonadditivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the description of nitrogen and argon adsorption isotherms on graphitized carbon black in the framework of non-local density functional theory (NLDFT). We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the region of multi-layer coverage, conventional NLDFT and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as intermolecular interactions increase in the dense adsorbed fluid, preventing the over-prediction of loading in the region of multi-layer adsorption. This improvement of NLDFT allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5-5.8% instead of 17-26% for the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with conventional NLDFT, suggesting that the PSDs obtained with our approach are more reliable.
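As a small worked example of the figure of merit quoted above, the mean relative deviation between a computed and an experimental isotherm can be evaluated as follows; the loadings are illustrative numbers, not data from the paper.

    import numpy as np

    def mean_relative_error(model, experiment):
        # Mean absolute relative deviation (%) between computed and
        # experimental loadings at matching pressure points.
        model, experiment = np.asarray(model), np.asarray(experiment)
        return 100.0 * np.mean(np.abs(model - experiment) / experiment)

    exp_loading  = np.array([0.80, 1.90, 3.10, 4.20, 5.00])  # mmol/g
    conventional = np.array([0.95, 2.30, 3.60, 4.90, 5.90])  # over-predicts
    nonadditive  = np.array([0.82, 1.95, 3.05, 4.30, 5.10])  # corrected
    print(mean_relative_error(conventional, exp_loading))    # large error
    print(mean_relative_error(nonadditive, exp_loading))     # small error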
Abstract:
This research involves the design, development, and theoretical demonstration of models resulting in integrated misbehavior-resolution protocols for ad hoc networked devices. Game theory was used to analyze strategic interaction among independent devices with conflicting interests. Packet forwarding at the routing layer of autonomous ad hoc networks was investigated. Unlike existing reputation-based or payment schemes, this model is based on repeated interactions. To enforce cooperation, a community enforcement mechanism was used, whereby selfish nodes that drop packets are punished not only by the victim, but by all nodes in the network. Then, a stochastic packet-forwarding game strategy was introduced. Our solution relaxed the uniform traffic demand that was pervasive in other works. To address the concerns of imperfect private monitoring in resource-aware ad hoc networks, a belief-free equilibrium scheme was developed that reduces the impact of noise on cooperation. This scheme also eliminated the need to infer the private history of other nodes, and it simplified the computation of an optimal strategy. The belief-free approach reduced node overhead and was easily tractable, making the system operation feasible. Motivated by the versatile nature of evolutionary game theory, the assumption of a rational node is relaxed, leading to the development of a framework for mitigating routing selfishness and misbehavior in multi-hop networks. This is accomplished by setting nodes to play a fixed strategy rather than independently choosing a rational strategy. A range of simulations was carried out that showed improved cooperation between selfish nodes compared to earlier results. Cooperation among ad hoc nodes can also protect a network from malicious attacks: in the absence of a central trusted entity, many security mechanisms and privacy protections require such cooperation. Therefore, using game theory and evolutionary game theory, a mathematical framework has been developed that explores trust mechanisms to achieve security in the network. This framework is one of the first steps towards the synthesis of an integrated solution demonstrating that security depends solely on the initial trust level that nodes have in each other.
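The community enforcement idea can be made concrete with a toy repeated-game simulation: a node observed dropping a packet is punished by every node, which refuses to relay its traffic for a fixed number of rounds. The structure and parameters below are illustrative, not the dissertation's actual model.

    import random

    def simulate(rounds=10000, n_nodes=10, selfish=(), punish_len=50):
        # Repeated packet-forwarding game with community enforcement:
        # an observed defector is shunned network-wide for punish_len rounds.
        punished_until = {i: -1 for i in range(n_nodes)}
        delivered = 0
        for t in range(rounds):
            src, relay = random.sample(range(n_nodes), 2)
            if t <= punished_until[src]:
                continue                     # community refuses the offender
            if relay in selfish:
                punished_until[relay] = t + punish_len  # defection observed
            else:
                delivered += 1
        return delivered / rounds

    random.seed(1)
    print(simulate())                # fully cooperative baseline
    print(simulate(selfish=(0, 1)))  # defectors cut throughput and are shunned

Under such a mechanism, forwarding becomes the better long-run strategy for a sufficiently patient node, which is the repeated-game logic the dissertation builds on.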
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test-case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both the input-stimuli and coverage-model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation-time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation-time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
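A schematic of the PD idea in pseudo-tool form: stimuli are drawn at random but filtered against explicit parameter domains and cross-constraints, so invalid or irrelevant scenarios never reach the simulator. All names, domains, and the constraint below are hypothetical illustrations, not the actual tool or formalism.

    import random

    # Hypothetical parameter domains for a device under verification
    PARAMETER_DOMAINS = {
        "burst_len": list(range(1, 17)),   # 1..16 beats
        "addr_align": [1, 2, 4, 8],        # byte alignment
    }

    def legal(stimulus):
        # Hypothetical cross-constraint: long bursts need wide alignment.
        # Scenarios violating it are invalid and excluded up front.
        return not (stimulus["burst_len"] > 8 and stimulus["addr_align"] < 4)

    def constrained_random_stimulus():
        while True:
            s = {k: random.choice(v) for k, v in PARAMETER_DOMAINS.items()}
            if legal(s):
                return s

    random.seed(0)
    print([constrained_random_stimulus() for _ in range(3)])

A coverage model built from the same domains then counts only legal combinations, which is what lets it fit exactly the reduced input space.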
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. The thesis comprises three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, adding a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to separate the lead-time cost into the part arising from constant instantaneous volatility and the part arising from the risk of jumps. We show that the value of lead-time reduction increases substantially with the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions. We apply extreme-value theory and analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that these effects depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
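A minimal simulation of the jump-diffusion forecast process from the first paper: a constant-volatility geometric diffusion plus a compound Poisson jump component, so that sudden forecast drops come from the jump term. Parameter values are illustrative.

    import numpy as np

    def jump_diffusion_path(d0=100.0, mu=0.0, sigma=0.2, lam=0.5,
                            jump_mu=-0.3, jump_sigma=0.1,
                            T=1.0, steps=250, seed=0):
        # Demand forecast: geometric Brownian motion plus a compound
        # Poisson process with normal jump sizes in the log.
        rng = np.random.default_rng(seed)
        dt = T / steps
        d = np.empty(steps + 1)
        d[0] = d0
        for t in range(steps):
            diffusion = ((mu - 0.5 * sigma**2) * dt
                         + sigma * np.sqrt(dt) * rng.standard_normal())
            n_jumps = rng.poisson(lam * dt)
            jump = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
            d[t + 1] = d[t] * np.exp(diffusion + jump)
        return d

    path = jump_diffusion_path()
    print(path[-1])  # terminal forecast; raising lam or |jump_mu| shows
                     # how jump risk inflates the value of short lead times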
Abstract:
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate, for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. There are a number of existing earth pressure models; these typically provide estimates of force that stand in uncertain relation to the true force. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force. Predictions from the numerical limit analysis technique are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which improves on standard nonlinear optimization. This graphical technique uses the uncertainties associated with each set of force measurements to obtain all possible parameters that could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in both a loose and a dense state. The results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Common assumptions are made in soil modeling, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on normal stress at low stresses is found; this has ramifications for any study of friction done at low stresses. In addition, gradual failures are often observed for vertical tools and for tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
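For context, the classical retaining-wall result that flat-plate models build on is the Rankine earth-pressure solution for cohesionless soil; the sketch below computes the passive resultant on a small plate. The soil unit weight, depth, and width are illustrative values, and this is the textbook model, not the thesis's numerical limit analysis.

    import numpy as np

    def rankine_coefficients(phi_deg):
        # Rankine active/passive coefficients for a cohesionless soil
        # (smooth vertical wall, level backfill), friction angle phi.
        phi = np.radians(phi_deg)
        Ka = (1 - np.sin(phi)) / (1 + np.sin(phi))
        Kp = (1 + np.sin(phi)) / (1 - np.sin(phi))
        return Ka, Kp

    def passive_force(phi_deg, gamma=16e3, depth=0.10, width=0.20):
        # Resultant passive force (N) on a plate of given width (m)
        # inserted to `depth` (m) in soil of unit weight gamma (N/m^3).
        _, Kp = rankine_coefficients(phi_deg)
        return 0.5 * Kp * gamma * depth**2 * width

    print(passive_force(30.0), passive_force(40.0))
    # The strong growth of force with phi is what makes the inverse
    # problem (measured forces -> friction angle) well posed.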
Abstract:
This thesis deals with the so-called Basis Set Superposition Error (BSSE) from both a methodological and a practical point of view. The purpose of the thesis is twofold: (a) to take a step forward in the correct characterization of weakly bound complexes and (b) to shed light on the actual implications of basis set extension effects in ab initio calculations, contributing to the BSSE debate. The existing BSSE-correction procedures are deeply analyzed, compared, validated and, where necessary, improved. A new interpretation of the counterpoise (CP) method is used to define counterpoise-corrected descriptions of molecular complexes. This novel point of view allows for a study of BSSE effects not only on the interaction energy but also on the potential energy surface and, in general, on any property derived from the molecular energy and its derivatives. A program has been developed for the calculation of CP-corrected geometry optimizations and vibrational frequencies, also using several counterpoise schemes for the case of molecular clusters. The method has also been implemented in the Gaussian98 revA10 package. The Chemical Hamiltonian Approach (CHA) methodology has likewise been implemented at the RHF and UHF levels of theory for an arbitrary number of interacting systems, using an algorithm based on block-diagonal matrices. Along with the methodological development, the effects of BSSE on the properties of molecular complexes are discussed in detail. The CP and CHA methodologies are used for the determination of BSSE-corrected molecular complex properties related to the potential energy surfaces and the molecular wavefunction, respectively. First, the behaviour of the two BSSE-correction schemes is systematically compared at different levels of theory and basis sets for a number of hydrogen-bonded complexes. The Complete Basis Set (CBS) limit of both uncorrected and CP-corrected molecular properties, such as stabilization energies and intermolecular distances, has also been determined, showing the critical importance of the BSSE correction. Several controversial topics of the BSSE correction are addressed as well. The counterpoise method is applied to internal rotation barriers, and the importance of the nuclear relaxation term is pointed out. The viability of the CP method for dealing with charged complexes, and the BSSE effects on the double-well PES of blue-shifted hydrogen bonds, are also studied in detail. In the case of molecular clusters, the effect of the high-order BSSE terms introduced with the hierarchical counterpoise scheme is also determined. The effect of BSSE on electron density-related properties is addressed as well. The first-order electron density obtained with the CHA/F and CHA/DFT methodologies is used to assess, both graphically and numerically, the redistribution of the charge density upon BSSE correction. Several tools, such as Atoms in Molecules topological analysis, density difference maps, Quantum Molecular Similarity, and Chemical Energy Component Analysis, are used to analyze in depth, for the first time, the BSSE effects on the electron density of several hydrogen-bonded complexes of increasing size. The indirect effect of BSSE on intermolecular perturbation theory results is also pointed out: it is shown that for a BSSE-free SAPT study of hydrogen fluoride clusters, the use of a counterpoise-corrected PES is essential to determine the proper molecular geometry at which to perform the SAPT analysis.
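For reference, the standard Boys-Bernardi counterpoise expression on which these schemes build is

    \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB}(AB) - E_{A}^{AB}(AB) - E_{B}^{AB}(AB),

where the subscript labels the system, the superscript the basis set, and the parentheses the geometry: each monomer is recomputed in the full dimer basis at the dimer geometry, so the monomer energies absorb the same basis-set extension as the complex.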
Abstract:
The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated in order to obtain the optimal pest control strategy solely through the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on Lotka-Volterra models are provided to show the effectiveness of this method.
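A minimal sketch of the setting (with an illustrative proportional control law, not the paper's HJB-optimal one): Lotka-Volterra pest/natural-enemy dynamics in which enemies are released at a rate proportional to the excess of pests over a target level. All parameter values are illustrative.

    import numpy as np

    def simulate(pest0=10.0, enemy0=2.0, a=1.0, b=0.5, c=0.8, d=0.3,
                 target=2.0, gain=1.5, dt=1e-3, T=30.0):
        # x: pest density, y: natural-enemy density.
        # Control u: enemies introduced when pests exceed the target.
        x, y = pest0, enemy0
        for _ in range(int(T / dt)):
            u = max(gain * (x - target), 0.0)
            x, y = (max(x + (a * x - b * x * y) * dt, 0.0),
                    max(y + (-c * y + d * x * y + u) * dt, 0.0))
        return x, y

    print(simulate())          # pest settles near the target level
    print(simulate(gain=0.0))  # no control: populations keep cycling

The feedback term damps the neutrally stable Lotka-Volterra cycles; the paper's optimal linear feedback achieves the same qualitative effect with a stability and optimality guarantee.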
Abstract:
The attentional blink phenomenon (AB) represents impaired identification of the second of two targets presented in rapid succession within a stream of stimuli. Despite the well-known association between attentional processes and psychometric intelligence (PI), evidence for a relationship between AB and PI is highly inconsistent. Theory and empirical findings suggest AB to be multifaceted. Hence, relations between AB and PI may be blurred when AB is measured as a single process. Furthermore, different aspects of PI might be differentially related to AB. The present study explored the relationship between processes underlying AB and general PI as well as specific aspects of PI (Reasoning, Speed, Memory, and Creativity) in 201 female students. Fixed-links modeling revealed three processes underlying AB: (1) a U-shaped process positively related to Speed and negatively related to Memory but unrelated to Reasoning, Creativity, and general PI, (2) an increasing process positively related to Reasoning, Speed, Memory, and general PI but not to Creativity, and (3) a decreasing process positively related to general PI and Memory but not to other aspects of PI. Our findings demonstrate that dissociating processes underlying AB and considering specific aspects of PI is required to understand the relationship between AB and PI.
Abstract:
Even the best school health education programs will be unsuccessful if they are not disseminated effectively, in a manner that encourages classroom adoption and implementation. This study involved two components: (1) the development of a videotape intervention to be used in the dissemination phase of a 4-year, NCI-funded diffusion study, and (2) the evaluation of that videotape intervention strategy in comparison with a print (information transfer) strategy. Conceptualization was guided by Social Learning Theory, Diffusion Theory, and communication theory; additionally, the PRECEDE Framework was used. Seventh and 8th grade classroom teachers from Spring Branch Independent School District in west Houston participated in the evaluation of the videotape and print interventions using a 57-item preadoption survey instrument developed by the UT Center for Health Promotion Research and Development. Two-way ANOVA was used to study individual score differences for five outcome variables: Total Scale Score (comprised of 57 predisposing, enabling, and reinforcing items), Adoption Characteristics Subscale, Attitude Toward Innovation Subscale, Receptivity Toward Innovation, and Reinforcement Subscale. The aim of the study was to compare the effect on score differences of the video and print interventions alone and in combination. Seventy-three 7th and 8th grade classroom teachers completed the study, providing baseline and post-intervention measures on factors related to the adoption and implementation of tobacco-use prevention programs. Two-way ANOVA, in relation to the study questions, found significant scoring differences for those exposed to the videotape intervention alone for both the Attitude Toward Innovation Subscale and the Receptivity to Adopt Subscale. No significant results were found to suggest that print alone influences favorable scoring differences between baseline and post-intervention testing. One interaction effect was found, suggesting that video and print combined are more effective in producing favorable scoring differences on the Reinforcement Subscale. This research is unique in that it represents a newly emerging field in health promotion communications research, with implications for Social Learning Theory, Diffusion Theory, and communication science that are applicable to the development of improved school health interventions.
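Schematically, the 2x2 (video x print) factorial analysis can be reproduced with a standard two-way ANOVA; the data below are simulated for illustration and the column names are hypothetical, not the study's records.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    rows = []
    for video in (0, 1):
        for print_ in (0, 1):
            # Illustrative effects: video main effect plus an interaction
            effect = 2.0 * video + 0.3 * print_ + 1.0 * video * print_
            for _ in range(18):
                rows.append({"video": video, "print_": print_,
                             "score_diff": effect + rng.normal(0.0, 1.0)})
    df = pd.DataFrame(rows)

    model = ols("score_diff ~ C(video) * C(print_)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction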