940 results for 010300 NUMERICAL AND COMPUTATIONAL MATHEMATICS
Abstract:
Colloid self-assembly under external control is a new route to the fabrication of advanced materials with novel microstructures and appealing functionalities. The kinetic processes of colloidal self-assembly have also attracted great interest because they resemble many atomic-level kinetic processes in materials. In recent decades, rapid technological progress has been made in producing shape-anisotropic, patchy, and core-shell structured particles, as well as particles carrying electric/magnetic charges/dipoles, which has greatly enriched the accessible self-assembled structures. Multi-phase carrier liquids offer a further route to controlling colloidal self-assembly. Heterogeneity is therefore an essential characteristic of colloidal systems, yet a model able to efficiently incorporate these possible heterogeneities has so far been lacking. This thesis is mainly devoted to the development of such a model and to computational studies of complex colloidal systems through a diffuse-interface field approach (DIFA), recently developed by Wang et al. This meso-scale model can describe arbitrary particle shapes and arbitrary charge/dipole distributions on the surface or in the body of particles. Within the framework of DIFA, a Gibbs-Duhem-type formula is introduced to treat the Laplace pressure in multi-liquid-phase colloidal systems; it obeys the Young-Laplace equation, so the model can quantitatively study important capillarity-related phenomena. Extensive computer simulations are performed to study the fundamental behavior of heterogeneous colloidal systems. The role of the Laplace pressure in determining the mechanical equilibrium of shape-anisotropic particles at fluid interfaces is revealed. In particular, the Laplace pressure is found to play a critical role in maintaining the stability of capillary bridges between nearby particles, which points to a novel route for firming compact but fragile colloidal microstructures in situ via capillary bridges. Simulation results also show that the competition between like-charge repulsion, dipole-dipole interaction, and Brownian motion dictates the degree of aggregation of heterogeneously charged particles. The assembly and alignment of particles with magnetic dipoles under an external field are studied. Finally, extended studies of the role of dipole-dipole interactions are performed for ferromagnetic and ferroelectric domain phenomena. The results reveal that the internal field generated by the dipoles competes with the external field to determine the dipole-domain evolution in ferroic materials.
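For reference, the Young-Laplace relation that the model's treatment of capillary pressure reproduces can be stated in its standard textbook form (the general equation, not the thesis's specific Gibbs-Duhem-type formula):

```latex
\Delta P \;=\; \gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right)
```

where \Delta P is the Laplace pressure jump across the fluid interface, \gamma the interfacial tension, and R_1, R_2 the principal radii of curvature; for a spherical droplet of radius R this reduces to \Delta P = 2\gamma/R.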
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to treating two- and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite its complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on instances of multi-stage linear stochastic problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear one. Although our method still remains to be tested, we expect it to alleviate some of the numerical and theoretical difficulties of the progressive hedging method.
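As background, the basic progressive hedging iteration discussed above can be sketched on a toy two-stage problem. The sketch below is a minimal illustration under simplifying assumptions (closed-form quadratic scenario subproblems, a fixed penalty parameter rho; all names are ours), not the thesis's implementation or its adaptive penalty strategy:

```python
import numpy as np

def progressive_hedging(targets, probs, rho=1.0, iters=200, tol=1e-8):
    """Progressive hedging on a toy problem: each scenario s solves
    min_x 0.5 * (x - targets[s])**2, and nonanticipativity forces all
    scenario copies of the first-stage decision x to agree."""
    targets = np.asarray(targets, dtype=float)
    probs = np.asarray(probs, dtype=float)
    x = targets.copy()             # scenario-wise first-stage decisions
    w = np.zeros_like(targets)     # multipliers on the nonanticipativity constraint
    x_bar = probs @ x              # implementable (consensus) decision
    for _ in range(iters):
        # Closed-form argmin of 0.5*(x - t)^2 + w*x + (rho/2)*(x - x_bar)^2
        x = (targets - w + rho * x_bar) / (1.0 + rho)
        x_bar = probs @ x
        w += rho * (x - x_bar)
        if np.max(np.abs(x - x_bar)) < tol:  # consensus reached
            break
    return x_bar

# Converges to the probability-weighted consensus of the scenario optima (1.9 here).
print(progressive_hedging(targets=[1.0, 2.0, 4.0], probs=[0.5, 0.3, 0.2]))
```

The adaptive strategy proposed in the thesis would adjust rho between iterations based on the algorithm's progress; here rho is held fixed for simplicity.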
Abstract:
This article reports a combined thermodynamic, spectroscopic, and computational study on the interactions and structure of binary mixtures of hydrogenated and fluorinated substances that simultaneously interact through strong hydrogen bonding. Four binary mixtures of hydrogenated and fluorinated alcohols have been studied, namely, (ethanol + 2,2,2-trifluoroethanol (TFE)), (ethanol + 2,2,3,3,4,4,4-heptafluoro-1-butanol), (1-butanol (BuOH) + TFE), and (BuOH + 2,2,3,3,4,4,4-heptafluoro-1-butanol). Excess molar volumes and vibrational spectra of all four binary mixtures have been measured as a function of composition at 298 K, and molecular dynamics simulations have been performed. The systems display a complex behavior when compared with mixtures of hydrogenated alcohols and mixtures of alkanes and perfluoroalkanes. The combined analysis of the results from different approaches indicates that this results from a balance between preferential hydrogen bonding between the hydrogenated and fluorinated alcohols and the unfavorable dispersion forces between the hydrogenated and fluorinated chains. As the chain length increases, the contribution of dispersion increases and overcomes the contribution of H-bonds. In terms of the liquid structure, the simulations suggest the possibility of segregation between the hydrogenated and fluorinated segments, a hypothesis corroborated by the spectroscopic results. Furthermore, a quantitative analysis of the infrared spectra reveals that the presence of fluorinated groups induces conformational changes in the hydrogenated chains from the usually preferred all-trans to more globular arrangements involving gauche conformations. Conformational rearrangements at the CCOH dihedral angle upon mixing are also disclosed by the spectra.
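For context, the excess molar volume reported for these mixtures has its standard definition, computed from the measured mixture density and the pure-component densities:

```latex
V_m^{E} \;=\; V_m - \sum_i x_i V_{m,i}^{*}
       \;=\; \frac{\sum_i x_i M_i}{\rho} - \sum_i \frac{x_i M_i}{\rho_i^{*}}
```

where x_i and M_i are the mole fractions and molar masses, \rho the mixture density, and \rho_i^{*} the pure-component densities at the same temperature and pressure.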
Abstract:
This work expresses the properties of the exit material as a function of the potential difference and the mass flux (scraping rate), and solves the mechanical problem to obtain a velocity field to be fed into multi-physics numerical platforms.
Abstract:
This paper presents a study in a field poorly explored for the Portuguese language – modality and its automatic tagging. Our main goal was to find a set of attributes for the creation of automatic taggers with improved performance over the bag-of-words (bow) approach. Performance was measured using precision, recall and F1. Because it is a relatively unexplored field, the study covers the creation of the corpus (composed of eleven verbs), the use of a parser to extract syntactic and semantic information from the sentences, and a machine learning approach to identify modality values. Based on three different sets of attributes – from the trigger itself, from the trigger's path in the parse tree, and from the context – the system creates a tagger for each verb, achieving (for almost every verb) an improvement in F1 over the traditional bow approach.
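The reported metrics follow their standard definitions; as a minimal sketch (with hypothetical modality labels and lists, not the paper's code), a per-label evaluation could look like this:

```python
def precision_recall_f1(gold, predicted, label):
    """Standard precision/recall/F1 for a single modality label."""
    tp = sum(1 for g, p in zip(gold, predicted) if p == label and g == label)
    fp = sum(1 for g, p in zip(gold, predicted) if p == label and g != label)
    fn = sum(1 for g, p in zip(gold, predicted) if p != label and g == label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical modality values assigned to four occurrences of one verb.
gold = ["epistemic", "deontic", "epistemic", "dynamic"]
pred = ["epistemic", "epistemic", "epistemic", "dynamic"]
print(precision_recall_f1(gold, pred, "epistemic"))  # ~(0.67, 1.0, 0.8)
```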
Abstract:
That humans and animals learn from interaction with the environment is a foundational idea underlying nearly all theories of learning and intelligence. Learning that certain outcomes are associated with specific actions or stimuli (both internal and external) is at the very core of the capacity to adapt behaviour to environmental changes. In the present work, appetitive and aversive reinforcement learning paradigms have been used to investigate the fronto-striatal loops and behavioural correlates of adaptive and maladaptive reinforcement learning processes, aiming at a deeper understanding of how cortical and subcortical substrates interact with each other and with other brain systems to support learning. By combining a large variety of neuroscientific approaches, including behavioural and psychophysiological methods, EEG and neuroimaging techniques, these studies aim to clarify and advance knowledge of the neural bases and computational mechanisms of reinforcement learning, in both normal and neurologically impaired populations.
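At the computational level, the learning processes studied here are commonly modelled with prediction-error updates of the Rescorla-Wagner / temporal-difference family. The following is a generic sketch of such a delta rule, not the specific models fitted in these studies:

```python
import random

def rescorla_wagner(outcomes, alpha=0.1, v0=0.0):
    """Delta-rule value learning: V <- V + alpha * (r - V)."""
    v, trace = v0, []
    for r in outcomes:
        v += alpha * (r - v)   # prediction error (r - v) scaled by learning rate
        trace.append(v)
    return trace

# Appetitive conditioning: reward (1) delivered on 80% of 200 trials.
random.seed(0)
outcomes = [1 if random.random() < 0.8 else 0 for _ in range(200)]
print(round(rescorla_wagner(outcomes)[-1], 2))  # learned value approaches ~0.8
```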
Abstract:
Gait analysis makes it possible to characterize motor function, highlighting deviations from normal motor behavior related to an underlying pathology. The widespread use of wearable inertial sensors has opened the way to the evaluation of ecological gait, and a variety of methodological approaches and algorithms have been proposed for characterizing gait from inertial measurements (e.g. temporal parameters, motor stability and variability, specific pathological alterations). However, no comparative analysis of their performance (i.e. accuracy, repeatability) was available, in particular of how performance is affected by extrinsic factors (i.e. sensor location, computational approach, analyzed variable, environmental testing constraints) and intrinsic factors (i.e. functional alterations resulting from pathology). The aim of the present project was to comparatively analyze the influence of intrinsic and extrinsic factors on the performance of the numerous algorithms proposed in the literature for quantifying specific characteristics (i.e. timing, variability/stability) and alterations (i.e. freezing) of gait. Regarding extrinsic factors, the influence of sensor location, analyzed variable, and computational approach on the performance of a selection of gait segmentation algorithms drawn from a literature review was analyzed under different environmental conditions (e.g. solid ground, sand, in water). Moreover, the influence of altered environmental conditions (i.e. in water) on the minimum number of strides necessary to obtain reliable estimates of gait variability and stability metrics was analyzed, complementing what is already available in the literature for overground gait in healthy subjects. Regarding intrinsic factors, the influence of a specific pathological condition (i.e. Parkinson's Disease), with and without freezing, on the performance of segmentation algorithms was analyzed. Finally, the analysis of the performance of algorithms for the detection of gait freezing showed how the results depend on the domain of implementation and on IMU position.
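As an illustration of the family of segmentation algorithms being compared (a generic peak-detection scheme on a synthetic signal, not one of the specific published algorithms under review), stride events can be sketched as prominent peaks in an angular-velocity trace:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                        # assumed IMU sampling frequency [Hz]
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic shank angular velocity: ~1 Hz gait cycle plus sensor noise.
gyro = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# One candidate event per gait cycle: prominent peaks at least 0.5 s apart.
peaks, _ = find_peaks(gyro, prominence=0.5, distance=int(0.5 * fs))
stride_times = np.diff(t[peaks])  # durations between consecutive events
print(f"{len(peaks)} events, mean stride time {stride_times.mean():.2f} s")
```

How such detections degrade with sensor location, signal choice, environment (e.g. walking in water) and pathology is precisely what the comparative analysis quantifies.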
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people, closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-supporting tool for scheduling maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods are typically based on strict assumptions of input stationarity. Last-generation low-cost sensors present limitations related to modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have expended considerable effort to mitigate the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors, operating individually or in a distributed computing framework, to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and human activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take on the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful in supporting the damage assessment of complex structures. Smart coatings are therefore studied to enable self-sensing properties in ordinary structural elements. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
Abstract:
Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the nuclear data library production chain is extremely complex and made up of different steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here has focused on the development of a new Python-based software tool called JADE, whose objective is to significantly increase the level of automation and standardization of these procedures, in order to reduce the time between new library releases while, at the same time, increasing their quality. After an introduction to nuclear fusion (the field where the majority of the V&V effort has been concentrated so far) and to the simulation of particle and radiation transport, the motivations leading to JADE's development are discussed. Subsequently, the general architecture of the code and the implemented benchmarks (both experimental and computational) are described. The results from the major applications of JADE during the research years are then presented. Finally, after a discussion of the objectives reached by JADE, possible short-, mid- and long-term developments for the project are outlined.
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key areas that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it hard to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. The task becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously being developed to enable new possibilities, both in the experimental and computational realms, and scientists strive to harness cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and a proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models and accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the use of device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
Abstract:
In the frame of inductive power transfer (IPT) systems, arrays of magnetically coupled resonators have received increasing attention, as they are cheap and versatile thanks to their simple structure. They consist of magnetically coupled coils, which resonate with their self-capacitance or with lumped capacitive networks. Of great industrial interest are planar resonator arrays used to power a receiver that can be placed at any position above the array. A thorough circuit analysis has been carried out, starting from traditional two-coil IPT devices. Resonator arrays are then introduced, with particular attention to the case of arrays with a receiver. To evaluate the system performance, a circuit model based on original analytical formulas has been developed and experimentally validated. The results of the analysis also led to the definition of a new doubly-fed array configuration with a receiver that can be placed above it at any position. A suitable control strategy aimed at maximising the transmitted power and the efficiency has also been proposed. The study of the array currents has been carried out using the theory of magneto-inductive waves, which highlights useful insights. The analysis is completed with a numerical and experimental study of the magnetic field distribution originating from the array. Furthermore, an application of the resonator array as a position sensor has been investigated: the position of the receiver is estimated through the measurement of the array input impedance, for which an original analytical expression has also been obtained. The application of this sensing technique in an automotive dynamic IPT system is discussed. The thesis concludes with an evaluation of the possible applications of two-dimensional resonator arrays in IPT systems; these devices can be used to improve system efficiency and transmitted power, as well as for magnetic field shielding.
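For the two-coil starting point mentioned above, the standard reflected-impedance relation (a textbook result, not one of the thesis's original array formulas) already shows why the input impedance carries position information:

```latex
Z_{\mathrm{ref}} \;=\; \frac{\omega^{2} M^{2}}{Z_{\mathrm{rx}}}
```

where M is the transmitter-receiver mutual inductance, \omega the angular frequency, and Z_{\mathrm{rx}} the total receiver-side impedance. Since M depends on the receiver position, the impedance seen from the primary side varies with that position, which is the principle underlying the impedance-based position sensing investigated here.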
Abstract:
This thesis investigates how individuals can develop, exercise, and maintain autonomy and freedom in the presence of information technology. It is particularly interested in how information technology can impose autonomy constraints. The first part identifies a problem with current autonomy discourse: there is no agreed-upon object of reference when bemoaning the loss of, or risk to, an individual's autonomy. Here, the thesis introduces a pragmatic conceptual framework to classify autonomy constraints. In essence, the proposed framework divides autonomy into three categories: intrinsic autonomy, relational autonomy and informational autonomy. The second part of the thesis investigates the role of information technology in enabling and facilitating autonomy constraints. The analysis identifies eleven characteristics of information technology as embedded in society, so-called vectors of influence, that constitute a substantial risk to an individual's autonomy. These vectors are assigned to three sets corresponding to the general sphere of the information transfer process to which they can be attributed, namely domain-specific vectors, agent-specific vectors and information-recipient-specific vectors. The third part of the thesis investigates selected ethical and legal implications of autonomy constraints imposed by information technology. It shows the utility of the theoretical frameworks introduced earlier in the thesis when conducting an ethical analysis of autonomy-constraining technology. It also traces the concept of autonomy in European data laws and investigates the impact of individuals' cultural embeddings on efforts to safeguard autonomy, showing intercultural flashpoints of autonomy differences. In view of this, the thesis approaches the exercise and constraint of autonomy in the presence of information technology systems holistically. It contributes to establishing a common understanding of (intuitive) terminology and concepts, connects this to current phenomena arising out of ever-increasing interconnectivity and computational power, and helps operationalize the protection of autonomy through application of the proposed frameworks.
Abstract:
Long-term monitoring of acoustical environments is gaining popularity thanks to the relevant amount of scientific and engineering insight it provides. The increasing interest is due to the constant growth of storage capacity and computational power to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is indeed the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, yields more specific information about the activity carried out during the measurements, and the statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The method is based on clustering analysis: two algorithms, Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices. The proposed method shows robust and reliable results in describing the acoustic scenario and could represent an important analytical tool for acousticians.
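A minimal sketch of the clustering core of such a method (an assumed workflow with synthetic data and our own variable names, not the thesis's code) fits a Gaussian Mixture Model to the distribution of measured sound pressure levels and reads off a characteristic level and time share per source:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic short-Leq values [dB]: a quiet background plus a louder activity.
levels = np.concatenate([rng.normal(45.0, 2.0, 800), rng.normal(62.0, 3.0, 400)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(levels.reshape(-1, 1))
for mean, weight in sorted(zip(gmm.means_.ravel(), gmm.weights_.ravel())):
    print(f"source level ~{mean:.1f} dB, time share {weight:.0%}")
```

K-means would play the analogous role with hard assignments; in practice the number of components must be chosen, e.g. via information criteria or domain knowledge.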
Abstract:
Molecular materials are made by the assembly of specifically designed molecules to obtain bulk structures with desired solid-state properties, enabling the development of materials with tunable chemical and physical properties. These properties result from the interplay of intra-molecular constituents and weak intermolecular interactions; thus, small changes in individual molecular and electronic structure can substantially change the properties of the bulk material. The purpose of this dissertation is to discuss, and to contribute to, the structure-property relationships governing the electronic, optical and charge transport properties of organic molecular materials through theoretical and computational studies. In particular, the main focus is on the interplay of intra-molecular properties and inter-molecular interactions in organic molecular materials. In my three years of research activity, I have focused on three major areas: 1) the investigation of isolated-molecule properties for the class of conjugated chromophores displaying diradical character, which are building blocks for promising functional materials; 2) the determination of intra- and intermolecular parameters governing charge transport in molecular materials; and 3) the development and application of diabatization procedures for the analysis of exciton states in molecular aggregates. The properties of diradicaloids are extensively studied with regard both to their ground state (diradical character, aromatic vs quinoidal structures, spin dynamics, etc.) and to the low-lying singlet excited states, including the elusive double-exciton state. The efficiency of charge transport for specific classes of organic semiconductors (including diradicaloids) is investigated by combining the effects of intra-molecular reorganization energy, inter-molecular electronic coupling and crystal packing. Finally, protocols aimed at unravelling the nature of exciton states are introduced and applied to different molecular aggregates. The role of intermolecular interactions and charge-transfer contributions in determining the exciton state character and in modulating H- versus J-aggregation is also highlighted.
Abstract:
Benzoquinone was found to be an effective co-catalyst in the ruthenium/NaOEt-catalyzed Guerbet reaction. Its behavior as a co-catalyst has therefore been investigated through experimental and computational methods. The reaction product distribution shows that the reaction speed is improved by the benzoquinone supplement from the beginning of the process, with a minimal effect on the selectivity toward alcoholic species. DFT calculations were performed to investigate two hypotheses for the kinetic effects: i) a hydrogen storage mechanism, or ii) basic co-catalysis by 4-hydroxyphenolate. The most promising results were found for the latter hypothesis, in which a new mixed mechanism for the aldol condensation step of the Guerbet process involves hydroquinone (i.e. the reduced form of benzoquinone) as the proton source instead of ethanol. This mechanism was found to be energetically more favorable than aldol condensation in the absence of the additive, suggesting that the hydroquinone derived from benzoquinone could be the key species affecting the kinetics of the overall process. To verify this theoretical hypothesis, new phenol derivatives were tested as additives in the Guerbet reaction. The outcomes confirmed that an aromatic acid (stronger than ethanol) can improve the reaction kinetics. Lastly, theoretical product distributions were simulated and compared to the experimental ones, using the DFT computations to build the kinetic models.
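For completeness, the standard link between DFT-computed activation free energies and the rate constants entering such kinetic models is the Eyring equation (a general relation, not a detail reported in the abstract):

```latex
k \;=\; \frac{k_{B} T}{h} \, \exp\!\left( -\frac{\Delta G^{\ddagger}}{R T} \right)
```

where \Delta G^{\ddagger} is the activation free energy of the elementary step, k_B and h the Boltzmann and Planck constants, and R the gas constant.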