987 results for theoretical methods
Abstract:
The steadily increasing diversity of colloidal systems demands new theoretical approaches and careful experimental characterization. Here we present a combined rheological and microscopic study of colloids in their arrested state. We did not aim for a generalized treatise but rather focused on a few model colloids: liquid-crystal-based colloidal suspensions and sedimented colloidal films. We placed special emphasis on understanding the mutual influence of the dominant interaction mechanisms, the structural characteristics, and the particle properties on the mechanical behavior of the colloid. The application of novel combinations of experimental techniques played an important role in these studies. Besides piezo-rheometry, we employed nanoindentation experiments and associated standardized analysis procedures. These rheometric methods were complemented by real-space imaging using confocal microscopy. The flexibility of the home-built setup allowed both techniques to be combined and thereby enabled simultaneous rheological and three-dimensional structural analysis at the single-particle level. The limits of confocal microscopy, however, have not yet been reached. We show how hollow and optically anisotropic particles can be used to quantify contact forces and rotational motions of individual particles. In the future, such data can contribute to a better understanding of particle reorganization processes, such as the fluidization of colloidal gels and glasses under shear.
Abstract:
This thesis provides a thorough theoretical background in network theory and presents novel applications to real problems and data. In the first chapter, a general introduction to network ensembles is given, and their relations with "standard" equilibrium statistical mechanics are described. Moreover, an entropy measure is used to analyze the statistical properties of integrated PPI-signalling-mRNA expression networks in different cases. In the second chapter, multilayer networks are introduced to evaluate and quantify the correlations between real interdependent networks. Multiplex networks describing citation-collaboration interactions and patterns in colorectal cancer are presented. The last chapter is entirely dedicated to control theory and its relation with network theory. We characterise how the structural controllability of a network is affected by the fraction of low in-degree and low out-degree nodes. Finally, we present a novel approach to the controllability of multiplex networks.
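For reference, a standard result in this area makes the connection concrete (stated here in generic form, not necessarily the thesis' own notation): structural controllability can be characterised via maximum matchings (Liu, Slotine & Barabási, 2011). For a directed network with N nodes and a maximum matching M*, the minimum number of driver nodes is

    N_D = \max\{\, N - |M^*|,\; 1 \,\}

Nodes of low in-degree or low out-degree are harder to match, which gives the intuition for why their fraction affects structural controllability.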
Abstract:
This dissertation demonstrates and improves the predictive power of coupled-cluster theory for the highly accurate calculation of molecular properties. The demonstration is carried out by means of extrapolation and additivity techniques in single-reference coupled-cluster theory, with whose help the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. The example of cyclic SiS_2, a triatomic molecule with 16 valence electrons, makes it particularly clear that the predictive power of the theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which ultimately led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in 2^Pi states. The focus here lies on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The targeted accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, also scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore suitable for computing highly accurate thermochemical data.
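For orientation, a generic example of the kind of extrapolation and additivity scheme referred to above (the exact composition used in the thesis may differ): the total energy is assembled as

    E \approx E_{\mathrm{HF}}^{\mathrm{CBS}} + \Delta E_{\mathrm{corr}}^{\mathrm{CBS}} + \Delta E_{\mathrm{core}} + \Delta E_{\mathrm{rel}} + \Delta E_{\mathrm{ZPE}}

with the correlation energy extrapolated to the complete-basis-set (CBS) limit from hierarchical basis sets of cardinal number X via the standard two-point formula

    E_{\mathrm{corr}}(X) = E_{\mathrm{corr}}^{\mathrm{CBS}} + A\,X^{-3}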
Abstract:
This thesis details the development of quantum chemical methods for the accurate theoretical description of molecular systems with a complicated electronic structure. In simple cases, a single Slater determinant, in which the electrons occupy a number of energetically lowest molecular orbitals, offers a qualitatively correct model. The widely used coupled-cluster method CCSD(T) efficiently includes electron correlation effects starting from this determinant and provides reaction energies with errors of only a few kJ/mol. However, the method often fails when several electronic configurations are important, as is the case, for instance, in the course of many chemical reactions or in transition-metal compounds. Internally contracted multireference coupled-cluster (ic-MRCC) methods cure this deficiency by using a linear combination of determinants as the reference function. Despite their theoretical elegance, the ic-MRCC equations involve thousands of terms and are therefore derived by computer. Calculations of energy surfaces of BeH2, HF, LiF, H2O, N2, and Be3 reveal the theory's high accuracy compared with other approaches and the quality of various hierarchies of approximations. New theoretical advances include size-extensive techniques for removing linear dependencies in the ic-MRCC equations and a multireference analog of CCSD(T). Applications of the latter method to O3, Ni2O2, benzynes, C6H7NO, and Cr2 underscore its potential to become a new standard method in quantum chemistry.
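To make the contrast concrete (generic notation, not necessarily the thesis' conventions): single-reference coupled cluster parametrizes the wavefunction as

    |\Psi_{\mathrm{CC}}\rangle = e^{\hat T}\,|\Phi_0\rangle

with a single determinant \Phi_0, whereas internally contracted MRCC applies one cluster operator to a multiconfigurational reference,

    |\Psi_{\mathrm{ic\text{-}MRCC}}\rangle = e^{\hat T}\,|\Psi_0\rangle, \qquad |\Psi_0\rangle = \sum_\mu c_\mu\,|\Phi_\mu\rangle

so that a linear combination of determinants replaces the single determinant as the starting point for the correlation treatment.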
Abstract:
In the past two decades, the work of a growing portion of the robotics research community has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the unilateral nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work concerns the development of an interval-analysis-based procedure for solving the direct geometric problem of a generic cable manipulator. Besides allowing a rapid solution of the problem, this technique also certifies the results against rounding and elimination errors and can take into account uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
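A minimal sketch of the interval branch-and-prune idea behind such a certified solver, on a toy direct geometric problem (a planar point suspended by two cables of known lengths); the names and the simple bisection strategy are illustrative assumptions, and the thesis' algorithm for generic spatial cable robots is considerably more elaborate:

    # Toy direct geometric problem: find the position (x, y) of a point
    # suspended by two cables of lengths L1, L2 from anchors a1, a2.
    # Illustrative sketch only, not the thesis' algorithm.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
        def sq(self):  # interval square, tight even when 0 is inside
            cands = (self.lo * self.lo, self.hi * self.hi)
            lo = 0.0 if self.lo <= 0.0 <= self.hi else min(cands)
            return Interval(lo, max(cands))
        def contains_zero(self): return self.lo <= 0.0 <= self.hi
        def width(self): return self.hi - self.lo

    def constraint(box, anchor, L):
        """Interval evaluation of (x-ax)^2 + (y-ay)^2 - L^2 over a box."""
        x, y = box
        ax, ay = anchor
        ex = (x - Interval(ax, ax)).sq()
        ey = (y - Interval(ay, ay)).sq()
        return ex + ey - Interval(L * L, L * L)

    def branch_and_prune(box, anchors, lengths, tol=1e-6):
        """Return boxes of width <= tol that may contain a solution."""
        stack, solutions = [box], []
        while stack:
            b = stack.pop()
            # Prune: if any cable-length constraint cannot vanish on b,
            # the box contains no solution and is discarded.
            if any(not constraint(b, a, L).contains_zero()
                   for a, L in zip(anchors, lengths)):
                continue
            if max(b[0].width(), b[1].width()) <= tol:
                solutions.append(b)
                continue
            # Branch: bisect the widest coordinate.
            i = 0 if b[0].width() >= b[1].width() else 1
            mid = 0.5 * (b[i].lo + b[i].hi)
            for half in (Interval(b[i].lo, mid), Interval(mid, b[i].hi)):
                nb = list(b); nb[i] = half
                stack.append(tuple(nb))
        return solutions

    # Example: anchors at (0,0) and (4,0), cable lengths 5 and 3,
    # searching below the anchors (y <= 0).
    boxes = branch_and_prune((Interval(-1.0, 5.0), Interval(-6.0, 0.0)),
                             anchors=[(0.0, 0.0), (4.0, 0.0)],
                             lengths=[5.0, 3.0])
    print(boxes[0])  # a tiny box enclosing the point (4, -3)

In a full implementation the interval operations would round outward, so that a discarded box provably contains no solution; this sketch uses plain floating point for brevity.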
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, astronomical observations show that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM such as the (g-2)_mu discrepancy, U(1) extensions of the SM gauge group have attracted attention in recent years. In the U(1) extensions considered here, a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM through kinetic mixing. This makes it possible to search for the particle in laboratory experiments probing the electromagnetic interaction. Various experimental programs have been launched to search for hidden photons, among them electron-scattering experiments, which are a versatile tool for exploring a wide range of physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. These experiments investigate the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, and search for a very narrow resonance in the invariant-mass distribution of the lepton pair. This requires an accurate understanding of the theoretical basis of the underlying processes. To this end, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as both signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden-photon parameter space. Finally, the derived methods are used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden-photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
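For context, the kinetic-mixing coupling mentioned above is conventionally written as (sign and normalization conventions for the mixing parameter \epsilon vary across the literature)

    \mathcal{L} \supset -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} - \tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu} - \tfrac{\epsilon}{2} F_{\mu\nu} F'^{\mu\nu} + \tfrac{1}{2} m_{\gamma'}^2 A'_\mu A'^{\mu}

After diagonalizing the kinetic terms, the hidden photon couples to the electromagnetic current with suppressed strength \epsilon e, which is what makes searches in electron-scattering experiments possible.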
Abstract:
Comparison of the crystal structure of a transition-state analogue (TSA) that was used to raise catalytic antibodies for the benzoyl ester hydrolysis of cocaine with structures calculated by ab initio, semiempirical, and solvation semiempirical methods reveals that modeling of solvation is crucial for replicating the crystal-structure geometry. Both SM3 and SM2 calculations, starting from the crystal structure TSA I, converged on structures similar to the crystal structure. The 3-21G(*)/HF, 6-31G*/HF, PM3, and AM1 calculations converged on structures similar to each other, but these gas-phase structures were significantly extended relative to the condensed-phase structures. Two transition states for the hydrolysis of the benzoyl ester of cocaine were located with the SM3 method, whereas the gas-phase calculations failed to locate reasonable transition-state structures for this reaction. These results imply that accurate modeling of the potential energy surfaces for the hydrolysis of cocaine requires solvation methods.
Abstract:
Parents and children, starting at very young ages, discuss religious and spiritual issues: where we come from, what happens to us after we die, whether there is a God, and so on. Unfortunately, few studies have analyzed the content and structure of parent-child conversation about religion and spirituality (Boyatzis & Janicki, 2003; Dollahite & Thatcher, 2009), and most studies have relied on self-report with no direct observation. The current study examined mother-child (M-C) spiritual discourse to learn about its content, structure, and frequency through a survey inventory in combination with direct video observation using a novel structured task. We also analyzed how mothers' religiosity along several major dimensions related to their communication behaviors within both methods. Mothers (N = 39, M age = 40) of children aged 3-12 completed a survey packet on M-C spiritual discourse and standard measures of mothers' religious fundamentalism, intrinsic religiosity, sanctification of parenting (how much the mother saw herself as doing God's work as a parent), and a new measure of parental openness to children's spirituality. Then, in a structured task in our lab, mothers (N = 33) and children (M age = 7.33) watched a short film or read a short book that explored death in an age-appropriate manner and then engaged in a videotaped conversation about the movie or book and their religious or spiritual beliefs. Frequency of M-C spiritual discourse was positively related to mothers' religious fundamentalism (r = .71, p = .00), intrinsic religiosity (r = .77, p = .00), and sanctification of parenting (r = .79, p = .00), but, surprisingly, was inversely related to mothers' openness to the child's spirituality (r = -.52, p = .00). Survey data showed that the two most common topics discussed were God (once a week) and religion as it relates to moral issues (once a week). According to mothers, their children's most common method of initiating spiritual discourse was to repeat what he or she had heard parents or family say about religious issues (M = 2.97; once a week); mothers' most common method was to describe their own religious/spiritual beliefs (M = 2.92). Spiritual discourse most commonly occurred either at bedtime or mealtime, as reported by 26% of mothers, with the most common triggers reported as daily routine/random thoughts (once a week) and observations of nature (once a week). Mothers' most important goals for spiritual discourse were to let their children know that they love them (M = 3.72; very important) and to help them become a good and moral person (M = 3.67; very important). A regression model showed that significant variance in frequency of mother-child spiritual discourse (R² = .84, p = .00) was predicted by the importance mothers placed on their goals during discourse (β = 0.46, p = .00), the frequency with which the mother's spirituality was deepened through spiritual discourse (β = 0.39, p = .00), and the mother's fundamentalism (β = 0.20, p = .05). In a separate regression, the mother's comfort in the structured task (β = 0.70, p = .00) and the number of open-ended questions she asked (β = -0.26, p = .03) predicted the reciprocity between mother and child (R² = .62, p = .00). In addition, the mother's age (β = 0.22, p = .059) and comfort during the task (β = 0.73, p = .00) predicted the child's engagement within the structured task. Other findings and theoretical and methodological implications are discussed.
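For readers who want the shape of the regressions reported above, a minimal sketch of fitting such a model in Python; the file name and column names are hypothetical, and both outcome and predictors are z-scored so the coefficients are standardized β weights comparable to those reported:

    # Hypothetical illustration only: mirrors the structure (not the
    # data) of the discourse-frequency regression in the abstract.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("mc_discourse.csv")      # hypothetical dataset
    zs = (df - df.mean()) / df.std()          # z-score for beta weights
    fit = smf.ols("discourse_freq ~ goal_importance + deepening"
                  " + fundamentalism", data=zs).fit()
    print(fit.rsquared, fit.params)           # compare R^2 and betas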
Abstract:
The longitudinal dimension of schizophrenia and related severe mental illness is a key component of theoretical models of recovery. However, empirical longitudinal investigations have been underrepresented in research on the psychopathology of schizophrenia. Similarly, traditional approaches to the longitudinal analysis of psychopathological data have had serious limitations. The use of modern longitudinal methods is necessary to capture the complexity of biopsychosocial models of treatment and recovery in schizophrenia. The present paper summarizes empirical data from traditional longitudinal research investigating recovery in symptoms, neurocognition, and social functioning. Studies conducted under treatment-as-usual conditions are compared with psychosocial intervention studies, and potential treatment mechanisms of psychosocial interventions are discussed. Investigations of rehabilitation for schizophrenia using the modern longitudinal analytic strategies of growth curve and time series analysis are demonstrated. The respective advantages and disadvantages of these methods are highlighted, and their potential use for future research on treatment effects and recovery in schizophrenia is discussed.
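As a concrete illustration of the growth-curve strategy mentioned above (generic notation, not a model from any particular study reviewed), an outcome y for patient i at time t can be modeled with person-specific intercepts and slopes:

    y_{it} = (\beta_0 + u_{0i}) + (\beta_1 + u_{1i})\,t + \varepsilon_{it}

Average recovery is then captured by the fixed slope \beta_1, while the variance of the random slopes u_{1i} quantifies between-patient heterogeneity in change, something a single pre-post comparison cannot capture.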
Abstract:
Students are now immersed in a vastly different textual landscape than many English scholars, one that relies on the "reading" and interpretation of multiple channels of simultaneous information. As a response to these new kinds of literate practices, my dissertation adds to the growing body of research on multimodal literacies, narratology in new media, and rhetoric through an examination of the place of video games in English teaching and research. I describe in this dissertation a hybridized theoretical basis for incorporating video games in English classrooms. This framework for textual analysis includes elements from narrative theory in literary study, rhetorical theory, and literacy theory; when combined to account for the multiple modalities and complexities of gaming, these elements can provide new insights about theory and practice across all kinds of media, whether written texts, films, or video games. In creating this framework, I hope to encourage students to view texts from a meta-level perspective encompassing textual construction, use, and interpretation. In order to foster meta-level learning in an English course, I use specific theoretical frameworks from the fields of literary studies, narratology, film theory, aural theory, reader-response criticism, game studies, and multiliteracies theory to analyze a particular video game: World of Goo. These theoretical frameworks inform pedagogical practices used in the classroom for the textual analysis of multiple media. Examining a video game from these perspectives, I use analytical methods from each, including close reading, explication, textual analysis, and individual elements of multiliteracies theory and pedagogy. In undertaking an in-depth analysis of World of Goo, I demonstrate the possibilities for classroom instruction with a complex blend of theories and pedagogies in English courses. This blend of theories and practices is meant to foster literacy learning across media, helping students develop metaknowledge of their own literate practices in multiple modes. Finally, I outline a design for a multiliteracies course that would allow English scholars to use video games along with other texts to interrogate texts as systems of information. In doing so, students can learn to view and transform systems in their own lives as audiences, citizens, and workers.
Abstract:
As an important civil engineering material, asphalt concrete (AC) is commonly used to build road surfaces, airports, and parking lots. With traditional laboratory tests and theoretical equations, it is a challenge to fully understand such a random composite material. Based on the discrete element method (DEM), this research seeks to develop and implement computer models as research approaches for improving the understanding of AC microstructure-based mechanics. In this research, three categories of approaches were developed or employed to simulate the microstructure of AC materials: randomly generated models, idealized models, and image-based models. The image-based models were recommended for accurately predicting AC performance, while the other models were recommended as research tools for gaining deeper insight into AC microstructure-based mechanics. A viscoelastic micromechanical model was developed to capture viscoelastic interactions within the AC microstructure. Four types of constitutive models were built to address the four categories of interactions within an AC specimen. Each constitutive model consists of three parts representing three different interaction behaviors: a stiffness model (force-displacement relation), a bonding model (shear and tensile strengths), and a slip model (frictional property). Three techniques were developed to reduce the computational time of AC viscoelastic simulations; for typical three-dimensional models, the computational time was reduced from years or months to days or hours. Dynamic modulus and creep stiffness tests were simulated, and methodologies were developed to determine the viscoelastic parameters. The DE models successfully predicted dynamic modulus, phase angles, and creep stiffness over a wide range of frequencies, temperatures, and time spans. Mineral aggregate morphology characteristics (sphericity, orientation, and angularity) were studied, and it was found that they significantly impact AC creep stiffness. Pavement responses and pavement-vehicle interactions were investigated by simulating pavement sections under a rolling wheel; wheel acceleration, steady rolling, and deceleration were found to significantly impact contact forces. A summary and recommendations are provided in the last chapter, and part of the computer code is provided in the appendices.
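A minimal sketch of the three-part contact law described above (stiffness + bonding + slip) for one DEM contact over one timestep; the parameter names and the linear-elastic stiffness are illustrative assumptions, whereas the thesis' stiffness models are viscoelastic:

    # Illustrative sketch of a stiffness/bonding/slip contact law.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        kn: float          # normal stiffness [N/m]
        ks: float          # shear stiffness [N/m]
        t_strength: float  # tensile bond strength [N]
        s_strength: float  # shear bond strength [N]
        mu: float          # friction coefficient after bond breakage
        bonded: bool = True
        fn: float = 0.0    # normal force (+ compression, - tension)
        fs: float = 0.0    # accumulated shear force

        def update(self, dun, dus):
            """Advance the contact by normal/shear displacement increments."""
            # Stiffness model: incremental force-displacement relation.
            self.fn += self.kn * dun
            self.fs += self.ks * dus
            # Bonding model: break the bond if either strength is exceeded.
            if self.bonded and (-self.fn > self.t_strength
                                or abs(self.fs) > self.s_strength):
                self.bonded = False
            if not self.bonded:
                if self.fn < 0.0:          # unbonded contacts carry no tension
                    self.fn, self.fs = 0.0, 0.0
                else:                      # slip model: Coulomb friction cap
                    limit = self.mu * self.fn
                    if abs(self.fs) > limit:
                        self.fs = limit if self.fs > 0 else -limit
            return self.fn, self.fs

    c = Contact(kn=1e8, ks=5e7, t_strength=100.0, s_strength=150.0, mu=0.5)
    print(c.update(dun=1e-6, dus=2e-6))  # (100.0, 100.0): still bonded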
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years, given aggressive governmental policies for reducing fossil fuel dependency. So-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies for massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric based on Subspace Realization Theory is proposed, adapted to stochastic, non-stationary, and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as Adaptive Simulated Annealing (ASA). This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped-gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
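A minimal sketch of the Modal Assurance Criterion named above; the definition is standard, while the example vectors are invented for illustration:

    # MAC(i, j) compares mode shapes phi_i (e.g. analytical) and
    # psi_j (e.g. identified); values near 1 indicate correlated modes.
    import numpy as np

    def mac(phi, psi):
        """MAC matrix between columns of phi and psi (n_dof x n_modes)."""
        num = np.abs(phi.conj().T @ psi) ** 2
        den = np.outer(np.sum(np.abs(phi) ** 2, axis=0),
                       np.sum(np.abs(psi) ** 2, axis=0))
        return num / den

    # Example: the second identified mode matches the first analytical mode.
    phi = np.array([[1.0, 1.0], [2.0, -1.0], [3.0, 0.5]])
    psi = phi[:, ::-1]
    print(np.round(mac(phi, psi), 3))  # [[0.008 1.], [1. 0.008]]

Consistent with its role above, such a MAC matrix supplies the objective values that a model-updating search like ASA seeks to improve.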
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, in which the Fourier transform of the error density in the deconvolution model decays polynomially. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both a theoretical and a simulation-based point of view. A major consequence of our work is that detecting qualitative features of a density in a deconvolution problem is a doable task, even though the minimax rates for pointwise estimation are very slow.
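For readers unfamiliar with the setting, the standard density deconvolution model assumed here reads (generic notation)

    Y_i = X_i + \varepsilon_i, \qquad i = 1, \dots, n

where the X_i are i.i.d. with unknown density f and the errors \varepsilon_i are i.i.d. with known density f_\varepsilon, independent of the X_i, so the observations have density g = f * f_\varepsilon. The moderately ill-posed regime means the Fourier transform of the error density decays polynomially,

    |\mathcal{F} f_\varepsilon(t)| \asymp |t|^{-\beta} \quad \text{as } |t| \to \infty

for some \beta > 0, in contrast to the severely ill-posed case of exponential decay (e.g. Gaussian errors).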
Abstract:
Context. Planet formation models have been developed during the past years to try to reproduce what has been observed of both the solar system and extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which emerges from radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics, and considering the formation of systems. Aims: In a recent series of papers, we presented improvements in the physics of our models, focusing in particular on the internal structure of forming planets and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity on the architecture and composition of planetary systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for the accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as the number of embryos we seed the system with grows from 2 to 20. Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
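A minimal sketch of an N-body step with naive collision detection, in the spirit of the calculation described above; the paper's model (gas accretion, damping, disc physics) is far richer, and the names and the perfect-merger rule below are illustrative assumptions:

    # Illustrative N-body leapfrog step with pairwise merger detection.
    import numpy as np

    G = 1.0  # gravitational constant in code units

    def accelerations(pos, mass, eps=1e-6):
        """Pairwise softened gravitational accelerations."""
        d = pos[None, :, :] - pos[:, None, :]        # displacement vectors
        r2 = (d ** 2).sum(-1) + eps ** 2
        np.fill_diagonal(r2, np.inf)                 # no self-force
        return (G * mass[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

    def step(pos, vel, mass, radius, dt):
        """One leapfrog (kick-drift-kick) step plus collision detection."""
        vel += 0.5 * dt * accelerations(pos, mass)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos, mass)
        # Collision detection: merge any pair closer than the sum of
        # radii, conserving mass and momentum (perfect-merger assumption).
        i, j = np.triu_indices(len(mass), k=1)
        d = np.linalg.norm(pos[i] - pos[j], axis=1)
        hit = d < radius[i] + radius[j]
        for a, b in zip(i[hit], j[hit]):
            m = mass[a] + mass[b]
            pos[a] = (mass[a] * pos[a] + mass[b] * pos[b]) / m
            vel[a] = (mass[a] * vel[a] + mass[b] * vel[b]) / m
            mass[a], mass[b] = m, 0.0                # b becomes a massless ghost
        return pos, vel, mass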
Abstract:
Reducing the risk that emerges from hazards of natural origin and societal vulnerability is a key challenge for the development of more resilient communities and for the overall goal of sustainable development. The following chapter outlines a framework for multidimensional, holistic vulnerability assessment, understood as part of risk evaluation and risk management in the context of Disaster Risk Management (DRM) and Climate Change Adaptation (CCA). As a heuristic, the framework is a thinking tool to guide systematic assessments of vulnerability and to provide a basis for developing comparative indicators and criteria to assess key factors and the various dimensions of vulnerability. It was developed particularly for regions in Europe, but it can also be applied in other world regions. The framework was developed within the context of the research project MOVE (Methods for the Improvement of Vulnerability Assessment in Europe), sponsored by the European Commission within the FP7 programme.