405 results for Parameterized layouts
Abstract:
Organic semiconductors, with their unique combination of electronic and mechanical properties, may offer cost-effective ways of realizing many electronic applications, e.g. large-area flexible displays, printed integrated circuits and plastic solar cells. In order to facilitate the rational compound design of organic semiconductors, it is essential to understand relevant physical properties, e.g. charge transport. This, however, is not straightforward, since physical models operating on different time and length scales need to be combined. First, the material morphology has to be known at an atomistic scale. For this, atomistic molecular dynamics simulations can be employed, provided that an atomistic force field is available; otherwise it has to be developed based on existing force fields and first-principles calculations. However, atomistic simulations are typically limited to nanometer length scales and nanosecond time scales. To overcome these limitations, systematic coarse-graining techniques can be used. In the first part of this thesis, it is demonstrated how a force field can be parameterized for a typical organic molecule. Different coarse-graining approaches are then introduced, together with an analysis of their advantages and problems. Once an atomistic morphology is available, charge transport can be studied by combining high-temperature Marcus theory with kinetic Monte Carlo simulations. The approach is applied to hole transport in amorphous films of tris(8-hydroxyquinoline)aluminium (Alq3). First, the influence of the force field parameters and the corresponding morphological changes on charge transport is studied. It is shown that energetic disorder plays an important role in amorphous Alq3, defining the charge carrier dynamics. Its spatial correlations govern the Poole-Frenkel behavior of the charge carrier mobility. It is found that hole transport is dispersive for system sizes accessible to simulations, meaning that calculated mobilities depend strongly on the system size. A method for extrapolating calculated mobilities to infinite system size is proposed, allowing direct comparison of simulation results and time-of-flight experiments. The extracted value of the nondispersive hole mobility and its electric field dependence for amorphous Alq3 agree well with experimental results.
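The charge-transport step described here, Marcus rates evaluated between localized sites and fed into a kinetic Monte Carlo loop, can be sketched as follows. The rate expression is the standard high-temperature Marcus form; the transfer integral, site-energy differences, reorganization energy and the sign convention for the energy difference are illustrative assumptions, not values from the thesis.

```python
import numpy as np

HBAR = 6.582119569e-16  # reduced Planck constant in eV*s
KB = 8.617333262e-5     # Boltzmann constant in eV/K

def marcus_rate(J, dE, lam, T=300.0):
    """High-temperature Marcus rate for a hop i -> j.

    J   : transfer integral (eV)
    dE  : site-energy difference E_i - E_j, incl. field contribution (eV)
    lam : reorganization energy (eV)
    """
    return (2.0 * np.pi / HBAR) * J**2 / np.sqrt(4.0 * np.pi * lam * KB * T) \
        * np.exp(-(dE - lam)**2 / (4.0 * lam * KB * T))

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: pick a hop and draw the waiting time."""
    total = rates.sum()
    hop = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total  # exponentially distributed waiting time
    return hop, dt

rng = np.random.default_rng(0)
rates = np.array([marcus_rate(1e-3, dE, 0.2) for dE in (-0.05, 0.0, 0.05)])
hop, dt = kmc_step(rates, rng)
```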
Abstract:
The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs) from an experimental and numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared to numerical results, in order to extend the experimental database. The results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities. The hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimises the marine space required for the installation.
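The wave transmission mentioned above is usually quantified by a transmission coefficient, the ratio of transmitted to incident significant wave height. The sketch below only illustrates that standard definition; the numbers are hypothetical, not results from the experiments.

```python
import numpy as np

def transmission_coefficient(Hs_incident, Hs_transmitted):
    """Wave transmission coefficient Kt = Hs,t / Hs,i (dimensionless).

    Values close to 1 mean most wave energy passes the device row,
    i.e. little coastal protection from a single line of WECs.
    """
    return np.asarray(Hs_transmitted) / np.asarray(Hs_incident)

# Illustrative (hypothetical) wave heights in metres:
print(transmission_coefficient(Hs_incident=[1.0, 1.5], Hs_transmitted=[0.85, 1.2]))
```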
Abstract:
The topic of this thesis is the development and combination of various numerical methods, as well as their application to problems of strongly correlated electron systems. Such materials show many interesting physical properties, e.g. superconductivity and magnetic ordering, and play an important role in technological applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). In recent decades many insights have already been gained through the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden. The reason for this is the restriction of current methods to certain parameter regimes. One of the strongest limitations is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems for which conventional band-structure calculations fail. A major limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC. We investigate the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address this, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, in the paramagnetic phase the momentum dependence of the self-energy is considerably weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parameterized by the non-interacting dispersion. This particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
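One compact way to express the finding that the self-energy is parameterized by the non-interacting dispersion is the relation below; the square-lattice dispersion shown is an assumed example, and the anisotropic or doped cases discussed above would use the corresponding dispersion instead.

```latex
\Sigma(\mathbf{k}, i\omega_n) \;\approx\; \Sigma\bigl(\varepsilon_{\mathbf{k}}, i\omega_n\bigr),
\qquad
\varepsilon_{\mathbf{k}} = -2t\left(\cos k_x + \cos k_y\right)
```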
An experience in creating combinatory lexicographic entries: methods and data from the CombiNet project
Abstract:
The present dissertation aims at simulating the construction of lexicographic layouts for an Italian combinatory dictionary based on real linguistic data, extracted from corpora by using computational methods. This work is based on the assumption that the intuition of the native speaker, or of the lexicographer who manually extracts and classifies all the relevant data, is not adequate to provide sufficient information on the meaning and use of words. Therefore, a study of the real use of language is required, and this is particularly true for dictionaries that collect the combinatory behaviour of words, where the task of the lexicographer is to identify the typical combinations in which a word occurs. This study is conducted in the framework of the CombiNet project, aimed at studying Italian word combinations and at building an online, corpus-based combinatory lexicographic resource for the Italian language. The work is divided into three chapters. Chapter 1 describes the criteria considered for the classification of word combinations according to the work of Ježek (2011). Chapter 1 also contains a brief comparison between the most important Italian combinatory dictionaries and the BBI Dictionary of Word Combinations in order to describe how word combinations are treated in these lexicographic resources. Chapter 2 describes the main computational methods used for the extraction of word combinations from corpora, taking into account the advantages and disadvantages of the two methods. Chapter 3 mainly focuses on the practical work carried out in the framework of the CombiNet project, with reference to the tools and resources used (EXTra, LexIt and the "La Repubblica" corpus). Finally, the extracted data and the lexicographic layout of the lemmas to be included in the combinatory dictionary are discussed, namely for the words "acqua" (water), "braccio" (arm) and "colpo" (blow, shot, stroke).
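The corpus-based extraction of word combinations mentioned for Chapter 2 is commonly done with statistical association measures; the sketch below uses pointwise mutual information (PMI) as one such measure. It is a generic illustration, not the specific procedure implemented in EXTra or LexIt, and the counts are invented rather than taken from the "La Repubblica" corpus.

```python
import math
from collections import Counter

def pmi(pair_count, w1_count, w2_count, total):
    """Pointwise mutual information of a (word, collocate) pair."""
    p_xy = pair_count / total
    p_x = w1_count / total
    p_y = w2_count / total
    return math.log2(p_xy / (p_x * p_y))

# Toy counts (hypothetical):
pairs = Counter({("acqua", "potabile"): 120, ("acqua", "minerale"): 95, ("acqua", "verde"): 3})
words = Counter({"acqua": 5000, "potabile": 300, "minerale": 800, "verde": 4000})
N = 1_000_000  # assumed corpus size
for (w1, w2), c in pairs.items():
    print(w1, w2, round(pmi(c, words[w1], words[w2], N), 2))
```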
Abstract:
The following research thesis concerns a retrofit project carried out in Copenhagen, Denmark, on one of the buildings belonging to the Royal Danish Academy. The key assumption and basis of the entire research process is that, up to now, the standard procedure in retrofit cases like this has been to use energy simulation software as the comparative method between the as-built situation and the design. These programs generally divide the space into different thermal zones, assigning to each of them different levels of occupancy, activities, set-point temperatures for cooling and heating analyses, and so on, but always as average, constant values, usually taken at the midpoint of the single thermal zone. The project and its research path therefore stem from the attempt to investigate the potential of this kind of design-for-retrofit process, which, as anticipated above, is not antithetical but complementary to the classic energy-based retrofit, thus passing from the building scale, with all its thermal zones, to the users' scale, related to humans and microclimates. The main software used in this process is Autodesk Simulation CFD. The idea behind the project is that in certain situations, for example, it will not be necessary to add insulation layers throughout (previously parameterized and optimized with Design Builder), and that even in winter conditions, due perhaps to the users' activities, the increased level of clothing (clo) and the heat produced by equipment, thermal comfort could also be achieved in areas characterized by a considerably lower mean radiant temperature (MRT). After the analysis of the state of the art and its simulations, the project was further supported by the tool itself, the CFD software, in an iterative process aimed at achieving visible improvements in terms of MRT, on spaces with different needs and characteristics, in both winter and summer regimes.
Abstract:
The ability of the PM3 semiempirical quantum mechanical method to reproduce hydrogen bonding in nucleotide base pairs was assessed. Results of PM3 calculations on the nucleotides 2′-deoxyadenosine 5′-monophosphate (pdA), 2′-deoxyguanosine 5′-monophosphate (pdG), 2′-deoxycytidine 5′-monophosphate (pdC), and 2′-deoxythymidine 5′-monophosphate (pdT) and the base pairs pdA–pdT, pdG–pdC, and pdG(syn)–pdC are presented and discussed. The PM3 method is the first of the parameterized NDDO quantum mechanical models with any ability to reproduce hydrogen bonding between nucleotide base pairs. Intermolecular hydrogen bond lengths between nucleotides displaying Watson–Crick base pairing are 0.1–0.2 Å shorter than experimental values. Nucleotide bond distances, bond angles, and torsion angles about the glycosyl bond (χ), the C4′–C5′ bond (γ), and the C5′–O5′ bond (β) agree with experimental results. There are many possible conformations of nucleotides. PM3 calculations reveal that many of the most stable conformations are stabilized by intramolecular C–H···O hydrogen bonds. These interactions disrupt the usual sugar puckering. The stacking interactions of a dT–pdA duplex are examined at different levels of gradient optimization. The intramolecular hydrogen bonds found in the nucleotide base pairs disappear in the duplex, as a result of the additional constraints on the phosphate group when it is part of a DNA backbone. Sugar puckering is reproduced by the PM3 method for the four bases in the dT–pdA duplex. PM3 underestimates the attractive stacking interactions of base pairs in a B-DNA helical conformation. The performance of the PM3 method implemented in SPARTAN is contrasted with that implemented in MOPAC. At present, accurate ab initio calculations are too time-consuming to be of practical use, and molecular mechanics methods cannot be used to determine quantum mechanical properties such as reaction paths, transition-state structures, and activation energies. The PM3 method should be used with extreme caution for the examination of small DNA systems. Future parameterizations of semiempirical methods should incorporate base stacking interactions into the parameterization data set to enhance the ability of these methods to reproduce such interactions.
Abstract:
Rock-pocket and honeycomb defects impair overall stiffness, accelerate aging, reduce service life, and cause structural problems in hardened concrete members. Traditional methods for detecting such deficient volumes involve visual observation or localized nondestructive methods, which are labor-intensive, time-consuming, highly sensitive to test conditions, and require knowledge of and accessibility to defect locations. The authors propose a vibration-response-based nondestructive technique that combines experimental and numerical methodologies to identify the location and severity of internal defects in concrete members. The experimental component entails collecting mode shape curvatures from laboratory beam specimens with size-controlled rock-pocket and honeycomb defects, and the numerical component entails simulating the beam vibration response through a finite element (FE) model parameterized with three defect-identifying variables indicating the location (x, coordinate along the beam length) and severity of damage (alpha, stiffness reduction, and beta, mass reduction). Defects are detected by comparing the FE model predictions to experimental measurements and inferring the small number of defect-identifying variables. This method is particularly well suited for rapid and cost-effective quality assurance of precast concrete members and for inspecting concrete members with simple geometric forms.
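The identification step described here, matching FE-predicted mode shape curvatures to measurements and inferring the three defect variables, amounts to a small inverse problem. The sketch below shows one plausible formulation as a bounded least-squares fit; the placeholder fe_model, the bounds and the initial guess are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_curvature(params, coords, fe_model):
    """Mode-shape curvature from a (hypothetical) parameterized FE model.

    params = (x, alpha, beta): defect location along the beam, stiffness
    reduction, and mass reduction, as in the abstract.
    """
    x, alpha, beta = params
    return fe_model(coords, x, alpha, beta)

def objective(params, coords, measured, fe_model):
    """Sum of squared differences between predicted and measured curvatures."""
    return np.sum((predicted_curvature(params, coords, fe_model) - measured) ** 2)

def run_identification(coords, measured, fe_model, beam_length):
    # 'measured' comes from the lab specimens, 'fe_model' from the FE code;
    # only the inference of the three defect variables is sketched here.
    x0 = np.array([0.5 * beam_length, 0.1, 0.0])            # initial guess (assumed)
    bounds = [(0.0, beam_length), (0.0, 0.9), (0.0, 0.5)]   # physical ranges (assumed)
    return minimize(objective, x0, args=(coords, measured, fe_model), bounds=bounds)
```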
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) dataset acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated networks of unisensory and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
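The max-criterion used to test the candidate regions can be checked directly on region-level response estimates; the sketch below is a generic illustration with hypothetical region names and values, not the study's actual GLM estimates.

```python
def meets_max_criterion(beta_A, beta_V, beta_AV):
    """Max-criterion for multisensory integration: the AV response must
    exceed both unisensory responses (A < AV > V), as used in the abstract."""
    return (beta_AV > beta_A) and (beta_AV > beta_V)

# Hypothetical region-level response estimates (e.g., GLM betas for A, V, AV):
regions = {"STS": (0.8, 0.9, 1.4), "IPS": (0.5, 0.6, 0.55)}
for name, (a, v, av) in regions.items():
    print(name, meets_max_criterion(a, v, av))
```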
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen based on the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), the Simple Committee Technique and the Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
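One plausible reading of the two committee techniques is sketched below: the simple committee averages predictions over the pre-selected input spaces, while the minimum-variance committee keeps the input spaces on which the three modeling methods disagree least. The array shapes, the selection rule and all numbers are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def simple_committee(predictions):
    """Average smoke predictions over pre-selected input spaces and methods.

    predictions: array of shape (n_spaces, n_methods), one prediction per
    (transformed input space, modeling method) pair; the 10 input spaces
    are assumed to have been pre-selected already.
    """
    return predictions.mean()

def minimum_variance_committee(predictions, k=10):
    """Keep the k input spaces where the three methods disagree least
    (smallest variance across methods), then average those predictions."""
    disagreement = predictions.var(axis=1)
    best = np.argsort(disagreement)[:k]
    return predictions[best].mean()

# Hypothetical predictions for 25 input spaces x 3 methods (regression,
# neural network, k-NN); values are illustrative only.
rng = np.random.default_rng(1)
preds = 2.0 + 0.3 * rng.standard_normal((25, 3))
print(simple_committee(preds), minimum_variance_committee(preds))
```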
Abstract:
With energy demands and costs growing every day, the need for improving energy efficiency in electrical devices has become very important. Research into various methods of improving efficiency for all electrical components will be key to meeting future energy needs. This report documents the design, construction, and testing of a research-quality electric machine dynamometer and test bed. This test cell system can be used for research in several areas, including electric drive systems, electric vehicle propulsion systems, power electronic converters, and load/source elements in an AC microgrid, as well as many others. The test cell design criteria and decisions will be discussed with reference to user functionality and flexibility. The individual power components will be discussed in detail as to how they relate to the project, highlighting any features used in the operation of the test cell. A project timeline will be discussed, clearly stating the work done by the different individuals involved in the project. In addition, the system will be parameterized and benchmark data will be used to demonstrate the functional operation of the system.
Abstract:
The marine aragonite cycle has been included in the global biogeochemical model PISCES to study the role of aragonite in shallow-water CaCO3 dissolution. Aragonite production is parameterized as a function of mesozooplankton biomass and the aragonite saturation state of ambient waters. Observation-based estimates of marine carbonate production and dissolution are well reproduced by the model, and about 60% of the combined CaCO3 water-column dissolution from aragonite and calcite is simulated above 2000 m. In contrast, a calcite-only version yields a much smaller fraction. This suggests that the aragonite cycle should be included in models for a realistic representation of CaCO3 dissolution and alkalinity. For the SRES A2 CO2 scenario, production rates of aragonite are projected to decrease notably after 2050. By the end of this century, global aragonite production is reduced by 29% and total CaCO3 production by 19% relative to pre-industrial levels. Geographically, the effect of increasing atmospheric CO2, and the subsequent reduction in saturation state, is largest in subpolar and polar areas, where the modeled aragonite production is projected to decrease by 65% by 2100.
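A production term of the kind described, scaling with mesozooplankton biomass and with the aragonite saturation state and vanishing in undersaturated water, might be sketched as follows. The functional form and the constants are assumptions chosen for illustration, not the actual PISCES parameterization.

```python
import numpy as np

def aragonite_production(meso_biomass, omega_arag, k_half=0.4, rate=0.03):
    """Illustrative aragonite production term.

    Production scales with mesozooplankton biomass and with the aragonite
    saturation state of ambient water, dropping to zero where the water is
    undersaturated (omega <= 1). The saturation factor and constants are
    assumptions, not the PISCES formulation itself.
    """
    saturation_term = np.maximum(omega_arag - 1.0, 0.0)
    return rate * meso_biomass * saturation_term / (k_half + saturation_term)

# Example: production declines as omega decreases toward 1 under rising CO2.
print(aragonite_production(meso_biomass=1.0, omega_arag=np.array([3.0, 2.0, 1.2, 0.9])))
```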
Abstract:
We present the first-order corrected dynamics of fluid branes carrying higher-form charge by obtaining the general form of their equations of motion to pole-dipole order in the absence of external forces. Assuming linear response theory, we characterize the corresponding effective theory of stationary bent charged (an)isotropic fluid branes in terms of two sets of response coefficients, the Young modulus and the piezoelectric moduli. We subsequently find large classes of examples in gravity of this effective theory by constructing stationary strained charged black brane solutions to first order in a derivative expansion. Using solution-generating techniques and bent neutral black branes as a seed solution, we obtain a class of charged black brane geometries carrying smeared Maxwell charge in Einstein-Maxwell-dilaton gravity. In the specific case of ten-dimensional space-time, we furthermore use T-duality to generate bent black branes with higher-form charge, including smeared D-branes of type II string theory. By subsequently measuring the bending moment and the electric dipole moment which these geometries acquire due to the strain, we uncover that their form is captured by classical electroelasticity theory. In particular, we find that the Young modulus and the piezoelectric moduli of our strained charged black brane solutions are parameterized by a total of four response coefficients, for both the isotropic and the anisotropic cases.
Abstract:
Localized short-echo-time ¹H MR spectra of human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. It was investigated what can be gained by basis parameterization, i.e., description of the basis spectra as sums of parametric lineshapes. Effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
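The linear-combination step common to both fitting approaches can be illustrated with a non-negative least-squares fit of metabolite basis spectra; the sketch omits the prior-knowledge constraints, parametric lineshapes and macromolecular baseline discussed above, and the basis spectra are synthetic rather than measured.

```python
import numpy as np
from scipy.optimize import nnls

def fit_metabolites(spectrum, basis):
    """Fit a measured spectrum as a non-negative linear combination of
    metabolite basis spectra (columns of `basis`); returns the amplitudes
    and the residual norm. Only the linear-combination step is shown."""
    amplitudes, residual = nnls(basis, spectrum)
    return amplitudes, residual

# Toy example with three synthetic Gaussian "metabolite" basis spectra:
x = np.linspace(0, 10, 500)
basis = np.column_stack([np.exp(-(x - c) ** 2 / 0.1) for c in (2.0, 4.7, 7.5)])
true = basis @ np.array([1.0, 0.4, 0.8])
noisy = true + 0.01 * np.random.default_rng(2).standard_normal(x.size)
amps, res = fit_metabolites(noisy, basis)
print(np.round(amps, 2))
```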
Evaluation of control and surveillance strategies for classical swine fever using a simulation model
Abstract:
Classical swine fever (CSF) outbreaks can cause enormous losses in naïve pig populations. How best to minimize the economic damage and the number of culled animals caused by CSF is therefore an important research area. The baseline CSF control strategy in the European Union and Switzerland consists of culling all animals in infected herds, movement restrictions for animals, material and people within a given distance of the infected herd, and epidemiological tracing of transmission contacts. Additional disease control measures such as pre-emptive culling or vaccination have been recommended based on the results of several simulation models; however, these models were parameterized for areas with high animal densities. The objective of this study was to explore whether pre-emptive culling and emergency vaccination should also be recommended in low- to moderate-density areas such as Switzerland. Additionally, we studied the influence of initial outbreak conditions on outbreak severity to improve the efficiency of disease prevention and surveillance. A spatial, stochastic, individual-animal-based simulation model using all registered Swiss pig premises in 2009 (n=9770) was implemented to quantify these relationships. The model simulates within-herd and between-herd transmission (direct and indirect contacts and local area spread). By varying four parameters, (a) control measures, (b) index herd type (breeding, fattening, weaning or mixed herd), (c) detection delay for secondary cases during an outbreak and (d) contact tracing probability, 112 distinct scenarios were simulated. To assess the impact of the scenarios on outbreak severity, daily transmission rates were compared between scenarios. Compared with the baseline strategy (stamping out and movement restrictions), vaccination and pre-emptive culling reduced neither outbreak size nor duration. Outbreaks starting in a herd with weaning piglets or fattening pigs caused higher losses in terms of the number of culled premises and lasted longer than those starting in the other two index herd types. Similarly, larger transmission rates were estimated for outbreaks starting in these index herd types. A longer detection delay resulted in more culled premises and a longer outbreak duration, and better transmission tracing increased the number of short outbreaks. Based on the simulation results, the baseline control strategy appears sufficient to control CSF in areas of low to moderate animal density. Early detection of outbreaks is crucial, and risk-based surveillance should focus on premises with weaning piglets and fattening pigs.
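A minimal sketch of the kind of stochastic between-herd transmission step such a model iterates day by day is given below; it collapses direct contacts, indirect contacts and local area spread into a single daily infection probability and is an illustration only, not the Swiss model itself.

```python
import numpy as np

def daily_spread(infected, susceptible, beta_contact, rng):
    """One day of between-herd transmission in a toy stochastic model.

    Each susceptible premises escapes infection from each infected premises
    independently with probability (1 - beta_contact); the real model
    distinguishes contact types and local area spread, omitted here.
    """
    n_inf = len(infected)
    p_escape = (1.0 - beta_contact) ** n_inf     # prob. of escaping all sources today
    newly_infected = [h for h in susceptible if rng.random() > p_escape]
    return newly_infected

rng = np.random.default_rng(3)
print(daily_spread(infected=[1, 2], susceptible=list(range(3, 20)), beta_contact=0.01, rng=rng))
```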
Abstract:
Any functionally important mutation is embedded in an evolutionary matrix of other mutations. Cladistic analysis, based on this observation, is a method of investigating gene effects that uses a haplotype phylogeny to define a set of tests which localize causal mutations to branches of the phylogeny. Previous implementations of cladistic analysis have not addressed the issue of analyzing data from related individuals, although in human studies family data are usually needed to obtain unambiguous haplotypes. In this study, a method of cladistic analysis is described in which haplotype effects are parameterized in a linear model which accounts for familial correlations. The method was used to study the effect of apolipoprotein (Apo) B gene variation on total-, LDL-, and HDL-cholesterol, triglyceride, and Apo B levels in 121 French families. Five polymorphisms defined the Apo B haplotypes: the signal peptide insertion/deletion, Bsp1286I, XbaI, MspI, and EcoRI. Eleven haplotypes were found, and a haplotype phylogeny was constructed and used to define a set of tests of haplotype effects on lipid and Apo B levels. This new method of cladistic analysis, the parametric method, found significant effects of single haplotypes for all variables. For HDL-cholesterol, three clusters of evolutionarily related haplotypes affecting levels were found. Haplotype effects accounted for about 10% of the genetic variance of triglyceride and HDL-cholesterol levels. The results of the parametric method were compared to those of a method of cladistic analysis based on permutational testing. The permutational method detected fewer haplotype effects, even when modified to account for correlations within families. Simulation studies exploring these differences found evidence of systematic errors in the permutational method due to the process by which haplotype groups were selected for testing. The applicability of cladistic analysis to human data was demonstrated, and the parametric method is suggested as an improvement over the permutational method. This study has identified candidate haplotypes for sequence comparisons in order to locate the functional mutations in the Apo B gene which may influence plasma lipid levels.
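The parametric idea described here, haplotype effects as fixed effects in a linear model that accounts for familial correlations, can be sketched with a linear mixed model. The formula, the column names and the single-haplotype simplification (each person actually carries two haplotypes) are illustrative assumptions, not the dissertation's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_haplotype_effects(df: pd.DataFrame):
    """Linear mixed model for haplotype effects on a lipid trait.

    Fixed effects: the Apo B haplotype carried; random intercept per family
    to account for familial correlations. Generic sketch of the parametric
    approach, not the dissertation's exact specification.
    """
    model = smf.mixedlm("hdl ~ C(haplotype)", data=df, groups=df["family"])
    return model.fit()

# df is assumed to have columns: hdl (phenotype), haplotype (label of the
# haplotype carried), and family (pedigree identifier), one row per person.
# result = fit_haplotype_effects(df); print(result.summary())
```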