937 results for Higher order interior point method
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
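A minimal sketch (not from the paper) of the finite-field idea: static properties such as the polarizability can be obtained numerically by differentiating the field-dependent energy with respect to the applied field. The model energy E(F) and its coefficients below are hypothetical.

```python
def energy(F, mu0=0.3, alpha=1.5, beta=0.05):
    # Hypothetical field-dependent energy of a one-dimensional model system:
    # E(F) = -mu0*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3
    return -mu0 * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

def finite_field_polarizability(E, h=1e-3):
    # Static polarizability alpha = -d2E/dF2 at F = 0, estimated with a
    # central second difference of the energy.
    return -(E(h) - 2.0 * E(0.0) + E(-h)) / h**2

print(finite_field_polarizability(energy))  # ~1.5, the model's alpha
```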
Abstract:
Initial convergence of the perturbation series expansion for vibrational nonlinear optical (NLO) properties was analyzed. The zero-point vibrational average (ZPVA) was obtained through first-order in mechanical plus electrical anharmonicity. Results indicated that higher-order terms in electrical and mechanical anharmonicity can make substantial contributions to the pure vibrational polarizability of typical NLO molecules.
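For orientation, the double perturbation series in question is commonly written in the Bishop–Kirtman square-bracket notation, sketched generically below; the superscripts (n, m) denote the order in electrical and mechanical anharmonicity, respectively (the specific terms analyzed in the paper are not reproduced here).

```latex
% Generic form of the expansion for the pure vibrational polarizability;
% (n, m) = (order in electrical anharmonicity, order in mechanical anharmonicity),
% with odd total orders vanishing by symmetry.
\alpha^{\mathrm{v}} = [\mu^{2}]^{(0,0)} + [\mu^{2}]^{(2,0)} + [\mu^{2}]^{(1,1)} + [\mu^{2}]^{(0,2)} + \cdots
```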
Abstract:
Left unilateral spatial neglect resulting from right brain damage is characterized by loss of awareness for stimuli in the contralesional side of space, despite intact visual pathways. Using fMRI, we examined whether patients with neglect are more likely to consciously detect emotionally negative complex scenes in the neglected hemifield than visually similar neutral pictures and, if so, what neural mechanisms mediate this effect. Photographs of emotional and neutral scenes taken from the IAPS were presented in a divided visual field paradigm. As expected, the detection rate for emotional stimuli presented in the neglected field was higher than for neutral ones. Successful detection of emotional scenes as opposed to neutral stimuli in the left visual field (LVF) produced activations in the parahippocampal and anterior cingulate areas in the right hemisphere. Detection of emotional stimuli presented in the intact right visual field (RVF) activated a distributed network of structures in the left hemisphere, including anterior and posterior cingulate cortex, insula, as well as visual striate and extrastriate areas. LVF-RVF contrasts for emotional stimuli revealed activations in right and left attention-related prefrontal areas, whereas the RVF-LVF comparison showed activations in the posterior cingulate and extrastriate visual cortex in the left hemisphere. An additional analysis contrasting detected vs. undetected emotional LVF stimuli showed involvement of left anterior cingulate, right frontal and extrastriate areas. We hypothesize that the beneficial role of emotion in overcoming neglect is achieved by activation of frontal and limbic networks, which provide emotional stimuli with privileged access to attention through top-down modulation of processing in the higher-order extrastriate visual areas. Our results point to the importance of the top-down regulatory role of the frontal attentional systems, which might enhance visual activations and lead to greater salience of emotional stimuli for perceptual awareness.
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. It is shown that, when those shocks are operative, a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
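A toy sketch of the two-step idea (in Python rather than the authors' Matlab, with a hypothetical scalar equilibrium condition f standing in for the full model): first find the stationary solution at zero aggregate shock, then compute the first-order response to the shock via the implicit function theorem.

```python
import numpy as np
from scipy import optimize

# Hypothetical equilibrium condition f(x, sigma) = 0; sigma scales the
# aggregate shock, x stands in for the (here one-dimensional) state.
def f(x, sigma):
    return x**3 + x - 1.0 - sigma

# Step 1: stationary solution (no aggregate shocks, sigma = 0).
x0 = optimize.brentq(lambda x: f(x, 0.0), 0.0, 1.0)

# Step 2: first-order perturbation in sigma via the implicit function
# theorem, dx/dsigma = -f_sigma / f_x, evaluated at (x0, 0).
h = 1e-6
f_x = (f(x0 + h, 0.0) - f(x0 - h, 0.0)) / (2 * h)
f_sigma = (f(x0, h) - f(x0, -h)) / (2 * h)
dx_dsigma = -f_sigma / f_x

# First-order approximation of the solution for a small aggregate shock.
sigma = 0.01
print(x0, x0 + dx_dsigma * sigma)
```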
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease.
Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 2.1 GHz processors. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
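A schematic sketch of the memory-light maxT idea (illustrative Python, not the C++ of MBMDR-3.0.3; the |correlation| statistic and the simulated predictors are stand-ins for the real MB-MDR tests): only the maximum statistic of each permutation is kept, so extra memory grows with the number of permutations rather than with the permutations-by-tests matrix of the standard algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_tests = 200, 1_000
# Hypothetical predictors, one column per interaction test.
X = rng.normal(size=(n_ind, n_tests))

def test_statistics(trait):
    # Simple |correlation|-type statistic per test, a stand-in for the
    # real per-interaction MB-MDR test statistics.
    t = (trait - trait.mean()) / trait.std()
    return np.abs(t @ X) / n_ind

def maxT_adjusted_pvalues(trait, n_perm=999):
    observed = test_statistics(trait)
    # Keep one number per permutation: the maximum statistic under the
    # permuted trait. Standard maxT stores an n_perm-by-n_tests matrix;
    # this version needs only O(n_perm) extra memory, whatever n_tests is.
    max_null = np.array([test_statistics(rng.permutation(trait)).max()
                         for _ in range(n_perm)])
    max_null.sort()
    # Adjusted p-value: fraction of permutation maxima >= observed statistic.
    exceed = n_perm - np.searchsorted(max_null, observed, side="left")
    return (1 + exceed) / (n_perm + 1)

pvals = maxT_adjusted_pvalues(rng.normal(size=n_ind))
```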
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of the variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
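One ingredient of such a pseudospectral scheme, sketched below under simplifying assumptions (a one-dimensional periodic field, in Python, not code from the paper): the Fourier derivative used along the horizontal direction is computed by multiplying the field's FFT by ik and transforming back.

```python
import numpy as np

def fourier_derivative(f, L):
    # Spectral derivative of a periodic field sampled on n points over [0, L):
    # multiply the discrete Fourier coefficients by i*k, then invert the FFT.
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Check against an analytic derivative: d/dx sin(3x) = 3 cos(3x).
L, n = 2.0 * np.pi, 64
x = np.arange(n) * L / n
err = np.abs(fourier_derivative(np.sin(3 * x), L) - 3 * np.cos(3 * x)).max()
print(err)  # ~1e-13: spectral accuracy for smooth periodic fields
```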
Abstract:
Identifiability of the so-called ω-slice algorithm is proven for ARMA linear systems. Although proofs were developed in the past for the simpler cases of MA and AR models, they were not extendible to general exponential linear systems. The results presented in this paper demonstrate unique features of the ω-slice method: unbiasedness and consistency when the order is overdetermined, regardless of the IIR or FIR nature of the underlying system, and numerical robustness.
Abstract:
The aim of this work was to determine how the procurement and control of materials at a frame-element factory are organized in the current situation. The study sought to identify bottlenecks limiting the material process and to find development measures for the problem areas from a process-thinking perspective. The focus was the company's operative material process, from the ordering of items to warehousing. A qualitative research method was used, and the data for the empirical part were obtained through interviews and from the quality documentation. The company's current state was modelled with process charts, and the information and material flows of the process as well as the most important functions in the material chain were identified. Based on the process analysis and the interviews, development proposals were defined to improve the performance of the process. According to the current-state analysis, the biggest problems in the material process relate to managing the timing of orders, the impact of changes on the process, and the lack of clear responsibilities and overall control. The problems stem mainly from the project-like nature of the construction industry. Another development target was more efficient information management, especially the automation of process steps using information systems. Solutions were sought through process thinking, which proved to be a suitable method for developing the operation. The study produced development proposals, on the basis of which a new operating model for materials control was formed. The most important element of the model is the use of advance information to support order planning. Preliminary material quantities are also forwarded as advance information to suppliers, who can then better plan their own production capacity. Order planning proceeds with increasing precision, and the final material quantity and the date of need are communicated with the call-off. The operating model also includes developing the receiving and warehousing of materials and managing changes by making better use of the information system. The most critical issues in the material process will be information management and the related questions of responsibility.
Abstract:
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_{n}, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_{n}, not just its first moment. This is done by computing the likelihood of Z_{n} and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_{n}, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_{n} will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
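A toy sketch of a simulated indirect likelihood estimator (hypothetical location model and statistic, not the paper's auction or DSGE applications): the sampling distribution of Z_n is simulated under each candidate parameter, its density is estimated with a kernel, and the density at the observed statistic is maximized.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 50  # sample size

def statistic(data):
    return data.mean()  # toy Z_n: the sample mean

# "Observed" data from a hypothetical true parameter theta = 0.7.
z_obs = statistic(rng.normal(0.7, 1.0, size=n))

# Common random numbers: pre-draw simulated statistics once at theta = 0;
# in this location model the statistic under theta is just a shift.
base = rng.normal(0.0, 1.0, size=(2000, n)).mean(axis=1)

def neg_indirect_loglik(theta):
    # Kernel-density estimate of the sampling distribution of Z_n under
    # theta, evaluated at the observed statistic.
    return -np.log(gaussian_kde(base + theta)(z_obs)[0])

theta_mil = minimize_scalar(neg_indirect_loglik, bounds=(-2.0, 3.0),
                            method="bounded").x
print(theta_mil)  # close to 0.7
```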
Abstract:
The topic of this thesis is the simulation of a combination of several control and data assimilation methods, meant to be used for controlling the quality of paper in a paper machine. Paper making is a very complex process and the information obtained from the web is sparse: a paper web scanner can only measure a zigzag path on the web. An assimilation method is needed to produce estimates of the Machine Direction (MD) and Cross Direction (CD) profiles of the web, and quality control is based on these estimates. There is an increasing need for intelligent methods to assist in data assimilation. The target of this thesis is to study how such intelligent assimilation methods affect paper web quality. This work is based on a paper web simulator, which has been developed in the TEKES-funded MASI NoTes project. The simulator is a valuable tool for comparing different assimilation methods. The thesis contains a comparison of four such methods: a first-order Bayesian model estimator, a higher-order Bayesian estimator based on an ARMA model, a Fourier-transform-based Kalman filter estimator and a simple block estimator. The last one can be considered close to current operational methods. Of these methods, the Bayesian, ARMA and Kalman estimators all seem to have advantages over the commercial one, with the Kalman and ARMA estimators showing the best overall performance.
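A minimal sketch of the Kalman-filtering idea in this setting (illustrative Python with a diagonal covariance and made-up noise levels, not the MASI NoTes simulator): each scan updates only the CD bin that the zigzag path currently crosses.

```python
import numpy as np

# Track a cross-direction (CD) profile from a scanner that measures only
# one CD bin per time step (the zigzag path across the moving web).
n_bins, n_steps = 20, 400
rng = np.random.default_rng(2)
true_profile = np.sin(np.linspace(0, np.pi, n_bins))  # hypothetical web

x = np.zeros(n_bins)        # profile estimate
P = np.ones(n_bins)         # per-bin estimate variance (diagonal model)
q, r = 1e-4, 0.05**2        # process and measurement noise variances

for t in range(n_steps):
    j = t % n_bins          # bin hit by the zigzag scan at step t
    P += q                  # predict: the profile drifts slowly everywhere
    z = true_profile[j] + rng.normal(0.0, np.sqrt(r))
    K = P[j] / (P[j] + r)   # scalar Kalman gain for the measured bin
    x[j] += K * (z - x[j])  # update only where we measured
    P[j] *= (1 - K)

print(np.abs(x - true_profile).max())  # small after repeated scans
```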
Abstract:
Methane-rich landfill gas is generated when biodegradable organic wastes disposed of in landfills decompose under anaerobic conditions. Methane is a significant greenhouse gas, and landfills are its major source in Finland. Methane production in a landfill depends on many factors, such as the composition of the waste and landfill conditions, and it can vary considerably both temporally and spatially. Methane generation from waste can be estimated with various models. In this thesis three spreadsheet applications, a reaction equation and a triangular model for estimating gas generation are introduced. The spreadsheet models are the IPCC Waste Model (2006), Metaanilaskentamalli by Jouko Petäjä of the Finnish Environment Institute, and LandGEM (3.02) of the U.S. Environmental Protection Agency. All of these are based on the first order decay (FOD) method. Gas recovery methods and gas emission measurements were also examined. Vertical wells and horizontal trenches are the most commonly used gas collection systems. Among emission measurement techniques, the chamber method, the tracer method, soil core and isotope measurements, micrometeorological mass-balance and eddy covariance methods, and gas-measuring FID technology were discussed. Methane production at the Ämmässuo landfill of HSY Helsinki Region Environmental Services Authority was estimated with the methane generation models and the results were compared with the volumes of collected gas. All spreadsheet models underestimated the methane generation at some point. LandGEM with default parameters and Metaanilaskentamalli with modified parameters corresponded best with the gas recovery numbers. One reason for the differences between estimated and collected volumes could be that the parameter values for degradable organic carbon (DOC) and the fraction of decomposable degradable organic carbon (DOCf) do not represent the real values well enough. Notable uncertainty is associated with the modelling results and model parameters. However, no simple explanation for the discovered differences can be given within this thesis.
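A minimal sketch of the first order decay idea shared by these models (illustrative Python with made-up parameter values, not those of the IPCC Waste Model, Metaanilaskentamalli or LandGEM): waste landfilled in year i contributes methane that decays exponentially with age.

```python
import numpy as np

def fod_methane(deposits, k=0.05, L0=100.0, years=50):
    # First order decay (FOD): mass m (Mg) landfilled in year i generates
    # k * L0 * m * exp(-k * (t - i)) cubic metres of methane in year t,
    # with decay rate k (1/yr) and methane potential L0 (m^3 CH4 / Mg).
    q = np.zeros(years)
    t = np.arange(years)
    for i, m in enumerate(deposits):
        age = t[i:] - i
        q[i:] += k * L0 * m * np.exp(-k * age)
    return q  # annual methane generation, m^3/yr

# Hypothetical filling history: 20 years of equal annual deposits.
annual = fod_methane(deposits=[10_000] * 20)
print(annual.max(), annual[-1])  # peak at closure, then exponential decline
```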
Abstract:
The absolute nodal coordinate formulation was originally developed for the analysis of structures undergoing large rotations and deformations. This dissertation proposes several enhancements to finite beam and plate elements based on the absolute nodal coordinate formulation. The main scientific contribution of this thesis lies in the development of elements based on the absolute nodal coordinate formulation that do not suffer from commonly known numerical locking phenomena. These elements can be used in the future in a number of practical applications, for example, the analysis of biomechanical soft tissues. This study presents several higher-order Euler–Bernoulli beam elements, a simple method to alleviate Poisson's and transverse shear locking in gradient-deficient plate elements, and a nearly locking-free gradient-deficient plate element. The gradient-deficient plate elements based on the absolute nodal coordinate formulation developed in this dissertation exhibit most of the common numerical locking phenomena encountered with a continuum-mechanics-based description of the elastic energy. Thus, with these fairly straightforwardly formulated elements, which comprise only position and transverse direction gradient degrees of freedom, the pathologies of and remedies for the numerical locking phenomena can be presented in a clear and understandable manner. The analysis of the Euler–Bernoulli beam elements developed in this study shows that the choice of higher gradient degrees of freedom as nodal degrees of freedom leads to a smoother strain field, which improves the rate of convergence.
Abstract:
The aim of this bachelor's thesis is to identify the means by which an ETO (engineer-to-order) company can develop its product and production towards mass customization. In addition, the thesis examines which factors affect the placement of the customer order decoupling point in the transition to mass customization. The work was carried out as a literature review; the information and results presented are based on the literature of the field and on published articles. On the basis of the work, it can be concluded that the best means for an ETO company to pursue mass customization are to develop production and products so that modularization and component standardization can be exploited, and to reduce the time spent on product design by automating the design work or by using standard designs. When an ETO company moves towards mass customization, the placement of the customer order decoupling point must take into account the scope of production and design coupled with the customer's requirements.
Abstract:
This paper criticizes the conventional theory of choice for being grounded on a minimal set of rationality axioms. We claim that this theory does not take due account of the fact that agents are driven by motives other than the pursuit of material self-interest. Our point of departure is a logic of commitments and planned action, which helps us to identify some puzzles in the conventional theory of choice. As a way out, we discuss the Kantian perspective and the notions of metapreference and metaranking. We then build a model of choice which points to the possibility of a systematic treatment of higher-order preferences and incommensurable objectives.
Abstract:
Interior of Chapman College Chapel, Orange, California. The wooden-shingled church, constructed in 1909 for the congregation of Trinity Episcopal Church, is located on the northeast corner of East Maple Avenue and North Grand Street. Chapman College (now Chapman University) purchased the church for its chapel when the congregation moved to a new church on Canal Street.