966 results for Two-point boundary value problems
Abstract:
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution, whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.
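As an illustration of the partnering idea only (not the paper's implementation), the sketch below estimates an agent's sub-fitness by repeatedly pairing it with randomly chosen partners from the other sub-populations; the genome representation and `fitness_fn` are assumptions.

```python
# Illustrative sketch of random partnering for (sub-)fitness evaluation;
# the genome representation and fitness_fn are placeholders, not the paper's.
import random

def evaluate_with_random_partners(agent, other_populations, fitness_fn, n_samples=5):
    """Average the fitness of `agent` over several randomly assembled partnerships."""
    scores = []
    for _ in range(n_samples):
        partners = [random.choice(pop) for pop in other_populations]  # one partner per sub-population
        full_solution = [agent] + partners                            # assemble a complete candidate
        scores.append(fitness_fn(full_solution))
    return sum(scores) / len(scores)
```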
Abstract:
The purpose of this bachelor's thesis is to examine the current state of supplier collaboration at a Finnish construction company, the opportunities and challenges involved in intensifying that collaboration, and the segmentation of the supplier base. The work aims to give an overall picture of the state of the company's collaborative relationships and to form views on possible problem areas. In doing so, it also seeks to give the company an opportunity to address those aspects of its supplier relationships that are not yet at the desired level. The results of the study are based on a theoretical part drawn from earlier research and on an empirical part collected through interviews. The interviews were conducted in a semi-structured format and the material was analysed using thematic analysis. The study showed that, by developing supplier relationships with the right kinds of suppliers, the company can achieve much-needed competitive advantage. Pursuing these benefits is not without problems, however, in the project-based construction industry, where the constant change of projects makes it challenging to develop supplier relationships into continuous ones. The case company already seeks to exploit the benefits arising from supplier collaboration, but much remains to be done. At present, price still has too great an influence on supplier selection, there is no established system for developing collaboration, and the importance of collaborative relationships should be emphasised more strongly throughout the organisation. The company's personnel are aware of the challenges involved in developing collaborative relationships, but the general opinion is nevertheless that developing supplier relationships is worthwhile despite the difficulty. In the future, the company should pursue ever more strongly the competitive advantages attainable by developing collaborative relationships, because it has the competence and resources to do so. Before the development process, however, management should pay attention to the differing opinions that emerged in the study, for example concerning the influence of personal relationships and the level of understanding across the organisation, so that the benefits of supplier collaboration can be realised in the desired way.
Abstract:
In a global society, all educational sectors need to recognise internationalism as a core, foundational principle. Whilst most educational sectors are taking up that challenge, vocational education and training (VET) is still being pulled towards the national agenda in terms of its structures and systems, and the policies driving it, disadvantaging those who graduate from VET, those who teach in it, and the businesses and countries that connect with it. This paper poses questions about the future of internationalisation in the sector. It examines whether there is a way to create a VET system that meets its primary point of value, to produce skilled workers for the local labour market, while still benefitting those graduates by providing international skills and knowledge, gained from VET institutions that are international in their outlook. The paper examines some of the key barriers created by systems and structures in VET to internationalisation and suggests that the efforts which have been made to address the problem have had limited success. It suggests that only a model which gives freedom to those with a direct vested interest, students, teachers, trainers and employers, to pursue international co-operation and liaison will have the opportunity to succeed. (DIPF/Orig.)
A class of domain decomposition preconditioners for hp-discontinuous Galerkin finite element methods
Abstract:
In this article we address the question of efficiently solving the algebraic linear system of equations arising from the discretization of a symmetric, elliptic boundary value problem using hp-version discontinuous Galerkin finite element methods. In particular, we introduce a class of domain decomposition preconditioners based on the Schwarz framework, and prove bounds on the condition number of the resulting iteration operators. Numerical results confirming the theoretical estimates are also presented.
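For orientation, a minimal sketch of how a two-level additive Schwarz preconditioner of this general kind is applied is given below; the subdomain restriction operators `restrictions` and the coarse restriction `R0` are assumed to be supplied by the DG discretisation, and this is not the paper's specific construction.

```python
# Minimal sketch of applying a two-level additive Schwarz preconditioner:
# B^{-1} r = sum_i R_i^T (R_i A R_i^T)^{-1} R_i r + R_0^T (R_0 A R_0^T)^{-1} R_0 r.
# The restriction matrices are assumed given; dense solves are used for brevity.
import numpy as np

def additive_schwarz_apply(r, A, restrictions, R0):
    z = np.zeros_like(r, dtype=float)
    for Ri in list(restrictions) + [R0]:
        Ai = Ri @ A @ Ri.T                        # local (or coarse) subproblem matrix
        z += Ri.T @ np.linalg.solve(Ai, Ri @ r)   # solve on the subdomain and prolong back
    return z
```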
Abstract:
Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the theory accuracy. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.
Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes it easy to calculate interface curvature accurately. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
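For context only, the sketch below shows the standard level set reinitialization PDE that the proposed relaxation approach is meant to replace near contact lines; it is a generic 1D illustration, not the thesis's scheme, and a production solver would use upwind or WENO gradients rather than central differences.

```python
# Standard reinitialization d(phi)/d(tau) = sign(phi0) * (1 - |grad phi|), 1D sketch.
# np.gradient (central differences) is used for brevity; production codes use upwinding.
import numpy as np

def reinitialize_1d(phi, dx, n_steps=50, dtau=None):
    dtau = dtau if dtau is not None else 0.5 * dx
    sign0 = phi / np.sqrt(phi**2 + dx**2)          # smoothed sign of the initial level set field
    for _ in range(n_steps):
        grad = np.gradient(phi, dx)
        phi = phi + dtau * sign0 * (1.0 - np.abs(grad))
    return phi
```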
To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). The DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed which relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear (VS), are proposed to represent the missing microscale physics on a coarse mesh.
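For reference, the standard form of the Navier-slip wall condition mentioned above can be written as follows (generic notation, not necessarily the thesis's symbols):

```latex
% Navier-slip wall condition: tangential velocity proportional to the wall-normal
% shear of the tangential velocity, with slip length \lambda; the contact angle is fixed.
u_\tau\big|_{\mathrm{wall}} = \lambda\,\frac{\partial u_\tau}{\partial n}\bigg|_{\mathrm{wall}},
\qquad \theta\big|_{\mathrm{contact\ line}} = \theta_s .
```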
All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.
Abstract:
Background: Umbilical arterial blood gas (UABG) analysis is more objective than other methods for predicting neonatal outcome. Acidemic neonates may be at risk of an unfavorable outcome after birth, but not all neonates with an abnormal arterial blood gas (ABG) analysis have a poor outcome. Objectives: This study was carried out to determine the short term outcome of neonates born with an abnormal ABG. Patients and Methods: In a prospective cohort study, 120 high risk mother-neonate pairs were enrolled and a UABG was taken immediately after birth. All neonates with an umbilical cord pH less than 7.2 were considered the case group, and those with a pH of more than 7.2 the controls. Outcomes such as the need for resuscitation, admission to newborn services and/or the NICU, seizure occurrence, hypoxic ischemic encephalopathy (HIE), delayed initiation of oral feeding and length of hospital stay were recorded and compared between the two groups. A P value less than 0.05 was considered significant. Results: The comparison of short term outcomes between the abnormal and normal ABG groups was as follows: need for advanced resuscitation 4 vs. 0 (P = 0.001), NICU admission 16 vs. 4 (P = 0.001), convulsion 2 vs. 0 (P = 0.496), HIE 17 vs. 4 (P = 0.002), delay in starting oral feeding 16 vs. 4 (P = 0.001), mean hospital stay 4 vs. 3 days (P = 0.001). None of the neonates in the study groups died. Conclusions: An umbilical cord pH less than 7.2 immediately after birth can be used as a prognostic factor for unfavorable short term outcome in newborns.
Abstract:
Background: Hyperphenylalaninemia (HPA) and phenylketonuria (PKU) are metabolic errors caused by a deficiency of the phenylalanine hydroxylase enzyme, which results in an increased level of phenylalanine. This increase is toxic to the growing brain. Objectives: The purpose of this study was to compare the intellectual and developmental status of HPA and PKU children identified in the national screening program with that of the normal population. Patients and Methods: In a historical cohort study, 41 PKU patients who met the inclusion criteria and 41 healthy children were evaluated. The Wechsler Preschool and Primary Scale of Intelligence, 3rd edition (WPPSI-III) was used to assess the intellectual status of children 4 years and older, and the Ages and Stages Questionnaire (ASQ) was used to assess the developmental status of children 5 years and younger. Results: In the comparison of intellectual tests, the two groups showed a significant difference in the Wechsler performance intelligence score and some performance subscales (P-value < 0.01). In the comparison of developmental status, no significant difference was observed between the two groups (P-value > 0.05). Conclusions: Even with early diagnosis and treatment of PKU patients, these children show some intellectual deficits compared to normal children. This study emphasizes the necessity of screening the intellectual and developmental status of PKU patients so that effective medical or educational measures can be taken in case of deficits.
Abstract:
The purpose of this study was to analyze and compare the technical performance profile of the four-time Costa Rican Senior Basketball League championship team. A total of 142 games were recorded throughout the 2007, 2008 and 2009 seasons. The performance indicators selected were: two- and three-point shots (converted, missed, effectiveness rates), free throws (converted, missed, effectiveness rates), points, offensive and defensive rebounds, fouls, turnovers, assists and ball steals. The information was described based on absolute and relative frequency values. Data were compared by season and by playing period using the following non-parametric techniques: U-test, Friedman test and Chi-square. In all cases, SPSS version 15.0 was used with a significance level of p ≤ 0.05. Results showed a better technical performance profile in the 2008 season, characterized by better percentages of two-point shots and free throws, fewer turnovers, and more ball steals and assists. In relation to the playing period, the team showed a better technical performance profile during the second half of the matches. In general, the effectiveness rate of two-point shots and free throws was above 60% in both playing periods, while the three-point shot percentage ranged between 26.4% and 29.2%. In conclusion, the team showed a technical performance profile similar to that reported in the literature, as well as clear evidence of the importance of recording and following up on technical performance indicators in basketball.
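The comparisons described above were run in SPSS; purely as an illustration of the same non-parametric tests, the sketch below applies a Mann-Whitney U test and a Friedman test in Python with placeholder per-game values that are not the study's data.

```python
# Illustrative non-parametric comparisons: Mann-Whitney U between two seasons and
# Friedman across three seasons. The arrays are placeholder values, not the study's.
from scipy import stats

season_2008 = [28, 31, 26, 33, 30]   # e.g. two-point shots converted per game (hypothetical)
season_2009 = [24, 27, 25, 29, 26]
u_stat, p_u = stats.mannwhitneyu(season_2008, season_2009, alternative="two-sided")

season_2007 = [22, 25, 27, 24, 26]
chi2_f, p_f = stats.friedmanchisquare(season_2007, season_2008, season_2009)
print(f"U test p={p_u:.3f}, Friedman p={p_f:.3f}; significance threshold p <= 0.05")
```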
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach for dealing with two- and multi-stage optimisation problems under uncertainty is to use scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports. Each of these realisations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to follow the progress of the algorithm more closely. Numerical experiments on examples of multistage linear stochastic problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review the existing techniques and propose the idea of replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to alleviate some of the numerical and theoretical difficulties of the progressive hedging method.
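As a schematic illustration of the progressive hedging iteration discussed above (not the thesis's implementation), the sketch below assumes a scalar first-stage decision and a user-supplied `solve_subproblem(s, w, x_bar, rho)` that minimises the scenario objective augmented by the multiplier and proximal terms.

```python
# Schematic progressive hedging loop for a two-stage problem with a scalar
# first-stage decision; solve_subproblem is a placeholder for the scenario solve
# min_x f_s(x) + w*x + (rho/2)*(x - x_bar)^2.
def progressive_hedging(scenarios, probs, solve_subproblem, rho=1.0, n_iter=100, tol=1e-6):
    w = {s: 0.0 for s in scenarios}                                   # scenario multipliers
    x = {s: solve_subproblem(s, 0.0, 0.0, 0.0) for s in scenarios}    # initial relaxed solves
    for _ in range(n_iter):
        x_bar = sum(p * x[s] for s, p in zip(scenarios, probs))       # implementable (averaged) decision
        if max(abs(x[s] - x_bar) for s in scenarios) < tol:           # non-anticipativity gap
            break
        for s in scenarios:
            w[s] += rho * (x[s] - x_bar)                              # multiplier update
            x[s] = solve_subproblem(s, w[s], x_bar, rho)              # augmented scenario solve
    return x_bar
```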
Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, built by assembling blocks made of a 3D-printed plastic skin filled with mortar. The groin vault was chosen because of the large presence of this vulnerable roofing system in the historical heritage. Experimental tests on the shaking table are carried out to explore the vault response for two support boundary conditions, involving four lateral confinement modes. The processing of the marker displacement data has made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. A numerical evaluation then follows, to provide the orders of magnitude of the displacements associated with these mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the final objective is the definition of a critical elongation between two diagonal bricks and, consequently, of a diagonal portion. This study aims to continue the previous work and to take another step forward in the research on ground-motion effects on masonry structures.
Abstract:
In this Thesis, we present a series of works that encompass the fundamental steps of cosmological analyses based on galaxy clusters, spanning from mass calibration to deriving cosmological constraints through counts and clustering. Firstly, we focus on the 3D two-point correlation function (2PCF) of the galaxy cluster sample by Planck Collaboration XXVII (2016). The masses of these clusters are expected to be underestimated, as they are derived from a scaling relation calibrated through X-ray observations. We derived a mass bias which disagrees with simulation predictions, consistent with that derived by Planck Collaboration VI (2020). Furthermore, in this Thesis we analyse the cluster counts and the 2PCF of the photometric galaxy cluster sample developed by Maturi et al. (2019), based on the third data release of KiDS (KiDS-DR3, de Jong et al. 2017). We derived constraints on fundamental cosmological parameters which are consistent and competitive, in terms of uncertainties, with other state-of-the-art cosmological analyses. Then, we introduce a novel approach to establish galaxy colour-redshift relations for cluster weak-lensing analyses, regardless of the specific photometric bands in use. This method optimises the selection completeness of cluster background galaxies while maintaining a defined purity threshold. Based on the galaxy sample by Bisigello et al. (2020), we calibrated two colour selections, one relying on the ground-based griz bands, and the other including the griz and Euclid YJH bands. In addition, we present the preliminary work on the weak-lensing mass calibration of the clusters detected by Maturi et al. (in prep.) in the fourth data release of KiDS (KiDS-1000, Kuijken et al. 2019). This mass calibration will enable cosmological analyses based on cluster counts and clustering, from which we expect remarkable improvements in the results compared to those derived from KiDS-DR3.
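For illustration only, the sketch below estimates a two-point correlation function with the standard Landy-Szalay estimator, xi = (DD - 2DR + RR) / RR, from data and random catalogues; it is a generic implementation, not the pipeline used in the Thesis.

```python
# Generic Landy-Szalay 2PCF estimator from pair counts in separation bins;
# `data` and `randoms` are (N, 3) position arrays, and r_bins must start above
# zero so that self-pairs are excluded by the first bin edge.
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, randoms, r_bins):
    def pair_counts(a, b):
        cum = cKDTree(a).count_neighbors(cKDTree(b), r_bins)  # cumulative pair counts within each radius
        return np.diff(cum).astype(float)                     # counts per separation bin
    n_d, n_r = len(data), len(randoms)
    dd = pair_counts(data, data) / (n_d * (n_d - 1))          # normalise by the number of ordered pairs
    dr = pair_counts(data, randoms) / (n_d * n_r)
    rr = pair_counts(randoms, randoms) / (n_r * (n_r - 1))
    return (dd - 2.0 * dr + rr) / rr
```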
Abstract:
The investigations of the large-scale structure of our Universe provide us with extremely powerful tools to shed light on some of the open issues of the currently accepted Standard Cosmological Model. Until recently, constraining the cosmological parameters from cosmic voids was almost infeasible, because the amount of data in void catalogues was not enough to ensure statistically relevant samples. The increasingly wide and deep fields of present and upcoming surveys have turned cosmic voids into promising probes, despite the fact that we do not yet have a unique and generally accepted definition for them. In this Thesis we address the two-point statistics of cosmic voids, in the very first attempt to model its features for cosmological purposes. To this end, we implement an improved version of the void power spectrum presented by Chan et al. (2014). We have been able to build an exceptionally robust method to tackle the void clustering statistics, by proposing a functional form that is entirely based on first principles. We extract our data from a suite of high-resolution N-body simulations in both the LCDM and alternative modified gravity scenarios. To accurately compare the data to the theory, we calibrate the model by accounting for a free parameter in the void radius that enters the theory of void exclusion. We then constrain the cosmological parameters by means of a Bayesian analysis. As long as the modified gravity effects are limited, our model is a reliable method to constrain the main LCDM parameters. By contrast, it cannot be used to model void clustering in the presence of stronger modifications of gravity. In future works, we will further develop our analysis of the void clustering statistics, by testing our model on large and high-resolution simulations and on real data, also addressing void clustering in the halo distribution. Finally, we also plan to combine these constraints with those of other cosmological probes.
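As a sketch of the Bayesian calibration step only (not the Thesis's actual likelihood or sampler), the snippet below fits a single free parameter, such as the void-exclusion radius factor mentioned above, with a flat prior, a Gaussian likelihood and a random-walk Metropolis sampler; `pk_model`, the prior bounds and the data arrays are assumptions.

```python
# Random-walk Metropolis sampling of one free parameter theta under a Gaussian
# likelihood between a measured power spectrum and a model pk_model(k, theta).
# The prior bounds, step size and pk_model are placeholders.
import numpy as np

def log_posterior(theta, k, pk_data, pk_err, pk_model):
    if not (0.5 < theta < 2.0):                      # flat prior on the free parameter
        return -np.inf
    resid = (pk_data - pk_model(k, theta)) / pk_err
    return -0.5 * np.sum(resid**2)

def metropolis(log_post, theta0, step=0.05, n_samples=5000, **data):
    rng = np.random.default_rng(0)
    theta, lp = theta0, log_post(theta0, **data)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop, **data)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)
```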
Abstract:
In this paper the two main drawbacks of heat balance integral methods are examined. Firstly, we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest order term will lead either to the same or, more often, to greatly improved accuracy compared with standard methods. Secondly, we examine thermal problems with a time-dependent boundary condition. In doing so we develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value; the new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new CIM, than the polynomial profile. The analysis primarily focuses on a specified constant boundary temperature and is then extended to constant flux, Newton cooling and time-dependent boundary conditions.
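For reference, the heat balance integral step that these methods share can be written as below (standard textbook form with temperature measured relative to the far field; this is generic notation, and the paper's specific logarithmic profile is not reproduced here):

```latex
% Heat balance integral method: assume a profile for u = T - T_infinity over the
% heat penetration depth delta(t), with u(delta,t) = u_x(delta,t) = 0, and integrate
% the heat equation u_t = alpha u_xx across 0 <= x <= delta(t).
\begin{align}
  \frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{\delta(t)} u\,\mathrm{d}x
    &= \alpha\Bigl[u_x\Bigr]_{x=0}^{x=\delta(t)}
     = -\,\alpha\,u_x(0,t),\\
  u(x,t) &\approx u(0,t)\left(1-\frac{x}{\delta(t)}\right)^{n}
  \quad\text{(standard polynomial profile; the paper replaces this with a logarithmic form).}
\end{align}
```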
Abstract:
This paper addresses the numerical solution of random crack propagation problems using a coupling of the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using the BEM, due to its mesh reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem nonlinearity. The computational cost of direct coupling was shown to be a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered.
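Purely as an illustration of the direct-coupling idea (an implicit limit state whose gradients are taken by finite differences of the numerical response, iterated with a HLRF-type update in standard normal space), the sketch below uses a placeholder `g` in place of the BEM crack analysis; it is not the paper's implementation.

```python
# HLRF-type FORM iteration with an implicit limit state g(u) evaluated numerically
# (here a placeholder for a BEM crack analysis) and finite-difference gradients.
import numpy as np

def form_direct_coupling(g, u0, tol=1e-4, max_iter=50, h=1e-3):
    """Return the design point and reliability index beta in standard normal space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
        u_new = ((grad @ u - gu) / (grad @ grad)) * grad   # HLRF update
        if np.linalg.norm(u_new - u) < tol:
            return u_new, np.linalg.norm(u_new)
        u = u_new
    return u, np.linalg.norm(u)
```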