899 results for Error correction methods


Relevance: 30.00%

Abstract:

The focus of this work is to develop and employ numerical methods that characterize granular microstructures and model the dynamic fragmentation of brittle materials and the dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
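
To make the fabric tensor concrete, the sketch below computes the standard second-order contact fabric tensor, the average of the outer products of unit contact normals; the array-based representation and example data are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def fabric_tensor(contact_normals):
    """Second-order fabric tensor from inter-particle contact normals.

    contact_normals: (Nc, 3) array; each row is a vector along a
    contact. The fabric tensor is the average of the outer products
    n (x) n over all contacts, so its eigenvalues summarize the
    directional distribution of contacts.
    """
    n = np.asarray(contact_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # ensure unit length
    return n.T @ n / len(n)

# Example: an isotropic assembly gives eigenvalues near 1/3 each;
# anisotropy (e.g. induced by calendering) shows up as unequal eigenvalues.
rng = np.random.default_rng(0)
normals = rng.normal(size=(10_000, 3))
F = fabric_tensor(normals)
print(np.linalg.eigvalsh(F))  # approximately [1/3, 1/3, 1/3]
```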

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.
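
As a hedged illustration of the randomized-property idea, one simple way to introduce such variations is to perturb the nominal cohesive strength of each interface by a small random factor; the uniform distribution and the names below are assumptions for illustration, not the thesis's exact scheme.

```python
import numpy as np

def perturbed_strengths(n_interfaces, sigma_c=1.0, amplitude=1e-2, seed=0):
    """Nominal cohesive strength sigma_c perturbed by a uniform random
    variation of relative size `amplitude`, one value per cohesive
    interface in a 1D mesh, to break the symmetry of uniform loading."""
    rng = np.random.default_rng(seed)
    return sigma_c * (1.0 + amplitude * rng.uniform(-1.0, 1.0, n_interfaces))
```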

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
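
A minimal sketch of the level-set-to-damage mapping that such a model uses, assuming a simple linear ramp that saturates at the length scale; the actual Thick Level-Set damage profile may be a smoother function.

```python
import numpy as np

def tls_damage(phi, l_c):
    """Damage as a prescribed function of the level-set value phi
    (distance behind the damage front), saturating at 1 once phi
    reaches the length scale l_c, which bounds the damage gradient.
    A linear ramp is one simple choice; smoother profiles are also used."""
    return np.clip(phi / l_c, 0.0, 1.0)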

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that using an asymmetric damage model to represent tensile damage is important for producing the expected results in brittle fracture problems.
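
One common way to realize such an asymmetric (tension-only) damage model, shown here purely as an illustrative sketch rather than the thesis's formulation, is a spectral split of the strain so that only tensile principal strains contribute to the damage-driving energy:

```python
import numpy as np

def tensile_strain_energy(eps, lam, mu):
    """Tensile part of the strain energy via a spectral split of the
    3x3 strain tensor eps (Lame parameters lam, mu): only positive
    principal strains and positive volumetric strain drive damage,
    so compression leaves the material undamaged."""
    w, _ = np.linalg.eigh(eps)                 # principal strains
    tr_pos = max(np.trace(eps), 0.0)           # tensile volumetric part
    return 0.5 * lam * tr_pos**2 + mu * np.sum(np.maximum(w, 0.0)**2)
```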

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance: 30.00%

Abstract:

The aim of this thesis is to identify the relationship between subjective well-being and economic insecurity for public and private sector workers in Ireland using the European Social Survey 2010-2012. Life satisfaction and job satisfaction are the indicators used to measure subjective well-being. Economic insecurity is approximated by regional unemployment rates and self-perceived job insecurity. Potential sample selection bias and endogeneity bias are accounted for. It is traditionally believed that public sector workers are relatively more protected against insecurity due to the very institution of public sector employment, which is characterised by stricter dismissal practices (Luechinger et al., 2010a) and less volatile employment (Freeman, 1987), so that workers are less likely to be affected by business cycle downturns (Clark and Postel-Vinay, 2009). It is found in the literature that economic insecurity depresses the well-being of public sector workers to a lesser degree than that of private sector workers (Luechinger et al., 2010a; Artz and Kaya, 2014). These studies provide the rationale for this thesis in testing for similar relationships in an Irish context.

Sample selection bias arises when selection into a particular category is not random (Heckman, 1979); an example is non-random selection into public sector employment based on personal characteristics (Heckman, 1979; Luechinger et al., 2010b). If selection into public sector employment is not corrected for, this can lead to biased and inconsistent estimators (Gujarati, 2009). Selection bias of public sector employment is corrected for by using a standard two-step Heckman probit-OLS estimation method. Following Luechinger et al. (2010b), the propensity of individuals to select into public sector employment is estimated by a binomial probit model with the inclusion of the additional regressor Irish citizenship. Job satisfaction is then estimated by Ordinary Least Squares (OLS) with the inclusion of a sample correction term, similar to Clark (1997).

Endogeneity arises when an independent variable included in the model is determined within the context of the model (Chenhall and Moers, 2007); the econometric definition states that an endogenous independent variable is one that is correlated with the error term (Wooldridge, 2010). Endogeneity is expected to be present due to a simultaneous relationship between job insecurity and job satisfaction, whereby both variables are jointly determined (Theodossiou and Vasileiou, 2007). Simultaneity, as an instigator of endogeneity, is corrected for using Instrumental Variables (IV) techniques. Limited Information and Full Information methods of estimating simultaneous equations models are assessed and compared.

The general results show that job insecurity depresses the subjective well-being of all workers in both the public and private sectors in Ireland, but the magnitude of this effect differs between sectors: the subjective well-being of private sector workers is more adversely affected by job insecurity than that of public sector workers. This is observed in basic ordered probit estimations of both a life satisfaction equation and a job satisfaction equation.
The marginal effects from the ordered probit estimation of a basic job satisfaction equation show that as job insecurity increases, the probability of reporting a 9 on a 10-point job satisfaction scale significantly decreases by 3.4% for the whole sample of workers, 2.8% for public sector workers and 4.0% for private sector workers. Artz and Kaya (2014) explain that, as a result of the many austerity policies implemented to reduce government expenditure during the economic recession, workers in the public sector may for the first time face worsening perceptions of job security, which can have significant implications for their well-being. This is observed in the marginal effects, where job insecurity negatively impacts the well-being of public sector workers in Ireland. However, in accordance with Luechinger et al. (2010a), the results show that private sector workers are more adversely impacted by economic insecurity than public sector workers. This suggests that in a time of high economic volatility, the institution of public sector employment held and was able to protect workers against some of the well-being consequences of rising insecurity. In estimating the relationship between subjective well-being and economic insecurity, advanced econometric issues arise. The results show that when selection bias is corrected for, any statistically significant relationship between job insecurity and job satisfaction disappears for public sector workers. Additionally, in order to correct for endogeneity bias, the simultaneous equations model for job satisfaction and job insecurity is estimated by Limited Information and Full Information methods. The results from two different estimators classified as Limited Information methods support the general findings of this research. Moreover, the magnitudes of the endogeneity-corrected estimates are twice as large as those not corrected for endogeneity bias, a result similarly found in Geishecker (2010, 2012). As part of the analysis of the effect of economic insecurity on subjective well-being, the effects of other socioeconomic and work-related variables are examined for public and private sector workers in Ireland.
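
For illustration, a minimal sketch of the two-step Heckman procedure described above, using statsmodels with hypothetical column names (the actual specification, controls, and exclusion restriction follow Luechinger et al. (2010b) and are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, y_col, select_col, x_cols, z_cols):
    """Step 1: probit selection equation (e.g. into public-sector
    employment), where z_cols should include an exclusion restriction
    such as citizenship. Step 2: OLS outcome equation on the selected
    subsample, augmented with the inverse Mills ratio."""
    Z = sm.add_constant(df[z_cols].astype(float))
    probit = sm.Probit(df[select_col], Z).fit(disp=False)
    xb = np.asarray(Z) @ probit.params            # linear predictor
    imr = norm.pdf(xb) / norm.cdf(xb)             # inverse Mills ratio

    sel = df[select_col] == 1
    X = sm.add_constant(df.loc[sel, x_cols].astype(float))
    X["imr"] = imr[sel.to_numpy()]
    ols = sm.OLS(df.loc[sel, y_col].astype(float), X).fit()
    # A significant coefficient on `imr` signals selection bias.
    return probit, ols
```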

Relevance: 30.00%

Abstract:

We are writing to comment on the work of Tamburini et al. (2003, doi:10.1029/2000PA000616). During the course of subsequent discussions between the authors and ourselves, it has become clear that the published sedimentary nitrogen isotopic values for Ocean Drilling Program (ODP) Site 724 are in error. Our reanalysis of sediment samples from the same intervals has revealed a significant offset from the original δ15N data, requiring a revised assessment of their initial interpretation. The purposes of this comment are to (1) address the origin of these errors; (2) outline a protocol for future validation of nitrogen isotopic analyses; and (3) provide revised interpretations of the sedimentary δ15N data in terms of the regional relative contributions of denitrification and nitrogen fixation and the mean state of the southwest monsoon.

Nitrogen isotopic values measured on late Quaternary sediments at Arabian Sea ODP Site 724 by Tamburini et al. (2003, doi:10.1029/2000PA000616) are inexplicably different from a number of published records of δ15N from very nearby on the Oman margin (Altabet et al., 1995, doi:10.1038/373506a0; 1999, doi:10.1029/1999PA900035; 2002, doi:10.1038/415159a; Higginson et al., 2004, doi:10.1016/j.gca.2004.03.015) and elsewhere in the Arabian Sea (Reichart et al., 1998, doi:10.1029/98PA02203). These data were generated using instrumentation (an elemental analyzer coupled with an isotope ratio mass spectrometer) and analytical methodology similar to those already published. Concerned by this clear discrepancy, we analyzed aliquots of sediment from the same depth intervals for nitrogen abundance and bulk sedimentary nitrogen isotopes. We have been unable to duplicate the values published by Tamburini et al. (2003, doi:10.1029/2000PA000616), even after analysis of multiple replicates and due consideration of natural sediment heterogeneities and post-recovery sample storage.

Relevance: 30.00%

Abstract:

Genes, which encode the biological functions of living organisms, form the basic molecular unit of heredity. To explain the diversity of species observable today, it is essential to understand how genes evolve. To do so, we must reconstruct the past by inferring their phylogeny, that is, a gene tree representing the evolutionary relationships of the coding regions of living organisms. Classical methods of phylogenetic inference were designed primarily to build species trees and rely only on DNA sequences. Genes, however, are rich in information, and reconstruction methods that exploit their specific properties are only beginning to appear. In particular, the history of a gene family in terms of duplications and losses, obtained by reconciling a gene tree with a species tree, can help us detect weaknesses in a tree and improve it. In this thesis, reconciliation is applied to the construction and correction of gene trees from three different angles: 1) We address the problem of resolving a non-binary gene tree. In particular, we present a linear-time algorithm that resolves a polytomy based on reconciliation. 2) We propose a new approach to gene tree correction using orthology and paralogy relations. Polynomial-time algorithms are presented for the following problems: correcting a gene tree so that it contains a given set of orthologs, and validating a set of partial orthology and paralogy relations. 3) We show how reconciliation can be used to "combine" several gene trees. More precisely, we study the problem of choosing a gene supertree according to its reconciliation cost.
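
As a compact illustration of the reconciliation machinery underlying all three contributions, the sketch below performs classical LCA reconciliation and counts duplications and losses; the nested-tuple tree representation is a simplifying assumption, not the thesis's data structure.

```python
# Trees are nested 2-tuples with species names at the leaves; gene-tree
# leaves are labelled by the species the gene was sampled from, and every
# such species is assumed to occur in the species tree.

def reconcile(gene_tree, species_tree):
    parent, depth = {}, {}

    def index(node, d=0, par=None):
        parent[node], depth[node] = par, d
        if isinstance(node, tuple):
            for child in node:
                index(child, d + 1, node)
    index(species_tree)

    def lca(a, b):
        while depth[a] > depth[b]: a = parent[a]
        while depth[b] > depth[a]: b = parent[b]
        while a != b: a, b = parent[a], parent[b]
        return a

    counts = {"dup": 0, "loss": 0}

    def walk(g):
        if not isinstance(g, tuple):
            return g                       # leaf maps to its species
        m1, m2 = walk(g[0]), walk(g[1])
        m = lca(m1, m2)
        dup = m == m1 or m == m2           # a child maps to the same node
        counts["dup"] += dup
        for mc in (m1, m2):                # losses along each child edge
            counts["loss"] += depth[mc] - depth[m] - (0 if dup else 1)
        return m

    walk(gene_tree)
    return counts

# Two gene copies in species A/C lineage plus one in B:
print(reconcile((("A", "C"), "B"), (("A", "B"), "C")))  # {'dup': 1, 'loss': 3}
```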

Relevance: 30.00%

Abstract:

A new method for evaluating the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions, which makes the process fast and inexpensive because no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is taken as the stagnation temperature and hence the average temperature inside the collector. This introduces a systematic error, which can be rectified through a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner wall temperature where it receives the least amount of solar radiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.
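
A hedged sketch of how such a correction factor might be applied; the names, the linear form, and the procedure are illustrative assumptions, not the institute's actual method:

```python
def mean_receiver_temperature(t_stagnation_c, t_ambient_c, f_corr):
    """Estimated mean temperature inside the collector, obtained by
    scaling the stagnation excess over ambient by a geometry-dependent
    correction factor f_corr (here taken from a simulation such as the
    COMSOL model described above)."""
    return t_ambient_c + f_corr * (t_stagnation_c - t_ambient_c)
```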

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

In Czech schools, two methods of teaching reading are used: the analytic-synthetic (conventional) method and the genetic method (created in the 1990s). They differ in their theoretical foundations and in their methodology. The aim of this paper is to describe these theoretical approaches and present the results of a study that tracked differences in the development of initial reading skills between the two methods. A total of 452 first-grade children (ages 6-8) were assessed with a battery of reading tests at the beginning and end of the first grade and at the beginning of the second grade; 350 pupils participated at all three time points. Based on the data analysis, the developmental dynamics of reading skills under both methods and the main differences in several aspects of reading ability (e.g. reading speed, reading technique, error rate in reading) are described. The main focus is on the development of reading comprehension. Results show that pupils instructed with the genetic approach scored significantly better on the reading comprehension tests used, especially in the first grade. Statistically significant differences also occurred between classes independently of the method used. Therefore, other factors such as the teacher's role and class composition are discussed.

Relevance: 30.00%

Abstract:

In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
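
Schematically, the dual-weighted-residual machinery behind such bounds can be summarised as follows (standard form for a linear problem and linear target functional, not the paper's precise statement):

```latex
% Schematic dual-weighted-residual (DWR) error representation for a
% linear problem a(u, v) = \ell(v) and a linear target functional J:
% z denotes the solution of the dual problem a(v, z) = J(v).
\[
  J(u) - J(u_h) \;=\; \ell(z - z_h) - a(u_h,\, z - z_h)
  \;=\; \sum_{K \in \mathcal{T}_h} \eta_K ,
\]
% where z_h is any discrete function (Galerkin orthogonality permits the
% subtraction), and localizing the residual over the elements K of the
% mesh yields the computable indicators \eta_K that drive the adaptive
% (isotropic or anisotropic) refinement.
```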

Relevance: 30.00%

Abstract:

Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in the past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered, for the correction of facial dysmorphology. A wealth of experience has been gained, and a substantial body of published literature is available, which has augmented our knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery; the most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.

Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for the prediction of facial soft tissue changes.

Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each patient's facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted, including:

• Comparison between voxel-based registration and surface registration for analysing changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, proved more reliable, as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.

• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements, representing a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks, digitised directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm, and it was therefore applied in this study. The results of this validation were published in a peer-refereed journal.

• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis: the conformed generic mesh represents the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed), so the accuracy of generic mesh conformation is essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.

Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal, upper lip and nostril regions; minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalised advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip and nostril regions.

In the Le Fort I cases, the correlation between the changes of the facial soft tissue and the skeletal surgical movements was assessed using PCA. A statistical method known as leave-one-out cross-validation was applied to the 30 cases that underwent the Le Fort I osteotomy procedure, to make efficient use of the data for the prediction algorithm. The prediction accuracy of soft tissue changes showed a mean error ranging from 0.0006 mm (±0.582) at the nose region to −0.0316 mm (±2.1996) at other facial regions.
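
As an illustration of the statistical pipeline described above, the following sketch combines PCA, a linear map from skeletal displacement to soft tissue change, and leave-one-out cross-validation; the variable names and model choices are assumptions, not the study's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_prediction_errors(skeletal, soft):
    """skeletal: (n_cases, k) surgical displacement features;
    soft: (n_cases, m) soft tissue displacements sampled on the
    conformed generic mesh. Returns the signed mean prediction
    error (mm) per held-out case."""
    errors = []
    for train, test in LeaveOneOut().split(skeletal):
        pca = PCA(n_components=min(10, len(train) - 1)).fit(soft[train])
        y = pca.transform(soft[train])             # soft tissue in PC space
        model = LinearRegression().fit(skeletal[train], y)
        pred = pca.inverse_transform(model.predict(skeletal[test]))
        errors.append(np.mean(pred - soft[test]))
    return np.array(errors)
```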

Relevance: 30.00%

Abstract:

Introduction: Oesophageal adenocarcinoma has increased dramatically in incidence over the past three decades, with a particularly high burden of disease at the gastro-oesophageal junction. Many cases occur in individuals without known gastro-oesophageal reflux disease and in the absence of Barrett's oesophagus, suggesting that mechanisms other than traditional reflux may be important. Distal squamous mucosa may be prone to acid damage even in the absence of traditional reflux through distal opening of the lower oesophageal sphincter: splaying of the distal segment of the sphincter allows acid ingress without traditional reflux. It has been suggested that the cardiac mucosa at the gastro-oesophageal junction, separating oesophageal squamous mucosa from the acid-secreting columnar mucosa of the stomach, may be an abnormal mucosa arising as a consequence of acid damage; by this theory, the cardiac mucosa is metaplastic and akin to ultra-short Barrett's oesophagus. Obesity is a known risk factor for adenocarcinoma at the gastro-oesophageal junction, and its rise has paralleled that of oesophageal cancer. Some of this excess risk undoubtedly operates through stress on the gastro-oesophageal junction and a predisposition to reflux. However, we sought to explore the impact of obesity on the gastro-oesophageal junction in healthy volunteers without reflux and, in particular, to determine the characteristics of the cardiac mucosa and the mechanisms of reflux in this group.

Methods: 61 healthy volunteers with normal and increased waist circumference were recruited. 15 were found to have a hiatus hernia during the study protocol and were analysed separately. Volunteers had comprehensive pathological, physiological and anatomical assessments of the gastro-oesophageal junction, including endoscopy with biopsies, MRI scanning before and after a standardised meal, prolonged recording of pH and manometry before and after a meal, and screening by fluoroscopy to identify the squamo-columnar junction. In the course of the early manometric assessments a potential error associated with the manometry system recordings was identified; we therefore also sought to document and address this on the benchtop and in vivo.

Key Findings:
1. In documenting the behaviour of the ManoScan system we described an immediate effect of temperature change on the pressure recorded by the sensors, a 'thermal effect', and an ongoing drift of the recorded pressure with time, 'baseline drift'. The thermal effect was well compensated within the standard operation of the system, but baseline drift was not addressed. Applying a linear correction to the recorded data substantially reduced the error associated with baseline drift.
2. In asymptomatic healthy volunteers there was lengthening of the cardiac mucosa in association with central obesity and age. Furthermore, the cardiac mucosa in healthy volunteers demonstrated an almost identical immunophenotype to non-IM Barrett's mucosa, which is considered to arise by metaplasia of oesophageal squamous mucosa. These findings support the hypothesis that the cardia is metaplastic in origin.
3. We demonstrated a plausible mechanism of damage to distal squamous mucosa in association with obesity. In those with a large waist circumference we observed increased ingress of acid within, but not across, the lower oesophageal sphincter: 'intrasphincteric reflux'.
4. The 15 healthy volunteers with a hiatus hernia were compared to 15 controls matched for age, gender and waist circumference. Those with a hiatus hernia had a longer cardiac mucosa and, although they did not have excess traditional reflux, they had excess distal acid exposure through short-segment acid reflux and intrasphincteric acid reflux.

Conclusions: These findings are likely to be relevant to adenocarcinoma of the gastro-oesophageal junction.
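
A minimal sketch of the linear drift correction described in finding 1, under the assumption that drift accumulates approximately linearly over the recording; the names and units are illustrative.

```python
import numpy as np

def correct_baseline_drift(pressures, end_offset_mmHg):
    """pressures: (n_samples,) pressures recorded by one sensor over the
    study; end_offset_mmHg: the residual reading at the end of the
    recording against a known zero reference. The offset is interpolated
    linearly back in time and subtracted from each sample."""
    drift = np.linspace(0.0, end_offset_mmHg, len(pressures))
    return pressures - drift
```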

Relevance: 30.00%

Abstract:

Abstract: Quantitative Methods (QM) is a compulsory course in the Social Science program in CEGEP. Many QM instructors assign a number of homework exercises to give students the opportunity to practice the statistical methods, which enhances their learning. However, traditional written exercises have two significant disadvantages. The first is that the feedback process is often very slow. The second disadvantage is that written exercises can generate a large amount of correcting for the instructor. WeBWorK is an open-source system that allows instructors to write exercises which students answer online. Although originally designed to write exercises for math and science students, WeBWorK programming allows for the creation of a variety of questions which can be used in the Quantitative Methods course. Because many statistical exercises generate objective and quantitative answers, the system is able to instantly assess students' responses and tell them whether they are right or wrong. This immediate feedback has been shown to be theoretically conducive to positive learning outcomes. In addition, the system can be set up to allow students to re-try the problem if they got it wrong. This has benefits both in terms of student motivation and reinforcing learning. Through the use of a quasi-experiment, this research project measured and analysed the effects of using WeBWorK exercises in the Quantitative Methods course at Vanier College. Three specific research questions were addressed. First, we looked at whether students who did the WeBWorK exercises got better grades than students who did written exercises. Second, we looked at whether students who completed more of the WeBWorK exercises got better grades than students who completed fewer of the WeBWorK exercises. Finally, we used a self-report survey to find out what students' perceptions and opinions were of the WeBWorK and the written exercises. For the first research question, a crossover design was used in order to compare whether the group that did WeBWorK problems during one unit would score significantly higher on that unit test than the other group that did the written problems. We found no significant difference in grades between students who did the WeBWorK exercises and students who did the written exercises. The second research question looked at whether students who completed more of the WeBWorK exercises would get significantly higher grades than students who completed fewer of the WeBWorK exercises. The linear relationship between the number of WeBWorK exercises completed and grades was positive in both groups; however, the correlation coefficients for these two variables showed no consistent pattern. Our third research question was investigated by using a survey to elicit students' perceptions and opinions regarding the WeBWorK and written exercises. Students reported no difference in the amount of effort put into completing each type of exercise. Students were also asked to rate each type of exercise along six dimensions, and a composite score was calculated. Overall, students gave a significantly higher score to the written exercises, and reported that the written exercises were better for understanding the basic statistical concepts and for learning the basic statistical methods. However, when presented with the choice of having only written or only WeBWorK exercises, slightly more students preferred or strongly preferred having only WeBWorK exercises.
The results of this research suggest that the advantages of using WeBWorK to teach Quantitative Methods are variable. The WeBWorK system offers immediate feedback, which often seems to motivate students to try again if they do not have the correct answer. However, this does not necessarily translate into better performance on the written tests and on the final exam. What has been learned is that the WeBWorK system can be used by interested instructors to enhance student learning in the Quantitative Methods course. Further research may examine more specifically how this system can be used more effectively.

Relevance: 30.00%

Abstract:

One of the challenges posed to biomedical engineers by researchers in neuroscience is brain-machine interaction. The nervous system communicates by interpreting electrochemical signals, and implantable circuits make decisions in order to interact with the biological environment. It is well known that Parkinson's disease is related to a deficit of dopamine (DA). Different methods have been employed to control dopamine concentration, such as magnetic or electrical stimulators or drugs. In this work the neurotransmitter concentration was controlled automatically, which is not done in current practice. To that end, four systems were designed and developed: deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), infusion pump control (IPC) for drug delivery, and fast-scan cyclic voltammetry (FSCV), i.e. sensing circuits which detect the varying concentrations of neurotransmitters such as dopamine caused by these stimulations. Software was also developed to display and analyse data in synchrony with events in the experiments. The system accommodates infusion pumps, and its flexibility is such that DBS or TMS can be used alone or combined with other stimulation techniques such as lights, sounds, etc. The developed system automatically controls the concentration of DA. Its resolution is around 0.4 µmol/L, with a concentration-correction time adjustable between 1 and 90 seconds. The system controls DA concentrations between 1 and 10 µmol/L, with an error of about ±0.8 µmol/L. Although designed to control DA concentration, the system can be used to control the concentration of other substances. It is proposed to continue the closed-loop development with FSCV and DBS (or TMS, or infusion) using parkinsonian animal models.
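
As a schematic illustration of the closed-loop idea (the gains, units, and control law are assumptions, not the published system), a proportional controller mapping the concentration error to a pump rate might look like this:

```python
def infusion_rate(measured_umol_l, setpoint_umol_l, k_p=0.5,
                  max_rate_ul_s=10.0):
    """Proportional control of the infusion-pump rate from the FSCV-style
    concentration reading: the rate grows with the shortfall below the
    dopamine setpoint and is clamped to the pump's physical limits."""
    error = setpoint_umol_l - measured_umol_l
    return min(max(k_p * error, 0.0), max_rate_ul_s)
```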

Relevance: 30.00%

Abstract:

This work is concerned with the design and analysis of hp-version discontinuous Galerkin (DG) finite element methods for boundary-value problems involving the biharmonic operator. The first part extends the unified approach of Arnold, Brezzi, Cockburn & Marini (SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779), developed for the Poisson problem, to the design of DG methods via an appropriate choice of numerical flux functions for fourth-order problems; as an example we retrieve the interior penalty DG method developed by Süli & Mozolevski (Comput. Methods Appl. Mech. Engrg. 196, 13-16 (2007), 1851-1863). The second part of this work is concerned with a new a priori error analysis of the hp-version interior penalty DG method, when the error is measured in terms of both the energy-norm and the L2-norm, as well as certain linear functionals of the solution, for elemental polynomial degrees $p\ge 2$. Moreover, provided that the solution is piecewise analytic in an open neighbourhood of each element, exponential convergence is proven for the p-version of the DG method. The sharpness of the theoretical developments is illustrated by numerical experiments.
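
Schematically, the exponential convergence result claimed above takes the following typical form (generic constants, not the paper's precise statement):

```latex
% Schematic p-version exponential convergence under piecewise analyticity:
\[
  \| u - u_h \|_{\mathrm{DG}} \;\le\; C\, e^{-b\,p}, \qquad p \ge 2,
\]
% with constants C, b > 0 independent of the polynomial degree p, and b
% depending on the size of the region of analyticity of the solution u.
```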

Relevance: 30.00%

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the hydrodynamic stability problem associated with the incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the eigenvalue problem in channel and pipe geometries. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to eigenvalue/stability problems. The underlying analysis consists of constructing both a dual eigenvalue problem and a dual problem for the original base solution. In this way, errors stemming from both the numerical approximation of the original nonlinear flow problem, as well as the underlying linear eigenvalue problem are correctly controlled. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
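
Schematically, dual-weighted-residual estimators for eigenvalue problems rest on an error representation of roughly the following form (cf. the Heuveline-Rannacher approach; the paper's precise statement may differ):

```latex
% (u_h, lambda_h): computed primal eigenpair; z, z_h: continuous and
% discrete dual (adjoint) eigenfunctions.
\[
  \lambda - \lambda_h \;\approx\;
  \tfrac{1}{2}\, \rho(u_h, \lambda_h)(z - z_h)
  \;+\; \tfrac{1}{2}\, \rho^{*}(z_h, \lambda_h)(u - u_h),
\]
% where \rho and \rho^{*} are the primal and dual residual functionals;
% localizing the right-hand side over the elements yields the computable
% indicators used for adaptive refinement.
```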
