968 results for boundary element methods
Abstract:
A clear and rigorous definition of muscle moment-arms in the context of musculoskeletal systems modelling is presented, using classical mechanics and screw theory. The definition provides an alternative to the tendon excursion method, which can lead to incorrect moment-arms if used inappropriately due to its dependency on the choice of joint coordinates. The definition of moment-arms, and the presented construction method, apply to musculoskeletal models in which the bones are modelled as rigid bodies, the joints are modelled as ideal mechanical joints and the muscles are modelled as massless, frictionless cables wrapping over the bony protrusions, approximated using geometric surfaces. In this context, the definition is independent of any coordinate choice. It is then used to solve a muscle-force estimation problem for a simple 2D conceptual model and compared with an incorrect application of the tendon excursion method. The relative errors between the two solutions vary between 0% and 100%.
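As a planar illustration of the coordinate-dependence issue (a minimal sketch only, not the paper's screw-theory construction; the geometry and parameter values are invented for the example), the snippet below compares the geometric moment arm, i.e. the signed perpendicular distance from the joint centre to the muscle's line of action, with the tendon excursion value -dL/dθ for a single revolute joint. The two agree when the coordinate is the joint angle itself; differentiating with respect to a rescaled coordinate without the chain-rule correction would scale the result, which is the kind of misuse the abstract warns about.

import numpy as np

A = np.array([0.0, 0.10])          # muscle origin, ground frame [m] (illustrative)
b_local = np.array([0.20, 0.02])   # insertion point in the moving-segment frame [m]

def insertion(theta):
    # Insertion point in the ground frame; the revolute joint sits at the origin.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ b_local

def length(theta):
    # Straight-line "muscle" length from origin A to the insertion.
    return np.linalg.norm(insertion(theta) - A)

def moment_arm_geometric(theta):
    # Signed perpendicular distance from the joint centre to the line of action:
    # z-component of B x u, with u the direction of the muscle force on the segment.
    B = insertion(theta)
    u = (A - B) / np.linalg.norm(A - B)
    return B[0] * u[1] - B[1] * u[0]

def moment_arm_excursion(theta, h=1e-6):
    # Tendon excursion method: minus the derivative of muscle length w.r.t. the joint angle.
    return -(length(theta + h) - length(theta - h)) / (2 * h)

theta = np.deg2rad(30.0)
print(moment_arm_geometric(theta))   # the two values coincide when theta is the true joint angle
print(moment_arm_excursion(theta))   # using -dL/dq for a rescaled coordinate q = 2*theta instead
                                     # would halve the apparent moment arm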
Abstract:
The water content dynamics in the upper soil surface during evaporation is a key element in land-atmosphere exchanges. Previous experimental studies have suggested that the soil water content increases at a depth of 5 to 15 cm below the soil surface during evaporation, while the layer in the immediate vicinity of the soil surface is drying. In this study, the dynamics of water content profiles exposed to solar radiative forcing was monitored at a high temporal resolution using dielectric methods both in the presence and absence of evaporation. A 4-d comparison of reported moisture content in coarse sand in covered and uncovered buckets was carried out using a commercial dielectric-based probe (70 MHz ECH2O-5TE, Decagon Devices, Pullman, WA) and the standard 1-GHz time domain reflectometry method. Both sensors reported a positive correlation between temperature and water content at the 5- to 10-cm depth, most pronounced in the morning during heating and in the afternoon during cooling. Such a positive correlation might have a physical origin induced by evaporation at the surface and redistribution due to liquid water fluxes resulting from the temperature-gradient dynamics within the sand profile at those depths. Our experimental data suggest that the combined effect of surface evaporation and temperature-gradient dynamics should be considered when analyzing experimental soil water profiles. Additional effects related to the frequency of operation and to protocols for temperature compensation of the dielectric sensors may also affect the probes' response during large temperature changes.
Abstract:
Magnetic resonance angiography (MRA) provides a noninvasive means to detect the presence, location and severity of atherosclerosis throughout the vascular system. In such studies, and especially those in the coronary arteries, the vessel luminal area is typically measured at multiple cross-sectional locations along the course of the artery. The advent of fast volumetric imaging techniques covering proximal to mid segments of coronary arteries necessitates automatic analysis tools requiring minimal manual interactions to robustly measure cross-sectional area along the three-dimensional track of the arteries in under-sampled and non-isotropic datasets. In this work, we present a modular approach based on level set methods to track the vessel centerline, segment the vessel boundaries, and measure transversal area using two user-selected endpoints in each coronary of interest. Arterial area and vessel length are measured using our method and compared to the standard Soap-Bubble reformatting and analysis tool in in-vivo non-contrast enhanced coronary MRA images.
Abstract:
Electrical deep brain stimulation (DBS) is an efficient method to treat movement disorders. Many models of DBS, based mostly on finite elements, have recently been proposed to better understand the interaction between the electrical stimulation and the brain tissues. In monopolar DBS, which is widely used clinically, the implanted pulse generator (IPG) serves as the reference electrode (RE). In this paper, the influence of the RE model in monopolar DBS is investigated. For that purpose, a finite element model of the full electric loop including the head, the neck and the superior chest is used. The head, neck and superior chest are built from simple structures such as parallelepipeds and cylinders. The tissues surrounding the electrode are accurately modelled from data provided by diffusion tensor magnetic resonance imaging (DT-MRI). Three different configurations of the RE are compared with a commonly used model of reduced size. The electrical impedance seen by the DBS system and the potential distribution are computed for each model. Moreover, axons are modelled to compute the area of tissue activated by stimulation. Results show that these indicators are influenced by the surface and position of the RE. The use of an RE model corresponding to the implanted device rather than the usually simplified model leads to an increase in the system impedance (+48%) and a reduction in the area of activated tissue (-15%).
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction for one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation by 4-17 mm for conductivity changes deep in the brain and due to epilepsy, and, overall, led to a substantial improvement in image quality. This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
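As an illustration of the sensitivity-matrix step (a minimal sketch only, not the authors' reconstruction code; the matrix sizes, the random Jacobian and the regularisation parameter are placeholders), a one-step linear EIT reconstruction with zeroth-order Tikhonov regularisation can be written as follows:

import numpy as np

rng = np.random.default_rng(0)
n_meas, n_elem = 258, 5000                 # illustrative sizes only
J = rng.standard_normal((n_meas, n_elem))  # sensitivity (Jacobian) matrix from a forward model
d_sigma_true = np.zeros(n_elem)
d_sigma_true[1234] = -0.1                  # e.g. a 10% conductivity decrease in one element
d_v = J @ d_sigma_true + 1e-3 * rng.standard_normal(n_meas)   # noisy boundary-voltage differences

lam = 1e-2 * np.trace(J @ J.T) / n_meas    # heuristic Tikhonov parameter
# Solve (J^T J + lam I) d_sigma = J^T d_v via the equivalent small system,
# avoiding the n_elem x n_elem normal matrix:
d_sigma = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_meas), d_v)
print(int(np.argmin(d_sigma)))             # index of the strongest reconstructed decrease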
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The similar fast marching method is discussed for comparison, and the narrow band and the particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is, but simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions. This gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. In particular, the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable-hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a broad range of imaging applications, such as computer vision and graphics and scientific visualization, and also to solve problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
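As a concrete illustration of this implicit idea (a minimal sketch only, not code from the thesis; grid size, speed and step count are arbitrary), the snippet below represents a circle as the zero level set of a two-dimensional function and moves it outward with constant speed F using a first-order Godunov upwind update of phi_t + F*|grad phi| = 0:

import numpy as np

n, F = 200, 1.0
x = np.linspace(-1, 1, n)
h = x[1] - x[0]
dt = 0.5 * h / F                          # CFL-limited time step
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.3          # zero level set = circle of radius 0.3

def upwind_gradient_norm(p):
    # One-sided differences (periodic via np.roll; fine here, the front stays off the edges).
    dxm = (p - np.roll(p, 1, axis=0)) / h
    dxp = (np.roll(p, -1, axis=0) - p) / h
    dym = (p - np.roll(p, 1, axis=1)) / h
    dyp = (np.roll(p, -1, axis=1) - p) / h
    # Godunov upwinding for outward motion (F > 0).
    return np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)

for _ in range(40):
    phi -= dt * F * upwind_gradient_norm(phi)

# Locate the new zero crossing along the positive x-axis: approx. 0.3 + 40*dt*F.
r = n // 2
print(x[r:][np.argmin(np.abs(phi[r:, r]))])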
Abstract:
BACKGROUND & AIMS: Trace elements (TE) are involved in the immune and antioxidant defences which are of particular importance during critical illness. Determining plasma TE levels is costly. The present quality control study aimed at assessing the economic impact of a computer reminded blood sampling versus a risk guided on-demand monitoring of plasma concentrations of selenium, copper, and zinc. METHODS: Retrospective analysis of 2 cohorts of patients admitted during 6 months periods in 2006 and 2009 to the ICU of a University hospital. INCLUSION CRITERIA: to receive intravenous micronutrient supplements and/or to have a TE sampling during ICU stay. The TE samplings were triggered by computerized reminder in 2006 versus guided by nutritionists in 2009. RESULTS: During the 2 periods 636 patients met the inclusion criteria out of 2406 consecutive admissions, representing 29.7% and 24.9% respectively of the periods' admissions. The 2009 patients had higher SAPS2 scores (p = 0.02) and lower BMI compared to 2006 (p = 0.007). The number of laboratory determinations was drastically reduced in 2009, particularly during the first week, despite the higher severity of the cohort, resulting in à 55% cost reduction. CONCLUSIONS: The monitoring of TE concentrations guided by a nutritionist resulted in a reduction of the sampling frequency, and targeting on the sickest high risk patients, requiring a nutritional prescription adaptation. This control leads to cost reduction compared to an automated sampling prescription.
Abstract:
The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs) which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses by having several unrelated origins, (3) there might be at least 8 classes and 12 orders of TEs including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE).
Abstract:
Partial-thickness tears of the supraspinatus tendon frequently occur at its insertion on the greater tubercle of the humerus, causing pain and reduced strength and range of motion. The goal of this work was to quantify the loss of loading capacity due to tendon tears at the insertion area. A finite element model of the supraspinatus tendon was developed using in vivo magnetic resonance imaging data. The tendon was represented by an anisotropic hyperelastic constitutive law identified from experimental measurements. A failure criterion was proposed and calibrated with experimental data. A partial-thickness tear was gradually increased, starting from the deep articular-sided fibres. For different values of tendon tear thickness, the tendon was mechanically loaded up to failure. The numerical model predicted a loss in loading capacity of the tendon as the tear thickness progressed. Tendon failure became more likely when the tendon tear exceeded 20%. The predictions of the model were consistent with experimental studies. Partial-thickness tears below 40% are sufficiently stable to withstand physiotherapeutic exercises. Above a 60% tear, surgery should be considered to restore shoulder strength.
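The abstract does not state which anisotropic hyperelastic law was identified; for orientation only, one commonly used transversely isotropic (fibre-reinforced) strain-energy form for tendon-like tissue, which may differ from the law used in this work, is

    \Psi = \frac{c_1}{2}(\bar{I}_1 - 3) + \frac{k_1}{2 k_2}\left[\exp\!\left(k_2 (\bar{I}_4 - 1)^2\right) - 1\right] + \frac{\kappa}{2}(J - 1)^2,

where \bar{I}_1 is the isochoric first invariant of the right Cauchy-Green tensor, \bar{I}_4 the squared fibre stretch along the tendon fibre direction, c_1, k_1 and k_2 are material parameters identified from experiments, \kappa is a volumetric penalty, and the exponential fibre term is typically activated only in fibre tension (\bar{I}_4 > 1).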
Abstract:
BACKGROUND: Although the importance of accurate femoral reconstruction to achieve a good functional outcome is well documented, quantitative data on the effects of a displacement of the femoral center of rotation on moment arms are scarce. The purpose of this study was to calculate moment arms after nonanatomical femoral reconstruction. METHODS: Finite element models of 15 patients, including the pelvis, the femur, and the gluteal muscles, were developed. Moment arms were calculated for the native anatomy and compared to distinct displacements of the femoral center of rotation (leg lengthening of 10 mm, loss of femoral offset of 20%, anteversion ±10°, and fixed anteversion at 15°). Calculations were performed within the range of motion observed during a normal gait cycle. RESULTS: Although the abductor moment arm remained positive for all evaluated displacements of the femoral center of rotation, some fibers initially contributing to extension became antagonists (flexors) and vice versa. A loss of 20% of femoral offset led to an average decrease of 15% in abductor moment. Femoral lengthening and changes in femoral anteversion (±10°, fixed at 15°) led to minimal changes in abductor moment arms (maximum change of 5%). Native femoral anteversion correlated with the changes in moment arms induced by the 5 variations of reconstruction. CONCLUSION: Accurate reconstruction of offset is important for maintaining abductor moment arms, while changes in femoral rotation had minimal effects. Patients with larger native femoral anteversion appear to be more susceptible to femoral head displacements.
Abstract:
This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover and a process-based model of sedimentation in a single framework. The integration of these two methods allows us to include the simulation of both sedimentation and deformation processes in a single and more effective model. The paper briefly describes the antecedents of the program, Simsafadim-Clastic and a discrete element model, in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, analyses of the evolution of syntectonic geometries in an extensional environment and in association with thrust fault propagation are undertaken. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of the deformation within the sedimentary cover, which is seen to be affected by the presence of the new syntectonic sediments.
Abstract:
This thesis examines and explains the procedure used to redesign the attachment of permanent magnets to the surface of the rotor of a synchronous generator. The methodology followed to go from the existing assembly to the final proposed innovation was based on the systematic design approach. This meant that a series of steps first had to be predefined as a frame of reference, later to be used to compare and select proposals, and finally to obtain the innovation that was sought. Firstly, a series of patents was used as the background for the upcoming ideas. To this end, several different patented assemblies were found and categorized according to the main element on which this thesis is focused, namely the attachment element or method. After establishing the technological frame of reference, a brainstorming session was held to obtain as many ideas as possible. These ideas were then classified, regardless of their degree of complexity or usability, since at this stage the quantity of ideas was the important issue. Subsequently, they were compared and evaluated from different points of view. The comparison and evaluation in this case were based on the use of a requirement list, which established the main needs that the design had to fulfill. The selection could then be made by grading each idea in accordance with these requirements. In this way, one was able to obtain the idea or ideas that best fulfilled these requirements. Once all of the ideas had been compared and evaluated, the best or most suitable idea or ideas were set apart. Finally, the selected idea or ideas was/were analyzed in depth and a number of improvements were made. Consequently, a final idea was refined and made more suitable in terms of performance, manufacture, and life cycle assessment. In the end, therefore, the design process provided a solution to the problem identified at the beginning.
Abstract:
The background and inspiration for the present study is earlier research on applications of boundary identification in the metal industry. Effective boundary identification allows smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without an increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at various positions in the wall. This can be used as the input signal to a boundary identification method for monitoring the wall thickness of the furnace. We give the background and motivation for choosing the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multi-dimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat conduction equation have much in common with boundary identification. We therefore study properties of the solutions of this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularisation, and more general consequences of the ill-posedness of the sideways heat conduction equation. We study a set of three different methods for boundary identification, of which the first two are developed from a strictly mathematical and the third from a more applied starting point. The methods have different properties, with specific advantages and drawbacks. The purely mathematically based methods are characterised by good accuracy and low numerical cost, though at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterised by lower accuracy, caused by a higher degree of ill-posedness of the more flexible model. For this method, an attempt at error estimation was also made, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for the development of industrial applications of boundary identification, in particular towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
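For orientation, a standard model problem of this kind (an illustrative formulation, not necessarily the exact one used in the thesis) is the sideways heat equation, in which temperature and heat flux are measured at an interior position of the wall and the solution is continued towards the inaccessible boundary:

    u_t = u_{xx},            0 < x < L,  t > 0,
    u(0,t) = g(t),  u_x(0,t) = h(t)      (measured at the sensor position),
    u(L,t) = ?                           (sought at the inaccessible furnace boundary).

This continuation is severely ill-posed: a noise component of frequency ω in the measured data is amplified roughly like exp(L·sqrt(ω/2)) at the sought boundary, which is why regularisation and careful treatment of measurement noise are essential.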
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail.

Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed.

Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, an actuator is driven against an end stopper, or an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity when the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed using a cubic spline function to describe the flow in the laminar and transition regions. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a tradeoff exists between accuracy and calculation speed; this investigation is made for the two-regime flow orifice model. Especially inside many types of valves, as well as between them, there exist very small volumes.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a serious weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODE) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. The method is also freely applicable regardless of the integration routine applied. The strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
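To make the first remedy concrete, the following is a minimal Python sketch (illustrative only, not the thesis implementation; the flow coefficient K and the transition pressure drop dp_tr are assumed, hypothetical parameters) of a two-regime orifice model in which the square-root turbulent law is replaced below dp_tr by an odd cubic matched to it in value and first derivative, so that the flow gain stays finite as the pressure drop approaches zero:

import numpy as np

def orifice_flow(dp, K=2e-8, dp_tr=2e5):
    # Volume flow [m^3/s] through an orifice for pressure drop dp [Pa].
    # K lumps Cd*A*sqrt(2/rho); dp_tr is the assumed transition pressure drop.
    # Turbulent branch: Q = K*sign(dp)*sqrt(|dp|). Below |dp| = dp_tr an odd cubic
    # a*dp + b*dp**3 is used, matched to the turbulent branch in value and first
    # derivative at dp_tr, so dQ/d(dp) remains finite at dp = 0.
    a = 1.25 * K / np.sqrt(dp_tr)
    b = -0.25 * K / dp_tr**2.5
    dp = np.asarray(dp, dtype=float)
    turbulent = np.sign(dp) * K * np.sqrt(np.abs(dp))
    smoothed = a * dp + b * dp**3
    return np.where(np.abs(dp) > dp_tr, turbulent, smoothed)

# The two branches meet smoothly at +/- dp_tr; the finite slope at zero pressure
# drop removes the infinite flow gain that breaks explicit fixed-step integration.
print(orifice_flow([0.0, 1.0e5, 2.0e5, 1.0e7]))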
Abstract:
This work investigated the use of the air ducts of recovery boilers as stiffening structures. Individual unstiffened and stiffened plate fields and their buckling resistance were examined according to the Eurocode standard and by means of the finite element method. In addition, the theory of plate buckling and the general behaviour of plate fields under different loads and boundary conditions were discussed. The aim of the work was to determine how buckling is analysed according to the Eurocode and using the finite element method when the plate field carries a transverse load in addition to in-plane loading. The use of two different finite-element-based solution alternatives in buckling analysis was studied. An applied use of the Eurocode interaction formula was developed as a complement to the solution of the linear eigenvalue problem, taking into account the effect of the pressure load on the buckling of the plate field. The developed method was applied to the dimensioning of an example air duct structure.