128 results for Parallel numerical algorithms
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was modeled around the humeral head, which was assumed to be spherical. The dynamical equations were solved using a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
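As an illustration of the two-step redundancy resolution described above, the sketch below first computes a minimum-norm muscle-force solution with a Moore-Penrose pseudo-inverse and then applies corrections restricted to the null space of the moment-arm matrix, so that the moment balance is preserved while out-of-bounds forces are pushed back toward physiological limits. The moment-arm matrix, external moments, force limits and the simple fixed-point correction loop are all assumptions made for this sketch, not the paper's formulation.

    import numpy as np

    # Hypothetical moment-arm matrix (3 joint moments x 6 muscles), in metres.
    R = np.array([
        [0.020, 0.030, -0.010,  0.000,  0.015, -0.020],
        [0.000, 0.010,  0.020,  0.030, -0.010,  0.010],
        [0.010, 0.000,  0.010, -0.020,  0.020,  0.030],
    ])
    m_ext = np.array([2.0, 1.5, -1.0])   # external joint moments to balance (N m)
    f_min, f_max = 0.0, 400.0            # illustrative physiological force limits (N)

    R_pinv = np.linalg.pinv(R)
    f = R_pinv @ m_ext                   # step 1: minimum-norm pseudo-inverse solution

    # Step 2: corrections projected onto the null space of R, so R @ f stays
    # equal to m_ext while bound violations are reduced (simple fixed-point loop,
    # not guaranteed to converge for every configuration).
    N = np.eye(R.shape[1]) - R_pinv @ R
    for _ in range(500):
        violation = np.minimum(f - f_min, 0.0) + np.maximum(f - f_max, 0.0)
        if np.allclose(violation, 0.0, atol=1e-9):
            break
        f = f - N @ violation

    print("muscle forces (N):", np.round(f, 2))
    print("moment residual  :", np.linalg.norm(R @ f - m_ext))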
Abstract:
In recent years, electrical methods have often been used for the investigation of subsurface structures. Electrical resistivity tomography (ERT) has been reported to be a useful non-invasive and spatially integrative prospecting technique. The ERT method has undergone significant improvements with the development of new inversion algorithms and increasingly efficient data-collection techniques. Multichannel technology and powerful computers allow resistivity data to be collected and processed within a few hours. Application domains are numerous and varied: geology and hydrogeology, civil engineering and geotechnics, archaeology and environmental studies. In particular, electrical methods are commonly used in hydrological studies of the vadose zone. The aim of this study was to develop a multichannel, automatic, non-invasive, reliable and inexpensive 3D monitoring system designed to follow electrical resistivity variations in soil during rainfall.
Because of technical limitations and in order not to disturb the subsurface, the proposed measurement device uses a non-conventional electrode set-up, where all the current electrodes are located near the edges of the survey grid. Using numerical modelling, the most appropriate arrays were selected to detect vertical and lateral variations of electrical resistivity in the framework of a permanent surveying installation. The results show that a pole-dipole array has better resolution than a pole-pole array and can successfully follow vertical and lateral resistivity variations despite the non-conventional electrode configuration used. Field data were then collected at a test site to assess the efficiency of the proposed monitoring technique. The system allows the 3D infiltration process during a rainfall event to be followed. A good correlation between the results of numerical modelling and the field data is observed, the field pole-dipole data giving a better-resolved image than the pole-pole data. The new device and technique make it possible to better characterize the zones of preferential flow and to quantify the role of lithology and pedology in flood-generating hydrological processes.
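As a back-of-the-envelope illustration of how the two arrays compared above convert a measured resistance into an apparent resistivity, the sketch below uses the standard geometric factors K = 2*pi*a for a pole-pole array and K = 2*pi*n*(n+1)*a for a pole-dipole array; the spacing, current and voltage values are invented, not taken from the study.

    import math

    def geometric_factor_pole_pole(a):
        """Geometric factor for a pole-pole array with electrode spacing a (m)."""
        return 2.0 * math.pi * a

    def geometric_factor_pole_dipole(a, n):
        """Geometric factor for a pole-dipole array: dipole length a (m),
        separation factor n between the current pole and the potential dipole."""
        return 2.0 * math.pi * n * (n + 1) * a

    # Hypothetical measurement: injected current I, measured potential difference dV.
    I, dV = 0.5, 0.012          # amperes, volts (illustrative values)
    a, n = 1.0, 4               # spacing and separation factor (illustrative)

    for name, K in [("pole-pole", geometric_factor_pole_pole(a)),
                    ("pole-dipole", geometric_factor_pole_dipole(a, n))]:
        rho_app = K * dV / I    # apparent resistivity in ohm.m
        print(f"{name}: K = {K:.1f} m, apparent resistivity = {rho_app:.2f} ohm.m")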
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic elastic perfectly plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model based on a quadratic elastic potential and on Hill's quadratic yield criterion is then developed and implemented in a computer program with bone mechanics applications in view. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
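The sketch below illustrates the kind of implicit stress projection discussed above in a deliberately reduced setting: linear isotropic elasticity, normal stress components only, Hill's quadratic yield criterion, perfect plasticity, and a Newton iteration on the plastic multiplier. The material parameters are invented and the formulation is far simpler than the curvilinear transverse isotropic model of the paper.

    import numpy as np

    # Isotropic linear elasticity (normal components only), illustrative parameters.
    E, nu = 10e3, 0.3                                  # MPa, order of magnitude of bone
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = lam * np.ones((3, 3)) + 2 * mu * np.eye(3)     # stiffness for normal strains

    # Hill's quadratic criterion on normal stresses: f(s) = sqrt(s.P.s) - 1.
    F_, G_, H_ = 1 / 200.0**2, 1 / 200.0**2, 1 / 120.0**2   # illustrative yield constants
    P = np.array([[G_ + H_, -H_,      -G_],
                  [-H_,      F_ + H_, -F_],
                  [-G_,     -F_,       F_ + G_]])

    def yield_f(s):
        return np.sqrt(s @ P @ s) - 1.0

    def return_map(eps, eps_p, tol=1e-10, max_it=50):
        """Implicit projection of the trial stress back onto the yield surface
        (perfect plasticity), with a Newton iteration on the multiplier dgam."""
        s_tr = C @ (eps - eps_p)
        if yield_f(s_tr) <= 0.0:
            return s_tr, eps_p                         # elastic step, no projection
        dgam, s = 0.0, s_tr.copy()
        for _ in range(max_it):
            n = P @ s / np.sqrt(s @ P @ s)             # flow direction df/ds
            r = yield_f(s)
            if abs(r) < tol:
                break
            dgam += r / (n @ C @ n)                    # df/ddgam ~ -(n . C . n)
            s = s_tr - dgam * C @ n
        return s, eps_p + dgam * n

    eps = np.array([0.03, 0.0, 0.0])                   # imposed normal strains
    sigma, eps_p = return_map(eps, np.zeros(3))
    print("projected stress:", np.round(sigma, 2), " f =", round(yield_f(sigma), 6))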
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use widened to address different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
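The sketch below is not PECUBE; it is a minimal one-dimensional illustration of the heat production-diffusion-advection balance mentioned above, solved with explicit finite differences while a single rock particle is advected toward the surface at the exhumation rate, and a crude "age" is read off from the moment the particle last cools through an assumed closure temperature. Every parameter value is invented for illustration.

    import numpy as np

    # 1D heat production-diffusion-advection, explicit finite differences (illustrative).
    L, nz = 30e3, 151                     # model depth (m), grid points
    dz = L / (nz - 1)
    z = np.linspace(0.0, L, nz)           # depth below the surface (m)
    kappa = 1e-6                          # thermal diffusivity (m^2/s)
    v = 1.0e-3 / 3.15e7                   # exhumation rate: 1 mm/yr in m/s
    H = 5e-13                             # heat production / (rho*c), in K/s
    T_surf, T_base = 0.0, 600.0
    T = T_surf + (T_base - T_surf) * z / L          # initial linear geotherm

    dt = 0.2 * dz**2 / kappa              # stable explicit time step
    t_total = 20e6 * 3.15e7               # 20 Myr model run
    nt = int(t_total / dt)

    zp = 15e3                             # tracked rock particle, initial depth (m)
    Tc = 110.0                            # assumed closure temperature (deg C)
    t_closure = None

    for it in range(nt):
        d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        dTdz = (T[2:] - T[:-2]) / (2 * dz)
        T[1:-1] += dt * (kappa * d2T + v * dTdz + H)   # rocks advected upward
        T[0], T[-1] = T_surf, T_base
        zp = max(zp - v * dt, 0.0)                     # particle exhumed at rate v
        if t_closure is None and np.interp(zp, z, T) < Tc:
            t_closure = it * dt

    age_Myr = (t_total - t_closure) / 3.15e7 / 1e6 if t_closure is not None else None
    print("predicted cooling 'age' (Myr before end of run):", age_Myr)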
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior-probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
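To make the posterior-probability family of heuristics concrete, here is a minimal uncertainty-sampling loop on synthetic data: at each iteration the pixels whose two largest class posteriors are closest (a breaking-ties/margin criterion) are queried and added to the training set. The data, classifier choice (scikit-learn logistic regression) and batch size are assumptions made for this sketch, not the experimental setup of the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic "pixels": 3 spectral classes in a 4-band feature space (invented data).
    centers = rng.normal(size=(3, 4)) * 3.0
    X_pool = np.vstack([c + rng.normal(size=(500, 4)) for c in centers])
    y_pool = np.repeat(np.arange(3), 500)

    # Small initial training set: 3 labeled pixels per class.
    labeled = list(np.concatenate(
        [rng.choice(np.where(y_pool == c)[0], size=3, replace=False) for c in range(3)]))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for it in range(10):
        clf = LogisticRegression(max_iter=500).fit(X_pool[labeled], y_pool[labeled])
        print(f"iteration {it}: {len(labeled)} labeled pixels, "
              f"pool accuracy = {clf.score(X_pool, y_pool):.3f}")
        proba = clf.predict_proba(X_pool[unlabeled])
        part = np.sort(proba, axis=1)
        margin = part[:, -1] - part[:, -2]            # breaking-ties (margin) heuristic
        picked = [unlabeled[i] for i in np.argsort(margin)[:5]]
        labeled.extend(picked)                        # the "user" supplies these labels
        unlabeled = [i for i in unlabeled if i not in picked]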
Abstract:
Brain deformations induced by space-occupying lesions may result in unpredictable position and shape of functionally important brain structures. The aim of this study is to propose a method for segmentation of brain structures by deformation of a segmented brain atlas in the presence of a space-occupying lesion. Our approach is based on an a priori model of lesion growth (MLG) that assumes radial expansion from a seeding point and involves three steps: first, an affine registration bringing the atlas and the patient into global correspondence; then, the seeding of a synthetic tumor into the brain atlas, providing a template for the lesion; finally, the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. The method was applied to two meningiomas inducing a pure displacement of the underlying brain structures, and the segmentation accuracy of the ventricles and basal ganglia was assessed. Results show that the segmented structures were consistent with the patient's anatomy and that the deformation accuracy of surrounding brain structures was highly dependent on the accurate placement of the tumor seeding point. Further improvements of the method will optimize the segmentation accuracy. Visualization of brain structures provides useful information for therapeutic consideration of space-occupying lesions, including surgical, radiosurgical, and radiotherapeutic planning, in order to increase treatment efficiency and prevent neurological damage.
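A toy version of the radial lesion-growth idea is sketched below: atlas points are pushed radially away from a seeding point with a displacement that decays with distance. The decay law, radius and magnitude are invented; the actual method combines this kind of growth model with an optical-flow-based deformation, which is not reproduced here.

    import numpy as np

    def radial_growth_displacement(points, seed, radius, max_push):
        """Toy lesion-growth model: points are pushed radially away from the
        seeding point, with a displacement that decays with distance
        (illustration only, not the paper's optical-flow formulation)."""
        d = points - seed
        dist = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-9)
        push = max_push * np.exp(-dist / radius)   # displacement decays away from seed
        return points + push * d / dist

    # Hypothetical atlas structure vertices (mm) and tumor seeding point.
    atlas_points = np.array([[10.0, 20.0, 30.0],
                             [12.0, 18.0, 29.0],
                             [40.0, 45.0, 10.0]])
    seed = np.array([11.0, 19.0, 30.0])
    warped = radial_growth_displacement(atlas_points, seed, radius=15.0, max_push=6.0)
    print(np.round(warped, 2))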
Abstract:
In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed depending on whether the vegetation is dominated by bending or tensile forces. For bending plants, a model structured on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants, an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that, for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem scale, which leads to the drag discontinuity and associated mixing layer.
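The bending-plant component can be illustrated in isolation with a static, small-deflection Euler-Bernoulli cantilever under a uniform drag load, using the classical closed-form deflection w(x) = q x^2 (6L^2 - 4Lx + x^2) / (24EI). Stem geometry, stiffness and flow speed below are invented, and the real model is dynamic and coupled to the LES rather than static.

    import numpy as np

    # Static Euler-Bernoulli cantilever under uniform drag load (small deflections).
    rho, U, Cd = 1000.0, 0.4, 1.0          # water density, flow speed, drag coefficient
    d, L = 0.005, 0.30                     # stem diameter and length (m), illustrative
    E = 1.0e9                              # stem Young's modulus (Pa), illustrative
    I = np.pi * d**4 / 64.0                # second moment of area of a circular stem
    q = 0.5 * rho * Cd * d * U**2          # drag force per unit length (N/m)

    x = np.linspace(0.0, L, 7)
    w = q * x**2 * (6 * L**2 - 4 * L * x + x**2) / (24.0 * E * I)  # EI w'''' = q
    tip = q * L**4 / (8.0 * E * I)         # classical tip-deflection formula

    print("deflection along the stem (mm):", np.round(w * 1e3, 2))
    print("tip deflection (mm):", round(tip * 1e3, 2))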
Abstract:
AIM: Although acute pain is frequently reported by patients admitted to the emergency room, it is often insufficiently evaluated by physicians and is thus undertreated. With the aim of improving the care of adult patients with acute pain, we developed and implemented abbreviated clinical practice guidelines (CG) for the nursing and medical staff of our hospital's emergency room. METHODS: Our algorithm is based upon the practices described in the international literature and uses a simultaneous approach of treating acute pain rapidly and efficaciously alongside diagnostic and therapeutic procedures. RESULTS: Pain was assessed using either a visual analogue scale (VAS) or a numerical rating scale (NRS) at ER admission and again during the hospital stay. Patients were treated with paracetamol and/or NSAID (VAS/NRS < 4) or intravenous morphine (VAS/NRS ≥ 4). The algorithm also outlines a specific approach for patients with headaches to minimise the risks inherent in non-specific treatment. In addition, our algorithm addresses the treatment of paroxysmal pain in patients with chronic pain as well as acute pain in drug addicts. It also outlines measures for pain prevention prior to minor diagnostic or therapeutic procedures. CONCLUSIONS: Based on published guidelines, an abbreviated clinical algorithm (AA) was developed, and its simple format permitted widespread implementation. In contrast to international guidelines, our algorithm favours giving nursing staff responsibility for decision-making aspects of pain assessment and treatment in emergency room patients.
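Purely as a schematic of the decision logic summarised in the abstract (the VAS/NRS thresholds quoted above), and in no way a clinical recommendation, the branching can be written as a small decision function:

    def initial_analgesia(pain_score, headache=False):
        """Schematic of the triage logic described in the abstract (VAS/NRS 0-10).
        Illustration only; not medical guidance."""
        if headache:
            return "specific headache work-up before non-specific analgesia"
        if pain_score < 4:
            return "paracetamol and/or NSAID"
        return "titrated intravenous morphine"

    for score in (2, 6):
        print(score, "->", initial_analgesia(score))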
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult, as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D. et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi: 10.1029/2011WR011284.
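A common way such roughness-length treatments enter the near-bed boundary condition is through the law of the wall, u(z) = (u*/kappa) ln(z/z0). The short sketch below simply shows how sensitive the predicted velocities are to the choice of z0; the shear velocity, heights and roughness lengths are invented values, not those of the study.

    import numpy as np

    kappa = 0.41                          # von Karman constant
    u_star = 0.05                         # shear velocity (m/s), illustrative
    z = np.array([0.5, 1.0, 2.0, 5.0])    # heights above the bed (m)

    for z0 in (0.01, 0.05):               # two candidate roughness lengths (m)
        u = (u_star / kappa) * np.log(z / z0)   # law-of-the-wall velocity profile
        print(f"z0 = {z0:.2f} m -> u(z) =", np.round(u, 3), "m/s")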
Abstract:
QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein to estimate GFR, and the potential added advantage of measuring muscle mass by bioimpedance. In a prospective unselected group of patients hospitalised in a general internal medicine ward with CKD, GFR was evaluated using inulin clearance as the gold standard and the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace) and MacDonald (creatinine and muscle mass by bioimpedance). 69 patients were included in the study. Median age (interquartile range) was 80 years (73-83); weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81), MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87), Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92), White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87), MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescriptions of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not provide a significant contribution.
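One of the algorithms evaluated above, the Cockcroft-Gault estimate, is simple enough to show directly. The sketch below implements the standard formula CrCl = (140 - age) x weight / (72 x SCr), with serum creatinine converted from μmol/l to mg/dl and a 0.85 factor for women; the example call uses the cohort's median values and assumes a male patient, which is an assumption made only for illustration.

    def cockcroft_gault(age_years, weight_kg, creatinine_umol_l, female):
        """Cockcroft-Gault creatinine clearance estimate (ml/min).
        Serum creatinine converted from umol/l to mg/dl (1 mg/dl = 88.4 umol/l)."""
        scr_mg_dl = creatinine_umol_l / 88.4
        crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    # Median cohort values from the abstract (illustration, not an individual patient).
    print(round(cockcroft_gault(80, 74.7, 126, female=False), 1), "ml/min")

With these median values the formula returns roughly 44 ml/min, well above the median inulin clearance of 30.9 ml/min reported above, which is in line with the overestimation the study describes.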
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
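As a reminder of what the geostatistical half of such hybrid models does, the sketch below implements a minimal ordinary-kriging predictor with an exponential semivariogram: the kriging system enforces weights that sum to one and minimize the estimation variance under the assumed variogram. The gauge coordinates, values and variogram parameters are invented, and no external drift or neural network component is included.

    import numpy as np

    def gamma_exp(h, sill=1.0, rng_=50.0, nugget=0.0):
        """Exponential semivariogram (illustrative parameters)."""
        return nugget + sill * (1.0 - np.exp(-h / rng_))

    def ordinary_kriging(xy, z, x0):
        """Minimal ordinary-kriging predictor at location x0."""
        n = len(z)
        h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = gamma_exp(h)
        A[:n, n] = A[n, :n] = 1.0            # unbiasedness (weights sum to one)
        b = np.append(gamma_exp(np.linalg.norm(xy - x0, axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        return w[:n] @ z                     # kriged estimate

    # Toy rain-gauge data: coordinates (km) and precipitation (mm), invented values.
    xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [40.0, 40.0]])
    z = np.array([12.0, 30.0, 18.0, 55.0])
    x0 = np.array([20.0, 20.0])
    print("kriged precipitation at (20, 20):", round(ordinary_kriging(xy, z, x0), 2), "mm")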
Abstract:
Abstract: In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms make it possible, inter alia, to compare a fingermark to a fingerprint database and therefore to establish a link between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification is the growth of these databases, which makes it possible to find impressions that are very similar but that come from distinct fingers. At the same time, however, these data and these systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger and the between-variability of the mark with respect to a database, can then be based on representative data. These data thus allow an evaluation which may be more detailed than that obtained by applying rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system when the source of the tested and compared marks is known. These ratios must support the hypothesis which is known to be true. Moreover, they should support this hypothesis more and more strongly with the addition of information in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors which influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of the minutiae. The results show that likelihood ratios based on the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger, when the impressions in fact came from different fingers, is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2%, then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7 and 8 minutiae when the two impressions come from the same finger.
These likelihood ratios can therefore be an important aid for decision making. Both positive trends linked to the addition of minutiae (a drop in the rate of likelihood ratios that could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
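The core computation of the thesis can be sketched in a few lines: the likelihood ratio is the ratio of the density of the observed comparison score under the within-finger (same source) distribution to its density under the between-finger distribution. The scores and the normal-density modelling below are invented stand-ins for the AFIS scores and empirical distributions actually used.

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented comparison scores (the thesis uses real AFIS scores).
    within_scores = rng.normal(1200.0, 150.0, size=200)    # same-finger comparisons
    between_scores = rng.normal(400.0, 120.0, size=5000)   # different-finger comparisons

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def likelihood_ratio(score):
        """LR = p(score | same finger) / p(score | different fingers)."""
        num = normal_pdf(score, within_scores.mean(), within_scores.std())
        den = normal_pdf(score, between_scores.mean(), between_scores.std())
        return num / den

    for s in (500.0, 900.0, 1300.0):
        print(f"score {s:.0f}: LR ~ {likelihood_ratio(s):.3g}")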