973 results for low-dimensional system
Abstract:
An in vitro translation system has been prepared from Plasmodium falciparum by saponin lysis of infected erythrocytes to free the parasites, which were homogenized with glass beads and centrifuged to obtain an S-30 fraction, followed by Sephadex G-25 gel filtration. This treatment produced a system with very low contamination by host proteins (<1%). The system, optimized for Mg2+ and K+, translates endogenous mRNA and is active for 80 min, which suggests that its protein factors and mRNA are quite stable.
Abstract:
The feasibility of three-dimensional (3D) whole-heart imaging of the coronary venous (CV) system was investigated. The hypothesis that coronary magnetic resonance venography (CMRV) can be improved by using an intravascular contrast agent (CA) was tested. A simplified model of the contrast in T2-prepared steady-state free precession (SSFP) imaging was applied to calculate optimal T2-preparation durations for the various deoxygenation levels expected in venous blood. Non-contrast-agent (nCA)- and CA-enhanced images were compared for the delineation of the coronary sinus (CS) and its main tributaries. A quantitative analysis of the resulting contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) in both approaches was performed. Precontrast visualization of the CV system was limited by the poor CNR between large portions of the venous blood and the surrounding tissue. Postcontrast, a significant increase in CNR between the venous blood and the myocardium (Myo) resulted in a clear delineation of the target vessels. The CNR improvement was 347% (P < 0.05) for the CS, 260% (P < 0.01) for the mid cardiac vein (MCV), and 430% (P < 0.05) for the great cardiac vein (GCV). The improvement in SNR was on average 155%, but was not statistically significant for the CS and the MCV. The signal of the Myo could be significantly reduced to about 25% (P < 0.001).
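For two tissues with mono-exponential T2 decay, the T2-preparation duration that maximizes their signal difference has a closed form; the sketch below illustrates the idea. The relaxation times are illustrative assumptions, not values from the study.

```python
import numpy as np

def optimal_t2prep(t2_a, t2_b):
    """T2-preparation duration maximizing exp(-t/t2_a) - exp(-t/t2_b).

    Setting the derivative of the signal difference to zero gives
    t* = ln(t2_a / t2_b) / (1/t2_b - 1/t2_a).
    """
    return np.log(t2_a / t2_b) / (1.0 / t2_b - 1.0 / t2_a)

# illustrative T2 values (ms): deoxygenated venous blood vs. myocardium
t_opt = optimal_t2prep(t2_a=35.0, t2_b=50.0)  # about 41.6 ms
```

The expression is symmetric in the two tissues, so the same duration is optimal whichever of the two signals is the brighter one.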
Abstract:
The Northern Snake Range (Nevada) represents a spectacular example of a metamorphic core complex and exposes a complete section from the mylonitic footwall into the hanging wall of a fossil detachment system. Paired geochronological and stable isotopic data from mylonitic quartzite within the detachment footwall reveal that ductile deformation and infiltration of meteoric fluids occurred between 27 and 23 Ma. Ar-40/Ar-39 ages display complex recrystallization-cooling relationships but decrease systematically from 26.9 +/- 0.2 Ma at the top to 21.3 +/- 0.2 Ma at the bottom of the footwall mylonite. Hydrogen isotope (delta D) values in white mica are very low (-150 to -145 ‰) within the top 80-90 m of the detachment footwall, in contrast to values obtained from the deeper part of the section, which range from -77 to -64 ‰, suggesting that time-integrated interaction between rock and meteoric fluid was restricted to the uppermost part of the mylonitic footwall. Pervasive mica-water hydrogen isotope exchange is difficult to reconcile with models of Ar-40 loss during mylonitization solely by volume diffusion. Rather, we interpret the Ar-40/Ar-39 ages of white mica with low delta D values to date syn-mylonitic hydrogen and argon isotope exchange, and we conclude that the hydrothermal system of the Northern Snake Range was active during the late Oligocene (27-23 Ma) and has been exhumed by the combined effects of ductile strain, extensional detachment faulting, and erosion.
Abstract:
PURPOSE: To determine the diagnostic value of the intravascular contrast agent gadocoletic acid (B-22956) in three-dimensional, free-breathing coronary magnetic resonance angiography (MRA) for stenosis detection in patients with suspected or known coronary artery disease. METHODS: Eighteen patients underwent three-dimensional, free-breathing coronary MRA of the left and right coronary system before and after intravenous application of a single dose of gadocoletic acid (B-22956) using three different dose regimens (group A, 0.050 mmol/kg; group B, 0.075 mmol/kg; group C, 0.100 mmol/kg). Precontrast scanning used a standard non-contrast T2-preparation/turbo-gradient-echo coronary MRA sequence (T2Prep); for postcontrast scanning, an inversion-recovery gradient-echo sequence was used (real-time navigator correction for both scans). In pre- and postcontrast scans, quantitative analysis of coronary MRA data was performed to determine the number of visible side branches, vessel length, and vessel sharpness for each of the three coronary arteries (LAD, LCX, RCA). The number of assessable coronary artery segments was determined to calculate sensitivity and specificity for the detection of stenosis ≥ 50% on a segment-to-segment basis (16-segment model) in pre- and postcontrast scans, with X-ray coronary angiography as the standard of reference. RESULTS: Dose group B (0.075 mmol/kg) was preferable with regard to improvement of MR angiographic parameters: in postcontrast scans, all MR angiographic parameters increased significantly except for the number of visible side branches of the left circumflex artery. In addition, the assessability of coronary artery segments improved significantly postcontrast in this dose group (67 versus 88%, p < 0.01). Diagnostic performance (sensitivity, specificity, accuracy) was 83, 77, and 78% for precontrast and 86, 95, and 94% for postcontrast scans.
CONCLUSIONS: The use of gadocoletic acid (B-22956) improves MR angiographic parameters, the assessability of coronary segments, and the detection of coronary stenoses ≥ 50%.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
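The gradual-deformation perturbation described above can be sketched for the simplest multi-Gaussian case. This is a minimal illustration under assumed standard-normal statistics; the function and parameter names are illustrative, not taken from the thesis.

```python
import numpy as np

def gradual_deformation_proposal(m_current, rng, theta=0.3):
    """Propose a correlated perturbation of a standard-Gaussian field.

    Mixing the current realization with an independent draw from the
    same prior via cos/sin weights leaves the Gaussian prior invariant
    (cos^2 + sin^2 = 1); theta controls the perturbation strength
    (theta = pi/2 gives an independent redraw, theta -> 0 a tiny step).
    """
    z = rng.standard_normal(m_current.shape)  # independent prior realization
    return np.cos(theta) * m_current + np.sin(theta) * z

rng = np.random.default_rng(0)
m = rng.standard_normal(100000)          # current model realization
m_new = gradual_deformation_proposal(m, rng, theta=0.3)
```

Because the prior statistics are preserved for any theta, the perturbation strength can be tuned freely to trade acceptance rate against step size in an MCMC chain.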
Abstract:
In the subject of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, among other things, a fingermark to be compared against a fingerprint database and thus a link to be established between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification lies in the growth of these databases, which makes it possible to find impressions that are very similar but that come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation that may be more detailed than that obtained by applying rules established long before the advent of these large databases or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly with the addition of information in the form of additional minutiae.
For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of the minutiae. The results show that the likelihood ratios resulting from the use of the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that two impressions were left by the same finger when they in fact came from different fingers is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae when the two impressions come from the same finger. These likelihood ratios can therefore be an important aid for decision making. Both positive evolutions linked to the addition of minutiae (a drop in the rates of likelihood ratios that can lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study.
Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found, and showed satisfactory results.
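The score-based likelihood-ratio computation discussed above can be sketched as follows. This is a minimal illustration that assumes simple Gaussian models for the within- and between-finger score distributions; all scores and numbers are synthetic, not data from the thesis.

```python
import numpy as np

def fit_gaussian(scores):
    """Fit a simple Gaussian model to a sample of comparison scores."""
    return float(np.mean(scores)), float(np.std(scores, ddof=1))

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def likelihood_ratio(score, within_scores, between_scores):
    """LR = p(score | same finger) / p(score | different fingers)."""
    mu_w, sd_w = fit_gaussian(within_scores)    # within-finger model
    mu_b, sd_b = fit_gaussian(between_scores)   # between-finger model
    return float(gaussian_pdf(score, mu_w, sd_w) / gaussian_pdf(score, mu_b, sd_b))

rng = np.random.default_rng(1)
within = rng.normal(120.0, 15.0, 200)     # synthetic same-finger scores
between = rng.normal(40.0, 10.0, 5000)    # synthetic different-finger scores
lr = likelihood_ratio(110.0, within, between)  # score typical of a true match
```

A score near the within-finger mode yields a large LR (support for same-source), while a score near the between-finger mode yields an LR below 1 (support for different sources).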
Abstract:
The greenhead ant Rhytidoponera metallica has long been recognized as posing a potential challenge to kin selection theory because it has large queenless colonies in which many of the morphological workers are apparently mated and reproducing. However, this species has never been studied genetically, and important elements of its breeding system and kin structure remain uncertain. We used microsatellite markers to measure the relatedness among nestmates, unravel the fine-scale population genetic structure, and infer the breeding system of R. metallica. The genetic relatedness among worker nestmates is very low but significantly greater than zero (r = 0.082 +/- 0.015), which demonstrates that nests contain many distantly related breeders. The inbreeding coefficient is very close to, and not significantly different from, zero, indicating random mating and a lack of microgeographic genetic differentiation. On average, closely located nests are not more similar genetically than distant nests, which is surprising because new colonies form by budding and female dispersal is restricted. The lack of inbreeding and the absence of population viscosity indicate high gene flow mediated by males. Overall, the genetic pattern detected in R. metallica suggests that a high number of moderately related workers mate with unrelated males from distant nests. This breeding system results in the lowest relatedness among nestmates reported for any social insect species in which breeders and helpers are not morphologically differentiated.
Abstract:
An increase of urokinase-type plasminogen activator (uPA) and a decrease of tissue-type PA (tPA) have been associated with the transition from normal to adenomatous colorectal mucosa. Serial sections from 25 adenomas were used to identify PA-related caseinolytic activities by in situ zymography, selectively blocking uPA or tPA. The distribution of uPA, tPA, and type 1 PA inhibitor mRNAs was investigated by nonradioactive in situ hybridization, and the receptor for uPA was detected by immunostaining. Low- and high-grade epithelial cell dysplasia was mapped histologically. The results show that 23 of 25 adenomas expressed uPA-related lytic activity located predominantly in the periphery, whereas tPA-related activity was found mainly in central areas of the adenomas. In 15 of 25 adenomas, uPA mRNA was expressed in stromal cells clustered in foci that coincided with areas of uPA lytic activity. The probability of finding uPA mRNA-reactive cells was significantly higher in areas with high-grade epithelial dysplasia. The uPA receptor was mainly stromal and expressed at the periphery. Type 1 PA inhibitor mRNA expression was diffuse in the stroma, in endothelial cells, and in a subpopulation of alpha-smooth muscle actin-reactive cells. These results show that stromal up-regulation of the uPA/plasmin system is associated with foci of severe dysplasia in a subset of colorectal adenomas.
Abstract:
To make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques, and to compare the results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan, and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs, and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head-scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared with conventional techniques and may offer the most optimal plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
Abstract:
The aim of the present study was to determine whether an increase in resting energy expenditure (REE) contributes to the impaired nutritional status of Gambian children with a low-level infection by pathogenic helminths. The REE of 24 children infected with hookworm, Ascaris, Strongyloides, or Trichuris (mean +/- SEM age = 11.9 +/- 0.1 years) and of eight uninfected controls (mean +/- SEM age = 11.8 +/- 0.1 years) was measured by indirect calorimetry with a hood system (test A). The measurement was repeated after treatment with 400 mg of albendazole (patients) or a placebo (controls) (test B). When normalized for fat-free mass, REE in test A did not differ between the patients (177 +/- 2 kJ/kg per day) and the controls (164 +/- 7 kJ/kg per day); furthermore, REE did not change significantly after treatment in the patients (173 +/- 3 kJ/kg per day) or in the controls (160 +/- 8 kJ/kg per day). There was no significant difference in the respiratory quotient between patients and controls, or between tests A and B. It is concluded that a low level of helminth infection does not significantly affect the energy metabolism of Gambian children.
Abstract:
OBJECTIVES: Mannan-binding lectin (MBL) acts as a pattern-recognition molecule directed against oligomannan, which is part of the cell wall of yeasts and various bacteria. We have previously shown an association between MBL deficiency and anti-Saccharomyces cerevisiae mannan antibody (ASCA) positivity. This study aims to evaluate whether MBL deficiency is associated with distinct Crohn's disease (CD) phenotypes. METHODS: Serum concentrations of MBL and ASCA were measured using ELISA (enzyme-linked immunosorbent assay) in 427 patients with CD, 70 with ulcerative colitis, and 76 healthy controls. CD phenotypes were grouped according to the Montreal Classification as follows: non-stricturing, non-penetrating (B1, n=182); stricturing (B2, n=113); penetrating (B3, n=67); and perianal disease (p, n=65). MBL was classified as deficient (<100 ng/ml), low (100-500 ng/ml), and normal (>500 ng/ml). RESULTS: Mean MBL was lower in B2 and B3 CD patients (1,503+/-1,358 ng/ml) than in B1 patients (1,909+/-1,392 ng/ml, P=0.013). B2 and B3 patients more frequently had low or deficient MBL and ASCA positivity than B1 patients (P=0.004 and P<0.001). Mean MBL was lower in ASCA-positive CD patients (1,562+/-1,319 ng/ml) than in ASCA-negative CD patients (1,871+/-1,320 ng/ml, P=0.038). In multivariate logistic regression modeling, low or deficient MBL was significantly associated with B1 (negative association), complicated disease (B2+B3), and ASCA. MBL levels did not correlate with disease duration. CONCLUSIONS: Low or deficient MBL serum levels are significantly associated with complicated (stricturing and penetrating) CD phenotypes but are negatively associated with the non-stricturing, non-penetrating group. Furthermore, CD patients with low or deficient MBL are significantly more often ASCA positive, possibly reflecting delayed clearance of oligomannan-containing microorganisms by the innate immune system in the absence of MBL.
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization; landmark detection and tracking; and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measurements for a velocity-based low-level controller. The paper details the localization algorithm, presents graphical results, and reports the accuracy of the system.
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of (i) an image mosaicking module, which provides velocity estimates, and (ii) an extended Kalman filter based on the hydrodynamic equations of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it can deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain area of the trajectory.
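The fusion of mosaic-derived velocity estimates in a Kalman filter can be illustrated with a generic constant-velocity model. This is a simplified one-dimensional sketch with made-up noise parameters; the paper's filter instead uses the hydrodynamic equations of motion identified for the vehicle.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.05):
    """One predict/update cycle of a constant-velocity Kalman filter.

    State x = [position, velocity]; z is a velocity measurement, such as
    the estimate produced by an image-mosaicking module.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[0.0, 1.0]])              # we observe velocity only
    Q = q * np.eye(2)                       # process noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the velocity measurement
    y = z - (H @ x)[0]                      # innovation
    S = (H @ P @ H.T)[0, 0] + r             # innovation variance
    K = (P @ H.T)[:, 0] / S                 # Kalman gain
    x = x + K * y
    P = P - np.outer(K, H @ P)
    return x, P

x, P = np.zeros(2), np.eye(2)
for _ in range(50):
    x, P = kalman_step(x, P, z=1.0)  # steady 1 m/s velocity readings
```

With constant velocity readings, the velocity estimate converges to the measured value while the filter integrates it into a smoothed position estimate.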
Abstract:
It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage, where algorithms deal with a great number of pixels. One solution for estimating motion involves detecting correspondences between two images. For normalized correlation criteria, previous experiments have shown that the result is not altered in the presence of nonuniform illumination. Usually, hardware for motion estimation has been limited to simple correlation criteria. The main goal of this paper is to propose a VLSI architecture for motion estimation using a matching criterion more complex than the Sum of Absolute Differences (SAD) criterion. Today's hardware devices provide many facilities for the integration of increasingly complex designs, as well as the possibility of easily communicating with general-purpose processors.
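The SAD matching criterion mentioned above can be sketched in software as a full-search block-matching routine. This is a minimal reference illustration; the block size, search window, and image shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_match(ref, target, y, x, size=8, search=4):
    """Full-search motion estimation: find the displacement (dy, dx) of
    the block at (y, x) in `ref` that minimizes the SAD in `target`."""
    block = ref[y:y + size, x:x + size]
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > target.shape[0] or xx + size > target.shape[1]:
                continue  # candidate block falls outside the frame
            cost = sad(block, target[yy:yy + size, xx:xx + size])
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# shift a random frame by (2, 3) and recover the motion vector
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (32, 32), dtype=np.uint8)
shifted = np.roll(frame, (2, 3), axis=(0, 1))
mv = best_match(frame, shifted, 12, 12, size=8, search=4)
```

The exhaustive double loop over candidate displacements is exactly the regular, data-parallel structure that makes SAD attractive for VLSI implementation; more complex criteria such as normalized correlation add multiplications and a normalization step to each candidate.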
Abstract:
An association between anorexia nervosa (AN) and low bone mass has been demonstrated. Bone loss associated with AN involves hormonal and nutritional impairments, though their exact contributions are not clearly established. We compared bone mass in AN patients with that of women of similar weight without criteria for AN, and with a third group of healthy, normal-weight, age-matched women. The study included forty-eight patients with AN, twenty-two healthy eumenorrhoeic women with low weight (LW group; BMI < 18.5 kg/m2), and twenty healthy women with BMI > 18.5 kg/m2 (control group), all of similar age. We measured lean body mass, percentage fat mass, total bone mineral content (BMC), and bone mineral density in the lumbar spine (BMD LS) and in total (tBMD). We also measured anthropometric parameters, leptin, and growth hormone. The control group had greater tBMD and BMD LS than the other groups, with no differences between the AN and LW groups. No differences were found in tBMD, BMD LS, or total BMC between the restrictive (n = 25) and binge-purge (n = 23) types of AN. In AN, minimum weight (P = 0.002) and percentage fat mass (P = 0.02) explained the variation in BMD LS (r2 = 0.48), and minimum weight (r2 = 0.42; P = 0.002) explained tBMD in stepwise regression analyses. In the LW group, BMI explained BMD LS (r2 = 0.72; P = 0.01) and tBMD (r2 = 0.57; P = 0.04). We conclude that patients with AN had BMD similar to that of healthy thin women. Anthropometric parameters may contribute more significantly than oestrogen deficiency to the achievement of peak bone mass in AN patients.