992 results for Depth Estimation


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than in the standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve which minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we obtain accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based or shape-based information. Our registration technique profits from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor substantially deforming surrounding structures, which constitutes a highly challenging problem.
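
As a minimal sketch of this kind of energy-minimization registration, the following gradient descent couples an optical-flow data term with a smoothness regularizer; the active-contour terms of the actual framework are omitted, and all settings are illustrative:

```python
import numpy as np

def register_dense(fixed, moving, n_iter=500, alpha=0.5, step=0.1):
    """Gradient descent on E(u,v) = data term + alpha * smoothness term.
    Returns a dense 2-D deformation field (u, v) mapping 'moving' onto 'fixed'."""
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    u = np.zeros(fixed.shape)
    v = np.zeros(fixed.shape)
    Iy, Ix = np.gradient(moving)       # image gradients (rows ~ y, cols ~ x)
    It = moving - fixed                # inter-image difference
    for _ in range(n_iter):
        # residual of the linearized brightness-constancy data term
        r = Ix * u + Iy * v + It
        # discrete Laplacians drive the smoothness (regularization) term
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u -= step * (r * Ix - alpha * lap_u)
        v -= step * (r * Iy - alpha * lap_v)
    return u, v
```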

Relevance:

20.00%

Publisher:

Abstract:

Summary: Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. In a great variety of situations, the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
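
A toy sketch of the two-phase scheme for a Poisson model, with a median start and a hard probability cutoff standing in for the minimum disparity estimator and adaptive weights of the actual procedure:

```python
import numpy as np
from scipy.stats import poisson

def weighted_ml_poisson(x, cutoff=1e-3, n_rounds=3):
    """Phase 1: crude robust initial fit flags outliers.
    Phase 2: weighted MLE computed on the downweighted sample."""
    lam = np.median(x)                     # robust (if inefficient) start
    for _ in range(n_rounds):
        p = poisson.pmf(x, lam)            # model probability of each count
        w = (p > cutoff).astype(float)     # 0/1 weights: drop implausible points
        lam = np.sum(w * x) / np.sum(w)    # weighted MLE for the Poisson mean
    return lam

x = np.array([1, 2, 0, 3, 2, 1, 2, 25])   # one gross outlier
print(weighted_ml_poisson(x))              # ~1.57, close to the clean-data mean
```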

Relevance:

20.00%

Publisher:

Abstract:

Ground-penetrating radar (GPR) has the potential to provide valuable information on hydrological properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested that the stochastic inversion of crosshole GPR data within a coupled geophysical-hydrological framework may allow for effective estimation of subsurface van Genuchten-Mualem (VGM) parameters and their corresponding uncertainties. An important and still unresolved issue, however, is how to best integrate GPR data into a stochastic inversion in order to estimate the VGM parameters and their uncertainties, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to introduce a fully Bayesian Markov chain Monte Carlo (MCMC) strategy to perform the stochastic inversion of steady-state GPR data to estimate the VGM parameters and their uncertainties. Within this study, the choice of the prior parameter probability distributions from which potential model configurations are drawn and tested against observed data was also investigated. Analysis of both synthetic and field data collected at the Eggborough (UK) site indicates that the geophysical data alone contain valuable information regarding the VGM parameters. However, significantly better results are obtained when these data are combined with a realistic, informative prior. A subsequent study explored in detail the dynamic infiltration case, specifically to what extent time-lapse ZOP GPR data, collected during a forced infiltration experiment at the Arrenaes field site (Denmark), can help to quantify VGM parameters and their uncertainties using the MCMC inversion strategy. The findings indicate that the stochastic inversion of time-lapse GPR data does indeed allow for a substantial refinement in the inferred posterior VGM parameter distributions. In turn, this significantly improves knowledge of the hydraulic properties, which are required to predict hydraulic behaviour. Finally, another aspect that needed to be addressed involved the comparison of time-lapse GPR data collected under different infiltration conditions (i.e., natural loading and forced infiltration conditions) to estimate the VGM parameters using the MCMC inversion strategy. The results show that for the synthetic example, considering data collected during a forced infiltration test helps to better refine soil hydraulic properties compared to data collected under natural infiltration conditions. When investigating data collected at the Arrenaes field site, further complications arose due to model error, underscoring the importance of including a rigorous analysis of the propagation of model error with time and depth when considering time-lapse data. Although the efforts in this thesis were focused on GPR data, the corresponding findings are likely to have general applicability to other types of geophysical data and field environments. Moreover, the obtained results give confidence for future developments in the integration of geophysical data with stochastic inversions to improve the characterization of the unsaturated zone, but they also reveal important issues linked with stochastic inversions, namely model errors, that should be addressed in future research.
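
The backbone of such a stochastic inversion is a Markov chain Monte Carlo sampler over the VGM parameters. The sketch below uses a stand-in forward model, invented priors and synthetic data (none of which reflect the thesis setup) to show the structure of a random-walk Metropolis inversion:

```python
import numpy as np

rng = np.random.default_rng(1)
t_grid = np.linspace(0.1, 1.0, 20)

def forward(theta):
    """Stand-in for the coupled hydrological-GPR forward model."""
    alpha, n = theta
    return alpha * t_grid ** (1.0 / n)

data = forward((3.0, 2.0)) + 0.05 * rng.standard_normal(t_grid.size)
sigma = 0.05

def log_post(theta):
    alpha, n = theta
    if alpha <= 0 or n <= 1:                 # van Genuchten constraint n > 1
        return -np.inf
    log_prior = -0.5*((alpha - 3.0)/1.0)**2 - 0.5*((n - 2.0)/0.5)**2
    log_like = -0.5 * np.sum(((forward(theta) - data) / sigma) ** 2)
    return log_prior + log_like

# Random-walk Metropolis: propose, accept with prob. min(1, posterior ratio)
x = np.array([2.0, 1.5])
lp = log_post(x)
chain = []
for _ in range(20_000):
    prop = x + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain.append(x.copy())

print(np.mean(chain[5_000:], axis=0))   # posterior means, near (3.0, 2.0)
```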

Relevance:

20.00%

Publisher:

Abstract:

A new formula for glomerular filtration rate estimation in the pediatric population, from 2 to 18 years of age, has been developed by the University Unit of Pediatric Nephrology. This quadratic formula, accessible online, allows pediatricians to adjust drug dosage and/or follow up renal function more precisely and in an easy manner.

Relevance:

20.00%

Publisher:

Abstract:

THESIS ABSTRACT: Low-temperature thermochronology relies on the application of radioisotopic systems whose closure temperatures are below the temperatures at which the dated phases are formed. In that sense, the results are interpreted as "cooling ages", in contrast to "formation ages". Owing to the low closure temperatures, it is possible to reconstruct exhumation and cooling paths of rocks during their residence at shallow levels of the crust, i.e. within the first ~10 km of depth. Processes occurring at these shallow depths, such as final exhumation, faulting and relief formation, are fundamental for the evolution of mountain belts. In recent years, it has also become clear that the thermochronological record in orogens can be influenced by relief and reset by heat advection related to the circulation of geothermal fluids after initial cooling. This thesis aims at reconstructing the tectono-thermal history of the Aar massif in the Central Swiss Alps by means of zircon (U-Th)/He, apatite (U-Th)/He and apatite fission track thermochronology. The strategy involved acquisition of a large number of samples from a wide range of elevations in the deeply incised Lötschen valley and a nearby NEAT tunnel. This unique location allowed us to precisely constrain the timing, amount and mechanisms of exhumation of the main orographic feature of the Central Alps, evaluate the role of topography on the thermochronological record and test the impact of hydrothermal activity. Samples were collected from altitudes ranging between 650 and 3930 m and were grouped into five vertical profiles on the surface and one horizontal profile in the tunnel. Where possible, all three radiometric systems were applied to each sample. Zircon (U-Th)/He ages range from 5.1 to 9.4 Ma and are generally positively correlated with altitude. Age-elevation plots reveal a distinct break in slope, which translates into an exhumation rate increasing from ~0.4 to ~3 km/Ma at 6 Ma. This acceleration is independently confirmed by increased cooling rates on the order of 100°C/Ma constrained on the basis of age differences between the zircon (U-Th)/He and the remaining systems. Apatite fission track data also plot on a steep age-elevation curve, indicating rapid exhumation until the end of the Miocene. The 6 Ma event is interpreted as reflecting tectonically driven uplift of the Aar massif. The late Miocene timing implies that the increase of precipitation in the Pliocene did not trigger rapid exhumation in the Aar massif. The Messinian salinity crisis in the Mediterranean could not directly intensify erosion of the Aar, but the associated erosional output from the entire Alps may have tapered the orogenic wedge and caused reactivation of thrusting in the Aar massif. The high exhumation rates in the Messinian were followed by a decrease to ~1.3 km/Ma, as evidenced by ~8 km of exhumation during the last 6 Ma. The slowing of exhumation is also apparent from apatite (U-Th)/He age-elevation data in the northern part of the Lötschen valley, where they plot on a ~0.5 km/Ma line and range from 2.4 to 6.4 Ma. However, the apatite (U-Th)/He and fission track data from the NEAT tunnel indicate a perturbation of the record. The apatite ages are youngest under the axis of the valley, in contrast to the expected pattern in which they would be youngest in the deepest sections of the tunnel due to heat advection into ridges. The valley, however, developed in relatively soft schists, while the ridges are built of solid granitoids. In line with hydrological observations from the tunnel, we suggest that the relatively permeable rocks under the valley floor served as conduits for geothermal fluids that caused reheating, leading to partial helium loss and fission track annealing in apatites. As a consequence, apatite ages from the lowermost samples are too young and the calculated exhumation rates may underestimate true values. This study demonstrates that high-density sampling is indispensable for obtaining meaningful thermochronological data in the Alpine setting. The multi-system approach allows the plausibility of the data to be verified and sources of perturbation to be highlighted.

LAY SUMMARY: During an orogeny, rocks undergo a cycle comprising subduction, deformation, metamorphism and, finally, a return to the surface (exhumation). Exhumation results from deformation within the collision zone, which leads to shortening and thickening of the rock pile and translates into upward movement of rocks, creation of topography and erosion. Since erosion acts as a scraper on the upper part of the pile, attempts have been made to correlate episodes of rapid exhumation with periods of intense erosion driven by climate change. Knowing precisely when and where exhumation occurred is of capital importance for any reconstruction of the evolution of a mountain belt. These constraints are obtained by tracing changes in rock temperature through time, which yields the cooling rate. The moment at which rocks cooled through a given temperature is constrained by radiometric dating techniques. These methods rely on the decay of radiogenic isotopes, such as uranium and potassium, both abundant in rocks of the Earth's crust. The decay products are not retained in the host minerals until the rock cools below a so-called 'closure' temperature, specific to each dating system. For example, the radioactive decay of uranium and thorium atoms produces helium atoms, which escape from a zircon crystal at temperatures above 200°C. By measuring the parent uranium content and the accumulated helium, and knowing the decay rate, it is possible to calculate when the sampled rock passed below 200°C. If the geothermal gradient is known, closure temperatures can be converted into depths (e.g., 200°C ≈ 7 km), and cooling rates into exhumation rates. Moreover, by radiometrically dating vertically spaced samples, the exhumation rate of the sampled section can be constrained directly from the age differences between neighbouring samples. In the Swiss Alps, the Aar massif forms a major orographic structure. With altitudes above 4000 m and spectacular relief of more than 2000 m, the massif dominates the central part of the mountain belt. The rocks now exposed at the surface were buried more than 10 km deep 20 Ma ago, but the present topography of the Aar massif seems to have developed mainly through active uplift over the last few million years, that is, since the late Neogene. This period includes a sudden climatic change that affected Europe about 5 Ma ago and brought heavy precipitation, certainly increasing erosion and accelerating the exhumation of the Alps. In this study, we employed the zircon (U-Th)/He dating system, whose closure temperature of 200°C is low enough to characterize late Neogene/Pliocene exhumation. The samples come from the Lötschental and from the world's deepest railway tunnel (NEAT), located in the western part of the Aar massif. Taken together, these samples span 3000 m of elevation and give ages from 5.1 to 9.4 Ma. The higher (and therefore older) samples document an exhumation rate of 0.4 km/Ma until 6 Ma ago, whereas the lowest samples have similar ages of 6 to 5.4 Ma, giving a rate of up to 3 km/Ma. These data reveal a dramatic acceleration of the exhumation of the Aar massif at 6 Ma. The late Miocene exhumation of the massif therefore predates the Pliocene climate change. However, during the salinity crisis at 6-5.3 Ma (Messinian), the level of the Mediterranean Sea dropped by 3 km. Such a lowering of the erosional base level may have accelerated the exhumation of the Alps, but the southern Alpine basin was too far from the Aar massif to influence its erosion. We conclude that (U-Th)/He dating precisely constrains the timing and exhumation of the Aar massif and that, regarding the tectonics-erosion duality, tectonics predominates in the case of the Aar massif.
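
To make the age-elevation reasoning concrete, the sketch below fits slopes to invented age-elevation pairs patterned on the ranges quoted above (not the thesis data):

```python
import numpy as np

# Hypothetical age-elevation pairs spanning ~3 km of relief and
# zircon (U-Th)/He ages of ~5-9 Ma, purely for illustration.
age = np.array([9.4, 8.5, 7.6, 6.8, 6.0, 5.7, 5.5, 5.3])    # cooling ages, Ma
elev = np.array([3.9, 3.5, 3.1, 2.8, 2.5, 1.9, 1.2, 0.65])  # elevations, km

# In an age-elevation plot, the slope d(elevation)/d(age) approximates the
# exhumation rate, assuming steady topography and vertical rock motion.
pre = age >= 6.0    # samples predating the inferred ~6 Ma acceleration
for label, mask in (("pre-6 Ma", pre), ("post-6 Ma", ~pre)):
    rate = np.polyfit(age[mask], elev[mask], 1)[0]   # km/Ma
    print(label, f"{rate:.2f} km/Ma")   # ~0.4 and ~3, as in the abstract
```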

Relevance:

20.00%

Publisher:

Abstract:

Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as Ṽda = (E2 − E1)μ12/Rda + (2E3 − E1 − E2)·2μ13μ23/Rda², where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results which are in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
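
A worked numerical sketch of this expression (all input values are invented; with energies in eV, transition dipoles in e·Å and distances in Å, the coupling comes out in eV):

```python
import numpy as np

E1, E2, E3 = 0.00, 0.25, 1.80        # ground, charge-transfer, bridge energies (eV)
mu12, mu13, mu23 = 0.40, 0.30, 0.35  # transition dipole moments (e*Angstrom)
Rda = 6.8                            # donor-acceptor plane separation (Angstrom)

V_two_state = (E2 - E1) * mu12 / Rda                    # two-state GMH term
V_super = (2*E3 - E1 - E2) * 2 * mu13 * mu23 / Rda**2   # superexchange correction
# For these numbers the bridge correction is comparable to the direct term,
# illustrating why the two-state estimate alone can be unreliable.
print(V_two_state, V_super, V_two_state + V_super)
```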

Relevance:

20.00%

Publisher:

Abstract:

The clinical demand for a device to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called Pulse Wave Velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of Mean Arterial Pressure (MAP) by measuring the Pulse Transit Time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values ranging from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%, within ±10 mmHg was 91%, and within ±15 mmHg was 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
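
A minimal sketch of such a PTT-to-MAP calibration, assuming a common inverse-transit-time model MAP = a/PTT + b (the paper's actual calibration model is not specified here) and synthetic numbers:

```python
import numpy as np

# Subject-specific calibration pairs: cuff reference MAP (mmHg) vs chest PTT (ms)
ptt_cal = np.array([120.0, 110.0, 100.0, 92.0])
map_cal = np.array([82.0, 89.0, 98.0, 106.0])

# Least-squares fit of MAP = a / PTT + b
A = np.column_stack([1.0 / ptt_cal, np.ones_like(ptt_cal)])
(a, b), *_ = np.linalg.lstsq(A, map_cal, rcond=None)

def map_from_ptt(ptt_ms):
    """Beat-by-beat MAP estimate after calibration."""
    return a / ptt_ms + b

print(map_from_ptt(105.0))   # ~93 mmHg for this synthetic calibration
```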

Relevance:

20.00%

Publisher:

Abstract:

Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models - in the form of Bayesian networks - address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are given primary attention because they allow the scientist to combine prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics, such as the sensitivity of the likelihood ratio to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
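
As a minimal sketch of this kind of Bayesian updating, the snippet below combines a prior with new survey counts via a conjugate Beta-Binomial model; the parameter (background presence of GSR on an individual unconnected to a shooting) and all numbers are hypothetical:

```python
from scipy.stats import beta

a0, b0 = 1.0, 9.0          # prior: background GSR presence believed rare
k, n = 3, 120              # new survey: 3 of 120 sampled individuals positive

# Conjugate update: Beta(a0, b0) prior + Binomial(n, k) data -> Beta posterior
post = beta(a0 + k, b0 + n - k)
print(post.mean())          # posterior point estimate (~0.03)
print(post.interval(0.95))  # 95% credible interval
```

Estimates of this kind can then feed the conditional probability tables of the Bayesian networks developed in Part I.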

Relevance:

20.00%

Publisher:

Abstract:

Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. In summary, we introduce a new method for estimating S-system parameters from time-course profiles, show that its performance compares favorably with competing methods for ideal profiles, and demonstrate that it also allows the determination of parameters for noisy profiles.
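
The sketch below illustrates the estimation problem on a one-dimensional toy S-system, using generic nonlinear least squares as a stand-in for the Newton-flow approach described above (all parameter values are invented):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# S-system form: dX/dt = alpha * X**g - beta * X**h (one-dimensional case)
def s_system(t, x, alpha, g, beta_, h):
    return alpha * x**g - beta_ * x**h

t_obs = np.linspace(0.0, 4.0, 25)
true = (2.0, 0.5, 1.0, 1.2)
clean = solve_ivp(s_system, (0, 4), [0.5], t_eval=t_obs, args=true).y[0]
rng = np.random.default_rng(0)
x_obs = clean + 0.02 * rng.standard_normal(t_obs.size)   # noisy profile

def residuals(theta):
    fit = solve_ivp(s_system, (0, 4), [0.5], t_eval=t_obs, args=tuple(theta))
    if not fit.success or fit.y.shape[1] != t_obs.size:
        return np.full(t_obs.size, 1e3)   # penalize diverging trajectories
    return fit.y[0] - x_obs

est = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0],
                    bounds=([0.1, 0.1, 0.1, 0.1], [5.0, 2.0, 5.0, 2.0]))
print(est.x)   # typically lands near the true values (2.0, 0.5, 1.0, 1.2)
```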

Relevance:

20.00%

Publisher:

Abstract:

Most leadership and management researchers ignore one key design and estimation problem that renders parameter estimates uninterpretable: endogeneity. We discuss the problem of endogeneity in depth and explain conditions that engender it, using examples grounded in the leadership literature. We show how consistent causal estimates can be derived from the randomized experiment, where endogeneity is eliminated by experimental design. We then review the reasons why estimates may become biased (i.e., inconsistent) in non-experimental designs and present a number of useful remedies for examining causal relations with non-experimental data. We write in intuitive terms, using nontechnical language, to make this chapter accessible to a large audience.
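
As a minimal illustration, the simulation below (all coefficients invented) shows how correlation between the regressor and the error biases OLS, and how an instrumental variable, one classic remedy for non-experimental data, recovers the causal effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
z = rng.standard_normal(n)            # instrument: related to x, not to u
u = rng.standard_normal(n)            # structural error
x = 0.8 * z + 0.6 * u + 0.2 * rng.standard_normal(n)   # endogenous regressor
y = 1.0 * x + u                        # true causal effect = 1.0

beta_ols = np.cov(x, y)[0, 1] / np.var(x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]      # simple IV estimator
print(beta_ols)   # biased upward (~1.6) because cov(x, u) != 0
print(beta_iv)    # consistent, ~1.0
```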

Relevance:

20.00%

Publisher:

Abstract:

To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data have been based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments that are characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation with their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates appear to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
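
A minimal sketch of the substitution described above, assuming the common low-loss convention M* = M(1 + i/Q) for the complex modulus (the algorithm's actual parameterization may differ; numbers are merely illustrative of soft sea-bed sediments):

```python
import numpy as np

rho = 1700.0            # bulk density, kg/m^3
vs = 300.0              # real S-wave velocity, m/s
Qs = 8.0                # strong attenuation (Q < 10)

mu_elastic = rho * vs**2                 # real-valued shear modulus
mu_visco = mu_elastic * (1 + 1j / Qs)    # complex-valued viscoelastic equivalent
vs_complex = np.sqrt(mu_visco / rho)     # complex S-wave velocity
print(vs_complex)       # real part ~ propagation, imaginary part ~ loss
```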

Relevance:

20.00%

Publisher:

Abstract:

A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a ‘coding statistic’ is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C. elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
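
The decomposition step can be sketched as a two-component Gaussian mixture fit, in which the mixing proportion of the higher-mean component estimates the coding fraction; the synthetic scores below stand in for values of a real coding statistic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.0, 1.0, 700),   # noncoding windows
    rng.normal(2.5, 0.8, 300),   # coding windows
]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(scores)
coding = np.argmax(gm.means_.ravel())   # component with the higher mean
print(gm.weights_[coding])               # ~0.3: the estimated coding density
```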

Relevance:

20.00%

Publisher:

Abstract:

Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation, in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error remains about 27% even when a poor signal-to-noise ratio is simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a parameter mean estimation error of 25%.
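
A minimal sketch of estimating Zener parameters by recasting the discretized constitutive equation as a linear regression, in the spirit of the ARMA fitting described above; signals and parameter values are synthetic:

```python
import numpy as np

# Zener (standard linear solid) model:
#     sigma + tau_e * dsigma/dt = E0 * (eps + tau_s * deps/dt)
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
eps = 0.05 * (1.0 + np.sin(2 * np.pi * 1.0 * t))   # imposed strain waveform
E0, tau_s, tau_e = 4e5, 0.12, 0.04                 # "true" model parameters
deps = np.gradient(eps, dt)

# Generate the corresponding stress by integrating the model ODE (Euler)
sigma = np.zeros_like(t)
for k in range(1, t.size):
    dsig = (E0 * (eps[k-1] + tau_s * deps[k-1]) - sigma[k-1]) / tau_e
    sigma[k] = sigma[k-1] + dt * dsig

dsig = np.gradient(sigma, dt)
keep = t > 0.5                      # discard the initial transient
# Linear regression: sigma = -tau_e*dsig + E0*eps + (E0*tau_s)*deps
A = np.column_stack([-dsig[keep], eps[keep], deps[keep]])
coef, *_ = np.linalg.lstsq(A, sigma[keep], rcond=None)
print(coef)                          # ~[tau_e, E0, E0*tau_s]
```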

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-Spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
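
A minimal sketch of the forward Eulerian integration step at the core of TDFFD, with a toy analytic velocity field standing in for the sum of spatiotemporal B-spline kernels:

```python
import numpy as np

def velocity(p, t):
    """Toy non-stationary 2-D velocity field: a rotation whose rate decays
    with time (stand-in for the fitted B-spline velocity field)."""
    w = 1.0 / (1.0 + t)
    x, y = p
    return np.array([-w * y, w * x])

def displacement(p0, t0=0.0, t1=1.0, n_steps=100):
    """Forward Euler integration of a material point's trajectory;
    the displacement field value at p0 is the final position minus p0."""
    p = np.array(p0, float)
    dt = (t1 - t0) / n_steps
    for k in range(n_steps):
        p = p + dt * velocity(p, t0 + k * dt)
    return p - np.array(p0)

print(displacement([1.0, 0.0]))   # displacement of the point (1, 0)
```

Spatial derivatives of the displacement field recovered this way are what the strain tensor computation described above operates on.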