947 results for Radar absorber measurements


Abstract:

Purine nucleotide pyrophosphotransferase was purified to apparent homogeneity from a culture filtrate of Streptomyces morookaensis. It is a monomeric protein with a molecular weight of 24 000-25 000 and an isoelectric point of 6.9. The enzyme synthesizes purine nucleoside 5'-(mono-, di- or tri-)phosphate 3'-diphosphates such as pppApp, ppApp, pApp, pppGpp, ppGpp and pppIpp by transferring a pyrophosphoryl group from the 5'-position of ATP, dATP or ppApp to the 3'-position of purine nucleotides. Under the standard conditions, the purified enzyme catalysed the formation of 435 μmol of pppApp and 620 μmol of pppGpp from ATP and GTP per min per mg of protein. The enzyme has an absolute requirement for a divalent cation; the optimum pH for activity lies above 10 with Mg2+, between 9 and 9.5 with Co2+ or Zn2+, and between 7.5 and 8 with Fe2+. The following Michaelis constants were determined with ATP as the pyrophosphoryl donor: AMP, 2.78 mM; ADP, 3.23 mM; GMP, 0.89 mM; GDP, 0.46 mM; and GTP, 1.54 mM. The enzyme is inhibited by guanine, guanosine, dGDP, dGTP, N-bromosuccinimide, iodoacetate, sodium borate and mercuric acetate.
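Purely as an illustration of how the reported Michaelis constants could be used, the sketch below evaluates the standard Michaelis-Menten rate law for each acceptor nucleotide; the Vmax value and the 1 mM substrate concentration are hypothetical placeholders and are not taken from the paper.

```python
# Illustrative only: Michaelis-Menten rate using the Km values reported above.
# Vmax here is a placeholder (hypothetical), not a value from the study.

def michaelis_menten_rate(s_mM: float, km_mM: float, vmax: float) -> float:
    """Initial reaction velocity v = Vmax * [S] / (Km + [S])."""
    return vmax * s_mM / (km_mM + s_mM)

# Reported Km values (mM) with ATP as the pyrophosphoryl donor
km = {"AMP": 2.78, "ADP": 3.23, "GMP": 0.89, "GDP": 0.46, "GTP": 1.54}

vmax = 1.0  # hypothetical, arbitrary units
for acceptor, km_mM in km.items():
    v = michaelis_menten_rate(s_mM=1.0, km_mM=km_mM, vmax=vmax)
    print(f"{acceptor}: v/Vmax at 1 mM substrate = {v:.2f}")
```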

Abstract:

Fluid that fills boreholes in crosswell electrical resistivity investigations provides the necessary electrical contact between the electrodes and the rock formation, but it is also the source of image artifacts in standard inversions that do not account for the effects of the boreholes. The image distortions can be severe for large resistivity contrasts between the rock formation and the borehole fluid and for large borehole diameters. We have carried out 3D finite-element modeling using an unstructured-grid approach to quantify the magnitude of borehole effects for different resistivity contrasts, borehole diameters, and electrode configurations. Relatively common resistivity contrasts of 100:1 and borehole diameters of 10 and 20 cm yielded, for a bipole length of 5 m, apparent resistivity underestimates of approximately 12% and 32% when using AB-MN configurations and apparent resistivity overestimates of approximately 24% and 95% when using AM-BN configurations. Effects are generally more severe at shorter bipole spacings. We report the results obtained by either including or ignoring the boreholes in inversions of 3D field data from a test site in Switzerland, where approximately 10,000 crosswell resistivity-tomography measurements were made across six acquisition planes among four boreholes. Inversions of raw data that ignored the boreholes filled with low-resistivity fluid paradoxically produced high-resistivity artifacts around the boreholes. Including correction factors based on the modeling results for a 1D model with and without the boreholes did not markedly improve the images. The only satisfactory approach was to use a 3D inversion code that explicitly incorporated the boreholes in the actual inversion. This new approach yielded an electrical resistivity image that was devoid of artifacts around the boreholes and that correlated well with coincident crosswell radar images.
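A minimal sketch of the correction-factor idea mentioned above (applying factors derived from 1D modeling with and without the borehole to measured apparent resistivities); the function names and numbers are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' code): applying 1D-model-based correction
# factors to crosshole apparent resistivities, as described above. The factor
# is the ratio of apparent resistivities modeled without and with the borehole.

import numpy as np

def borehole_correction_factor(rho_a_no_borehole: np.ndarray,
                               rho_a_with_borehole: np.ndarray) -> np.ndarray:
    """Per-configuration multiplicative correction factor from 1D modeling."""
    return rho_a_no_borehole / rho_a_with_borehole

def correct_field_data(rho_a_field: np.ndarray, factor: np.ndarray) -> np.ndarray:
    """Apply modeled correction factors to measured apparent resistivities."""
    return rho_a_field * factor

# Hypothetical example: an AB-MN reading that the 1D modeling says is ~12% too low
rho_model_homogeneous = np.array([100.0])   # ohm.m, 1D model without borehole
rho_model_borehole = np.array([88.0])       # ohm.m, 1D model with borehole
rho_field = np.array([85.0])                # ohm.m, measured

factor = borehole_correction_factor(rho_model_homogeneous, rho_model_borehole)
print(correct_field_data(rho_field, factor))
```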

Abstract:

Purpose: Morphometric measurements of the ascending aorta have recently been made with ECG-gated MDCT to help the development of future endovascular therapies (TCT) [1]. However, the variability of these measurements remains unknown. It is therefore of interest to know the impact of computer-aided diagnosis (CAD), with automated segmentation of the vessel and automatic diameter measurements, on the management of ascending aorta aneurysms. Methods and Materials: Thirty patients referred for ECG-gated CT thoracic angiography (64-row CT scanner) were evaluated. Measurements of the maximum and minimum ascending aorta diameters were obtained automatically with a commercially available CAD system and semi-manually by two observers separately. The CAD algorithms segment the contrast-enhanced lumen of the ascending aorta in planes perpendicular to the centreline; the CAD then determines the largest and smallest diameters. Both observers repeated the automatic and semi-manual measurements during a different session at least one month after the first measurements. The Bland and Altman method was used to study the inter- and intraobserver variability, and a Wilcoxon signed-rank test was used to analyse differences between observers. Results: Interobserver variability of the semi-manual measurements between the first and second observers was 1.2 and 1.0 mm for the maximal and minimal diameter, respectively. Intraobserver variability of each observer ranged from 0.8 to 1.2 mm, the lowest variability being produced by the more experienced observer. CAD variability could be as low as 0.3 mm, showing that it can perform better than human observers. However, when used in non-optimal conditions (streak artefacts from contrast in the superior vena cava or weak lumen enhancement), CAD variability can be as high as 0.9 mm, reaching the variability of semi-manual measurements. Furthermore, there were significant differences between the two observers for maximal and minimal diameter measurements (p<0.001). There was also a significant difference between the first observer and CAD for maximal diameter measurements, with the former underestimating the diameter compared to the latter (p<0.001). Minimal diameters were higher when measured by the second observer than when measured by CAD (p<0.001). Neither the difference in mean minimal diameter between the first observer and CAD nor the difference in mean maximal diameter between the second observer and CAD was significant (p=0.20 and 0.06, respectively). Conclusion: CAD algorithms can lessen the variability of diameter measurements in the follow-up of ascending aorta aneurysms. Nevertheless, in non-optimal conditions, it may be necessary to manually correct the measurements. Improvements of the algorithms will help to avoid such situations.
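As a rough illustration of the Bland-Altman and Wilcoxon signed-rank analyses described above, the following sketch computes the bias, 95% limits of agreement and the signed-rank p-value for two hypothetical sets of diameter measurements; the data are invented, not from the study.

```python
# Minimal sketch, not the study's code: Bland-Altman bias and limits of
# agreement plus a Wilcoxon signed-rank test, as used in the abstract.

import numpy as np
from scipy import stats

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return bias (mean difference) and 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

observer1 = np.array([34.1, 36.2, 33.8, 35.0, 37.4])  # mm, hypothetical
cad       = np.array([34.6, 36.0, 34.3, 35.5, 37.9])  # mm, hypothetical

bias, limits = bland_altman(observer1, cad)
w, p = stats.wilcoxon(observer1, cad)
print(f"bias = {bias:.2f} mm, 95% limits of agreement = {limits}, p = {p:.3f}")
```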

Abstract:

OBJECTIVE: The purpose of this study was to compare the use of different variables to measure the clinical wear of two denture tooth materials in two analysis centers. METHODS: Twelve edentulous patients were provided with full dentures. Two different denture tooth materials (an experimental material and a control) were placed randomly in accordance with a split-mouth design. For wear measurements, impressions were made after an adjustment phase of 1-2 weeks and after 6, 12, 18, and 24 months. The occlusal wear of the posterior denture teeth of 11 subjects was assessed in two study centers by use of plaster replicas and 3D laser-scanning methods. In both centers sequential scans of the occlusal surfaces were digitized and superimposed. Wear was described by use of four different variables. Statistical analysis was performed after log-transformation of the wear data by use of the Pearson and Lin correlations and a mixed linear model. RESULTS: Mean occlusal vertical wear of the denture teeth after 24 months was between 120 μm and 212 μm, depending on the wear variable and material. For three of the four variables, wear of the experimental material was statistically significantly less than that of the control. Comparison of the two study centers, however, revealed that correlation of the wear variables between centers was only moderate, whereas strong correlation was observed among the different wear variables evaluated by each center. SIGNIFICANCE: Moderate correlation was observed for clinical wear measurements by optical 3D laser scanning in two different study centers. For the two denture tooth materials, wear measurements limited to the attrition zones led to the same qualitative assessment.
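For readers unfamiliar with the Lin correlation used above, the sketch below computes Pearson's r and Lin's concordance correlation coefficient on log-transformed wear values for two hypothetical centers; the numbers are invented and the study's actual analysis may differ.

```python
# Minimal sketch (not the study's code): Pearson and Lin concordance
# correlation between log-transformed wear values from two centers.

import numpy as np

def lin_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

center_a = np.log(np.array([120.0, 150.0, 180.0, 210.0, 160.0]))  # um, hypothetical
center_b = np.log(np.array([135.0, 140.0, 200.0, 190.0, 170.0]))  # um, hypothetical

pearson = np.corrcoef(center_a, center_b)[0, 1]
print(f"Pearson r = {pearson:.2f}, Lin CCC = {lin_ccc(center_a, center_b):.2f}")
```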

Abstract:

The study of wave propagation at sonic frequencies in soil allows the determination of elasticity parameters. These parameters are consistent with those measured simultaneously by static loading. The acquisition of in situ elasticity parameters, combined with a laboratory description of the elastoplastic behaviour, can lead to in situ elastoplastic curves.
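A minimal sketch of how elasticity parameters are typically derived from sonic wave velocities, assuming the standard isotropic relations; this is generic, not the paper's procedure, and the soil values are hypothetical.

```python
# Minimal sketch: dynamic elastic parameters from P- and S-wave velocities
# and bulk density, using the standard isotropic elasticity relations.

def dynamic_elastic_parameters(vp: float, vs: float, rho: float):
    """Return (Young's modulus E in Pa, Poisson's ratio nu) for an isotropic medium."""
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))
    e = 2 * rho * vs**2 * (1 + nu)
    return e, nu

# Hypothetical soil values: Vp = 400 m/s, Vs = 200 m/s, rho = 1800 kg/m3
e, nu = dynamic_elastic_parameters(vp=400.0, vs=200.0, rho=1800.0)
print(f"E = {e/1e6:.1f} MPa, nu = {nu:.2f}")
```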

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
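The gradual-deformation perturbation mentioned above is commonly implemented by combining two Gaussian realizations with an angle that controls the perturbation strength; the sketch below shows that generic update and is not the thesis' exact implementation.

```python
# Minimal sketch of the classical gradual-deformation update for Gaussian
# fields: a current realization is combined with an independent realization
# so that the Gaussian statistics are preserved; small theta = small perturbation.

import numpy as np

def gradual_deformation_step(m_current: np.ndarray,
                             m_independent: np.ndarray,
                             theta: float) -> np.ndarray:
    """Combine two standard Gaussian realizations while preserving their statistics."""
    return np.cos(theta) * m_current + np.sin(theta) * m_independent

rng = np.random.default_rng(0)
m = rng.standard_normal(1000)          # current model parameters (standardized)
proposal = rng.standard_normal(1000)   # independent realization
m_new = gradual_deformation_step(m, proposal, theta=0.1)
print(m_new.std())  # variance is approximately preserved (close to 1)
```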

Abstract:

OBJECTIVE: Prospective studies have shown that quantitative ultrasound (QUS) techniques predict the risk of fracture of the proximal femur with standardised risk ratios similar to those of dual-energy x-ray absorptiometry (DXA). Few studies have investigated these devices for the prediction of vertebral fractures. The Basel Osteoporosis Study (BOS) is a population-based prospective study to assess the performance of QUS devices and DXA in predicting incident vertebral fractures. METHODS: 432 women aged 60-80 years were followed up for 3 years. Incident vertebral fractures were assessed radiologically. Bone measurements using DXA (spine and hip) and QUS measurements (calcaneus and proximal phalanges) were performed. Measurements were assessed for their value in predicting incident vertebral fractures using logistic regression. RESULTS: QUS measurements at the calcaneus and DXA measurements discriminated between women with and without incident vertebral fracture (20% height reduction). The relative risks (RRs) for vertebral fracture, adjusted for age, were 2.3 for the Stiffness Index (SI) and 2.8 for the Quantitative Ultrasound Index (QUI) at the calcaneus, and 2.0 for bone mineral density at the lumbar spine. The predictive value (AUC (95% CI)) of QUS measurements at the calcaneus remained highly significant (0.70 for SI, 0.72 for QUI, and 0.67 for DXA at the lumbar spine) even after adjustment for other confounding variables. CONCLUSIONS: QUS of the calcaneus and bone mineral density measurements were shown to be significant predictors of incident vertebral fracture. The RRs for QUS measurements at the calcaneus are of similar magnitude to those for DXA measurements.
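As a schematic illustration of the age-adjusted logistic-regression approach described above (not the study's code or data), one could fit and evaluate such a model as follows; the simulated effect sizes are arbitrary.

```python
# Minimal sketch: age-adjusted logistic regression of incident fracture on a
# QUS variable, and the AUC of the fitted model, on simulated data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 432
age = rng.uniform(60, 80, n)
stiffness_index = rng.normal(85, 15, n)
# Simulated outcome: lower stiffness index -> higher fracture probability
logit = -4.0 + 0.03 * (age - 70) - 0.04 * (stiffness_index - 85)
fracture = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([stiffness_index, age])
model = LogisticRegression().fit(X, fracture)
auc = roc_auc_score(fracture, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```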

Abstract:

BACKGROUND AND AIMS: Little is known regarding the reproducibility of body fat measuring devices; hence, we assessed the between- and within-device reproducibility, and the within-day variability, of body fat measurements. METHODS: Body fat percentage was measured twice in seventeen female students aged between 18 and 20 with a body mass index of 21.9 ± 2.5 kg/m2 (mean ± SD) using seven bipolar bioelectrical impedance devices. Each participant was also measured each hour between 7:00 and 22:00. RESULTS: The correlation between first and second measurements was very high (Spearman r between 0.985 and 1.000, p<0.001), as was the correlation between devices (Spearman r between 0.916 and 0.991, p<0.001). Repeated-measurements analysis showed no differences between devices (p=0.59) or readings (first vs. second: p=0.74). Conversely, significant differences were found between assessment periods throughout the day, measurements made in the morning being lower than those made in the afternoon (F test for repeated values = 6.58, p<0.001). CONCLUSIONS: The between- and within-device reproducibility of body fat measurements is high, enabling the use of multiple devices in a single study. Conversely, small but significant changes in body fat measurements occur during the day, so body fat measurements should be performed at fixed times.
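A minimal sketch of the repeatability check described above, computing the Spearman correlation between first and second readings; the data are invented, not the study's.

```python
# Minimal sketch: Spearman correlation between first and second body-fat
# readings from the same device, on illustrative data.

import numpy as np
from scipy import stats

first  = np.array([22.1, 25.4, 19.8, 28.0, 23.3, 21.0, 26.7])  # %, hypothetical
second = np.array([22.0, 25.5, 19.9, 27.8, 23.4, 21.1, 26.9])  # %, hypothetical

rho, p = stats.spearmanr(first, second)
print(f"Spearman r = {rho:.3f}, p = {p:.3g}")
```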

Abstract:

The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Ionization chambers are therefore calibrated in terms of absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary-particle range. The measured quantity is therefore close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, D̄(w,Q,V₁)^low, averaged over a scoring volume V₁ for a geometry where V₁ is filled with the low-density medium, to the absorbed dose to water, D̄(w,Q,V₂)^low, averaged over a volume V₂ for a geometry where V₂ is filled with water. In the Monte Carlo simulations, D̄(w,Q,V₂)^low is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
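Written out explicitly, the correction factor described above is the following ratio; the symbol k is my shorthand, not necessarily the paper's notation.

```latex
k = \frac{\bar{D}^{\mathrm{low}}_{w,Q,V_1}}{\bar{D}^{\mathrm{low}}_{w,Q,V_2}}
```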

Abstract:

The aim of this study was to provide an insight into normative values of the ascending aorta with regard to novel endovascular procedures, using ECG-gated multi-detector CT angiography. Seventy-seven adult patients (aged 22-83 years, mean age 54.7 years) referred for ECG-gated coronary CT angiography and without ascending aortic abnormalities were evaluated. Measurements at relevant levels of the aortic root and ascending aorta were obtained. Diameter variations of the ascending aorta during the cardiac cycle were also considered.
Mean diameters (mm) were as follows: LV outflow tract 20.3+/-3.4, coronary sinus 34.2+/-4.1, sinotubular junction 29.7+/-3.4 and mid ascending aorta 32.7+/-3.8, with coefficients of variation (CV) ranging from 12 to 17%. Mean distances (mm) were: from the plane passing through the proximal insertions of the aortic valve cusps to the right brachio-cephalic artery (BCA) 92.6+/-11.8, from the plane passing through the proximal insertions of the aortic valve cusps to the proximal coronary ostium 12.1+/-3.7, between both coronary ostia 7.2+/-3.1, minimal arc of the ascending aorta from left coronary ostium to right BCA 52.9+/-9.5, and the fibrous continuity between the aortic valve and the anterior leaflet of the mitral valve 14.6+/-3.3, CV 13-43%. Mean aortic valve area was 582.0+/-131.9 mm2. The variations of the antero-posterior and transverse diameters of the ascending aorta during the cardiac cycle were 8.4% and 7.3%, respectively. Results showed large inter-individual variations in diameters and distances but with limited intra-individual variations during the cardiac cycle. A personalized approach for planning endovascular devices must be considered.
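For clarity, the coefficient of variation quoted above is simply the standard deviation divided by the mean; a minimal sketch with invented diameters follows.

```python
# Minimal sketch: coefficient of variation (CV) as used for the diameters
# and distances reported above, computed on hypothetical values.

import numpy as np

def coefficient_of_variation(values: np.ndarray) -> float:
    """CV in percent: sample standard deviation divided by the mean."""
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical mid-ascending-aorta diameters (mm) across patients
diameters = np.array([28.5, 31.0, 33.2, 36.8, 30.4, 35.1, 32.7])
print(f"CV = {coefficient_of_variation(diameters):.1f}%")
```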

Abstract:

The geometry and connectivity of fractures exert a strong influence on the flow and transport properties of fracture networks. We present a novel approach to stochastically generate three-dimensional discrete networks of connected fractures that are conditioned to hydrological and geophysical data. A hierarchical rejection sampling algorithm is used to draw realizations from the posterior probability density function at different conditioning levels. The method is applied to a well-studied granitic formation using data acquired within two boreholes located 6 m apart. The prior models include 27 fractures with their geometry (position and orientation) bounded by information derived from single-hole ground-penetrating radar (GPR) data acquired during saline tracer tests and optical televiewer logs. Eleven cross-hole hydraulic connections between fractures in neighboring boreholes and the order in which the tracer arrives at different fractures are used for conditioning. Furthermore, the networks are conditioned to the observed relative hydraulic importance of the different hydraulic connections by numerically simulating the flow response. Among the conditioning data considered, constraints on the relative flow contributions were the most effective in determining the variability among the network realizations. Nevertheless, we find that the posterior model space is strongly determined by the imposed prior bounds. Strong prior bounds were derived from GPR measurements and helped to make the approach computationally feasible. We analyze a set of 230 posterior realizations that reproduce all data given their uncertainties assuming the same uniform transmissivity in all fractures. The posterior models provide valuable statistics on length scales and density of connected fractures, as well as their connectivity. In an additional analysis, effective transmissivity estimates of the posterior realizations indicate a strong influence of the DFN structure, in that it induces large variations of equivalent transmissivities between realizations. The transmissivity estimates agree well with previous estimates at the site based on pumping, flowmeter and temperature data.
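A schematic sketch of hierarchical rejection sampling as described above, in which candidate networks must pass successively more expensive conditioning levels before acceptance; all functions are placeholders and do not reproduce the authors' implementation.

```python
# Minimal sketch (placeholders only, not the authors' code): hierarchical
# rejection sampling where a candidate fracture network is checked against
# prior bounds, then the observed hydraulic connections, then the simulated
# flow response, and is accepted only if it passes all levels.

import random

def draw_prior_network():
    """Placeholder: draw fracture positions/orientations within GPR-derived bounds."""
    return {"fractures": [random.random() for _ in range(27)]}

def honors_connections(network) -> bool:
    """Placeholder: check the observed cross-hole connections and arrival order."""
    return random.random() < 0.3

def flow_response_acceptable(network) -> bool:
    """Placeholder: simulate flow and compare relative contributions to the data."""
    return random.random() < 0.1

def hierarchical_rejection_sampler(n_realizations: int):
    accepted = []
    while len(accepted) < n_realizations:
        net = draw_prior_network()             # level 1: prior bounds
        if not honors_connections(net):        # level 2: connectivity data
            continue
        if not flow_response_acceptable(net):  # level 3: flow response
            continue
        accepted.append(net)
    return accepted

print(len(hierarchical_rejection_sampler(5)))
```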

Abstract:

TCP flows from applications such as the web or FTP are well supported by a Guaranteed Minimum Throughput Service (GMTS), which provides a minimum network throughput to the flow and, if possible, an extra throughput. We propose a scheme for a GMTS using Admission Control (AC) that is able to provide different minimum throughputs to different users and that is suitable for "standard" TCP flows. Moreover, we consider a multidomain scenario where the scheme is used in one of the domains, and we propose some mechanisms for the interconnection with neighboring domains. The whole scheme uses a small set of packet classes in a core-stateless network, where each class is assigned a different discarding priority in the queues. The AC method involves only edge nodes and uses a special probing packet flow (marked with the highest discarding priority class) that is sent continuously from ingress to egress along a path. The available throughput on the path is estimated at the egress using measurements of flow aggregates and is then sent back to the ingress. At the ingress, each flow is detected implicitly and then admission controlled. If it is accepted, it receives the GMTS and its packets are marked with the lowest discarding priority class; otherwise, it receives a best-effort service. The scheme is evaluated through simulation in a simple "bottleneck" topology using different traffic loads consisting of "standard" TCP flows that carry files of varying sizes.
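A minimal sketch of the ingress admission decision described above, assuming the egress has already reported the available throughput estimated from the probing flow; the class names and numbers are illustrative, not the paper's implementation.

```python
# Minimal sketch: admission control at the ingress based on the available
# throughput reported back by the egress. Illustrative only.

def admit_flow(requested_min_throughput: float,
               available_throughput: float) -> bool:
    """Accept the flow only if the path can sustain its guaranteed minimum."""
    return available_throughput >= requested_min_throughput

class IngressNode:
    def __init__(self, available_throughput: float):
        self.available = available_throughput  # from egress measurements (Mbps)

    def handle_new_flow(self, min_throughput: float) -> str:
        if admit_flow(min_throughput, self.available):
            self.available -= min_throughput   # reserve the guaranteed part
            return "GMTS: packets marked with lowest discarding priority"
        return "best-effort service"

ingress = IngressNode(available_throughput=10.0)  # Mbps, hypothetical
print(ingress.handle_new_flow(4.0))   # admitted
print(ingress.handle_new_flow(8.0))   # rejected -> best effort
```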

Abstract:

An international exercise, registered as EUROMET project no. 907, was launched to measure both the activity of a solution of ¹²⁴Sb and the photon emission intensities of its decay. The same solution was sent by LNE-LNHB to eight participating laboratories. In order to identify possible biases, the participants were asked to use all activity measurement methods available in their laboratory and then to determine their reference value for the comparison. Thus, measurement results from 4πβ-γ coincidence/anti-coincidence counting, CIEMAT/NIST liquid-scintillation counting, and 4πγ counting with well-type ionization chambers and well-type crystal detectors were provided. The results are compared and show a maximum discrepancy of about 1.6%; possible explanations are proposed.
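For illustration only, a maximum discrepancy such as the roughly 1.6% quoted above can be expressed as the spread of the laboratories' reference values relative to their mean; the values below are invented.

```python
# Minimal sketch with invented values: relative spread between laboratory
# reference activity values, expressed in percent.

activities = [1.000, 0.995, 1.008, 0.992, 1.003, 0.998, 1.006, 1.001]  # relative units, hypothetical

mean = sum(activities) / len(activities)
spread_percent = (max(activities) - min(activities)) / mean * 100
print(f"maximum discrepancy: {spread_percent:.1f}%")
```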

Abstract:

This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on projecting patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information about the surface can be calculated. The implemented technique projects a unique pattern, so it can be used to measure moving surfaces. The structure of the pattern is a grid in which the colors of the slits are selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which permits them to be reconstructed very precisely), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
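A minimal sketch of generating a De Bruijn sequence of the kind used above to assign slit colors, using the standard Lyndon-word construction; the color names and window length are illustrative and not necessarily those of the paper.

```python
# Minimal sketch: De Bruijn sequence B(k, n) over k colors with window length n,
# so that any n consecutive slits form a unique color codeword.

def de_bruijn(k: int, n: int) -> list:
    """Standard recursive (Lyndon-word) construction of a De Bruijn sequence B(k, n)."""
    a = [0] * k * n
    sequence = []

    def db(t: int, p: int):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

colors = ["red", "green", "blue"]       # hypothetical color set
seq = de_bruijn(k=3, n=3)               # every window of 3 colors is unique
slit_colors = [colors[i] for i in seq]
print(len(seq), slit_colors[:9])
```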