937 results for pixel-stack


Relevance:

10.00%

Publisher:

Abstract:

Context
- Hypovascular liver metastases are sometimes difficult to detect because they are highly polymorphic and frequently irregular. Their contrast on hepatic CT scans is often low.
- During a diagnostic reading, the radiologist does not fixate his foveal vision on every pixel of the image. Psychophysics experiments with an eye-tracker show that the radiologist concentrates on a few specific points of the image, called fixations. In this work we are interested in the detection capabilities of the eye when the observer performs a saccade between two fixation points, and more particularly in characterizing the ability of the eye to detect signals lying outside its foveal vision, in what is called peripheral vision.
Objectives
- Characterize the effect of visual eccentricity on contrast detectability in the case of hypovascular liver metastases.
- Collect experimental data in order to build a mathematical model that will eventually make it possible to qualify the imaging system.
- Objectives of the Master's thesis itself:
o learn to operate the eye-tracker;
o translate a medical problem into a reproducible, quantifiable and qualifiable scientific experiment.
Method
We carry out a 2AFC (two-alternative forced-choice) experiment to estimate the detectability of the signal. To do so, we force the observer to maintain the fixation point at a defined location, verified by the eye-tracker. The eccentricity of the tumour signal generated on a hepatic CT slice is the parameter being varied. The observer is shown, in turn, two hepatic CT slices, one containing the standardized tumour signal and the other not; the observer must determine which image most probably contains the pathology.
- This experiment is a simplified model of reality: the radiologist does not fixate a single point during the search but follows a "scanpath". A second, so-called free-search experiment will be carried out if time allows. In this experiment the standardized signal will be known to the observer and there will no longer be a forced fixation point; the eye-tracker will record the scanpath followed by the observer's eye while searching for the signal on a hepatic CT slice. The interest of this experiment lies in observing the correlation between saccades and the discovery of the signal, and it also allows the results of the first experiment to be verified.
Expected results
- Exp. 1: quantify the importance of eccentricity in radiology and help improve search performance.
- Exp. 2: test the validity of the results obtained in the first experiment.
Expected added value
- Collection of data to build a mathematical model capable of determining the quality of the radiological image.
- Possible extension to searching in the three dimensions of the hepatic CT scan.
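
As background for the 2AFC method described above: percent correct in a two-alternative forced-choice task is commonly converted into a detectability index d′ under the equal-variance signal-detection model. A minimal sketch, with hypothetical trial counts rather than data from this work:

from scipy.stats import norm

def dprime_2afc(n_correct, n_trials):
    """Convert 2AFC proportion correct into a detectability index d'.

    Under the standard equal-variance Gaussian model, proportion correct
    in a two-alternative forced-choice task is Pc = Phi(d'/sqrt(2)),
    hence d' = sqrt(2) * Phi^{-1}(Pc).
    """
    pc = n_correct / n_trials
    pc = min(max(pc, 1e-6), 1 - 1e-6)   # avoid infinite d' at 0% or 100% correct
    return 2 ** 0.5 * norm.ppf(pc)

# Hypothetical example: 78 correct responses out of 100 trials at one
# signal eccentricity.
print(round(dprime_2afc(78, 100), 2))   # about 1.09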

Relevance:

10.00%

Publisher:

Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth. Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
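
To make the spectral balancing idea concrete, here is a minimal single-trace sketch in Python; the band limits, the stabilization constant and the toy input are assumptions for illustration, not the processing parameters used in this study:

import numpy as np

def bandlimited_whitening(trace, dt, f_lo, f_hi, eps=1e-3):
    """Flatten the amplitude spectrum of a single trace inside [f_lo, f_hi] Hz.

    Each spectral sample in the pass band is divided by its own (stabilized)
    amplitude, boosting weak frequencies relative to strong ones; frequencies
    outside the band are left untouched.  A "blueing" variant would
    additionally apply a gentle gain ramp increasing with frequency.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.abs(spec)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[band] *= amp[band].mean() / (amp[band] + eps * amp.max())
    return np.fft.irfft(spec, n)

# Hypothetical usage: a 512-sample georadar trace sampled at 1 ns.
rng = np.random.default_rng(0)
trace = rng.standard_normal(512)
balanced = bandlimited_whitening(trace, dt=1e-9, f_lo=25e6, f_hi=200e6)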

Relevance:

10.00%

Publisher:

Abstract:

The process of merging two or more images of the same scene into a single, larger one is known as image mosaicing. Once the construction of a mosaic is complete, the boundaries between the images are usually visible, owing to inaccuracies in the photometric and geometric registrations. Image blending is the stage of the mosaicing pipeline in which these artifacts are minimized or suppressed. Several methodologies in the literature address these problems, but most are oriented towards the creation of terrestrial panoramas, high-resolution artistic images or other applications in which camera positioning or image acquisition are not critical stages. Working with underwater images presents important challenges, due to the presence of scattering (reflections from suspended particles) and light attenuation, and to the extreme physical conditions found thousands of metres deep, with limited control over the acquisition systems and the use of high-cost technology. Images with similar artificial illumination, without a global light source such as the sun, must be stitched together without showing a perceptible seam. Images acquired at great depth have a quality that depends strongly on depth, and their degradation with this factor is very significant. The main objective of this work is to present the main problems of underwater imaging, to select the most suitable strategies and to address the entire acquisition-processing-visualization sequence of the process. The results obtained show that the developed solution, based on an optimal seam selection strategy, gradient-domain fusion in the overlapping regions and adaptive emphasis of images with a low level of detail, yields results of high quality. A strategy amenable to parallel implementation has also been proposed, which allows mosaics kilometres in extent to be processed at a resolution of centimetres per pixel.
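
As a rough illustration of the gradient-domain fusion step mentioned above, here is a toy grayscale sketch under our own assumptions (the actual solution also involves optimal seam selection and adaptive emphasis, which are not reproduced here):

import numpy as np

def gradient_domain_blend(img_a, img_b, mask, n_iter=500):
    """Blend two overlapping grayscale images in the gradient domain.

    mask is 1 where the gradients of img_b are preferred and 0 where those of
    img_a are preferred.  The mixed gradient field is integrated back by
    iteratively solving a Poisson equation (Jacobi sweeps), so the seam is
    hidden instead of pixel values being copied directly across it.
    """
    a = img_a.astype(float)
    b = img_b.astype(float)
    gx = np.where(mask[:, 1:], np.diff(b, axis=1), np.diff(a, axis=1))
    gy = np.where(mask[1:, :], np.diff(b, axis=0), np.diff(a, axis=0))
    # Divergence of the blended gradient field.
    div = np.zeros_like(a)
    div[:, :-1] += gx
    div[:, 1:] -= gx
    div[:-1, :] += gy
    div[1:, :] -= gy
    # Jacobi iterations; boundary values are taken from img_a.
    out = a.copy()
    for _ in range(n_iter):
        out[1:-1, 1:-1] = (out[:-2, 1:-1] + out[2:, 1:-1] +
                           out[1:-1, :-2] + out[1:-1, 2:] - div[1:-1, 1:-1]) / 4.0
    return out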

Relevance:

10.00%

Publisher:

Abstract:

We present sharpened lower bounds on the size of cut-free proofs for first-order logic. Prior results on eliminating cuts from a proof established superexponential lower bounds in the form of a stack of exponentials, with the height of the stack proportional to the maximum depth d of the formulas in the original proof. Our new lower bounds remove the constant of proportionality, giving an exponential stack of height equal to d − O(1). The proof method is based on more efficiently expressing the Gentzen-Solovay cut formulas as low-depth formulas.
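
To fix notation, the "stack of exponentials" can be written with the usual iterated-exponential (tower) function; the schematic comparison below is our paraphrase of the abstract, with c denoting the former constant of proportionality:

\[
  2_0(n) = n, \qquad 2_{k+1}(n) = 2^{\,2_k(n)},
\]
\[
  \text{previous bounds:}\quad 2_{c\,d}(n)\ \text{for some constant } c,
  \qquad
  \text{this work:}\quad 2_{\,d-O(1)}(n).
\]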

Relevance:

10.00%

Publisher:

Abstract:

RATIONALE AND OBJECTIVES: To determine the optimum spatial resolution when imaging peripheral arteries with magnetic resonance angiography (MRA). MATERIALS AND METHODS: Eight vessel diameters ranging from 1.0 to 8.0 mm were simulated in a vascular phantom. A total of 40 three-dimensional FLASH MRA sequences were acquired with incremental variations of fields of view, matrix size, and slice thickness. The accurately known eight diameters were combined pairwise to generate 22 "exact" degrees of stenosis ranging from 42% to 87%. Then, the diameters were measured in the MRA images by three independent observers and with quantitative angiography (QA) software and used to compute the degrees of stenosis corresponding to the 22 "exact" ones. The accuracy and reproducibility of vessel diameter measurements and stenosis calculations were assessed for vessel sizes ranging from 6 to 8 mm (iliac artery), 4 to 5 mm (femoro-popliteal arteries), and 1 to 3 mm (infrapopliteal arteries). The maximum pixel dimension and slice thickness required to obtain a mean error in stenosis evaluation of less than 10% were determined by linear regression analysis. RESULTS: Mean errors on stenosis quantification were 8.8% ± 6.3% for 6- to 8-mm vessels, 15.5% ± 8.2% for 4- to 5-mm vessels, and 18.9% ± 7.5% for 1- to 3-mm vessels. Mean errors on stenosis calculation were 12.3% ± 8.2% for observers and 11.4% ± 15.1% for QA software (P = .0342). To evaluate stenosis with a mean error of less than 10%, the maximum pixel surface, pixel size in the phase direction, and slice thickness should be less than 1.56 mm², 1.34 mm, and 1.70 mm, respectively (voxel size 2.65 mm³) for 6- to 8-mm vessels; 1.31 mm², 1.10 mm, and 1.34 mm (voxel size 1.76 mm³) for 4- to 5-mm vessels; and 1.17 mm², 0.90 mm, and 0.90 mm (voxel size 1.05 mm³) for 1- to 3-mm vessels. CONCLUSION: A higher spatial resolution than currently used should be selected for imaging peripheral vessels.
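
The voxel sizes quoted in parentheses follow directly from the maximum pixel surface and the slice thickness; as an explicit check:

\[
  V_{\text{voxel}} = A_{\text{pixel}} \times \Delta z:\qquad
  1.56\ \text{mm}^2 \times 1.70\ \text{mm} \approx 2.65\ \text{mm}^3,\quad
  1.31\ \text{mm}^2 \times 1.34\ \text{mm} \approx 1.76\ \text{mm}^3,\quad
  1.17\ \text{mm}^2 \times 0.90\ \text{mm} \approx 1.05\ \text{mm}^3.
\]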

Relevance:

10.00%

Publisher:

Abstract:

Water and energy form an inseparable pair. Regarding the water cycle, various ways of recovering part of the energy associated with water have been developed over the past decades, for example through hydroelectric plants. However, the use of this water also entails a large energy consumption, related above all to transport, distribution, treatment, and so on. Wastewater treatment carries a high energy demand (Obis et al., 2009). In energy terms, although the electricity consumption of a WWTP varies with parameters such as the plant configuration and capacity, the load to be treated, etc., the average ratio can be taken as approximately 0.5 kWh·m⁻³. The main operating costs are related to sludge management (28%) and to electricity consumption (25%), around 50% of which corresponds to the biological treatment. Although much research on wastewater treatment aims at reducing operating costs, for the past few decades the feasibility of wastewater itself being an energy source has been investigated, changing the perspective and beginning to see wastewater not as a problem but as a resource. Specifically, it is estimated that domestic wastewater contains 9.3 times more energy than is needed for its treatment by aerobic processes (Shizas et al., 2004). One of the most developed processes linking wastewater treatment and energy production is anaerobic digestion. However, while this technology allows high organic loads to be treated, it generates a nitrogen-rich effluent that must then be treated with other technologies. More recently, another technology relating wastewater treatment to energy production has been investigated: microbial fuel cells (MFCs). This technology makes it possible to obtain electrical energy directly from the degradation of biodegradable substrates (Rabaey et al., 2005). Microbial fuel cells are an emerging technology attracting much attention in the research field, based on the production of electrical energy from biodegradable substrates present in wastewater (Logan, 2008). The principle of a microbial fuel cell is very similar to that of a Daniell cell, in which the oxidation reaction (anodic compartment) and the reduction reaction (cathodic compartment) are separated into two compartments in order to generate an electric current. This study essentially presents the start-up of a microbial fuel cell for the removal of organic matter and nitrogen from wastewater.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: The purpose of this study was to compare a novel compressed sensing (CS)-based single-breath-hold multislice magnetic resonance cine technique with the standard multi-breath-hold technique for the assessment of left ventricular (LV) volumes and function. BACKGROUND: Cardiac magnetic resonance is generally accepted as the gold standard for LV volume and function assessment. LV function is one of the most important cardiac parameters for diagnosis and the monitoring of treatment effects. Recently, CS techniques have emerged as a means to accelerate data acquisition. METHODS: The prototype CS cine sequence acquires 3 long-axis and 4 short-axis cine loops in a single breath-hold (temporal/spatial resolution: 30 ms/1.5 × 1.5 mm²; acceleration factor 11.0) to measure left ventricular ejection fraction (LVEFCS) as well as LV volumes and LV mass using LV model-based 4D software. For comparison, a conventional stack of multi-breath-hold cine images was acquired (temporal/spatial resolution 40 ms/1.2 × 1.6 mm²). As a reference for the left ventricular stroke volume (LVSV), aortic flow was measured by phase-contrast acquisition. RESULTS: In 94% of the 33 participants (12 volunteers: mean age 33 ± 7 years; 21 patients: mean age 63 ± 13 years with different LV pathologies), the image quality of the CS acquisitions was excellent. LVEFCS and LVEFstandard were similar (48.5 ± 15.9% vs. 49.8 ± 15.8%; p = 0.11; r = 0.96; slope 0.97; p < 0.00001). Agreement of LVSVCS with aortic flow was superior to that of LVSVstandard (overestimation vs. aortic flow: 5.6 ± 6.5 ml vs. 16.2 ± 11.7 ml, respectively; p = 0.012), with less variability (r = 0.91; p < 0.00001 for the CS technique vs. r = 0.71; p < 0.01 for the standard technique). The intraobserver and interobserver agreement for all CS parameters was good (slopes 0.93 to 1.06; r = 0.90 to 0.99). CONCLUSIONS: The results demonstrated the feasibility of applying the CS strategy to evaluate LV function and volumes with high accuracy in patients. The single-breath-hold CS strategy has the potential to replace the multi-breath-hold standard cardiac magnetic resonance technique.
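
The overestimation and variability figures above are paired-difference and correlation statistics; a minimal sketch of how such an agreement analysis can be computed, using made-up numbers rather than the study data:

import numpy as np
from scipy import stats

def agreement(method_a, method_b):
    """Agreement between two measurement methods: bias +/- SD of the paired
    differences and the Pearson correlation, as when comparing stroke volumes
    against an aortic-flow reference."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    r, p = stats.pearsonr(a, b)
    return diff.mean(), diff.std(ddof=1), r, p

# Hypothetical stroke volumes in ml (not the study data):
lvsv_cs = [75, 82, 60, 95, 70]
ao_flow = [70, 78, 57, 88, 66]
bias, sd, r, p = agreement(lvsv_cs, ao_flow)
print(f"bias {bias:.1f} +/- {sd:.1f} ml, r = {r:.2f}")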

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The Fragile X-associated Tremor/Ataxia Syndrome (FXTAS) is a recently described, and under-diagnosed, late-onset (≈60 y) neurodegenerative disorder affecting male carriers of a premutation in the Fragile X Mental Retardation 1 (FMR1) gene. The premutation is a CGG (cytosine-guanine-guanine) expansion (55 to 200 CGG repeats) in the proximal region of the FMR1 gene. Patients with FXTAS primarily present with cerebellar ataxia and intention tremor. Neuroradiological features of FXTAS include prominent white matter disease in the periventricular and subcortical regions, the middle cerebellar peduncles and the deep white matter of the cerebellum on T2-weighted or FLAIR MR imaging (Jacquemmont 2007, Loesch 2007, Brunberg 2002, Cohen 2006). We hypothesize that a significant white matter alteration is present in younger individuals many years prior to clinical symptoms and/or the presence of visible lesions on conventional MR sequences, and might be detectable by magnetization transfer (MT) imaging. Methods: Eleven asymptomatic premutation carriers (mean age = 55 years) and seven intra-familial controls participated in the study. A standardized neurological examination was performed on all participants, and a neuropsychological evaluation was carried out before MR scanning performed on a 3T Siemens Trio. The protocol included a sagittal T1-weighted 3D gradient-echo sequence (MPRAGE, 160 slices, 1 mm³ isotropic voxels) and a gradient-echo MTI (FA 30, TE 15, matrix size 256 × 256, pixel size 1 × 1 mm, 36 slices (thickness 2 mm), MT pulse duration 7.68 ms, FA 500, frequency offset 1.5 kHz). MTI was performed by acquiring two sets of images consecutively, first with and then without the MT saturation pulse. MT images were coregistered to the T1 acquisition. The MTR for every intracranial voxel was calculated as MTR = (M0 − MS)/M0 × 100%, creating an MTR map for each subject. As a first analysis, the whole white matter (WM) was used to mask the MTR image in order to create a histogram of the MTR distribution in the whole tissue class over the two groups examined. Then, for each subject, we performed a segmentation and parcellation of the brain by means of the Freesurfer software, starting from the high-resolution T1-weighted anatomical acquisition. The cortical parcellation was used to assign a label to the underlying white matter by constructing a Voronoi diagram in the WM voxels of the MR volume based on the distance to the nearest cortical parcellation label. This procedure allowed us to subdivide the cerebral WM into 78 ROIs according to the cortical parcellation (see example in Fig 1). The cerebellum was subdivided by the same procedure into 5 ROIs (two per hemisphere and one corresponding to the brainstem). For each subject, we calculated the mean MTR value within each ROI and averaged over controls and patients. Significant differences between the two groups were tested using a two-sample T-test (p < 0.01). Results: The neurological examination showed that no participant yet met the clinical criteria of Fragile X Tremor and Ataxia Syndrome. Nonetheless, premutation carriers showed some subtle neurological signs of the disorder: a significant increase in tremor (CRST, T-test p = 0.007) and in ataxia (ICARS, p = 0.004) when compared to controls. The neuropsychological evaluation was normal in both groups.
To obtain a general characterization of myelination in each group, we first computed the distribution of MTR values across the total white matter volume and averaged it over each group. We tested the equality of the two distributions with the non-parametric Kolmogorov-Smirnov test and rejected the null hypothesis at p = 0.03 (fig. 2). As expected, when comparing the asymptomatic premutation carriers with control subjects, the peak value and peak position of the MTR values within the whole WM were decreased and the width of the distribution curve was increased (p < 0.01). These three changes point to an alteration of the global myelin status of the premutation carriers. Subsequently, to analyze the regional myelination and white matter integrity of the same group, we performed a ROI analysis of the MTR data. The ROI-based analysis showed a decrease of the mean MTR value in premutation carriers compared to controls in bilateral orbito-frontal and inferior frontal WM, entorhinal and cingulum regions and the cerebellum (Fig 3). These differences could not be detected with conventional MR techniques. Conclusions: These preliminary data confirm that in premutation carriers there are indeed alterations in "normal appearing white matter" (NAWM) and that these alterations are visible with the MT technique. These results indicate that MT imaging may be a relevant approach to detect both global and local alterations within NAWM in "asymptomatic" carriers of premutations in the Fragile X Mental Retardation 1 (FMR1) gene. The sensitivity of MT in the detection of these alterations might point towards a specific physiopathological mechanism linked to an underlying myelin disorder. ROI-based analyses show that the frontal, parahippocampal and cerebellar regions are already significantly affected before the onset of symptoms. A larger sample will allow us to determine the minimum CGG expansion and age associated with these subclinical white matter alterations.
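
A minimal sketch of the voxel-wise MTR computation defined above (array names and values are hypothetical), applied to the coregistered pair of images acquired with and without the MT saturation pulse:

import numpy as np

def mtr_map(m0, ms, eps=1e-6):
    """Voxel-wise magnetization transfer ratio map.

    Implements MTR = (M0 - MS) / M0 * 100 %, where M0 is the image acquired
    without the MT saturation pulse and MS the coregistered image acquired
    with it.  Voxels with (near-)zero M0 are set to zero.
    """
    m0 = np.asarray(m0, dtype=float)
    ms = np.asarray(ms, dtype=float)
    return np.where(m0 > eps, (m0 - ms) / np.maximum(m0, eps) * 100.0, 0.0)

# Hypothetical 2 x 2 example: stronger saturation gives a higher MTR.
m0 = np.array([[100.0, 120.0], [0.0, 80.0]])
ms = np.array([[ 60.0,  90.0], [0.0, 76.0]])
print(mtr_map(m0, ms))   # [[40. 25.] [ 0.  5.]]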

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: The purposes of this study were (1) to develop a high-resolution 3-T magnetic resonance angiography (MRA) technique with an in-plane resolution approximating that of multidetector coronary computed tomography (MDCT) and a voxel size of 0.35 × 0.35 × 1.5 mm³ and (2) to investigate the image quality of this technique in healthy participants and, preliminarily, in patients with known coronary artery disease (CAD). MATERIALS AND METHODS: A 3-T coronary MRA technique optimized for an image acquisition voxel as small as 0.35 × 0.35 × 1.5 mm³ (high-resolution coronary MRA [HRC]) was implemented and the coronary arteries of 22 participants were imaged. These included 11 healthy participants (average age, 28.5 years; 5 men) and 11 participants with CAD (average age, 52.9 years; 5 women) as identified on MDCT. In addition, the 11 healthy participants were imaged using a method with a more common spatial resolution of 0.7 × 1 × 3 mm³ (regular-resolution coronary MRA [RRC]). Qualitative and quantitative comparisons were made between the 2 MRA techniques. RESULTS: Normal vessels and CAD lesions were successfully depicted at 350 × 350 μm² in-plane resolution with adequate signal-to-noise ratio (SNR) and contrast-to-noise ratio. The CAD findings were consistent between MDCT and HRC. The HRC showed a 47% improvement in sharpness despite a reduction in SNR (by 72%) and in contrast-to-noise ratio (by 86%) compared with the RRC. CONCLUSION: This study, as a first step toward substantial improvement in the resolution of coronary MRA, demonstrates the feasibility of obtaining at 3 T a spatial resolution that approximates that of MDCT. The acquisition in-plane pixel dimensions are as small as 350 × 350 μm² with a 1.5-mm slice thickness. Although SNR is lower, the images have improved sharpness, resulting in image quality that allows qualitative identification of disease sites on MRA consistent with MDCT.
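
For scale, the nominal voxel volumes of the two protocols work out as follows (the ratio is our arithmetic rather than a figure from the study, and it is consistent with the reported SNR penalty of the high-resolution acquisition):

\[
  V_{\text{HRC}} = 0.35 \times 0.35 \times 1.5\ \text{mm}^3 \approx 0.18\ \text{mm}^3,\qquad
  V_{\text{RRC}} = 0.7 \times 1.0 \times 3.0\ \text{mm}^3 = 2.1\ \text{mm}^3,\qquad
  V_{\text{RRC}}/V_{\text{HRC}} \approx 11.
\]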

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a method for extracting the most relevant contours of an image. The method integrates local contour information from the chromatic components H, S and I, taking into account the coherence of the local contour orientations obtained from each of these components. The process is based on parametrizing the local contours (magnitude and orientation values) pixel by pixel in the H, S and I images; this is carried out individually for each chromatic component. If the dispersion of the orientation values obtained for a chromatic component is high, that component loses relevance. A final processing step integrates the contours extracted from the three chromatic components, generating the so-called integrated contour image.
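
One plausible reading of the scheme, sketched in Python; the coherence measure and the window size are our assumptions, not the paper's exact parametrization:

import numpy as np
from scipy.ndimage import uniform_filter

def channel_contours(channel):
    """Local contour magnitude and orientation for one chromatic component."""
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def integrated_contours(h, s, i, win=5):
    """Combine H, S and I contours, down-weighting a component wherever its
    local orientation field is dispersed (i.e. incoherent)."""
    total = 0.0
    weight_sum = 1e-6
    for chan in (h, s, i):
        mag, ori = channel_contours(chan)
        # Orientation coherence = resultant length of locally averaged unit
        # vectors; doubling the angles makes the measure pi-periodic.
        coherence = np.hypot(uniform_filter(np.cos(2 * ori), win),
                             uniform_filter(np.sin(2 * ori), win))
        total = total + coherence * mag
        weight_sum = weight_sum + coherence
    return total / weight_sum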

Relevance:

10.00%

Publisher:

Abstract:

It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage where the algorithms deal with a great number of pixels. One way to estimate motion involves detecting correspondences between two images. For normalised correlation criteria, previous experiments have shown that the result is not altered in the presence of non-uniform illumination. Usually, hardware for motion estimation has been limited to simple correlation criteria. The main goal of this paper is to propose a VLSI architecture for motion estimation using a matching criterion more complex than the Sum of Absolute Differences (SAD). Today's hardware devices provide many facilities for the integration of more and more complex designs, as well as the possibility to communicate easily with general-purpose processors.
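
To make the criterion trade-off concrete, here is a plain software sketch (not the proposed VLSI datapath) of SAD versus zero-mean normalized correlation block matching:

import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences: cheap, but sensitive to illumination changes."""
    return np.abs(block_a.astype(float) - block_b.astype(float)).sum()

def zncc(block_a, block_b):
    """Zero-mean normalized cross-correlation: costlier, but robust to
    (locally) non-uniform illumination."""
    a = block_a.astype(float) - block_a.mean()
    b = block_b.astype(float) - block_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(block, search_area, criterion=zncc, maximize=True):
    """Exhaustive block matching of `block` inside `search_area`."""
    h, w = block.shape
    best = -np.inf if maximize else np.inf
    best_pos = (0, 0)
    for y in range(search_area.shape[0] - h + 1):
        for x in range(search_area.shape[1] - w + 1):
            score = criterion(block, search_area[y:y + h, x:x + w])
            if (score > best) if maximize else (score < best):
                best, best_pos = score, (y, x)
    return best_pos, best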

Relevance:

10.00%

Publisher:

Abstract:

Fission-track and ⁴⁰Ar/³⁹Ar ages place time constraints on the exhumation of the North Himalayan nappe stack, the Indus Suture Zone and Molasse, and the Transhimalayan Batholith in eastern Ladakh (NW India). Results from this and previous studies on a north-south transect passing near Tso Morari Lake suggest that the SW-directed North Himalayan nappe stack (comprising the Mata, Tetraogal and Tso Morari nappes) was emplaced and metamorphosed by c. 50-45 Ma, and exhumed to moderately shallow depths (c. 10 km) by c. 45-40 Ma. From the mid-Eocene to the present, exhumation continued at a steady and slow rate except for the root zone of the Tso Morari nappe, which cooled faster than the rest of the nappe stack. Rapid cooling occurred at c. 20 Ma and is linked to brittle deformation along the normal Ribil-Zildat Fault concomitant with extrusion of the Crystalline nappe in the south. Data from the Indus Molasse suggest that sediments were still being deposited during the Miocene.

Relevance:

10.00%

Publisher:

Abstract:

The Western Alpine Arc was created during the Cretaceous and Tertiary orogenies. The interference patterns of the Tertiary structures suggest their formation during continental collision of the European and the Adriatic Plates, with an accompanying anticlockwise rotation of the Adriatic indenter. Extensional structures are mainly related to ductile deformation by simple shear. These structures developed at a deep tectonic level, in granitic crustal rocks, at depths in excess of 10 km. In the early Palaeogene period of the Tertiary Orogeny, the main Tertiary nappe emplacement resulted from NW-thrusting of the Austroalpine, Penninic and Helvetic nappes. Heating of the deep zone of the Upper Cretaceous and Tertiary nappe stack by geothermal heat flow is responsible for the Tertiary regional metamorphism, reaching amphibolite-facies conditions in the Lepontine Gneiss Dome (geothermal gradient 25 °C/km). The Tertiary thrusting occurred mainly during prograde metamorphic conditions with the creation of a penetrative NW-SE-oriented stretching lineation, X₁ (finite extension), parallel to the direction of simple shear. The earliest cooling after the culmination of the Tertiary metamorphism, some 38 Ma ago, is recorded by the cooling curves of the Monte Rosa and Mischabel nappes to the west and the Suretta Nappe to the east of the Lepontine Gneiss Dome. The onset of dextral transpression, with a strong extension parallel to the mountain belt, and the oldest S-vergent 'backfolding' took place some 35 to 30 Ma ago during retrograde amphibolite-facies conditions and before the intrusion of the Oligocene dikes north of the Periadriatic Line. The main updoming of the Lepontine Gneiss Dome started some 32-30 Ma ago with the intrusion of the Bergell tonalites and granodiorites, concomitant with S-vergent backfolding and backthrusting and dextral strike-slip movements along the Tonale and Canavese Lines (Argand's Insubric phase). Subsequently, the center of main updoming migrated slowly to the west, reaching the Simplon region some 20 Ma ago. This was contemporaneous with the westward migration of the Adriatic indenter. Between 20 Ma and the present, the Western Aar Massif-Toce culmination was the center of strong uplift. The youngest S-vergent backfolds, the Glishorn anticline and the Berisal syncline, fold the 12 Ma Rb/Sr biotite isochron and are cut by the 11 Ma old Rhone-Simplon Line. The discrete Rhone-Simplon Line represents a late retrograde manifestation in the pre-existing ductile Simplon Shear Zone. This fault zone is still active today. The Oligocene-Neogene dextral transpression and extension in the Simplon area were concurrent with thrusting of the Helvetic nappes and the Prealpes to the northwest (35-15 Ma) and with the thin-skinned Jura thrust (11-3 Ma). They were also contemporaneous with thrusting to the south along the Bergamasc (> 35-5 Ma) and Milan thrusts (16-5 Ma).

Relevance:

10.00%

Publisher:

Abstract:

The objective of traffic engineering is to optimize network resource utilization. Although several works have been published about minimizing network resource utilization, few have focused on LSR (label switched router) label space. This paper proposes an algorithm that takes advantage of MPLS label stack features in order to reduce the number of labels used in LSPs. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The described algorithm sets up NHLFE (next hop label forwarding entry) tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the described algorithm achieves a large reduction factor in the label space. The presented work applies to both types of connections: P2MP (point-to-multipoint) and P2P (point-to-point).
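
As a rough, purely illustrative sketch of the underlying idea (label stacking lets LSPs that share a segment reuse a single outer label there, shrinking the per-router label tables); the data structures and the way the shared segment is chosen are our simplifications, not the algorithm of the paper:

from collections import defaultdict

def build_nhlfe(lsps):
    """Toy illustration of label-space reduction through label stacking.

    lsps: a list of LSP paths, each a list of router names.  LSPs whose
    transit segment is identical reuse one outer (tunnel) label on that
    segment and keep a per-LSP inner label, so the transit routers need a
    single NHLFE entry for the whole bundle instead of one entry per LSP.
    """
    next_label = 16                    # first label outside the reserved range
    nhlfe = defaultdict(dict)          # router -> {incoming label: (action, out)}
    tunnels = {}                       # transit segment -> outer label

    def new_label():
        nonlocal next_label
        next_label += 1
        return next_label

    for path in lsps:
        inner = new_label()            # per-LSP inner label
        segment = tuple(path[1:-1])    # simplistic choice of shared segment
        if segment and segment not in tunnels:
            tunnels[segment] = new_label()
        outer = tunnels.get(segment) if segment else None
        nhlfe[path[0]][inner] = ("push", [outer, inner] if outer else [inner])
        for idx, hop in enumerate(segment):
            if idx < len(segment) - 1:
                nhlfe[hop][outer] = ("swap", outer)   # one shared entry per hop
            else:
                nhlfe[hop][outer] = ("pop", None)     # expose the inner label
        nhlfe[path[-1]][inner] = ("deliver", None)
    return dict(nhlfe), tunnels

# Two LSPs sharing the transit segment B-C use a single outer label on it.
tables, shared = build_nhlfe([["A", "B", "C", "D"], ["E", "B", "C", "F"]])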

Relevance:

10.00%

Publisher:

Abstract:

The aim of traffic engineering is to optimise network resource utilization. Although several works on minimizing network resource utilization have been published, few have focused on LSR label space. This paper proposes an algorithm that uses MPLS label stack features in order to reduce the number of labels used in LSP forwarding. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The algorithm described sets up the NHLFE tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the algorithm achieves a large reduction factor in the label space. The work presented here applies to both types of connections: P2MP and P2P.