86 results for Automated proof
Abstract:
The cardiac late Na+ current is generated by a small fraction of voltage-dependent Na+ channels that undergo a conformational change to a burst-gating mode, with repeated openings and closures during the action potential (AP) plateau. Its magnitude can be augmented by inactivation-defective mutations, myocardial ischemia, or prolonged exposure to chemical compounds leading to drug-induced (di) long QT syndrome, and results in an increased susceptibility to cardiac arrhythmias. Using CytoPatch™ 2 automated patch-clamp equipment, we performed whole-cell recordings in HEK293 cells stably expressing human Nav1.5, and measured the late Na+ component as the average current over the last 100 ms of 300 ms depolarizing pulses to -10 mV from a holding potential of -100 mV, with a repetition frequency of 0.33 Hz. Averaged values in different steady-state experimental conditions were further corrected by subtraction of the average current recorded during application of 30 μM tetrodotoxin (TTX). We show that ranolazine at 10 and 30 μM applied for 3 min reduced the late Na+ current to 75.0 ± 2.7% (mean ± SEM, n = 17) and 58.4 ± 3.5% (n = 18) of initial levels, respectively, while a 5 min application of veratridine 1 μM resulted in a reversible current increase to 269.1 ± 16.1% (n = 28) of initial values. Using fluctuation analysis, we observed that ranolazine 30 μM decreased the mean open probability p from 0.6 to 0.38 without modifying the number of active channels n, while veratridine 1 μM increased n 2.5-fold without changing p. In human iPSC-derived cardiomyocytes, veratridine 1 μM reversibly increased APD90 2.12 ± 0.41-fold (mean ± SEM, n = 6). This effect is attributable to removal of inactivation in Nav1.5 channels, since significant inhibitory effects on hERG current were detected only at higher concentrations in hERG-expressing HEK293 cells, with 28.9 ± 6.0% inhibition (mean ± SD, n = 10) at 50 μM veratridine.
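For context, the fluctuation (noise) analysis referred to above relates the mean current and its variance to the number of active channels and their open probability; the relations below are the standard stationary form, with i denoting the single-channel current (a generic symbol, not a value reported in this abstract).

```latex
% Stationary fluctuation (noise) analysis for N identical, independent channels.
% Ibar : mean macroscopic current, sigma^2 : current variance,
% i : single-channel current (generic symbol), p : open probability.
\[
  \bar I \;=\; N\, i\, p,
  \qquad
  \sigma_I^{2} \;=\; N\, i^{2}\, p\,(1-p) \;=\; i\,\bar I - \frac{\bar I^{\,2}}{N}.
\]
% Fitting sigma^2 against Ibar yields i and N, and p follows from p = Ibar/(N i).
```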
Abstract:
INTRODUCTION Native MR angiography (N-MRA) is considered an imaging alternative to contrast-enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences in aortic lumen diameters as well as differences in length between both acquisition techniques at every possible location. METHODS AND MATERIALS Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed, CE-MRA: gradient-echo Turbo FLASH sequence, and N-MRA: respiratory- and cardiac-gated, T2-weighted 3D SSFP. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid errors due to side branches. Diameters, volumes, and centerline lengths were computed for intraindividual comparison. For statistical analysis the Wilcoxon signed-rank test was used. RESULTS Average centerline length obtained from N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA, without significant difference (P=0.877). Average maximum diameter obtained from N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained from N-MRA (100.1±35.4 cm³) were consistently and significantly larger than for CE-MRA (89.2±30.0 cm³) (P<0.001). CONCLUSIONS 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.
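A minimal sketch of the paired comparison described in the statistics section, using SciPy's Wilcoxon signed-rank test on per-volunteer maximum diameters; the arrays and values below are illustrative, not the study data.

```python
# Minimal sketch: paired comparison of N-MRA vs CE-MRA measurements with the
# Wilcoxon signed-rank test (the numbers below are hypothetical placeholders).
import numpy as np
from scipy.stats import wilcoxon

# One maximum aortic diameter (mm) per volunteer and sequence (illustrative values)
diam_nmra = np.array([25.1, 26.3, 24.8, 27.0, 25.9])
diam_cemra = np.array([23.9, 24.7, 23.5, 25.6, 24.4])

stat, p_value = wilcoxon(diam_nmra, diam_cemra)
print(f"Wilcoxon statistic = {stat:.2f}, P = {p_value:.4f}")
print(f"Mean paired difference = {np.mean(diam_nmra - diam_cemra):.2f} mm")
```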
Abstract:
BACKGROUND Precise detection of volume change allows for better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) for CT volumetric measurement and to analyze the effect of soft (B30) and hard (B70) reconstruction filters on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 × 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms for both reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With soft reconstruction filters the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those from the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. Volume measurements with the soft filter (B30) were significantly larger than with the hard filter (B70): 11.2% for LMS and 1.6% for LungCARE®, respectively (both P < 0.05). LMS measured greater volumes with both filters, 13.6% for the soft and 3.8% for the hard filter, respectively (P < 0.01 and P > 0.05). CONCLUSION There is substantial inter-software (LMS/LungCARE®) as well as intra-software (B30/B70) variability in lung nodule volume measurement; it is therefore mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
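A sketch of the comparison metrics mentioned above; the abstract does not give the exact VME% formula, so the example normalizes the paired difference to the mean of the two measurements (Bland-Altman style), and the nodule volumes are illustrative.

```python
# Sketch of a relative volume-difference / limits-of-agreement analysis.
# The exact VME% definition is not stated in the abstract; here the relative
# difference is normalized to the mean of the two measurements (Bland-Altman style).
import numpy as np

def relative_difference_stats(vol_a, vol_b):
    """Return mean relative difference (%) and its 95% limits of agreement."""
    vol_a, vol_b = np.asarray(vol_a, float), np.asarray(vol_b, float)
    rel_diff = 100.0 * (vol_a - vol_b) / ((vol_a + vol_b) / 2.0)
    mean_diff = rel_diff.mean()
    half_width = 1.96 * rel_diff.std(ddof=1)
    return mean_diff, mean_diff - half_width, mean_diff + half_width

# Hypothetical nodule volumes (mm^3) from the two software packages
lms = [120.0, 85.0, 240.0, 60.0]
lungcare = [95.0, 70.0, 180.0, 55.0]
mean_pct, lower, upper = relative_difference_stats(lms, lungcare)
print(f"Mean difference {mean_pct:.1f}%, limits of agreement [{lower:.1f}%, {upper:.1f}%]")
```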
Abstract:
This paper proposes an automated 3D lumbar intervertebral disc (IVD) segmentation strategy for MRI data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based approach. A three-dimensional (3D) variable-radius soft-tube model of the lumbar spine column is then built to guide the 3D disc segmentation. The disc segmentation is achieved as a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets showed the robustness and accuracy of the proposed algorithm.
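The registration step can be summarized by the generic diffeomorphic template-matching energy below (LDDMM form); this is the standard textbook formulation, not necessarily the exact multi-kernel variant used by the authors, and T, I, v_t and sigma are generic symbols.

```latex
% Generic diffeomorphic template-to-image matching energy (LDDMM form).
% T : disc template, I : observed MRI volume, v_t : time-dependent velocity field,
% phi_1 : end-point diffeomorphism generated by v_t, V : RKHS norm
% (in a multi-kernel variant, the kernel defining V is a sum of kernels of
%  different widths).
\[
  E(v) \;=\; \int_0^1 \lVert v_t \rVert_V^2 \, dt
        \;+\; \frac{1}{\sigma^2}\,
        \bigl\lVert T \circ \varphi_1^{-1} - I \bigr\rVert_{L^2}^2 ,
  \qquad
  \partial_t \varphi_t = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id}.
\]
```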
Abstract:
In electroweak-boson production processes with a jet veto, higher-order corrections are enhanced by logarithms of the veto scale over the invariant mass of the boson system. In this paper, we resum these Sudakov logarithms at next-to-next-to-leading logarithmic accuracy and match our predictions to next-to-leading-order (NLO) fixed-order results. We perform the calculation in an automated way, for arbitrary electroweak final states and in the presence of kinematic cuts on the leptons produced in the decays of the electroweak bosons. The resummation is based on a factorization theorem for the cross sections into hard functions, which encode the virtual corrections to the boson production process, and beam functions, which describe the low-pT emissions collinear to the beams. The one-loop hard functions for arbitrary processes are calculated using the MadGraph5_aMC@NLO framework, while the beam functions are process independent. We perform the resummation for a variety of processes, in particular for W+W− pair production followed by leptonic decays of the W bosons.
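The factorization structure described here can be written schematically as follows; this is the generic hard-function-times-beam-functions form for a jet-vetoed cross section, not the paper's exact formula, and M, pT^veto, xi_1,2 and mu are generic symbols.

```latex
% Schematic jet-veto factorization (generic structure, not the paper's exact formula).
% H : hard function (virtual corrections to the boson production process),
% B : beam functions for the low-pT emissions collinear to each beam,
% M : invariant mass of the electroweak-boson system, pT^veto : jet-veto scale.
\[
  \sigma\!\left(p_T^{\mathrm{veto}}\right)
  \;\simeq\;
  H\!\left(M,\mu\right)\,
  B\!\left(\xi_1, p_T^{\mathrm{veto}},\mu\right)\,
  B\!\left(\xi_2, p_T^{\mathrm{veto}},\mu\right),
  \qquad
  L \equiv \ln\frac{M^2}{\left(p_T^{\mathrm{veto}}\right)^2},
\]
% with the resummation reorganizing the Sudakov logarithms alpha_s^n L^m
% to NNLL accuracy before matching to the NLO fixed-order result.
```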
Abstract:
In this study, two commonly used automated methods for detecting atmospheric fronts in the lower troposphere are compared in various synoptic situations. The first method is a thermal approach relying on the gradient of equivalent potential temperature (TH), while the second is based on temporal changes in the 10 m wind (WND). For a comprehensive objective comparison of the outputs of these frontal identification methods, both schemes are first applied to an idealised strong baroclinic wave simulation in the absence of topography. Two case studies (one in the Northern Hemisphere (NH) and one in the Southern Hemisphere (SH)) are then used to contrast the fronts detected by the two methods. Finally, we derive global winter and summer frontal occurrence climatologies (from ERA-Interim for 1979–2012) and compare their structure. TH is able to identify cold and warm fronts in strong baroclinic cases that are in good agreement with manual analyses. WND is particularly suited to the detection of strongly elongated, meridionally oriented moving fronts, but has very limited ability to identify zonally oriented warm fronts. We note that the areas of main TH frontal activity are shifted equatorward compared to the WND patterns and are located upstream of the regions of main WND front activity. The number of WND fronts in the NH shows more interseasonal variation than that of TH fronts, decreasing by more than 50% from winter to summer. In the SH there is a weaker seasonal variation in the number of detected WND fronts, whereas TH front activity decreases from summer (DJF) to winter (JJA). The main motivation is to give an overview of the performance of these methods, so that researchers can choose the appropriate one for their particular interest.
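As an illustration of the thermal (TH) approach, the sketch below flags grid points where the magnitude of the horizontal equivalent-potential-temperature gradient exceeds a threshold; the grid spacing, threshold value, and toy field are placeholders, not the settings used in the study.

```python
# Minimal sketch of a thermal front locator: flag grid points where the horizontal
# gradient of equivalent potential temperature exceeds a threshold.
# Grid spacing and threshold are illustrative placeholders, not the study's settings.
import numpy as np

def thermal_front_mask(theta_e, dx_m, dy_m, threshold_k_per_100km=4.0):
    """theta_e: 2-D field [K], indexed [y, x]; dx_m, dy_m: grid spacing [m]."""
    dtdy, dtdx = np.gradient(theta_e, dy_m, dx_m)     # K per metre along each axis
    grad_mag = np.hypot(dtdx, dtdy)
    return grad_mag * 1.0e5 >= threshold_k_per_100km  # convert to K per 100 km

# Toy field: a sharp meridional theta_e contrast around row 30
yy, _ = np.mgrid[0:60, 0:40]
theta_e = 300.0 + 15.0 * (1.0 + np.tanh((yy - 30) / 2.0))
mask = thermal_front_mask(theta_e, dx_m=50e3, dy_m=50e3)
print(f"{mask.sum()} of {mask.size} grid points flagged as frontal")
```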
Abstract:
PURPOSE Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with manual segmentation in mice using the Spectralis OCT. METHODS Spectral-domain OCT images from a total of 55 mice from three different strains were analyzed. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b(+)Prph2(Rd2)/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. RESULTS Fully automated segmentation performed well in mice, with coefficients of variation (CV) below 5% for total retinal volume. However, all three automated segmentation algorithms yielded much larger total retinal thickness values than manual segmentation (P < 0.0001), owing to segmentation errors at the basement membrane. CONCLUSIONS Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. TRANSLATIONAL RELEVANCE The introduction of spectral-domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important for studying layers of interest under various pathological conditions.
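A minimal sketch of the reproducibility metric quoted above; the abstract does not state exactly how the coefficient of variation was computed, so the example assumes repeated total-retinal-volume measurements of the same eye, with illustrative numbers.

```python
# Minimal sketch: coefficient of variation (CV) of repeated total retinal volume
# measurements for one eye (numbers are illustrative, not study data).
import numpy as np

repeat_volumes_mm3 = np.array([2.31, 2.35, 2.29, 2.33])   # repeated scans, same eye
cv_percent = 100.0 * repeat_volumes_mm3.std(ddof=1) / repeat_volumes_mm3.mean()
print(f"CV = {cv_percent:.1f}%")   # values below 5% were reported as reproducible
```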
Abstract:
Elicitability has recently been discussed as a desirable property for risk measures. Kou and Peng (2014) showed that an elicitable distortion risk measure is either a Value-at-Risk or the mean. We give a concise alternative proof of this result, and discuss the conflict between comonotonic additivity and elicitability.
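For context, the notion of elicitability used here can be stated in its standard form; the definition and the two scoring functions below are textbook material, not quoted from Kou and Peng (2014) or from this paper.

```latex
% Standard definition of elicitability (generic form, not quoted from the paper).
% A statistical functional (risk measure) rho is elicitable if there is a scoring
% function S such that, for every distribution F of the loss Y,
\[
  \rho(F) \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}} \;
                \mathbb{E}_{Y \sim F}\bigl[S(x, Y)\bigr].
\]
% Examples: the mean is elicited by S(x,y) = (x - y)^2, and the alpha-quantile
% (Value-at-Risk at level alpha) by the pinball loss
\[
  S_\alpha(x, y) \;=\; \bigl(\mathbf{1}\{y \le x\} - \alpha\bigr)\,(x - y).
\]
```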
Abstract:
AMS-14C applications often require the analysis of small samples. Such is the case for atmospheric aerosols, where frequently only a small amount of sample is available. The ion beam physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. In order to test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 μg. Reproducible radiocarbon values were also obtained for small aerosol filter samples graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The obtained 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
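The biomass-versus-fossil source apportionment mentioned at the end rests on the standard radiocarbon mass balance; the relations below are the generic form, and f_M,bio (the fraction-modern value assumed for contemporary biomass carbon, typically slightly above 1) is an external reference value, not a number reported in this abstract.

```latex
% Standard radiocarbon mass balance used for fossil/biogenic source apportionment
% (generic form; fM,sample is the measured fraction modern of the aerosol carbon,
%  fM,bio is the reference fraction modern assumed for contemporary biomass carbon).
\[
  f_{\mathrm{bio}} \;=\; \frac{f_{\mathrm{M,sample}}}{f_{\mathrm{M,bio}}},
  \qquad
  f_{\mathrm{fossil}} \;=\; 1 - f_{\mathrm{bio}},
\]
% since fossil-fuel carbon contains no 14C (fM = 0), while contemporary biomass
% carries the modern 14C/12C signature.
```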
Abstract:
Several lake ice phenology studies based on satellite data have been undertaken. However, long-term records of lake freeze-thaw cycles, required to understand this proxy for climate variability and change, are scarce for European lakes, and long time series from space observations are limited to a few satellite sensors. Data from the Advanced Very High Resolution Radiometer (AVHRR) are used here on account of their unique potential: they offer daily global coverage from the early 1980s and are expected to continue until 2022. An automatic two-step extraction was developed, which makes use of near-infrared reflectance values and lake surface water temperatures derived from the thermal infrared to extract lake ice phenology dates. In contrast to other studies utilizing the thermal infrared, the thresholds are derived from the data itself, making it unnecessary to define arbitrary or lake-specific thresholds. Two lakes in the Baltic region and a steppe lake on the Austrian–Hungarian border were selected; the latter was used to test the applicability of the approach to another climatic region for the period 1990 to 2012. A comparison of the extracted event dates with in situ data showed good agreement, with a mean absolute error of about 10 days. The two-step extraction was found to be applicable to European lakes in different climate regions and could fill existing data gaps in future applications. Extending the time series to the full AVHRR record length (early 1980s until today), which is long enough for trend estimation, would be of interest for assessing climate variability and change. Furthermore, the two-step extraction itself is not sensor-specific and could be applied to other sensors with equivalent near- and thermal-infrared spectral bands.
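A schematic sketch of a two-step extraction of the kind described, with thresholds derived from the time series themselves; the specific rules (midpoint thresholds), function name, and variables are placeholders, not the authors' published algorithm.

```python
# Schematic two-step extraction of lake-ice dates from daily AVHRR-derived series;
# the threshold rules below are illustrative placeholders (data-derived midpoints),
# not the published algorithm.
import numpy as np

def ice_phenology(days, lswt_c, nir_refl):
    """days: array of day-of-season indices; lswt_c: lake surface water
    temperature [deg C]; nir_refl: near-infrared reflectance [0..1]."""
    # Step 1: candidate ice period from LSWT, threshold derived from the data
    lswt_thr = 0.5 * (np.nanmin(lswt_c) + np.nanmedian(lswt_c))
    cold = lswt_c <= lswt_thr
    # Step 2: confirm ice cover with NIR reflectance (ice/snow are bright in NIR)
    nir_thr = 0.5 * (np.nanmedian(nir_refl) + np.nanmax(nir_refl))
    bright = nir_refl >= nir_thr
    ice = cold & bright
    if not ice.any():
        return None, None
    return days[ice][0], days[ice][-1]   # freeze-up and break-up day indices
```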
Abstract:
The limitations of diagnostic echo ultrasound have motivated research into novel modalities that complement ultrasound in a multimodal device. One promising candidate is speed-of-sound imaging, which has been found to reveal structural changes in diseased tissue. Transmission ultrasound tomography provides spatially resolved speed of sound, but is limited to the acoustically transparent breast. We present a novel method by which speed-of-sound imaging is possible using classic pulse-echo equipment, facilitating new clinical applications and combination with state-of-the-art diagnostic ultrasound. Pulse-echo images are reconstructed while the tissue is scanned under various angles using transmit beam steering. Differences in average sound speed along different transmit directions are reflected in the local echo phase, which allows a 2-D reconstruction of the sound speed. In the present proof-of-principle study, we describe a contrast resolution of 0.6% of the average sound speed and a spatial resolution of 1 mm (laterally) × 3 mm (axially), suitable for diagnostic applications.
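The physical principle invoked here, that sound-speed deviations along the transmit path show up in the local echo phase, can be summarized by the generic relations below; they are a textbook ray approximation, not the paper's reconstruction formula, and c0, c(x) and f0 are generic symbols.

```latex
% Generic relation between a sound-speed deviation along the propagation path and
% the resulting echo delay / phase shift (not the paper's exact model).
% c0 : sound speed assumed in beamforming, c(x) : true local sound speed,
% f0 : centre frequency of the transmitted pulse.
\[
  \Delta t \;=\; \int_{\mathrm{path}}
    \left( \frac{1}{c(\mathbf{x})} - \frac{1}{c_0} \right) \mathrm{d}\ell,
  \qquad
  \Delta\phi \;=\; 2\pi f_0\, \Delta t .
\]
% Comparing the local phase shift between different transmit steering angles gives
% the angle-dependent data from which a 2-D sound-speed map can be reconstructed.
```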