991 results for Quantitative micrographic parameters
Abstract:
Hypertension and congenital aortic valve malformations are frequent causes of ascending aortic aneurysms. The molecular mechanisms of aneurysm formation under these circumstances are not well understood. Reference genes for gene expression studies in aortic tissue that are not influenced by aortic valve morphology and its hemodynamic consequences, aortic dilatation, hypertension, or antihypertensive medication have not been available so far. This study identifies genes in ascending aortic tissue that are independent of these parameters. Tissue specimens from dilated and undilated ascending aortas were obtained from 60 patients (age ≤70 years) with different aortic valve morphologies (tricuspid undilated n = 24, dilated n = 11; bicuspid undilated n = 6, dilated n = 15; unicuspid dilated n = 4). Of the studied individuals, 36 had hypertension, and 31 received ACE inhibitors or AT1 receptor antagonists. The specimens were obtained intraoperatively from the wall of the ascending aorta. We analyzed the expression levels of 32 candidate reference genes by quantitative RT-PCR (RT-qPCR). Differential expression was assessed by parametric statistics. The RT-qPCR analysis of these 32 genes showed that EIF2B1, ELF1, and PPIA remained constant in their expression levels across the specimen groups, and thus were insensitive to aortic valve morphology, aortic dilatation, hypertension, and medication with ACE inhibitors or AT1 receptor antagonists. Unlike many other commonly used reference genes, EIF2B1, ELF1, and PPIA are confounded neither by aortic comorbidities nor by antihypertensive medication and are therefore most suitable for gene expression analysis of ascending aortic tissue.
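The stability screen described above can be illustrated with a minimal sketch: rank candidate reference genes by the coefficient of variation of their expression across specimen groups and prefer the most constant one. The expression values, the `cv` helper, and the use of GAPDH as an unstable comparator are all hypothetical; the study itself assessed differential expression with parametric statistics on RT-qPCR data.

```python
import statistics

def cv(values):
    """Coefficient of variation: sample std dev divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical relative expression levels, one value per specimen group
candidates = {
    "EIF2B1": [1.02, 0.98, 1.01, 0.99],   # nearly constant across groups
    "GAPDH":  [1.40, 0.70, 1.10, 0.85],   # varies with group
}

# Rank candidates by cross-group variability; lowest CV is most stable
ranked = sorted(candidates, key=lambda g: cv(candidates[g]))
print(ranked[0])  # the most stable candidate (here EIF2B1)
```

A real screen would of course use many more groups and replicates, and dedicated tools exist for this ranking step.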
Abstract:
Quantitative ultrasound of bone is a promising method for bone assessment: radiation-free, portable, and predictive of hip fracture. Its portability allowed us to study the relationships of ultrasonic parameters of bone with age and with non-vertebral fractures in elderly women living in 19 nursing homes. Broadband ultrasound attenuation (BUA) and speed of sound (SOS) of the calcaneus were measured (and the stiffness index calculated) in a sample of 270 institutionalized women, aged 85 ± 7 years, using an Achilles ultrasound densitometer (Lunar). The effects of age, history of non-vertebral and non-traumatic fractures, body mass index, triceps skinfold, and arm circumference on BUA, SOS, and stiffness index were assessed. Furthermore, to evaluate longitudinally the influence of aging on the ultrasound parameters of bone, 60 subjects from the same group had a second ultrasound measurement after 1 year. The cross-sectional analysis of the data on all 270 women showed a significant decrease (p < 0.001) with age in BUA, SOS, and stiffness index (-0.47%, -0.06%, and -1.01% per year, respectively). In the 94 women (35%) with a history of previous non-vertebral fractures, ultrasound parameters were significantly lower (p < 0.0001) than in the 176 women with no history of fracture (-8.3% for BUA, -1.3% for SOS, -18.9% for stiffness index). In contrast, there was no significant difference in anthropometric measurements between the groups with and without previous non-vertebral fractures, although the measurements decreased significantly with age. In the longitudinal study, repeated quantitative ultrasound after 11.4 ± 0.8 months showed no significant decrease in BUA (-1%) but significant decreases in SOS (-0.3%, p < 0.0001) and in stiffness index (-3.6%, p < 0.0002). 
In conclusion, quantitative ultrasound of the calcaneus measures properties of bone that continue to decline in institutionalized elderly women, and it can discriminate women with previous non-vertebral fractures.
Abstract:
OBJECTIVE To better define the concordance of visual loss in patients with nonarteritic anterior ischemic optic neuropathy (NAION). METHODS The medical records of 86 patients with bilateral sequential NAION were reviewed retrospectively, and visual function was assessed using visual acuity, Goldmann visual fields, color vision, and relative afferent pupillary defect. A quantitative total visual field score and a score per quadrant were analyzed for each eye using the numerical Goldmann visual field scoring method. RESULTS Outcome measures were visual acuity, visual field, color vision, and relative afferent pupillary defect. A statistically significant correlation was found between fellow eyes for multiple parameters, including logMAR visual acuity (P = .01), global visual field (P < .001), superior visual field (P < .001), and inferior visual field (P < .001). The mean deviation of total (P < .001) and pattern (P < .001) deviation analyses differed significantly less between fellow eyes than between first and second eyes of different patients. CONCLUSIONS Visual function between fellow eyes showed a fair to moderate correlation that was statistically significant. The pattern of vision loss was also more similar in fellow eyes than between eyes of different patients. These results may allow better prediction of visual outcome for the second eye in patients with NAION.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way into the markets, accounting for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reversion SDE and its variations. 
A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as a part of this thesis. No other mathematical or statistical software was used.
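As a rough illustration of the mean-reversion side, the following sketch simulates an Ornstein-Uhlenbeck spread with an Euler-Maruyama step and derives a market-neutral position from its z-score. All parameters (theta, mu, sigma, the 2-sigma entry bands) are hypothetical choices for illustration, not values from the thesis.

```python
import random
import statistics

random.seed(0)

# Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
# dX = theta*(mu - X)*dt + sigma*dW; parameters are hypothetical.
theta, mu, sigma, dt = 2.0, 0.0, 0.3, 1.0 / 252
x, spread = 0.5, []
for _ in range(2520):
    x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    spread.append(x)

# Market-neutral rule: short a rich spread, buy a cheap one, measured
# in standard deviations from the sample mean (z-score).
m, s = statistics.mean(spread), statistics.stdev(spread)
positions = [-1 if (v - m) / s > 2 else 1 if (v - m) / s < -2 else 0
             for v in spread]
```

A real backtest would recalibrate theta, mu, and sigma on a rolling window, which is exactly the calibration sensitivity the project studies.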
Abstract:
PURPOSE: To objectively compare quantitative parameters related to image quality attained at coronary magnetic resonance (MR) angiography of the right coronary artery (RCA) performed at 7 T and 3 T. MATERIALS AND METHODS: Institutional review board approval was obtained, and volunteers provided signed informed consent. Ten healthy adult volunteers (mean age ± standard deviation, 25 years ± 4; seven men, three women) underwent navigator-gated three-dimensional MR angiography of the RCA at 7 T and 3 T. For 7 T, a custom-built quadrature radiofrequency transmit-receive surface coil was used. At 3 T, a commercial body radiofrequency transmit coil and a cardiac coil array for signal reception were used. Segmented k-space gradient-echo imaging with spectrally selective adiabatic fat suppression was performed, and imaging parameters were similar at both field strengths. Contrast-to-noise ratio between blood and epicardial fat; signal-to-noise ratio of the blood pool; RCA vessel sharpness, diameter, and length; and navigator efficiency were quantified at both field strengths and compared by using a Mann-Whitney U test. RESULTS: The contrast-to-noise ratio between blood and epicardial fat was significantly improved at 7 T when compared with that at 3 T (87 ± 34 versus 52 ± 13; P = .01). Signal-to-noise ratio of the blood pool was increased at 7 T (109 ± 47 versus 67 ± 19; P = .02). Vessel sharpness obtained at 7 T was also higher (58% ± 9 versus 50% ± 5; P = .04). At the same time, RCA vessel diameter and length and navigator efficiency showed no significant field strength-dependent difference. CONCLUSION: In our quantitative and qualitative study comparing in vivo human imaging of the RCA at 7 T and 3 T in young healthy volunteers, parameters related to image quality attained at 7 T equal or surpass those from 3 T.
Abstract:
Population viability analyses (PVA) are increasingly used in metapopulation conservation plans. Two major types of models are commonly used to assess vulnerability and to rank management options: population-based stochastic simulation models (PSM, such as RAMAS or VORTEX) and stochastic patch occupancy models (SPOM). While the first set of models relies on explicit intrapatch dynamics and interpatch dispersal to predict population levels in space and time, the latter is based on spatially explicit metapopulation theory, where the probability of patch occupation is predicted from patch area and isolation (patch topology). We applied both approaches to a European tree frog (Hyla arborea) metapopulation in western Switzerland in order to evaluate the concordance of the two models and their application to conservation. Although some quantitative discrepancies appeared in terms of network occupancy and equilibrium population size, the two approaches were largely concordant regarding the ranking of patch values and sensitivities to parameters, which is encouraging given the differences in the underlying paradigms and input data.
Abstract:
OBJECTIVE: To demonstrate the validity and reliability of volumetric quantitative computed tomography (vQCT) with multi-slice computed tomography (MSCT) and dual-energy X-ray absorptiometry (DXA) for hip bone mineral density (BMD) measurements, and to compare the ability of the two techniques to discriminate postmenopausal women with osteoporosis-related vertebral fractures from those without. METHODS: Ninety subjects were enrolled and divided into three groups based on the BMD values of the lumbar spine and/or the femoral neck by DXA. Groups 1 and 2 consisted of postmenopausal women with BMD changes < -2 SD, with and without radiographically confirmed vertebral fracture (n=11 and 33, respectively). Group 3 comprised normal controls with BMD changes ≥ -1 SD (n=46). Reconstructed images from MSCT (GE LightSpeed16) scans of the abdominal-pelvic region, 1.25 mm thick per slice, were processed with OsteoCAD software to calculate the following parameters: volumetric BMD values of trabecular bone (TRAB), cortical bone (CORT), and integral bone (INTGL) of the left femoral neck; femoral neck axis length (NAL); and minimum cross-sectional area (mCSA). DXA BMD measurements of the lumbar spine (AP-SPINE) and the left femoral neck (NECK) were also performed for each subject. RESULTS: The values of all seven parameters were significantly lower in subjects of Groups 1 and 2 than in normal postmenopausal women (P<0.05 for each). Comparing Groups 1 and 2, 3D-TRAB and 3D-INTGL were significantly lower in postmenopausal women with vertebral fracture(s) [(109.8 ± 9.61) and (243.3 ± 33.0) mg/cm3, respectively] than in those without [(148.9 ± 7.47) and (285.4 ± 17.8) mg/cm3, respectively] (P<0.05 for each), but no significant differences were evident in AP-SPINE or NECK BMD. 
CONCLUSION: The femoral neck-derived volumetric BMD parameters obtained with vQCT appeared better than the DXA-derived ones at discriminating osteoporotic postmenopausal women with vertebral fractures from those without. vQCT might be useful for evaluating the effect of osteoporotic vertebral fracture status on changes in bone mass in the femoral neck.
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is consequently imperative to estimate the value of LVEF precisely, which can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of two reconstruction methods: filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). METHODS: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. RESULTS: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is observed with the OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. CONCLUSION: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders 5, 10, or 15 for FBP, as the best estimations for left ventricular volume and ejection fraction quantification in myocardial perfusion scintigraphy.
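The role of the cutoff frequency and order in the FBP results can be seen from the Butterworth window itself: the window passes half the amplitude exactly at the cutoff, and a higher order sharpens the roll-off beyond it. The sketch below evaluates the standard low-pass form; the specific frequencies are chosen only for illustration.

```python
def butterworth(f, cutoff=0.5, order=10):
    """Butterworth low-pass window H(f) = 1 / (1 + (f/cutoff)**(2*order))."""
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

# At the cutoff frequency the window passes exactly half the amplitude.
half = butterworth(0.5, cutoff=0.5, order=10)   # 0.5 by construction

# Beyond the cutoff, a higher order suppresses frequencies more sharply.
sharp = butterworth(0.6, cutoff=0.5, order=20)
soft = butterworth(0.6, cutoff=0.5, order=5)
```

This is why raising the cutoff admits more high-frequency content, inflating the reconstructed volumes as the abstract reports.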
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
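To make the setting concrete, here is a minimal sketch of fixed- versus variable-bandwidth Gaussian kernel estimates. The sample and the per-point bandwidths are hypothetical, and nothing here reproduces the paper's lower-bound construction; it only shows the two estimator forms being compared.

```python
import math

def kde(x, data, bw):
    """Fixed-bandwidth Gaussian kernel density estimate at point x."""
    return sum(math.exp(-0.5 * ((x - d) / bw) ** 2) /
               (bw * math.sqrt(2 * math.pi)) for d in data) / len(data)

def variable_kde(x, data, bws):
    """Variable-bandwidth estimate: each data point d_i carries its own h_i."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) /
               (h * math.sqrt(2 * math.pi)) for d, h in zip(data, bws)) / len(data)

data = [0.0, 0.1, 0.2, 5.0]   # hypothetical sample: tight cluster plus an outlier
bws = [0.2, 0.2, 0.2, 1.0]    # wider bandwidth where the data are sparse
f_fixed = kde(0.1, data, 0.4)
f_var = variable_kde(0.1, data, bws)
```

The paper's negative result concerns choosing the `bws` array from data alone: no such rule can stay within a constant factor of the oracle choice for every density.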
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights involving area-specific estimates of bias and variance; and (b) those that use weights involving a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
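The composite form can be sketched in a few lines: under independence of the two components, the MSE-minimizing weight on the direct estimator is the indirect estimator's MSE divided by the sum of the two error terms. The numbers below are hypothetical, chosen only to show the mechanics.

```python
def composite(direct, indirect, var_direct, mse_indirect):
    """Composite small-area estimate: a weighted mix of a direct (unbiased,
    high-variance) and an indirect (biased, low-variance) estimator.
    The weight minimizes MSE under independence of the two components."""
    w = mse_indirect / (var_direct + mse_indirect)
    return w * direct + (1 - w) * indirect, w

# Hypothetical area: noisy direct estimate 120, stable indirect estimate 100
est, w = composite(direct=120.0, indirect=100.0,
                   var_direct=400.0, mse_indirect=100.0)
```

Variant (a) in the abstract estimates `var_direct` and the indirect bias per area; variant (b) pools a common variance and squared bias across all areas.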
Abstract:
The time periods composing the stance phase of gait can be clinically meaningful parameters for revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features in inertial sensors' signals should be preferred for detecting temporal events precisely and accurately against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
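A crude illustration of inner-stance segmentation from inertial data: treat samples where the angular rate stays below a threshold as candidate foot-flat intervals. The trace and the threshold below are hypothetical, and the study's detectors rely on richer kinematic features than this single-signal rule.

```python
def flat_phases(gyro, threshold=0.5):
    """Return (start, end) index pairs where |angular rate| stays below a
    threshold, a crude proxy for foot-flat detection (threshold hypothetical)."""
    phases, start = [], None
    for i, w in enumerate(gyro):
        if abs(w) < threshold and start is None:
            start = i
        elif abs(w) >= threshold and start is not None:
            phases.append((start, i))
            start = None
    if start is not None:
        phases.append((start, len(gyro)))
    return phases

# Hypothetical gyroscope trace: swing (high rate), foot-flat (near zero), push-off
gyro = [3.1, 2.4, 0.2, 0.1, 0.05, 0.1, 2.8, 3.5]
print(flat_phases(gyro))  # [(2, 6)]
```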
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary data for grain size estimates are either 1D (i.e., line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The finally calculated average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, although the primary calculations were obtained in different ways. In order to present an average grain size, we propose to use the area-weighted and volume-weighted mean in the case of unimodal grain size distributions, for 2D and 3D measurements respectively. The shape of the crystal size distribution (CSD) is important for studies of nucleation and growth of minerals. The shape of the CSD of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. 
The comparison of directly measured 3D data, stereological data, and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD.
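The area-weighted mean proposed for 2D data can be sketched directly: weight each grain's equivalent diameter by its sectional area, so larger grains dominate the average. The diameters and areas below are hypothetical.

```python
def area_weighted_mean(diameters, areas):
    """Area-weighted mean grain size: each grain's equivalent diameter is
    weighted by its sectional area (the 2D convention proposed in the text)."""
    return sum(d * a for d, a in zip(diameters, areas)) / sum(areas)

# Hypothetical equivalent-circle diameters (micrometers) and sectional areas
diam = [10.0, 20.0, 40.0]
area = [78.5, 314.2, 1256.6]
awm = area_weighted_mean(diam, area)

# The coarse grains pull the area-weighted mean above the arithmetic mean
arith = sum(diam) / len(diam)
```

A volume-weighted version for 3D data is the same formula with volumes in place of areas.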
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom, and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL, producing an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) Gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (the I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the Gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
Abstract:
A fundamental tenet of neuroscience is that cortical functional differentiation is related to the cross-areal differences in cyto-, receptor-, and myeloarchitectonics that are observed in ex-vivo preparations. An ongoing challenge is to create noninvasive magnetic resonance (MR) imaging techniques that offer sufficient resolution, tissue contrast, accuracy and precision to allow for characterization of cortical architecture over an entire living human brain. One exciting development is the advent of fast, high-resolution quantitative mapping of basic MR parameters that reflect cortical myeloarchitecture. Here, we outline some of the theoretical and technical advances underlying this technique, particularly in terms of measuring and correcting for transmit and receive radio frequency field inhomogeneities. We also discuss new directions in analytic techniques, including higher resolution reconstructions of the cortical surface. We then discuss two recent applications of this technique. The first compares individual and group myelin maps to functional retinotopic maps in the same individuals, demonstrating a close relationship between functionally and myeloarchitectonically defined areal boundaries (as well as revealing an interesting disparity in a highly studied visual area). The second combines tonotopic and myeloarchitectonic mapping to localize primary auditory areas in individual healthy adults, using a similar strategy as combined electrophysiological and post-mortem myeloarchitectonic studies in non-human primates.
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
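The scoring idea behind such a quantitative binding-site model can be sketched with a simple position weight matrix; a generalized profile additionally models insertions and deletions. The log-odds values and the test sequence below are hypothetical and do not represent the CTF/NFI profile.

```python
# Hypothetical position weight matrix (log-odds scores) for a 4-bp motif;
# each column scores one position of a candidate binding site.
pwm = [
    {"A": 1.2, "C": -0.8, "G": -1.0, "T": -0.5},
    {"A": -0.9, "C": 1.1, "G": -0.7, "T": -1.2},
    {"A": -1.1, "C": -0.6, "G": 1.3, "T": -0.9},
    {"A": -0.4, "C": -1.0, "G": -0.8, "T": 1.0},
]

def score(site):
    """Sum of per-position log-odds scores for a candidate site."""
    return sum(col[base] for col, base in zip(pwm, site))

def best_hit(sequence):
    """Slide the profile along the sequence; return the top-scoring offset."""
    k = len(pwm)
    return max(range(len(sequence) - k + 1),
               key=lambda i: score(sequence[i:i + k]))

pos = best_hit("TTACGTGG")  # the ACGT window scores highest here
```

In the study, the parameters of the (richer) profile were trained with hidden Markov model algorithms on the SELEX/SAGE-derived ligand database rather than written down by hand.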