944 results for quantitative methods
Abstract:
BACKGROUND: Sedation and therapeutic hypothermia (TH) delay neurological responses and might reduce the accuracy of clinical examination to predict outcome after cardiac arrest (CA). We examined the accuracy of quantitative pupillary light reactivity (PLR), using automated infrared pupillometry, to predict the outcome of post-CA coma in comparison with standard PLR, EEG, and somatosensory evoked potentials (SSEP). METHODS: We prospectively studied, over a 1-year period (June 2012-June 2013), 50 consecutive comatose CA patients treated with TH (33 °C, 24 h). Quantitative PLR (expressed as the % of pupillary response to a calibrated light stimulus) and standard PLR were measured at day 1 (TH and sedation; on average 16 h after CA) and day 2 (normothermia, off sedation; on average 46 h after CA). Neurological outcome was assessed at 90 days with Cerebral Performance Categories (CPC), dichotomized as good (CPC 1-2) versus poor (CPC 3-5). Predictive performance was analyzed using the area under the ROC curve (AUC). RESULTS: Patients with good outcome [n = 23 (46 %)] had higher quantitative PLR than those with poor outcome [n = 27]: 16 (range 9-23) vs. 10 (1-30) % at day 1, and 20 (13-39) vs. 11 (1-55) % at day 2, both p < 0.001. The best cut-off of quantitative PLR for outcome prediction was <13 %. The AUC to predict poor outcome was higher for quantitative than for standard PLR at both time points (day 1, 0.79 vs. 0.56, p = 0.005; day 2, 0.81 vs. 0.64, p = 0.006). The prognostic accuracy of quantitative PLR was comparable to that of EEG and SSEP (0.81 vs. 0.80 and 0.73, respectively, both p > 0.20). CONCLUSIONS: Quantitative PLR is more accurate than standard PLR in predicting the outcome of post-anoxic coma, irrespective of temperature and sedation, and has prognostic accuracy comparable to that of EEG and SSEP.
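For readers unfamiliar with this kind of analysis, the short sketch below shows how an AUC comparison and a Youden-type cut-off for a continuous predictor such as quantitative PLR could be computed with scikit-learn; all values and variable names are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of the ROC/AUC analysis described in the abstract.
# Data values are invented for illustration; they are not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
poor_outcome = np.array([0] * 23 + [1] * 27)             # 0 = CPC 1-2, 1 = CPC 3-5
quantitative_plr = np.where(poor_outcome == 0,
                            rng.normal(16, 4, 50),        # good outcome: higher PLR
                            rng.normal(10, 4, 50))        # poor outcome: lower PLR

# Lower PLR predicts poor outcome, so score with the negated value.
auc = roc_auc_score(poor_outcome, -quantitative_plr)
fpr, tpr, thresholds = roc_curve(poor_outcome, -quantitative_plr)

# Youden's J picks the cut-off that maximises sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, best cut-off ≈ {-thresholds[best]:.1f} %")
```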
Abstract:
BACKGROUND: Despite major advances in the care of premature infants, around 40% of survivors exhibit mild cognitive deficits. Besides severe intraventricular haemorrhages (IVH) and cystic periventricular leucomalacia (PVL), more subtle patterns such as grade I and II IVH, punctate WM lesions and diffuse PVL might be linked to these cognitive deficits. Grey matter disease is also recognized to contribute to long-term cognitive impairment. OBJECTIVE: We intend to use novel MR techniques to study the different injury patterns more precisely. In particular, MP2RAGE (magnetization-prepared 2 rapid acquisition gradient echoes) produces high-resolution quantitative T1 relaxation maps. This contrast is known to reflect tissue anomalies such as white matter injury in general and dysmyelination in particular. We also used diffusion tensor imaging (DTI), a quantitative technique known to reflect white matter maturation and disease. DESIGN/METHODS: All preterm infants born under 30 weeks of GA were included. Serial 3T MR imaging with a neonatal head coil, using DTI and MP2RAGE sequences, was performed at DOL 3, 10 and at term-equivalent age (TEA). MP2RAGE generates a T1 map and allows calculation of the relaxation time T1. Multiple measurements were performed for each exam in 12 defined white and grey matter ROIs. RESULTS: 16 patients were recruited: mean GA 27 2/7 weeks (191.2 d, SD ±10.8), mean BW 999 g (SD ±265). 39 MRIs were performed (12 early: mean 4.83 d ±1.75; 13 late: mean 18.77 d ±8.05; and 14 at TEA: 88.91 d ±8.96). Measures of relaxation time T1 show a gradual and significant decrease over time (for ROI PLIC, mean ±SD in ms: 2100.53 ±102.75, 2116.5 ±41.55 and 1726.42 ±51.31; for ROI central WM: 2302.25 ±79.02, 2315.02 ±115.02 and 1992.7 ±96.37 for early, late and TEA MR, respectively). These trends are also observed in grey matter areas, especially in the thalamus. Measurements of ADC values show a similar monotonic decrease over time. CONCLUSIONS: From these preliminary results, we conclude that quantitative MR imaging in very preterm infants is feasible. On the successive MP2RAGE and DTI sequences, we observe a gradual decrease over time in the described ROIs, representing the progressive maturation of the WM microstructure; interestingly, the same evolution is observed in the grey matter. We speculate that our study will provide normative values for the T1 map and ADC and might serve as a predictive factor for favourable or less favourable outcome.
Abstract:
Two concentration methods for fast and routine determination of caffeine (using HPLC-UV detection) in surface water and wastewater are evaluated. Both methods are based on solid-phase extraction (SPE) concentration with octadecyl silica sorbents. A common “offline” SPE procedure shows that quantitative recovery of caffeine is obtained with 2 mL of a methanol-water elution mixture containing at least 60% methanol. The method detection limit is 0.1 μg L−1 when percolating 1 L samples through the cartridge. The development of an “online” SPE method based on a mini-SPE column, containing 100 mg of the same sorbent, directly connected to the HPLC system allows the method detection limit to be decreased to 10 ng L−1 with a sample volume of 100 mL. The “offline” SPE method is applied to the analysis of caffeine in wastewater samples, whereas the “online” method is used for analysis in natural waters from streams receiving significant water intakes from local wastewater treatment plants.
Abstract:
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count nonculturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method that should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates that these methods give values 270-290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary data for grain size estimates are either 1D (i.e. line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different data treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The resulting average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary calculations were obtained in different ways. To report an average grain size, we propose to use the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD.
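As a minimal illustration of the reporting convention proposed above, the sketch below computes arithmetic, area-weighted and volume-weighted mean grain sizes from equivalent diameters; the diameters are invented, and the equivalent-circle/sphere assumption is ours, not necessarily the authors' workflow.

```python
# Minimal sketch of area- and volume-weighted mean grain sizes, assuming grains
# are characterized by equivalent circular (2D) or spherical (3D) diameters.
# The diameters below are invented for illustration.
import numpy as np

d2d = np.array([12.0, 18.0, 25.0, 40.0, 55.0])   # equivalent diameters from sections (µm)
d3d = np.array([15.0, 22.0, 30.0, 48.0, 60.0])   # equivalent diameters from CT (µm)

areas = np.pi * (d2d / 2) ** 2
volumes = (4.0 / 3.0) * np.pi * (d3d / 2) ** 3

area_weighted_mean = np.sum(areas * d2d) / np.sum(areas)
volume_weighted_mean = np.sum(volumes * d3d) / np.sum(volumes)

print(f"arithmetic mean (2D): {d2d.mean():.1f} µm")
print(f"area-weighted mean (2D): {area_weighted_mean:.1f} µm")
print(f"volume-weighted mean (3D): {volume_weighted_mean:.1f} µm")
```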
Abstract:
In the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison of intense events that caused heavy damage to people and territory is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for giving a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in the forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
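The sketch below illustrates the kind of non-parametric, contingency-table skill scores mentioned above (probability of detection, false alarm ratio, critical success index, equitable threat score); the counts are invented and the score selection is an assumption, not the exact set used in the study.

```python
# Sketch of non-parametric skill scores from a 2x2 contingency table of
# forecast vs. observed precipitation exceeding a threshold. Counts are invented.
hits, false_alarms, misses, correct_negatives = 32, 11, 9, 148
n = hits + false_alarms + misses + correct_negatives

pod = hits / (hits + misses)                       # probability of detection
far = false_alarms / (hits + false_alarms)         # false alarm ratio
csi = hits / (hits + misses + false_alarms)        # critical success index (threat score)

# Equitable threat score: hits expected by chance are removed.
hits_random = (hits + misses) * (hits + false_alarms) / n
ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)

print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}  ETS={ets:.2f}")
```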
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
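The ejection-fraction relation implied by the phantom volumes quoted above can be checked directly; the helper function below is only a restatement of LVEF = (EDV − ESV)/EDV, applied to the nominal phantom values.

```python
# The ejection-fraction relation used above, checked against the phantom's
# nominal volumes (EDV 112 mL, ESV 37 mL).
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LVEF = (EDV - ESV) / EDV, expressed in percent."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

stroke_volume = 112 - 37           # 75 mL
lvef = ejection_fraction(112, 37)  # about 67 %
print(stroke_volume, round(lvef))
```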
Abstract:
Tractography algorithms provide us with the ability to non-invasively reconstruct fiber pathways in the white matter (WM) by exploiting the directional information described with diffusion magnetic resonance. These methods can be divided into two major classes, local and global. Local methods reconstruct each fiber tract iteratively by considering only directional information at the voxel level and in its neighborhood. Global methods, on the other hand, reconstruct all the fiber tracts of the whole brain simultaneously by solving a global energy minimization problem. The latter have shown improvements compared to previous techniques, but these algorithms still suffer from an important shortcoming that is crucial in the context of brain connectivity analyses. As no anatomical priors are usually considered during the reconstruction process, the recovered fiber tracts are not guaranteed to connect cortical regions and, as a matter of fact, most of them stop prematurely in the WM; this violates important properties of neural connections, which are known to originate in the gray matter (GM) and develop in the WM. This shortcoming therefore poses serious limitations on the use of these techniques for the assessment of the structural connectivity between brain regions and, de facto, it can potentially bias any subsequent analysis. Moreover, the estimated tracts are not quantitative: every fiber contributes with the same weight toward the predicted diffusion signal. In this work, we propose a novel approach for global tractography that is specifically designed for connectivity analysis applications and which: (i) explicitly enforces anatomical priors on the tracts in the optimization and (ii) considers the effective contribution of each of them, i.e., its volume, to the acquired diffusion magnetic resonance imaging (MRI) image. We evaluated our approach on both a realistic diffusion MRI phantom and in vivo data, and also compared its performance to that of existing tractography algorithms.
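One simple way to picture the "effective contribution of each tract" idea is as a non-negative least-squares fit of candidate-tract signals to the measured diffusion signal. The toy sketch below is our own simplification, not the authors' algorithm; the dictionary matrix, weights, and noise level are invented.

```python
# Toy sketch of weighting candidate fiber tracts by their contribution to the
# measured diffusion signal: y ≈ A @ w with w >= 0, solved by non-negative
# least squares. The dictionary A (one column per candidate tract) is invented.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_measurements, n_candidate_tracts = 200, 20

A = np.abs(rng.normal(size=(n_measurements, n_candidate_tracts)))  # signal per tract
true_w = np.zeros(n_candidate_tracts)
true_w[[2, 7, 13]] = [0.8, 0.5, 0.3]           # only a few tracts really contribute
y = A @ true_w + 0.01 * rng.normal(size=n_measurements)

w, residual = nnls(A, y)                        # non-negative tract weights
print("estimated non-zero weights:", np.round(w[w > 0.05], 2))
```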
Abstract:
Changes in bone mineral density (BMD) and bone strength following treatment with zoledronic acid (ZOL) were measured by quantitative computed tomography (QCT) or dual-energy X-ray absorptiometry (DXA). ZOL treatment increased spine and hip BMD vs placebo, assessed by QCT and DXA. Changes in trabecular bone resulted in increased bone strength. INTRODUCTION: To investigate bone mineral density (BMD) changes in trabecular and cortical bone, estimated by quantitative computed tomography (QCT) or dual-energy X-ray absorptiometry (DXA), and whether zoledronic acid 5 mg (ZOL) affects bone strength. METHODS: In 233 women from a randomized, controlled trial of once-yearly ZOL, the lumbar spine, total hip, femoral neck, and trochanter were assessed by DXA and QCT (baseline, Month 36). Mean percentage changes from baseline and between-treatment differences (ZOL vs placebo, t-test) were evaluated. RESULTS: Mean between-treatment differences for lumbar spine BMD were significant by DXA (7.0%, p < 0.01) and QCT (5.7%, p < 0.0001). Between-treatment differences were also significant for trabecular spine (p = 0.0017) [non-parametric test], trabecular trochanter (10.7%, p < 0.0001), total hip (10.8%, p < 0.0001), and compressive strength indices at the femoral neck (8.6%, p = 0.0001) and trochanter (14.1%, p < 0.0001). CONCLUSIONS: Once-yearly ZOL increased hip and spine BMD vs placebo, as assessed by both QCT and DXA. Changes in trabecular bone resulted in increased indices of compressive strength.
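A rough sketch of the percentage-change-from-baseline comparison described above is given below, using Welch's t-test on invented BMD values; the group sizes, means, and the specific test variant are assumptions for illustration only.

```python
# Sketch of the percentage-change-from-baseline comparison described above,
# using Welch's t-test on invented BMD values (g/cm^2) for two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline_zol = rng.normal(0.85, 0.10, 120)
month36_zol = baseline_zol * rng.normal(1.07, 0.03, 120)    # ~7 % mean gain
baseline_pbo = rng.normal(0.85, 0.10, 113)
month36_pbo = baseline_pbo * rng.normal(1.00, 0.03, 113)    # ~no change

pct_change_zol = 100 * (month36_zol - baseline_zol) / baseline_zol
pct_change_pbo = 100 * (month36_pbo - baseline_pbo) / baseline_pbo

diff = pct_change_zol.mean() - pct_change_pbo.mean()
t, p = stats.ttest_ind(pct_change_zol, pct_change_pbo, equal_var=False)
print(f"between-treatment difference = {diff:.1f} %, p = {p:.2g}")
```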
Abstract:
Identification and relative quantification of hundreds to thousands of proteins within complex biological samples have become realistic with the emergence of stable isotope labeling in combination with high-throughput mass spectrometry. However, all current chemical approaches target a single amino acid functionality (most often lysine or cysteine), despite the fact that addressing two or more amino acid side chains would drastically increase the quantifiable information, as shown by in silico analysis in this study. Although the combination of existing approaches, e.g. ICAT with isotope-coded protein labeling, is analytically feasible, it implies high costs, and the combined application of two different chemistries (kits) may not be straightforward. Therefore, we describe here the development and validation of a new stable isotope-based quantitative proteomics approach, termed aniline benzoic acid labeling (ANIBAL), using a twin chemistry approach targeting two frequent amino acid functionalities, the carboxylic and amino groups. Two simple and inexpensive reagents, aniline and benzoic acid, in their (12)C and (13)C forms with convenient mass peak spacing (6 Da) and without chromatographic discrimination or modification of fragmentation behavior, are used to modify carboxylic and amino groups at the protein level, resulting in an identical peptide bond-linked benzoyl modification for both reactions. The ANIBAL chemistry is simple and straightforward and is the first method that uses a (13)C reagent for a general stable isotope labeling approach of carboxylic groups. In silico as well as in vitro analyses clearly revealed the increase in available quantifiable information using such a twin approach. ANIBAL was validated by means of model peptides and proteins with regard to the quality of the chemistry as well as the ionization behavior of the derivatized peptides. A milk fraction was used for dynamic range assessment of protein quantification, and a bacterial lysate was used for the evaluation of relative protein quantification in a complex sample in two different biological states.
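As a loose illustration of how a 6 Da light/heavy spacing translates into relative quantification, the sketch below computes a light-to-heavy intensity ratio for a hypothetical singly charged peptide pair; the m/z values and intensities are invented and the charge-state handling is simplified to a single label per peptide.

```python
# Sketch of relative quantification from a light/heavy peptide pair, assuming a
# single label giving a 6 Da mass spacing (for charge z the m/z spacing is 6/z).
# Peak values are invented.
import math

light_mz, heavy_mz = 1254.62, 1260.62           # singly charged: 6 Da apart
light_intensity, heavy_intensity = 4.2e6, 2.1e6

assert abs(heavy_mz - light_mz - 6.0) < 0.05    # check the expected 6 Da spacing

ratio = light_intensity / heavy_intensity       # relative abundance, state A vs. state B
print(f"light/heavy ratio = {ratio:.2f}  (log2 ratio = {math.log2(ratio):.2f})")
```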
Abstract:
INTRODUCTION: Quantitative sensory testing (QST) is widely used in human research to investigate the integrity of sensory function in patients with pain of neuropathic origin or other causes such as low back pain. The reliability of QST has been evaluated on both sides of the face, hands and feet as well as on the trunk (Th3-L3). In order to apply these tests to other body parts such as the lower lumbar spine, it is important first to establish reliability in healthy individuals. The aim of this study was to investigate the intra-rater reliability of thermal QST in healthy adults at two sites within the L5 dermatome of the lumbar spine and lower extremity. METHODS: Test-retest reliability of thermal QST was determined at the L5 level of the lumbar spine and in the same dermatome on the lower extremity in 30 healthy persons under 40 years of age. Results were analyzed using descriptive statistics and the intraclass correlation coefficient (ICC). Values were compared to normative data using Z-transformation. RESULTS: Mean intraindividual differences were small for cold and warm detection thresholds but larger for pain thresholds. ICC values showed excellent reliability for warm detection and heat pain thresholds, good-to-excellent reliability for the cold pain threshold and fair-to-excellent reliability for the cold detection threshold. ICCs had wide 95% confidence intervals. CONCLUSION: In healthy adults, thermal QST on the lumbar spine and lower extremity demonstrated fair-to-excellent test-retest reliability.
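The Z-transformation mentioned above has a simple form, sketched below with invented thresholds and placeholder normative values (they are not published reference data).

```python
# Sketch of the z-transformation used to compare individual thermal thresholds
# with normative data: z = (subject value - normative mean) / normative SD.
# The values below are placeholders, not published reference data.
import numpy as np

warm_detection_thresholds = np.array([34.8, 36.1, 35.4])   # °C, three hypothetical subjects
normative_mean, normative_sd = 35.2, 1.1                    # hypothetical reference values

z_scores = (warm_detection_thresholds - normative_mean) / normative_sd
print(np.round(z_scores, 2))   # |z| > 1.96 would flag a value outside the 95 % range
```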
Abstract:
OBJECTIVE: To evaluate the quantitative antibiogram as an epidemiological tool for the prospective typing of methicillin-resistant Staphylococcus aureus (MRSA) and to compare it with ribotyping. METHODS: The method is based on multivariate analysis of the inhibition zone diameters of antibiotics in disk diffusion tests. Five antibiotics were used (erythromycin, clindamycin, cotrimoxazole, gentamicin, and ciprofloxacin). Ribotyping was performed using seven restriction enzymes (EcoRV, HindIII, KpnI, PstI, EcoRI, SfuI, and BamHI). SETTING: A 1,000-bed tertiary university medical center. RESULTS: During a 1-year period, 31 patients were found to be infected or colonized with MRSA. Cluster analysis of the antibiogram data showed nine distinct antibiotypes. Four antibiotypes were isolated from multiple patients (2, 4, 7, and 13 patients, respectively). Five additional antibiotypes were isolated from the remaining five patients. When analyzed with respect to the epidemiological data, the method was found to be equivalent to ribotyping. Among 206 staff members who were screened, six were carriers of MRSA. Both typing methods identified concordant MRSA types in staff members and in the patients under their care. CONCLUSIONS: The quantitative antibiogram was found to be equivalent to ribotyping as an epidemiological tool for typing of MRSA in our setting. Thus, this simple, rapid, and readily available method appears to be suitable for the prospective surveillance and control of MRSA in hospitals that do not have molecular typing facilities and in which MRSA isolates are not uniformly resistant or susceptible to the antibiotics tested.
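The clustering step behind the quantitative antibiogram can be pictured as below: isolates are grouped by their inhibition zone diameters for the five antibiotics listed. The diameters, linkage method, and cluster count are invented for illustration and are not the study's analysis settings.

```python
# Toy sketch of the antibiogram typing idea: cluster isolates by their inhibition
# zone diameters (mm) for the five antibiotics named above. Values are invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

antibiotics = ["erythromycin", "clindamycin", "cotrimoxazole", "gentamicin", "ciprofloxacin"]
zone_diameters = np.array([
    [ 6,  6, 24, 12,  8],   # isolate 1
    [ 7,  6, 25, 11,  8],   # isolate 2 (similar to 1 -> likely same antibiotype)
    [22, 20, 26, 24, 23],   # isolate 3
    [ 6,  6, 10,  9,  7],   # isolate 4
])

Z = linkage(zone_diameters, method="ward")            # hierarchical clustering
antibiotypes = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 antibiotypes
print(dict(zip(range(1, 5), antibiotypes)))
```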
Abstract:
Following recent technological advances, digital image archives have grown at an unprecedented rate, both qualitatively and quantitatively. Despite the enormous possibilities they offer, these advances raise new questions about how to process the resulting masses of data. This question is at the core of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables that are important for solving the problem are used by the classifier. The lack of labeled information and the uncertainty about its relevance to the problem are at the root of the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, whereas the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model addresses the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens up new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open up more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
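As a minimal, generic illustration of kernel-based pixel classification (not the specific models developed in the thesis), the sketch below trains an RBF-kernel SVM on synthetic pixel spectra; all data, dimensions, and hyperparameters are invented.

```python
# Minimal sketch of kernel-based pixel classification in the spirit described
# above: an RBF-kernel SVM trained on labeled pixel spectra. Data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_pixels, n_bands, n_classes = 600, 10, 3
X = rng.normal(size=(n_pixels, n_bands))        # one row per pixel, one column per band
y = rng.integers(0, n_classes, size=n_pixels)   # land-cover class labels
X += y[:, None] * 0.8                           # make classes separable in feature space

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print(f"overall accuracy: {clf.score(X_test, y_test):.2f}")
```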
Abstract:
The aim of this study was to determine whether responses in myocardial blood flow (MBF) to cold pressor testing (CPT), measured noninvasively with PET, correlate with an established and validated index of flow-dependent coronary vasomotion on quantitative angiography. METHODS: Fifty-six patients (57 +/- 6 y; 16 with hypertension, 10 with hypercholesterolemia, 8 smokers, and 22 without coronary risk factors) with normal coronary angiograms were studied. Biplanar end-diastolic images of a selected proximal segment of the left anterior descending artery (LAD) (n = 27) or left circumflex artery (LCx) (n = 29) were evaluated with quantitative coronary angiography in order to determine the CPT-induced changes of epicardial luminal area (LA, mm(2)). Within 20 d of coronary angiography, MBF in the LAD, LCx, and right coronary artery territories was measured with (13)N-ammonia and PET at baseline and during CPT. RESULTS: CPT induced comparable percent changes in the rate x pressure product on both study days (%ΔRPP, 37% +/- 13% and 40% +/- 17%; P = not significant [NS]). For the entire study group, the epicardial LA decreased from 5.07 +/- 1.02 to 4.88 +/- 1.04 mm(2) (ΔLA, -0.20 +/- 0.89 mm(2)) or by -2.19% +/- 17%, while MBF in the corresponding epicardial vessel segment increased from 0.76 +/- 0.16 to 1.03 +/- 0.33 mL x min(-1) x g(-1) (ΔMBF, 0.27 +/- 0.25 mL x min(-1) x g(-1)) or 36% +/- 31% (P ≤ 0.0001). However, in normal controls without coronary risk factors (n = 22), the epicardial LA increased from 5.01 +/- 1.07 to 5.88 +/- 0.89 mm(2) (19.06% +/- 8.9%) and MBF increased from 0.77 +/- 0.16 to 1.34 +/- 0.34 mL x min(-1) x g(-1) (74.08% +/- 23.5%) during CPT, whereas patients with coronary risk factors (n = 34) revealed a decrease of epicardial LA from 5.13 +/- 1.48 to 4.24 +/- 1.12 mm(2) (-15.94% +/- 12.2%) and a diminished MBF increase (from 0.76 +/- 0.20 to 0.83 +/- 0.25 mL x min(-1) x g(-1) or 10.91% +/- 19.8%) as compared with controls (P < 0.0001, respectively), despite comparable changes in the RPP (P = NS). In addition, there was a significant correlation (r = 0.87; P ≤ 0.0001) between CPT-related percent changes in LA on quantitative angiography and in MBF as measured with PET. CONCLUSION: The observed close correlation between an angiographically established parameter of flow-dependent and, most likely, endothelium-mediated coronary vasomotion and PET-measured MBF further supports the validity and value of MBF responses to CPT as a noninvasively available index of coronary circulatory function.
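The correlation analysis reported above reduces to a Pearson correlation between paired percent changes; the sketch below shows the computation on invented values, not the study's measurements.

```python
# Sketch of the correlation between CPT-induced percent changes in epicardial
# luminal area (quantitative angiography) and in MBF (PET). Values are invented.
import numpy as np
from scipy import stats

delta_la_percent = np.array([19.0, 15.5, -12.0, -20.3, 8.7, -5.1])   # %ΔLA per patient
delta_mbf_percent = np.array([74.0, 60.2, 12.5, 5.0, 45.3, 20.1])    # %ΔMBF per patient

r, p = stats.pearsonr(delta_la_percent, delta_mbf_percent)
print(f"r = {r:.2f}, p = {p:.3g}")
```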
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
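A full generalized profile or HMM as used in the study is beyond a short example, but the sketch below shows the simpler, related idea of estimating a position weight matrix from aligned selected sites and scoring new sequences; the sites and pseudocount scheme are invented for illustration.

```python
# Much-simplified sketch of estimating a binding-site model from selected
# ligands: a position weight matrix (log-odds) built from aligned sites.
# The sequences below are invented; a generalized profile/HMM is more involved.
import numpy as np

sites = ["TTGGCAA", "TTGGCTA", "TTGGCAT", "ATGGCAA", "TTGGCAC"]
bases = "ACGT"
L = len(sites[0])

counts = np.ones((L, 4))                       # +1 pseudocount per base and position
for s in sites:
    for i, b in enumerate(s):
        counts[i, bases.index(b)] += 1

freqs = counts / counts.sum(axis=1, keepdims=True)
pwm = np.log2(freqs / 0.25)                    # log-odds vs. a uniform background

def score(seq: str) -> float:
    """Sum of per-position log-odds for a sequence of length L."""
    return sum(pwm[i, bases.index(b)] for i, b in enumerate(seq))

print(round(score("TTGGCAA"), 2), round(score("ACGTACG"), 2))
```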