53 results for pacs: data handling techniques


Relevance: 30.00%

Abstract:

MicroRNAs (miRNAs) constitute an important class of gene regulators. While models have been proposed to explain their appearance and expansion, validating these models has been difficult due to the lack of comparative studies. Here, we analyze miRNA evolutionary patterns in two mammals, human and mouse, in relation to the age of miRNA families. In this comparative framework, we confirm some predictions of previously advanced models of miRNA evolution, e.g. that miRNAs arise more frequently de novo than by duplication, or that the number of protein-coding genes targeted by miRNAs decreases with evolutionary time. We also corroborate that miRNAs display an increase in expression level with evolutionary time; however, we show that this relation is largely tissue-dependent, and especially weak in embryonic or nervous tissues. We identify a bias of tag-sequencing techniques in the assessment of breadth of expression, leading us, contrary to predictions, to find more tissue-specific expression of older miRNAs. Together, our results refine the models used so far to depict the evolution of miRNA genes. They underline the role of tissue-specific selective forces in the evolution of miRNAs, as well as potential co-evolution patterns between miRNAs and the protein-coding genes they target.

Relevance: 30.00%

Abstract:

Chest physiotherapy (CP) using passive expiratory manoeuvres is widely used in Western Europe for the treatment of bronchiolitis, despite lacking evidence for its efficacy. We undertook an open randomised trial to evaluate the effectiveness of CP in infants hospitalised for bronchiolitis by comparing the time to clinical stability, the daily improvement of a severity score and the occurrence of complications between patients with and without CP. Children <1 year admitted for bronchiolitis to a tertiary hospital during two consecutive respiratory syncytial virus seasons were randomised to group 1 with CP (prolonged slow expiratory technique, slow accelerated expiratory flow, rarely induced cough) or group 2 without CP. All children received standard care (rhinopharyngeal suctioning, minimal handling, oxygen to maintain saturation ≥92%, fractionated meals). Ninety-nine eligible children (mean age, 3.9 months) were included: 50 in group 1 and 49 in group 2, with similar baseline variables and clinical severity at admission. Time to clinical stability, assessed as the primary outcome, was similar for both groups (2.9 ± 2.1 vs. 3.2 ± 2.8 days, P = 0.45). The rate of improvement of a clinical and respiratory score, defined as the secondary outcome, showed only a slightly faster improvement of the respiratory score in the intervention group when including stethoacoustic properties (P = 0.044). Complications were rare but occurred more frequently, although not significantly so (P = 0.21), in the control arm. In conclusion, this study shows a lack of effectiveness of CP using passive expiratory techniques in infants hospitalised for bronchiolitis. It seems justified to recommend against the routine use of CP in these patients.

Relevance: 30.00%

Abstract:

Geoelectrical techniques are widely used to monitor groundwater processes, yet surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least-squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to allow only either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints, yielding models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions, as they can resolve both sharp and smooth interfaces within the same model.
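The approximate l1-norm solutions recommended above rest on iteratively reweighted least-squares (IRLS). As a minimal, self-contained sketch, with a small synthetic linear system standing in for the AMT/RMT inverse problem and the reweighting applied to data residuals purely for illustration:

```python
import numpy as np

def irls_l1(G, d, n_iter=25, eps=1e-8):
    """Approximate l1-norm solution of G m ≈ d by iteratively
    reweighted least-squares (IRLS): residuals are reweighted by
    1/(|r| + eps), so outliers count far less than under l2."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]          # plain l2 start
    for _ in range(n_iter):
        r = G @ m - d
        w = np.sqrt(1.0 / (np.abs(r) + eps))          # IRLS weights
        m = np.linalg.lstsq(w[:, None] * G, w * d, rcond=None)[0]
    return m

# Demo: one grossly corrupted datum barely affects the l1-style fit.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 2))
m_true = np.array([1.0, 2.0])
d = G @ m_true
d[0] += 10.0                                          # outlier
m_l1 = irls_l1(G, d)
```

The same reweighting trick yields Cauchy-norm approximations by swapping in the corresponding weight function.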

Relevance: 30.00%

Abstract:

PURPOSE: To compare examination time with radiologist time and to measure radiation dose of computed tomographic (CT) fluoroscopy, conventional CT, and conventional fluoroscopy as guiding modalities for shoulder CT arthrography. MATERIALS AND METHODS: Glenohumeral injection of contrast material for CT arthrography was performed in 64 consecutive patients (mean age, 32 years; age range, 16-74 years) and was guided with CT fluoroscopy (n = 28), conventional CT (n = 14), or conventional fluoroscopy (n = 22). Room times (arthrography, room change, CT, and total examination times) and radiologist times (time the radiologist spent in the fluoroscopy or CT room) were measured. One-way analysis of variance and Bonferroni-Dunn post hoc tests were performed for comparison of mean times. Mean effective radiation dose was calculated for each method with examination data, phantom measurements, and standard software. RESULTS: Mean total examination time was 28.0 minutes for CT fluoroscopy, 28.6 minutes for conventional CT, and 29.4 minutes for conventional fluoroscopy; mean radiologist time was 9.9 minutes, 10.5 minutes, and 9.0 minutes, respectively. These differences were not statistically significant. Mean effective radiation dose was 0.0015 mSv for conventional fluoroscopy (mean, nine sections), 0.22 mSv for CT fluoroscopy (120 kV; 50 mA; mean, 15 sections), and 0.96 mSv for conventional CT (140 kV; 240 mA; mean, six sections). Effective radiation dose can be reduced to 0.18 mSv for conventional CT by changing imaging parameters to 120 kV and 100 mA. Mean effective radiation dose of the diagnostic CT arthrographic examination (140 kV; 240 mA; mean, 25 sections) was 2.4 mSv. CONCLUSION: CT fluoroscopy and conventional CT are valuable alternative modalities for glenohumeral CT arthrography, as examination and radiologist times are not significantly different. 
CT guidance requires a greater radiation dose than does conventional fluoroscopy, but with adequate parameters the guidance dose can be kept to approximately 8% of the dose of the diagnostic CT arthrographic examination itself (0.18 vs. 2.4 mSv).

Relevance: 30.00%

Abstract:

Yosemite Valley poses a significant rockfall hazard, and related risk, due to its glacially steepened walls and the approximately 4 million visitors it receives annually. To assess rockfall hazard, it is necessary to evaluate the geologic structure that contributes to the destabilization of rockfall sources and to locate the most probable future source areas. Coupling new remote sensing techniques (Terrestrial Laser Scanning, Aerial Laser Scanning) and traditional field surveys, we investigated the regional geologic and structural setting, the orientation of the primary discontinuity sets for large areas of Yosemite Valley, and the specific discontinuity sets present at active rockfall sources. This information, combined with a better understanding of the geologic processes that contribute to the progressive destabilization and triggering of granitic rock slabs, contributes to a more accurate rockfall susceptibility assessment for Yosemite Valley and elsewhere.

Relevance: 30.00%

Abstract:

Data mining can be defined as the extraction of previously unknown and potentially useful information from large datasets. The main principle is to devise computer programs that run through databases and automatically seek deterministic patterns. It is applied in many fields, e.g., remote sensing, biometry and speech recognition, but has seldom been applied to forensic case data. The intrinsic difficulty related to the use of such data lies in its heterogeneity, which comes from the many different sources of information. The aim of this study is to highlight potential uses of pattern recognition that would provide relevant results from a criminal intelligence point of view. The role of data mining within a global crime analysis methodology is to detect all types of structures in a dataset. Once filtered and interpreted, those structures can point to previously unseen criminal activities. The interpretation of patterns for intelligence purposes is the final stage of the process. It allows the researcher to validate the whole methodology and to refine each step if necessary. An application to cutting agents found in illicit drug seizures was performed. A combinatorial approach was taken, using the presence and the absence of products. Methods from graph theory were used to extract patterns in data constituted by links between products and the place and date of seizure. A data mining process carried out using graph techniques is called "graph mining". Patterns were detected that had to be interpreted and compared with preliminary knowledge to establish their relevancy. The illicit drug profiling process is actually an intelligence process that uses preliminary illicit drug classes to classify new samples. Methods proposed in this study could be used a priori to compare structures from preliminary and post-detection patterns. 
This new knowledge of a repeated structure may provide valuable complementary information to profiling and become a source of intelligence.
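The graph-mining idea of linking seizures through shared cutting agents can be illustrated with a toy example; the seizure records and agent names below are invented, and connected components stand in for the far richer link patterns extracted in the study:

```python
from collections import defaultdict

# Hypothetical seizure records: seizure id -> set of cutting agents found.
seizures = {
    "S1": {"caffeine", "paracetamol"},
    "S2": {"paracetamol", "phenacetin"},
    "S3": {"lactose"},
    "S4": {"caffeine"},
}

def linked_components(records):
    """Group seizures connected by at least one shared cutting agent,
    using union-find over the implicit seizure-agent graph."""
    by_agent = defaultdict(set)
    for sid, agents in records.items():
        for agent in agents:
            by_agent[agent].add(sid)
    parent = {sid: sid for sid in records}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for group in by_agent.values():          # seizures sharing an agent
        first, *rest = group
        for other in rest:
            union(first, other)
    comps = defaultdict(set)
    for sid in records:
        comps[find(sid)].add(sid)
    return sorted(map(sorted, comps.values()))

comps = linked_components(seizures)
print(comps)  # [['S1', 'S2', 'S4'], ['S3']]
```

Here S1, S2 and S4 form one component through shared caffeine and paracetamol, a "repeated structure" that an analyst would then interpret against prior intelligence.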

Relevance: 30.00%

Abstract:

1. Aim - Concerns over how global change will influence species distributions, in conjunction with increased emphasis on understanding niche dynamics in evolutionary and community contexts, highlight the growing need for robust methods to quantify niche differences between or within taxa. We propose a statistical framework to describe and compare environmental niches from occurrence and spatial environmental data.
2. Location - Europe, North America, South America.
3. Methods - The framework applies kernel smoothers to densities of species occurrence in gridded environmental space to calculate metrics of niche overlap and test hypotheses regarding niche conservatism. We use this framework and simulated species with predefined distributions and amounts of niche overlap to evaluate several ordination and species distribution modeling techniques for quantifying niche overlap. We illustrate the approach with data on two well-studied invasive species.
4. Results - We show that niche overlap can be accurately detected with the framework when the variables driving the distributions are known. The method is robust to known and previously undocumented biases related to the dependence of species occurrences on the frequency of environmental conditions across geographic space. The use of a kernel smoother makes the process of moving from geographical space to multivariate environmental space independent of both sampling effort and the arbitrary choice of resolution in environmental space. However, the ordination and species distribution model techniques used for selecting, combining and weighting the variables on which niche overlap is calculated provide contrasting results.
5. Main conclusions - The framework meets the increasing need for robust methods to quantify niche differences. It is appropriate for studying niche differences between species, subspecies or intraspecific lineages that differ in their geographical distributions. Alternatively, it can be used to measure the degree to which the environmental niche of a species or intraspecific lineage has changed over time.
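The core of such a framework can be sketched under simplifying assumptions (a 2-D environmental space, a fixed kernel bandwidth, synthetic occurrences): a kernel smoother on a gridded environmental space plus Schoener's D as the overlap metric.

```python
import numpy as np

def kde_grid(occ, grid, h=0.6):
    """Gaussian-kernel density of occurrences on a gridded 2-D
    environmental space, normalized to sum to 1."""
    d2 = ((grid[:, None, :] - occ[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * h * h)).sum(1)
    return dens / dens.sum()

def schoeners_d(occ1, occ2, grid):
    """Schoener's D = 1 - 0.5 * sum |p1 - p2|, ranging from 0
    (no overlap) to 1 (identical niches)."""
    return 1.0 - 0.5 * np.abs(kde_grid(occ1, grid) - kde_grid(occ2, grid)).sum()

rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.linspace(-4, 9, 60),
                            np.linspace(-4, 9, 60)), -1).reshape(-1, 2)
a = rng.normal(0, 1, size=(200, 2))   # species A occurrences
b = rng.normal(0, 1, size=(200, 2))   # same environmental niche as A
c = rng.normal(5, 1, size=(200, 2))   # strongly shifted niche
d_same, d_far = schoeners_d(a, b, grid), schoeners_d(a, c, grid)
```

Because densities are computed on a fixed grid in environmental space, the overlap value does not depend on the geographic sampling resolution, which is the point made in the Results above.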

Relevance: 30.00%

Abstract:

Abstract Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. 
Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results allow us to have confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
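A toy 1-D analogue of the SA-based conditional simulation idea (not the thesis' actual GPR-conditioned algorithm): simulated annealing rearranges a fixed pool of porosity values so the field matches a target lag-1 autocorrelation, while cells with hypothetical log measurements keep their values.

```python
import numpy as np

def sa_conditional(n=60, target_corr=0.8, cond=None, n_iter=20000,
                   t0=1.0, seed=0):
    """Toy simulated-annealing conditional simulation: swap values
    between non-conditioned cells (preserving the histogram) to match
    a target lag-1 autocorrelation; conditioning cells stay fixed."""
    cond = {0: 0.30, 30: 0.15} if cond is None else cond   # "log" data
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.35, n)                         # porosity pool
    for i, v in cond.items():
        x[i] = v
    free = [i for i in range(n) if i not in cond]
    def cost(y):
        return (np.corrcoef(y[:-1], y[1:])[0, 1] - target_corr) ** 2
    c = cost(x)
    for k in range(n_iter):
        i, j = rng.choice(free, 2, replace=False)
        x[i], x[j] = x[j], x[i]                            # propose swap
        c_new = cost(x)
        t = max(t0 * (1 - k / n_iter), 1e-9)               # cooling
        if c_new < c or rng.random() < np.exp(-(c_new - c) / t):
            c = c_new                                      # accept
        else:
            x[i], x[j] = x[j], x[i]                        # undo swap
    return x, c

x, c = sa_conditional()
```

The real objective function in the thesis additionally honours crosshole GPR constraints; here a single autocorrelation target keeps the sketch short.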

Relevance: 30.00%

Abstract:

Introduction: Difficult tracheal intubation remains a constant and significant source of morbidity and mortality in anaesthetic practice. Insufficient airway assessment in the preoperative period continues to be a major cause of unanticipated difficult intubation. Although many risk factors have already been identified, preoperative airway evaluation is not always regarded as a standard procedure, and the respective weight of each risk factor remains unclear. Moreover, the available predictive scores are poorly sensitive, only moderately specific and often operator-dependent. In order to improve the preoperative detection of patients at risk for difficult intubation, we developed a system for the automated and objective evaluation of morphologic criteria of the face and neck, using video recordings and advanced techniques borrowed from face recognition. Method and results: Frontal video sequences were recorded in 5 healthy volunteers. During the video recording, subjects were requested to perform maximal flexion-extension of the neck and to open the mouth wide with the tongue pulled out. A robust, real-time face tracking system was then applied, which automatically identifies and maps a grid of 55 control points on the face that are tracked during head motion. These points located important features of the face, such as the eyebrows, the nose, the contours of the eyes and mouth, and the external contours, including the chin. Moreover, based on this face tracking, the orientation of the head could also be estimated at each frame of the video sequence. Thus, we could infer for each frame the pitch angle of the head pose (related to the vertical rotation of the head) and obtain the degree of head extension. Morphological criteria used in the most frequently cited predictive scores were also extracted, such as mouth opening, degree of visibility of the uvula, and thyreo-mental distance. Discussion and conclusion: Preliminary results suggest that the technique is highly feasible. 
The next step will be the application of the same automated and objective evaluation to patients who will undergo tracheal intubation. The difficulties encountered during intubation will then be correlated with the biometric characteristics of the patients. The eventual objective is to analyze the biometric data with artificial intelligence algorithms to build a highly sensitive and specific predictive test.

Relevance: 30.00%

Abstract:

1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as the effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. 
These results differ from previous efforts to evaluate alternative modelling approaches to build ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
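The appeal of one-class, presence-only modelling can be illustrated with a deliberately simple classifier that is not one of the nine methods studied here: a kernel density fit to presence points, thresholded at a low quantile of the training scores.

```python
import numpy as np

def fit_presence_kde(presence, h=1.0, quantile=0.05):
    """Minimal one-class 'niche model': score candidate points by a
    Gaussian kernel density fit to presence-only records; the decision
    threshold is the given quantile of the training scores, so ~95%
    of presences fall inside the modelled niche."""
    def score(pts):
        d2 = ((pts[:, None, :] - presence[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * h * h)).mean(1)
    thr = np.quantile(score(presence), quantile)
    return lambda pts: score(pts) >= thr

rng = np.random.default_rng(2)
presence = rng.normal(0, 1, size=(300, 2))     # synthetic presences
clf = fit_presence_kde(presence)
inside = clf(np.array([[0.0, 0.0]]))           # niche centre
outside = clf(np.array([[8.0, 8.0]]))          # far outside the niche
```

No pseudo-absences or background points are needed, which is exactly the modelling decision the one-class framing removes.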

Relevance: 30.00%

Abstract:

The proportion of the population living in or around cities is larger than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, and health problems are the result of this still-continuing process. Urban planners have to find solutions to these complex problems, and at the same time ensure the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic generated by commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. 
A new approach on how to define urban areas at different scales is developed, and the link with percolation theory established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a final section, the population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and at the same time find their way into the daily work and decision processes of urban planners.
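The gravity model mentioned in the final section is simple enough to sketch directly; the populations and coordinates below are made up, and the thesis' actual model differs in detail.

```python
import numpy as np

def gravity_flows(pop, coords, k=1.0, beta=2.0):
    """Classic gravity model of spatial interaction: the flow between
    places i and j is proportional to the product of their populations
    divided by distance**beta."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        T = k * np.outer(pop, pop) / d ** beta
    np.fill_diagonal(T, 0.0)          # no self-interaction
    return T

pop = np.array([100_000.0, 50_000.0, 10_000.0])     # toy city sizes
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 20.0]])
T = gravity_flows(pop, coords)
```

As expected, the strongest interaction links the two large, nearby cities; halving the distance between two places quadruples their flow when beta = 2.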

Relevance: 30.00%

Abstract:

Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
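The essence of AMICO's speed-up, recasting the fit as a linear problem handled by a fast convex solver, can be sketched with a toy non-negative least-squares example; the Gaussian "atoms" below are stand-ins for the actual ActiveAx/NODDI response functions, and projected gradient descent stands in for the solver used in practice.

```python
import numpy as np

def nnls_pg(A, b, n_iter=3000):
    """Non-negative least squares by projected gradient descent with
    step 1/L (L = squared spectral norm of A): a simple stand-in for
    the fast convex solvers this class of methods relies on."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x - A.T @ (A @ x - b) / L)   # project onto x >= 0
    return x

# Toy dictionary: near-orthogonal Gaussian response atoms; the "signal"
# is a sparse non-negative combination of two of them.
t = np.arange(100.0)
centers = [20.0, 40.0, 60.0, 80.0]
A = np.stack([np.exp(-((t - c) ** 2) / 50.0) for c in centers], axis=1)
x_true = np.array([0.0, 0.7, 0.0, 0.3])
b = A @ x_true
x_hat = nnls_pg(A, b)
```

Once the model is linearized against a precomputed dictionary, each voxel costs one small convex solve instead of a non-linear fit, which is where the orders-of-magnitude acceleration comes from.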

Relevance: 30.00%

Abstract:

Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle: the lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) files are one of the most common data storage formats. Despite the format's simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; written in C++ with boost [1], SQLite [2] and Qt [3], it requires no installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files ensure efficient plan execution; effortless support for millions of columns; per-value typing makes using mixed text/number data easy; a very simple network protocol provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware along with educational videos on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. 
I test it against existing database solutions using a battery of benchmarks and discuss the results.
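The core idea (SQL over CSV data, with indices created on demand) can be sketched in a few lines; this toy uses Python's sqlite3 and an invented three-column CSV, unlike the described C++ system, which avoids copying the data out of the files.

```python
import csv
import io
import sqlite3

# Toy stand-in for the CSV-to-SQL idea: load a small (invented) CSV
# into in-memory SQLite and create an index on demand before querying.
data = io.StringIO("id,species,count\n1,alder,10\n2,birch,3\n3,alder,7\n")
rows = list(csv.reader(data))
header, body = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
cols = ", ".join(f'"{c}"' for c in header)       # schema from header row
con.execute(f"CREATE TABLE t ({cols})")
con.executemany(f"INSERT INTO t VALUES ({','.join('?' * len(header))})", body)
con.execute('CREATE INDEX idx_species ON t ("species")')  # on-demand index

total = con.execute(
    'SELECT SUM(CAST("count" AS INTEGER)) FROM t WHERE "species" = ?',
    ("alder",),
).fetchone()[0]
print(total)  # 17
```

Storing every value untyped and casting per query mirrors the per-value typing described above, which is what makes mixed text/number columns unproblematic.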

Relevance: 30.00%

Abstract:

Cardiac hypertrophy is associated with alterations in cardiomyocyte excitation-contraction coupling (ECC) and Ca(2+) handling. Chronic elevation of plasma angiotensin II (Ang II) is a major determinant in the pathogenesis of cardiac hypertrophy and congestive heart failure. However, the molecular mechanisms by which the direct actions of Ang II on cardiomyocytes contribute to ECC remodeling are not precisely known. This question was addressed using cardiac myocytes isolated from transgenic (TG1306/1R [TG]) mice exhibiting cardiac-specific overexpression of angiotensinogen, which develop Ang II-mediated cardiac hypertrophy in the absence of hemodynamic overload. Electrophysiological techniques, photolysis of caged Ca(2+) and confocal Ca(2+) imaging were used to examine ECC remodeling at early (approximately 20 weeks of age) and late (approximately 60 weeks of age) time points during the development of cardiac dysfunction. In young TG mice, increased cardiac Ang II levels induced a hypertrophic response in cardiomyocytes, which was accompanied by an adaptive change in Ca(2+) signaling, specifically an upregulation of Na(+)/Ca(2+) exchanger-mediated Ca(2+) transport. In contrast, maladaptation was evident in older TG mice, as suggested by reduced sarcoplasmic reticulum Ca(2+) content resulting from a shift in the ratio of plasmalemmal Ca(2+) removal and sarcoplasmic reticulum Ca(2+) uptake. This was associated with a conserved ECC gain, consistent with a state of hypersensitivity in Ca(2+)-induced Ca(2+) release. Together, our data suggest that chronic elevation of cardiac Ang II levels significantly alters cardiomyocyte ECC in the long term, and thereby contractility, independently of hemodynamic overload and arterial hypertension.

Relevance: 30.00%

Abstract:

The 2009 International Society of Urological Pathology Consensus Conference in Boston made recommendations regarding the standardization of pathology reporting of radical prostatectomy specimens. Issues relating to the infiltration of tumor into the seminal vesicles and regional lymph nodes were coordinated by working group 4. There was a consensus that complete blocking of the seminal vesicles was not necessary, although sampling of the junction of the seminal vesicles and prostate was mandatory. There was consensus that sampling of the vas deferens margins was not obligatory. There was also consensus that muscular wall invasion of the extraprostatic seminal vesicle only should be regarded as seminal vesicle invasion. Categorization into types of seminal vesicle spread was agreed by consensus to be not necessary. For examination of lymph nodes, there was consensus that special techniques such as frozen sectioning were of use only in high-risk cases. There was no consensus on the optimal sampling method for pelvic lymph node dissection specimens, although there was consensus that all lymph nodes should be completely blocked as a minimum. There was also a consensus that a count of the number of lymph nodes harvested should be attempted. In view of recent evidence, there was consensus that the diameter of the largest lymph node metastasis should be measured. These consensus decisions will hopefully clarify the difficult areas of pathological assessment in radical prostatectomy evaluation and improve the concordance of research series to allow more accurate assessment of patient prognosis.