885 results for Methods for Multi-criteria Evaluation
Abstract:
The aim of this work is to compare two methods used for determining the proper shielding of computed tomography (CT) rooms while considering recent technological advances in CT scanners. The approaches of the German Institute for Standardisation and the US National Council on Radiation Protection and Measurements were compared, and a series of radiation measurements were performed in several CT rooms at the Lausanne University Hospital. The following three-step procedure is proposed for assuring sufficient shielding of rooms hosting new CT units with spiral mode acquisition and various X-ray beam collimation widths: (1) calculate the ambient equivalent dose for a representative average weekly dose length product at the position where shielding is required; (2) from the maximum permissible weekly dose at the location of interest, calculate the transmission factor F needed to ensure proper shielding; and (3) convert the transmission factor into a thickness of lead shielding. A similar approach could be adopted when designing shielding for fluoroscopy rooms, where the basic quantity would be the dose-area product instead of the tube current load (milliampere-minutes).
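The three-step procedure can be sketched numerically. The dose-per-DLP coefficient, weekly workload, permissible dose, and lead transmission curve below are illustrative placeholders, not values from the study:

```python
# Hedged sketch of the proposed three-step CT shielding check.
# All numeric inputs are illustrative assumptions.

def required_lead_thickness_mm(weekly_dlp_mGy_cm, dose_per_dlp, distance_m,
                               permissible_weekly_dose_uSv,
                               transmission_curve):
    # Step 1: ambient dose equivalent at the point of interest
    # (dose proportional to workload, falling off with distance squared)
    unshielded_dose = weekly_dlp_mGy_cm * dose_per_dlp / distance_m ** 2
    # Step 2: transmission factor F the barrier must provide
    F = permissible_weekly_dose_uSv / unshielded_dose
    # Step 3: convert F to a lead thickness via a tabulated transmission curve
    for thickness_mm, transmission in transmission_curve:
        if transmission <= F:
            return thickness_mm
    return transmission_curve[-1][0]  # fall back to the thickest entry

# Illustrative lead transmission curve at CT energies (placeholder values)
curve = [(0.5, 0.30), (1.0, 0.10), (1.5, 0.03), (2.0, 0.01)]
t = required_lead_thickness_mm(
    weekly_dlp_mGy_cm=50000, dose_per_dlp=0.01, distance_m=2.0,
    permissible_weekly_dose_uSv=20.0, transmission_curve=curve)
```

With these placeholder inputs the unshielded weekly dose is 125 uSv, so F = 0.16 and the first curve entry with transmission at or below F (1.0 mm) is selected.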
Abstract:
This work has highlighted a number of areas of prescribing concern, for example, the long-term use of both benzodiazepines and hypnotics, in older residents of long-term care facilities. Each of these individual areas should be further investigated to determine the underlying reason(s) for the prescribing concerns, and strategic methods of addressing and preventing further issues should be developed at a national level. This resource was contributed by The National Documentation Centre on Drug Use.
Abstract:
The goal of this study was to demonstrate the usefulness of an enzyme-linked immunosorbent assay (ELISA) for the serodiagnosis of pulmonary tuberculosis (PTB) and extrapulmonary TB (EPTB). This assay used 20-amino-acid-long, non-overlapping synthetic peptides that spanned the complete Mycobacterium tuberculosis ESAT-6 and Ag85A sequences. The validation cohort consisted of 1,102 individuals who were grouped into the following five diagnostic groups: 455 patients with PTB, 60 patients with EPTB, 40 individuals with non-EPTB, 33 individuals with leprosy and 514 healthy controls. For the PTB group, two ESAT-6 peptides (12033 and 12034) had the highest sensitivity levels of 96.9% and 96.2%, respectively, and an Ag85A peptide (29878) was the most specific (97.4%). For the EPTB group, two Ag85A peptides (11005 and 11006) were observed to have a sensitivity of 98.3%, and the Ag85A peptide 29878 was again the most specific (96.4%). When combinations of peptides were used, such as 12033 and 12034 or 11005 and 11006, sensitivities of 99.5% and 100% were observed in the PTB and EPTB groups, respectively. In conclusion, for a cohort consisting entirely of individuals from Venezuela, a multi-antigen immunoassay using highly sensitive ESAT-6 and Ag85A peptides, alone and in combination, could be used to more rapidly diagnose PTB and EPTB infection.
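The gain from combining peptides follows from counting a patient as positive when either peptide's ELISA is positive. A minimal sketch, with made-up per-patient results rather than the study's data:

```python
# Hedged sketch: sensitivity of single peptides and of an
# "either peptide positive" combination. Results are illustrative.

def sensitivity(tp, fn):
    # Fraction of true cases detected
    return tp / (tp + fn)

def combined_sensitivity(results_a, results_b):
    # The combination is positive if either peptide's assay is positive
    positives = sum(1 for a, b in zip(results_a, results_b) if a or b)
    return positives / len(results_a)

# Illustrative positivity (1 = positive) for two peptides in 10 TB patients
pep_12033 = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
pep_12034 = [1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
combined = combined_sensitivity(pep_12033, pep_12034)
```

Here each peptide alone misses one patient, but the misses do not overlap, so the combination detects all ten; this is the mechanism behind the 99.5% and 100% combined sensitivities reported.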
Abstract:
Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods to calculate heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate VMT. Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation, in which the data are split into n partitions and each partition in turn is used to validate estimates derived from the remaining data. The prediction error was determined by averaging the squared error between the estimated AADT and the actual AADT. Overall, the prediction error was lowest for the method that developed expansion factors separately for the different truck groups, for both single- and multi-unit trucks. This indicates that using expansion factors specific to heavy trucks results in better estimates of AADT, and subsequently VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated.
Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles. This suggests that the use of aggregate expansion factors fails to adequately describe truck travel patterns.
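The n-fold cross-validation used to compare the methods can be sketched as follows. The toy estimator here (predicting the mean of the training counts) merely stands in for the expansion-factor methods, whose details are not given in the abstract, and the counts are illustrative:

```python
# Hedged sketch of n-fold cross-validation with squared prediction error.
# The placeholder estimator and the counts are assumptions for illustration.

def n_fold_mse(counts, n):
    folds = [counts[i::n] for i in range(n)]   # simple interleaved split
    total_sq_err, total_obs = 0.0, 0
    for i in range(n):
        # Fit on all folds except the i-th ...
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        estimate = sum(train) / len(train)      # placeholder "method"
        # ... and validate on the held-out fold
        for actual in folds[i]:
            total_sq_err += (estimate - actual) ** 2
            total_obs += 1
    return total_sq_err / total_obs             # mean squared error

daily_truck_counts = [120, 135, 110, 140, 125, 130, 115, 145]
mse = n_fold_mse(daily_truck_counts, n=4)
```

Running each candidate method through the same loop and comparing the resulting mean squared errors is the comparison described in the abstract.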
Abstract:
PURPOSE: From February 2001 to February 2002, 946 patients with advanced GI stromal tumors (GISTs) treated with imatinib were included in a controlled EORTC/ISG/AGITG (European Organisation for Research and Treatment of Cancer/Italian Sarcoma Group/Australasian Gastro-Intestinal Trials Group) trial. This analysis investigates whether the response classification assessed by RECIST (Response Evaluation Criteria in Solid Tumors) predicts time to progression (TTP) and overall survival (OS). PATIENTS AND METHODS: Per protocol, the first three disease assessments were done at 2, 4, and 6 months. For the purpose of the analysis (landmark method), disease response was subclassified into six categories: partial response (PR; > 30% size reduction), minor response (MR; 10% to 30% reduction), no change (NC) as either NC- (0% to 10% reduction) or NC+ (0% to 20% size increase), progressive disease (PD; > 20% increase/new lesions), and subjective PD (clinical progression). RESULTS: A total of 906 patients had measurable disease at entry. At all measurement time points, complete response (CR), PR, and MR resulted in similar TTP and OS; this was also true for NC- and NC+, and for PD and subjective PD. Patients were subsequently classified as responders (CR/PR/MR), NC (NC+/NC-), or PD. This three-class response categorization was found to be highly predictive of further progression or survival for the first two measurement points. After 6 months of imatinib, responders (CR/PR/MR) had the same survival prognosis as patients classified as NC. CONCLUSION: RECIST enables early discrimination between patients who benefited long term from imatinib and those who did not. After 6 months of imatinib, if the patient is not experiencing PD, the pattern of radiologic response by tumor size criteria has no prognostic value for further outcome. Imatinib needs to be continued as long as there is no progression according to RECIST.
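The six-category subclassification described above is a simple rule on the percent change in tumor size from baseline (negative meaning shrinkage), with new lesions or clinical progression overriding the size-based cut-offs. A minimal sketch:

```python
# Hedged sketch of the six-category response rule from the abstract.
# Boundary handling at the exact cut-offs is an assumption.

def classify_response(pct_change, new_lesions=False, clinical_progression=False):
    if clinical_progression:
        return "subjective PD"           # clinical progression
    if new_lesions or pct_change > 20:
        return "PD"                      # > 20% increase or new lesions
    if pct_change < -30:
        return "PR"                      # partial response: > 30% reduction
    if pct_change < -10:
        return "MR"                      # minor response: 10% to 30% reduction
    if pct_change <= 0:
        return "NC-"                     # no change: 0% to 10% reduction
    return "NC+"                         # no change: 0% to 20% increase
```

The three-class grouping used in the analysis then merges CR/PR/MR into "responders" and NC-/NC+ into "NC".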
Abstract:
OBJECTIVE: To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (DM) based on the Paediatric Rheumatology International Trials Organisation juvenile DM core set of variables. METHODS: Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles as clinically improved or not improved using a stepwise approach (patient's rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the "gold standard measure," chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa agreement for candidate definitions of improvement were calculated. Definitions with kappa values >0.8 were multiplied by the face validity score to select the top definitions. RESULTS: The top definition of improvement was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 1 of the remaining worsening by more than 30%, which cannot be muscle strength. The second-highest scoring definition was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 2 of the remaining worsening by more than 25%, which cannot be muscle strength (definition P1 selected by the International Myositis Assessment and Clinical Studies group). The third is similar to the second with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. CONCLUSION: We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians, which incorporates clinically meaningful change in core set variables in a composite end point for the evaluation of global response to therapy in juvenile DM.
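The top definition is a composite rule over the six core set variables. A minimal sketch, assuming percent changes are signed so that positive means improvement; the variable names are illustrative, not the official core set labels:

```python
# Hedged sketch of the top definition of improvement: >= 20% improvement
# from baseline in at least 3 of 6 core set variables, with no more than
# 1 of the remaining variables worsening by more than 30%, and muscle
# strength not among those that worsened.

def improved(changes, muscle_strength_key="muscle_strength"):
    n_improved = sum(1 for v in changes.values() if v >= 20)
    worsened = [k for k, v in changes.items() if v < -30]
    return (n_improved >= 3
            and len(worsened) <= 1
            and muscle_strength_key not in worsened)

# Illustrative patient: 3 variables improved >= 20%, one other worsened > 30%
patient = {"muscle_strength": 25, "physician_global": 30, "parent_global": 5,
           "function": 22, "disease_activity": -10, "health_quality": -35}
```

With these numbers the patient counts as improved: three variables improve by at least 20% and the single variable worsening by more than 30% is not muscle strength.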
Abstract:
Highway noise is one of the most pressing of the surface characteristics issues facing the concrete paving industry. This is particularly true in urban areas, where not only is there a higher population density near major thoroughfares, but also a greater volume of commuter traffic (Sandberg and Ejsmont 2002; van Keulen 2004). To help address this issue, the National Concrete Pavement Technology Center (CP Tech Center) at Iowa State University (ISU), Federal Highway Administration (FHWA), American Concrete Pavement Association (ACPA), and other organizations have partnered to conduct a multi-part, seven-year Concrete Pavement Surface Characteristics Project. This document contains the results of Part 1, Task 2, of the ISU-FHWA project, addressing the noise issue by evaluating conventional and innovative concrete pavement noise reduction methods. The first objective of this task was to determine what, if any, concrete surface textures currently constructed in the United States or Europe were considered quiet, had long-term friction characteristics, could be consistently built, and were cost effective. Any specifications of such concrete textures would be included in this report. The second objective was to determine whether any promising new concrete pavement surfaces to control tire-pavement noise and friction were in the development stage and, if so, what further research was necessary. The final objective was to identify measurement techniques used in the evaluation.
Abstract:
Background: Pseudoxanthoma elasticum (PXE) is a genetic disorder due to mutations in the gene encoding the transmembrane transporter protein adenosine triphosphate binding cassette (ABC)-C6, resulting in calcifications of elastic fibers in the skin, eyes and cardiovascular system. Objectives: To evaluate the diagnostic criteria for PXE based on molecular data. Methods: 142 subjects from 10 families with a positive history of PXE were investigated for clinical symptoms, histological findings and genetic haplotype analysis. Results: Of these, 25 subjects were haplotypic homozygous for PXE and 23 had typical clinical and histopathological manifestations. Two of the 25 patients showed such marked solar elastosis and macular degeneration that PXE could not be confirmed clinically. Sixty-seven subjects were haplotypic heterozygous carriers and 50 haplotypic homozygous unaffected. Of these 117 subjects, 116 showed no cutaneous or ophthalmologic signs of PXE. In one of the 50 haplotypic homozygous unaffected patients, marked solar elastosis and scarring of the retina mimicked PXE lesions. Only four of the 67 haplotypic heterozygous carriers had biopsies of nonlesional skin; all were histopathologically normal. Conclusions: In our patients, PXE presents as an autosomal recessive genodermatosis. Correlation of haplotype and phenotype confirmed the current major diagnostic criteria. In patients with marked solar elastosis and/or severe macular degeneration, clinical diagnosis can be impossible and molecular testing is needed to confirm the presence of PXE. To the best of our knowledge, our large study compares for the first time clinical findings with molecular data in PXE.
Abstract:
Introduction: Ankle arthrodesis (AD) and total ankle replacement (TAR) are typical treatments for ankle osteoarthritis (AO). Despite clinical interest, there is a lack of outcome evaluation using objective criteria. Gait analysis and plantar pressure assessment are appropriate for detecting pathologies in orthopaedics, but they are mostly used in the lab with few gait cycles. In this study, we propose an ambulatory device based on inertial and plantar pressure sensors to compare gait during long-distance trials between healthy subjects (H) and patients with AO or treated by AD or TAR. Methods: Our study included four groups: 11 patients with AO, 9 treated by TAR, 7 treated by AD and 6 control subjects. An ambulatory system (Physilog®, CH) was used for gait analysis; plantar pressure measurements were done using a portable insole (Pedar®-X, DE). The subjects were asked to walk 50 meters in two trials. The mean value and coefficient of variation of spatio-temporal gait parameters were calculated for each trial. Pressure distribution was analyzed in ten subregions of the foot. All parameters were compared among the four groups using multi-level model-based statistical analysis. Results: A significant difference (p < 0.05) from controls was observed for AO patients in maximum force in the medial hindfoot and forefoot and in the central forefoot. These differences were no longer significant in the TAR and AD groups. Cadence and speed of all pathologic groups differed significantly from controls. Both treatments showed a significant improvement in double support and stance. TAR decreased variability in speed, stride length and knee ROM. Conclusions: In spite of a small sample size, this study showed that ankle function after AO treatments can be evaluated objectively based on plantar pressure and spatio-temporal gait parameters measured during unconstrained walking outside the lab.
The combination of these two ambulatory techniques provides a promising way to evaluate foot function in clinics.
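The per-trial statistics named above (mean value and coefficient of variation of a gait parameter) can be sketched directly; the stride times below are illustrative, not measured data:

```python
# Hedged sketch: mean and coefficient of variation (CV) of a
# spatio-temporal gait parameter over a walking trial.

def mean_and_cv(values):
    m = sum(values) / len(values)
    # Population standard deviation, then CV as a fraction of the mean
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return m, sd / m

# Illustrative stride times (seconds) over part of a 50 m trial
stride_times_s = [1.10, 1.12, 1.08, 1.15, 1.11, 1.09]
mean_s, cv = mean_and_cv(stride_times_s)
```

A lower CV indicates more regular gait, which is how the reported decrease in variability after TAR would show up in these numbers.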
Abstract:
PURPOSE: To assess the technical feasibility of multi-detector row computed tomographic (CT) angiography in the assessment of peripheral arterial bypass grafts and to evaluate its accuracy and reliability in the detection of graft-related complications, including graft stenosis, aneurysmal changes, and arteriovenous fistulas. MATERIALS AND METHODS: Four-channel multi-detector row CT angiography was performed in 65 consecutive patients with 85 peripheral arterial bypass grafts. Each bypass graft was divided into three segments (proximal anastomosis, course of the graft body, and distal anastomosis), resulting in 255 segments. Two readers evaluated all CT angiograms with regard to image quality and the presence of bypass graft-related abnormalities, including graft stenosis, aneurysmal changes, and arteriovenous fistulas. The results were compared with the McNemar test with Bonferroni correction. CT attenuation values were recorded at five different locations from the inflow artery to the outflow artery of the bypass graft. These findings were compared with the findings at duplex ultrasonography (US) in 65 patients and the findings at conventional digital subtraction angiography (DSA) in 27. RESULTS: Image quality was rated as good or excellent in 250 (98%) and in 252 (99%) of 255 bypass segments, respectively. There was excellent agreement both between readers and between CT angiography and duplex US in the detection of graft stenosis, aneurysmal changes, and arteriovenous fistulas (kappa = 0.86-0.99). CT angiography and duplex US were compared with conventional DSA, and there was no statistically significant difference (P >.25) in sensitivity or specificity between CT angiography and duplex US for both readers for detection of hemodynamically significant bypass stenosis or occlusion, aneurysmal changes, or arteriovenous fistulas. Mean CT attenuation values ranged from 232 HU in the inflow artery to 281 HU in the outflow artery of the bypass graft.
CONCLUSION: Multi-detector row CT angiography may be an accurate and reliable technique after duplex US in the assessment of peripheral arterial bypass grafts and detection of graft-related complications, including stenosis, aneurysmal changes, and arteriovenous fistulas.
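The inter-reader agreement statistic reported above (kappa) corrects raw agreement for agreement expected by chance. A minimal sketch of Cohen's kappa with illustrative ratings, not the study's data:

```python
# Hedged sketch of Cohen's kappa for two readers' categorical ratings.

def cohens_kappa(r1, r2):
    n = len(r1)
    # Observed agreement: fraction of cases rated identically
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each reader's marginal rates per category
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Illustrative per-segment findings from two readers
reader1 = ["stenosis", "normal", "normal", "aneurysm", "normal", "stenosis"]
reader2 = ["stenosis", "normal", "normal", "aneurysm", "stenosis", "stenosis"]
k = cohens_kappa(reader1, reader2)
```

Values near 1 indicate near-perfect agreement; the study's kappa range of 0.86 to 0.99 therefore reflects excellent agreement.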
Abstract:
Introduction: In periapical surgery, the absence of standardization between different studies makes it difficult to compare the outcomes. Objective: To compare the healing classifications of different authors and evaluate the prognostic criteria of periapical surgery at 12 months. Material and methods: 278 patients (101 men and 177 women) with a mean age of 38.1 years (range 11 to 77) treated with periapical surgery using the ultrasound technique and a 2.6x magnifying glass, and silver amalgam as root-end filling material, were included in the study. Evolution was analyzed using the clinical criteria of Mikkonen et al., 1983; the radiographic criteria of Rud et al., 1972; the overall combined clinical and radiographic criteria of von Arx and Kurt, 1999; and the Friedman (2005) concept of the functional tooth at 12 months after surgery. Results: After 12 months, 87.2% clinical success was obtained according to the Mikkonen et al., 1983 criteria; 73.9% complete radiographic healing using the Rud et al. criteria; 62.1% overall success, following the clinical and radiographic parameters of von Arx and Kurt; and 91.9% of teeth were functional. The von Arx and Kurt criteria were found to be the most reliable. Conclusion: Overall evolution according to von Arx and Kurt agreed most closely with the other scales.
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex, but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent also adaptable to the development of measurement systems and the selection of measures in R&D activities.
However, it is necessary to emphasize the special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their decision making on R&D with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the conduct of case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to form a comprehensive picture of the main challenges of R&D performance analysis in different organizations; this is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be more directly utilized in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and was gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations.
In the whole research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing the R&D performance measure selection are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as from the practical point of view.
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, recycle fraction, or column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed-form. The developed design method allows predicting the feasible range of operating parameters that lead to desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility to enhance the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle and thus the productivity of SSR process can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. Applicability of the equilibrium design for real, non-ideal separation problems is evaluated by means of numerical simulations. Due to assumption of infinite column efficiency, the developed design method is most applicable for high performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. Applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works the better the higher the column efficiency and the lower the purity constraints are.
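For reference, the competitive Langmuir adsorption isotherm assumed in the analytical design equations has the standard two-component form:

```latex
q_i = \frac{a_i c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2
```

where $q_i$ and $c_i$ are the adsorbed-phase and fluid-phase concentrations of component $i$, $a_i$ is the Henry constant, and $b_i$ is the equilibrium parameter governing competitive saturation of the adsorbent.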
Abstract:
The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in the far-infrared and visible domains. The experiments show that the feature families intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains, are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain proved superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
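Of the feature families compared, HOG is the simplest to sketch: each cell of the image accumulates gradient magnitudes into orientation bins. A minimal single-cell sketch (the study's exact parameters and implementation are not specified here; this uses common conventions of unsigned 0-180 degree orientations and L2 normalisation):

```python
# Hedged sketch of one HOG cell: a histogram of gradient orientations
# weighted by gradient magnitude, L2-normalised.
import numpy as np

def hog_cell(cell, n_bins=9):
    gy, gx = np.gradient(cell.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    idx = np.minimum((ang // bin_width).astype(int), n_bins - 1)
    np.add.at(hist, idx.ravel(), mag.ravel())      # accumulate per bin
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2 normalisation

# A horizontal intensity ramp: all gradient energy falls in the 0-degree bin
cell = np.tile(np.arange(8.0), (8, 1))
h = hog_cell(cell)
```

A full HOG descriptor concatenates such histograms over a grid of cells with block-wise normalisation; the same cell machinery applies to both FIR and visible images, which is what makes the cross-domain comparison in the paper possible.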